Developing Enterprise Web Applications in Java: Some tips

Dasun Pubudumal
9 min read · Jul 28, 2023

A high-level guide on developing a full-blown enterprise web application in Java

Image by pressfoto on Freepik

Java is the language we’ve all seen a lot of enterprise-level organizations use for their applications. There are a bunch of reasons for that: it’s compiled; it’s write-once-run-anywhere (it runs on top of a VM); it has flavors that support web applications; its build tools are supported by a wide range of libraries developed by the community and by vendors; and its established frameworks leverage Java’s capabilities to tackle a plethora of real-world enterprise scenarios such as transaction and session management, authentication, configuration management, and build & deployment management. “Why would you choose Java?” is a topic that demands its own article, and this article does not aim to cover it.

Instead, we’ll look at the general concepts in developing and deploying an enterprise solution that uses modern technologies. Remember that with the plethora of tools we have today, it is almost always difficult to perform an exhaustive search over all the tools, do feasibility studies, and pick one. Generally, what most organizations do is research what other, more established organizations use (always look up, not down) and try to come up with a rigorous feasibility study on using certain tools & techniques for their own use-cases. I’m not suggesting you stay away from becoming an early adopter, but it all depends on the risk that your organization can take. For greenfield projects, it’s likely that trying out (or “spiking”) a technology to bring into the code-base would introduce a lag in delivery, as it would require thorough experimentation. For legacy projects, the risk might be higher, as there’s a chance that the new tool will not suit the existing architecture. Based on the risk your teams can handle, each technology can be adopted incrementally.

Setting up the basis

Handling Web Requests

The basis, or the core, of an enterprise web application is the process that handles incoming web requests. Requests usually take the form of Hypertext Transfer Protocol (HTTP) messages, but we cannot ignore the power of WebSocket as an application-layer protocol for real-time communication (almost all chat-based applications are written with this protocol underneath). With the inception of Representational State Transfer (REST), the power of HTTP was enriched by the use of HTTP methods as verbs. Traditional Simple Object Access Protocol (SOAP) services used “actions” (or operations) instead of the state of a resource. For example, if a client application wants to create a resource (e.g. a payment) through a SOAP web service, it would have to invoke a URI such as /createPayment with the necessary and sufficient parameters wrapped in the request. REST, on the other hand, transfers state with a verb: POST /payment with a proper JSON document (XML can also be used, but REST has largely evolved to use JSON as its primary message format) containing the correct data, maintaining a stateless transfer.
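To make the contrast concrete, here is a minimal sketch using Java 11’s built-in HttpClient to create a payment the REST way; the endpoint URL and JSON payload are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PaymentClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // REST: the verb (POST) plus the resource path (/payments) conveys intent.
        // A SOAP-style API would instead encode the action in the URI, e.g. /createPayment.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/payments")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"amount\": 42.50, \"currency\": \"USD\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```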

Most modern web frameworks use the concept of MVC (Model View Controller) for dealing with enterprise web applications. Handling the web request is done by the Controller part of the pattern, where each request is directed to a certain controller (one controller can handle more than one request depending on the request path). In relatively older Java EE frameworks (old versions of JAX-RS implementations, for example), such routing was configured in XML through a component called a dispatcher servlet. This component dispatches incoming web requests to the respective servlet based on certain metadata within the HTTP request (e.g. the request path).

In modern frameworks, however, the dispatcher servlet has become hidden, or implicit. People starting out with web applications in the modern era are hardly aware of this component. Frameworks like Jersey (the Eclipse-developed JAX-RS reference implementation) and Spring conveniently use reflection-based annotations to provide a detour around setting up that relatively complex XML: you simply annotate a class to turn it into a servlet-backed controller. The incoming request path is declared as data on the annotation itself. Tools like Spring Boot make this process even easier by providing an embedded web server out of the box to run your application, along with convenient dependency-driven configuration. For example, if you want to attach a database to your web application, you only need to include the correct dependency in your build manifest and configure it in a specific configuration file (a YAML or .properties file). The Inversion of Control (IoC) container will take care of instantiating the objects related to those dependencies (e.g. the DataSource class).
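For illustration, here is a minimal Spring Boot controller sketch; the class name and request path are assumptions, but the annotations are standard Spring MVC:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// The annotation replaces the XML dispatcher-servlet mapping of older frameworks.
@RestController
public class PaymentController {

    // The request path lives in the annotation itself; the dispatcher servlet
    // that routes /payments/{id} here is configured behind the scenes.
    @GetMapping("/payments/{id}")
    public String getPayment(@PathVariable String id) {
        return "{\"id\": \"" + id + "\"}"; // placeholder payload
    }
}
```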

Handling Dependencies

Most web applications do not function in isolation; they almost always depend on other services/libraries to handle web requests. But there is a historical problem that has ached developers for ages: how can we write testable code with plenty of dependencies? With each dependency, the complexity of the code increases, and each dependency tends to couple the web service to it. Managing the instantiation of those dependencies was a hassle that developers had to go through.

Modern web frameworks are equipped with comprehensive Contexts and Dependency Injection (CDI) strategies, where the lifecycle of each object is handled by a dedicated object container. Each object within the container (i.e., each bean) can be injected into another bean, making the underlying services loosely coupled (bean declaration was also an XML-based affair until recent Java EE versions and web frameworks extracted it into annotation-based functionality). Through CDI, it has become easier to write test-driven applications, since downstream dependencies can be mocked out with ease (JUnit, Mockito). The dependency scope is also managed by the container, which determines how long the state of each dependency is held within it. Usually, the @ApplicationScoped annotation (in Java EE) or @Component (in Spring) is used to declare a bean that lasts for the entire duration of the web application.
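As a sketch of how this pays off in tests, here is a hypothetical service whose downstream gateway is mocked with Mockito; all class and method names are made up for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PaymentServiceTest {

    // Hypothetical downstream dependency, normally a container-managed bean.
    interface PaymentGateway {
        String charge(String accountId, double amount);
    }

    // Hypothetical service under test; the gateway is injected, never instantiated inside.
    static class PaymentService {
        private final PaymentGateway gateway;
        PaymentService(PaymentGateway gateway) { this.gateway = gateway; }
        String pay(String accountId, double amount) {
            return gateway.charge(accountId, amount);
        }
    }

    @Test
    void delegatesToGateway() {
        // The test never touches a real payment gateway; the mock stands in for it.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acc-1", 42.5)).thenReturn("COMPLETED");

        PaymentService service = new PaymentService(gateway);
        assertEquals("COMPLETED", service.pay("acc-1", 42.5));
    }
}
```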

Therefore, it’s better to delegate the instantiation and injection of your dependencies to the container using the CDI features of the framework of your choice. Note that these dependencies aren’t the traditional dependencies you include in your classpath; they are quite often the downstream services you write to handle your web requests. For example, if you’re writing a web application to process payments, chances are you adhere to a layered approach where each controller handles web requests by delegating them to a downstream service, and that downstream service is likely to invoke several other (third-party) web services. The downstream service in question, therefore, is a dependency of the upstream controller. In this case, you would typically create a managed bean out of the service, inject it into the controller (which is a bean itself), and let the container manage its lifecycle.
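A minimal Spring sketch of that layered arrangement, assuming hypothetical PaymentService and controller classes; the container wires the service bean into the controller through constructor injection:

```java
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Downstream service registered as a container-managed bean.
@Service
class PaymentService {
    String status(String paymentId) {
        // In a real system this would likely invoke a third-party web service.
        return "COMPLETED";
    }
}

// Upstream controller; the container injects the PaymentService bean
// through the constructor and manages both lifecycles.
@RestController
class PaymentStatusController {
    private final PaymentService service;

    PaymentStatusController(PaymentService service) {
        this.service = service;
    }

    @GetMapping("/payments/{id}/status")
    String status(@PathVariable String id) {
        return service.status(id);
    }
}
```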

Managing Configuration

When you watch all those YouTube tutorials (there are a lot of them) on spinning up a web application (let’s say using Spring Boot), you’re likely advised to use Spring Initializr, generate the project, include the web-starter dependency, and run the main method. The video will most likely also run through some configuration files in case you need to change, for example, the web port, database configurations, or the logging schema.

Configuration management is a huge deal when it comes to large-scale enterprise systems. A misconfiguration can bring down the whole system, causing a chain reaction that produces a domino effect (GitLab, for example, recently had an outage caused by a configuration issue). A misconfiguration, if the configuration isn’t separated from the application, can force redeployment of the entire application. Therefore, configuration isn’t meant to be taken lightly.

There are multiple approaches to managing configuration. You can use environment variables that you push into the JVM when your application starts, refer to them in your application.properties, and autowire them with the @Value annotation. That’s one way. But modern systems (especially cloud systems with PaaS) are equipped with cloud config servers (e.g. AWS AppConfig) that let you manage configuration as if it were its own deployment. They let you version configurations for each deployment, making it easy to roll back configuration deployments. Your application requests configurations from the configuration server (ideally through an HTTP call) when they are required.
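For the first approach, here is a small sketch using Spring’s @Value; the property key payment.gateway.url is hypothetical, and could be resolved from application.properties, an environment variable, or a JVM system property:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
class GatewayConfig {

    // Resolved at startup from e.g. application.properties
    // (payment.gateway.url=https://...), an environment variable,
    // or a -Dpayment.gateway.url=... JVM system property.
    @Value("${payment.gateway.url}")
    private String gatewayUrl;

    String gatewayUrl() {
        return gatewayUrl;
    }
}
```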

To make this even easier, tools such as MicroProfile use CDI to inject configurations into your applications. You would write (if one doesn’t already exist under the MicroProfile umbrella) a custom class that makes the HTTP call required to fetch data from the config server, and MicroProfile would invoke that class (now a bean) behind the scenes and wire the values into a Plain Old Java Object (POJO) upon the start of your application.
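A minimal MicroProfile Config sketch, assuming recent Jakarta EE namespaces and a hypothetical property key; a custom ConfigSource could be registered to fetch values from a remote config server:

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class GatewaySettings {

    // MicroProfile Config resolves this from its ordered ConfigSources
    // (microprofile-config.properties, environment variables, system properties);
    // a custom ConfigSource could fetch it from a remote config server instead.
    @Inject
    @ConfigProperty(name = "payment.gateway.url", defaultValue = "http://localhost:8080")
    String gatewayUrl;
}
```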

Test Driven Development

Beyond the more obvious use cases for unit and integration testing (which apply not only to web applications), there’s one more aspect to consider in the modern cloud era. If your deployment target is in a cloud, how do you test the application after development? There are two choices:

  1. Replicate the cloud stack on your local development machine.
  2. Have lower-level environments for dev deployments.

Provided that you’ve got enough bandwidth to create one more environment (besides the necessary ones such as QA, Staging, and Production/Live), the second option is the most obvious choice for dev-testing the application. That way, we can be confident it will run in higher environments, provided the necessary and sufficient cloud configurations exist in both lower and higher environments. But what if you’re not capable of provisioning for the second option? What if the pay-per-use cost of your cloud tool umbrella is already too high to provision another environment?

Then you’ve got no choice: you have to use tools such as LocalStack, which is, in my view, a genius little tool that lets you spin some cloud services up on your local machine using Docker. Coupled with tools like Testcontainers, you can write test-driven, web-based code that is battle-tested to a certain extent before going into higher environments.
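Here is a sketch of what that looks like with Testcontainers’ LocalStack module and the AWS SDK v2; the image tag, bucket name, and test structure are assumptions:

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

class S3IntegrationTest {

    @Test
    void createsBucketAgainstLocalStack() {
        // Spin up a throwaway LocalStack container exposing a local S3 endpoint.
        try (LocalStackContainer localstack = new LocalStackContainer(
                DockerImageName.parse("localstack/localstack:3.0"))
                .withServices(Service.S3)) {
            localstack.start();

            // Point the AWS SDK at the local container instead of real AWS.
            S3Client s3 = S3Client.builder()
                    .endpointOverride(localstack.getEndpointOverride(Service.S3))
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create(
                                    localstack.getAccessKey(), localstack.getSecretKey())))
                    .region(Region.of(localstack.getRegion()))
                    .build();

            s3.createBucket(b -> b.bucket("test-bucket"));
        }
    }
}
```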

Deployment

Artefacts

You need to be aware of your DevOps structure, i.e., where you are going to deploy your application, and of the supplementary tools you’re going to use (e.g. Amazon Web Services cloud tools). If you are going to deploy your application on a serverless Function-as-a-Service (FaaS) platform, it’s likely you’ll need the flavors of the frameworks built for that (because you’ll run into problems such as cold starts; take a look at Spring Native, Quarkus, and Micronaut). Otherwise, if you’re going to use a service like an Azure VM or AWS EC2, you’re fine with Spring or Jersey, but make sure your application is lean in terms of memory and compute consumption. Also keep artefact sizes in mind: do not bundle unnecessary dependencies into the final artefact.

There was a time when most applications were deployed in application servers (is it still the case?). This is primarily because application servers provide a lot of out-of-the-box facilities, such as transaction and session management through managed beans. The artefacts for application servers are a bit different; most of them are .war or .ear files deployed into the application server. People still keep the momentum started by the arrival of application servers going: usually, if you’ve got an application server, you have a specific development pipeline and deployment method (e.g. Jenkins), and people would require extensive training and resources to diverge from the traditionally established framework that worked. But the tide changed when concepts such as containerization, serverless, FaaS, microservices, and edge computing arrived.

A somewhat stabilized approach that has evolved over the modern era is to package everything required to run your application into a Docker image and push the image to a registry (such as Docker Hub). A container orchestrator (e.g. Kubernetes or Swarm) will then fetch the image from the registry (configured with YAML) and run the containers within a cluster of nodes. Provisioning the nodes is therefore delegated to the container orchestration engine, and other features such as auto-scaling are provided within it (in Kubernetes; Swarm doesn’t provide auto-scaling). So in this case, your artefact is a Docker image. Images can be versioned through the registry itself.
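As a sketch, here is a Dockerfile for a typical fat-jar Java application; the base image, jar path, and port are assumptions:

```dockerfile
# Minimal sketch: a JRE base image plus the application jar.
FROM eclipse-temurin:17-jre

# Hypothetical build output path; adjust to your build tool's layout.
COPY target/app.jar /opt/app/app.jar

# Assumed HTTP port of the embedded web server.
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```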

CI/CD

Integration and deployment pipelines are becoming more and more important as companies adopt more agile approaches to development. Continuous feature/patch requests demand a proper integration procedure, as well as frequent deployment-ready deliverables to be delivered to and assessed by end-user groups. Therefore, a proper sequential flow of integration, build, test (both unit and integration), and deployment is required, with apt circuit-breaking procedures in case of failures in a given phase (test, for example).

The deploy phase eventually deploys your artefacts to your runtime. Be it cloud or bare metal, this phase is crucial. It should ideally deploy your artefacts along with their configuration. The two must be treated as an atomic operation: either both occur, or neither does. Any other infrastructure-related components the application requires (e.g. message queues, topics, database tables, S3 buckets) should be deployed along with the artefacts as well.

Tools for CI/CD are generally not specific to Java. Almost all cloud code repositories provide functionality to perform CI/CD. It’s the build and test phases of the pipeline that are specific to Java, and those must be handled with care.
