Port binding in cloud-native apps
Avoiding container-determined ports and micromanaging port assignments.
In Beyond the Twelve-Factor App, I present a new set of guidelines that builds on Heroku’s original 12 factors and reflects today’s best practices for building cloud-native applications. I have changed the order of some of the original factors to indicate a deliberate sense of priority, and added factors such as telemetry, security, and the concept of “API first” that should be considerations for any application that will be running in the cloud. These new 15-factor guidelines are:
- One codebase, one application
- API first
- Dependency management
- Design, build, release, and run
- Configuration, credentials, and code
- Logs
- Disposability
- Backing services
- Environment parity
- Administrative processes
- Port binding
- Stateless processes
- Concurrency
- Telemetry
- Authentication and authorization
The original Factor 7 states that cloud-native applications export services via port binding.
Avoiding container-determined ports
Web applications, especially those already running within an enterprise, are often executed within some kind of server container. The Java world is full of containers like Tomcat, JBoss, Liberty, and WebSphere. Other web applications might run inside other containers, like Microsoft Internet Information Server (IIS).
In a non-cloud environment, web applications are deployed to these containers, and the container is then responsible for assigning ports for applications when they start up.
One extremely common pattern in an enterprise that manages its own web servers is to host a number of applications in the same container, separating applications by port number (or URL hierarchy) and then using DNS to provide a user-friendly facade around that server. For example, you might have a (virtual or physical) host called `appserver`, and a number of apps that have been assigned ports 8080 through 8090. Rather than making users remember port numbers, DNS is used to associate a host name like `app1` with `appserver:8080`, `app2` with `appserver:8081`, and so on.
Avoiding micromanaging port assignments
Embracing platform-as-a-service means that developers and DevOps teams alike no longer have to perform this kind of micromanagement. Your cloud provider should be managing the port assignment for you, because it is likely also managing routing, scaling, high availability, and fault tolerance, all of which require the cloud provider to manage certain aspects of the network, including routing host names to ports and mapping external port numbers to container-internal ports.
The original factor for port binding used the word export because a cloud-native application is assumed to be self-contained and is never injected into any kind of external application server or container.
Practicality and the nature of existing enterprise applications may make it difficult or impossible to build applications this way. As a result, a slightly less restrictive guideline is that there must always be a 1:1 correlation between application and application server. In other words, your cloud provider might support a web app container, but it is extremely unlikely that it will support hosting multiple applications within the same container, as that makes durability, scalability, and resilience nearly impossible.
The developer impact of port binding for modern applications is fairly straightforward: your application might run at `http://localhost:12001` on the developer’s workstation, at `http://192.168.1.10:2000` in QA, and at `http://app.company.com` in production. An application developed with exported port binding in mind supports this environment-specific port binding without having to change any code.
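As a minimal sketch of what this looks like in practice, the snippet below (in Python, though the same idea applies in any language) reads its port from a `PORT` environment variable, the convention Heroku popularized; the variable name and the `8080` fallback are assumptions, since each platform decides what it injects. The app embeds its own HTTP server rather than being deployed into an external container:

```python
import os
from http.server import HTTPServer, BaseHTTPRequestHandler


def resolve_port(default=8080):
    """Resolve the port to bind from the environment.

    PORT is the conventional variable name (used by Heroku, among
    others); the default of 8080 is just a local-development fallback.
    """
    return int(os.environ.get("PORT", default))


class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")


def main():
    # The app exports its service itself: it binds whatever port the
    # environment supplied, rather than having a container assign one.
    HTTPServer(("", resolve_port()), HelloHandler).serve_forever()
```

Calling `main()` starts the server. The same binary runs unchanged on a workstation, in QA, or in production; only the injected `PORT` value differs.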
Applications are backing services
Finally, an application developed to allow externalized, runtime port binding can act as a backing service for another application. This type of flexibility, coupled with all the other benefits of running on a cloud, is extremely powerful.
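To make the backing-service idea concrete, here is a small sketch of one app consuming another. The `APP2_URL` variable name and the `/greeting` path are hypothetical; the point is that the consumer follows whatever URL its environment hands it and never hard-codes the other app’s host or port:

```python
import os
from urllib.request import urlopen


def backing_service_url(default="http://localhost:8080"):
    """Resolve the location of the backing application from configuration.

    APP2_URL is a hypothetical variable name; in practice the platform
    or a service binding injects the route it chose for that app.
    """
    return os.environ.get("APP2_URL", default)


def fetch_greeting():
    # The calling app neither knows nor cares which port the backing
    # app actually bound; it only follows the configured URL.
    with urlopen(backing_service_url() + "/greeting") as resp:
        return resp.read().decode()
```

Because the consumer resolves the URL at runtime, the platform can rebind, scale, or relocate the backing app without any code change on the consuming side.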