About Me

Rohit is an investor, startup advisor and an Application Modernization Scale Specialist working at Google.

Thursday, August 17, 2017

YES WE CAN - you can push JavaEE apps to Cloud Foundry

We all know and love the Java Buildpack, the workhorse for pushing the majority of our applications to Cloud Foundry. There is also another gem of a buildpack called the TomEE buildpack. As you can guess from the name, it is a close cousin of the Tomcat buildpack with the added enhancement that it supports the JavaEE Web Profile and Full Profile*, and supports the push of ear files.

Wait a Minute!!!

We can push Web Profile and Full Profile applications* packaged as ear and war files to Cloud Foundry, and not just plain vanilla Spring apps that run on Tomcat?

Yes siree bob ....

These are the buildpacks we often use in replatforming to move JavaEE apps to Cloud Foundry with minimal changes.


So why go through all this rigmarole and not just push Docker images?

Well, I contend that pushing ear and war files is better than pushing well-formed Docker images, because a proper Docker CI image pipeline ends up looking like what a buildpack already does. So skip all the preamble and discovery and leverage the power of buildpacks. Why transmogrify your app to include OS bits, layers, etc.? Deal with the currency you are familiar with, i.e. jar, war and ear files.

All these buildpacks also have magic in the form of auto-configuration to wire and map your Cloud Foundry bound managed and CUPS (user-provided) services into existing data sources, so that JNDI lookups in your application source don't have to change. This allows external service configuration to be consumed seamlessly by your data and messaging layers.
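To make this concrete, here is a minimal sketch of wiring a user-provided (CUPS) database service to an app; the service name, credentials and app name are illustrative, and the exact JNDI name the buildpack exposes depends on its auto-reconfiguration rules.

# Create a user-provided service holding the JDBC connection details (values are illustrative)
cf create-user-provided-service customer-db -p '{"jdbcUrl":"jdbc:oracle:thin:@legacy-db:1521/ORCL","username":"app_user","password":"secret"}'

# Bind it to the app and restage so the buildpack can auto-configure the data source
cf bind-service legacy-ee-app customer-db
cf restage legacy-ee-app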

Finally if everything else fails then there is always Docker ... 

You have my attention now?? Which buildpacks should I use?

See, that depends on two things: 1. what is in the apps, and 2. which app server they are coming from.

In general, if possible, we recommend bootifying your application, leveraging the most useful framework components of the JavaEE stack, and running your app using the Java Buildpack. If this is not feasible, then your first step is to cf push the app using the buildpack of your application server. This will minimize the changes needed to your application.

Thereafter I would proceed to nuke ALL the server-specific deployment descriptors so that you can run the app on a generic EE server like TomEE or Payara. If you don't like buildpacks and prefer the fat jar or uber jar approach instead, then bundle the app server within the app and push using the Java Buildpack.
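To make that first step concrete, here is a minimal sketch of pushing a war with a TomEE buildpack; the app name, artifact path, memory setting and buildpack name are illustrative and assume a TomEE buildpack is installed in (or reachable from) your foundation.

# Push the packaged war with a TomEE buildpack instead of the default Java Buildpack
cf push legacy-ee-app -p target/legacy-ee-app.war -b tomee_buildpack -m 1G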

Well now you have me thoroughly confused ... 

Don't worry, here is a picture that will sort you out ...




I end this blog on a cautious note - There are NO silver bullets in software development. 

The benefits from the cloud are maximized from the agility gained by running lighter weight, smaller scale, well-bounded cloud native apps. Moving monolithic apps to the platform without modernization will yield some benefit, which you should invest back into modernizing your application along the 15-factor continuum.


* Note that Full Profile app support in TomEE is not the default. You have to do some acrobatics to bundle the right TomEE distribution into the TomEE offline/online buildpack.
* Also note that some aspects of JavaEE will NOT work in the cloud; for instance, if there are any 2PC transactions then those transaction managers will obviously not work on a platform that has ephemeral containers and file systems.


Wednesday, August 16, 2017

Pushing Docker images to Pivotal Cloud Foundry

Everyone thinks that Cloud Foundry does NOT support Docker images. Well, here is your periodic reminder that CF, and by extension PCF, does support pushing of Docker images from both public and private Docker registries. Start by reading these links: Using Docker in Cloud Foundry and Deploy an app with Docker.

Let's push a sample batch WebSphere Liberty Profile application to PCF. This batch application lives at https://github.com/WASdev/sample.batch.sleepybatchlet

SleepyBatchlet is a simple sample batchlet for use with feature batch-1.0 on WebSphere Liberty Profile. batch-1.0 is Liberty's implementation of the Batch Programming Model in Java EE 7, as specified by JSR 352. The batchlet itself is rather uninteresting. All it does is sleep in 1 second increments for a default time of 15 seconds. The sleep time is configurable via batch property sleep.time.seconds. The batchlet prints a message to System.out each second, so you can easily verify that it's running.
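The sample's actual job XML lives in the repo above; for orientation only, a generic JSR 352 job definition wiring such a batch property might look like the sketch below (the job, step and batchlet names are illustrative, not the sample's actual identifiers).

<job id="sleepy-job" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="sleep-step">
        <batchlet ref="sleepyBatchlet">
            <properties>
                <!-- Override the default 15 second sleep time -->
                <property name="sleep.time.seconds" value="5"/>
            </properties>
        </batchlet>
    </step>
</job>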

A WebSphere Liberty Profile image was built using the following repo: https://hub.docker.com/_/websphere-liberty/ with the following Dockerfile and server.xml configuration. Please note that the majority of the Dockerfile comes FROM https://github.com/WASdev/ci.docker/blob/master/ga/developer/kernel/Dockerfile, which EXPOSEs ports 9080 and 9443.

Note the following stanza in the server.xml:

<httpEndpoint id="defaultHttpEndpoint"
              host="*"
              httpPort="9080"
              httpsPort="9443" />

The WebSphere Liberty Profile application is listening on ports 9080 and 9443. Cloud Foundry by default ONLY routes to one HTTP port. When launching an application on Diego, the Cloud Controller honors any user-specified overrides such as a custom start command or custom environment variables. To determine which processes to run, the Cloud Controller fetches and stores the metadata associated with the Docker image. The Cloud Controller instructs Diego and the Gorouter to route traffic to the lowest-numbered port exposed in the Docker image. So in this case Diego, the Gorouter and the Cloud Controller collaborate to automatically route traffic to port 9080 and ignore port 9443.

The Dockerfile simply copies the built application into the config/dropins folder of the Liberty Profile, drops the server.xml into the config folder, and configures Liberty to install the right features needed at runtime. It's useful to look at all the Docker caveats as you compose the Dockerfile. Note that you can only COPY from the current Docker context into the image and cannot COPY or ADD paths starting at /.
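The original Dockerfile is not reproduced here, but based on the description above a minimal sketch might look like the following; the base image tag, WAR file name and feature-installation step are assumptions against the official websphere-liberty image, not the exact file used.

# Start from the Liberty kernel image and layer in config and the app (file names are illustrative)
FROM websphere-liberty:kernel
# Drop the server configuration (HTTP endpoint, batch-1.0 feature) into the config folder
COPY server.xml /config/
# Deploy the SleepyBatchlet application via the dropins folder
COPY sleepybatchlet.war /config/dropins/
# Install the Liberty features declared in server.xml at image build time
RUN installUtility install --acceptLicense defaultServer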

Commands to build and push the Docker image:

docker build -t jsr352app . 

First run the app locally using the following command:

 docker run -d -p 80:9080 -p 443:9443 --name jsr352 jsr352app    

and validate output with
 docker logs --tail=all -f jsr352 

The container's IP address can be found with the following command:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' jsr352

    

Push the built Docker image to a public Docker registry like Docker Hub. Tag the local image with the repository name first, then push:

docker tag jsr352app kelapure/jsr352app
docker push kelapure/jsr352app

The image can be found publicly at https://hub.docker.com/r/kelapure/jsr352app/


You can now push the app to CF and watch logs during the push
cf push batch-app --docker-image kelapure/jsr352app 

Some other options that are helpful when debugging failed Docker image pushes are the -u and -t options, which disable the health check and increase the app start timeout respectively, e.g. -t 300 and -u none.
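Putting those together, a debug-friendly push might look like this sketch (app and image names are from the steps above):

cf push batch-app --docker-image kelapure/jsr352app -t 300 -u none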

cf logs batch-app

It will take a while (probably 5 minutes or so) for the app to start, and the cf command line output will look like this:


0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started
OK

App batch-app was started using this command `/opt/ibm/docker/docker-server run defaultServer`

Showing health and status for app batch-app in org pivot-rkelapure / space development as rkelapure@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: batch-app.cfapps.pez.pivotal.io
last uploaded: Tue Aug 15 21:12:17 UTC 2017
stack: cflinuxfs2
buildpack: unknown


Multiple Application Ports

Please NOTE if you want to expose multiple ports in PCF you will need to use experimental APIs and follow these steps
  1. Push the WAS/Docker image
  2. Add the additional port via:
    cf curl "/v2/apps/<App GUID>" -X PUT -d '{"ports":[8080, 9060]}'
  3. Create a new, un-bound, route: cf create-route...
  4. Map the 2nd, new route to port 9060
    cf curl "/v2/route_mappings" -X POST -d '{ "app_guid": "<App GUID>", "route_guid": "<Route GUID>", "app_port": 9060}'
The Cloud Foundry community is actively working on adding support for multiple app port routing. You will find a WIP proposal on multiple custom app ports. Credit: Derek Beauregard
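For concreteness, here is a sketch of that sequence end to end; the route hostname, domain, space and port values are illustrative, and the GUID placeholders must be substituted with the values returned by the lookups.

# 1. Look up the app GUID
cf app batch-app --guid

# 2. Add the additional port to the app (ports shown are illustrative)
cf curl "/v2/apps/<App GUID>" -X PUT -d '{"ports":[8080, 9060]}'

# 3. Create a new, un-bound route (hostname and domain are illustrative)
cf create-route development cfapps.pez.pivotal.io --hostname batch-app-admin

# 4. Look up the route GUID and map the new route to the second port
cf curl "/v2/routes?q=host:batch-app-admin"
cf curl "/v2/route_mappings" -X POST -d '{"app_guid": "<App GUID>", "route_guid": "<Route GUID>", "app_port": 9060}'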

Resources

  1. https://tecadmin.net/remove-docker-images-and-containers/
  2. https://github.com/WASdev/ci.docker.tutorials/tree/master/liberty

Tuesday, August 15, 2017

CRAP - Complexity Remediating Application Proxy

CRAP is a term that has negative connotations; however, in our world of application replatforming and modernization it is the most used application remediation pattern: it shields the complexity of an external domain from your own domain, allowing the microservices model of your core domain to remain pure. CRAP is a specific instantiation of an anti-corruption layer that bridges cloud native apps to non-cloud native apps.

Permit me a quick segue here into terminology ...

So what the hell is a cloud native application? There are two definitions* of cloud native apps that I really like:

1. A cloud-native application is an application that has been designed, architected and implemented to run on a Platform-as-a-Service installation and to embrace horizontal elastic scaling. Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings.

2. A cloud native application is an application that satisfies ALL 15 factors  that define the DNA of Highly Scalable, Resilient Cloud Applications.

Note that neither of these definitions is mine: 1. is from Kevin Hoffman and 2. is from Adrian Cockcroft.

What is a non-cloud or anti-cloud application? An application that cannot and does NOT want to run in the cloud, and cannot be easily remediated to run well in the cloud, is a cloud-angry or anti-cloud application. Big COTS packages and other monstrosities like WebSphere Commerce and WebSphere Portal from IBM and ORCL generally fall into this category. Cloud-angry apps can range from mammoth application servers to small shared kernel libraries that rely on assumptions about a particular OS or file-system characteristics.