About Me

Rohit is an investor, startup advisor, and Application Modernization Scale Specialist at Google.

Saturday, May 11, 2019

Load Testing Tools

The mention of load testing tools evokes images of archaic, heavyweight, license-bound tools like LoadRunner, with teams fighting over time slots for the load clients to run against their application. Yikes.

There are lighter-weight, more convenient tools that every programmer should add to their tool belt for performance testing and load simulation. Below is a list of such tools, inspired by a question John Feminella asked on Slack.

A useful distinction is whether your load generator focuses on generating requests or on simulating users. The former is an “open queue” and the latter a “closed queue”; for the “same” amount of traffic they behave differently. See http://www.cs.cmu.edu/~harchol/Papers/nsdi_camera.pdf (thanks, Jacques Chester).
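To make that distinction concrete, here is a rough, illustrative Java sketch (the endpoint, counts, and rates are placeholders, not taken from any of the tools below): a closed queue dedicates a fixed pool of simulated users that each wait for a response before sending the next request, while an open queue fires requests at a fixed rate regardless of how the server keeps up.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OpenVsClosed {
    static final HttpClient CLIENT = HttpClient.newHttpClient();
    static final HttpRequest REQUEST =
            HttpRequest.newBuilder(URI.create("http://abc.com/hello.html")).GET().build();

    // Closed queue: a fixed pool of "users"; each waits for a response before sending again,
    // so a slow server automatically throttles the offered load.
    static void closedQueue(int users, int requestsPerUser) {
        for (int u = 0; u < users; u++) {
            new Thread(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    try {
                        CLIENT.send(REQUEST, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception ignored) { }
                }
            }).start();
        }
    }

    // Open queue: requests arrive at a fixed rate no matter how the server is doing,
    // so a slow server faces a growing backlog instead of reduced load.
    static void openQueue(int requestsPerSecond) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> CLIENT.sendAsync(REQUEST, HttpResponse.BodyHandlers.discarding()),
                0, 1_000_000 / requestsPerSecond, TimeUnit.MICROSECONDS);
    }

    public static void main(String[] args) {
        closedQueue(100, 200);  // 100 simulated users, 200 requests each
        // openQueue(500);      // or: 500 requests per second, regardless of responses
    }
}
```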

Say you want to generate a lot of concurrent requests for demo purposes and want something robust and scriptable; these are the tools you should consider:

  1. Apache Bench (https://httpd.apache.org/docs/2.4/programs/ab.html) Prevalent in all major Unix and Linux distros and the easiest to use. ab -k -c 100 -n 20000 abc.com/hello.html issues 20,000 requests in total over 100 concurrent keep-alive connections, as fast as it can.
  2. Gatling (http://gatling.io/) Load testing as code, with scenarios written in a Scala DSL; a hosted/cloud offering is also available.
  3. Tsung (http://tsung.erlang-projects.org/) A distributed load testing tool. It is protocol-independent and can currently stress HTTP, WebDAV, SOAP, PostgreSQL, MySQL, AMQP, MQTT, LDAP and Jabber/XMPP servers.
  4. wrk & wrk2 (https://github.com/wg/wrk) HTTP benchmarking tool capable of generating significant load from a single multi-core machine. Also see wrk2, which adds constant-throughput load and accurate latency recording.
  5. Siege (https://github.com/JoeDog/siege) An open source regression test and benchmark utility. It can stress test a single URL with a user-defined number of simulated users, or read many URLs into memory and stress them simultaneously. siege -c255 -r1000 https://abc.com/greeter/hello
  6. Httperf The httperf HTTP load generator: a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance.
  7. Hey (https://github.com/rakyll/hey) HTTP load generator and ApacheBench (ab) replacement, formerly known as rakyll/boom. hey issues the requested number of requests at the given concurrency level and prints stats. It also supports HTTP/2 endpoints.
Happy Load & Chaos Testing!

Friday, February 8, 2019

Power of PCF Metrics For Day 2 App Ops

PCF Metrics is a powerful, free, batteries-included application monitoring and management tool bundled with PCF. It showcases the power of PCF by giving an enterprise a single pane of glass for logging and metrics. Used wisely, PCF Metrics reduces the cost of an expensive log aggregation tool like Splunk and can eliminate the need for an APM like Dynatrace or AppDynamics, or whatever new flavor shows up in the market claiming AIOps.

So let's talk about some concrete use cases for app developers using PCF Metrics. First, a caveat: PCF Metrics is a resource hog when it comes to the tile install, so there will be some sticker shock. You want to install the latest version, PCF Metrics 1.6. You will also need to install the Metrics Forwarder tile.

So why do I have to bother with the Metrics Forwarder? The Metrics Forwarder for PCF is a service that allows apps to emit custom, fine-grained metrics to Loggregator and to consume those metrics from the Loggregator Firehose. These metrics are then ingested by PCF Metrics via an internal nozzle.

PCF Metrics 1.6 lets you monitor custom application metrics, set monitors and alerts on them, and graph them on the dashboard. You can define any application metric using Micrometer and that metric shows up in the PCF Metrics dashboard. You can set alerts on metrics and have PCF Metrics page you on Slack. This applies not only to custom metrics but also to container metrics such as high CPU. What this means is that even with a non-cooperative Ops team, you control your own destiny: you are notified immediately via Slack when your SLOs are violated and can triage or remediate the situation right away, for example by taking thread dumps or heap dumps.

How does all this magic work?

First, you need the Java buildpack v4.2 or later.

You can use Spring Boot Actuators to emit metrics to the Metrics Forwarder API. To do this, perform the following steps:
  1. Configure your app to use Spring Boot Actuators.
  2. Create the Metrics Forwarder for PCF service (tile version 1.11.3).
  3. Bind your app to the Metrics Forwarder for PCF service.
  4. Push or restage your app using the Java buildpack v4.2 or later.
Behind the Scenes


When an app is bound to the Metrics Forwarder service, the app receives credentials and the URL of the Forwarder API; this configuration data is stored in the VCAP_SERVICES environment variable. When you cf push or cf restage the app, the Java buildpack downloads an additional metrics exporter jar and includes it in the application droplet. At runtime, the metrics exporter added to the application context reads the Actuator metrics from the metrics registry every minute and posts the data to the Metrics Forwarder URL. From there, the Metrics Forwarder service sends the data to Loggregator, and PCF Metrics reads from the Firehose to ingest the metrics for retention and visualization.

How do you add custom metrics to a Spring Boot app?

Micrometer is the metrics collection facility included in Spring Boot 2’s Actuator. It has also been backported to Spring Boot 1.5, 1.4, and 1.3 with the addition of the micrometer-spring-legacy dependency.

The PCF Java buildpack includes a Cloud Foundry Spring Boot Metric Writer, an extension to Spring Boot that writes metrics to a Metrics Forwarder service. Here are the gory details.

The CloudFoundryMetricWriterAutoConfiguration, through Spring Boot auto-configuration magic, creates a RestOperationsMetricPublisher that publishes metrics to the Metrics Forwarder API. Therefore, to publish metrics to PCF Metrics you don't need a dedicated Micrometer registry dependency; micrometer-core is enough to get metrics published, and no additional registry extensions are required. Spring Boot auto-configures a composite MeterRegistry and adds a registry to the composite for each supported implementation it finds on the classpath. On PCF, the Java buildpack configures Spring Boot with the PCF Metrics meter registry.
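As a minimal sketch of what a custom metric looks like in code, you only need the auto-configured MeterRegistry; the order-related metric names, tags, and class below are illustrative, not part of PCF Metrics or the buildpack.

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

// Hypothetical example service: the "orders.*" metric names are invented for illustration.
@Service
public class OrderMetrics {

    private final Counter ordersPlaced;
    private final Timer checkoutTimer;

    // Spring injects the auto-configured (composite) MeterRegistry; on PCF the Java
    // buildpack wires in the registry that feeds the Metrics Forwarder behind the scenes.
    public OrderMetrics(MeterRegistry registry) {
        this.ordersPlaced = Counter.builder("orders.placed")
                .description("Number of orders placed")
                .tag("channel", "web")
                .register(registry);
        this.checkoutTimer = Timer.builder("orders.checkout.latency")
                .description("Checkout latency")
                .register(registry);
    }

    public void recordOrder(Runnable checkout) {
        checkoutTimer.record(checkout);   // time the checkout call
        ordersPlaced.increment();         // count the order
    }
}
```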

You can take action on custom metrics by creating monitors that alert a Slack endpoint. See https://docs.pivotal.io/pcf-metrics/1-6/using.html#monitors for configuring the right set of app-specific alerts for your application.

For sample code for Micrometer metrics in Spring Boot, check out https://github.com/micrometer-metrics/micrometer-samples-spring-boot and https://spring.io/blog/2018/05/02/spring-tips-metrics-collection-in-spring-boot-2-with-micrometer


How healthy is your Rabbit?

We deal with RabbitMQ a lot in our AppTx engagements. If your RabbitMQ, or the microservices that deal with events and messaging, are unhealthy, this blog post has some hints toward fixing the issues.

Please remember these two tenets as you diagnose RabbitMQ performance issues with your application

  • To achieve predictable RabbitMQ response times, you will want to dedicate an ODB service instance in PCF (and if you can leverage isolated zones, even better). This is so that the results are not skewed by noisy neighbors.
  • 99% of performance problems are generated by the applications; it is extremely rare for them to be caused by RabbitMQ misconfiguration. Therefore, we advise simulating the customer's workload, taking into account peak and average load, using the perf-test tool.


The Pivotal tile for RabbitMQ 3.7.11 has a number of enhancements for operator health checks of the deployment. If you are on a RabbitMQ deployment and wondering about its health, point your operators to https://www.rabbitmq.com/blog/2019/02/07/this-month-in-rabbitmq-feb-7-2019/ .

To benchmark the deployment against Pivotal-validated numbers, pick a workload and compare: https://github.com/rabbitmq/workloads

You can also run the performance test on PCF: https://github.com/rabbitmq/rabbitmq-perf-test-for-cf

To test the health of an individual deployment, you can use the health checks described at http://www.rabbitmq.com/monitoring.html#health-checks


For application resiliency with regard to RabbitMQ, see https://github.com/rabbitmq/workloads/tree/master/resiliency for a number of recommendations.

Use RabbitMQ channel caching: opening and closing channels frequently is CPU-bound and incurs performance penalties. One of the best practices is caching connections and channels. Channels should NOT be shared across threads, but they can certainly be reused within the same thread. Use the Spring AMQP abstraction to help with this (see the sketch below): its auto-configuration creates a CachingConnectionFactory, which caches connections and channels in a thread-safe manner. You can read more about RabbitMQ best practices here: https://www.cloudamqp.com/blog/2018-01-19-part4-rabbitmq-13-common-errors.html
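Here is a minimal sketch of that caching setup with Spring AMQP; the host, credentials, and cache size are placeholders, and in a Spring Boot app bound to a RabbitMQ service you would normally get an equivalent CachingConnectionFactory from auto-configuration and the service binding rather than declaring it yourself.

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative configuration only; values are placeholders.
@Configuration
public class RabbitConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory("rabbit.example.com");
        factory.setUsername("app");
        factory.setPassword("secret");
        factory.setChannelCacheSize(25); // reuse channels instead of opening/closing per publish
        return factory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate(CachingConnectionFactory connectionFactory) {
        // RabbitTemplate borrows cached channels in a thread-safe way
        return new RabbitTemplate(connectionFactory);
    }
}
```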

If you are using Spring Cloud Stream, you are covered: by default, the RabbitMQ binder uses Spring Boot’s ConnectionFactory, so it supports all Spring Boot configuration options for RabbitMQ and wires in the CachingConnectionFactory.

It is very difficult to determine whether a shared PCF environment is in "good shape", because the conditions of the benchmark (spare CPU, number of connections, spare memory, etc.) are not going to match the conditions at the time the baseline was obtained. The best we can do is define a set of metrics describing what a healthy deployment looks like, and that will depend on the solution itself.

For instance, if it is critical that a hypothetical "incoming requests" queue always stay below a certain threshold, then the depth of that queue is a metric to watch (see the sketch below). Or, if in normal circumstances we expect around 100 +/- 10 connections, having more than 150 connections suggests a connection leak (something that happens frequently). The platform team should be monitoring RabbitMQ with a tool like Prometheus connected to an alerting solution.
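As a rough sketch of such a queue-depth check (not a substitute for Prometheus or the platform's own monitoring), the RabbitMQ management HTTP API exposes the message count for a queue; the host, credentials, queue name, and threshold below are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Polls the RabbitMQ management HTTP API for the depth of one queue.
// %2F is the URL-encoded default vhost ("/"); all values are placeholders.
public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:15672/api/queues/%2F/incoming-requests"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Crude extraction of the "messages" field; use a JSON library in real code.
        Matcher m = Pattern.compile("\"messages\":(\\d+)").matcher(response.body());
        long depth = m.find() ? Long.parseLong(m.group(1)) : -1;

        long threshold = 10_000; // alert threshold chosen arbitrarily for the example
        System.out.println(depth > threshold
                ? "ALERT: queue depth " + depth + " exceeds " + threshold
                : "OK: queue depth " + depth);
    }
}
```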

- The recommendation to scale applications based on the depth of a RabbitMQ queue sounds sensible, but increasing the number of consumer connections/channels can also add load to RabbitMQ, defeating the purpose of scaling out consumers to reduce the message backlog. So we need to be cautious here.

- Analyzing "why are there so many messages in RabbitMQ" points us in the right direction. Monitoring the number of consumers on the queue and the consumer utilization is extremely valuable, as is monitoring the message ingress rate and the ack rate in the application itself.

Thanks to Marcial Rosales and Anwar Chirakkattil for their guidance on scaling and health of RabbitMQ in PCF, and to Dan Frey for reviewing this article.

Friday, January 4, 2019

Books I read in 2018 and Predictions

List of books I read in 2018 in prioritized order of importance and impact :

11. Reset
12. Rapid Modernization Of Java Apps

List Of Books In Progress


Here is a list of books I will read/ work through in 2019 :

Predictions of 2019

Please note that these are not my employer's opinions; these predictions are just that: future indicators based on past experience.

1. Middleware modernization to cloud native tech and services accelerates dramatically
2. Further consolidation in the PaaS space. Pivotal and Docker are both acquisition targets.
3. Kubernetes enters trough of disillusionment. Serverless becomes the new darling particularly from the cost savings perspective. 
4. Backlash against AWS monopoly with increased adoption of multi-cloud and poly-cloud. 
5. ML/AI is infused in every product/ software category. 

Tuesday, December 18, 2018

The Anti-Architect

A colleague of mine, fellow AppTx practice lead Shaun Anderson, posed a very interesting question last night:
"tell me about the behaviors of all the people you have not had success working with"
which got me thinking. Our industry has some established norms on what a software architect should do; however, we don't have prescriptive guidelines around what an architect should NOT do, i.e. what behaviors an architect should not display. To capture these anti-architect behaviors, I outline the architect styles you should probably NOT copy.

1. *Mr. Boxes and Lines*, *Mr. Ivory Tower*: Knows all the latest frameworks and technology choices and answers questions the right way; however, he could not code up a simple program to save his life.

2. *Mr. Hoarder*: Architects the system in such a way that only he understands the intricacies and holds it close to his chest. Refuses to call his baby ugly.

3. *Mr. Framework*: Frameworkifies everything, even Spring, such that developers only need to extend the UBER framework at the edges. Assumes developers are dumb, cannot think for themselves, and therefore need abstractions suitable for a seven-year-old. This is an egregious form of the hoarder. Sometimes this is taken to the extreme where a meta-framework generates other child frameworks, aka the Mother Of All Frameworks.


4. *Mr. Kafka-lover*, *Mr. Serverless-lover*, *Mr. Reactive-lover*: The answer is Kafka, or the answer is Serverless... what is the question? This is an architect who falls prey to the microservices-envy, Kafka-as-an-ESB, and layered-tech-driven-architecture anti-patterns. The answer is R2DBC; now what is the question?

5. *Mr. Async*: Makes the genuine mistake of equating event-driven architecture with event sourcing. Does not realize there are four forms of event-driven architecture; see "Many meanings of event driven architecture".

6. *Mr. Pussyfoot*: Hesitates to make any architectural decision without deferring to a central committee.

7. *Mr. Shit-it-all*: Periodically swoops down from an ivory tower to shit all over the current project architecture and implementation, leaving behind a bunch of half-assed diktats without full context.

8. *Mr. Manager-upper*: Only shows up when his boss or an adversary is present, and is largely absent otherwise.

9. *Mr. Idiomatic*: Does not engage in fruitful debate and is unwilling to reassess their position. They have made up their mind and are not willing to experiment or concede that their world view may be wrong.

10. *Mr. Cowboy*: Solution first - ask questions later.  Indulges in premature optimization or premature solutioning without fully understanding the question, intent or context.

11. *Ms. Alice in Wonderland*: Jumps deep into each and every rabbit hole she can find.

12. *Ms. I Told You So* : As soon as something goes just a bit sideways drops the “I told you this was not going to work and here’s why” in front of your stakeholders.

13. *Mrs. Nit Picker*:  The architect who needs to understand every single detail prior to coding anything.

14. *Ms. But Why*: Doesn't understand something 100%, so the only thing she can do to prove her presence is to ask “But why?” on repeat.

As architects we all make these mistakes, and there is no shame in admitting that; however, recognizing some of these behaviors in ourselves and others helps us improve and become better architects who build something real that delights our customers and stakeholders.


Tuesday, December 4, 2018

Performance Profiling a Ruby application on the PCF Pivotal Application Service

Here is how we would go about debugging a Ruby app on PCF:
http://engineering.pivotal.io/post/debugging-ruby-memory-issues-cloud-foundry-cloud-controller/

You will also do well to read https://www.oreilly.com/library/view/ruby-performance-optimization/9781680501681/

This poor-man's stats gathering tool can be really helpful (credit Will Sulzer)
https://gist.github.com/kelapure/f24a0e27aa12f1d3364a93b5289e7ec2. However, you'd need to modify the code to run in a block passed to the measure method. This comes from the Ruby Performance Optimization Book on Safari.

Ruby-Prof is another nice tool (https://github.com/ruby-prof/ruby-prof), and the real heavyweight tools trace back to native calls using Valgrind and the like. Unfortunately, the linked script may tell you what the Ruby program running in the JVM is doing, but not necessarily why it's consuming JVM memory to load the interpreter, etc. By the way, JRuby disables ObjectSpace by default because it is itself a *cause* of memory problems (it forces the runtime to maintain an additional list of weak references).

Check out https://github.com/jruby/jruby/wiki/PerformanceTuning for JRuby specifics. You can enable ObjectSpace if you have code that depends on it, but it's highly discouraged in JRuby and would likely spoil the tests.

That begs the question: why should you run Ruby apps on Pivotal Cloud Foundry? What advantages does the Ruby buildpack on PAS offer?

  • It allows for the code to run in isolation
  • It allows for resiliency with health checks
  • It allows for custom metrics and alerts via PCF Metrics
  • Makes binding to external services easier
  • Log aggregation
  • Manage all the app instances in apps-manager
  • Leverage Zipkin and Spring Cloud Services
  • Leverage task support to run one-off tasks
  • Ruby apps were among the first to run on a PAS. Remember, PCF was inspired by Heroku, which initially supported only Ruby.
  • Support for Ruby apps in Concourse
  • Autoscale Ruby apps based on app metrics including latency, throughput, CPU, memory and custom ones.
What are the disadvantages?
  • Debugging is hard in production
  • Memory limits on containers are hard

Many thanks to Chris Umbel and Will Sulzer who contributed the majority of this post. 

Sunday, November 18, 2018

Pivotal AppTx

Pivotal's approach to Application Transformation, with help from Rohit Kelapure + Yaroslav Novytskyy. Want more details? Check out this whitepaper by Matt Russell: https://lnkd.in/gMrHmRJ


Saturday, November 17, 2018

How to choose between PAS (Cloud Foundry - PaaS) and PKS (Kubernetes - CaaS)

This seems to be the question at the top of everybody's mind. There are multiple ways of framing this decision. Several decision trees have been drawn up on this topic, including these:

[Decision tree diagram: credit @jxxf]

[App Transformation Decision Tree diagram]
These diagrams can be summarized from a PAS and PKS perspective as follows: PKS is ideal for stateful and persistent pinned workloads, commercially packaged software, short-lived apps/workloads, software distributed via Helm charts, apps using non-standard port behavior, legacy zero-factor apps, and complex apps already packaged as Docker images and well along the containerization journey. PAS is ideal for custom-built software targeting Windows or Linux; software packaged as ear, jar, and war files; web applications; APIs; batch jobs; and streaming and reactive applications.

Looking at this decision tree, someone who is further along on the dockerization journey may comment that it is biased towards PAS. You may take the view that if the workload (app) requires any non-zero code changes to migrate to PAS, then the default destination should be PKS. How does one resolve this conflict?

The scientific method creates a hypothesis and then validates or refutes the assumptions that led to it. In this blog post, I will explain the science behind choosing a destination (Cloud Foundry or Kubernetes) for your workload and establishing a migration factory that drives the transformation of all your apps to the right destination in the cloud.

Why should PAS (Cloud Foundry) be the default Choice?


First, let me explain why the default choice in the above picture is PAS, aka Cloud Foundry. Developers should operate at the highest level of abstraction, and the easiest place to change and test your code is in place, in the inner loop. Cloud Foundry lets you use the current currency of your developers, i.e. war, jar, and ear files. Cloud Foundry provides a set of top-notch, validated developer abstractions for running applications, whereas Kubernetes provides a top-notch platform for building a platform. Kelsey Hightower has reinforced this:

Kubernetes is an infrastructure framework. Its YAML-based configuration files and the kubectl command line tool make it approachable to developers, but far from the developer productivity you find in a PaaS or FaaS platform.

A common principle in manufacturing is that we should detect and fix any problem in the production process at the lowest-value stage possible. When developing applications, Cloud Foundry gives you a chance to develop faster and get productive by focusing on the inner loop of development and not worrying about non-functional concerns like service discovery, resiliency, stability, routing, and security. K8s is maturing very fast and these app concerns are being baked into the platform via various CNCF projects and SIGs. There are some promising projects that are improving the K8s development experience, including Istio, Knative, Buildpacks, and spring-cloud-kubernetes; however, the beauty of an opinionated platform like CF is that these choices are already made and you don't have to roll out a custom stack every time you develop an app. See the challenges of containerization here. Many also don't realize that the constraints of first-generation PaaS systems (Heroku, Google App Engine, CF v1) are all gone. Many developers, architects, and managers still think of those first-generation PaaS constraints when considering PCF, and specifically the Pivotal Application Service (PAS). Richard Seroter has demolished this myth in his 5-part series.

What Outcomes are you driving for the Cloud Acceleration / Migration Factory?

Technical decisions taken in a vacuum, without influence from business drivers, are destined to fail. Therefore, before making a choice, it is critical to examine the business motivators and your specific constraints. So what is the science behind picking the right destination for your workload? It is a combination of three factors: 1. technical feasibility, 2. business value, and 3. human factors. Any factor can tilt the decision towards PAS or PKS. The decision is also made on a per-deployable basis; for a large logical app you may have components and modules on both PAS and PKS.

1. Technical Feasibility: Where does the app fall on the cloud-native spectrum of 15 factors? Cloud nativity can be ascertained through a tool like SNAP or via automation with the various scanners. This analysis is done on the current state and has NO cloud influence. A scoring system is established, with 0 being completely cloud-non-native, persistent, and stateful, and 15 being completely cloud-native. It is helpful to think of workloads along an axis like this one ...

2. Business Value: Next, a determination needs to be made about the strategic business value of the application. Is this application under heavy development? Are the features and functions of this application critical to the survival and growth of the organization? The application needs to be scored on business factors. So what are the business factors? They can look like these:
  • Ongoing Development Cost
  • Infrastructure Cost 
  • Software License Cost
  • Operations Cost 
  • Overall value to the Business  
  • Lead time for Changes
  • Business Priority
  • Business Criticality
  • User Satisfaction 
A score between 1 and 10 is determined based on a weighted average of all the contributing factors, as sketched below.
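A minimal sketch of that weighted average, with invented weights and raw scores purely for illustration, could look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the raw scores and weights are made up; the factor names come from the list above.
public class BusinessValueScore {
    public static void main(String[] args) {
        // factor -> [raw score 1-10, weight]
        Map<String, double[]> factors = new LinkedHashMap<>();
        factors.put("Ongoing development cost",  new double[]{6, 0.10});
        factors.put("Infrastructure cost",       new double[]{4, 0.10});
        factors.put("Software license cost",     new double[]{7, 0.10});
        factors.put("Operations cost",           new double[]{5, 0.10});
        factors.put("Overall value to business", new double[]{9, 0.20});
        factors.put("Lead time for changes",     new double[]{3, 0.10});
        factors.put("Business priority",         new double[]{8, 0.15});
        factors.put("Business criticality",      new double[]{8, 0.10});
        factors.put("User satisfaction",         new double[]{6, 0.05});

        double weightedSum = 0, totalWeight = 0;
        for (double[] v : factors.values()) {
            weightedSum += v[0] * v[1];
            totalWeight += v[1];
        }
        double score = weightedSum / totalWeight; // stays within 1-10 since the inputs do
        System.out.printf("Business value score: %.1f / 10%n", score);
    }
}
```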

3. Human Factors: Once the technical feasibility score and business value of an app are determined, it is time to plan the waves of applications by arranging the apps in a matrix. Apps can be rearranged here based on knowns and unknowns that escaped the technical and business feasibility analysis. As the destination (PAS or PKS) is chosen, it is critical to understand the outcomes derived from each choice of platform and transformation activity, such as rehost (zero code changes, also called lift 'n' shift, with little or no configuration change), replatform, refactor, or rebuild.
  • Rehost: Containerize to “lift and shift” into Pivotal Container Services (PKS)
  • Replatform: Upgrade an application from its existing platform, applying the minimum of the 15 factors needed to get it running on PAS, preserving existing functionality
  • Refactor: Change apps with high business priority and transactional load to make them 15-factor cloud native, using cloud-native architectural patterns
  • Rebuild: Leverage DDD techniques to deconstruct and migrate a complex and monolithic application to the cloud. 


The benefits of containerization to PKS include decreased infrastructure use, automated zero-touch deployments with CI/CD, reduction of extensive manual change management processes, and increased multi-cloud portability. If the underlying architecture and tech stack remain unchanged, most of the gains are OPEX-related operational efficiencies.

However, rehosting to PKS will not eliminate the cost of proprietary stacks (RHEL, WebSphere, WebLogic, TIBCO), whereas replatforming will typically eliminate proprietary middleware licenses and decrease the effort to patch and upgrade software, since the platform takes that on.

A more comprehensive refactoring or rebuild to a cloud-native application running on PAS yields CAPEX benefits like decreased time to scale, decreased MTTR for applications, proactive monitoring of application KPIs, increased deployment frequency with CI/CD, reduced lead time, reduction of tight service coupling, and increased automated testing and test data management as part of the CI/CD pipeline, all of which leads to increased developer productivity and satisfaction.

Containerization is only a fraction of the opportunity. Driving developer efficiency via XP practices and a product mindset on a PaaS is the whole opportunity. 80% of your development cost is labor and people, not infrastructure. Improve productivity by increasing leverage and producing better and faster with the same team.

What does the Cloud Migration Factory look like?

The process part of the migration factory, where we take in a large number of raw materials and assemble meaningful parts, was explained earlier. Once we have all the components in a factory, it is time to assemble a coherent, meaningful product. This is where a funnel and a codification of the process above help. Once all the data is visualized, you need to plan the waves of apps again based on the outcomes of the transformation program. It is critical that we measure key indicators and journey markers to ensure that we are realizing the outcomes planned earlier. These KPIs could be as varied as the percentage of the portfolio running on PCF, the number of developers enabled on cloud native, having an app transformation decision framework in place, the time taken from idea to production, developer engagement with the platform, and ROI from infrastructure and license consolidation.

In the end, the benefits of the cloud, be it PAS, PKS, or PFS, can only be realized with a change to the existing agilefall or waterfall-agile development process. It is critical to implement a value stream that emphasizes pace and progressive delivery. Without the confidence of automated tests and the removal of continuous-deployment headaches, even a cloud platform (bare metal, IaaS, CaaS, PaaS) from God won't help realize the desired outcomes.

Finally, whatever your journey, please remember the law of the hammer: "If the only tool you have is a hammer, you treat everything as if it were a nail." Fight this cognitive bias and leverage the right choice, PAS AND PKS, for your apps, driving the business outcomes that matter.


Good Luck!

Credits:  

The blog post is a synthesis of the work of many colleagues in Pivotal and Pivotal AppTx, including Richard Seroter, Joe Szodfridt, Shaun Anderson, Vinay Upadhya, and others. You can check out the Pivotal AppTx mission at https://pivotal.io/application-transformation and our whitepaper at https://content.pivotal.io/application-modernization/pivotal-practices-application-transformation


Tuesday, November 13, 2018

How do you modernize an Oracle Forms application to the Cloud?

Options


  1. Wrap it in PKS ... no touch, zero refactoring.
  2. Replatform it with Forms2ADF (http://www.oracle.com/technetwork/developer-tools/jheadstart/overview/index.html), chained with an ADF-to-JSF migrator.
  3. Refactor with a ground-up rewrite and deploy to PAS.
  4. Create an Oracle Forms custom buildpack for running apps unchanged on PAS.
Pick the one you think leads to the desired outcome. If the app is strategic, then pick the refactoring option to PAS: either expend energy cataloging the PL/SQL (e.g., stored procs and their ins/outs) to see what entities, value objects, and aggregates fall out, or rewrite from scratch and start again with a bottom-up domain modeling exercise, aka a full rewrite.

In Oracle Forms the majority of the business logic is actually PL/SQL; the forms just skin the result sets. There is a whole cottage industry for forms migration (http://www.oracle.com/technetwork/developer-tools/forms/oracle-forms-migration-partners-098680.html). You can migrate to ADF and then run ADF on Tomcat. The JHeadstart Forms2ADF Generator allows you to transform Oracle Forms applications directly into ADF applications, thereby protecting your investment when moving to the JEE (Java Enterprise Edition) development platform. (See "Convert Oracle Forms To Modern Web Application".)


First of all, the very thought of converting Oracle Forms to something else as-is is wrong. The entire task is an expensive, time-consuming, futile effort. It is about as good as using your black-and-white camera for capturing videos without even upgrading it! Most of the transactional forms are heavy and designed with 20-year-old technology, so it is better to leave them alone for as long as they work. In cases where you absolutely want to redo the Oracle Forms, it is advisable to plan a proper migration (no shortcuts) to leverage the latest and greatest technology, which can probably be maintained for another 20 years.

When there are a lot of stored procedures, you have deep coupling with the DB schemas, and these apps will take a LOT of effort to refactor. Enterprises can save $$$ by using a migration partner that performs this migration and then deploying the resulting Java code on PAS.

Sunday, November 11, 2018

Modernizing the Monolithic User Interface

There is a lot of treatment of the topic of decomposing monolithic applications. Most of the existing literature deals with disentangling the business logic into logical bounded contexts and sub-domains. Not enough attention is paid to the separation, composition, and co-existence of the new user interface with the old one. So what are the techniques for decomposing and modernizing user interfaces, from a legacy UI technology like Oracle ADF to a modern, composable JavaScript-based UI?

Plethora of options for UI decomposition
The existing theory of microservices UI composition leads us towards micro-frontends. How does one stitch together a series of micro-frontends decomposed from a monolithic UI? What are the different options for decomposing a monolithic frontend and assembling a modernized micro-frontend UI?

  1. Monolithic UI
  2. Single Page Application (SPA) JavaScript meta-framework (single-spa)
  3. Micro front-ends (micro-frontends)
  4. Front-end server transclusion (microservice-websites)
  5. Mosaic (project-mosaic front-end services)
  6. Single SPA with multiple similar apps
  7. Resource Oriented Client Architecture (ROCA)
  8. iFrame palooza (iframes)



So which option did we choose? Option #6.

