About Me

Rohit is an investor, startup advisor, and Application Modernization Scale Specialist working at Google.

Tuesday, December 18, 2018

The Anti-Architect

A colleague of mine and fellow AppTx practice lead, Shaun Anderson, posed a very interesting question last night: "tell me about the behaviors of all the people you have not had success working with", which got me thinking. Our industry has some established norms on what a software architect should do; however, we don't have prescriptive guidelines around what an architect should NOT do, or what behaviors an architect should not display. To capture these ANTI-architect behaviors, I outline below the architect styles you should probably NOT copy.

1. *Mr. Boxes and Lines*, *Mr. Ivory Tower*: Knows all the latest frameworks and technology choices and answers questions the right way; however, he could not code up a simple program to save his life.

2. *Mr. Hoarder*: Architects the system in such a way that only he understands the intricacies and holds it close to his chest. Refuses to call his baby ugly.

3. *Mr. Framework*: Frameworkifies everything, even Spring, such that all the developers only need to extend the UBER framework at the edges. Assumes developers are dumb and cannot think for themselves and therefore need to be provided abstractions suitable for a seven-year-old. This is an egregious form of the hoarder. Sometimes this is taken to the extreme where a meta-framework will generate other child frameworks, aka the Mother Of All Frameworks.


4. *Mr. Kafka-lover*, *Mr. Serverless-lover*, *Mr. Reactive-lover*: The answer is Kafka, or the answer is Serverless... what is the question? This is an architect who falls prey to the microservices-envy, Kafka-as-an-ESB and layered-tech-driven architecture anti-patterns. The answer is R2DBC - now what is the question?

5. *Mr. Async*: Makes the genuine mistake of equating event-driven architecture to event sourcing. Does not realize there are four forms of event-driven architecture (event notification, event-carried state transfer, event sourcing, and CQRS); see Martin Fowler's Many Meanings of Event-Driven Architecture.

6. *Mr. Pussyfoot*: Hesitates to make any architectural decision without deferring to a central committee.

7. *Mr. Shit-it-all*: Periodically swoops down from an ivory tower to shit all over the current project architecture and implementation, leaving behind a bunch of half-assed diktats without full context.

8. *Mr. Manager-upper*: Only shows up when his boss or adversary shows up and is largely absent otherwise.

9. *Mr. Idiomatic*: Does not engage in fruitful debate and is unwilling to reassess their stance. They have made up their mind and are not willing to experiment or concede that their world view may be wrong.

10. *Mr. Cowboy*: Solution first, ask questions later. Indulges in premature optimization or premature solutioning without fully understanding the question, intent or context.

11. *Ms. Alice in Wonderland*: Jumps deep into each and every rabbit hole she can find.

12. *Ms. I Told You So*: As soon as something goes just a bit sideways, drops the “I told you this was not going to work and here’s why” in front of your stakeholders.

13. *Mrs. Nit Picker*: The architect who needs to understand every single detail prior to coding anything.

14. *Ms. But Why*: Doesn't understand something 100%, and the only thing they can do to prove their presence is to ask “But why?” on repeat.

We all make these mistakes as architects, and there is no shame in admitting that; however, recognizing some of these behaviors in ourselves and others helps us improve and become better architects who build something real that delights our customers and stakeholders.


Tuesday, December 4, 2018

Performance Profiling a Ruby application on Pivotal Application Service (PAS)

Here is how we would go about debugging a Ruby app on PCF:
http://engineering.pivotal.io/post/debugging-ruby-memory-issues-cloud-foundry-cloud-controller/

You will also do well to read https://www.oreilly.com/library/view/ruby-performance-optimization/9781680501681/

This poor-man's stats gathering tool can be really helpful (credit Will Sulzer)
https://gist.github.com/kelapure/f24a0e27aa12f1d3364a93b5289e7ec2. However, you'd need to modify the code to run in a block passed to the measure method. This comes from the Ruby Performance Optimization Book on Safari.

Ruby-Prof is another nice tool (https://github.com/ruby-prof/ruby-prof), and then the real heavy-weight tools trace back to the native calls using Valgrind etc. Unfortunately, the linked script may tell you what the Ruby program running in the JVM is doing, but maybe not why it's consuming JVM memory to load the interpreter, etc. By default, JRuby disables ObjectSpace because it's the *cause* of memory problems (it causes the runtime to maintain an additional list of weak references), BTW.

Check out https://github.com/jruby/jruby/wiki/PerformanceTuning for JRuby specifics. You can enable ObjectSpace if you have code that depends on it, but it's highly discouraged in JRuby and would likely spoil the tests.

That begs the question: why should you run Ruby apps on Pivotal Cloud Foundry? What advantages does the Ruby buildpack on PAS provide?

  • It allows the code to run in isolation
  • It allows for resiliency with health checks
  • It allows for custom metrics and alerts via PCF Metrics
  • It makes binding to external services easier
  • It provides log aggregation
  • It lets you manage all the app instances in Apps Manager
  • It lets you leverage Zipkin and Spring Cloud Services
  • It lets you leverage task support to run one-off tasks
  • Ruby apps were among the first to run on a PAS; remember, PCF was inspired by Heroku, which initially only supported Ruby
  • Ruby apps are supported in Concourse
  • You can autoscale Ruby apps based on app metrics including latency, throughput, CPU, memory and custom ones
What are the disadvantages?
  • Debugging is hard in production
  • Memory limits on containers are hard

Many thanks to Chris Umbel and Will Sulzer who contributed the majority of this post. 

Saturday, November 17, 2018

How to choose between PAS (Cloud Foundry - PaaS) and PKS (Kubernetes - CaaS)

This seems to be the question on the top of everybody's mind. There are multiple ways of framing this decision. Several decision trees have been drawn up on this topic, including these:

Decision tree (credit @jxxf)

App Transformation Decision Tree
These diagrams can be summarized from a PAS and PKS perspective as follows: PKS is ideal for stateful and persistent pinned workloads, commercially packaged software, short-lived apps/workloads, software distributed via Helm chart, apps using non-standard port behavior, legacy zero-factor apps, and complex apps already packaged as Docker images and well along the containerization journey. PAS is ideal for custom-built software targeting Windows or Linux, software packaged as ear, jar and war files, web applications, APIs, batch jobs, and streaming and reactive applications.

Looking at this decision tree, someone who is further along on the dockerization journey may comment that it is biased towards PAS. You may take the view that if the workload (app) requires ANY non-zero code changes to migrate to PAS, then the default destination should be PKS. How does one resolve this conflict?

The scientific method creates a hypothesis and then validates or refutes the assumptions that led to it. In this blog post, I will explain the science behind choosing a destination (Cloud Foundry or Kubernetes) for your workload and establishing a migration factory that drives the transformation of all your apps to the right destination in the cloud.

Why should PAS (Cloud Foundry) be the default Choice?


First, let me explain why the default choice in the above picture is PAS, aka Cloud Foundry. Developers should operate at the highest level of abstraction. The easiest place to change and test your code is in place, in the inner loop. Cloud Foundry allows you to use the current currency of your developers, i.e., war, jar and ear files. Cloud Foundry provides a set of top-notch validated developer abstractions for running applications, whereas Kubernetes provides a top-notch platform to build a platform. Kelsey Hightower has reinforced this:

Kubernetes is an infrastructure framework. Its YAML-based configuration files and the kubectl command line tool make it approachable to developers, but far from the developer productivity you find in a PaaS or FaaS platform.

A common principle in manufacturing is that we should always detect and fix any problem in the production process at the lowest-value stage possible. When developing applications, Cloud Foundry gives you a chance to develop faster and get productive by focusing on the inner loop of development and not worrying about non-functional concerns like service discovery, resiliency, stability, routing and security. K8s is maturing very fast and these app concerns are being baked into the platform via various CNCF projects and SIGs. There are some promising projects that are improving the K8s development experience, including Istio, Knative, Buildpacks, and spring-cloud-kubernetes; however, the beauty of an opinionated platform like CF is that these choices are already made and you don't have to roll out a custom stack every time you develop an app. See challenges of containerization here. Many also don't realize that the constraints of first-generation PaaS systems (Heroku, Google App Engine, CF v1) are all gone. Many developers, architects, and managers still think of those first-generation PaaS constraints when considering PCF, and specifically the Pivotal Application Service (PAS). Richard Seroter has demolished this myth in his 5-part series.
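
To make the point about the "current currency" of developers concrete, here is a minimal Spring Boot app; the class and endpoint are illustrative. Packaged as a jar, a single cf push is enough to get routing, health checks and log aggregation on PAS, with no Dockerfile and no deployment YAML:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete PAS-deployable application: `cf push -p target/app.jar`
// and the platform takes care of the rest.
@SpringBootApplication
@RestController
public class HelloApplication {

    @GetMapping("/")
    public String hello() {
        return "Hello from PAS";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}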

What Outcomes are you driving for the Cloud Acceleration / Migration Factory?

Technical decisions taken in a vacuum without influence from business drivers are destined to fail. Therefore, before making a choice, it is critical to examine the business motivators and your specific constraints. So what is the science behind picking the right destination for your workload? It's a combination of three factors: 1. technical feasibility, 2. business value and 3. human factors. Any factor can tilt the decision towards PAS or PKS. This decision is also made on a per-deployable basis; for a large logical app you may have components and modules on both PAS and PKS.

1. Technical Feasibility: Where does the app fall on the cloud-native spectrum of 15 factors? Cloud nativity can be ascertained through a tool like SNAP or automation via the various scanners. This analysis is done on the current state and has NO cloud influence. A scoring system is established, with 0 being completely cloud-non-native, persistent and stateful, and 15 being completely cloud-native. It is helpful to think of workloads along an axis like this one ...



2. Business Value: Next, a determination needs to be made on the strategic business value of this application. Is this application under heavy development? Are the features and functions of this application critical to the survival and growth of the organization? A scoring of the application under business factors needs to be made. So what are the business factors? They can look like these ...
  • Ongoing Development Cost
  • Infrastructure Cost 
  • Software License Cost
  • Operations Cost 
  • Overall value to the Business  
  • Lead time for Changes
  • Business Priority
  • Business Criticality
  • User Satisfaction 
A score between 1 and 10 is determined based on a weighted average of all the contributing factors, as sketched below.
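
Here is a minimal sketch of such a weighted average; the factor names, weights and scores are illustrative, not a prescribed rubric:

import java.util.LinkedHashMap;
import java.util.Map;

// Weighted scoring of business factors. Each factor carries a
// {weight, score} pair; weights are hypothetical and sum to 1.0,
// scores are on a 1-10 scale.
public class BusinessValueScore {

    public static double score(Map<String, double[]> factors) {
        double total = 0.0;
        for (double[] ws : factors.values()) {
            total += ws[0] * ws[1]; // weight * score
        }
        return total; // weighted average on the 1-10 scale
    }

    public static void main(String[] args) {
        Map<String, double[]> factors = new LinkedHashMap<>();
        factors.put("Ongoing Development Cost", new double[]{0.15, 7});
        factors.put("Business Criticality",     new double[]{0.30, 9});
        factors.put("Lead Time for Changes",    new double[]{0.25, 4});
        factors.put("User Satisfaction",        new double[]{0.30, 6});
        System.out.printf("Business value score: %.2f%n", score(factors));
    }
}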

3. Human Factors: Once the technical feasibility score and business value of an app are determined, it is time to determine the wave of applications by arranging the apps in a matrix. Apps can be rearranged here arbitrarily based on knowns and unknowns that escaped the technical and business feasibility analysis. As the destination, PAS or PKS, is chosen, it is critical to understand the outcomes derived from each choice of platform and transformation activity, such as rehost (zero code changes, also called lift 'n' shift, with little or no configuration changes), replatform, refactor or rebuild.
  • Rehost: Containerize to “lift and shift” into Pivotal Container Service (PKS)
  • Replatform: Upgrade an application from its existing platform, adopting the minimum of the 15 factors needed to get it to run on PAS, preserving existing functionality
  • Refactor: Change apps with high business priority or transactional load to get them to 15-factor cloud native using cloud-native architectural patterns
  • Rebuild: Leverage DDD techniques to deconstruct and migrate a complex and monolithic application to the cloud.


The benefits of containerization to PKS include decreased infrastructure use, automated zero-touch deployments with CI/CD, reduction of extensive manual change management processes, and increased multi-cloud portability. If the underlying architecture and tech stack remain unchanged, most of the gains are OPEX gains related to operational efficiency.

However, rehosting to PKS will not eliminate the cost of proprietary stacks (RHEL, WebSphere, WebLogic, TIBCO), whereas replatforming will typically lead to the elimination of proprietary middleware licenses and decreased effort to patch and upgrade software, since the platform handles both.

A more comprehensive refactoring or rebuild to a cloud-native application running on PAS yields CAPEX benefits like decreased time to scale, decreased MTTR for applications, proactive monitoring of application KPIs, increased deployment frequency with CI/CD, reduced lead time, reduction of tight service coupling, and increased automated testing and test data management as part of the CI/CD pipeline; all this leads to increased developer productivity and satisfaction.

Containerization is only a fraction of the opportunity. Driving developer efficiency via XP practices and a product mindset on a PaaS is the whole opportunity. 80% of your development cost is labor and people, not infrastructure. Improve productivity by increasing leverage and producing better and faster with the same team.

What does the Cloud Migration Factory look like?

The process part of the migration factory, where we take in a large number of raw materials and assemble meaningful parts, was explained earlier. Once we have all the components in a factory, it is time to assemble a coherent, meaningful product. This is where a funnel and a codification of the process above helps. Once all the data is visualized, you need to plan the waves of apps again based on the outcomes in the transformation program. It is critical that we measure key indicators and journey markers to ensure that we are realizing the outcomes planned before. These KPIs could be as varied as the percentage of the portfolio running on PCF, the number of developers enabled on cloud native, an App Transformation decision framework in place, the time taken from idea to production, developer engagement with the platform, and ROI from infrastructure and license consolidation.



In the end, the benefits of the cloud, be it PAS or PKS or PFS, can only be realized with a change in the existing agilefall or waterfall-agile development process. It is critical to implement a value stream that emphasizes pace and progressive delivery. Without the confidence of automated tests and removal of the headaches of continuous deployment, even a cloud platform (bare-metal, IaaS, CaaS, PaaS) from God won't help realize the desired outcomes.

Finally, whatever your journey, please remember the law of the hammer: "If the only tool you have is a hammer, you treat everything as if it were a nail." Fight this cognitive bias and leverage the right choice, PAS AND PKS, for your apps, driving the business outcomes that matter.


Good Luck!

Credits:  

This blog post is a synthesis of the work of many colleagues in Pivotal and Pivotal AppTx, including Richard Seroter, Joe Szodfridt, Shaun Anderson, Vinay Upadhya and others. You can check out the Pivotal AppTx mission at https://pivotal.io/application-transformation and our whitepaper at https://content.pivotal.io/application-modernization/pivotal-practices-application-transformation


Tuesday, November 13, 2018

How do you modernize an Oracle Forms application to the cloud?

Options


  1. Wrap it in PKS ... no touch, zero refactoring.
  2. Replatform it with [Forms2ADF](http://www.oracle.com/technetwork/developer-tools/jheadstart/overview/index.html). Chain with an ADF-to-JSF migrator.
  3. Refactor with a ground-up rewrite and deploy to PAS.
  4. Create an Oracle Forms custom buildpack for running apps unchanged in PAS.
Pick the one that you think leads to the desired outcome. If the app is strategic, then pick the refactoring option to PAS: would you expend energy cataloging the PL/SQL (e.g., stored procs and their ins/outs) to see what entities, value objects and aggregates fall out, or just rewrite from scratch and start again with a bottom-up domain modeling exercise, aka a full rewrite?

In Oracle Forms the majority of the business logic is actually PL/SQL; the forms just skin all the resultsets. There is a whole cottage industry for [forms migration](http://www.oracle.com/technetwork/developer-tools/forms/oracle-forms-migration-partners-098680.html). You can migrate to ADF and then run ADF on Tomcat with the JHeadstart Forms2ADF Generator, which allows you to transform Oracle Forms applications directly to ADF applications, thereby protecting your investments when moving to the JEE (Java Enterprise Edition) development platform. [Convert Oracle Forms To Modern Web Application]


First of all, the very thought of converting Oracle Forms to something else as-is is wrong. The entire task is an expensive, time-consuming, futile effort. It is probably as good as using your black-and-white camera for capturing videos without even upgrading it! Most of the transactional forms are heavy and designed with 20-year-old technology, so it is better to leave them alone for as long as they work. In cases where you absolutely want to redo the Oracle Forms, it is advisable to plan a proper migration (no short-cuts) to leverage the latest and greatest technology that can probably be maintained for another 20 years.

When there are a lot of stored procedures, you have deep coupling with the DB schemas, and these apps will take a LOT to refactor. Enterprises should save $$$ by just using a migration partner that does this migration and then deploying that Java code on PAS.

Sunday, November 11, 2018

Modernizing the Monolithic User Interface

There is a lot of treatment of the topic of decomposing monolithic applications. Most of the existing literature deals with disentangling the business logic into logical bounded contexts and sub-domains. Not enough attention is paid to the separation, composition and co-existence of the new user interface with the old user interface. So what are the techniques for decomposing and modernizing user interfaces, moving from a legacy UI technology like Oracle ADF to a modern, composable, JavaScript-based UI?

Plethora of options for UI decomposition
The existing theory of microservices UI composition leads us towards micro-frontends. How does one stitch together a series of micro-frontends decomposed from a monolithic UI? What are the different options available to decompose a monolithic frontend and assemble a modernized micro-frontend UI?

  1. Monolithic UI
  2. Single Page Application (SPA) JavaScript meta-framework: single-spa
  3. Micro front-ends: micro-frontends
  4. Front-end server transclusion: microservice-websites
  5. Mosaic: project-mosaic-front-end-services
  6. Single SPA with multiple similar apps
  7. Resource-Oriented Client Architecture: roca
  8. iFrame palooza: iframes



So which option did we choose? Option #6.



Saturday, November 10, 2018

Top 10 reasons for containerizing legacy COTS software

What does cloud transformation mean for legacy commercial off-the-shelf software, aka COTS? Yes, we are talking of rules engines, portals, commerce engines, BPMs, and other products from IBM, Red Hat, Oracle and other vendors built on big application servers and other forms of middleware...

1. Operational efficiency provided by PKS for managing the upgrade of both the platform (K8s) and the COTS software. If done right, zero-downtime, rolling, canary and blue/green upgrades can be done with minimal fuss. Advanced deployment policies provided by K8s enable the deploy and release of builds without impact on upstream/downstream systems, providing loose coupling.

2. Infrastructure consolidation and better utilization of hardware via horizontal auto-scaling. Depending on the workload, you can pin certain COTS to certain nodes if hardware affinity is required; PKS/K8s has a better story for hardware affinity if the COTS needs special GPU/CPU/memory.

3. Most COTS vendors will move to a container-based deployment model in the future. Installers shall become moot. This is the future: a K8s dial-tone is a must for all future deployments of COTS.

4. Through the various value-adds that PKS provides for logs, metrics, telemetry, clusters, fine-grained security, health-watch and network micro-segmentation, you will get better uptime, stability and resiliency of your COTS platform if deployed right. Monitor end-to-end to gain insight and make informed decisions using the insights provided by PKS. Kubernetes, along with BOSH, provides proactive system health and automated health management at the container level.

5. Developer benefits include more consistent environments. On-demand environments for COTS can be stamped out as opposed to manual provisioning. Putting it in a container allows deployment across zones and across different cloud providers, realizing the multi-cloud dream and avoiding BIG cloud vendor lock-in.

6. Putting the COTS in a container and deploying with a K8s platform like PKS provides flexibility in dynamic routing and service discovery to start strangling the functionality from the COTS if so desired. This provides a gateway to modernization.

7. If the COTS is non-strategic, deploying it in K8s provides a long-term resting place that is coherent with the cloud strategy.

8. COTS in PKS plus microservices in PAS provides the right abstraction at the right platform level. Recommend keeping the COTS vanilla and building customization using Boot microservices.

9. K8s updates every quarter. COTS vendors and everyone else are playing catch-up and moving to a faster, as-a-service upgrade cycle. As an IT organization, COTS in PKS helps you get ahead of this curve. If done right, pace is immense leverage for your organization.

10. Developers love containers and can run the COTS locally through a Docker image, whereas earlier they would have to spend a lot of hours on arcane setup. They get all the goodness of Docker, aka OCI-compliant container images.

Top 10 challenges in containerizing legacy enterprise apps




1. Vendor Support: Independent of the K8s distribution (PKS or OpenShift or GKE), the availability of vendor-supported images and deployment models via Helm or operators. Who will support us on the platform? Will we get a single throat to choke? See the IBM support model for containers here and the Oracle WebLogic guidance for containers.

2. Upgrades including Security: Upgrade and patching of the said COTS software and apps packed in the images. Since you bring your own image in Docker, it behooves the app owner to update the entire stack: OS, JVM, app server/middleware and the app itself. Are best practices being followed for container creation?

3. State: Usually when containerizing legacy apps the intent is to do zero refactoring of the app itself; this leads to embedded state within the app, which complicates the deployment and day-2 ops of the app on K8s, ranging from autoscaling to liveness and readiness probes and proper use of stateful sets. One common remediation is sketched below.
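
One common remediation for embedded HTTP session state, sketched under the assumption that Spring Session with a Redis backing store is an option for the app (the class name and connection details are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Externalizes HTTP session state to Redis so container instances stay
// fungible: pods can be killed, rescheduled and scaled without losing
// user sessions.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public RedisConnectionFactory connectionFactory() {
        // Hypothetical host/port; in practice these come from a bound
        // service instance or injected configuration.
        return new LettuceConnectionFactory("redis-host", 6379);
    }
}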

4. Plethora of choices: In the K8s world there are five ways of doing anything. Let's take service discovery, for example: do you want to use native DNS, cluster IPs, an Istio sidecar, environment variables, OSBAPI service brokers, Eureka, client-side or server-side discovery? Picking the right one for your technical stack is a drag that most ignore. These choices multiply as you consider logging, security, metrics, etc. This is where PKS/Pivotal/AppTx can help: we know what works for you.

5. Day-2 App Ops: Very few understand how to operationally keep a fleet of K8s clusters alive. The care and feeding of your K8s clusters and pods requires a level of operational maturity that is hard to visualize and estimate, and it is usually an after-thought.

6. ROI: Show me the money: There is a wave of buzzwords raining down on the industry right now - microservices, serverless, devops, containers, agile, etc. The return on investment from rehosting, refactoring, replatforming or rebuilding/retiring an application is NOT clear. Developers follow mandates from the top. Is containerization really the right choice for your technical and business outcomes? Is this a strategic play or a tactical play? All these options need to be considered before a decision is made to containerize your legacy app. Pivotal AppTx has a structured funnel approach (see here) to make the right choice. What is the right choice from P to V to C [link]? Remember, CaaS is only a means to the end. If the end is unclear you may not be making the right choices along the way.

7. Code Provenance: Dude, where is the source code?: Sometimes the provenance of the code of the legacy app cannot be established. The source code is owned by a third-party partner who has been maintaining the app for years. Development happens offshore with only a few key customer coordinators who manage the project from on-shore. In such a situation the outsourced partner has little incentive to containerize and eliminate waste, because it translates to a material impact on consulting $$$. This is really a question of alignment of priorities between you and any major offshore partner.

8. Process: I am a big fan of the Theory of Constraints by Dr. Eliyahu Goldratt. The Theory of Constraints is a methodology for identifying the most important limiting factor (i.e. constraint) that stands in the way of achieving a goal, and then systematically improving that constraint until it is no longer the limiting factor. If you don't eliminate the top bottlenecks you may be solving the wrong problem. How do you pick the right set of workloads to run at the right level of abstraction? Have you eliminated waste in your release management process? There is no point in optimizing the 20% of the time spent developing the software while keeping the 80% of the time spent in QA and release gates intact.

9. Resiliency: Often, packaging an app in containers changes the environment and assumptions of the app enough to have a detrimental effect on its stability. You have to be careful about how the application is dockerized and run in K8s. The inherent assumption in all the container orchestrators is that the container is fungible and location-transparent. If the legacy app violates these constraints then you are fitting a round peg in a square hole.

10. Skills: The subset of developers who understand cloud native, and furthermore Kubernetes and Platform as a Service, is small. K8s is a fast-moving target: significant features show up in releases in alpha or beta form every 3 months. It is critical that your application is written with cloud-native principles, making it cloud-agnostic and enabling it to consume these platform features as soon as they show up. It is critical to architect and develop the legacy or greenfield application in the right way to ride the surf waves of K8s releases.

Sunday, September 23, 2018

SNAP Not Analysis and Paralysis

SNAP is a technique we employ extensively in Pivotal App Transformation to triage an application and determine technical feasibility. SNAP can be done for an individual application as well as for a bucket of like applications grouped together. We typically grade each answer along S, M, L and XL t-shirt sizes and determine an overall technical feasibility of the application, aka its cloud readiness score.

SNAP Analysis of Apps (½ hour per app)

Questions to ask:

0. Versions and APIs in use of Java, JavaEE, J2EE & Spring Framework
1. External integrations - databases, caching, messaging queues
2. In-house frameworks
3. Logging
4. Configuration
5. SLAs
6. Packaging and build (ear/war/jar)
7. CI/CD in place?
8. Use of distributed 2PC XA transactions
9. Persistent file system access like NFS or SMB/CIFS
10. People location
11. Non-HTTP inbound networking protocols like RMI-IIOP, etc.
12. Total LOC
13. App server and app-server-specific code
14. Security requirements (SiteMinder, Ping, SSO)
15. Batch/ETL
16. Front-end tech
17. Runs on PCF?
18. Statefulness - session state
19. Data access - JDBC, ORM, JMS
20. Startup/shutdown times
21. 3rd-party frameworks/libraries - 32/64 bit

References - bit.ly/app-replat & bit.ly/migrate-jee


Typical outcomes of a SNAP include: 0. Characteristics, 1. User Stories, 2. Metrics, 3. Risks, 4. Score aka Summary.

The outcome for each app is a list of stories to get the app to the degree of cloud nativeness desired by the customer. Each app maps to multiple MVPs. The first MVP is always getting the app running on PCF; thereafter there can be multiple MVPs gradually moving the app along the modernization scale. The scorecard of each app is used to map all the apps on a quadrant of business value vs. technical risk. Apps of low technical risk and high business value are chosen first.
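
As an illustration of how the t-shirt grades could roll up into the cloud readiness score, here is a minimal sketch; the numeric mapping is an assumption, not the actual SNAP rubric:

import java.util.List;
import java.util.Map;

// Hypothetical roll-up of SNAP t-shirt grades into a 0-15 readiness score.
// S = little migration friction ... XL = heavy friction; the real SNAP
// scorecard weighs individual questions differently.
public class SnapScore {

    private static final Map<String, Integer> FRICTION =
            Map.of("S", 0, "M", 1, "L", 2, "XL", 3);

    // Returns 0 (completely cloud-non-native) to 15 (completely cloud-native).
    public static int readiness(List<String> grades) {
        int friction = grades.stream().mapToInt(FRICTION::get).sum();
        int maxFriction = grades.size() * 3;
        return Math.round(15f * (maxFriction - friction) / maxFriction);
    }

    public static void main(String[] args) {
        // One grade per SNAP question, e.g. use of XA transactions = "XL".
        List<String> grades = List.of("S", "S", "M", "L", "XL", "M", "S");
        System.out.println("Cloud readiness: " + readiness(grades) + "/15");
    }
}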

Thursday, August 9, 2018

Auto Generating cloud pipelines for existing applications - THANKS TO Marcin Grzejszczak

This is a guest post where I copy-paste complete nuggets of wisdom from Spring and pipelines guru Marcin Grzejszczak.

The question I posed to Marcin was: how do we mass-generate CI pipelines for existing enterprise applications?

Marcin indicated that this functionality already exists in spring-cloud-pipelines via the project crawler. However, this functionality is currently restricted to Jenkins, because there’s no Java API to work with Concourse YAML or Concourse as such.

Project Crawler - In Jenkins, you can generate the deployment pipelines by passing an environment variable with a comma-separated list of repositories. This, however, doesn’t scale. We would like to automatically fetch a list of all repositories from a given organization/team.

Documentation for Crawler support 



Cloud Pipelines Concourse comes with a crawler.groovy file that goes through the repositories in an organization on an SCM server (e.g. GitHub, GitLab, Bitbucket) and creates a pipeline in Concourse for each repo.


How does this work with Jenkins?

This is the logic for Jenkins that does the creation of pipelines -

https://github.com/spring-cloud/spring-cloud-pipelines/blob/master/jenkins/jobs/jenkins_pipeline_crawler_sample.groovy

This is the logic that creates the crawler -

https://github.com/spring-cloud/spring-cloud-pipelines/blob/master/jenkins/jobs/jenkins_pipeline_crawler_sample.groovy#L34-L41

This is the logic that fetches the repos -

https://github.com/spring-cloud/spring-cloud-pipelines/blob/master/jenkins/jobs/jenkins_pipeline_crawler_sample.groovy#L43-L45


Here is the logic that iterates over the repos and generates the pipelines:

https://github.com/spring-cloud/spring-cloud-pipelines/blob/master/jenkins/src/main/groovy/org/springframework/cloud/pipelines/common/PipelineFactory.groovy#L40-L73


What can we do for Concourse?

The following Groovy script can be used to mass-generate pipelines:
// crawl the org
ProjectCrawler crawler = new ProjectCrawler(OptionsBuilder
    .builder().rootUrl(urlRoot)
    .username(gitUsername)
    .password(gitPassword)
    .token(gitToken)
    .exclude(repoProjectsExcludePattern)
    .repository(repoType).build())

// get the repos from the org
List<Repository> repositories = crawler.repositories(org)
Map<String, Exception> errors = [:]

repositories.each { Repository repo ->
            try {
                // fetch the descriptor
                String descriptor = crawler.fileContent(org, repo.name, repo.requestedBranch, "sc-pipelines.yml")
                // parse it
                PipelineDescriptor pipeline = PipelineDescriptor.from(descriptor)
                if (pipeline.hasMonoRepoProjects()) {
                    // for monorepos treat the single repo as multiple ones
                    pipeline.pipeline.project_names.each { String monoRepo ->
                        Repository monoRepository = new Repository(monoRepo, repo.ssh_url, repo.clone_url, repo.requestedBranch)
                        // generate a concourse pipeline for monoRepository
                    }
                } else {
                        // generate a concourse pipeline for pipeline
                }
            } catch (Exception e) {
                errors.put(repo.name, e)
            }
        }

if (!errors.isEmpty()) {
    println "\n\n\nWARNING, THERE WERE ERRORS WHILE TRYING TO BUILD PROJECTS\n\n\n"
    errors.each { String key, Exception error ->
        println "Exception for project [${key}], [${error}]"
        println "Stacktrace:"
        error.printStackTrace()
    }
}


In the future the Spring Cloud Pipelines developer workflows shall be simplified such that:

As a developer, you will clone one of our sc-pipelines prepared sample apps and alter it to suit your needs (maintaining the conventions that are already there). You’re done.

As an ops person, you will ensure that the Jenkins seed job runs every now and then (or is triggered when a new repo is added to your GitHub org). That way a pipeline is created out of the box.

Thank you Marcin!!!


Monday, July 30, 2018

Are PHP Microservices a Thing?

So cloud-native architecture does not apply only to .NET or Java. Cloud native is a secular term that applies to applications in any language, including PHP. The concepts and 15 factors of cloud native apply to any language.

So PHP microservices are indeed a thing. There is even a book, published in March 2017: https://www.packtpub.com/mapt/book/application_development/9781787125377/1

Not only can you write cloud-native apps in PHP, you can also practice domain-driven design in PHP. The book Domain-Driven Design in PHP is an excellent resource to get started.

Please also check out True Tales of Publishing PHP Microservices.

So with a combination of cloud-native 15-factor implementation techniques and monolith decomposition driven by DDD, you too can achieve nirvana with PHP microservices.

PHP microservices can be pushed to PCF using the PHP buildpack. For practice, take a look at the getting-started guide to easily push PHP apps to Pivotal Cloud Foundry in under 5 minutes.


Saturday, July 28, 2018

Migrating WebLogic workloads to the cloud

# CaaS

0. Containerize the app on PAS ... zero changes to the app - research - WebLogic standalone, single inbound route; stateful workload => volume services on PCF; Docker images.
1. Containerize the app on PKS ... zero changes to the app - use existing Helm charts or operators to run on PKS - WebLogic clustered https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/design.md

# PaaS

2. Using a buildpack - the WebLogic buildpack https://github.com/pivotal-cf/weblogic-buildpack
3. Using a buildpack - the TomEE buildpack - runs JavaEE; YMMV if there is a lot of ORCL-specific deployment descriptor and API usage
4. Using the Java buildpack - Bootify the app and replace all EE framework usage with corresponding Spring components and frameworks - some changes - very well understood - and recipes exist for ALL mods; see EJB Migration
5. Using the Java buildpack - Package up the app as a fat app in WebLogic and then run it simply as a jar with the JBP - experimental, but could yield replatforming with zero changes; the same PAS restrictions apply

# FaaS

- Does not apply

# VM

7. BOSH to deploy WebLogic as a release - BOSH release for WebLogic
8. Ansible, Chef recipes ...

# Bare Metal

9. Run as-is today on servers with NO virtualization.

GKE on-prem, Azure on-prem, AWS on-prem

All the major clouds are coming to a datacenter near you*. You can get Google or Azure megascale in your own in-house datacenter. You can start taking advantage of the cloud platforms to improve and shift your workloads to the cloud and get the benefits of being cloud native. The cloud will be a force function for innovation and change in your organization, allowing you to finally throw off the yoke of agilefall or wagile or DAD or Scrum or whatever has kept you behind. Your software can eat the world. You will no longer be threatened by the millennials of Silicon Valley.

Wait a minute ... there is something missing here!! ...

Humans. Where is the human cloud? Where is the Cloud Native Transformation as a Service? Where is the Platform as a Product Service? Where is the Domain-Driven Design as a Service? Where is the SRE as a Service? Where is the Outside-in Product Thinking as a Service? Where is the Training as a Service? Where is the Incident Management as a Service? Where is the Busting the Canonical Domain Model as a Service? Where is the Decomposing Monoliths as a Service? Where is the Refactoring Databases as a Service? Where is the Decomposing UIs as a Service? Where is the Outcomes as a Service? Where is the Human as a Service?

Remember that the human aspect of the cloud is the biggest force-function enabler that will enforce the razor focus on outcomes rather than technology envy. As I sit here reflecting on all the technology choices passing through - microservices, serverless, Kubernetes, Cloud Foundry, the million choices from the CNCF - please remember the cloud revolution needs to be customized to your needs and not the other way around.

Pick and choose the boring technology choices available that get the job done and allow you to reach your objectives and key results. Do not forget that the ultimate goal is increased frequency of software releases, reduced change failure rate, and reduced lead and cycle time, so you can iterate fast and converge on the right outcome.


So yeah, please install PCF, GKE, AWS, OpenStack, VMware, Azure or whatever, but remember software is a means to the end, and it is the END that always has to be kept in mind, unless your core business is the business of software itself.

* I have money on AWS-on-prem being announced this ReInvent. 

Day 2 Application Operations

What is the set of practices you need to inculcate and internalize to keep your app alive in production? Here is a checklist that can help ...

1/ Use Spring Boot actuators in production … Apps Manager has secret sauce integrating with the actuator, allowing for easier management and visibility of your app on PCF via your actuator endpoints.

2/ When it makes sense, add custom health indicators that supplement the default health indicators, as sketched below.
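
Here is a minimal sketch of such an indicator; the legacy-gateway check and class name are illustrative:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// A custom indicator that Boot merges into the health endpoint alongside
// the built-in ones (datasource, disk space, ...). The downstream-gateway
// check is a hypothetical example of an app-specific dependency.
@Component
public class LegacyGatewayHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingLegacyGateway();
        return reachable
                ? Health.up().withDetail("gateway", "reachable").build()
                : Health.down().withDetail("gateway", "unreachable").build();
    }

    private boolean pingLegacyGateway() {
        // Hypothetical connectivity check to a downstream dependency.
        return true;
    }
}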

3/ Add Spring Cloud Sleuth to your classpath dependencies so that you can visualize the call flow across microservices with Zipkin in PCF Metrics http://docs.pivotal.io/pcf-metrics/1-4/using.html

4/ Plug a syslog drain into the end of the Firehose using the appropriate nozzle, like https://docs.pivotal.io/partners/splunk/index.html

5/ Emit application or domain-level metrics using Micrometer (see the sketch below) https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html
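
A small sketch of a domain-level Micrometer metric; the metric name, tag and service class are illustrative:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

// Registers a business counter against the auto-configured MeterRegistry;
// the metric can then be forwarded to PCF Metrics or another backend.
@Service
public class OrderService {

    private final Counter ordersPlaced;

    public OrderService(MeterRegistry registry) {
        this.ordersPlaced = Counter.builder("orders.placed")
                .tag("channel", "web")
                .description("Number of orders placed")
                .register(registry);
    }

    public void placeOrder() {
        // ... business logic ...
        ordersPlaced.increment();
    }
}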

6/ Starting with PCF 2.2 you can configure alerting and autoscaling on any custom app metric, with custom rules for autoscaling.

7/ Use VisualVM and sister JVM tools to get deeper insight, particularly for performance issues. Here is a good article on setting up VisualVM with an app running on Cloud Foundry. Furthermore, the ability to take thread dumps and heap dumps is critical in analyzing performance issues; specifically, see How to generate Java Application thread dump from Cloud Foundry container, How to generate and download Java Application heap dump from Cloud Foundry container, and How to know if an app is responsible for high CPU in a Diego Cell and how to find it.

8/ Leverage canaries to understand the quality of the app and then scale up with blue/green; see concourse-pipeline.

9/ Create an SLI/SLO dashboard to get immediate visibility into your error budgets and uptime (the error-budget arithmetic is sketched below).
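
To make the error-budget math concrete, here is a tiny sketch; the 99.9% target and 30-day window are illustrative:

// Error-budget arithmetic for an availability SLO.
public class ErrorBudget {
    public static void main(String[] args) {
        double slo = 0.999;                     // availability target
        double minutesInWindow = 30 * 24 * 60;  // 30-day window
        double budgetMinutes = (1 - slo) * minutesInWindow;
        // Prints: Allowed downtime: 43.2 minutes per 30 days
        System.out.printf("Allowed downtime: %.1f minutes per 30 days%n", budgetMinutes);
    }
}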



10/ Practice outage-handling events and run through debugging common scenarios like OOMs, hangs and deadlocks using cf ssh and Boot actuator endpoints. Create a program around incident response management.



Tuesday, February 13, 2018

Onboarding applications onto Pivotal Cloud Foundry

We often talk about reducing the barrier to entry for developers onto a new platform. Remember, your platform - PCF - is a product. One of the core advantages of PCF is how it catalyzes developer productivity. This productivity often comes from having an opinionated, prescriptive approach to developing apps. With the appropriate guardrails and productivity practices in play, enterprises can truly harness the power of the platform. In this blog post I put forth the developer workflow for developing and onboarding apps at the highest level of abstraction.

What should be the onboarding process for an app on PCF?

  1. The app developer logs into the app onboarding portal. Operators need to automate access to the PCF platform for developers in an enterprise using cf-mgmt.

  2. A customized version of start.spring.io for Java and .NET apps generates a running app bound to the relevant PCF services - Redis, Rabbit, MySQL, …. Fork this Spring Initializr repository to customize start.spring.io.
  3. Scaffolding: The portal generates a corresponding CI pipeline that deploys said app to the dev space. Reuse spring-cloud-pipelines, with minimal refactoring to comply with basic Spring Cloud Pipelines requirements. At the end of this stage, each app will have a corresponding pipeline on Concourse. The pipelines will successfully build the apps, store the artifacts in Bintray, tag the GitHub repositories, and deploy the apps to the Test, Stage, and Prod spaces in Cloud Foundry.
  4. The app generated by start.spring.io has appropriate tie-ins to CredHub and a config server for managing secrets and config, respectively.
  5. Security integration puts the app in the right authentication/authorization flows with Pivotal SSO or custom services.
  6. Iterate on app features, tracking the objectives and key results of the application.
  7. Testing: Add and organize tests to comply with Spring Cloud Pipelines recommendations. Incorporate Flyway for database schema versioning and initial data loading. At the end of this stage, the pipelines will trigger unit and integration tests during the Build stage, smoke tests in the Test environment, and end-to-end tests in the Stage environment. The pipelines will also ensure backward compatibility for the database, such that you can safely roll back the backend service app even after the database schema has been updated. The release workflow pipeline starts to promote the app into test and staging => blue/green deploy with tagging in Artifactory. Integrate with the ESB and other downstream integrations. Insert a release mgmt. gate here.
  8. Contracts: Incorporate Spring Cloud Contract to define the API between the UI and service apps and auto-generate tests and stubs. At the end of this stage, the pipelines will catch breaking API changes during the Build stage and ensure backward compatibility for the API, such that you can safely roll back the backend service (producer) app even after an API change.
  9. Pipeline to promote to prod. This may deploy to multiple foundations and has the capability to do canary and dark deploys and rollback. Insert a release mgmt. gate here.

References: