
Thursday, October 1, 2015

Chasing Cloud Foundry OutOfMemory Errors - OOM

If you are ever unfortunate enough to have to troubleshoot application, Java heap, or native-process OOM issues in Cloud Foundry, follow the playbook below to get to the root cause:

1. Include the attached script dump.sh at the root of your JAR/WAR file. You can edit the LOOP_WAIT variable in the script to configure how often it dumps the Java NMT info. I'd suggest somewhere between 5 and 30 seconds, depending on how long it takes for the problem to occur: if it happens quickly, go with a lower number; if it takes hours, go with something higher. A sketch of such a script is shown below.
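For reference, here is a minimal sketch of what such a dump.sh can look like, assuming jcmd is available in the droplet's JRE bin directory (it is, per the listing further below) and that the JVM PID can be found with ps; the attached script may differ in detail.

#!/bin/bash
# Minimal sketch of dump.sh -- the attached script may differ.
# Polls Java NMT on a fixed interval and writes the output to STDOUT,
# where it shows up in the application logs.

LOOP_WAIT=10   # seconds between NMT polls

# wait for the JVM to come up, then grab its PID (same trick as in step 8)
PID=""
while [ -z "$PID" ]; do
  sleep 5
  PID=$(ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}')
done

# jcmd ships in the buildpack's JRE bin directory
JCMD=$(find "$HOME" -name jcmd -type f | head -n 1)

# take the baseline once; every later poll is a diff against it
"$JCMD" "$PID" VM.native_memory baseline

while true; do
  echo "==== Java NMT $(date) ===="
  "$JCMD" "$PID" VM.native_memory summary.diff
  sleep "$LOOP_WAIT"
done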

2. Make a .profile.d directory, also in the root of the JAR/WAR file. For a detailed explanation of using .profile.d to profile native memory, check out this note from CF support engineer Daniel Mikusa.

3. In that directory, add this script.
#!/bin/bash
$HOME/dump.sh &

This script is run before the application starts. It launches the dump.sh script and backgrounds it. dump.sh will loop and poll the Java NMT stats, dumping them to STDOUT. As an example, see the simple-java-web-for-test application; there is also an accompanying load plan here.

4. Add the following parameters to JAVA_OPTS:
JAVA_OPTS: "-XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:./jvm-gc.log -XX:NativeMemoryTracking=detail"
-XX:NativeMemoryTracking enables native OOM analysis [1]; more on this later. -Xloggc pipes all GC output to a log that you can later analyze with a tool like PMAT or GCMV.
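One way to set this (a sketch; it assumes your Java buildpack version reads JAVA_OPTS from the environment and that your app is named app-name) is from the CLI, picked up on the next restage:

cf set-env app-name JAVA_OPTS "-XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:./jvm-gc.log -XX:NativeMemoryTracking=detail"
cf restage app-name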

5. Push or restage the application.

6. In a terminal, run cf logs app-name > app-name.log. That will dump the app logs and the Java NMT info to the file. Try to turn off as much application logging as possible, as this will make it easier to pick out the Java NMT dumps (see the grep sketch below).
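Once the log has been collected, something along these lines (a rough sketch; adjust the marker and the -A context window to whatever dump.sh actually prints) will pull just the NMT sections out of the noise:

# extract the NMT sections from the collected log; tune -A to the size of one dump
grep -A 40 "Native Memory Tracking" app-name.log > app-name-nmt.log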

7. Kick off the load tests.


The nice thing about Java NMT is that it takes a baseline snapshot of memory usage when it first runs, and every subsequent poll reports a diff against that baseline. This is helpful because we really only need the last Java NMT dump prior to the crash to know which parts of memory have grown and by how much. It also gives us insight into non-heap usage. Given the Java NMT info, it should be easier to make suggestions for tuning the JVM so that it doesn't exceed the memory limit of the application and cause a crash.
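If you want to poll the same information by hand, for example from an ssh/wsh session as in step 8 (run from the JRE bin directory, with the PID found as in step 8), the jcmd calls behind this baseline-and-diff behavior are:

# record the baseline once, right after startup
./jcmd $PID VM.native_memory baseline
# every later call reports the delta against that baseline, per memory category
./jcmd $PID VM.native_memory summary.diff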

8. If you have the ability to ssh into the container, use the following commands to trigger heap dumps and core dumps.
JVM process id:
PID=$(ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}')
Heap dump:
./jmap -dump:format=b,file=/home/vcap/app/test.hprof $PID
Core dump:
kill -6 $PID   # should produce a core file and leave the server running
Analyze these dumps using tools like Eclipse Memory Analyzer and IBM HeapDump Analyzer.
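A thread dump is often worth grabbing at the same time; a quick sketch using the same PID (jstack sits in the same JRE bin directory as jmap):

# thread dump with lock information, written next to the heap dump
./jstack -l $PID > /home/vcap/app/threads.txt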

9. If you have the ability to modify the application, enable the Spring Boot Actuator feature if it is a Boot app; otherwise, integrate a servlet or script like DumpServlet and HeapDumpServlet into the app.
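If the actuator route is exposed, its endpoints can also be polled over plain HTTP; a sketch assuming the default Spring Boot 1.x endpoint paths and a routable app URL (replace <app-route> with your app's route):

curl http://<app-route>/metrics   # memory, heap, GC and thread counters
curl http://<app-route>/dump      # thread dump as JSON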

Salient Notes:
- The memory statistic reported by CF in cf app is in fact used_memory_in_bytes, which is a summation of the RSS and the active and inactive caches ([2] and [3]). This is the number watched by the cgroup Linux OOM killer.
- The Cloud Foundry Java Buildpack by default sets the following parameter: -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh
Please do NOT be lulled into a false sense of security by this parameter. I have never seen it work in production. Your best bet is to be proactive about triggering and pulling dumps using servlets, JMX, kill commands, whatever works ...

References:

Tuesday, September 29, 2015

Spring Boot Actuator metrics collection in a spreadsheet

If your app is a Spring Boot app with the actuator enabled, you can use this nifty script from Greg Turnquist's Learning Spring Boot book, with some changes from me, to collect all the metrics in a CSV. A usage sketch follows the script.


package learningspringboot

@Grab("groovy-all")

import groovy.json.*

@EnableScheduling

class MetricsCollector {

    // actuator /metrics endpoint of the target app
    def url = "http://fizzbuzz.cfapps.io/metrics"

    def slurper = new JsonSlurper()
    // fetch the metric names once up front so the CSV columns stay stable across polls
    def keys = slurper.parse(new URL(url)).keySet()
    def header = false
    @Scheduled(fixedRate = 1000L)
    void run() {
        if (!header) {
            println(keys.join(','))
            header = true
        }

        def metrics = slurper.parse(new URL(url))


        println(keys.collect{metrics[it]}.join(','))

    }
}
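To run the collector and capture the CSV (a sketch; it assumes the Spring Boot CLI is installed and the script is saved as metricscollector.groovy):

spring run metricscollector.groovy > metrics.csv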


Monday, September 28, 2015

Debugging DEA Issues on Cloud Foundry - Halloween Edition

Ever wondered why Cloud Foundry does not support ssh'ing into the warden container? Such a feature could be useful in so many situations: TechOps, troubleshooting, debugging, etc. cf-ssh is coming to Cloud Foundry via Diego; however, until you deploy Diego in production you will need to live with the process below to ssh into a particular application instance's warden container.

# Step 1 - Find app guid

C:\Users\Administrator\workspace\FizzBuzz>cf app --guid fizzbuzz
90291bd7-ce52-43ee-aaa1-ed0405863c4a

# Step 2 - Find DEA IP address and port


C:\Users\Administrator\workspace\FizzBuzz>cf curl /v2/apps/90291bd7-ce52-43ee-aaa1-ed0405863c4a/stats
{
   "0": {
      "state": "RUNNING",
      "stats": {
         "name": "fizzbuzz",
         "uris": [
            "fizzbuzz.kelapure.cloud.pivotal.io"
         ],
         "host": "192.168.200.27",
         "port": 61015,
         "uptime": 1489673,
         "mem_quota": 1073741824,
         "disk_quota": 1073741824,
         "fds_quota": 16384,
         "usage": {
            "time": "2015-09-29 02:36:12 +0000",
            "cpu": 0.003104336638753874,
            "mem": 546877440,
            "disk": 187445248
         }
      }
   }
}
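As an aside, steps 1 and 2 can be combined into a single command (bash syntax; on Windows run the two commands separately as shown above):

cf curl /v2/apps/$(cf app --guid fizzbuzz)/stats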


# Step 3 - Locate the DEA job that maps to the DEA IP from the previous step

ubuntu@pivotal-ops-manager:~$ bosh vms
Acting as user 'director' on 'microbosh-38a3a7433db69fa7d159'
Deployment `cf-938e3d9bec67dbffeacc'

Director task 178

Task 178 done


ubuntu@pivotal-ops-manager:~$ bosh vms --details
Acting as user 'director' on 'microbosh-38a3a7433db69fa7d159'
Deployment `cf-938e3d9bec67dbffeacc'

Director task 179

Task 179 done

+----------------------------------------------------------------+---------+--------------------------------------------------------------+----------------+-----------------------------------------+--------------------------------------+--------------+
| Job/index                                                      | State   | Resource Pool                                                | IPs            | CID                                     | Agent ID                             | Resurrection |
+----------------------------------------------------------------+---------+--------------------------------------------------------------+----------------+-----------------------------------------+--------------------------------------+--------------+
| ccdb-partition-0d7a243620d08147fd3a/0                          | running | ccdb-partition-0d7a243620d08147fd3a                          | 192.168.200.15 | vm-df650acf-88e9-4b1a-b68e-a2ff11b47a65 | 1e315067-aef5-4f5d-ad00-50b492f98085 | active       |
| clock_global-partition-0d7a243620d08147fd3a/0                  | running | clock_global-partition-0d7a243620d08147fd3a                  | 192.168.200.22 | vm-f905d0b1-d4f4-4646-826b-6255e19fc0f0 | 992e57f7-003b-42f2-9818-3db5d4a64402 | active       |
| cloud_controller-partition-0d7a243620d08147fd3a/0              | running | cloud_controller-partition-0d7a243620d08147fd3a              | 192.168.200.18 | vm-29aeeadb-b9c3-4ca2-ab32-49573e957697 | 413a5a5f-06f4-4359-9074-13d3bbd11e35 | active       |
| cloud_controller_worker-partition-0d7a243620d08147fd3a/0       | running | cloud_controller_worker-partition-0d7a243620d08147fd3a       | 192.168.200.23 | vm-ded90340-de9f-4d15-8ac4-540669395d8a | 300814cb-6f23-4068-86ba-b6569bedf259 | active       |
| consoledb-partition-0d7a243620d08147fd3a/0                     | running | consoledb-partition-0d7a243620d08147fd3a                     | 192.168.200.17 | vm-9ef7f4fe-f6a9-4bab-89cb-0942d499e40c | bef26f14-cb91-4bbe-bc1d-3d2d27368ba6 | active       |
| consul_server-partition-0d7a243620d08147fd3a/0                 | running | consul_server-partition-0d7a243620d08147fd3a                 | 192.168.200.12 | vm-e57e77dd-25c6-4d3f-a440-4dd0b31d28f2 | af13ba63-068f-4ee8-bd8a-48b02438767e | active       |
| dea-partition-0d7a243620d08147fd3a/0                           | running | dea-partition-0d7a243620d08147fd3a                           | 192.168.200.27 | vm-699ed235-f7f4-4094-a92e-963ef726b1d6 | 437bbc76-7d1c-4ce9-916d-4e6a6355537f | active       |
| dea-partition-ee97ca1101e7cc2c048a/0                           | running | dea-partition-ee97ca1101e7cc2c048a                           | 192.168.200.28 | vm-91378b2c-7592-4b14-9f73-07c57043fe75 | 561899ad-00d4-4eec-a7f3-a1e8e9c84e10 | active       |
| doppler-partition-0d7a243620d08147fd3a/0                       | running | doppler-partition-0d7a243620d08147fd3a                       | 192.168.200.29 | vm-7ca4b286-13d9-4d84-aec0-8a1fcdf81cae | 4f243e01-2c89-4f77-8a37-31b8b0c5f1c8 | active       |
| doppler-partition-ee97ca1101e7cc2c048a/0                       | running | doppler-partition-ee97ca1101e7cc2c048a                       | 192.168.200.30 | vm-150f7c12-2bf9-4da3-bfec-8e342051a203 | f454702e-db36-4e37-a02d-3c44e1c57822 | active       |
| etcd_server-partition-0d7a243620d08147fd3a/0                   | running | etcd_server-partition-0d7a243620d08147fd3a                   | 192.168.200.13 | vm-fe166f4f-e060-407d-8ba1-74d50f17e22e | 1ea07f41-aa95-4ef0-9a3a-8a7a90886b1c | active       |
| ha_proxy-partition-0d7a243620d08147fd3a/0                      | running | ha_proxy-partition-0d7a243620d08147fd3a                      | 192.168.200.20 | vm-1fb87e81-d1cd-4abe-82fc-91bddf9e99dc | 2d4a5048-9a9d-4a8f-bf34-0967b8f0bbf5 | active       |
| health_manager-partition-0d7a243620d08147fd3a/0                | running | health_manager-partition-0d7a243620d08147fd3a                | 192.168.200.21 | vm-9e84e893-22a2-45c4-8f49-bcd65013494a | 7e4ce9e1-d5e9-4ec8-9ba0-2b9885a20fd5 | active       |
| loggregator_trafficcontroller-partition-0d7a243620d08147fd3a/0 | running | loggregator_trafficcontroller-partition-0d7a243620d08147fd3a | 192.168.200.31 | vm-f3d7075b-f3b6-4ea0-8307-590ee63e7bf3 | 7bf8256d-903f-4ce0-b56e-8f88c00c80ea | active       |
| loggregator_trafficcontroller-partition-ee97ca1101e7cc2c048a/0 | running | loggregator_trafficcontroller-partition-ee97ca1101e7cc2c048a | 192.168.200.32 | vm-f4ed6afa-2073-47b0-8ae3-d9a08e85a209 | 0df8b5e8-e539-471f-ac0d-4e6d68c79c09 | active       |
| mysql-partition-0d7a243620d08147fd3a/0                         | running | mysql-partition-0d7a243620d08147fd3a                         | 192.168.200.26 | vm-a73374e9-ffd8-4cd3-8bb4-519d35778143 | c4da718c-0964-4004-82b8-58c3341e8cb9 | active       |
| mysql_proxy-partition-0d7a243620d08147fd3a/0                   | running | mysql_proxy-partition-0d7a243620d08147fd3a                   | 192.168.200.25 | vm-e94670b3-3754-40f6-b86b-3dcf88d8542e | 06a3c9ae-3e64-4eec-82e7-0c91be97e81c | active       |
| nats-partition-0d7a243620d08147fd3a/0                          | running | nats-partition-0d7a243620d08147fd3a                          | 192.168.200.11 | vm-9b9855f5-3e5f-4de8-872b-347fbc56c984 | 4e621e1a-718a-4c55-931f-2c7b68504d80 | active       |
| nfs_server-partition-0d7a243620d08147fd3a/0                    | running | nfs_server-partition-0d7a243620d08147fd3a                    | 192.168.200.14 | vm-a28bad65-4fac-40ac-a383-4b30725966e1 | f02359a8-5c03-4942-ab6c-5ac32c536d6a | active       |
| router-partition-0d7a243620d08147fd3a/0                        | running | router-partition-0d7a243620d08147fd3a                        | 192.168.200.19 | vm-5b00e6db-669d-4115-b786-5c0b6c5b6c78 | 82111c58-b944-4099-8f56-5ec6b195e54a | active       |
| uaa-partition-0d7a243620d08147fd3a/0                           | running | uaa-partition-0d7a243620d08147fd3a                           | 192.168.200.24 | vm-aab6ab86-3eca-490d-aa7a-9d720b057721 | 5e95d2f3-f871-4a0c-ae81-ee1effb5fa1a | active       |
| uaadb-partition-0d7a243620d08147fd3a/0                         | running | uaadb-partition-0d7a243620d08147fd3a                         | 192.168.200.16 | vm-a472fe4e-40d8-4c3f-a95a-e6e8e521cf86 | 336af737-8bc1-4f62-bd40-ae3b63f85637 | active       |
+----------------------------------------------------------------+---------+--------------------------------------------------------------+----------------+-----------------------------------------+--------------------------------------+--------------+

VMs total: 22

In our case the DEA job is *dea-partition-0d7a243620d08147fd3a/0*, since its IP (192.168.200.27) matches the host reported in step 2. A quick grep for it is shown below.
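bosh vms --details | grep 192.168.200.27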


# Step 4 - Log into the DEA VM

ubuntu@pivotal-ops-manager:~$ bosh ssh dea-partition-0d7a243620d08147fd3a/0 --public_key y.pub
Acting as user 'director' on deployment 'cf-938e3d9bec67dbffeacc' on 'microbosh-38a3a7433db69fa7d159'
Enter password (use it to sudo on remote host): ********
Target deployment is `cf-938e3d9bec67dbffeacc'

See https://github.com/cloudfoundry/bosh-lite/issues/134 if you get stuck on bosh ssh.


# Step 5 - Locate the warden container path


If you grep for `fizzbuzz` and the instance port gleaned from step 2 (61015) in the instances.json listing below, you will find the matching container path (a grep one-liner is sketched after the listing):

"warden_container_path": "/var/vcap/data/warden/depot/18tlhc59enu",

bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot$ sudo cat /var/vcap/data/dea_next/db/instances.json

[sudo] password for bosh_yms06qjkj:
{
  "time": 1443494275.6732097,
  "instances": [

    {
      "cc_partition": "default",
      "instance_id": "07a4a39d7ec14249863303246c73dfa2",
      "instance_index": 0,
      "private_instance_id": "5e15175c18644561828e52534b7c7a71fad5773f13b045058eaed76793618b1f",
      "warden_handle": "18tlhc59erj",
      "limits": {
        "mem": 512,
        "disk": 1024,
        "fds": 16384
      },
      "health_check_timeout": null,
      "environment": {
        "CF_PROCESS_TYPE": "web"
      },
      "services": [

      ],
      "application_id": "093a9e11-0b06-45f1-b3c4-e801ad0aca81",
      "application_version": "2febead7-c0fe-43a5-b4a4-724c5581c037",
      "application_name": "tmfnodetest",
      "application_uris": [
        "tmfnodetest.kelapure.cloud.pivotal.io"
      ],
      "droplet_sha1": "fcba91c3099ba1218febfcb043d3792ccc9bba97",
      "droplet_uri": null,
      "start_command": null,
      "state": "RUNNING",
      "warden_job_id": 461,
      "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59erj",
      "warden_host_ip": "10.254.2.41",
      "warden_container_ip": "10.254.2.42",
      "instance_host_port": 61127,
      "instance_container_port": 61127,
      "syslog_drain_urls": [

      ],
      "state_starting_timestamp": 1442435859.1693246
    },

    {
      "cc_partition": "default",
      "instance_id": "9383f43f0e3549ad9c29c511c5f4211e",
      "instance_index": 0,
      "private_instance_id": "23065b9faddb43d2ae296802c1db8cbdbf2a2d00dd2343849c8825bcfc7eb044",
      "warden_handle": "18tlhc59enu",
      "limits": {
        "mem": 1024,
        "disk": 1024,
        "fds": 16384
      },
      "health_check_timeout": null,
      "environment": {
        "CF_PROCESS_TYPE": "web"
      },
      "services": [
        {
          "credentials": {
            "agent-name": "nginx-hello",
            "host-name": "ca-apm.springapps.io",
            "port": "5001"
          },
          "options": {

          },
          "syslog_drain_url": "",
          "label": "user-provided",
          "name": "ca_apm_10",
          "tags": [

          ]
        }
      ],
      "application_id": "90291bd7-ce52-43ee-aaa1-ed0405863c4a",
      "application_version": "09f093ae-c9e8-46ba-89c8-e56d7b84b671",
      "application_name": "fizzbuzz",
      "application_uris": [
        "fizzbuzz.kelapure.cloud.pivotal.io"
      ],
      "droplet_sha1": "e5fc94cc79eb72489e94c6b620887c5b72244b89",
      "droplet_uri": "http://staging_upload_user:c601ae7f5ae3745d40ee@192.168.200.18:9022/staging/droplets/90291bd7-ce52-43ee-aaa1-ed0405863c4a/download",
      "start_command": null,
      "state": "RUNNING",
      "warden_job_id": 92,
      "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59enu",
      "warden_host_ip": "10.254.0.85",
      "warden_container_ip": "10.254.0.86",
      "instance_host_port": 61015,
      "instance_container_port": 61015,
      "syslog_drain_urls": [
        ""
      ],
      "state_starting_timestamp": 1442004499.0728624

    },

    {
      "cc_partition": "default",
      "instance_id": "fa057152a7094ee6bfd0cd28c8cb76dc",
      "instance_index": 0,
      "private_instance_id": "18b2515c046742ae979fbfa58428120608960bc04f7444759983d1c91621e7da",
      "warden_handle": "18tlhc59f3v",
      "limits": {
        "mem": 1024,
        "disk": 1024,
        "fds": 16384
      },
      "health_check_timeout": null,
      "environment": {
        "CF_PROCESS_TYPE": "web"
      },
      "services": [

      ],
      "application_id": "d9969088-1f7b-40b3-a048-c71814d172c4",
      "application_version": "431e7f3d-774b-4c24-ad4e-c6fda884dab1",
      "application_name": "spring-music",
      "application_uris": [
        "spring-music.kelapure.cloud.pivotal.io"
      ],
      "droplet_sha1": "9e35ddfd97a7c1a32e2a69d5e1c5f90c6d4b7e06",
      "droplet_uri": "http://staging_upload_user:c601ae7f5ae3745d40ee@192.168.200.18:9022/staging/droplets/d9969088-1f7b-40b3-a048-c71814d172c4/download",
      "start_command": null,
      "state": "CRASHED",
      "warden_job_id": 1265,
      "warden_container_path": "/var/vcap/data/warden/depot/18tlhc59f3v",
      "warden_host_ip": "10.254.2.113",
      "warden_container_ip": "10.254.2.114",
      "instance_host_port": 61395,
      "instance_container_port": 61395,
      "syslog_drain_urls": [

      ],
      "state_starting_timestamp": 1443494262.4416354
    }
  ],
  "staging_tasks": [

  ]
}
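The grep mentioned above can be scripted; a sketch (the -A window just has to be large enough to reach the warden_container_path field in your instances.json):

sudo grep -A 12 '"application_name": "fizzbuzz"' /var/vcap/data/dea_next/db/instances.json | grep warden_container_path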

# Step 6 - wsh into the warden container

bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot$ cd 18tlhc59enu

bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot/18tlhc59enu$ ls
bin  destroy.sh  etc  jobs  lib  mnt  net_rate.sh  net.sh  run  setup.sh  snapshot.json  start.sh  stop.sh  tmp

bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot/18tlhc59enu$ sudo ./bin/wsh
wsh   wshd  

bosh_yms06qjkj@437bbc76-7d1c-4ce9-916d-4e6a6355537f:/var/vcap/data/warden/depot/18tlhc59enu$ sudo ./bin/wsh
root@18tlhc59enu:~# ls
firstboot.sh
root@18tlhc59enu:~# cd /home
root@18tlhc59enu:/home# cd vcap/
root@18tlhc59enu:/home/vcap# ls
app  logs  run.pid  staging_info.yml  tmp

root@18tlhc59enu:/home/vcap# ll
total 40
drwxr-xr-x 5 vcap vcap 4096 Sep 11 20:48 ./
drwxr-xr-x 3 root root 4096 Sep 11 20:48 ../
drwxr--r-- 7 vcap vcap 4096 Sep 11 20:47 app/
-rw-r--r-- 1 vcap vcap  220 Apr  9  2014 .bash_logout
-rw-r--r-- 1 vcap vcap 3637 Apr  9  2014 .bashrc
drwxr-xr-x 2 vcap vcap 4096 Sep 11 20:47 logs/
-rw-r--r-- 1 vcap vcap  675 Apr  9  2014 .profile
-rw------- 1 vcap vcap    3 Sep 11 20:48 run.pid
-rw-r--r-- 1 vcap vcap 2000 Sep 11 20:47 staging_info.yml
drwxr-xr-x 3 vcap vcap 4096 Sep 11 20:48 tmp/
root@18tlhc59enu:/home/vcap#



# Step 7 - Now you are free to ftp files out or take heap dumps or thread-dumps

For instance, to take a heap dump:
root@18tlhc59enu:/home/vcap/app/.java-buildpack/open_jdk_jre/bin# ll
total 384
drwxr-xr-x 2 vcap vcap   4096 Sep 11 20:47 ./
drwxr-xr-x 5 vcap vcap   4096 Sep 11 20:47 ../
-rwxr-xr-x 1 vcap vcap   8798 Jul 16 09:29 java*
-rwxr-xr-x 1 vcap vcap   8909 Jul 16 09:29 jcmd*
-rwxr-xr-x 1 vcap vcap   8909 Jul 16 09:29 jjs*
-rwxr-xr-x 1 vcap vcap   8973 Jul 16 09:29 jmap*
-rwxr-xr-x 1 vcap vcap   8981 Jul 16 09:29 jstack*
-rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 keytool*
-rwxr-xr-x 1 vcap vcap   1146 Sep 11 20:47 killjava.sh*
-rwxr-xr-x 1 vcap vcap   8981 Jul 16 09:29 orbd*
-rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 pack200*
-rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 policytool*
-rwxr-xr-x 1 vcap vcap   8909 Jul 16 09:29 rmid*
-rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 rmiregistry*
-rwxr-xr-x 1 vcap vcap   8917 Jul 16 09:29 servertool*
-rwxr-xr-x 1 vcap vcap   8989 Jul 16 09:29 tnameserv*
-rwxr-xr-x 1 vcap vcap 217462 Jul 16 09:29 unpack200*

root@18tlhc59enu:/home/vcap/app/.java-buildpack/open_jdk_jre/bin# su vcap

vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ PID=` ps -ef | grep java | grep -v "bash\|grep" | awk '{print $2}'`
vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ echo $PID
29

vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ ./jmap -dump:format=b,file=/home/vcap/app/test.hprof $PID
Dumping heap to /home/vcap/app/test.hprof ...
Heap dump file created

vcap@18tlhc59enu:~/app/.java-buildpack/open_jdk_jre/bin$ ls -al /home/vcap/app
total 230616
drwxr--r-- 7 vcap vcap      4096 Sep 29 03:06 .
drwxr-xr-x 5 vcap vcap      4096 Sep 11 20:48 ..
drwxr-xr-x 5 vcap vcap      4096 Sep 11 20:47 .java-buildpack
-rw-r--r-- 1 vcap vcap     82155 Sep 11 20:47 .java-buildpack.log
drwxr--r-- 3 vcap vcap      4096 Sep 11 20:46 META-INF
drwxr--r-- 3 vcap vcap      4096 Sep 11 20:46 my-resources
drwxr--r-- 3 vcap vcap      4096 Sep 11 20:46 org
-rw------- 1 vcap vcap 236032777 Sep 29 03:06 test.hprof
drwxr--r-- 5 vcap vcap      4096 Sep 11 20:46 WEB-INF