Top 10 Tips on Getting Observability Right

This was featured as the Capacitas Thought of the Week, and as I was the author I thought I would capture it on my own blog! See the original here: https://www.capacitas.co.uk/insights/top-10-tips-on-getting-cloud-observability-right

  1. A tool alone does not give you observability. After buying one, build an ecosystem of people and integrations around it so that it delivers real value.
  2. Make sure the tool fits your technology stack (ok, this may seem obvious but be careful as the tool needs to work with legacy tech as well as the new and shiny tech).
  3. Spend time configuring the tool so that what it shows is readable and relevant to its users, e.g. rename IIS application pool AGKP_04520756 to UKAGKPPortal.
  4. Remove noise from the tool, e.g. if volume /dev/u001/fred is always full, configure the tool not to show it in red!
  5. Try to avoid alerting on the underlying infrastructure; it doesn’t matter what your hardware is doing, it is your users’ experience that matters. Set alerts at the UX level.
  6. Configure alerts that align with the business. For example, set alerts for critical user transactions such as time to generate a quote or complete payment rather than a generic alert across all user transactions.
  7. Decide who the consumers of the tool are and work with them to make sure they are trained; the tool should be configured for their needs. Don’t just hand them a dashboard (that is giving them a fish); train them to use the tool for themselves (that is teaching them how to fish)!
  8. These tools do love data sources. The more you give them, the more likely you are to correlate the source of problems, e.g. poor performance could be due to waiting for a VM to be scheduled on the hypervisor, and you may never know this unless you are monitoring the hypervisor. Of course, the more you monitor, the more you pay!
  9. Don’t be afraid to have a dedicated monitoring team, but ensure they are skilled in using the tool and not just configuring it, i.e. when production goes down, they are in the thick of it trying to resolve the issue.
  10. Keep an eye on the costs!

Dynatrace Performance Troubleshooting Example

They do say the exception proves the rule. In this example I deviate from how I normally troubleshoot a problem, but it was a bit of an odd problem!

One of the account managers I work with was getting worried. They use data from Dynatrace to calculate an SLA metric for end user response times, and in recent months the response time for a key transaction had jumped significantly, pushing the value into the red. The SLA reports the 95th percentile. So, I was asked to have a look to see what the problem could be.

Looking at the response time for the transaction, the median was very good at 350ms, so I switched to the slowest 10% (the 90th percentile) and that was just over one minute. So, the next step was to look at the waterfall diagram for one of the slow pages. The waterfall is shown in the graphic below:

The waterfall shows that the delay is in the OnLoad processing. Typically, OnLoad processing is the execution of JavaScript after the page has loaded.

Normally, I would then try to recreate the problem myself and profile the JavaScript with the browser developer tools, but that takes time and I still had some more digging to do in Dynatrace. Next, I wanted to see how this looked for a user's session, and I noticed something odd. Here is the session for one user.

There are a few things of interest:

(1) The user is only using the problem page/transaction

(2) The page is called every minute (this suggests it is called automatically rather than initiated by the user)

(3) The response time is good and then goes bad, and the bad value is pretty consistent

I was surprised that a page this slow didn’t already have users complaining about it, so I decided to see if I knew any of the users so I could check with them directly. As it happens, I recognized one of the users, gave them a call, and the mystery was explained.

The config.aspx page is a status page used by certain users. It transpires that it is accessed from a desktop device within the data center, which means users have to remote desktop onto that machine from their laptops (don’t ask why). We did some live tests and discovered the slow calls occurred when the browser was left open and the remote desktop session went into sleep mode! So when we see slow pages there isn’t a real user at the end waiting for a response.

A quick chat with the account manager and we agreed that the page should be excluded from the SLA calculation. Problem sorted!
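
As a footnote on why a handful of automated calls mattered so much: the SLA reports the 95th percentile, so once more than 5% of the recorded calls are the slow, automated ones, the reported figure jumps from the healthy median straight to the worst-case value. The sketch below uses illustrative numbers only (not the customer's data) and a simple nearest-rank percentile rather than Dynatrace's exact calculation.

using System;
using System.Linq;

class PercentileDemo
{
    // Nearest-rank percentile: the value at or below which p percent of the sorted samples fall.
    static double Percentile(double[] sortedSamples, double p)
    {
        int rank = (int)Math.Ceiling(p / 100.0 * sortedSamples.Length);
        return sortedSamples[Math.Max(rank, 1) - 1];
    }

    static void Main()
    {
        // Illustrative data: 88 "real" requests at ~350 ms and 12 automated requests at ~60,000 ms.
        double[] samples = Enumerable.Repeat(350.0, 88)
                                     .Concat(Enumerable.Repeat(60000.0, 12))
                                     .OrderBy(x => x)
                                     .ToArray();

        Console.WriteLine(Percentile(samples, 50));  // 350   - the median still looks healthy
        Console.WriteLine(Percentile(samples, 90));  // 60000 - the slowest 10% are the automated calls
        Console.WriteLine(Percentile(samples, 95));  // 60000 - and this is the figure the SLA reports
    }
}

Excluding the automated config.aspx calls from the calculation brings the 95th percentile back in line with what real users actually experience.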

Performance Monitoring: Choosing a Sampling Period

I was recently visiting the Tate St Ives art gallery (I know, I am so cultured/it was raining) and came across a work called Measuring Niagara with a Teaspoon by Cornelia Parker. The blurb explains that the work is based on her interest in the absurd task of measuring an enormous object with such a small one, in this case Niagara Falls with a teaspoon.

So, as a performance engineer, this got me thinking about how we choose the sampling period when collecting performance data. We generally believe that the more granular the data (the shorter the sampling period) the better, as detail is lost when data is sampled over longer periods. For example, the graphic below shows CPU utilisation sampled at different periods, and as you can see, the odd behavior of the CPU does not become apparent until we sample every second.

However, the shorter the sampling period, the more data we collect and have to analyse. There are often times when we don’t have the luxury of being able to collect data with a really short sampling period, in which case how do we choose the best sampling period?

For me, it comes down to what I am looking for in the data. For example, suppose users are complaining about an intermittent problem where, for about 15 minutes during the day, response time is really slow across all user transactions. The time delay in the incident reporting means I can’t rely on a circular buffer storing a couple of hours of highly sampled data that is flushed just after an incident. So I need to collect data across the whole day, and I would live with a sampling period of about 5 to 10 minutes. That gives me enough data to correlate changes in resource usage with the times users reported performance issues.

Another example would be looking for the reasons behind slow response times. Say we are looking for the reason a response time has increased to over 6 seconds; in that case the longest sampling period I would accept would be 3 seconds.

You can see that in most cases the rule I apply is to sample at no more than half the duration of the period of interest; ideally I would go to a third of the period of interest. As they say, you need three data points to draw a curve!

There is possibly some science behind this. The Nyquist theorem, used in signal processing, says you need to sample at at least twice the highest frequency of the analogue signal you are converting. There is a bit of maths to prove the theorem, but intuitively it makes sense: you need at least two data points within the period of interest to detect a noticeable change in the system you are sampling.
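
To make the point concrete, here is a minimal sketch (illustrative numbers only): it generates a day of one-minute CPU data containing a single 15-minute spike, then re-samples it at longer periods by averaging each window. The longer the sampling period, the more the spike is diluted and the easier it is to miss.

using System;
using System.Linq;

class SamplingPeriodDemo
{
    static void Main()
    {
        // One day of "true" CPU utilisation at 1-minute resolution:
        // 20% background load with a 15-minute spike to 95% starting at minute 600.
        double[] cpuPerMinute = Enumerable.Range(0, 1440)
            .Select(minute => (minute >= 600 && minute < 615) ? 95.0 : 20.0)
            .ToArray();

        // Re-sample by averaging each window and report the worst value each period would have shown.
        foreach (int periodMinutes in new[] { 5, 15, 30, 60 })
        {
            double worstSample = Enumerable.Range(0, 1440 / periodMinutes)
                .Select(w => cpuPerMinute.Skip(w * periodMinutes).Take(periodMinutes).Average())
                .Max();
            Console.WriteLine("{0,2}-minute samples: worst value seen = {1:F0}%", periodMinutes, worstSample);
        }
        // 5- and 15-minute samples still show the spike at (or close to) its true 95% value,
        // but at 30 and 60 minutes it is averaged down to roughly 58% and 39% and easy to dismiss.
    }
}

In Nyquist terms, a 5-minute period samples the 15-minute event comfortably more than twice, whereas the 30- and 60-minute periods do not.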

How to Manage Complexity (Martin Thompson) Interview

While some people provide the odd cartoon at Christmas, Dave Farley provides something more substantial this festive season. I always like to listen to Martin Thompson on any of his performance work. This interview is more general than purely performance-related work, but it is still worth taking the time to listen. I also found it very interesting to hear that his consulting engagements are moving away from improving response time towards improving efficiency.

To me, the takeaways from this interview are listed below:

(1) Keep things simple

(2) Concentrate on the business logic

(3) Work in small steps, get feedback and make small incremental changes

(4) Most performance problems he sees are systemic design flaws

(5) Measure and understand (model) the system you are trying to improve, and compare the measurements against that model of what should be achievable

(6) Separation of concerns in the design will lead to simpler code (“one class, one thing; one method, one thing”) – see the small sketch after this list

(7) If you haven’t heard about it, read up on Mechanical Sympathy: https://dzone.com/articles/mechanical-sympathy
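
On point (6), here is a deliberately trivial, hypothetical C# sketch of what “one class, one thing; one method, one thing” looks like in practice: instead of one class that parses, validates and saves an order, each concern gets its own small, independently testable class (the Order, OrderParser and OrderValidator names are purely for illustration).

public class Order
{
    public string Customer { get; set; }
    public decimal Amount { get; set; }
}

// Parsing is one thing...
public class OrderParser
{
    public Order Parse(string raw)
    {
        string[] parts = raw.Split(',');
        return new Order { Customer = parts[0], Amount = decimal.Parse(parts[1]) };
    }
}

// ...validation is another. Neither class knows, or cares, how the other works.
public class OrderValidator
{
    public bool IsValid(Order order)
    {
        return order.Amount > 0 && !string.IsNullOrEmpty(order.Customer);
    }
}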

Response Time Blip When the System is Quiet

Working with a customer using Dynatrace, there were some noticeable peaks in response times for an application that wasn’t highly used, as can be seen in the graph below:

The application was web based and hosted on IIS. I had come across problems like this before and suspected it was down to the idle time-out setting for the application pool. By default, IIS will shut down a worker process after it has been idle for 20 minutes; however, for some applications the time to restart a worker process can be noticeably slow.

If you set the idle Time-out to 0 the worker process will not be terminated and you won’t have to suffer the restart overhead. I have made this change several times and not seen any adverse effects, but remember all applications are different!
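
For reference, one way to make the change programmatically is via the Microsoft.Web.Administration API (the same setting appears as “Idle Time-out (minutes)” in the application pool’s advanced settings in IIS Manager). This is a minimal sketch only: it assumes the Microsoft.Web.Administration assembly is referenced and the code runs elevated on the IIS server, and “MyAppPool” is a placeholder for the real pool name.

using System;
using Microsoft.Web.Administration;

class DisableIdleTimeout
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            // "MyAppPool" is a placeholder - use the name of the affected application pool.
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

            // A TimeSpan of zero means the worker process is never shut down for being idle.
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

            serverManager.CommitChanges();
        }
    }
}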

Troubleshooting System.OutOfMemoryException

It started when I got an email from the service manager: "Users of an application we host are intermittently failing to log in during the peak login period in the morning. The same users have no problem logging in at other times, and no particular group of users or locations has been identified. The users are accessing via Citrix.

The support team have been trying to reproduce the error by logging in multiple times and they cannot reproduce it consistently. They even had 30-40 sessions active on the test box and could not reproduce it, and the test box has available memory."

The first thing I wanted was to see the actual error seen by the users, and any logs. The support folks sent these through, and the error came with a stack trace which had this at the bottom:

at System.Text.StringBuilder..ctor(String value, Int32 startIndex, Int32 length, Int32 capacity)

Exception of type ‘System.OutOfMemoryException’ was thrown.

What the StringBuilder constructor is doing is allocating space to hold a string. You can get an OutOfMemoryException from a straightforward programming error, for example asking for far more capacity than makes sense, but that kind of error would show up at any time of day, not just at the peak; as this is the login process, the allocation should be pretty consistent from one login to the next.

I think the issue here is to do with the limit on virtual memory rather than physical memory. I mocked up a .NET 4.5 app on my PC that grabs 0.5 GB of string storage after each key press. Here is a code snippet:

// Reserve space for 250,000,000 characters: roughly 0.5 GB of UTF-16 string storage.
int capacity = 250000000;
int maxCapacity = capacity + 1024;
StringBuilder stringBuilder1 = new StringBuilder(capacity, maxCapacity);

Console.WriteLine(GC.GetTotalMemory(false)); // rough size of the managed heap after the allocation
Console.ReadKey(true);                       // wait for a key press before grabbing the next 0.5 GB

There were two scenarios I found that generated the OutOfMemoryException:

Exceeding the Process Address Space

The test app process is limited to 4 GB of address space, and when that is exceeded I get the out of memory error. However, I didn’t think this was the issue here, as the problem only occurs during heavy usage, and a per-process limit would bite regardless of how busy the rest of the box is.
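
A quick way to rule this in or out is to check the bitness of the process and how much of its address space it is actually using; a minimal sketch:

using System;
using System.Diagnostics;

class AddressSpaceCheck
{
    static void Main()
    {
        // A 32-bit process has at most 2-4 GB of address space (4 GB if it is large-address-aware
        // and running on 64-bit Windows); a 64-bit process is effectively bounded by the commit limit.
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);

        long virtualMb = Process.GetCurrentProcess().VirtualMemorySize64 / (1024 * 1024);
        Console.WriteLine("Virtual memory in use: " + virtualMb + " MB");
    }
}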

The commit limit is exceeded

The commit limit is the total amount of memory available from physical RAM plus the page file. I got the error when the perfmon counter "% Committed Bytes In Use" reached 100%, as the process could not find any spare memory for the next 0.5 GB of string storage. Below you can see when the issue occurs:

The answer was simply to increase the commit limit, which means increasing the paging file size.
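
If you want to see this coming rather than waiting for the exception, the same counter can be watched outside perfmon. Here is a minimal sketch using the System.Diagnostics performance counter API (the sampling interval and output format are arbitrary choices):

using System;
using System.Diagnostics;
using System.Threading;

class CommitMonitor
{
    static void Main()
    {
        // System-wide counter: committed memory as a percentage of the commit limit (RAM + page file).
        var committedInUse = new PerformanceCounter("Memory", "% Committed Bytes In Use");

        while (true)
        {
            // As this approaches 100%, new allocations (like the 0.5 GB string buffers above) start to fail.
            Console.WriteLine("% Committed Bytes In Use: {0:F1}", committedInUse.NextValue());
            Thread.Sleep(5000); // sample every 5 seconds
        }
    }
}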

Performance Troubleshooting Example (#2)

Sometimes in my job I have to troubleshoot some old technology. This is a case where I got a call saying that an application running in a CGI process, spawned from an IIS website, was running a lot slower than the same application in a test environment. This was for a single iteration of a request to the application. The customer was keen to find out why, given that the test environment was supposedly identical to production, there was such a performance difference. Luckily, we could take one of the IIS servers out of production so that we could use it for testing.

To start with, we ran some benchmark tests on the machine using tools like IOmeter and zCPU just to confirm the raw compute power was the same. As a separate thread, various teams were checking to make sure the two environments were identical, not only in terms of hardware and OS but also security, IIS config and so on. They all came back to say the production and test environments were identical. Next, we wanted to see if the problem was in the application itself or in the invocation of the application. Someone knocked up a CGI "Hello World" application which we tested in test and production. This showed that the production environment was slower running the "Hello World" application, and therefore the problem was around the invocation of the CGI process. Next, I wrote a C# program that invoked a simple process and timed it. Running this in both test and production showed no difference between the two environments, which pointed to the problem being specific to the IIS/CGI invocation.
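
The timing program was nothing more sophisticated than spawning a trivial process and timing how long it took from start to exit, something along these lines (a sketch; the "cmd /c exit" payload is a stand-in for the simple executable we actually used):

using System;
using System.Diagnostics;

class SpawnTimer
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();

        // Spawn a trivial process and wait for it to finish.
        var startInfo = new ProcessStartInfo
        {
            FileName = "cmd.exe",
            Arguments = "/c exit",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (Process process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }

        stopwatch.Stop();
        Console.WriteLine("Process start-to-exit: " + stopwatch.ElapsedMilliseconds + " ms");
    }
}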

The next step was to take a Process Monitor trace (Process Monitor – Windows Sysinternals | Microsoft Docs). The first thing we noticed was that the trace was significantly larger for production. The production trace showed multiple calls to Symantec endpoint security before the start of the LoadImage event. At the start this only added about a second before the image load, but there were later calls to the registry as well. With this information we asked the security team to check the security settings again, and they discovered they were different: exceptions set in test had not been set in production. The settings were reviewed, it was decided that the same exception rules could be applied across both environments, and after this was done retesting showed that both environments provided the same level of performance.

What is the lesson from this? Let data be your truth! I am not saying the people checking the security settings lied, but comparing lots of settings is complex and often not automated. Add in the bias that people assume they have configured the system correctly to start with, and you get early conclusions that can be flawed.