When John F. Kennedy said that “the time to repair the roof is while the sun is shining” in his State of the Union address, application development was probably the last thing on his mind. Yet his notion of preparing in advance to pre-empt tough times is something that can certainly be applied to the world of app development.
As modern application development intensifies under the demands of services, platforms, access, and ubiquity, so too does the scrutiny placed on product deployment. The specter of poor performance, or worse, an error-strewn “meltdown” once an app goes live has a direct impact on customer or user satisfaction, trust, and loyalty.
Naturally, then, the practices of application performance monitoring and of error and log management are being taken much more seriously, and earlier in the development lifecycle.
“The purpose of application performance monitoring and log monitoring is largely viewed as a ‘reactive’ approach,” says Craig Ferril, COO of Stackify, a provider of cloud-based monitoring and troubleshooting software.
“Historically if you think about how developers have learned or been kind of conditioned to view errors and logs, for example, it’s been viewed as the thing that you do, that you put out there as your lifeline or breadcrumb trail to look at after bad things have happened.”
It’s clear developers should take a more active role in the app monitoring process instead of pushing out code to QA, staging, or production and waiting to be tapped on the shoulder to fix a problem. This proactive philosophy requires a certain amount of collaboration, but also access, and the empowerment of developers to instrument their apps and leverage the resulting information.
Craig adds: “If you think about being able to monitor application performance as well as your errors and logs earlier in the lifecycle, you can really start to build trends and this body of information that tells you how your application looks from release to release.”
Tooling, of course, plays a vital role here, with a number of off-the-shelf application monitoring solutions available. While the majority of project environments use a patchwork of separate tools for application and server monitoring, performance metrics, and error and log management, all-in-one products such as Stackify provide added value.
According to Stackify’s own December Application Troubleshooting Report, 59 percent of organizations use standalone tools, despite integrated solutions showing significant gains on time to fix and impact on users. In addition, while 37 percent of developers are still learning about application issues from users, 46 percent lean instead on application monitoring and alerting to stay ahead of the curve.
A SaaS-based, cross-server, cross-application suite, Stackify slots effortlessly alongside popular technology stacks in cloud, on-premise, or hybrid setups. Teams can get a synchronized overview of performance, errors, and logs within a single pane of glass, quickly and with minimal fuss.
“The first things we suggest are that developers integrate Stackify’s error monitoring and log aggregation functionality into the product,” advises Craig. “That’s as simple as just grabbing a package from Stackify that integrates into your product and it gets built-in.”
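The natural integration point is the app’s existing logging framework. As a rough illustration only (this is not Stackify’s actual API), a custom handler for Python’s standard `logging` module shows the general shape of such an integration; here the handler simply buffers formatted records in memory, where a real integration would ship them to the aggregation backend.

```python
import logging

class MonitoringHandler(logging.Handler):
    """Illustrative stand-in for a monitoring integration: a real
    handler would forward records to the aggregation service; this
    one just buffers them in memory."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # format() includes the traceback when exc_info is attached,
        # so error context travels with the log entry.
        self.records.append(self.format(record))

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)
handler = MonitoringHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

# Any exception logged through the framework is now captured with
# its full traceback, without changing the application code style.
try:
    1 / 0
except ZeroDivisionError:
    logger.exception("checkout failed")
```

Because the hook lives at the logging-framework level, existing `logger` calls throughout the codebase flow into monitoring with no further changes.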
While application support ultimately remains the province of development and IT roles, arming developers with information is critical. Errors and logs are the most common sources for resolution here, ahead of server performance, stack traces, web requests, and so on. So rather than viewing this information as a “breadcrumb trail” to help rectify a fault after a problem has occurred, developers can begin identifying trends beforehand.
Craig explains that once you’ve plugged Stackify into your existing logging framework, “you’ll know if you’ve introduced new exceptions, new errors that were never seen before per environment – at each stage of the process, so in development you’re going to identify and solve more new errors than you would maybe later in the lifecycle, making your production app better”.
“But by the time you get into QA and staging, things should start to stabilize,” Craig adds. “Once you get into that QA, staging, and maybe you have a load test environment, you’ll want to, of course, continue to monitor errors and logs there with your application and you’ll know if new things have occurred, if log rates have gone up or if new errors have emerged, have slipped through development unit testing, or if errors that you thought were gone have come back.”
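The “new errors never seen before, per environment” idea boils down to deduplicating errors by a stable fingerprint and tracking which fingerprints each environment has already produced. A minimal sketch (the fingerprint fields and environment names are illustrative, not how Stackify actually groups errors):

```python
import hashlib

# environment name -> set of error fingerprints seen in that environment
seen = {}

def fingerprint(exc_type, message, location):
    """Reduce an error to a short, stable fingerprint so repeat
    occurrences group together instead of counting as new."""
    raw = f"{exc_type}|{message}|{location}".encode()
    return hashlib.sha1(raw).hexdigest()[:12]

def is_new_error(env, exc_type, message, location):
    """Return True the first time this error appears in this
    environment; False on every repeat occurrence."""
    fp = fingerprint(exc_type, message, location)
    bucket = seen.setdefault(env, set())
    if fp in bucket:
        return False
    bucket.add(fp)
    return True
```

Tracked this way, an error that was resolved in development but reappears in QA registers as “new to QA”, which is exactly the regression signal described above.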
Where an integrated tool such as Stackify proves beneficial is in shining a light on the error “black hole” that logs can become. Whether an error is new or a regression, you can drill down into the when and where with stack trace, URL, MVC route, query string, headers, cookies, context data, related log statement details, and much more.
Top errors as well as error rates can be visualized graphically and quickly correlated to application performance and key metrics, so developers know what happened before and after.
“You can instrument your code with custom application metrics, kind of like KPIs for your code,” says Craig. “So you’ll know things like if you’re tracking specific actions within your code that are important to you but not easy to monitor from the ‘outside looking in’.
“There’s that old mantra of ‘if it moves, measure it’ – so in Stackify not only can you measure it but you can also monitor it, so if you get outside of certain bounds and you want to be notified or alerted to that, you can see that.”
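The measure-then-monitor idea can be sketched in a few lines: a counter that records a code-level KPI and flags when it crosses a configured bound. The metric name and threshold below are illustrative, and a real system would route the alert to a notification channel rather than a list.

```python
class Metric:
    """Toy counter with an alert threshold, in the spirit of
    'if it moves, measure it'. Real monitoring would dispatch
    alerts externally; this buffers them for illustration."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.value = 0
        self.alerts = []

    def increment(self, amount=1):
        self.value += amount
        # Monitoring on top of measuring: alert once bounds are exceeded.
        if self.value > self.threshold:
            self.alerts.append(
                f"{self.name} exceeded {self.threshold}: {self.value}")

# A hypothetical KPI that is hard to see from the "outside looking in".
failed_logins = Metric("failed_logins_per_minute", threshold=5)
for _ in range(7):
    failed_logins.increment()
```

The point is that the metric lives inside the code path that matters, so it captures application-specific behavior no external probe could observe.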
Using an integrated solution like Stackify needn’t require an overnight sea change in the way you develop. It can be a natural evolution, perhaps even starting from a “reactive” standpoint that gradually becomes woven into the process. From there, instrumenting, measuring, and monitoring can be “baked in” going forward, building up a useful body of proactive queries and monitors across application iterations and releases.
Stackify then goes a stage further by enabling developers to monitor code behavior and query results, isolating problems at the code or query level.
“When you think of the typical reactive process of troubleshooting an application, one of the places a lot of developers find themselves investigating problems is in the database,” says Craig.
“If you find yourself trying to track down issues/problems in production by looking at the contents of the database you can actually monitor the contents of a table, the results of a query and do data integrity monitoring. So you can monitor the results of queries and know if there are some things that have created a problem at data level.”
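Data integrity monitoring of the kind Craig describes amounts to running a known-good query on a schedule and comparing its result to an expected value. A minimal sketch using an in-memory SQLite database (the table, query, and invariant are invented for illustration):

```python
import sqlite3

def check_integrity(conn, query, expect):
    """Run a scalar monitoring query and compare its result to an
    expected value: the essence of data integrity monitoring."""
    value = conn.execute(query).fetchone()[0]
    return {"query": query, "value": value, "ok": value == expect}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 9.99), (2, -5.00), (3, 12.50)])

# Invariant: no order should ever have a negative total.
result = check_integrity(
    conn, "SELECT COUNT(*) FROM orders WHERE total < 0", expect=0)
```

Here the check fails because one bad row slipped in, surfacing a data-level problem before anyone goes spelunking in production by hand.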
On the code side, logging debug data goes beyond mere strings and text to logging whole objects. Proactively logging object construction, state changes, and key events at points in time facilitates the kind of pseudo-debugging useful for tracing object-level exceptions.
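One common way to log objects rather than bare strings is to serialize the object’s state as JSON alongside a named event, so each log entry carries a point-in-time snapshot. A sketch using Python’s standard `logging` and `json` modules (the event names and object shape are invented; a log-aggregation backend would index the structured fields):

```python
import json
import logging

captured = []

class ListHandler(logging.Handler):
    """Stand-in for a log-aggregation sink; buffers messages in memory."""
    def emit(self, record):
        captured.append(record.getMessage())

log = logging.getLogger("objects")
log.setLevel(logging.DEBUG)
log.addHandler(ListHandler())

def log_state(event, obj):
    """Serialize an object's state with a named event so the snapshot
    travels with the log entry, enabling after-the-fact pseudo-debugging."""
    log.debug(json.dumps({"event": event, "state": obj}))

cart = {"user": 42, "items": ["sku-1", "sku-2"], "total": 19.98}
log_state("checkout_started", cart)
```

When an object-level exception later surfaces, these snapshots let you replay the object’s state at each key event without attaching a debugger.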
Monitoring message queues is another useful technique, along with monitoring known events and setting thresholds so that activity deemed “normal” only raises an alert once it becomes excessive.
“We tend to encourage people to monitor things like their queues, such as if you have a message queue system,” says Craig. “One of the early indicators of a problem is a message queue that is filling up – messages are coming into it but they are not coming out of it, so monitoring those kinds of message queues is really important.”
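The queue-depth warning sign is simple to capture: messages arriving faster than they are consumed means depth grows past a threshold. A toy sketch with an in-process queue (the threshold and message names are illustrative; in practice you would poll the broker’s own depth metric):

```python
from collections import deque

class MonitoredQueue:
    """Queue wrapper that flags a growing backlog, an early indicator
    that consumers have stalled. Threshold is illustrative only."""
    def __init__(self, depth_threshold):
        self.q = deque()
        self.depth_threshold = depth_threshold

    def publish(self, msg):
        self.q.append(msg)

    def consume(self):
        return self.q.popleft()

    def backlog_alert(self):
        # Alert condition: more messages waiting than the threshold allows.
        return len(self.q) > self.depth_threshold

mq = MonitoredQueue(depth_threshold=3)
for i in range(5):
    mq.publish(f"msg-{i}")  # messages coming in...
# ...but nothing is being consumed, so depth has crossed the threshold.
```

The alert fires before any user-visible failure, which is exactly the early-indicator value Craig describes.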
Feeding all of this back into application performance monitoring is ultimately about giving proactive power to developers keen to monitor and measure before things go wrong. That way, the leaky roof won’t be an issue when the rains come!
This post is brought to you in collaboration with GetApp, the leading premium business app discovery platform on the web. The site focuses on profiling established business apps, mostly software as a service (SaaS), targeting an audience of small and medium-sized businesses and business buyers from enterprise departments. The article was written by Mark Billen.