
4 Ways the Cloud Has Influenced App Troubleshooting

By: cferril | March 11, 2024

The rise of cloud computing has ushered in an era of unprecedented productivity for developers over the past several years. For those who have embraced this new world order, gone are the days of long lead times for hardware procurement and installation, architecture defined by slow-moving hardware upgrades, hardware-constrained scalability and flexibility, and a world where only sys admins have access to the infrastructure. But, as the barriers between development and delivery disappear, new challenges have emerged that can disrupt the lives of developers and slow down delivery of new products and features, giving back some of the efficiency gains that the Software-Defined Data Center (SDDC) created.

Whether you’re new to the cloud, or you’ve been around since before cloud was cool, you are likely to see four common challenges emerge that can make troubleshooting your applications in the cloud more difficult. Let’s take a closer look at these common pain points first to help build awareness around the challenges, and then I will offer some suggestions for how to prevent these hurdles from tripping you and your team up when it comes time to unravel an application troubleshooting mystery.

Shifting Ownership

If you’re adopting the cloud with limited support from an operations team, or you’re one of the growing number of DevOps or even no-ops teams bridging the operations and development worlds, the responsibility of operating and supporting both your app and your infrastructure, at least at some level, introduces a new dynamic that you may not have contemplated.

True, the ability to roll your own architecture without the burden of dealing with physical devices is liberating and far more efficient. But, as developer tools, deployment tools, and cloud operations tools become inextricably linked to one another, the old boundaries between who is dev and who is ops become blurred or even get removed altogether. This means the dev team is suddenly an integral part of operations, whether by design or by default, adding yet another responsibility for developers whose chief mandate is often to go faster. The more time you spend in the operations realm, especially in troubleshooting your app or the cloud resources it depends on, the less time you are able to devote to adding new value through code.

Lack of Transparency and Burden of Proof

While it’s true that having the full benefits of the cloud available at the press of a button is awesome, you wouldn’t be faulted for having a bit of nostalgia about the “good old days” of being able to have a conversation with a real live person down the hall about real physical hardware that’s either healthy or isn’t (along with the ability to actually lay hands on it). An old familiar refrain when something went wrong with an app in production was for the burden of proof to rest initially with the ops team – prove the hardware is working, the network is healthy, and the SAN hasn’t lost disks before making the dev team dig in. Honestly, everyone was just hoping against hope that it was something “easy” in the infrastructure, because when it was the app, that’s when things got hard. Well, that script has been reversed with the cloud: now the burden of proof is on the dev team, because what’s really hard is finding a problem that originates with someone else’s complex, abstracted, virtualized data center.

App returning a 500 error or performing poorly? If you’re using something delivered as-a-Service, such as databases, queues, caches, and the like, you won’t have much visibility into their health beyond the cloud provider’s status page and whatever you can directly observe. Either the service is working correctly and is speedy, or it isn’t; if it isn’t, life gets a lot murkier. Likewise, servers can be monitored, but you can’t really tell why your virtual resource’s performance has trailed off if you are the victim of something environmental that’s out of your control.
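When the provider’s status page and your own observations are all you have, it pays to make those observations deliberate. The sketch below is a minimal Python example of timing a round trip to a dependency’s health endpoint and logging what you saw; the URL and the latency threshold are hypothetical placeholders, not any particular provider’s API.

```python
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("dependency-probe")

# Hypothetical health endpoint for an as-a-Service dependency we rely on.
HEALTH_URL = "https://example-cache-service.internal/health"
SLOW_THRESHOLD_SECONDS = 0.5  # assumed budget; tune to your own baseline

def probe_dependency(url: str = HEALTH_URL) -> None:
    """Time one round trip to the dependency and record what we observed."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            elapsed = time.perf_counter() - start
            logger.info("dependency=%s status=%s elapsed=%.3fs", url, response.status, elapsed)
            if elapsed > SLOW_THRESHOLD_SECONDS:
                logger.warning("dependency=%s is responding slowly (%.3fs)", url, elapsed)
    except Exception:
        elapsed = time.perf_counter() - start
        logger.exception("dependency=%s failed after %.3fs", url, elapsed)

if __name__ == "__main__":
    probe_dependency()
```

A record like this, collected on a schedule, gives you your own evidence of when a dependency slowed down or failed, independent of what the provider reports.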

And, no matter how good the support team is at your favorite cloud provider, it’s rare that they will be as responsive to your requests for more information on an issue as your own in-house ops team could be, and they won’t be as well-versed on your architecture. To varying degrees, you’re at the mercy of the cloud provider for consistent, reliable services, and it’s also up to them to offer timely insight when issues arise with the services you depend on. Your mileage may vary, of course, as to whether your cloud provider offers this level of communication and transparency. But, if they don’t, then the burden of proof rests squarely with you to show that the issue isn’t in your app. Quite a reversal of fortunes, isn’t it?

Easy Complexity

Compounding the challenge of sorting through infrastructure issues vs. code issues is the simple fact that applications are becoming far more complex, and in many cases, portions of the overall architecture may be transient in nature. Combine complexity with impermanence and you have a recipe for some real Sherlock Holmes-caliber mysteries at times.

The incredible thing about an SDDC is that you can create nearly any kind of architecture required to support your application stack’s needs, all relatively easily – if you can dream it, you can build it. Want to cobble together .NET, Java, PHP, Node.js, Ruby, Database-as-a-Service for SQL and NoSQL, Message-Queues-as-a-Service, and Search-as-a-Service? From a cloud deployment perspective, it’s devilishly easy to get started. But with that ultra-polyglot approach and a heavy reliance on software-defined services comes a new set of challenges:

  • First, you have a variety of services that are black boxes to you. Each of these services comes with its own set of tricks for gaining insight into performance and availability, and each one may differ in how you monitor and troubleshoot it.
  • Learning how to support a variety of different technologies creates drag on your delivery velocity. It’s hard enough learning the performance and reliability tricks for a few technologies; trying it for a wide variety can draw focus from the real goals of building new value through software and making the business more successful.
  • Not every monitoring tool can support every technology stack, and the wider you cast the technology net, the harder it can become to support your full stack from a single monitoring tool.
  • If you’re using dynamic (transient) resources, such as scale-on-demand servers, you are quite likely to lose the data you need to troubleshoot a problem unless you’ve thought about how to preserve insights that otherwise disappear with the server when it’s de-provisioned.
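One way to keep those insights from vanishing with a de-provisioned server is to ship log records off the box as they are written rather than leaving them on local disk. Below is a minimal Python sketch using the standard library’s HTTPHandler to forward records to a central collector; the collector host and path are hypothetical, and in practice you would point this at whatever log aggregation service you actually use.

```python
import logging
import logging.handlers

# Hypothetical central log collector; substitute your aggregation service.
COLLECTOR_HOST = "logs.example.internal:8080"
COLLECTOR_PATH = "/ingest"

logger = logging.getLogger("checkout-service")
logger.setLevel(logging.INFO)

# Keep a local console handler for live debugging on the instance...
logger.addHandler(logging.StreamHandler())

# ...but also forward every record off the box, so nothing is lost
# when the instance is scaled in or de-provisioned.
remote = logging.handlers.HTTPHandler(COLLECTOR_HOST, COLLECTOR_PATH, method="POST")
logger.addHandler(remote)

logger.info("instance starting", extra={"region": "us-east-1"})
```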

More Frequent Change

And, finally, we come to the double-edged sword that brought this all about in the first place: going faster! The increased agility that the cloud brings, especially when coupled with dev tools that are integrated into the delivery cycle (think PaaS environments), has a way of shortening delivery cycle times and increasing the number of releases crammed into a given week, month, and year. This is especially true in organizations that have also adopted agile development practices. Code can flow to production smoothly with greater frequency, and architecture changes can be made far more swiftly and easily. Unfortunately, with more frequent code releases and architecture changes come more frequent opportunities to break something.

A big part of the movement toward Agile and Lean is also the notion of always moving forward – rather than rolling back a release in the event of an issue, detect problems early and patch them quickly. Enabling this mandate, however, requires two things that are often missing if you are coming from a slower-moving environment or a more traditional hosting model: 1) developer visibility into a baseline of behavior telemetry to know what “good” looks like historically; and 2) instant feedback on the health of the application post-release relative to that healthy baseline. Without them, it’s hard to know if you’ve made gains or losses with your release – your users are often your only real barometer.
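To make that second requirement concrete, the sketch below compares a post-release error rate against a historical baseline and flags a likely regression. It’s a simplified Python example with hypothetical traffic numbers and a made-up tolerance factor; real tooling would pull these values from your monitoring platform rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Window:
    """Request and error counts observed over some time window."""
    requests: int
    errors: int

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0

# Assumed tolerance: anything more than 50% above baseline is suspicious.
REGRESSION_FACTOR = 1.5

def looks_like_regression(baseline: Window, post_release: Window) -> bool:
    """Return True when the post-release error rate exceeds the baseline budget."""
    budget = baseline.error_rate * REGRESSION_FACTOR
    return post_release.error_rate > budget

# Hypothetical numbers: last week's traffic vs. the first hour after a deploy.
baseline = Window(requests=1_000_000, errors=1_200)   # ~0.12% errors historically
post_release = Window(requests=20_000, errors=90)     # ~0.45% errors right now

if looks_like_regression(baseline, post_release):
    print("Post-release error rate is well above baseline - investigate or patch forward.")
```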

So… How Do I Code More and Support Less?

There’s no denying that the cloud has impacted the life of many developers, mostly in a very positive way. Of course, with new technologies and capabilities always comes a new set of challenges to overcome. In the case of cloud-hosted applications, that includes the challenge of supporting those applications effectively and efficiently in their new environments so that the gains in productivity aren’t given back in support of the application.

So, what can development teams do to adapt to and overcome these challenges?

There are 3 basic steps that every development team should take to make supporting cloud-based applications easier.

  1. Establish Access, Process, and Protocol – The first order of business for helping developers support their cloud-based apps more effectively is giving them safe access to the information and resources they need. Unfortunately, all too often in cloud environments this is an all-or-nothing proposition – full login rights to servers and even potentially full rights to the management portal, or no access at all. Make sure to establish the correct access methods for your developers so that they have the visibility and access they need, without handing over so much control that it increases the likelihood of accidents.
  2. Design Supportability Into the Application – Once your application is in production, there are several common questions that you will need to be able to answer at a moment’s notice: Is it (and everything it depends on) running? Are users satisfied with the performance? Is anything silently failing and frustrating users without setting off alarms? If something failed, who was impacted, and what caused the issue? There are also some things that simply cannot be measured and monitored from outside of the application, but which speak directly to the health and well-being of your application. To enable you to quickly answer the inevitable questions, consider incorporating the following:
    • If it moves, measure it. Report application metrics and KPIs from within your code in order to see events and data that would otherwise be locked away from you. Some events and metrics only you, the developer, have the power to expose. Knowing how your app behaves at a core level can provide insight that proves invaluable when searching for troubleshooting clues (see the sketch after this list). If you can configure monitoring and alerts for those metrics, even better. We’ve elaborated on this subject in the article Errors & Logs: putting the data to work.
    • Log often, and log meaningfully. If you only report errors, you will lack the critical insights necessary to help point to the root cause of the error. By logging at, say, info or debug instead of just warn or error, you will have the breadcrumb trail you need to find it. It’s impossible to get the state of the system after the fact – you need to have logged it at the time of the event.
    • Centralize your insights. Remembering that life in the cloud can be both quite distributed, and quite transient, it’s always good to bring everything – logs, errors, custom metrics, and other telemetry – into a central location for normalization, correlation, and continuity. You may need the data, and what it tells you, well beyond the ephemeral life span of your cloud resource.
  3. Identify Health Baselines Early – Key information like message queue length, average request time, app pool resource utilization, custom metric values, log and error rates, and more can all be charted for your application these days – monitoring and charting isn’t just the domain of ops tools any longer. Understand what your app looks like both when healthy and unhealthy, preferably starting in pre-production environments, so that you can see how your application morphs from release to release, under different loads, and as your architecture evolves. By baselining as far back as dev and QA, you can often catch problems well before they impact customers and send you and your team scrambling.
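To make step 2’s “measure it and log it meaningfully” advice concrete, here is a minimal Python sketch that emits a custom timing metric and logs at informative levels around a unit of work. The checkout function, metric names, and the simple metric reporter are hypothetical; an APM agent or metrics client would normally handle the reporting, but the idea of instrumenting from inside the code is the same.

```python
import logging
import time

logging.basicConfig(
    level=logging.DEBUG,  # capture the breadcrumb trail, not just errors
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("checkout")

def record_metric(name: str, value: float) -> None:
    """Stand-in for a real metrics client (StatsD, an APM SDK, etc.)."""
    logger.info("metric %s=%.3f", name, value)

def process_checkout(order_id: str, items: int) -> None:
    logger.debug("starting checkout order_id=%s items=%d", order_id, items)
    start = time.perf_counter()
    try:
        # ... call payment gateway, reserve inventory, etc. ...
        if items == 0:
            raise ValueError("cart is empty")
        logger.info("checkout succeeded order_id=%s", order_id)
    except Exception:
        logger.exception("checkout failed order_id=%s", order_id)
        record_metric("checkout.failure", 1)
        raise
    finally:
        record_metric("checkout.duration_seconds", time.perf_counter() - start)

process_checkout("A-1001", items=3)
```

Logging at debug and info, not just on failure, is what leaves the breadcrumb trail described above, and the duration metric gives you one of the baseline signals step 3 depends on.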

Conclusion

There’s no denying the cloud brings incredible capabilities to the lives of developers: speed, agility, flexibility, scalability, and more. As with any new, disruptive technology, new challenges are also par for the course. By applying some basic strategies for application management, monitoring and troubleshooting, you can have all of the advantages of the cloud without giving back the gains during those critical support engagements, and have happier team members and end users as well.
At Stackify, we offer a solution to the issues presented in this article. Start your free trial of Retrace today.
