What is VisualVM? How to Use VisualVM, Benefits, Tutorials and More


VisualVM is a Java profiler, one of several types of Java performance tools (to see more of our favorite Java performance tools, check out this post). In today’s post, we’ll take a look at VisualVM, how it works, and some integrations and alternatives.

A Definition of VisualVM

VisualVM is a powerful tool that provides a visual interface to see deep and detailed information about local and remote Java applications while they are running on a Java Virtual Machine (JVM). It integrates several of the command-line tools the JDK provides (jmap, jstack, jconsole, jstat, and jinfo) and bundles them into a single view of the application running inside the JVM. All of these tools ship with the standard JDK distribution.
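
Each of these tools can also be run on its own from the JDK bin directory. A quick sketch of typical invocations (the process id 12345 is a placeholder for your own application's pid):

    # find the process id of the target JVM
    jps -l

    # print a thread dump for the process
    jstack 12345

    # write a binary heap dump to an .hprof file
    jmap -dump:live,format=b,file=heap.hprof 12345

    # print garbage collection statistics every 1000 ms
    jstat -gcutil 12345 1000

    # show the JVM's system properties and flags
    jinfo 12345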

It helps programmers and architects track memory leaks, analyze heap data, monitor the garbage collector, and profile CPU usage. It also helps improve application performance and ensure that memory usage is optimized. With features like thread analysis and heap dump analysis, it is very handy for solving run-time problems.
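
To make these features concrete, here is a deliberately leaky toy program (the class and field names are invented for this demo). Run it, attach VisualVM, and the heap graph in the Monitor tab climbs steadily until an OutOfMemoryError occurs; a heap dump taken along the way shows byte[] dominating the histogram:

    import java.util.ArrayList;
    import java.util.List;

    // Deliberately retains every allocation so the leak is visible in VisualVM.
    public class LeakDemo {
        private static final List<byte[]> LEAK = new ArrayList<>();

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                LEAK.add(new byte[1024 * 1024]); // retain 1 MB per iteration
                Thread.sleep(100);               // slow enough to watch live
            }
        }
    }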

VisualVM is free; there is no separate cost to get it.

Official Page: https://visualvm.github.io

How to Get and Run VisualVM

The good news here: you don’t actually need to do anything, because VisualVM has shipped in the JDK bin directory since JDK 1.6 update 7 (in JDK 9 and later it is no longer bundled, and you can download it from the official page above). Once you are in the JDK bin directory, you will find jvisualvm.exe; just double-click it, and the application starts up.


Get VisualVM

You can see all the running Java applications on the left pane of the interface.

Running Java Apps

At the top left you can see the Applications tab, and under it you will see options like Local, Remote, and Snapshots. To profile a remote application, you must connect to the remote server and add that application:

VisualVM Options


Local Applications

While setting up the remote application, you can also give it a name in the “Display name” field.
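
For VisualVM to see the remote application at all, the target JVM must expose JMX. One common approach is to start it with the standard JMX remote system properties. A minimal sketch, assuming a trusted network (the port, hostname, and jar name are placeholders; authentication and SSL are disabled here only for brevity and should be enabled in production):

    java -Dcom.sun.management.jmxremote \
         -Dcom.sun.management.jmxremote.port=9010 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         -Djava.rmi.server.hostname=203.0.113.10 \
         -jar myapp.jar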

Benefits

There are many important features that VisualVM supports, such as:

  1. A visual interface for local and remote Java applications running on a JVM.
  2. Monitoring of an application’s memory usage and runtime behavior.
  3. Monitoring of application threads.
  4. Analysis of the memory allocated to different applications.
  5. Thread dumps – very handy in case of deadlocks and race conditions.
  6. Heap dumps – very handy for analyzing heap memory allocation.

So if you look at the list above, you can monitor your applications — both local and remote — which is quite handy for runtime problems like an OutOfMemoryError, deadlocks, or race conditions, as you get to see visually which objects are causing the OutOfMemoryError, for example, or the objects and resources causing a thread deadlock.
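
For example, a minimal two-lock program like this sketch (all names invented) will deadlock almost immediately. VisualVM’s Threads tab flags the deadlock, and a thread dump names both threads and the monitor each is blocked on:

    // Two threads acquire the same two locks in opposite order.
    public class DeadlockDemo {
        private static final Object LOCK_A = new Object();
        private static final Object LOCK_B = new Object();

        public static void main(String[] args) {
            new Thread(() -> lockInOrder(LOCK_A, LOCK_B), "thread-1").start();
            new Thread(() -> lockInOrder(LOCK_B, LOCK_A), "thread-2").start();
        }

        private static void lockInOrder(Object first, Object second) {
            synchronized (first) {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                synchronized (second) { // never reached once both threads hold their first lock
                    System.out.println(Thread.currentThread().getName() + " got both locks");
                }
            }
        }
    }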

How VisualVM Works

Once the VisualVM application starts, you will see the list of applications on the left pane, and on the right side you will see different tabs. The important tab here is the “Monitor” tab. It gives you a detailed analysis of heap space and utilization, classes, and threads. It can read and interpret binary heap dump files. You can either generate a heap dump from within VisualVM, or you can load one generated outside the application (e.g., a dump created with jmap on the Unix/Linux server where the application is hosted; note that the kill -3 command produces a thread dump, not a heap dump).
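
You can also trigger a heap dump programmatically through the HotSpot diagnostic MXBean and open the resulting .hprof file in VisualVM via File > Load. A minimal sketch, assuming a HotSpot JVM (the output path is a placeholder):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    server, "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap("/tmp/app-heap.hprof", true); // true = dump live objects only
        }
    }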

How VisualVM Works

The VisualVM-MBeans plugin gives programmers an MBean browser to help you access all of the platform MXBeans and monitor and manage the MBeans of your application. Similarly, the VisualVM-GC plugin provides a graphical view of garbage collection activity.
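
Any standard MBean your application registers appears in that browser alongside the platform MXBeans. A minimal sketch (the class, attribute, and object names are invented for this demo):

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class MBeanDemo {
        // Standard MBean convention: the interface is named <ClassName>MBean.
        public interface CounterMBean {
            int getCount();
            void reset();
        }

        public static class Counter implements CounterMBean {
            private volatile int count;
            public int getCount() { return count; }
            public void reset() { count = 0; }
            public void increment() { count++; }
        }

        public static void main(String[] args) throws Exception {
            Counter counter = new Counter();
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(counter, new ObjectName("demo:type=Counter"));
            while (true) {          // keep the JVM alive so you can attach VisualVM
                counter.increment();
                Thread.sleep(1000);
            }
        }
    }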

The purpose of the CPU profiler is to determine how much time the CPU spends executing each part of the program; using this information, you can optimize the code and improve the overall performance of the application.
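
A quick way to watch the profiler work is to run something deliberately CPU-bound (purely illustrative code). In VisualVM’s Sampler or Profiler tab, countPrimes() should dominate the self-time column:

    public class HotspotDemo {
        public static void main(String[] args) {
            while (true) {
                System.out.println(countPrimes(200_000)); // the hot spot to find
            }
        }

        // Naive trial division on purpose: burns CPU in a single method.
        static int countPrimes(int limit) {
            int count = 0;
            for (int n = 2; n < limit; n++) {
                boolean prime = true;
                for (int d = 2; d * d <= n; d++) {
                    if (n % d == 0) { prime = false; break; }
                }
                if (prime) count++;
            }
            return count;
        }
    }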

Here’s an interesting article about monitoring the IBM JVM with VisualVM.

VisualVM IDE Plugins

If the standalone interface doesn’t appeal to you, you can integrate VisualVM into development IDEs like Eclipse, IntelliJ IDEA, and NetBeans via the plugins provided. It makes life easier for developers.

Alternatives

With application performance and memory utilization becoming so important these days, it’s obvious that profiling tools are in demand. Many tools serve a similar purpose to VisualVM; here are a few other profiling tools available in the market:

  1. YourKit
  2. Profiler
  3. JConsole

In summary, VisualVM is a valuable tool that gives programmers deep detail on how an application is performing in terms of CPU, memory, and threads, and on how they can use that information to improve the performance and scalability of their applications. It is also very useful for supporting applications and solving complex run-time problems.

Additional Resources and Tutorials

For further reading, tutorials, and other helpful insights, visit the following resources:

The Best Tools for Log Management

Best Log Management Tools: 50 Useful Tools for Log Management, Monitoring, Analytics, and More


Gone are the days of painful plain-text log management. While plain-text data is still useful in certain situations, when it comes to doing extended analysis to gather insightful infrastructure data – and improve the quality of your code – it pays to invest in a reliable log management solution that can empower your business workflow.

Logs are not an easy thing to deal with, but they are nevertheless an important aspect of any production system. When you are faced with a difficult issue, it’s much easier to use a log management solution than it is to wade through endless text files spread throughout your system environment.

The big advantage of log management tools is that they can help you easily pinpoint the root cause of any application or software error, within a single query. The same applies to security-related concerns, where many of the following tools are capable of helping your IT team prevent attacks even before they happen. Another factor is having a visual overview of how your software is being used globally by your user base — getting all this crucial data in one single dashboard is going to make your productivity rise substantially.

When picking the right log management tool for your needs, evaluate your current business operation. Decide whether you’re still a small operation looking to get basic data out of your logs, or you plan to enter the enterprise level, which will require more powerful and efficient tools to tackle large-scale log management.

We built Retrace to address the need for a cohesive, comprehensive developer tool that combines APM, errors, logs, metrics, and monitoring in a single dashboard. When it comes to log management, tools run the gamut from stand-alone log management tools to robust solutions that integrate with your other go-to tools, analytics, and more. We put together this list of 50 useful log management tools (listed below in no particular order) to provide an easy reference for anyone wanting to compare the current offerings to find a solution that best meets your needs. 

1. Loggly

@Loggly

Loggly

Loggly is a cloud-based log management service that can dig deep into extensive collections of log data in real-time and surface the most crucial information on how to improve your code and deliver a better customer experience. Loggly’s flagship log data collection environment means that you can use traditional standards like HTTP and syslog, versus having to install complicated log collector software on each server separately.

Key Features:

  • Collects and understands text logs from any sources, whether server or client side.
  • Keeps track of your logs even if you exceed your account limitations. (Pro & Enterprise)
  • Automatically parses logs from common web software: Apache, NGINX, JSON, etc.
  • Custom tags let you find related errors throughout your log data.
  • State-of-the-art search algorithm for running global searches or individual searches based on set values.
  • Data analysis dashboard to give you a visual glimpse of your log data.

Cost:

  • Lite: Free
  • Standard: $99
  • Pro: $199
  • Enterprise: $349

2. Logentries

@Logentries

Logentries

Logentries is a cloud-based log management platform that makes any type of computer-generated log data accessible to developers, IT engineers, and business analysis groups of any size. Logentries’ easy onboarding process ensures that any business team can quickly and effectively start understanding their log data from day one.

Key Features:

  • Real-time search and monitoring; contextual view, custom tags, and live-tail search.
  • Dynamic scaling for different types and sizes of infrastructure.
  • In-depth visual analysis of data trends.
  • Custom alerts and reporting of pre-defined queries.
  • Modern security features to protect your data.
  • Flawless integration with leading chat and performance management tools.

Cost:

  • Free: $0
  • Starter: $39
  • Pro: $99
  • Team: $265
  • Enterprise: Custom quote.

3. GoAccess

@GoAccess

GoAccess

GoAccess is a real-time log analyzer software intended to be run through the terminal of Unix systems, or through the browser. It provides a rapid logging environment where data can be displayed within milliseconds of it being stored on the server.

Key Features:

  • Truly real-time; updates log data within milliseconds within the terminal environment.
  • Custom log strings.
  • Monitor pages for their response time; ideal for apps.
  • Effortless configuration; select your log file and run GoAccess.
  • Understand your website visitor data in real-time.

Cost: Free (Open-Source)

4. Logz

@Logzio

Logz

Logz uses machine learning and predictive analytics to simplify the process of finding critical events in the data generated by logs from apps, servers, and network environments. Logz is a SaaS platform with a cloud-based back-end built on the ELK Stack – Elasticsearch, Logstash & Kibana. This environment provides real-time insight into any log data that you’re trying to analyze or understand.

Key Features:

  • Use ELK stack as a Service; analyze logs in the cloud.
  • Cognitive analysis provides critical log events before they reach production.
  • Fast set-up; five minutes to production.
  • Dynamic scaling accommodates businesses of all sizes.
  • AWS-built data protection to ensure your data stays safe and intact.

Cost:

  • Free: $0
  • Pro: Starting at $89
  • Enterprise: Custom quote.

5. Graylog

@Graylog2

Graylog

Graylog is a free and open-source log management platform that supports in-depth log collection and analysis. Used by teams in network security, IT ops, and DevOps, you can count on Graylog to discern potential security risks, help you follow compliance rules, and get to the root cause of any particular error or problem your apps are experiencing.

Key Features:

  • Enrich and parse logs using a comprehensive processing algorithm.
  • Search through unlimited amounts of data to find what you need.
  • Custom dashboards for visual output of log data and queries.
  • Custom alerts and triggers to monitor any data failures.
  • Centralized management system for team members.
  • Custom permission management for users and their roles.

Cost:

  • Free: Open-Source
  • Enterprise: Starting at $6,000 per year

6. Splunk

@Splunk

Splunk

Splunk focuses its log management services around enterprise customers who need concise tools for searching, diagnosing and reporting any events surrounding data logs. Splunk’s software is built to support the process of indexing and deciphering logs of any type, whether structured, unstructured, or sophisticated application logs, based on a multi-line approach.

Key Features:

  • Splunk understands machine-data of any type; servers, web servers, networks, exchanges, mainframes, security devices, etc.
  • Flexible UI for searching and analyzing data in real-time.
  • Drilling algorithm for finding anomalies and familiar patterns across log files.
  • Monitoring and alert system for keeping an eye on important events and actions.
  • Visual reporting using an automated dashboard output.

Cost:

  • Free: 500MB data per day
  • Splunk Cloud: Starting at $186
  • Splunk Enterprise: Starting at $2,000

7. Logmatic

@Logmatic

Logmatic

Logmatic is an extensive logging management software that integrates seamlessly with any language or stack. Logmatic works equally well with front-end and back-end log data and provides a painless online dashboard for tapping into valuable insights and facts of what is happening within your server environment.

Key Features:

  • Upload & Go — share any type of logs or metrics, and Logmatic will automagically sort them for you.
  • Custom parsing rules let you weed through tons of complicated data to find patterns.
  • Powerful algorithm for pinpointing logs back to their origin.
  • Dynamic dashboards for scaling up time series, pie charts, calculated metrics, flow charts, etc.

Cost:

  • Starter: $49
  • Pro: $99
  • Enterprise: $349

8. Logstash

@Elastic

Elastic

Logstash from Elasticsearch is one of the most renowned open-source projects for managing, processing and transporting your log data and events. Logstash works as a data processor that can combine and transform data from multiple sources at the same time, then send it over to your favorite log management platform, such as Elasticsearch.

Key Features:

  • Ingest data from varied sets of sources: logs, metrics, web apps, data storages, AWS, without losing concurrency.
  • Real-time data parsing.
  • Create structure from unstructured data.
  • Pipeline encryption for data security.

Cost: Open-Source

9. Sumo Logic

@SumoLogic

SumoLogic

Sumo Logic is a unified logs and metrics platform that helps you analyze your data in real-time using machine learning. Sumo Logic can quickly pinpoint the root cause of any particular error or event, and it can be set up to stand constantly on guard over what is happening to your apps in real-time. Sumo Logic’s strong point is its ability to work with data at a rapid pace, removing the need for external data analysis and management tools.

Key Features:

  • Unified platform for all log and metrics.
  • Advanced analytics using machine learning and predictive algorithms.
  • Quick setup.
  • Support for high-resolution metrics.
  • Multi-tenant: single instance can serve groups of users.

Cost:

  • Free: 500MB per day
  • Professional: $90
  • Enterprise: $150

10. Papertrail

@PapertrailApp

Papertrail

Papertrail is a snazzy hosted log management service that takes care of aggregating, searching, and analyzing any type of log files, system logs, or basic text log files. Its real-time features allow developers and engineers to monitor events on apps and servers as they happen. Papertrail offers seamless integration with services like Slack, Librato, and email to help you set up alerts for trends and any anomalies.

Key Features:

  • Simple and user-friendly interface.
  • Easy setup; direct logs to a link provided by the service.
  • Log events and searches are updated in real-time.
  • Full-text search. Message, metadata, even substrings.
  • Graph with Librato, Geckoboard, or your own service.

Cost:

  • Free: 100MB/month
  • Pro: Starting at $7/month for 1GB/data

11. Fluentd

@Fluentd

Fluentd

Fluentd collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure. Fluentd’s flagship feature is an extensive library of plugins which provide extended support and functionality for anything related to log and data management within a concise developer environment.

Key Features:

  • Unified logging layer that can decouple data from multiple sources.
  • Gives structure to unstructured logs.
  • Flexible, but simple. Takes a couple of minutes to get it going.
  • Compatible with a majority of modern data sources.

Cost:

  • Free: Open-Source
  • Enterprise: Upon request.

12. syslog-ng

@sngOSE

syslog-ng

syslog-ng is an open-source log management solution that helps engineers and DevOps teams collect log data from a large variety of sources, process it, and eventually send it over to a preferred log analysis tool. With syslog-ng, you can effortlessly collect, filter, categorize, and correlate your log data from your existing stack and push it forward for analysis.

Key Features:

  • Open-source with a large community following.
  • Flexible scaling with any size infrastructure.
  • Plugin support for extended functionality.
  • PatternDB for finding patterns in complex data logs.
  • Data can be inserted into common database choices.

Cost: Free

13. rsyslog

@RGerhards

rsyslog

Rsyslog is a blazing-fast system built for log processing. It offers great performance benchmarks, tight security features, and a modular design for custom modifications. Rsyslog has grown from a singular logging system to be able to parse and sort logs from an extended range of sources, which it can then transform and provide an output to be used in dedicated log analysis software.

Key Features:

  • Easy to implement in common web hosts.
  • Lets you create custom parse methods.
  • Online config builder.
  • Regex generator and checker.
  • Custom development available for hire.

Cost: Free

14. LOGalyze

@LOGalyze

LOGalyze

LOGalyze is a simple-to-use, centralized log collection and analysis system with low operational costs; it is capable of gathering log data from a wide range of operating systems. LOGalyze does predictive event detection in real-time while giving system admins and management personnel the right tools for indexing and searching through piles of data effortlessly.

Key Features:

  • High-performance and high-speed processing of logs.
  • Log-definitions for breaking down and indexing log lines.
  • Integrated front-end dashboard for efficient online access.
  • Secure log forwarding to chosen applications.
  • Automated reporting in PDF.
  • Compatible with Syslog, Rsyslog.
  • It breaks down the incoming log to fields and names them.

Cost: Free & Open-Source

15. jKool

@jKoolCloud

jKool

jKool Cloud helps its users to unravel important insights about their log data which can then be used to amplify the decision making in any business environment. jKool’s platform helps teams to improve their customer experience by tapping into crucial data about user and application activity on server and client side of things; with comprehensive tools, you can better understand how users are using your apps and improve based on your findings.

Key Features:

  • Cloud-based, but can be deployed on your own server infrastructure.
  • Multi-tenancy for team and accounts management.
  • Handles large and complex sets of data in real-time.
  • Streamable queries for gathering insight without having to deploy complex infrastructure.
  • Visual charts dashboard for visualizing any insights and important data.
  • Geo-tagging for events and location-based search queries.
  • Simulation engine for bootstrapping mock-ups.

Cost:

Free: 1GB/per day

Business: Custom quote only.

16. Sentry

@GetSentry

Sentry

Sentry is a modern platform for managing, logging, and aggregating any potential errors within your apps and software. Sentry’s state-of-the-art algorithm helps teams detect potential errors within the app infrastructure that could be critical to production operations. Sentry essentially helps teams avoid the hassle of dealing with a problem when it’s too late to fix, using its technology to inform teams about any potential rollbacks or fixes that would sustain the health of the software.

Key Features:

  • Detailed error reporting: URLs, used parameters, and header information.
  • Graphical interface for understanding nature of certain errors and where they originate from so that you can fix them.
  • Dynamic alerts and notifications using SMS, Email, and Chat services.
  • Real-time error reporting as you deploy a new version of your app so that errors can be monitored as they happen, and ultimately prevented before it’s too late.
  • User-feedback system to compare potential error reports against the user’s actual experience.

Cost:

  • Free: 10k/events per month
  • Pro: Starting at $12
  • Enterprise: Upon request.

17.  Rocana

@RocanaInc

Rocana

Rocana provides out-of-the-box log analytics with its flagship product Rocana Ops — an advanced analytics platform capable of advanced anomaly detection, automated behavior detection across your existing stack, and direct error reporting. A limitless search feature lets you dig as deep into the history of your logs as you need to, pinpointing crucial errors and obstructions and giving you clear answers to questions that might previously have been difficult to answer.

Key Features:

  • Statistical metrics for system performance measurement.
  • Weighted Analytic Risk Notifications gives an individual score to trends (succeed or fail). 
  • Custom metrics out of the box.
  • Highly scalable and can manage terabytes of data without any performance issues.
  • Built for Hadoop to provide stellar back-end performance.
  • Provides concise business data for IT operations.

Cost: Upon request.

18. Flume

@TheASF

Flume

Apache Flume is an elegantly designed service for helping its users stream data directly into Hadoop. Its core architecture is based on streaming data flows, which can ingest data from a variety of sources and link up directly with Hadoop for further analysis and storage. Flume’s enterprise customers use the service to stream data into Hadoop’s HDFS; generally, this data includes log data, machine data, geo-data, and social media data.

Key Features:

  • Multi-server support for ingesting data from multiple sources.
  • Collection can be done in real-time or collectively using batch modes.
  • Allows the ingestion of large data sets from common social and eCommerce networks for real-time analysis.
  • Scalable by adding more machines to transfer more events.
  • Reliable back-end built with durable storage and failover protection.

Cost: Free, Open-Source

19. Cloudlytics

@Cloudlytics

Cloudlytics

Cloudlytics is a SaaS startup designed to improve the analysis of log data, billing data, and cloud services. In particular, it is targeted at AWS cloud services such as CloudFront, S3, and CloudTrail — using Cloudlytics, customers can get in-depth insights and pattern discovery based on the data provided by those services. With three management modules, Cloudlytics gives its users the flexibility to monitor resources in their environment, analyze monthly bills, or analyze AWS logs.

Key Features:

  • Real-time alerts of errors as soon as they appear.
  • Billing analytics let you closely watch over your consumption of resources.
  • Sophisticated user interfaces for getting a truly in-depth view of your data.
  • File download analytics including GEO data.
  • Automated cloud management for back-ups and service status.

Cost: Starts at $5/month.

20. Scalyr

@Scalyr

Scalyr

Scalyr’s modern technology enables Ops teams to reach a heightened level of performance and productivity by replacing traditional tools (monitoring, metrics, analysis, and tracking) with one standalone, integrated service. Scalyr’s infrastructure allows any DevOps team to scour through terabytes of data within a matter of seconds. Scalyr can be used as a separate agent on any of your own servers, or you can import data from services like Heroku, AWS, and Fluentd.

Key Features:

  • Centralized log management and server monitoring.
  • Search hundreds of GBs/sec across all your servers.
  • Watch filtered messages and events appear in real-time.
  • Turn your logs and metrics data into visually appealing graphs.
  • System overview dashboards for quick access to system performance and reporting.
  • Powerful alert manager so you can keep up with what’s going on in your system.
  • Error and alert reports can be traced back to root issues.

Cost:

  • Silver: $99
  • Gold: $249
  • Platinum: $499

21. Octopussy

Octopussy

Octopussy is a Perl-based, open-source log manager that can do alerting, reporting, and visualization of data. Its basic back-end functionality is to analyze logs, generate reports based on log data, and alert administrators to any relevant information.

Key Features:

  • Lightweight Directory Access Protocol for maintaining a users list.
  • Custom alert notifications through email, Jabber, Nagios and Zabbix.
  • Generate custom reports and export them using FTP, SCP, or Email.
  • Create custom maps for understanding the architecture of your back-end.
  • Custom support for popular services and software: Cisco, Postfix, MySQL, Syslog, etc.
  • Custom templates for interfaces and reports.

Cost: Free

22. LOGStorm

@BlackStratusInc

LOGStorm

LOGStorm is a SIEM compliant log management solution with advanced features that are easy to implement and use. Built with security in mind, LOGStorm focuses on helping Ops teams to identify threats, breaches, and violations before or as they appear. LOGStorm’s cost-friendly management and monitoring solution allows teams of any size to better understand what their data is doing and why.

Key Features:

  • Real-time threat analysis allows you to identify threats as they happen so that you can prevent them from having a negative impact on your network.
  • Correlation algorithm to understand why events are occurring and whether there are any patterns to recognize.
  • Centralized storage of logs for easy access to event data, records, and raw logs.
  • Extensive Device Support ensures integration with over 1,000 devices, systems, and applications.
  • Easy setup and configuration even for operations without prior security resources.

Cost: Upon request.

23. NXLog

NXLog

Today’s IT environments pose a layer of challenges when it comes to truly understanding why events occur and what logs are reporting. With thousands of log entries from a plethora of sources, and with the demand for logs to be analyzed in real-time, it can be difficult to manage all of the data in a centralized environment. NXLog strives to provide the required tools for concise analysis of logs from a variety of platforms, sources, and formats; it can collect logs from files in various formats and receive logs from the network remotely over UDP, TCP, or TLS/SSL on all supported platforms.

Key Features:

  • Multi-platform support for Linux, GNU, Solaris, BSD, Android, and Windows.
  • Modular environment through pluggable plugins.
  • Scalable and high-performance with the ability to collect logs at 500,000 EPS or more.
  • Message queuing enables you to buffer and prioritize logs so they don’t get lost in the pipeline.
  • Task schedule and log rotation.
  • Offline log processing capabilities for conversions, transfers, and general post processing.
  • Secure network transport over SSL.

Cost: Free (Community Edition), Enterprise (Upon request)

24. Sentinel Log Manager

@NetIQ

Sentinel Log Manager

NetIQ is an enterprise software company that focuses on products related to application management, software operations, and security and log management resources. Sentinel Log Manager is a bundle of software applications that lets businesses take advantage of features like effortless log collection, analysis services, and secure storage units to keep your data accessible and safe. Sentinel’s cost-effective and flexible log management platform makes it easy for businesses to audit their logs in real-time for any possible security risks or application threats that could upset production software.

Key Features:

  • Distributed search — find comprehensive details about events on your local or global Sentinel Log Manager servers.
  • Instant reports — create detailed one-click reports based on your search queries.
  • Sentinel Log Manager comes with reports needed for common regulatory reporting. These predefined reports reduce the time you must spend on compliance.
  • Choose from traditional text-oriented search, or build custom, more complex search queries yourself.
  • Support for non-proprietary storage systems.
  • Intuitive storage analysis to let you know when you can expect to need more storage availability, based on the current rate of consumption.
  • Log encryption over the network to provide a hardened layer of security for your log data.

Cost: Custom quote upon request.

25. XpoLog

@XpoLog

XpoLog

XpoLog seeks out new and innovative ways to help its customers better understand and master their IT data. With their leading technology platform, XpoLog focuses on helping customers analyze their IT data using unique patents and algorithms that are affordable for all operation sizes. The platform drastically reduces time to resolution and provides a wealth of intelligence, trends, and insights into enterprise IT environments.

Key Features:

  • Agent-less technology for collecting live data over an SSH connection.
  • Collect log events via traditional choices like HTTP or Syslog, or Fluentd and LogStash.
  • XpoLog’s technology can interpret any log format, including that of archived files.
  • Choose from dynamic or automated parsing rules.
  • Dynamic search platform that provides comprehensive search features within a Google-like search environment.
  • Search across live log data for application problems, IDs, IPs, errors, exceptions, and more.
  • Using search functions, users can filter and investigate logs and apply complex functions to aggregate and correlate events in the indexed data.

Cost:

  • Free: 1GB / per day
  • Pro: Starting at $9
  • Enterprise: Custom quote.

26. EventTracker

@LogTalk

EventTracker

EventTracker provides its customers with business-optimal services that help to correlate and identify system changes that potentially affect the overall performance, security, and availability of IT departments. EventTracker uses SIEM to create a powerful log management environment that can detect changes through concise monitoring tools, and provides USB security protection to keep IT infrastructure protected from emerging security attacks. EventTracker SIEM collates millions of security and log events and provides actionable results in dynamic dashboards so you can pinpoint indicators of a compromise while maintaining archives to meet regulatory retention requirements.

Key Features:

  • Malware detection and automated audit using MD5 and VirusTotal.
  • Network-wide threat hunting based on patterns.
  • Builds on top of the success of Snort and OpenVAS, providing a user-friendly environment to use both for extensive security measurements and audits.
  • Straightforward deployment of software to have it up and running quickly.
  • Pre-configured alerts for hundreds of security and operational conditions.

Cost: Starting at $2,000

27. LogRhythm

@LogRhythm

LogRhythm

Getting your focus lost in an ocean of log data can be detrimental to your work and business productivity. You know the information you need is somewhere in those logs, but don’t quite have the power to pick it out from the rest. LogRhythm is a next-generation log management platform that does all the work of unfolding your data for you. Using comprehensive algorithms and the integration of Elasticsearch, anyone can identify crucial insights about business and IT operations. LogRhythm focuses on making sure that all of your data is understood, versus collecting it alone and only taking it from it what you need.

Key Features:

  • Smart data collection technology allows you to collect, analyze and parse virtually any kind of data.
  • Elasticsearch backend for concluding simple or sophisticated search queries that go through your data at lightning-fast speeds.
  • Critical attack monitoring to the very first and last second of occurrence.
  • Advanced visual dashboard to help you quickly understand how data is originating and whether a threat is present.
  • Meet compliance and data retention requirements by archiving data at a low cost. 

Cost: Starting at $24,000.

28. IPSwitch

@Ipswitch

IPSwitch

WhatsUp Log Management Suite from Ipswitch is a modular management solution based on apps that collect, analyze, report, alert, and store log data in real-time, giving you the tools to understand your data in real-time, detect events, and prevent security mishaps. Log data is full of insightful information about the ways an organization can protect itself from threats, attacks, malware, and any loss of data. Given that log files come from a plethora of sources at any given time, it’s hard, if not near-impossible, to do all of the work manually, which is why the WhatsUp suite is the perfect solution for log management and analysis.

Key Features:

  • Automated archiving and collection of logs; clears and consolidates within a single framework.
  • Helps to keep a close eye on what’s happening with your log files in real-time.
  • Create custom analysis queries and builds reports to understand log data and trends.
  • In-depth forensics across all servers and workstations in a single console. 
  • High-level cryptographic encryption using FIPS 140-2.

Cost: Starting at $1,300.

29. McAfee Enterprise

@IntelSec_Biz

McAfee Enterprise

McAfee is a household name in IT and network security and is known for providing modern, optimized tools for businesses and corporations of all sizes. The McAfee Enterprise Log Manager is an automated log management and analysis suite for all types of logs: event, database, application, and system logs. The software’s built-in features can identify and validate logs for their authenticity — a truly necessary feature for compliance reasons. Organizations use McAfee to ensure that their infrastructure is in compliance with the latest security policies; McAfee Enterprise complies with more than 240 standards.

Key Features:

  • Keep your compliance costs low with automated log collection, management, and storage.
  • Native support for collecting, compressing, signing, and storing all root events so that they can be traced back to their origin.
  • Custom storage and retention options for individual log sources.
  • Option to choose from local or network storage areas.
  • Supports chain of custody and forensics.
  • Storage pools for flexible, long-term log storage. 

Cost: Starting at $30,000.

30. AlienVault

@AlienVault

AlienVault

AlienVault USM (Unified Security Management) reaches far beyond the capabilities of SIEM solutions, using powerful all-in-one (AIO) security precautions and a comprehensive threat analysis algorithm to identify threats in your physical or cloud locations. Resource-dependent IT teams that rely on SIEM alone risk delays in detecting and analyzing threats as they happen, whereas AlienVault USM combines the powerful features of SIEM with direct log management and other security features, such as asset discovery, vulnerability assessment, and direct threat detection — all of which give you one centralized platform for security monitoring.

Key Features:

  • Cost-effective by integrating third-party security tools.
  • Pre-written configs let you detect threats right from the get go.
  • Comprehensive security intelligence as provided by AlienVault Labs.
  • Kill-chain taxonomy for quick assessment of threats, their intent, and strategy.
  • Granular methods for in-depth search and security data analysis.
  • Network & Host IDS.

Cost: Starting at $10,000.

31. Bugfender

@BugfenderApp

Bugfender

Not everyone needs an enterprise solution for log management; in fact, many of today’s most well-known businesses operate solely on mobile platforms. That is the market Bugfender aims to serve with its high-quality logging application for cloud-based analysis of general log data and user behavior within your mobile apps.

Key Features:

  • Intuitive bug analysis lets you track your app errors and get them patched up before they reach production.
  • Customer history to provide better and more precise customer support.
  • Remote logging sends all log data directly to the cloud services provided by Bugfender.
  • Custom logging options for individual devices.
  • Offline data storage for transmission to live servers once the device is back online.
  • Extended device information for all logging sessions.

Cost:

  • Free: 100K log lines per day
  • Startup: $29
  • Business: $99
  • Premium: $349

32. LogDNA

@TryLogDNA

LogDNA

LogDNA prides itself on being the easiest log management platform you’ll ever put your hands on. LogDNA’s cloud-based log services let engineers, DevOps, and IT teams pull any app or system logs into one simple dashboard. Using the command line or web interface, you can search, save, tail, and store all of your logs in real-time. With LogDNA, you can diagnose issues, identify the source of server errors, and analyze customer activity, as well as monitor Nginx, Redis, and more. A live-streaming tail makes surfacing difficult-to-find bugs easy.

Key Features:

  • Gather logs from your favorite systems including Linux, Mac, Windows, Docker, Node, Python, Fluentd, and much more.
  • Easy to use and experiment with demo environment for a real-time product preview.
  • Powerful algorithm to identify and detect the core relationship between data and issues at hand.
  • Real-time data search, filter, and debug.
  • Built by an ambitious group of people who are keen to work on implementing new features and sets of tools.
  • Has a close relationship with the open-source community to provide transparency.

Cost:

  • Free: Unlimited / Single User
  • Pro: Starting at $1.25 per GB and custom features as needed.

33. Prometheus

@PrometheusIO

Prometheus

Prometheus is a systems and service monitoring system that collects metrics from configured targets at specified intervals, evaluates rule expressions, displays results and triggers alerts when pre-defined conditions are met. With customers like DigitalOcean, SoundCloud, Docker, CoreOS and countless others, the Prometheus repository is a great example of how open-source projects can compete with leading technology and innovate in the field of systems and log management.

Key Features:

  • A custom-built query language for digging deep into your data that can then be used to create graphs, charts, tables, and custom alerts.
  • A selection of data visualization methods: Grafana, Console, and an inbuilt ExpressionEngine.
  • Efficient storage techniques to scale data appropriately.

Cost: Free, Open-Source.

34. ScoutApp

@ScoutApp

ScoutApp

Scout is a language specific monitoring app that helps Ruby on Rails developers identify code errors, memory leaks, and more. Scout has been renowned for its simple yet advanced UI that provides an effortless experience of understanding what is happening with your Ruby on Rails apps in real-time. A recent business expansion also enabled Scout to expand its functionality for Elixir-built apps.

Key Features:

  • Memory leak detection.
  • Slow database query analysis.
  • Powerful integration with GitHub.
  • Automatic dependency instrumentation.

Cost: $59/server/month

35. Motodata

@MotodataSystems

Motodata

Motodata does more than just manage your logs; it can correlate, integrate, and visualize nearly any of your IT data using native applications built into the platform. On top of world-class log management, Motodata is capable of monitoring the status and health of your network, servers, and apps. Contextual alerts ensure that you can sleep well-rested, as any critical events or pre-defined thresholds will notify you or your team through frequently used platforms like email, messaging, or chat applications.

Key Features:

  • Extensive log sourcing options: Firewalls, Routers, Switches, Servers (Web, App, Sys), Databases, Anti-Malware Software, Mail Servers, and more.
  • Gather essential data quickly in the event of a security breach. 
  • In-depth keyword search that pinpoints specific terms across all of your logs.
  • Audit analysis to discover crucial insights and trends that stem across your log data.
  • Native integration with apps like Jira, Jetty, AWS, IIS, Oracle, Microsoft, and much more.

Cost: 30-day free trial

36. InTrust

@Quest

InTrust

InTrust gives your IT department a flexible set of tools for collecting, storing, and searching through huge amounts of data that comes from general data sources, server systems, and usability devices within a single dashboard. InTrust delivers a real-time outlook on what your users are doing with your products, and how those actions affect security, compliance, and operations in general. With InTrust you can understand who is doing what within your apps and software, allowing you to make crucial data-driven decisions when necessary.

Key Features:

  • Security and Forensic analysis using pre-built templates and algorithms.
  • Concise and dynamic investigations in data about your users, files, and events.
  • Run smart searches on auditing data from Enterprise Reporter and Change Auditor to improve security, compliance, and operations while eliminating information silos from other tools.
  • Easily forward your Windows system data to a SIEM solution for deeper analysis.

Cost: Free Trial for Enterprise solution upon request.

37. Nagios

@NagiosInc

Nagios

Nagios provides a complete log management and monitoring solution which is based on its Nagios Log Server platform. With Nagios, a leading log analysis tool in this market, you can increase the security of all your systems, understand your network infrastructure and its events, and gain access to clear data about your network performance and how it can be stabilized.

Key Features:

  • A powerful out of the box dashboard that gives customers a way to filter, search, and conduct a comprehensive analysis of any incoming log data.
  • Extended availability through multiple server clusters so data isn’t lost in case of an outage.
  • Custom alert assignments based on queries and IT department in charge.
  • Tap into the live-stream of your data as it’s coming through the pipes.
  • Easy management of clusters lets you add more power and performance to your existing log management infrastructure.

Cost: Starting at $1995.

38. lnav

@LnavApp

lnav

If enterprise-level log management software is overwhelming you by now, you may want to look into lnav — an advanced log data manager intended for smaller-scale IT teams. With direct terminal integration, it can stream log data in real-time as it arrives. You don’t have to worry about setting anything up or even getting an extra server; it all happens live on your existing server, and it’s beautiful. To run lnav, you will need the following packages: libpcre, sqlite, ncurses, readline, zlib, and bz2.

Key Features:

  • Runs directly in your server terminal; easy to open, close, and manage.
  • Point and shoot concept, specify the log directory and start monitoring.
  • Custom filters automatically filter out the ‘garbage’ portion of your log data.

Cost: Open-Source

39. Seq

@GetSeq_Net

Seq

Seq is a log server built specifically for .NET applications. Developers can easily use Seq to monitor log data and performance from development all the way to production. Search application logs from a simple events dashboard, and understand how your apps progress or perform as you push toward your final iteration.

Key Features:

  • Structured logging provides a rich outlook on events and how they relate to each other.
  • Intuitive filters allow developers to use SQL-like expressions or an equivalent of JavaScript and C# operators.
  • Full-text support.
  • Filters database for creating and saving filters based on what you’re searching for.
  • Custom analysis and charting using SQL syntax.

Cost:

  • Team: $190
  • Business: $690
  • Enterprise: $1990

40. Logary

@LogaryLib

Logary

Logary is a high performance, multi-target logging, metric, tracing and health-check library for Mono and .Net. As a next-generation logging software, Logary uses the history of your app progress to build models from.

Key Features:

  • Logging from a class module.
  • Custom logging fields and templating capabilities.
  • Custom adapters: EventStore, FsSQL, Suave, Topshelf.

Cost: Open-Source

41. EventSentry

@netikus

EventSentry

EventSentry is an award-winning monitoring solution that includes a new NetFlow component for visualizing, measuring, and investigating network traffic. This log management tool helps SysAdmins and network professionals achieve more uptime and security.

Key Features:

  • See all traffic metadata that passes through network devices that support NetFlow.
  • Utilize network traffic data for troubleshooting purposes.
  • Map network traffic to a geo-location.
  • Communicate and document your network by adding notes or uploading documents in the web reports by @ mentioning the computer name so the web reports can associate the update with the appropriate device on the network.
  • Automatically extracts IP addresses from events and supplements them with reverse lookup and/or Geo IP lookup data.
  • Central collector service supports data collection over insecure mediums through strong TLS encryption.

Cost:

  • Full License: $85/Windows device + free year of maintenance and $15.30 for each additional year – Price decreases when purchasing multiple licenses at a time
  • Network Device Licenses: Starting at $58 + free year of maintenance – Price decreases when purchasing multiple licenses at a time
  • NetFlow License: $1,299/collector + free year of maintenance and $233.82 for each additional year

42. Logsign

@logsign

Logsign

A full-featured, all-in-one SIEM solution that unifies log management, security analytics, and compliance, Logsign is a next-generation solution that increases awareness and allows SysAdmins and network professionals to respond in real time.

Key Features:

  • Easily detect attacks with Logsign’s intelligently visualized dashboards.
  • SIEM with built-in TI data feeds.
  • Security information and event management solution focused on security intelligence, log management, and easier compliance reporting.
  • Real-time monitoring that enables users to work fast with live data.
  • After collection, Logsign filters, classifies, and normalizes logs.
  • Manage and store centralized and distributed logs based on your structures and needs.

Cost: FREE trial available; Contact for a quote

43. Loom Systems

@Loom_Systems

Loom Systems

Loom Systems provides AI-powered log analysis for watching over your digital systems. Their advanced AI analytics platform predicts and prevents problems in digital business by connecting to your digital assets and continually monitoring and learning about them by reading logs and detecting when something seems likely to go off course.

Key Features:

  • Automated log parsing for any type of application.
  • Problem prediction and cross-applicative correlation.
  • Automated root cause analysis and recommended resolutions.
  • Stream all logs from any application, and Loom automatically parses and analyzes them in real time.
  • Leverages AI to provide root causes of issues in real time.

Cost: FREE trial available

  • Startup: $999/month – 1 user, unlimited data, up to 10 monitored instances, access to full feature set, dedicated secure cloud server, and unlimited alerts
  • Team: Contact for a quote – 5 users, all Startup features, plus up to 100 monitored instances, dedicated account manager, and kickoff training session
  • Business: Contact for a quote – 20 users, all Team features, plus up to 1,000 monitored instances, weekly session with an expert  analyst, and enterprise SLA

44. SolarWinds Log & Event Manager

@solarwinds

SolarWinds Log & Event Manager

SolarWinds offers IT management software and monitoring tools such as their Log & Event manager. This log management tool handles security, compliance, and troubleshooting by normalizing your log data to quickly spot security incidents and make troubleshooting a breeze.

Key Features:

  • Node-based licensing.
  • Real-time event correlation.
  • Real-time remediation.
  • File integrity monitoring.
  • USB defender.
  • Configurable dashboard.
  • Scheduled searches.
  • User defined groups.
  • Custom email templates.
  • Threat intelligence feed.

Cost: FREE trial available; Starts at $4,495

45. ManageEngine EventLog Analyzer

@manageengine

ManageEngine EventLog Analyzer

ManageEngine creates comprehensive IT management software for all of your business needs. Their EventLog Analyzer is an IT compliance and log management software for SIEM that is one of the most cost-effective on the market today.

Key Features:

  • Automate the entire process of managing terabytes of machine-generated logs by collecting, analyzing, correlating, searching, reporting, and archiving from one centralized console.
  • Monitor file integrity.
  • Conduct log forensics analysis.
  • Monitor privileged users.
  • Comply with various compliance regulatory bodies.
  • Analyzes logs to instantly generate a number of reports including user activity reports, historical trend reports, and more.

Cost: FREE trial available; Contact for a quote

46. PagerDuty

@pagerduty

PagerDuty

PagerDuty helps developers, ITOps, DevOps, and businesses protect their brand reputation and customer experiences. An incident resolution platform, PagerDuty automates your resolutions and provides full-stack visibility and delivers actionable insights for better customer experiences.

Key Features:

  • Visualize each dimension of the customer experience.
  • Gain event intelligence and understand the context of disruptions across your infrastructure with actionable, time-series visualizations of correlated events.
  • Response orchestration to enable better collaboration and rapid resolution.
  • Discover patterns in performance and view post-mortem reports to analyze system efficiency.

Cost: FREE trial available for 14 days

  • Lite: $9/month billed annually or $10/month billed monthly – Unlimited notifications, 180+ integrations with top tools, alert triage and deduplication, reliable notifications and escalations, and more
  • Basic: $29/month billed annually or $34/month billed monthly – Unlimited notifications, 200+ integrations with top tools, all Lite features, plus incident enrichment, incident urgencies, on-call scheduling, and more
  • Standard: $49/month billed annually or $59/month billed monthly – Unlimited notifications, 200+ integrations with top tools, all Basic features, plus coordinated response, incident subscription, postmortems, and more
  • Enterprise: $99/month billed annually – Unlimited notifications, 200+ integrations with top tools, all Standard features, plus operations command console, infrastructure health application, stakeholder users, live call routing, and more

47. BLËSK

@bleskcanada

BLËSK

BLËSK Event Log Manager is an intuitive, comprehensive, and cost-effective IT and network management software solution. With BLËSK, you can collect log and event data automatically with zero installation and zero configuration.

Key Features:

  • Store logs and event data in a single place.
  • Centralize, analyze, and control logs from all of the equipment on your network and more.
  • Lightning fast access to millions of log entries on your network.
  • Collect log and event data in real-time from any device.
  • Fast, easy log collection for addressing different scaling needs.

Cost: FREE trial available; Contact for a quote

48. ALog SMASH

ALog SMASH

ALog SMASH is a top log management tool that collects log data used to monitor access to servers storing important information accessible through endpoints. ALog SMASH works at the server level and costs less to run than client PC log monitoring tools.

Key Features:

  • Monitors the status of all access to crucial data.
  • Collects log files and converts them into usable, actionable information.
  • Ultra-compression reduces converted files to less than 1/40,000 their original size.
  • Indexes files for easy search.
  • Customizable settings for easier, more efficient detection.

Cost: FREE trial available; ALog SMASH 1 server license: $1,740 – Includes first year maintenance fee

49. Alert Logic Log Manager

@alertlogic

Alert Logic Log Manager

Alert Logic offers full stack security and compliance. Their Log Manager with ActiveWatch is a Security-as-a-Service solution that meets compliance requirements and identifies security issues anywhere in your environment, even in the public cloud.

Key Features:

  • Collects, processes, and analyzes data while the ActiveWatch team unlocks the insights in your log data.
  • 24×7 expert monitoring and analysis.
  • Cloud-based log management.
  • Increased visibility, rapid custom reporting, and scalable, real-time log collection and log management.
  • Easy-to-use web interface with intuitive search interface.
  • Over 4,000 parsers available with new log format support added frequently.
  • Advanced correlation capabilities.

Cost: Contact for a quote

50. WhatsUp Gold Network Monitoring

@Ipswitch

WhatsUp Gold Network Monitoring

WhatsUp Gold Network Monitoring is a log management tool that delivers advanced visualization features that enable IT teams to make faster decisions and improve productivity. With WhatsUp Gold, you can deliver network reliability and performance and ensure optimized performance while minimizing downtime and continually monitoring networks.

Key Features:

  • Monitor applications, network, servers, VMs, and traffic flows with one flexible license.
  • Visualize your end-to-end network with interactive network maps.
  • Find problems and troubleshoot them more quickly to provide optimal availability and low MTTRs.
  • Unique, affordable consumption-based licensing approach.
  • Application monitoring, network traffic analysis, configuration management, discovery and network monitoring, and virtual environment monitoring.

Cost: FREE trial available for 30 days

  • WhatsUp Gold Basic: Starting at $1,755/license – Network monitoring essentials
  • WhatsUp Gold Pro: Starting at $2,415/license – Proactive server and network monitoring
  • WhatsUp Gold Total: Starting at $3,495/license – Visibility across your infrastructure and apps
Cloud Monitoring Tips

6 Reasons Cloud Monitoring Is Different Than Server Monitoring


Traditional IT monitoring has revolved around monitoring the infrastructures and servers. As you move to the cloud, it is possible that you don’t have either of those things. You could deploy your app via a service like Azure App Services and rely on Azure’s hosted Redis and SQL offerings. You could literally have access to zero servers.

In the cloud, it is even more important to monitor your actual applications and not just your servers. Application performance management solutions become even more important. Your cloud provider is responsible for monitoring the infrastructure and keeping your servers online. You still need to monitor the performance of your actual applications.

Cloud monitoring vs. traditional server monitoring

Monitoring Platform-as-a-Service (PaaS) Style App Hosting

One of the big advantages of cloud computing is the ability to deploy your applications, and the server aspects of it are completely managed. As a developer, I love only having to worry about my application.

Application deployment options like Heroku, Azure App Services, Google Cloud Engine, and others potentially create some monitoring challenges. You may not have full access to the underlying servers, and typical monitoring solutions will not work. Some of these platforms also provide deployment slots, which are also unique from a monitoring perspective.

At Stackify we use Azure App Services. Using them as an example, we do not have access to the servers themselves. We can use the Azure KUDU console to access a pseudo file system, Event Viewer, IIS Logs, running processes, and other information. We also can’t access Windows Performance Counters. To monitor our instances, we use a special WebJob as a monitoring agent instead of installing one directly on the server.

Auto Scaling in the Cloud

One of the big advantages of cloud hosting is the auto-scaling capabilities. Many companies have peak times of day or week for their applications. Outside of those peak times, they should scale their applications down to save on server expenses.

Cloud monitoring solutions have to support the autoscaling of applications. The number of application instances could be constantly changing, and each one still needs to be monitored. The cloud monitoring agent must install easily as servers are created and handle scaling down gracefully.

Server Monitoring is Not Cloud Monitoring

Traditional server monitoring has revolved around whether a server was up or down and what its CPU and memory usage looked like. Once you move to the cloud, these are details you don’t have to worry about as much, or may not even have access to. You can set up auto-scaling or use a serverless architecture and it just works. Monitoring cloud applications is a little different!

Application performance monitoring is still very important. You still need to know which requests in your application are used the most and which are the slowest. APM solutions, like Retrace, can help provide this. You also need to monitor application metrics via Windows Performance Counters, JMX MBeans, or other common sources.

More: Application Monitoring Best Practices for Developers with Retrace APM
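
On the JVM side, JMX is the standard channel for this kind of metric. As a minimal sketch (the class name is our own invention; the MXBeans themselves are standard JDK APIs), here is how an application can read the same beans a monitoring agent or VisualVM would query remotely:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

// Reads a few of the JMX MXBeans every JVM exposes out of the box.
// An APM agent or a tool like VisualVM reads these same beans over a
// remote JMX connection; here we read them in-process to show what
// kind of data is available.
public class JvmMetricsProbe {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

        System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
        System.out.println("Live threads:      " + threads.getThreadCount());
        System.out.println("System load avg:   " + os.getSystemLoadAverage());
    }
}
```

Custom application metrics (orders processed, queue depth, and so on) can be registered as MBeans alongside these and picked up by the same tooling.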

Function-as-a-Service (FaaS) or Serverless Architectures

Developers are starting to take advantage of new serverless architectures. Services like AWS Lambda and Azure Functions make it easy for developers to deploy applications as individual pieces of business logic. The cloud providers can then process requests for those functions at nearly infinite scale. They have completely abstracted away the concept of servers.

Monitoring serverless architectures is a whole new paradigm. Cloud monitoring solutions are going to have to play catch-up when it comes to monitoring these new types of applications. The cloud providers are also going to have to build new capabilities to make the monitoring possible.
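
To make the paradigm concrete, here is roughly what one of those "individual pieces of business logic" looks like, sketched as an AWS Lambda handler in Java. It assumes the standard aws-lambda-java-core library; the class name and the discount rule are hypothetical:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A single piece of business logic deployed as an AWS Lambda function.
// There is no server to provision or monitor; the platform invokes
// handleRequest on demand. Requires the aws-lambda-java-core library.
public class DiscountHandler implements RequestHandler<Integer, Integer> {
    @Override
    public Integer handleRequest(Integer orderTotalCents, Context context) {
        // Lambda provides the logger; there is no server log file to tail.
        context.getLogger().log("Processing order total: " + orderTotalCents);
        // Hypothetical business rule: 10% off orders over $100.
        return orderTotalCents > 10_000 ? orderTotalCents * 90 / 100 : orderTotalCents;
    }
}
```

There is no process to watch and no agent to install, which is exactly why monitoring has to shift toward invocation counts, durations, and error rates reported by the platform itself.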

Monitoring Cloud Application Dependencies

Cloud providers offer a wide array of specialized databases, queuing, storage, and other services. Some examples from Azure are Cosmos DB, Service Bus, Table Storage, and others. For AWS it would be services like Redshift, DynamoDB, SQS, and others. Traditional monitoring solutions were not designed to monitor these specialized services. You will need to monitor them via the cloud provider or via specialized cloud monitoring solutions.

No Infrastructure to Monitor

In the cloud, you don’t have to worry about monitoring traditional IT infrastructure. There are no switches, firewalls, hypervisors, SANs, or similar devices to monitor. The cloud providers are responsible for all of this under the covers. It has all been abstracted away from us, which is a beautiful thing. I just want to set up 100 servers, and I need 10 terabytes of SSD storage. I don’t care how it works!

Summary

If you have taken your app and moved it to some virtual machines in the cloud, you can probably keep monitoring your servers and applications the same way you have been. However, if you are “all in” and taking full advantage of all the Platform-as-a-Service features, you will likely need to re-think how you monitor your applications. Moving to the cloud creates new opportunities and challenges. Cloud monitoring is perhaps both!

Software Testing Tips: How to Ensure Your App Functions Like a Well-Oiled Machine

Software Testing Tips: 101 Expert Tips, Tricks and Strategies for Better, Faster Testing and Leveraging Results for Success

Angela Stringfellow Developer Tips, Tricks & Resources, Insights for Dev Managers, Popular Leave a Comment

When you hear the term “software testing,” do you think about one particular type of test — such as functional testing or regression testing — or do you immediately start visualizing the complex, interconnected web of test types and techniques that comprise the broad world of software testing?

Most experienced developers understand that software testing isn’t a singular approach, although, in the broadest sense, it refers to a collection of tests and evaluations that aim to determine whether a software application works as it should and if it can be expected to continue working as it should in real-world use scenarios. Basically, software testing aims to ensure that all the gears are churning smoothly and work together like a well-oiled machine.

That said, there are a variety of approaches to software testing, all of which are equally important in reaching a realistic conclusion to the pressing questions facing developers and testers:

  • Does the application as a whole work?
  • Do all features function as expected?
  • Can the application withstand the demands of a heavy load?
  • Are there security vulnerabilities that could put users at risk?
  • Is the application reasonably easy to use, or will users find it a pain in the a$$?
  • And others

Still, it’s not a simple matter of running a few tests and getting the green light. There’s a process to thorough software testing, which entails writing appropriate test cases, ensuring that you’re covering the right features and functions, addressing user experience concerns, deciding what to automate and what to test manually, and so forth.

We’ve covered many different types of software testing in our recent guide to software testing, as well as in many individual posts (check out our testing archives here). Beyond knowing the ins and outs of software testing, it’s helpful to learn from those who have traveled the path before you, to learn from their mistakes and leverage the tips and tricks they’ve learned along the way (and graciously decided to share with the development world). That’s why we rounded up this list of 101 software testing tips.

Software Testing Tips

The list features tips and insights from experts on many of the less black-and-white aspects of testing, such as considerations for choosing the right tests, creating a testing culture that sets the stage for successful testing among teams, prepping for tests, testing with greater efficiency, and other important insights to streamline your testing process and get better results in less time and, often, at a more affordable cost.

Click on a link below to jump to the tips in a particular section:

Cultivating a Testing Culture

1. Don’t treat quality assurance as the final development phase. “Quality assurance is not the last link in the development process. It is one step in the ongoing process of agile software development. Testing takes place in each iteration before the development components are implemented. Accordingly, software testing needs to be integrated as a regular and ongoing element in the everyday development process.” – Lauma Fey, 10 Software Testing Tips for Quality Assurance in Software Development, AOE; Twitter: @aoepeople

"A good bug report can save time by avoiding miscommunication or the need for additional communication." - Sifter

2. Encourage clarity in bug reporting. “Reporting bugs and requesting more information can create unnecessary overhead. A good bug report can save time by avoiding miscommunication or the need for additional communication. Similarly, a bad bug report can lead to a quick dismissal by a developer. Both of these can create problems.” – Sifter

3. Treat testing like a team effort. “Testing is a team effort. You’ll find keeping everyone in the loop from the beginning will save an enormous amount of time down the line.

“When you expose testers to a greater amount of the project, they will feel much more comfortable and confident in what their goals should be. A tester is only as efficient as their team.

“Your goal is to make sure everyone involved in the project has a solid understanding of the application. When everyone understands what the application entails, testers can effectively cover the test cases.

“Communicate with the test lead or manager to allow testers to be involved in the decision-making meetings. Giving testers access to early knowledge will allow them to prepare early test environments. This will avoid any unforeseen issues, preventing any delays or risks while also being cost-effective.” – Willie Tran, 7 Easy Ways To Be An Efficient Software Tester, Testlio; Twitter: @testlio

4. Use tools to make testing easy. “Most technical leads will be familiar with the challenge of getting developers into the habit of making code testable. Therefore, top of your list of objectives should be ‘ease of use.’ Tests should be easy to write, and more importantly, trivially easy to run, by your development team. Ideally, all developers should be able to run all tests, in a single click, right from their IDE. No excuses!” – Adam Croxen, Mobile automation testing tips, App Developer Magazine; Twitter: @AppDeveloperMag

5. Find your “good enough” threshold. “Everyone wants perfect software, but budget constraints, business priorities, and resource capacity often make ‘perfect’ an impossible goal. But if perfection isn’t your goal, what is? Recognize that the goal of testing is to mitigate risk, not necessarily eliminate it. Your applications don’t need to be perfect — but they do need to support your business processes in time to leverage new opportunities without exposing companies to unnecessary or untenable risk. Therefore, your definition of quality may vary by application. As you initiate a project, get the right roles involved to ask the right questions: What constitutes perfect versus good enough versus unacceptable?

  • Benefit: Your ability to achieve quality is improved because the application development team is not charged with unrealistically perfect expectations. Rather, it is chartered with a definition of quality that fits the given time, resource, and budget constraints.
  • Impact on quality: This improvement will help you meet business requirements and achieve a satisfying user experience.
  • Relevant roles: Business stakeholders and the entire application development team will need to implement this practice.” – Margo Visitacion and Mike Gualtieri, Seven Pragmatic Practices To Improve Software Quality, Forrester; Twitter: @forrester

6. Your user documentation should be tested, too. “User manuals are indivisible from software. There’s no software so simple that it doesn’t need a user guide. End users are people who can fall under certain categories and be united by the notion of a target audience, but, nevertheless, they are still just a bunch of unique human beings. So, some functionality that is clear to one person is rocket science to another. This proves two points: yes, we all need technical documentation so our product is used properly and, yes, this documentation should be approached from many angles and tested thoroughly to be understood by everyone.” – ClickHelp Team, Testing of User Documentation, TestMatick; Twitter: @TestMatick

"Opening up the communication lines between the testing teams can do wonders for making the testing smooth." - Tommy Wyher

7. Keep open lines of communication between testing teams. “Opening up the communication lines between the testing teams can do wonders for making the testing smooth. Communications allow the team to compare results and share effective solutions to problems faced during the test. This will also ensure clear assignment of each task. All members of the team should get updated with the current status of the test.” – Tommy Wyher, Top Software testing tips and tricks you should know, uTest; Twitter: @uTest

8. Automation is good, but it doesn’t fix poor test design. “Test design must take into consideration all the areas of testing to be performed, but it should also identify high-risk areas or other specific areas where test automation would add the most value rather than leaving such decisions to be made ad hoc once development is in later stages.” – 10 Tips to Get Started with Automated Testing, Optimus Information; Twitter: @optimusinfo

9. Testing is about reducing risk. “Testing, at its core, is really about reducing risk.

“The goal of testing software is not to find bugs or to make software better. It’s to reduce the risk by proactively finding and helping eliminate problems that would most greatly impact the customer using the software. Impact can happen with the frequency of an error or undesired functionality, or it can be because of the severity of the problem.

“If you had a bug in your accounting software that caused it to freeze up for a second or two whenever a value higher than $1,000 was entered, it would not have a huge impact. However, that would be a high enough frequency to be very annoying to the customer.

“On the other hand, if you had a bug in the accounting software that caused all of the data to become corrupted every 1,000th time the data was saved, that would have a huge impact but at a very low frequency.

“The reason I define software testing in this way is that — as any tester will tell you — you can never find all the bugs or defects in a piece of software and you can never test every possible input into the software (for any non-trivial application).” – John Sonmez, What Software Developers Should Know About Testing and QA, DZone; Twitter: @DZone

"The ‘what if’ should become the leading question of the software research." - A1QA

10. Think outside of the box. “More and more often we have to deal with assuring quality of various IoT developments. They require testers to become real users for some time and try the most unthinkable scenarios. What we recommend is to start thinking out of the box.

“How can a professional manual tester who runs routine tests regularly become more creative? There are some useful pieces of advice that might be of help to any tester:

  • Find out what the software under test is not expected to be doing. Try those things out.
  • The ‘what if’ should become the leading question of the software research. So you find yourself in the middle of Apple Watch testing. How will it act if the iPhone it is paired to runs out of battery, etc.?
  • If you can do anything in the system (meaning it allows you to), do so without question and despite everything telling you shan’t do just that.
  • If possible, get the system (or device) under test out of your working premises and try it in a real environment.” – Guideline for Successful Software Testing in 2017, A1QA; Twitter: @A1QA_testing

11. Don’t rely solely on written communication, particularly for virtual teams. “Especially in virtual teams, often the only point of interaction between developers and testers is the bug tracking system, yet it is especially the written word that causes misunderstandings and leads to pointless extra work. Regular calls and actually talking to each other can work miracles here.” – Andrea, Successful Software Testing – Communication is Everything, Xceptance; Twitter: @Xceptance

12. Develop “rules of thumb” – and document them. “As testers, we often use rules of thumb throughout a project. For example, we sometimes use a total number of expected defects during test planning and then, during test execution, compare actual defects per hour found versus what we would expect. Each of these rules of thumb aids us in managing the information we deal with as testers and QA managers.

“It would be nice (and useful) to have a collection of these rules of thumb in one place, each documented with examples.” – Ray Vizzone, Software Testing and Quality Assurance Rules of Thumb, got bugs?

13. Conduct code reviews. “Four eyes see more than two. That’s why you should let other developers review your source code on a regular basis. Pair programming, on the other hand, a technique where two developers write code together for longer periods, isn’t for everyone and is often not needed. But complicated, important or security related code greatly benefits from code reviews and will improve your code quality a lot.” – Dennis Gurock, 12 Practical Tips for Building Bug-Free Software, Gurock Quality Hub; Twitter: @gurock

"Rather than rely on traditional QA testing methods, Developers and development managers should also be able to quickly and easily manage the defects in their code – especially where code is complex." - Chris Adlard

14. Manage defects in code during development, particularly for complex code. “Rather than rely on traditional QA testing methods, developers and development managers should also be able to quickly and easily manage the defects in their code – especially where code is complex. This includes prioritizing defects based upon impact and filtering defect information to view only what’s relevant to them. Once the defects have been prioritized, developers should be able to automatically find all of the places the defect exists across projects and code branches – thus minimizing duplication of efforts. Then they should be able to collaborate with other developers to share triage information across distributed teams and geographic boundaries.” – Chris Adlard, Five Tips to Make Software Testing Easier, Database Trends and Applications; Twitter: @dbtrends

15. Report findings in the context of business value. “Focus on the data that is being communicated back to stakeholders, from your findings as part of testing – the data should be in context of ‘how’ the behavior observed is detrimental to the objective of the feature or application being developed.” – Mush Honda, 9 Steps to Becoming a Great QA Lead, KMS Technology; Twitter: @kmstechnology

16. Engage the end user. “Probably the most important person in the whole process, yet many times we may be tempted to keep them at arm’s length; you should involve the customer actively. Have them give frequent feedback on the product for future improvement and development; software developers who respond quickly to customer feedback are generally more successful.” – 5 Tips for Developing an Effective Software Quality Testing and Assurance Culture, Techno FAQ; Twitter: @Techno_FAQ

17. Always keep learning. “[The] IT field changes way [faster] than some of us would like.

“If you are not constantly updating your skills, you could become irrelevant, obsolete and outdated. In a world of lay-off paranoia, it is a good idea to rise above it all, gain immunity and feel secure. The best way to do so is to make learning a habit.” – Swati Seela, How Can Testers Embrace Learning and Keep the Spark Alive?, Testing Excellence; Twitter: @TestingExcel

18. Bug summaries must be thorough. “Most customers, including your managers, developers, and peers, will read the summary first when they review a bug. This is especially true when they have many bugs to review.

“The simple reason is that they don’t have enough time to go into details of every bug, so having a short and concise summary will surely help to grab an idea of what the problem is about and how important it is.

“You can have a short and concise summary by telling exactly what problem you found and in what condition.” – Thanh Huynh, 3 Simple Reasons Why Your Bug Report Sucks, LogiGEAR Magazine; Twitter: @logigear

19. Use Test Maturity Model integration. “The software industry does not operate in a zero‑defect environment, and, arguably, it never will. In the face of this truism, numerous techniques to reduce the number and severity of defects in software have been developed, with the ultimate, albeit unobtainable, goal of defect elimination. Such optimistic thinking has led to significant improvements in software quality over the past decade, notwithstanding increased software complexity and customer demands.

“One such defect elimination approach is maturity models. Broadly, these are structures which state where an organization sits on a maturity scale, where its failings lie and what should be done to improve the situation using process improvement frameworks. The archetypal maturity model is the Capability Maturity Model Integration (CMMI), in addition to its predecessor, the Capability Maturity Model (CMM).” – Dr. Mark Rice, Test process assessment: climbing the maturity ladder, Software Testing News; Twitter: @testmagazine

Test Prep Tips

20. Always start with a product map. “Early on in the project you should spend some time exploring the software, and try to model the features and requirements of the product. A graphical model (for example, a mind map) can provide a concise, easy-to-understand representation of the product, and the modeling process is likely to help you uncover features that you may not previously have been aware of.” – ChengVoon Tong, Top 3 tips for software testing a mature product, Redgate; Twitter: @redgate

"When testers start working on the project from the very beginning, they make sure that many errors are identified and eliminated even before the development phase." - The Merkle

21. Getting testers involved from the start means you can eliminate many errors even before reaching the development stage. “When testers start working on the project from the very beginning, they make sure that many errors are identified and eliminated even before the development phase. Writing test scripts, quality testers assist developers that can later use these scripts for making product creation easier. Thus, involving testers into work at the first stages of development has a range of advantages: helping the team to understand customers’ goals, saving a lot of time, minimizing expenses, and optimizing the approach to testing.” – Mark, Top 7 Tips for Choosing an Outsourcing Software Testing Team, The Merkle; Twitter: @themerklenews

22. Choose flexible test management tools that can adapt to your needs. “No two businesses are the same which might mean a particular tool is best-suited for a situation different to yours. Keeping this in mind, you should look for a test management tool which not only fits your day-to-day testing needs today but should also offer flexibility if your testing approach changes course in the future.” – Sanjay Zalavadia, 5 Most Important Features to Look for in Test Management Tools, Quick Software Testing; Twitter: @quickswtesting

23. Create sample test data if needed. “Depending on your testing environment, you may need to CREATE test data (most of the time) or at least identify suitable test data for your test cases (if the test data is already created).

“Typically test data is created in-sync with the test case it is intended to be used for.

“Test Data can be Generated –

  • Manually
  • Mass copy of data from production to testing environment
  • Mass copy of test data from legacy client systems
  • Automated Test Data Generation Tools

“Typically, sample data should be generated before you begin test execution, because performing test data management on the fly is difficult. In many testing environments, creating test data takes many pre-steps or test environment configurations, which are very time-consuming. Also, if test data generation is done while you are in the test execution phase, you may exceed your testing deadline.” – Tips and Tricks to Generate Test Data, Guru 99; Twitter: @guru99com
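
As a concrete illustration of the “automated test data generation” option, here is a small, hypothetical Java sketch that builds a reproducible batch of synthetic user records ahead of test execution; the record fields and the mix of valid and boundary values are illustrative choices, not tied to any particular tool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Generates a reproducible batch of synthetic user records before test
// execution begins. Fields and value ranges are invented for the example.
public class TestDataGenerator {
    private static final Random RANDOM = new Random(42); // fixed seed = reproducible data

    record TestUser(String username, String email, int age) {}

    static List<TestUser> generateUsers(int count) {
        List<TestUser> users = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            String name = "user" + i;
            // Sprinkle in boundary values so negative tests have data too.
            int age = (i % 10 == 0) ? 0 : 18 + RANDOM.nextInt(60);
            users.add(new TestUser(name, name + "@example.com", age));
        }
        return users;
    }

    public static void main(String[] args) {
        generateUsers(5).forEach(System.out::println);
    }
}
```

The fixed seed matters: it keeps the generated data stable across runs, so a failing test can be replayed with exactly the same inputs.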

24. Aim for stability. “Stability is always important; your tests must always run and spit out the correct results. What good is having a test suite if it’s giving you false positives and negatives?” – John Howard, Tips & Tricks for creating a great Automated Testing Suite, uTest; Twitter: @uTest 

25. Make sure developers have the test cases. “It is considered good practice for the tester to give his test cases to the developer to verify that all the important functionalities are developed properly before the application is released for further testing. It ensures that re-work will be minimal, since the most important parts of the application are taken care of by the developer himself.” – Software Testing Tips And Tricks For Testing Any Application, Software Testing Class

26. Follow a proven process for functional testing. “Functional testing reviews each aspect of a piece of software to make sure it works (aka functions) correctly. Quite simply, functional testing looks at what software is supposed to do and makes sure it actually does that. So while functional testing looks at an application’s ability to execute, non-functional testing looks at its overall performance (e.g. by testing scalability, reliability, security, and compatibility).

“When conducting functional tests, you typically need to follow a process that looks something like this:

  • Use test data to identify inputs
  • Determine what the expected outcome should be based on those inputs
  • Run the test cases with the proper inputs
  • Compare the expected results to the actual results

“Following this method, if the expected results and the actual results match, then you can conclude that the software functions properly and the test has passed. If they do not match (assuming you properly understand what the outcome should have been and used the correct input), then there is an issue with the software.” – Functional Testing Types – 25 Best Practices, Tips & More!, QA Symphony; Twitter: @QASymphony
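
A tiny JUnit 5 sketch shows that four-step loop in code. Both the PriceCalculator class and its discount rule are hypothetical stand-ins, included only so the example compiles:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The four-step functional testing loop expressed as a JUnit 5 test.
class PriceCalculatorFunctionalTest {

    @Test
    void discountIsAppliedToLargeOrders() {
        // 1. Use test data to identify inputs
        double orderTotal = 200.00;

        // 2. Determine the expected outcome based on those inputs
        double expected = 180.00; // assumed rule: 10% off orders over $100

        // 3. Run the test case with the proper inputs
        double actual = PriceCalculator.applyDiscount(orderTotal);

        // 4. Compare the expected results to the actual results
        assertEquals(expected, actual, 0.001);
    }
}

// Minimal stand-in so the sketch compiles; replace with the real code under test.
class PriceCalculator {
    static double applyDiscount(double total) {
        return total > 100 ? total * 0.9 : total;
    }
}
```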

"When you know how the data travels inside your application, you are better able to analyze the impact of component failures and security issues." - MSys Technologies

27. Understand the data flow. “When you know how the data travels inside your application, you are better able to analyze the impact of component failures and security issues. Hence, recognize how the data is used within the application early on in order to report bugs and defects faster.” – Eight Tips to Be More Effective in Agile Software Testing, MSys Technologies; Twitter: @MSys_Tech

"Focus your UI automation efforts on high-value features." - Telerik

28. Write your tests for the correct features to cut your maintenance costs. “What’s the easiest test to maintain? The one you didn’t write.

“UI automation is hard. It’s a difficult domain to work in, the tests are slower to write than other tests, they’re slower to run, and they’re harder to keep working. If you want to keep your maintenance costs as low as possible, carefully consider what you’re writing your UI tests for.

“Focus your UI automation efforts on high-value features. Talk with your stakeholders and product owners. Find out what keeps them awake at night. I’ll bet it’s not whether or not the right shade of gray is applied to your contact form. I’ll bet it’s whether customers’ orders are being billed correctly. I’ll bet it’s whether sensitive privacy or financial information is at risk of being exposed to inappropriate visitors to your site.

“Automate tests around critical business value cases, not around shiny look-and-feel aspects of your application.

“Write your tests for the correct features. It will cut your maintenance costs dramatically.” – 10 Tips on How to Dramatically Reduce Test Maintenance, Telerik; Twitter: @Telerik

29. Channel an attacker for security testing. “Try to get into the mindset of a potential attacker. Just as you try to emulate the end user when software testing, with security testing you want to emulate an attacker. It’s fair to assume that they’ll seek entry via the path of least resistance. Start with the most common methods and attack scenarios. But it’s important to remember that nothing’s off the table because an attacker will do anything that will get them the data they want.” – Simon Hill, 8 tips for security testing web applications, Crowdsourced Testing; Twitter: @crowdsourcingqa

30. For apps, including the device in your testing plan is imperative. “An application that comes packaged on a consumer-grade laptop or notebook for a police squad car will not withstand the rigors of high-speed chases and constant bangs and knocks. Part of the application testing strategy, if you are developing for situations like this, should include the testing of the robustness of the device itself in adverse operating conditions. If you fail to include the device in your test plan, the app might be great — but it might also crash at a critical moment if the end device fails.” – Mary Shacklett, 10 tips for testing apps for the real world, TechRepublic; Twitter: @TechRepublic

31. Test-writing methodologies and concepts must be grasped before using automated tools to “turn the crank.” “When I hear about these new testing tools, I generally view them as new methods to turn the crank in testing. Anything can execute a test plan, after all, there’s no fundamental reason why a human needs to run through a test plan versus a machine. Both are capable of turning the crank. There’s also no fundamental reason why a human needs to write a test plan either. Well, other than machine learning hasn’t gotten that good yet.

“The part many of these tools seem to leave out is how these tests get written. The methodologies and concepts need to be grasped before you start turning the crank. Extracting anything useful from automated testing requires a robust set of test cases. You need to have clear goals – launch the app, poke this set of buttons, get this result. This is true regardless of which method is used.” – Kirk Chambers, QA tips and tricks: Why a clear and robust test plan is essential, Possible Mobile; Twitter: @POSSIBLEmobile

32. Avoid cross-browser variation. “On a new project, you might be tempted to use a slew of emerging browser capabilities. You will often need to use feature detection so that visitors with older browsers get a usable fallback. This means that you need to test the same feature across browsers with different expectations of what is correct.

“It can be easy to get carried away using the latest tech, but a significant part of your audience may be using older, less capable browsers. Sometimes it is better to use a well-established approach for everybody.

“By using newer browser features with inconsistent support, you are deliberately introducing cross-browser variation. We know from the Browser Wars of the 1990s that this comes at a cost.

“Pick the technologies you use with care. Restrict your choice of newer features to those that will have the biggest net benefit to the user.” – Jim Newberry, 31 Ways to Spend Less Time on Manual Cross–Browser Testing, Tinned Fruit; Twitter: @froots101

33. Define entry and exit points. “Understand the software application being tested completely. Here, we take care of when and how the testing of a particular phase will start and end. This will help us decide which automation testing framework can be used for a particular testing stage, and how.” – Manish Verma, Top Software Testing Best Practices and Automation Strategy, Software Testing Mentor; Twitter: @swtmentor

34. Run a pilot project before full-scale automation testing tool adoption. “Generally, a pilot project is kick-started by preparing a business case describing the objectives of the project and outlining the methodology by which the project shall be carried out. A realistic time plan, along with metrics for determining success, is an essential part of the business case. For instance, the testing engineer may want to reduce the time to run the regression tests from a week to a day. Actually, applying the ‘don’t be overly optimistic’ rule, it may be better to set a target such as reducing the time for 20% of the tests by a factor of 50%. This may result in a five-day regression test taking four and a half days, but might be a much easier target to hit.

“The pilot project must not be either too short or too long, maybe from 2 to 3 months. Subsequent phases of the pilot project could extend this time beyond 3 months, but each phase should have measurable objectives. If the pilot stretches for a longer period without significant results, it will cast a shadow of doubt on the viability of the overall test automation effort. It is better to accrue smaller benefits in the beginning, maybe in bits and pieces, being less risky, rather than counting on much larger projected benefits later.” – Importance of doing a Pilot Project before Full Scale Automation Tool Roll Out, Software Testing Genius

"Considering Murphy’s Law, any missing feature in the emulator that CAN go wrong in the real environment, WILL go wrong and cause troubles." - Skelia

35. For testing mobile apps, emulators can be useful, but they can’t perfectly replicate a real-world operating system. “Some tech experts use emulators for application testing. And it is good because an emulator is a powerful tool that makes app testing easier and cheaper. However, emulators lack many features inherent only to real operating systems. Considering Murphy’s Law, any missing feature in the emulator that CAN go wrong in the real environment, WILL go wrong and cause troubles. So, before release, test an application using target OSs on physical mobile devices.

“Moreover, it is necessary to check how your application works on different versions of the target operating system. For example, if your application is intended to work on iOS 9, then try 9.0, 9.1, 9.2, etc.” – Mobile App Testing Tips & Tricks, Skelia; Twitter: @Skelia_company

36. Don’t underestimate the impact of maintenance overhead. “This one is particularly tricky. People often do not realize the cost of maintaining an automated testing infrastructure. If you are writing test scripts for a rapidly changing application, you should gather all the information that you need and then take some time to come up with an estimate for this overhead. Having a solid testing setup really makes the difference here: fixing broken tests is faster when you have clean, readable test scripts with little to no duplicated code. Following a pattern such as PageObject can help you build such a setup.” – Giovanni Rago, Top 5 Mistakes That Can Sabotage a Successful Test Automation Project, SauceLabs; Twitter: @saucelabs
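
For reference, here is a bare-bones sketch of the PageObject pattern mentioned above, written against Selenium WebDriver. The page, locators, and method names are hypothetical; the payoff is that when the UI changes, you fix one locator here instead of touching every test script:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A bare-bones PageObject: tests call loginAs() instead of hunting for
// elements themselves, so a UI change means updating one locator here
// rather than every test script.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }

    public boolean isErrorShown() {
        // findElements (plural) returns an empty list instead of throwing.
        return !driver.findElements(By.cssSelector(".login-error")).isEmpty();
    }
}
```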

37. Test the instructions. “If a test has too many twists and turns, you’re testing users, not the website. Ask someone else on your team to try the test (not just read the steps) and have them note any unclear or confusing instructions. Run a pilot test and see if you get the intended results.

“Save big picture questions for the summary, when test participants have completed all tasks and have a moment to reflect.

“Tip: Request the same test participants for later rounds. This allows you to test the experience of repeat visitors and find out whether you’ve addressed their feedback.” – 12 Tips for Top Test Results, User Testing; Twitter: @usertesting

Testing Considerations

38. Consider the complete, end-to-end user journey. “A user journey is a series of steps which represent a scenario in which a user might interact with a system. Typically, a user journey has a starting point, i.e. an entry point into the system, a series of transitions from one state to another and a set of triggers which cause the transitions.

“User journeys can help you identify customer behavior and how the users use the system or how they could potentially use the system.

“When we are creating user journeys, we need to be thinking about:

  • Context – Where is the user? What is around them? Are there any external factors which may be distracting them?
  • Progression – How does each step enable them to get to the next?
  • Devices – What device are they using? Are they a novice or expert? What features does the device have?
  • Functionality – What type of functionality are they expecting? Is it achievable?
  • Emotion – What is their emotional state in each step? Are they engaged, bored, annoyed?

“The important part here is that a User Journey is a ‘mental’ and ‘lived’ experience. The journey is deeply linked to ‘emotions’, and these emotions usually have a bearing on the user’s perception of quality.

“Whilst some of the above factors can be accounted for when writing automated tests, we certainly cannot know about users’ emotions; it is for this reason that you cannot automate a user journey.” – Amir Ghahrai, Can You Really Automate a User Journey?, Testing Excellence

39. User personas are the foundation of successful software testing. “Do you know the most important part of any existing User Story? The users that will potentially be behind it. The stories we are talking about must aim to describe how people will actually use your app. Therefore, appropriate stories should be designed from their perspective. User Stories should also capture precise and accurate pieces of information, like why and how a certain person would log in to the app. No more, no less.” – Complete Guide Through Supreme User Story Creation, TestFort QA Journal; Twitter: @Testfort_inc

40. Embrace exploratory testing. “We are all accustomed to reading books about projects that include complete specifications, iterations, test plans, and other benefits of a formal development process. But usually we just get pathetic hints of documentation in a real life setting. Sometimes a tester will hear a phrase like, ‘Hey, let’s test this!’ What should you do when such a shaky moment arrives at your doorstep?

“The answer is simple – you need to learn!

“There is one testing technique called ‘Exploratory Testing,’ which can actually turn out to be your life vest. The essence of this technique is testing while the project is being studied. All deeper analysis of the application’s functionality will help us understand what we need to check and how we can proceed. It also reveals all the weak sides of the application. Although many people are skeptical about this technique, even in projects where the tester has thoroughly documented his work, it can in many cases bring good results. After all, real people are not robots, and their actions are not scripted.” – Eugene Korobka, 6 important Software Testing Tips from our QA team, Rozdoum; Twitter: @rozdoum

41. Don’t skip load testing. “Why is load testing so important? The world is a big place, and even if your application is brand new and you’re still trying to grow your user base, chances are more than one person will try to use it at any given time. If you’re not able to handle those users and that traffic, your company and application aren’t able to put their best foot forward. Spotty behavior and flaky availability can give people the impression that your application isn’t polished and professional enough to meet their needs, potentially causing them to look for a solution elsewhere.

“Through load testing, and making improvements and changes based on the results of that load testing, you can be better prepared for your users and give them the best possible experience.” – Why You Should Be Load Testing Your Application, Test Talk; Twitter: @te52app

42. Triggering bugs requires a perfect storm, so some bugs will inevitably be discovered in the wild. “Sometimes triggering a bug takes the perfect storm of the right (or wrong?) web browser, browser version, OS, screen dimensions, device…because testing can never cover everything, it’s possible that you’ll never hit that specific bug-triggering combination. When that happens, a bug may slip through to production and stay hidden until a user discovers it ‘in the wild.'” – Cullyn Thomson, 4 Reasons Bugs Are Missed, QA Intelligence; Twitter: @CullynT

43. Write logical acceptance tests — and do it early. “During the release planning meeting, capture acceptance criteria and immediately add them as logical test cases linked to the product backlog item. This will help the team to understand the item and clarify the discussion. An even more important benefit of this tip is that it helps testers be involved and be important at the early stages of the software cycle.” – Clemens Reijnen, 5 Tips for Getting Software Testing Done in the Scrum Sprint, Methods & Tools; Twitter: @methodsandtools

44. Make sure you understand the risks. “When planning testing, whether it’s a long-term plan or a short one for a single test session, you’d better try to keep in mind the risks involved with the features under test. It helps you organize your time and efforts and gives quick feedback on the riskier parts that may compromise the functionality of the product.” – Belen Arancibia, 10 Tools and Tips for Better Testing, Belatrix; Twitter: @BelatrixSF

45. Test for usability. “Yes, we are testing the functionality, but basic usability issues could be easily caught and submitted without even applying the usability standards and special checks.

“For example, is the application logic too complicated? Are the help sections easy to understand? Can we confirm that tips and labels are marked well and easily seen given the application’s background color? These and many other questions could help to make the application more user-friendly.” – Tatyana Mahlaeva, Tips and tricks for mobile testing: A software tester’s roadmap, Mobile Marketer; Twitter: @MobileMktrDaily

46. Don’t cheat on performance tests. “In the real world users may spend anywhere from a few minutes on a typical website up to a few hours on a SaaS type web application. For example, perhaps the application you’re going to test needs to support 5,000 concurrent users that have an average visit length of 20 minutes. In the course of the peak hour the site is expected to serve 1 million page views.

“Rather than use 5,000 virtual users to generate the load over the course of an hour, you figure you’ll just use 500 virtual users and drop your session length down to two minutes… essentially cutting everything by a factor of ten. You will use fewer virtual users to generate the same number of page views. Many performance testers are doing this with no idea of how it actually translates into load on the downstream systems. Well, here is the bad news… at multiple points in the infrastructure this is going to result in, you guessed it, about ten times as much load as there should be.

“Picture a typical web architecture: a load balancer at the top, some web servers, some application servers, and a database cluster. There are typically firewalls in front of and in-between a few of the tiers. Here is a short list of some of the unnatural behaviors that can occur in the environment as a result of cheating the performance test cases:

  1. Too many connections to the firewalls
  2. Too many application server sessions
  3. TCP queues filling up on all servers
  4. Database connections piling up” – Stop Cheating in Your Performance Tests, Software Test Professionals; Twitter: @SoftwareTestPro
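
The factor-of-ten claim is easy to sanity-check with the quoted numbers (1 million page views per hour; 5,000 users at 20-minute sessions versus 500 users at 2-minute sessions). A back-of-the-envelope sketch:

```java
// Back-of-the-envelope math for the quoted scenario: total page views
// stay the same, but each virtual user becomes far more aggressive.
public class LoadCheatMath {
    public static void main(String[] args) {
        double pageViewsPerHour = 1_000_000;

        // Realistic test: 5,000 virtual users with 20-minute sessions.
        double realPagesPerUserPerSecond = pageViewsPerHour / 5_000 / 3_600;  // ~0.056

        // "Cheating" test: 500 virtual users with 2-minute sessions.
        double cheatPagesPerUserPerSecond = pageViewsPerHour / 500 / 3_600;   // ~0.556

        System.out.printf("Per-user request rate inflated by %.0fx%n",
                cheatPagesPerUserPerSecond / realPagesPerUserPerSecond);      // 10x
    }
}
```

Total page views stay constant, but each virtual user now fires requests ten times as fast, which is one of the ways the shortened sessions distort load on the downstream tiers.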

47. Adopt an Agile testing mindset. “Historically, the role of the software tester was mostly to sit within a team of testers, often producing large documents such as test strategy and test plans as well as detailed test scripts. This method of working also implied that the testers are generally abstracted from the whole software development process and only come in at the later stages when the software was already developed.

“Nowadays, testers within an agile context are required to be multi-skilled, technical, collaborative, and to have an agile mindset. Testers are under tremendous pressure to release applications more quickly, and companies are pushing testers to make changes in their mindset, from skillsets to coding, to understanding how the business functions and dealing with customers. Testers must evolve.” – Amir Ghahrai, Traditional Tester vs Agile Tester – What are the Differences?, Testing Excellence

48. Emphasize code quality. “Quality is not a universal value. It is defined by standards, specifications, numbers, factors and different parameters. Hence, when a company wants to develop a high-quality software system, it considers a great number of aspects. Code quality takes one of the leading positions in the list.

“Software analysis experts agree that code quality has seen remarkable growth in attention and demand these days. They confirm that continuous development of a software system makes the source code significantly more complicated after numerous updates. Therefore, the team has to analyse the code on a continual basis to keep the code base in a good, maintainable state. This will prevent hidden technical debt, system crashes and expensive fixes.” – Sergey Terekhov, Defining and tracking the code quality, Software Testing News; Twitter: @testmagazine

49. Utilize smoke testing. “Smoke Tests are a kind of basic, non-extensive software testing practice, where you put the code developed so far through fundamental, ‘happy path’ use cases to see if the system breaks.

“If it does, you go back to fix the system because it is in no way, shape or form ready for more extensive and scientific testing. And if it doesn’t, you know you’re on track and that the fundamental features the system is designed to provide, work.

“That’s smoke testing in a nutshell for you, my friend. It’s as simple as – at any given point in time – putting the product built thus far through a rudimentary series of happy path tests, to help bring out the simple yet critical bugs.” – Ulf Eriksson, 11 Quick Tips to Master Smoke Testing, ReQtest; Twitter: @ReQtester
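
A smoke suite does not need to be clever. Here is a hedged JUnit 5 sketch of what those rudimentary happy-path tests might look like; the AppUnderTest facade and its methods are hypothetical stand-ins for real calls into your system:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// A handful of shallow, happy-path checks that gate deeper testing.
class SmokeTest {

    @Test
    void applicationStarts() {
        assertTrue(AppUnderTest.start(), "App should boot without errors");
    }

    @Test
    void homePageLoads() {
        assertTrue(AppUnderTest.httpGet("/").contains("Welcome"));
    }

    @Test
    void userCanLogIn() {
        assertTrue(AppUnderTest.login("demo", "demo123"));
    }
}

// Stand-in so the sketch compiles; replace with real calls into your system.
class AppUnderTest {
    static boolean start() { return true; }
    static String httpGet(String path) { return "Welcome"; }
    static boolean login(String user, String pass) { return true; }
}
```

If any of these fail, the build goes back to the developers before anyone spends time on deeper, more expensive testing.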

50. Run an agile beta test. “The single most defining feature of agile beta tests is the very short period of time available for the beta testing phase. Companies that subscribe to the agile method release early and release often, so you need to gather feedback from users quickly in order to keep everything on track.” – Jon Perino, Tips for Tackling Agile Beta Testing, Centercode; Twitter: @Centercode

51. Regression testing is a crucial step. “Regression testing involves testing the entire application (or at least the critical features) to ensure that new features or bug fixes haven’t inadvertently caused bugs to appear in other areas of the application.

“Because of its scope, regression testing is typically a process that involves automated tests, or at least some level of scripted manual tests to ensure that the key components of the application are tested.” – Mike Sparks, Software Testing for Hidden Bugs, Test Talk; Twitter: @te52app, @mdpsparks

"The earlier you start the tests, the better results you get." - Oksana Levkovskaya

52. Apply tests during the requirements analysis phase for better results. “First of all, the software testing process is based on the software development process. The software development life cycle (SDLC) includes the following steps:

  1. Requirements analysis
  2. Design process
  3. Development
  4. Testing process and debugging
  5. Operation and maintenance

“As shown in the list above, the required tests are performed in the fourth stage of the life cycle. But usually, if the main goal is to get high-quality software and minimize the cost of bug-fixing, we can apply tests as early as the requirements analysis phase. The earlier you start the tests, the better results you get.” – Oksana Levkovskaya, Software Testing Life Cycle (STLC): Benefits and Major Testing Steps, XB Software; Twitter: @xbsoftware

53. Ensure maximum test coverage. “Breaking your Application Under Test (AUT) into smaller functional modules will help you achieve maximum test coverage. If possible, break these modules into even smaller parts. Here is an example of how to do so.

“E.g.: Let’s assume you have divided your website application into modules, and accepting user information is one of those modules. You can break this user information screen into smaller parts for writing test cases: parts like UI testing, security testing, and functional testing of the user information form. Apply all form field type and size tests, plus negative and validation tests, on input fields, and write all such test cases for maximum coverage.” – SiliconIndia, 20 Top Practical Testing Tips A Tester Should Know, SiliconIndia QA City; Twitter: @SiliconIndia
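
Here is one way that “field type and size” breakdown could look in practice, as a JUnit 5 parameterized test. The username rule (3-20 alphanumeric characters) and the validator are invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Field type and size coverage for a single form field.
class UsernameFieldTest {

    // Stand-in for the form's server-side validation rule.
    static boolean isValidUsername(String value) {
        return value != null && value.matches("[A-Za-z0-9]{3,20}");
    }

    @ParameterizedTest
    @CsvSource({
        "bob, true",                    // minimal valid length
        "ab, false",                    // too short (negative test)
        "user_name, false",             // illegal character
        "abcdefghijklmnopqrst, true",   // exactly 20 characters (boundary)
        "abcdefghijklmnopqrstu, false"  // 21 characters, over the limit
    })
    void usernameValidation(String input, boolean expected) {
        assertEquals(expected, isValidUsername(input));
    }
}
```

One test method now covers the happy-path, negative, and boundary cases for the field, and adding another case is a one-line change.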

54. Do you need to test your API? “Testing an API is like testing any other interface into the software. You have to make sure that it is bug-free before shipping.

“It resembles testing at the UI level, but instead of just using data input and output, an API tester makes calls to the API, receives the output, and records the actual result as opposed to the expected one. You can perform it using special test solutions (for instance, Postman) or, as API testers frequently have to do, by writing API test code.

“The purpose of API test code is to issue a request to the API and note down the expected result, the actual result, and the time within which the response was delivered.” – API Testing – What? Why? How?, TestFort QA Journal; Twitter: @Testfort_inc
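
As a sketch of that idea, the JDK’s built-in HttpClient (Java 11+) is enough to issue a request, compare expected against actual, and capture the response time. The endpoint URL and expected status below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Issue a request, compare expected vs. actual, and record response time.
public class ApiSmokeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/status")) // placeholder
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        int expectedStatus = 200;
        System.out.println("Expected status: " + expectedStatus
                + ", actual: " + response.statusCode());
        System.out.println("Response time:   " + elapsedMs + " ms");
        System.out.println("Body: " + response.body());
    }
}
```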

55. Find difficult bugs by experimenting with unusual behaviors. “After completing all the planned test cases there is a need to allocate some time to test the system functionality randomly, trying to create some unusual situations or behaviors.” – Natalia Vasylyna, Tips and Tricks to Detect “Difficult” Bugs, QA TestLab; Twitter: @QATestLab

56. Web services apps can be tested in isolated components. “More websites are being built using web services. These provide an opportunity for testers to test the web application in isolated components rather than as a full-blown integrated web application.

The benefits of testing web services in isolation are:

  • No browser involved – we can directly communicate with a web service as long as we know its end-point and what parameters to send.
  • Much faster – as we are targeting an isolated web service, there are no images, JavaScript, or CSS to load, so the response is much quicker.
  • Easier debugging – when testing a web service, if we encounter an issue, it is much easier to locate the cause, and debugging becomes less of a pain.
  • More control – we have direct control over the request we submit to the web service, so we can use all sorts of data for negative testing of web services.” – Amir Ghahrai, Web Testing Tips – How to Test Web Applications, Testing Excellence; Twitter: @TestingExcel

57. If a test (case) can be specified as a rule – it MUST be automated. “Automation code is software – thus, it is obviously built on some kind of specification. Most GUI automation (QTP, Selenium) is typically built based on so-called ‘test cases’ written in human language (say, English). It is the first question an automation guy will ask when starting automation – ‘where are the test cases?’ In the dev world, automation takes on a different meaning. In TDD-style automation (if you call TDD tests automation), the test is itself the specification. A product requirement is expressed as a failing test to start with. The BDD approach takes this to the other extreme: specify tests in the form of expected behavior. So, automated tests are based on a specification that is in human language but expressed (mainly) in business terms and with a fixed format (Given-When-Then).” – Shrini Kulkarni, Two important lessons for success of Test Automation, Thinking Tester; Twitter: @shrinkik
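
Plain JUnit can already carry the Given-When-Then shape, even without a BDD framework like Cucumber. In this sketch the ShoppingCart class is a hypothetical stand-in, included so the example compiles:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

// The Given-When-Then format in plain JUnit 5.
class ShoppingCartTest {

    @Test
    void removingAnItemUpdatesTheTotal() {
        // Given a cart containing two items
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 15_00);
        cart.add("pen", 2_50);

        // When one item is removed
        cart.remove("pen");

        // Then the total reflects only the remaining item
        assertEquals(15_00, cart.totalCents());
    }
}

// Minimal stand-in implementation so the sketch compiles.
class ShoppingCart {
    private final Map<String, Integer> items = new HashMap<>();
    void add(String name, int priceCents) { items.put(name, priceCents); }
    void remove(String name) { items.remove(name); }
    int totalCents() { return items.values().stream().mapToInt(Integer::intValue).sum(); }
}
```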

"When it comes to exploratory and interface testing, humans still beat machines by a long shot." - Ashley Dotterweich

58. Unscripted exploratory tests are one of the best ways for testers to examine usability. “When it comes to exploratory and interface testing, humans still beat machines by a long shot. While we’re making big strides with machine learning, having a human tester poke around a product to see what they discover is still one of the best ways to truly test the quality of a piece of software. After all, users are real people, so why not test with real people, too?

“These unscripted exploratory tests can mean the difference between shipping a product that should work fine, and a product that actually works. Usability can be a serious roadblock to adoption, and testing a feature for acceptance is a critical aspect of QA. Manual testing is critical because it helps you test the product from the perspective of a user, making sure that by the time it hits your customers, it’s ready for them.” – Ashley Dotterweich, Is manual QA a poor use of time?, Rainforest QA Blog; Twitter: @rainforestqa

59. Recognize that automation can have errors. “Like any piece of code, your automation will contain errors (and fail). An error-filled automation script may be misinterpreted as failed functionality in your tested application, or (even worse) your automation script will interpret an error as correct functionality. Manually testing your core, critical-path functionality ensures that your test case is passing from a user perspective, with no room for misinterpretation.” – 8 Reasons Why Manual Testing is Still EXTREMELY Important, 3Qi Labs; Twitter: @3qilabs

60. Each function added to a model is a target for tests. “In particular, I like to test the complete behaviour of an action from the user’s point of view. This is what BDD preaches and it is the process on which I rely for creating tests. With this principle I’m not going to test the specific function which activates a button, I’m going to test the final state of the app after the user presses the button.” – Fernando González, Some Helpful Tips & Tricks for iOS Testing, LateralView, Medium; Twitter: @lateralview

61. Unit test every time you need to minimize risk. “Unit test your product every time you need to minimise the risk and possibility of future problems. Unit testing is best utilised to smooth out the rougher edges of software development and is relatively cheap to perform when compared with, for example, the cost of delivering a broken build for user acceptance testing. Unit tests will help to identify problems during the early stages of the development cycle, before they reach the customer and the testing team. When problems are uncovered during code design and implementation, they’re likely to be fixed faster and at less cost. Each completed unit test brings you closer to a more robust and reliable system.” – Andrew Smith, 10 Unit Testing Tips You Should Follow In Every Language, Ministry of Testing; Twitter: @ministryoftest

"In Integration testing we check if the data created in one module is reflected or transferred or shown in other respective modules." - OnlineQA.com

62. After functional testing, conduct integration testing. “Checking the data flow between the modules or interfaces is known as Integration testing.

“In Integration testing we check if the data created in one module is reflected or transferred or shown in other respective modules.” – Types of Integration Testing in Software Testing, OnlineQA.com

63. An automated vulnerability scanner can be helpful for streamlining security testing. “A good commercial option is Burp Scanner; there are also free options such as OWASP’s ZAP and Google’s RatProxy. These work by routing the HTTP traffic to and from an application through a proxy, and then resending the requests with various attack attempts replacing the original values. This can be an effective way of finding certain classes of vulnerability in a short amount of time, but it is important to understand (and make sure that your stakeholders understand) that this is not a magic bullet. The tool is naive, and has no knowledge of the application’s business logic – it is simply replaying requests and checking the responses. There are many types of vulnerability that cannot and will not be found with this strategy, and use of a scanning tool absolutely does not replace the need for manual security testing.

“Automated tools, even expensive ones, find only relatively simple vulnerabilities and they usually come up with a lot of ‘noise,’ or false positives. You need to know enough about security vulnerabilities to be able to evaluate each finding of the automated tool. Taking a scanner report and sending it unverified to the developers is the worst possible thing one could do.” – Mark Hrynczak, 13 Steps to Learn & Perfect Security Testing in your Org, Atlassian Blog; Twitter: @Atlassian

64. Address signup and login issues. “This may seem like a no-brainer, but if users cannot easily access your application, your efforts will have been wasted. If your app or mobile site requires password and username (not recommended), pay close attention to the fields and make sure that it’s easy for users to enter their information.” – The Essential Guide to Mobile App Testing, uTest; Twitter: @uTest

65. If you have a stand-alone mobile app or a mobile app that complements a desktop app, consider how different connections will impact performance. “A desktop is immobile. It sits in one area and stays there, more or less, for the life of its use. Because it is connected by wire, the connection is stable and usually fast. A mobile device is, well, mobile. The user is constantly moving from place to place, and from one coverage area to another. You have to make sure that different local connections won’t affect the performance of your mobile application.” – Steven Machtelinckx, The Testing Challenges You Face When Your App Goes Mobile, TestingMinded

Improving Testing Efficiency

66. It’s like they always say: if you fail to plan, you plan to fail. Or in this case, you plan to be inefficient. “It is necessary to have a test plan written by an experienced person, like a QA lead or manager. While creating a test plan, follow an organized approach to make it a good test plan. A good test plan must cover the scope of testing, test objectives, budget limitations, deadlines, test execution schedule, risk identification and more.” – 15 Tips on How to make your software testing more efficient, Software Testing Class

67. Snoop on your competition to discover common mistakes. “When planning your testing activities, look at the competition for inspiration – the cheapest mistakes to fix are the ones already made by other people. Although it might seem logical that people won’t openly disclose information about their mistakes, it’s actually quite easy to get this data if you know where to look.

“Teams working in regulated industries typically have to submit detailed reports on problems caught by users in the field. Such reports are kept by the regulators and can typically be accessed in their archives. Past regulatory reports are a priceless treasure trove of information on what typically goes wrong, especially because of the huge financial and reputation impact of incidents that are escalated to such a level.

“For teams that do not work in regulated environments, similar sources of data could be news websites or even social media networks. Users today are quite vocal when they encounter problems, and a quick search for competing products on Facebook or Twitter might uncover quite a few interesting testing ideas.

“Lastly, most companies today operate free online support forums for their customers. If your competitors have a publicly available bug tracking system or a discussion forum for customers, sign up and monitor it. Look for categories of problems that people typically inquire about and try to translate them to your product, to get more testing ideas.” – Gojko Adzic, To improve testing, snoop on the competition, Gojko.net; Twitter: @gojkoadzic

"Rather than have a lot of half-baked test cases, you should write fewer, but more effective ones." - Aditi Consulting

68. Instead of writing a multitude of test cases, focus on writing better ones. “It’s really tempting to have a lot of test cases running when you’re trying to identify the bugs in your programming. But rather than have a lot of half-baked test cases, you should write fewer, but more effective ones.

“Read the requirements of the software, break these tests into sets and subsets, look at similar test cases, and practice, practice, practice.

“You’ll be writing better test cases in no time.” – 20 Brilliant Software Testing Hacks for Software Testers, Aditi Consulting; Twitter: @TopTechStaffing

"High Priority Bugs should be prioritized on testing." - Kevin Clay Badilla

69. Focus on high-priority bugs. “High-priority bugs should be prioritized in testing. These bugs have a greater impact on the system, and usually they take up more time in terms of testing, mainly due to the complexity of the bug or the level of significance it has for end-users.” – Kevin Clay Badilla, Tips for Effective Software Testing, Ideyatech; Twitter: @ideyatech

70. Conducting user testing? Make sure you have the “right” users. “Suppose, you are recruiting users for testing a ‘yet to be released’ mobile yoga app that caters to Ashtanga Yoga aspirants. There are several formats of yoga in the market, especially in the western world. Hence, it is important to note that many Ashtanga Yoga practitioners believe that theirs is the most authentic form of yoga ever. Which users from this large community should we consider for user testing of this particular yoga app? Who do we recruit? How do we recruit? On what basis?

“Identifying the right kind of users is a challenging task. Many organizations follow the ‘hallway testing’ approach, where users are randomly chosen as though they were walking in the hallway. These users may not be the best possible sample given diversity factors like geographies, culture, age group, profession, tech-savvy-ness and so forth. It is always good to know who the users are and what their key characteristics are. Without this information, we might just react like horses with blinkers on.

“In the above-mentioned context, consumers of this app are yoga practitioners, teachers, students and the general public. These people may or may not be the users we are looking for. A few of them may not even know how to use a mobile app. Some might be extremely tech-savvy and represent a fairly good sample. Recruiting users depends on asking the right questions depending on the context of the product. The user testing team can design a ‘User Recruitment Questionnaire’ that helps to screen users and shortlist the most suitable candidates.” – Parimala Hariprasad, Recruiting Users for User Testing, An Aspiring UX Alchemist; Twitter: @CuriousTester

71. Do you need independent testing staff? “While all projects will benefit from testing, some projects may not require independent test staff to succeed.

“Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed.

“In some cases, an IT organization may be too small or new to have a testing staff even if the situation calls for it. In these circumstances, it may be appropriate to instead use contractors or outsourcing, or adjust the project management and development approach (by switching to more senior developers and test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or having programmers do post-development functional testing of their own work, a decidedly high-risk gamble.

“For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization’s ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of ‘what are the technical issues in making this functionality work?’. A test engineer typically has the perspective of ‘what might go wrong with this functionality, and how can we ensure it meets expectations?’. A technical person who can be highly effective in approaching tasks from both of those perspectives is rare, which is why, sooner or later, organizations bring in test specialists.” – Software QA and Testing Frequently-Asked-Questions, Part 1, SoftwareQATest.com

72. Automated testing can save both time and money. “It’s been found time and again that the automated method of testing software is much more effective and efficient, and even in the short term a cheaper choice than setting humans in front of computers. With automated testing, every possible input and usage combination is tested out, multiple times, and in multiple environments (operating systems, operating system versions, and computer hardware). By taking the extra time to automate testing in this manner, developers and testers alike can be assured that any bugs found will allow for solutions that make the software compatible for all end users, regardless of what type of computer and operating system they use. Adaptive diagnostic reasoning and the other components that make up automated testing software solutions are cost-effective and efficient, and you’ll want to utilize them prior to releasing your software to the general public.” – Software Testing Tips for your small/big Business, Sky Tech Geek; Twitter: @skytechgeek

73. Prioritize automation based on the tests that will need to be run most often. “When choosing tests to automate, prioritize tests that will need to be run many times during the project. Some common candidates for automation are:

  • Smoke and Regression tests: These tests verify the general functionality of the software. They may include performing simple actions such as adding, modifying, and deleting data.
  • New Features/Functionality tests: When possible, automate new features/functionality once they have passed initial testing. Add these tests to the regression set so they can be run after each project build or when there is a release to QA.

“By letting automation handle these basic functionality tests, you’ll save the most time and effort.” – Yolanda Hyman, 7 Automated QA Testing Tips for the Manual QA Tester, Atlantic BT; Twitter: @atlanticbt

"The more repetitive the execution is, the better candidate a test is for automation testing." - Joe Colantonio

74. You should also look at tests with repeatable execution as candidates for automation. “You shouldn’t try to automate everything. In fact, not everything is automatable. When planning what test cases to automate here are some things to look for:

  • Tests that are deterministic
  • Tests that don’t need human interaction
  • Tests that need to run more than once
  • Any manual process that will save engineers time (not necessarily an official “testing” process)
  • Tests that focus on the money areas of your application
  • Tests that focus on the risk areas of your application
  • Unit tests
  • Tests that need to run against different data sets
  • Tests that are hard to run manually
  • Tests that focus on critical paths of your application
  • Tests that need to run against multiple builds and browsers
  • Tests used for load/stress testing

“The more repetitive the execution is, the better candidate a test is for automation testing. However, every situation is different.” – Joe Colantonio, Automation Testing Resources & Best Practices, Joe Colantonio; Twitter: @jcolantonio

75. Divide and conquer. “There are almost no real complex tasks, as long as you are willing to look for ways to break them into smaller and simpler components.

“Many times I meet QA managers who explain to me how they manage their tests using a small number of (very long) Excel sheets or wiki pages. When I ask them why they work this way, they explain that they started with small documents that grew over time…

“One of the first pieces of advice I give these managers is to divide and conquer.  By breaking down their very long and complex testing procedures into smaller and more modular test cases, they can gain flexibility and achieve faster and more accurate coverage.

“But this advice is not only good for the size of test cases. If you take any testing task and break it down into smaller testing tasks, you will be able to manage your team more efficiently and provide better visibility to your internal customers.” – Joel Montvelisky, 5 simple tips to keep testing simple, PractiTest; Twitter: @PractiTest

76. Do you know who’s using your app? “There are several ways testers can find out who is using an application and how. One approach that is becoming more commonplace is analyzing the application’s production logs.

“Logs are lists of lines of text output from an application in a given environment, such as a test server or production server. They can be helpful for testing purposes because they provide real feedback and insight into an application as it’s being used, as well as information that describes or can even help solve bugs.

“Each line of a log corresponds to some event or occurrence in the application. A log line could be informational (‘A user successfully logged in at 1:00 PM EST’), a warning (‘The current number of users is 90 percent of the total allowed concurrent users’), or an error (‘A valid user login failed unexpectedly’). Log entries can be output from the application itself (‘The number of logged-in users at a given time has reached a hard-coded limit’) or from the application’s environment or the system running the application (‘The server has run out of memory and cannot allow any more users to log in’). Most logging systems provide a timestamp for each log entry, often to the millisecond, and each log entry follows some standard format. This can provide useful insight into the question ‘Who’s using this application?’” – Josh Grant, Who’s Using Your App? Examine Logs for Testing Insight, StickyMinds; Twitter: @StickyMinds

77. Clear your browser cache. “When testing the application, it’s always better to clear the cookies/cache of the browser unless it needs to be there while testing.” – Mohd Azeem, Tips for Finding and Filing Issues When QA Testing, 3 Pillar Global; Twitter: @3PillarGlobal

"Open betas don’t work." - Joel on Software

78. If you’re running a beta test, avoid an open beta. “Open betas don’t work. You either get too many testers (think Netscape) in which case you can’t get good data from the testers, or too few reports from the existing testers.” – Joel Spolsky, Top Twelve Tips for Running a Beta Test, Joel on Software; Twitter: @spolsky

79. Use dedicated testers. “As with any type of software, bugs and defects can result in frustrated users who may choose to stop using the software. Complicating matters is the fact that collaborative research frequently results in users that are geographically distributed; yelling over a cubicle wall or going to the next office to discuss a bug may no longer be an option. In the worst case, subtle bugs in simulation or data processing components could ultimately lead to the recall of research results. No one wants that!

“Many mature corporate software development departments include dedicated test groups. These groups are typically involved in integration, performance, usability, and system-level testing. Given that developers should certainly test their own code at the functional/feature level and have ultimate responsibility for the quality of the code they create, having test engineers finding bugs that should have been caught at the development level is very expensive.” – Scott Henwood, 3 Tips to Help Your Team Build the Best Software for Scientific Research, CANARIE; Twitter: @CANARIE_Inc

80. Remember the Law of Demeter. “The Law of Demeter applies the principle of the least knowledge of software to promote loose coupling between units – which is always a design goal when developing software.

“The Law of Demeter can be stated as a series of rules:

  • within a method, an instance of a class can invoke other methods of the class;
  • within a method, an instance can query its own data, but not the data’s data;
  • when a method takes parameters, the first level methods can be called on the parameters;
  • when a method instantiates local variables, the instance of the class can invoke methods on these local variables;
  • don’t invoke methods on global objects.” – David Salter, Top Testing Tips for Discriminating Java Developers, Zero Turnaround; Twitter: @zeroturnaround
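
To make these rules concrete, here is a minimal, hypothetical Java sketch (the Customer and Wallet classes are illustrative): the chained call reaches through an object’s internals and violates the law, while the second version asks the object to do the work itself.

class Wallet {
    private int balance;

    Wallet(int balance) { this.balance = balance; }

    void deduct(int amount) { balance -= amount; }
}

class Customer {
    private final Wallet wallet = new Wallet(100);

    // Exposing the wallet invites Law of Demeter violations:
    // callers end up coupled to Wallet's API as well as Customer's
    Wallet getWallet() { return wallet; }

    // Demeter-friendly: the Customer operates on its own data,
    // so callers only talk to their immediate collaborator
    void pay(int amount) { wallet.deduct(amount); }
}

public class DemeterDemo {
    public static void main(String[] args) {
        Customer customer = new Customer();
        customer.getWallet().deduct(25); // reaches two levels deep and violates the law
        customer.pay(25);                // respects the law
    }
}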

81. Test functionality first, user experience second. “The core functionality is the main draw for any app and it has to be rock solid. People seek out apps to perform specific functions. Incomplete or inadequate functionality will result in abandonment, so make sure that the main functions are fully implemented and tested before you move on.” – Vu Pham, Top 10 Tips for Testing Mobile Apps, Developer.com; Twitter: @DeveloperCom

82. Exploratory testing has its place, but it also has some cons. “In exploratory testing, testers may interact with the application in whatever way they want and use the information the application provides to react, change course, and generally explore the application’s functionality without restraint. It may seem ad hoc to some, but in the hands of a skilled and experienced exploratory tester, this technique can prove powerful. Advocates argue that exploratory testing allows the full power of the human brain to be brought to bear on finding bugs and verifying functionality without preconceived restrictions.

“The drawback to exploratory testing is that testers risk wasting a great deal of time wandering around an application looking for things to test and trying to find bugs. The lack of preparation, structure, and guidance can lead to many unproductive hours and retesting the same functionality over and over. One can easily see that completely ad hoc testing is clearly not the best way to go about testing. Testers who learn about inputs, software environments, and the other things that can be varied during a test pass will be better equipped to explore their application with purpose and intent. This knowledge will help them test better and smarter and maximize their chances of uncovering serious design and implementation flaws.” – Exploratory Software Testing, Microsoft Developer Network; Twitter: @Microsoft

83. Use Black Box testing when valuable or necessary. “Black box testing techniques, also known as a type of behavioral testing, offer development teams the opportunity to examine software without necessitating a deep understanding of the code used to build it. The style of testing looks at the inputs and outputs of the software under test but does not examine the internal workings of the product. The code itself is treated as if it were hidden under a black box.

“By separating the user and developer perspectives, black box testing allows testers to more efficiently validate large bodies of code without a deep understanding of how it was built.” – Jeni Kyuchukova, 8 Black Box Testing Techniques to Boost QA Success Rates, MentorMate; Twitter: @MentorMate

84. Testing in production is important. “When you mention ‘testing in production‘ you might recall the days when developers snuck releases past the QA team in hopes of keeping the application up to date, but in reality it only caused a buggy mess. And users were the ones who suffered. For this reason, most businesses avoid testing in production altogether because it’s too risky for the end user.

“But there are problems with not testing in production, too. Test environments are rarely built out to the same level as production environments, so they can never really achieve the scale that you’d see in ‘real-life.’ Plus, testing environments can easily get stale and out-of-date – and as a result you aren’t testing what you ought to be.” – Tim Hinds, Don’t Do It the Wrong Way: Tips for Testing in Production, Neotys; Twitter: @Neotys

85. A testing brain is invaluable in DevOps. “Testing maturity is a key differentiator for the success of DevOps. Organizations may be able to automate their integrations, builds, and delivery processes, but they still struggle to manage test orchestration and automation. Testing brains play a critical role in achieving this, with their expertise in test design, test automation and test case development within DevOps. Irrespective of what DevOps processes, models and tools organizations use, testing is a vital part of the overall DevOps process — not only to ensure code changes work as expected and integrate well, but to ensure the requirement changes do not break the functionality.” – How DevOps Transformed Software Testing, Cigniti; Twitter: @cigniti

86. Ask the right questions. “Ask the right questions. Don’t just ask for the sake of asking. Try to understand the context and dependencies, then ask the questions that will give you deeper insights, help you understand, and enable you to build the right test cases.” – Peter Spitzer, as quoted by Chelsea Frischknecht, Think Like Your Grandma: Testing Tips from Peter Spitzer, 2013 Test Engineer of the Year, Tricentis; Twitter: @Tricentis

"You can use Pair testing to your advantage to generate test ideas that seem to have dried up when you try alone." - Debasis Pradhan

87. Avoid test traps such as running out of test ideas. “This is by far the most common problem that a tester can run into while on a project. How many times have you been in a situation where you didn’t know what else to test and how? I call this phenomenon ‘tester’s block syndrome’ [a condition, associated with testing as a profession, in which a tester may lose the ability to find new bugs and defects in the software that (s)he is testing]. If you’re curious, which you should be (if you are or aim to become a good tester), then you can read more about it in the article titled The Se7en Deadly Sins in ‘Software Testing’ that I wrote a while back.

“How to overcome this trap?

“Pair Testing: You can use pair testing to your advantage to generate test ideas that seem to have dried up when you try alone. Pair testing is nothing but a testing technique where two testers work in a pair to test the software under test.
“BCA (Brute Cause Analysis): Testers can employ this unique brainstorming technique, in which one tester thinks about a bug and the other tester thinks of all possible functions and areas where this bug can manifest.
“Think ‘Out of the Box’: Instead of thinking only about the feature/function/application in front of you, try thinking in opposite directions. Take a step back and reassess the situation. Have you been trying to run functionality tests when you ran out of ideas? How about performance, load and stress tests? How about tests involving data, structures, platforms, browsers, devices, operations?” – Debasis Pradhan, Top 5 Software Testing Traps and How to Overcome Them, Software Testing Tricks; Twitter: @debasispradham

88. Random data generation libraries can be useful. “If you have developed automation code, you might have experienced difficulty while creating test data. One can use hard-coded data or randomize the data generation. Hard-coded data is usually a bad choice because of uniqueness problems, which is why random data generation might be a better fit.” – Canberk Akduygu, Test Data Generation Libraries, SW Test Academy; Twitter: @swtestacademy
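
As one illustration, a library like java-faker can generate varied, realistic values. This is a minimal sketch assuming the com.github.javafaker dependency is on the classpath; the TestDataFactory class itself is hypothetical:

import com.github.javafaker.Faker;

public class TestDataFactory {

    private static final Faker faker = new Faker();

    // Varied values sidestep the uniqueness problems of hard-coded test data
    public static String randomFullName() {
        return faker.name().fullName();
    }

    public static String randomEmail() {
        return faker.internet().emailAddress();
    }
}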

89. Use “good enough” testing as early as possible. “What’s the best load testing strategy—to invest in a realistic test or take a quick-and-dirty approach?

“Many testers aim for realism, but setting up a realistic simulation can take a lot of time and effort. This can significantly delay testing, leading to serious risks. As Kent Beck and Cynthia Andres observe in Extreme Programming Explained, catching problems early costs less than fixing them at the end of the development lifecycle.

“The other option is to use ‘good enough’ testing as early as possible. I would argue that in many cases, this approach produces better outcomes. We can spend 20 percent of our typical effort on the test configuration and still learn 80 percent of what we want to know—and we can find problems when they’re still cheap and easy to fix.” – Ragnar Lonn, When should I start load testing?, TechBeacon; Twitter: @TechBeaconCOM

90. Assign severity to defects. “Severity can be defined as how severe the defect is to the system and how badly it will affect the functionality. For example, an application crash on clicking a button is severe to the system. So its severity will be high. Whereas a spelling/grammatical error will not have much impact on the overall functionality. So its severity will be low.

“Levels:

“Although this varies from company to company, there are 4 levels of severity.

  • Showstopper: A defect of this severity blocks testers from testing further. Hence the name showstopper. An example of a showstopper defect: A mobile app crashing on the splash screen.
  • Severe: A defect of this severity breaks the important feature. However, a tester can test other features fine. Let’s understand this using a defect found in user registration scenario. Although a user is successfully registered in the system, web page crashes on clicking Submit button, and registration confirmation mail is not sent. Due to this defect, a tester would probably be able to test other features such as Login and Profile fine. But since the registration is broken, this defect would be severe to the system.
  • Moderate: A defect due to which application behavior deviates from what is expected but the system as a whole is usable. For example, a validation failure for any important text field.
  • Minor: A defect of this severity doesn’t impact the functionality much. Nonetheless, it should be fixed. Some examples: spelling/grammatical errors, UI alignment issues.

91. Leverage user experience elements to improve software testing. “Tests based on requirements still fail user expectations because requirements describe system specifications whereas user expectations are best represented through user-centric design artifacts. Exploratory testing is a technique that shifts focus from system-centric checking to user-centric testing. To be effective, exploratory tests have to rely on the user-centric artifacts that capture the user behavior.

“Exploratory testing as opposed to ad-hoc testing is a focused, well-defined and controlled testing approach that time-boxes test iterations and cycles using scenarios for reference. Exploratory testers rely on hunches, biases, conjectures, intuition, personal experience and heuristics while continuously learning from the system behavior. User experience (UX) design process tries to uncover similar aspects of user behavior that motivates users of the system that are the basis for user expectations.” – Venkat Moncompu, Leveraging user experience design elements to improve software testing, WestMonroe; Twitter: @WestMonroe

"Screenshots, logs and video are a tester’s best proof-points." - Mr. OoPpSs

92. Screenshots, logs, and videos are your best proof-points. “Screenshots, logs, and video are a tester’s best proof-points.

“Unfortunately, server communication logs are not as easy to handle as client logs. They are usually added more for the developer’s convenience when debugging communications with the server than for the tester’s benefit.

  • Please ask developers of clients and servers to export all server requests and responses into a convenient no-nonsense interface for log viewing. It will become easier to analyze server requests and responses, pinpoint duplicates, and find more convenient ways of updating data.
  • For example, a developer may have to re-request the entire profile to update only a part of it instead of applying a more lightweight request. In situations where the location of a problem is unclear, a combination of server and client logs can help tackle the problem faster in most cases.” – Mr. OoPpSs, Mobile App Penetration Testing – Tips and Tricks, LinkedIn; Twitter: @mrooppss

Leveraging Test Results

"The final test result may be ‘pass’ or ‘fail’ but troubleshooting the root cause of ‘fail’ will lead you to the solution of the problem." - Vijay Shinde

93. Don’t just test — get to the root cause of bugs and failures. “Do not ignore the test result. The final test result may be ‘pass’ or ‘fail’ but troubleshooting the root cause of ‘fail’ will lead you to the solution of the problem. Testers will be respected if they not only log the bugs but also provide solutions.” – Vijay Shinde, Top 20 practical software testing tips you should read before testing any application, Software Testing Help; Twitter: @vijayshinde

94. Watch for unexpected behavior. “It’s common sense to test an app for expected functionality and valid conditions, but it is also helpful to test for invalid conditions and unexpected behavior. For example, you’ll always want to test for potential points where software might fail or crash but you should also take a close look at how the software is performing when no observable bugs seem to be occurring. This can help you find issues you might otherwise overlook.” – Best Practical Software Testing Tips, SQA Solution; Twitter: @sqa_solution

"Nothing slows a customer down more than a technical issue." - Scott Stiner

95. Eliminate technical issues during development that can impact user experience. “During software development, you must complete rigorous testing to eliminate all technical issues. Nothing slows a customer down more than a technical issue. According to a Kissmetrics report, 25% of visitors leave a website within four seconds due to slow load time, and page abandonment increases as load time increases.

“Technical issues can destroy a business. So when developing software, be certain that all the bugs are worked out and operations run smoothly to ensure an optimal user experience. While working with one of our larger clients in the energy space, we found a number of technical issues in later stages of development. This created the need to revisit some earlier development stages to course correct. Luckily, our second iteration was complete, the software was rid of bugs, and the user experience was clean.” – Scott Stiner, Five User Experience Tips For Software Developers, Forbes; Twitter: @Forbes

96. Identify bottlenecks. “If you’re experiencing slow .NET application performance, the best thing you can do is identify the bottleneck by measuring the speed of your site with database profiling, tracing, and looking at your logs.” – Boris Dzhingarov, 4 Tips to Improve Your .NET Application Performance, TG Daily; Twitter: @tgdaily

97. Be diplomatic in bug reports. “Even if you are brimming with confidence about the authenticity of the bug detected by you, avoid writing a bug report that reads as if you are trying to pass your verdict on the genuineness of the bug. In all probability this could initiate a controversy that would reflect your superiority complex as a tester. Your main aim should be to keep your bug report conclusive, supporting your bug, plus the sole motive must be to get the bug closed ultimately. Try to use diplomacy in the bug report: instead of using authoritative statements in favor of your bug, thereby making your bug report unpleasant, the best way is to be suggestive. Such an approach shall always be taken in good spirit.” – Nine Tips for an Effective Bug Reporting, Software Testing Genius; Twitter: @CertnTesting

98. Speed up the development cycle with consistent feedback. “So this consistent state of change requires us to put continuous feedback at the core of our projects and project efforts. Being agile also means providing touch points for continuous feedback.

“Though it’s nothing new, feedback is key. Continuously asking for feedback throughout the entire project requires a feedback culture where people are pushed to evaluate what they are doing on a daily basis.” – Thomas Peham, Why No One Talks About Agile Testing!, DZone; Twitter: @DZone

"Try to find out the result pattern and then compare your results with those patterns." - Software Testing Help

99. Perform repeated tests with different test environments, and then try to find results patterns. “Perform repeated tests with different test environments.

“Try to find out the resulting pattern and then compare your results with those patterns.

“When you think that you have completed most of the test conditions and you are somewhat tired, do some monkey testing.

“Use your previous test data pattern to analyze the current set of tests.” – How to find a bug in an application? Tips and Tricks, Software Testing Help; Twitter: @VijayShinde

100. Practice pattern recognition. “This trick is basically to enhance your alertness in finding the bug. For instance, when you have to compare two pieces of similar code and come up with small bugs that might go unnoticed, you will be able to draw conclusions in no time.

“For a small piece, it won’t make much difference, but when it comes to a lot of information and lengthy code, it is very helpful.” – How to Improve Your Manual Testing Skills?, Testbytes; Twitter: @Testbytes

"The most important thing is to keep on testing." - Evolutionate

101. Keep on testing. “The most important thing is to keep on testing. This is only possible if you have kept that ‘eye’ for testing. This is another way of saying: look at things from a different angle.” – Tips and Tricks for Mobile App Testing and Quality Assurance, Evolutionate; Twitter: @Cuelogic

What's a C# Throw Exception?

How to Throw C# Exceptions Like a Major League Pro: Examples, Best Practices, and Everything You Need to Know

Angela Stringfellow Developer Tips, Tricks & Resources Leave a Comment

Practically everyone who has ever used a web page or an app has encountered an exception at one point or another, but they probably didn’t realize what it was. Exceptions are a pretty common way to handle unexpected inputs, but are they always the right way to handle such problems? In this post, we’ll take a closer look at C# exceptions, an example, and cover some best practices for when to throw exceptions and when it might be smart to consider another option.

What Does “Throw Exception” Mean?

An exception is an event that occurs during the execution of a program. It disrupts the normal flow of instructions. This is perhaps the simplest definition of an exception.

An exception is basically a problem that occurs while a program is being executed. It is the runtime’s response to an exceptional condition that results in an error and for which there is no direction within the program about what should be done. In programming jargon, developers say a program “throws an exception,” hence the term “throw exception.” Throw is also a keyword in C#.

Exception handlers are short blocks of code written to handle specific errors that may occur during execution. Control is transferred to the handlers when errors occur, and the handlers tell the program what to do.

There are four main constructs used within programs to handle exceptions – try, catch, finally, and throw. These are the keywords in C#; other programming languages may use different keywords, but the basic logic is generally the same. Let’s take a look at a hypothetical example to understand this better.

A Hypothetical Example: C# Throw Exception

Let’s assume that we are calculating the average grades for students. Further, we’ll assume that for a particular subject not a single student sat for the exam. In this case, the divisor would become zero. If this situation occurs and there is no handler, the program would crash. However, developers usually foresee this possibility and check for zero divisors. A developer would enter code to handle the error by displaying an error message and bringing the program to a logical end.

StackOverflow provides some example code for handling such an error.

namespace nsDivZero
{
    using System;
    public class DivZero
    {
        static public void Main ()
        {
            // Set an integer equal to 0
            int IntVal1 = 0;
            // and another not equal to zero
            int IntVal2 = 57;
            try
            {
                Console.WriteLine ("{0} / {1} = {2}", IntVal2, IntVal1, IntResult (IntVal2, IntVal1) / IntResult (IntVal2, IntVal1));
            }
            catch (DivideByZeroException e)
            {
                Console.WriteLine (e.Message);
            }
            // Set a double equal to 0
            double dVal1 = 0.0;
            double dVal2 = 57.3;
            try
            {
                Console.WriteLine ("{0} / {1} = {2}", dVal2, dVal1, DoubleResult (dVal2, dVal1));
            }
            catch (DivideByZeroException e)
            {
                Console.WriteLine (e.Message);
            }
        }
        static public int IntResult (int num, int denom)
        {
            return (num / denom);
        }
        static public double DoubleResult (double num, double denom)
        {
            return (num / denom);
        }
    }
}

Quite simply, this code divides one variable by another and stores the result in a third variable. This division is performed in the try{} part of the exception handler. In the event that the value of the divisor is zero, control is automatically transferred to the catch{} part, which displays a message for the user and terminates the program.

The general syntax for catching an exception is to use a try-catch combination with the statements that may throw an exception contained within the try{} block and the error handling contained within the catch{} block.

Benefits of Exceptions

The advantage of throwing exceptions lies in the answer to the question, “What is your program doing?” In the above example, for instance, you’d probably prefer to continue to the next set of numbers rather than abort the program. In this case, preventing the divisor from becoming zero at any point may be better than throwing an exception when an error occurs.

If, however, you are attempting to store data in a file, and the file is non-existent, such an error cannot be prevented. Therefore, it becomes necessary to throw an exception and handle the error logically, letting the user know that “A system Error Has Occurred” or that the “File Does Not Exist.”

You might say that prevention is a better alternative. But if, for instance, you check for zero value every time a result is generated, that’s one additional step at runtime, meaning it takes that much longer for the program to execute. Where the data is large, this can make a significant difference, and exception handlers may become necessary.

The best approach to handling an error is a decision that must be made by the developer.

Best Practices for Throwing & Catching C# Exceptions

Exceptions can be handled in different ways. Let’s say you’re storing the results of an equation into an array. In this case, you may check for zero value at the time the result is generated, and avoid storing zero in your array. That way, no exception is likely to be thrown.

It’s up to you as a developer to choose how to handle exceptions within the code. According to Blackwasp, two great uses of the exception handler are:

  1. When an invalid value (such as zero) is passed to a method.
  2. When a method fails to run (perhaps because previous steps such as opening a file have not been completed).

Infoworld, on the other hand, states that returning exceptions as the result of a method is bad programming practice. Exceptions are thrown to the next higher level in the hierarchy. Handling exceptions at lower levels may complicate the code and make it difficult to trace the error. Infoworld suggests that exceptions should be handled as high as possible in the hierarchy.

MSDN suggests creating “human-readable” messages rather than throwing exceptions from the ApplicationException class, which is what the “throw” statement does. The ApplicationException class does not provide the cause of the error.

If, however, you don’t want to create your own error handler, it’s better to simply throw the exception from the Exception class. Infoworld advocates logging the sequence of events that led to the exception rather than just the exception itself.

The try-catch-throw construct of C# is an extremely useful tool for trapping errors and preventing an application from crashing. It provides a systematic way to let both the user and the developer know what went wrong and why.

However, exceptions are just that – exceptions – and should be used sparingly. If you expect an error to recur a number of times, it’s better to use the IF construct to avoid its occurrence rather than throw an exception. Exceptions are for errors that cannot be trapped within the normal flow of the program, such as file open errors, IO errors, and so forth.

Our advice? Take all possibilities into consideration before deciding to throw exceptions.

Stackify Supports Humanitarian Toolbox

Humanitarian Toolbox Lets You Save Lives with Your Keyboard

Jennilee Live Queue Leave a Comment

I know a lot of people that want to contribute their time and skills in times of crises, myself included. But life has a way of running amok, and as soon as I decide that the time is right for volunteering, I’m distracted by kids or work or the latest mystery mess in my kitchen (hint: it might be someone’s effort to make pancakes). Thankfully, there’s Humanitarian Toolbox (HTBox), a charity that supports disaster relief organizations. It provides developers, designers, testers and other industry professionals the chance to make a difference with their unique skills despite unpredictable schedules.

HTBox takes on projects at varying levels of progress. If they are building something from scratch, they handle all the requirement gathering. They also help to maintain and improve custom-built disaster relief software. That means that when you volunteer, you can jump in and just focus on development. Not only that, they use Open Source software, so nothing stands in the way for contributors and users. Check out their repositories on GitHub.

Stackify works tirelessly to make things better for developers, so developers can make things better for others. That’s why we support Humanitarian Toolbox. From now until June 30th, we’re donating $25 to HTBox for every Retrace signup. Disasters happen too often all over the world. People need our help. See how you can make a difference with HTBox. Like us here at Stackify, they believe that your code can save lives.

Performance Testing for Business People

What Is Performance Testing? An Explanation for Business People

Erik Dietrich Developer Career Development, Insights for Dev Managers Leave a Comment

Performance testing is a form of software testing that focuses on how a system performs under a particular load. Performance testing should give organizations the diagnostic information they need to eliminate bottlenecks. You can find more information about types, steps and best practices here. This article provides insights and scenarios on performance testing from a business perspective.

How Performance Testing Impacts Your Business

The world of enterprise IT neatly divides concerns between two camps: IT and the business.  Because of this division, an entire series of positions exists to help these groups communicate.  But since I don’t have any business analysts at my disposal to interview, I’ll just bridge the gap myself today.  From the perspective of technical folks, I’ll explain performance testing in the language that matters to the business.

A Tale of Vague Woe

To prove it, let me conjure up a maddening hypothetical.  You’ve coordinated with the software organizations for months.  This has included capturing the voice of the customer and working with architects and developers to create a vision for the product.  You’ve seen the project through conception, beta testing, and eventual production release.  And you’re pretty proud of the way this has gone.

But now you find yourself more than a little exasperated.  A few scant months after the production launch, weird and embarrassing things continue to go wrong.  It started out as a trickle and then grew into a disturbing, steady stream.  Some users report agonizingly slow experiences while others don’t complain.  Some report seeing weird error messages and screens, while others have an average experience.  And, of course, when you try to corroborate these accounts, you don’t see them.

You inquire with the software developers and teams, but you can tell they don’t quite take this at face value.  “Hmmm, that shouldn’t happen,” they say to you.  Then they concede that maybe it could, but they shrug and say there’s not much they can do unless you help them reproduce the issue.  Besides, you know users, amirite?  Always reporting false issues because they have unrealistic expectations.

Sometimes you wonder if the developers don’t have the right of it, but you know you’re not imagining the exasperated phone calls and negative social media interactions.  Worse, paying users are leaving, and fewer new ones sign up.  Whether perception or reality, users’ experience hits you in the pocketbook.


Develop your application with Spring Boot

How Spring Boot Can Level Up your Spring Application

Eugen Paraschiv Developer Tips, Tricks & Resources Leave a Comment

The Spring Ecosystem

There are a few stable, mature stacks for building web applications in the Java ecosystem, and considering its popularity and strong adoption, the Spring Framework is certainly the primary solution.

Spring offers a quite powerful way to build a web app, with support for dependency injection, transaction management, polyglot persistence, application security, first-hand REST API support, an MVC framework and a lot more.

Traditionally, Spring applications have always required significant configuration and, for that reason, can sometimes build up a lot of complexity during development. That’s where Spring Boot comes in.

The Spring Boot project aims to make building web applications with Spring much faster and easier. The guiding principle of Boot is convention over configuration.

Let’s have a look at some of the important features in Boot:

  • starter modules for simplifying dependency configuration
  • auto-configuration whenever possible
  • embedded, built-in Tomcat, Jetty or Undertow
  • stand-alone Spring applications
  • production-ready features such as metrics, health checks, and externalized configuration
  • no requirement for XML configuration

In the following sections, we’re going to take a closer look at the necessary steps to create a Boot application and highlight some of the features in the new framework in more detail.

Spring Boot Starters

Simply put, starters are dependency descriptors that reference a list of libraries.

To create a Spring Boot application, you first need to configure the spring-boot-starter-parent artifact in the parent section of the pom.xml:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.3.RELEASE</version>
    <relativePath />
</parent>

This way, you only need to specify the dependency version once for the parent. The value is then used to determine versions for most other dependencies – such as Spring Boot starters, Spring projects or common third-party libraries.

The advantage of this approach is that it eliminates potential errors related to incompatible library versions. When you need to update the Boot version, you only need to change a single, central version, and everything else gets implicitly updated.

Also note that there are more than 30 Spring Boot starters available, and the community is building more every day.

A good starting point is creating a basic web application. To get started, you can simply add the web starter to your pom:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

If you want to enable Spring Data JPA for database access, you can add the JPA starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

Notice how we’re no longer specifying the version for either of these dependencies.

Before we dive into some of the functionality in the framework, let’s have a look at another way we can bootstrap a project quickly.

Spring Boot Initializr

Spring Boot is all about simplicity and speed, and that starts with bootstrapping a new application.

You can achieve that by using the Spring Boot Initializr page to download a pre-configured Spring Boot project, which you can then import into your IDE.

The Initializr lets you select whether you want to create a Maven or Gradle project, the Boot version you want to use and of course the dependencies for the project:

Spring Initializr

If you select the “Switch to the full version” option, you can configure a lot more advanced options as well.

Spring Boot Auto-Configuration

Spring applications usually require a fair amount of configuration to enable features such as Spring MVC, Spring Security or Spring JPA. This configuration can take the form of XML but also Java classes annotated with @Configuration.

Spring Boot aims to simplify this process by providing a sensible default configuration, based on the dependencies on the classpath and loaded automatically behind the scenes.

This auto-configuration contains @Configuration annotated classes, intended to be non-invasive and only take effect if you have not defined them explicitly yourself.

The approach is driven by the @Conditional annotation – which determines what auto-configured beans are enabled based on the dependencies on the classpath, existing beans, resources or System properties.

It’s important to understand that, as soon as you define your own configuration beans, these will take precedence over the auto-configured ones.

Coming back to our example, based on the starters added in the previous section, Spring Boot will create an MVC configuration and a JPA configuration.

To work with Spring Data JPA, we also need to set up a database. Luckily, Boot provides auto-configuration for three types of in-memory databases: H2, HSQL, and Apache Derby.

All you need to do is add one of the dependencies to the project, and an in-memory database will be ready for use:

<dependency>
    <groupId>com.h2database</groupId> 
    <artifactId>h2</artifactId>
</dependency>

The framework also auto-configures Hibernate as the default JPA provider.

If you want to replace part of the auto-configuration for H2, the defaults are smart enough to gradually step back and allow you to do that while still preserving the beans you’re not explicitly defining yourself.

For example, if you want to add initial data to the database, you can create files with standard names such as schema.sql, data.sql or import.sql to be picked up automatically by Spring Boot auto-configuration, or you can define your DataSource bean to load a custom named SQL script manually:

@Configuration
public class PersistenceConfig {

    @Bean
    public DataSource dataSource() {
        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        EmbeddedDatabase db = builder.setType(EmbeddedDatabaseType.H2)
          .addScript("mySchema.sql")
          .addScript("myData.sql")
          .build();
        return db;
    }
}

This has the effect of overriding the auto-configured DataSource bean, but not the rest of the default beans that make up the configuration of the persistence layer.

Before moving on, note that it’s also possible to define an entirely new custom auto-configuration that can then be reused in other projects as well.
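
As a rough sketch of that idea, a custom auto-configuration is just a conditionally guarded @Configuration class; the GreetingService type below is hypothetical and defined inline only to keep the example self-contained. In Boot 1.x, such a class would be registered under the org.springframework.boot.autoconfigure.EnableAutoConfiguration key in META-INF/spring.factories so that other projects can pick it up:

@Configuration
public class GreetingAutoConfiguration {

    // Hypothetical service type used for illustration
    public interface GreetingService {
        String greet(String name);
    }

    // Backs off automatically: the bean is only created if the
    // application has not already defined its own GreetingService
    @Bean
    @ConditionalOnMissingBean(GreetingService.class)
    public GreetingService greetingService() {
        return name -> "Hello, " + name;
    }
}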

The Entry Point in a Boot Application

The entry point for a Spring Boot application is the main class annotated with @SpringBootApplication:

@SpringBootApplication
public class Application {
    public static void main(String[] args){
        SpringApplication.run(Application.class, args);
    }
}

This is all we need to have a running Boot application.

The shortcut @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan, and it will pick up all config classes in or below the package where the class is defined.
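
In other words, the entry point above behaves the same as this more explicit, annotation-by-annotation version:

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}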

Embedded Web Server

Out of the box, Spring Boot launches an embedded web server when you run your application.

If you use a Maven build, this will create a JAR that contains all the dependencies and the web server. This way, you can run the application by using only the JAR file, without the need for any extra setup or web server configuration.

By default, Spring Boot uses an embedded Apache Tomcat server. You can change the Tomcat version by specifying the tomcat.version property in your pom.xml:

<properties>
    <tomcat.version>8.0.43</tomcat.version>
</properties>

Not surprisingly, the other supported embedded servers are Jetty and Undertow. To use either of these, you first need to exclude the Tomcat starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Then, add the Jetty or the Undertow starters:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
</dependency>

Advanced Externalized Configuration

Another super convenient feature in Boot is the ability to easily configure the behavior of an application via external properties files, YAML files, environment variables and command-line arguments. These properties have standard names that will be automatically picked up by Boot and evaluated in a set order.

The advantage of this feature is that we get to run the same deployable-unit/application in different environments.

For example, you can use the application.properties file to configure an application’s port, context path, and logging level:

server.port=8081
server.contextPath=/springbootapp
logging.level.org.springframework.web=DEBUG

This can be a significant simplification in more traditional environments but is a must in virtualized and container environments such as Docker.

Of course, ready-to-go deployable units are a great first step, but the confidence you have in your deployment process depends both on the tooling you have around that process and on the practices within your organization.
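
Related to this, groups of properties can also be bound to a typed bean with @ConfigurationProperties. Here is a minimal sketch, assuming hypothetical app.name and app.max-users entries in application.properties:

@Component
@ConfigurationProperties(prefix = "app")
public class AppProperties {

    // Bound from app.name
    private String name;

    // Bound from app.max-users (relaxed binding maps it to maxUsers)
    private int maxUsers;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getMaxUsers() { return maxUsers; }
    public void setMaxUsers(int maxUsers) { this.maxUsers = maxUsers; }
}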

Metrics

Beyond project setup improvements and operational features, Boot also brings in some highly useful functional features, such as internal metrics and health checks – all enabled via actuators.

To start using the actuators in the framework, you need to add only a single dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The relevant information is available via endpoints that can be accessed out-of-the-box: /metrics and /health.

We also get access to other endpoints, such as /info, which displays application information, and /trace, which shows the last few HTTP requests coming into the system.

Here are just some of the types of metrics we get access to by default:

  • system-level metrics – total system memory, free system memory, class load information, system uptime
  • DataSource metrics – for each DataSource defined in your application, you can check the number of active connections and the current usage of the connection pool
  • cache metrics – for each specified cache, you can view the size of the cache and the hit and miss ratio
  • Tomcat session metrics – the number of active and maximum sessions

You can also measure and track your own metrics, customize the default endpoints as well as add your own, entirely new endpoint.
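
For instance, here is a minimal sketch of a custom counter using the CounterService available in Boot 1.x; the controller and metric name are illustrative:

@RestController
public class LoginController {

    private final CounterService counterService;

    public LoginController(CounterService counterService) {
        this.counterService = counterService;
    }

    @PostMapping("/login")
    public void login() {
        // Increments a custom metric, exposed under /metrics as counter.logins
        counterService.increment("logins");
        // ... authentication logic would go here
    }
}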

Tracking and exposing metrics is quite useful, but once you get to production, you need a more mature solution that’s able to go beyond simply displaying current metrics. That’s where Retrace is a natural next step, helping you drill down into the details of the application runtime while also keeping track of this data over time.

Health Checks

One of the primary and most useful endpoints is, not surprisingly, /health. 

This will expose different information depending on the accessing user and on whether the enclosing application is secured.

By default, when accessed without authentication, the endpoint will only indicate whether the application is up or down. But, beyond the simple up or down status, the state of different components in the system can be displayed as well – such as the disk or database or other configured components like a mail server.

Where /health goes beyond just useful is the option to create your own custom health indicator.

Let’s roll out a simple enhancement to the endpoint:

@Component
public class HealthCheck implements HealthIndicator {
  
    @Override
    public Health health() {
        int errorCode = check(); // perform some specific health check
        if (errorCode != 0) {
            return Health.down()
              .withDetail("Error Code", errorCode).build();
        }
        return Health.up().build();
    }
     
    public int check() {
        // Your logic to check health
        return 0;
    }
}

As you can see, this allows you to use your internal system checks and make those a part of /health.

For example, a standard check here would be to do a quick persistence-level read operation to ensure everything’s running and responding as expected.
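
Here is a sketch of that idea, assuming a JdbcTemplate bean is available; the query itself is illustrative:

@Component
public class DatabaseHealthCheck implements HealthIndicator {

    private final JdbcTemplate jdbcTemplate;

    public DatabaseHealthCheck(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public Health health() {
        try {
            // Cheap read to confirm the database is reachable and responding
            jdbcTemplate.queryForObject("select 1", Integer.class);
            return Health.up().build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}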

As with metrics, as you move towards production, you’ll definitely need a proper monitoring solution to keep track of the state of the application. Within Retrace, the People Metrics feature is a simple way to define and watch these custom metrics.

A powerful step forward from just publishing metrics or health info on request is the more advanced Key Transactions feature in Retrace – which can be configured to actively monitor specific operations in the system and notify you when the metrics associated with that operation become problematic.

Example Application

After setting up the project, you can simply start creating controllers or customizing the configuration.

Let’s create a simple application that manages a list of employees.

First, let’s add an Employee entity and repository based on Spring Data:

@Entity
public class Employee {
    
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private long id;

    private String name;
    
    // standard constructor, getters, setters
}

public interface EmployeeRepository extends JpaRepository<Employee, Long> { }

Let’s now create a controller to manipulate employee entities:

@RestController
public class EmployeeController {

    private EmployeeRepository employeeRepository;
    
    public EmployeeController(EmployeeRepository employeeRepository){
        this.employeeRepository = employeeRepository;
    }
    @PostMapping("/employees")
   [email protected](HttpStatus.CREATED)
    public void addEmployee(@RequestBody Employee employee){
        employeeRepository.save(employee);
    }
    
    @GetMapping("/employees")
    public List<Employee> getEmployees(){
        return employeeRepository.findAll();
    }
}

You also need to create the mySchema.sql and myData.sql files:

create table employee(id int identity primary key, name varchar(30));
insert into employee(name) values ('ana');

To avoid Spring Boot recreating the employee table and removing the data, you need to set the ddl-auto Hibernate property to update:

spring.jpa.hibernate.ddl-auto=update

Testing the Application

Spring Boot also provides excellent support for testing, all included in the test starter:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
</dependency>

This starter automatically adds commonly used dependencies for testing in Spring such as Spring Test, JUnit, Hamcrest, and Mockito.

As a result, you can create a test for the controller mappings by using the @SpringBootTest annotation with the configuration classes as parameters.

Let’s add a JUnit test that creates an Employee record, then retrieves all the employees in the database and verifies that both the seeded record and the newly created one are present:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = Application.class)
@WebAppConfiguration
public class EmployeeControllerTest {
    
    private static final String CONTENT_TYPE 
      = "application/json;charset=UTF-8";
    
    private MockMvc mockMvc;
    
    @Autowired
    private WebApplicationContext webApplicationContext;
    
    @Before
    public void setup() throws Exception {
         this.mockMvc = MockMvcBuilders
           .webAppContextSetup(webApplicationContext)
           .build();
    }
    
    @Test
    public void whenCreateEmployee_thenOk() throws Exception {
        String employeeJson = "{\"name\":\"john\"}";

        this.mockMvc.perform(post("/employees")
          .contentType(CONTENT_TYPE)
          .content(employeeJson))
          .andExpect(status().isCreated());
        
        this.mockMvc.perform(get("/employees"))
          .andExpect(status().isOk())
          .andExpect(content().contentType(CONTENT_TYPE))
          .andExpect(jsonPath("$", hasSize(2)))
          .andExpect(jsonPath("$[0].name", is("ana")))
          .andExpect(jsonPath("$[1].name", is("john")));      
    }
}

Simply put, @SpringBootTest allows us to run integration tests with Spring Boot. It uses the SpringBootContextLoader as the default ContextLoader and automatically searches for a @SpringBootConfiguration class if no specific classes or nested configuration are defined.

We also get a lot of additional and interesting support for testing:

  • @DataJpaTest annotation for running integration tests on the persistence layer
  • @WebMvcTest which configures the Spring MVC infrastructure for a test
  • @MockBean which can provide a mock implementation for a required dependency
  • @TestPropertySource used to set locations of property files specific to the test

Conclusions

Ever since Spring sidelined XML configuration and introduced its Java support, the core team has had simplicity and speed of development as primary goals. Boot was the next natural step in that direction, and it has certainly achieved this goal.

The adoption of Boot has been astounding over the last couple of years, and a 2.0 release will only accelerate that trend going forward.

And a large part of that success is the positive reaction of the community to the production-grade features that we explored here. Features that were traditionally built from the ground up by individual teams are now simply available by including a Boot starter. That is not only very useful, but also very cool.

The full source code of all the examples in the article is available here, as a ready to run Boot project.

What is RestSharp? An Introduction to RestSharp’s Features and Functionality

Angela Stringfellow Developer Tips, Tricks & Resources, Live Queue Leave a Comment

RestSharp is one of the several ways to create a web service or web request in .NET; we discuss a few other such options in this post. In today’s post, though, we’ll take a look at RestSharp specifically, its features and benefits, and a few examples of RestSharp in action.

Definition of RestSharp

RestSharp is a comprehensive, open-source HTTP client library that works with all kinds of DotNet technologies.  It can be used to build robust applications by making it easy to interface with public APIs and quickly access data without the complexity of dealing with raw HTTP requests. RestSharp combines myriad advantages and time-saving features with a simple, clean interface, making it one of the hottest REST tools being used today.

With its simple API and powerful library, RestSharp is a tool of choice for programmers looking to build detailed programs and applications. RESTful architecture provides an information-driven, resource-oriented approach to creating Web applications, and RestSharp handles common tasks such as URI generation, payload parsing, and authentication as configurable options, ensuring that application developers no longer have to worry about low-level tasks such as networking.

Benefits of RestSharp

RestSharp is one of the best libraries to use if you frequently use REST to consume HTTP APIs in DotNet. It comes in particularly handy for Windows Phone applications, where REST or SOAP is often used to communicate with external data.

Asynchronous request handling is one of the foremost requirements for programming on Windows platforms. RestSharp supports both synchronous and asynchronous requests, making it a perfect fit for Windows applications. This powerful library saves on programming time and equips developers with useful tools that aid in creating elegant applications that are easy to debug.

RestSharp Features

The RestSharp library boasts some powerful features, making it a singular tool that syncs admirably with RESTful architecture and helps in creating varied DotNet applications. Some of these features include:

  • Custom serialization and deserialization via ISerializer and IDeserializer.
  • Both synchronous and asynchronous requests
  • Automatic XML and JSON parsing, including fuzzy element name matching (“product_id” in XML/JSON will match C# property named ‘ProductId’)
  • Multipart file/form uploads
  • OAuth 1, OAuth 2, Basic, NTLM, and parameter-based authentication
  • Support for the GET, PUT, HEAD, POST, DELETE, and OPTIONS HTTP verbs

How RestSharp Works

RestSharp works best as the foundation for a proxy class for an API. The most basic features of RestSharp include creating a request, adding parameters to the request, execution, and handling of said request, deserialization, and authentication. Here’s a look at some RestSharp basics:

Handling Requests

  • Using RestRequest creates a new request to a specified URL.
  • AddParameter will add a new parameter to the request.
  • HTTP headers can easily be added to the request you have generated, using request.AddHeader.
  • You can replace a token in the request by using request.AddUrlSegment, which swaps the supplied value in for the matching token in the resource URL.
  • To execute the request, use client.Execute(request); the response object can then be used to parse your data. All of these pieces come together in the sketch below.
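
Putting those pieces together, here’s a minimal sketch of a typical request – the base URL, resource, parameter, and header names are all illustrative, not from any real API:

using System;
using RestSharp;

class RequestDemo
{
    static void Main()
    {
        // RestRequest targets a resource relative to the client's base URL
        var client = new RestClient("https://example.com/api");
        var request = new RestRequest("employees/{id}", Method.GET);

        // AddUrlSegment replaces the matching {id} token in the resource
        request.AddUrlSegment("id", "123");

        // AddParameter adds a new parameter to the request
        request.AddParameter("expand", "manager");

        // request.AddHeader attaches an HTTP header
        request.AddHeader("X-Api-Key", "demo-key");

        // client.Execute(request) runs the request; the response object
        // exposes the raw content for you to parse
        IRestResponse response = client.Execute(request);
        Console.WriteLine(response.StatusCode);
        Console.WriteLine(response.Content);
    }
}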

Deserialization

RestSharp contains built-in deserializers that support XML and JSON. RestSharp chooses the correct deserializer based on the content type returned by the server.

RestSharp supports the following content types:

  • application/json – JsonDeserializer
  • application/xml – XmlDeserializer
  • text/json – JsonDeserializer
  • text/xml – XmlDeserializer
  • * – XmlDeserializer

Overriding Default Deserializer

If the default deserializers don’t meet your requirements, RestSharp also allows the programmer to create their own deserializers to handle content. This is done as follows (see the sketch after this list):

  • Create a class and implement IDeserializer.
  • Use RestClient.AddHandler(type, IDeserializer) to register a handler and its associated content type.
  • If you need to remove a registered handler, you can use the command RestClient.RemoveHandler(type). RestClient.ClearHandlers() removes all registered handlers.
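
A minimal sketch of those steps, assuming you delegate the actual parsing to Json.NET – the Newtonsoft.Json dependency and the class name here are illustrative choices, not RestSharp requirements:

using RestSharp;
using RestSharp.Deserializers;

// step 1: a class that implements IDeserializer
public class CustomJsonDeserializer : IDeserializer
{
    // these three properties are part of the IDeserializer contract
    public string RootElement { get; set; }
    public string Namespace { get; set; }
    public string DateFormat { get; set; }

    public T Deserialize<T>(IRestResponse response)
    {
        // hand the raw content to whatever parser you prefer
        return Newtonsoft.Json.JsonConvert.DeserializeObject<T>(response.Content);
    }
}

class HandlerDemo
{
    static void Main()
    {
        var client = new RestClient("https://example.com/api");

        // step 2: register the handler for its associated content type
        client.AddHandler("application/json", new CustomJsonDeserializer());

        // and, if needed, unregister it again
        client.RemoveHandler("application/json");
        client.ClearHandlers();
    }
}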

Authenticators

RestSharp also provides authentication support for different systems like HTTP Basic, NTLM, and other parameter-based schemes. It also lets you create your own authenticator. The process is simple: implement IAuthenticator and register it with your RestClient – a sketch follows the list below.

RestSharp supports the following Authenticators:

  • HttpBasicAuthenticator
  • IAuthenticator
  • NtlmAuthenticator
  • OAuth1Authenticator
  • OAuth2Authenticator
  • SimpleAuthenticator
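
As a sketch, a custom header-based authenticator might look like the following – the ApiKeyAuthenticator class and the X-Api-Key header are made up for illustration, while HttpBasicAuthenticator is one of the built-ins listed above:

using RestSharp;
using RestSharp.Authenticators;

// implement IAuthenticator...
public class ApiKeyAuthenticator : IAuthenticator
{
    private readonly string key;

    public ApiKeyAuthenticator(string key)
    {
        this.key = key;
    }

    public void Authenticate(IRestClient client, IRestRequest request)
    {
        // stamp every outgoing request with the API key header
        request.AddHeader("X-Api-Key", key);
    }
}

class AuthDemo
{
    static void Main()
    {
        var client = new RestClient("https://example.com/api");

        // ...and register it with your RestClient
        client.Authenticator = new ApiKeyAuthenticator("demo-key");

        // built-in authenticators are registered the same way
        client.Authenticator = new HttpBasicAuthenticator("user", "password");
    }
}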

RestSharp Examples

The README on the RestSharp GitHub repository walks through an example much like this sketch – the Person class, URL, and parameter values below are illustrative stand-ins rather than the README’s exact code:

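using System;
using RestSharp;
using RestSharp.Authenticators;

public class Person
{
    public string Name { get; set; }
}

class ReadmeStyleDemo
{
    static void Main()
    {
        var client = new RestClient("https://example.com/api");
        client.Authenticator = new HttpBasicAuthenticator("user", "password");

        var request = new RestRequest("people/{id}", Method.GET);
        request.AddParameter("name", "value");  // query string or body, depending on the method
        request.AddUrlSegment("id", "123");     // replaces {id} in the resource
        request.AddHeader("Accept", "application/json");

        // execute and read the raw content...
        IRestResponse response = client.Execute(request);
        Console.WriteLine(response.Content);

        // ...or let RestSharp deserialize the response into a typed object
        IRestResponse<Person> typed = client.Execute<Person>(request);
        Console.WriteLine(typed.Data.Name);
    }
}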

If you have only a small number of one-off requests to make to an API, a few lines along these lines are enough (again, the URL and resource are stand-ins):

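using System;
using RestSharp;

class OneOffDemo
{
    static void Main()
    {
        // for a quick one-off call, three lines will do
        var client = new RestClient("https://example.com/api");
        var request = new RestRequest("status", Method.GET);

        IRestResponse response = client.Execute(request);
        Console.WriteLine(response.Content);
    }
}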

You can also find simple examples of RestSharp code at StackOverflow and Xamarin.

Advanced Task-Handling in RestSharp

RestSharp takes care of a lot of banal tasks, so you don’t have to spend valuable time on tedious, repetitive work. For instance, when an API returns XML, RestSharp automatically detects this and deserializes the response to the target object using the default XmlDeserializer.

Also, RestSharp makes a default RestRequest via a GET HTTP request. This can be changed in the Method property of RestRequest or by specifying the method in the constructor when creating an instance.
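
Both spellings of that, as a quick sketch (the resource name is illustrative):

using RestSharp;

class MethodDemo
{
    static void Main()
    {
        // the constructor defaults to GET...
        var get = new RestRequest("employees");

        // ...or the method can be set explicitly, either way:
        var post = new RestRequest("employees", Method.POST);

        var put = new RestRequest("employees");
        put.Method = Method.PUT;
    }
}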

Additional Resources and RestSharp Tutorials

For more information on RestSharp, check out the following links for helpful tutorials and other resources:

What is N-Tier Architecture? How It Works, Examples, Tutorials, and More

Angela Stringfellow Developer Tips, Tricks & Resources Leave a Comment

Great products are often built on multi-tier architecture – or n-tier architecture, as it’s often called. At Stackify, we love to talk about the many tools, resources, and concepts that can help you build better. (check out more of our tips and tricks here)  So in this post, we’ll discuss n-tier architecture, how it works, and what you need to know to build better products using multi-tier architecture.

Definition of N-Tier Architecture

N-tier architecture is also called multi-tier architecture because the software is engineered to have the processing, data management, and presentation functions physically and logically separated.  That means that these different functions are hosted on several machines or clusters, ensuring that services are provided without resources being shared and, as such, these services are delivered at top capacity.  The “N” in n-tier architecture refers to any number of tiers from one up.

Not only does your software gain from being able to get services at the best possible rate, but it’s also easier to manage.  This is because when you work on one section, the changes you make will not affect the other functions.  And if there is a problem, you can easily pinpoint where it originates.

A More In-Depth Look at N-Tier Architecture

N-tier architecture usually involves dividing an application into three different tiers:

  1. the logic tier,
  2. the presentation tier, and
  3. the data tier.

N-Tier Architecture

Image via Wikimedia Commons

The separate physical location of these tiers is what differentiates n-tier architecture from the model-view-controller framework that only separates presentation, logic, and data tiers in concept.  N-tier architecture also differs from MVC framework in that the former has a middle layer or a logic tier, which facilitates all communications between the different tiers.  When you use the MVC framework, the interaction that happens is triangular; instead of going through the logic tier, it is the control layer that accesses the model and view layers, while the model layer accesses the view layer.  Additionally, the control layer makes a model using the requirements and then pushes that model into the view layer.

This is not to say that you can only use either the MVC framework or n-tier architecture.  There is a lot of software that brings together these two approaches.  For instance, you can use n-tier as the overall architecture while using the MVC framework in the presentation tier.

What are the Benefits of N-Tier Architecture?

There are several benefits to using n-tier architecture for your software.  These are scalability, ease of management, flexibility, and security.

  • Secure: You can secure each of the three tiers separately using different methods.
  • Easy to manage: You can manage each tier separately, adding or modifying each tier without affecting the other tiers.
  • Scalable: If you need to add more resources, you can do it per tier, without affecting the other tiers.
  • Flexible: Apart from isolated scalability, you can also expand each tier in any manner that your requirements dictate.

In short, with n-tier architecture, you can adopt new technologies and add more components without having to rewrite the entire application or redesign your whole software, thus making it easier to scale or maintain.  Meanwhile, in terms of security, you can store sensitive or confidential information in the logic tier, keeping it away from the presentation tier, thus making it more secure.

Other benefits include:

  • More efficient development. N-tier architecture is very friendly for development, as different teams may work on each tier.  This way, you can be sure the design and presentation professionals work on the presentation tier and the database experts work on the data tier.
  • Easy to add new features. If you want to introduce a new feature, you can add it to the appropriate tier without affecting the other tiers.
  • Easy to reuse. Because the application is divided into independent tiers, you can easily reuse each tier for other software projects.  For instance, if you want to use the same program, but for a different data set, you can just replicate the logic and presentation tiers and then create a new data tier.

How It Works and Examples of N-Tier Architecture

When it comes to n-tier architecture, a three-tier architecture is fairly common.  In this setup, you have the presentation or GUI tier, the data layer, and the application logic tier.

The application logic tier.  The application logic tier is where all the “thinking” happens – it knows what your application allows and what is possible, and it makes the decisions.  This logic tier is also the one that writes data to and reads data from the data tier.

The data tier. The data tier is where all the data used in your application is stored.  You can securely store data on this tier, process transactions, and even search through volumes and volumes of data in a matter of seconds.

The presentation tier.  The presentation tier is the user interface.  This is what the software user sees and interacts with.  This is where they enter the needed information.  This tier also acts as a go-between for the data tier and the user, passing on the user’s different actions to the logic tier.

Just imagine surfing on your favorite website.  The presentation tier is the Web application that you see.  It is shown on a Web browser you access from your computer, and it has the CSS, JavaScript, and HTML codes that allow you to make sense of the Web application.  If you need to log in, the presentation tier will show you boxes for username, password, and the submit button.  After filling out and then submitting the form, all that will be passed on to the logic tier.  The logic tier will have the JSP, Java Servlets, Ruby, PHP and other programs.  The logic tier would be run on a Web server.  And in this example, the data tier would be some sort of database, such as a MySQL, NoSQL, or PostgreSQL database.  All of these are run on a separate database server.  Rich Internet applications and mobile apps also follow the same three-tier architecture.

And there are n-tier architecture models that have more than three tiers.  Examples are applications that have these tiers:

  • Services – such as print, directory, or database services
  • Business domain – the tier that would host Java, DCOM, CORBA, and other application server objects.
  • Presentation tier
  • Client tier – or the thin clients

One good instance is when you have an enterprise service-oriented architecture.  The enterprise service bus or ESB would be there as a separate tier to facilitate the communication of the basic service tier and the business domain tier.

Considerations for Using N-Tier Architecture for Your Applications

Because you are going to work with several tiers, you need to make sure that network bandwidth and hardware are fast.  If not, the application’s performance might be slow.  Also, this would mean that you would have to pay more for the network, the hardware, and the maintenance needed to ensure that you have better network bandwidth.

Also, use as few tiers as possible.  Remember that each tier you add to your software or project means an added layer of complexity, more hardware to purchase, as well as higher maintenance and deployment costs.  For your n-tier application to make sense, it should have the minimum number of tiers needed to still enjoy the scalability, security, and other benefits brought about by using this architecture.  If you need only three tiers, don’t deploy four or more.

N-Tier Architecture Tutorials and Resources

For more information on n-tier architecture, check out the following resources and tutorials:

Top Java Developers on Twitter

Hannah White BuildBetter Leave a Comment

For our most recent BuildBetter publication, we created an ultimate Comprehensive Java Developer’s Resource Guide. Not only does this feature awesome tools that help Java devs develop, monitor performance, find errors, and distribute messages, it also includes other Java developer-related content – namely, Java developers to follow on Twitter.

We love Twitter – seriously, it’s our favorite way to reach developers across the globe. But with over 328 million active users at the end of Q1 2017, finding relevant influencers to follow can be pretty overwhelming. Even with something as simple as Twitter, we’re here to help developers build an awesome social circle.

Here are 12 active Java influencers that you should be following on Twitter:

Nick Craver
@Nick_Craver

Nick Craver is an architecture lead, developer, and site reliability engineer for Stack Exchange. He’d probably appreciate a thank you for keeping Stack Overflow up and running.

 

John Resig
@jeresig
John Resig is an American software engineer and entrepreneur. He’s most notable for being the creator and developer of jQuery.

 

Rich Hickey
@richhickey
Creator of the Clojure language, a functional language that runs on the JVM and fully interacts with Java.

 

Josh Bloch
@joshbloch
Josh is the former chief Java architect at Google and distinguished engineer at Sun Microsystems. He has authored many books, including Effective Java, Java Puzzlers, and Java Concurrency in Practice.

 

Doug Cutting
@cutting
Doug Cutting is the chief architect at Cloudera, the co-creator of Lucene, Nutch, and Hadoop, and a board member of the Apache Software Foundation.

 

Peter Lawrey
@PeterLawrey
Peter is the CEO of Chronicle Software, a company specializing in consulting, training and development of low latency, high throughput applications in Java. He is also the author of the blog Vanilla Java.

 

Ana Noemi
@anoemi
Ana Noemi is a project manager at Stack Overflow. She helps folks learn how to use software and how to work together to build something awesome.

 

Arun Gupta
@arungupta
Arun Gupta is the VP of developer advocacy at Couchbase and the founder of Devoxx4Kids USA. He has built and led developer communities for 10+ years at Sun, Oracle, and Red Hat.

 

Ashley Nelson-Hornstein
@ashleynh
Previously a developer for Apple and Dropbox, Ashley Nelson-Hornstein is now the co-founder of Sound Off, an org working to increase access to professional opportunities for marginalized people in tech.

 

Justin Searls
@searls
Justin Searls, along with his company Test Double, is on a mission to uncover the myriad ways that software fails businesses, developers, and users and improve how the world writes software.

 

Ola Sendecka
@asendecka
Ola Sendecka is a Django Girls co-founder, a Django project core team member, a senior software engineer at BuzzFeed, and the author of the “Coding is for Girls” YouTube channel.

 

Tor Norbye
@tornorbye
Tor Norbye is the tech lead on the Android team at Google.

We know that developers don’t have much time to do anything else but build – that’s where we come in. We believe that it’s our personal responsibility to help developers work better, code better, and build better careers. After all, that is our motto. Even if we’re just helping on Twitter.

Want more content to help you be the best Java developer that you can be? Download our Comprehensive Java Developer’s Resource Guide today:

How DevOps Increases Security, Not Hurts It

Matt Watson Developer Tips, Tricks & Resources, Insights for Dev Managers Leave a Comment

One of the biggest challenges for development teams is having good visibility into production deployments. It is nearly impossible to track down application problems without access to critical data. Developers need access to a range of things, including application performance reporting, configurations, log files and more.

Does DevOps create or solve security challenges?

Possible DevOps Security Issues

DevOps typically refers to topics around application deployment, server provisioning, and application monitoring. All three of these topics have potential security implications.

Application Deployments

One of the best things about using continuous integration and deployment tools is their ability to create a repeatable and dependable way to deploy your application. How you deploy your application is scripted out and works the same way every single time.

From a DevOps security perspective, I see this as a huge upgrade over someone manually pushing code. It allows you to implement controls and security policies into your release process.

We are also starting to see new ways to add security scanning and testing into the build process. Products like Contrast Security are very interesting.

Server Provisioning & Configuration

I’m sure you have heard of infrastructure as code. Similar to scripting application deployments, scripting server deployments allows you to document and control the process.

By scripting out server configurations, you can also easily implement specific company policies. Things like what ports are open, automatic updates, and more.

Deploying to the cloud also changes everything. At Stackify, how we deploy to Azure is part of our application itself. We don’t even think about server provisioning or server configurations. Microsoft Azure takes care of securing our servers, Windows Updates, and other common issues.

Scripting server configurations enables security experts to have better visibility and be part of the security conversations throughout the process.

Application Monitoring

When it comes to monitoring and troubleshooting application problems, a DevOps approach solves a lot of security problems. The goal of DevOps is to create collaboration and improve the working relationships between development and operations. Monitoring is a perfect example of where a company can gain efficiency and even security with a DevOps mentality.

By giving developers access to the tools and more data, they no longer need administrator-level access to production. You can also get more developers involved in supporting their applications.

Application Monitoring: Developers Need Data, Not Production Access

In the past, many organizations were forced to give developers administrator-level permissions so they could support their apps. It was the only way for them to see if their apps were running, check their health, and access basic things like log files. This, of course, causes a lot of security concerns.

What developers really need is access to lots of data. Having to log in to servers one by one is not a good solution if your app runs on multiple servers.

What developers really need to support their apps:

  • Deployment history – What changed and when?
  • Application configurations – Is everything configured correctly?
  • Application errors – Is there a critical error going on?
  • Application log files – Logs are the eyes and ears for developers.
  • Server metrics – Need to double check server CPU, memory, disk, and network performance
  • Application metrics – Do we have issues with garbage collection or other key metrics?
  • Application performance – APM tools are invaluable for identifying why an application is slow or not performing correctly

Developers need tools that aggregate this data across multiple servers. Traditionally, developers have used multiple types of monitoring tools. APM solutions, like Retrace, can help solve this by combining all the data in one place.

Providing your entire development team access to this data fits into the DevOps mentality and solves some security challenges.

Summary of DevOps Security Impact

By leveraging DevOps best practices, companies can increase the velocity at which they do releases while improving security. DevOps and security issues related to it will continue to be big topics.

Scripting out how you do deployments and configure servers gives you the ability to review and audit the configurations.

By giving developers access to the data and monitoring tools that they need, you can also limit administrator level access to production.

To Tool is Human, To Java Tool is Developer Genius

Hannah White BuildBetter Leave a Comment

Tools make our lives easier – as humans, we’ve used tools to improve processes since the beginning of time. At BuildBetter, one of our highest priorities is to make devs’ lives easier. We’re developers ourselves, so we know that tools are an absolute necessity to get the job done as efficiently as possible with as few bugs as possible. That’s why we’re presenting these Java tools for developers.

In our most recent BuildBetter issue, we compiled a resource guide for Java developers. Seriously, this guide is HUGE and it contains all things related to Java Devs and their daily digs. This resource guide began as just a tool guide, so it’s only appropriate that our first blog features our development tools in the mag.

JDK 

Java developers need development tools like Oracle’s Java Development Kit (JDK) to develop and deploy Java applications on desktops, servers, and embedded environments. The JDK gives users enterprise-level features that minimize the costs of deployment and maintenance of their Java-based IT environment.

For new and experienced developers, this tool makes Java incredibly easy. Included in the kit are the Java Runtime Environment, the Java compiler, and the Java APIs. The JDK provides the rich user interface, performance, versatility, portability, and security that today’s developers want and need. Devs also reap the benefits of the Java SE community, such as opportunities for collaboration and early feedback from developers across the globe.

Eclipse

For developers looking for assistance with code completion, refactoring, and syntax checking, Eclipse is the tool for you. Eclipse provides Integrated Development Environments (IDEs) and platforms for nearly every language and architecture. Through its Java Development Tools project, Eclipse provides a range of useful plugins to help develop all kinds of Java applications. It’s famous for its Java IDE, as well as its C/C++, JavaScript, and PHP IDEs, which are built on extensible platforms for creating desktop, Web, and cloud IDEs. For the most extensive collection of add-on tools available, Eclipse is where it’s at.

Gradle

Whether you’re a small startup or a big enterprise and whether you’re building a mobile app or a microservice, Gradle is the productivity tool that helps dev teams build, automate, and deliver software faster. For continuous delivery in Java, C++, Python, or other languages of choice, Gradle allows developer teams to automate everything and deliver faster. Because Gradle’s build scripts are written in Groovy and not XML, it’s oriented toward being used as a language itself. This means that developer teams can integrate their own abstractions or use the ones that Gradle provides.

Take it from the development teams at LinkedIn, Netflix, and Android: the flexibility to structure your build, monitor and customize configuration, scale up or down depending on your project, and support multi-project builds are just a few of the features and benefits of using Gradle.

Jenkins

The clear leader in Java continuous integration is Jenkins. Flexible and built around a rich plugin system, it has dominated open source automation for the past five years.

Jenkins can be used as a simple CI server or turned into a continuous delivery hub for any project. Not only is it a self-contained, ready-to-run program, it can also be easily configured via its web interface, detecting errors on the fly and providing built-in support. Hundreds of plugins mean that you can integrate essentially any tool with Jenkins and extend its possibilities via its plugin architecture.

JUnit

Looking for a unit testing framework that will help you write and run tests? JUnit is the tool for the job. In the world of test-driven development, JUnit promotes the idea of “test first, code later.” It allows programmers to test one block of code at a time rather than waiting for the module to be completed before running a test. This check-as-you-go approach increases programmer productivity and the stability of your program code. JUnit also provides annotations to identify test methods and assertions for testing expected results. Who wouldn’t want to reduce stress and time spent debugging?

Cobertura

Its name means “coverage,” and that’s exactly what the Cobertura plugin provides. Cobertura is a free tool that calculates the percentage of code accessed by tests to identify which parts of the Java program are lacking test coverage. While Cobertura is meant to be used with Ant, it also works with the command line and plugins for Maven2 and Eclipse. Tests that use HttpUnit, HtmlUnit, Empirix, and Rational Robot can still be detected by Cobertura.

Cobertura’s claim to fame is its “pretty output,” an easy-to-digest report that translates to less time figuring out where to add test coverage. Cobertura’s generated report can also be used to improve efficiency, since an efficient line of code improves the efficiency of an entire application.

Groovy

Name a developer who isn’t interested in Groovy… we’ll wait. Its dynamic runtime nature and powerful static-typing and static compilation capabilities set Groovy apart from other Java development platforms. Boasting a flat learning curve and concise, easy-to-learn syntax, Groovy is aimed at effortlessly improving developer productivity. Its powerful features include closures, builders, runtime and compile-time metaprogramming, functional programming, type inference, and static compilation. It integrates with any Java program and immediately delivers its powerful features to your application.

IntelliJ IDEA

Every minute spent in the flow is a good minute. Minutes spent fixing a broken flow? Not so much. You don’t want to spend your precious dev time examining code and making connections, and with IntelliJ IDEA, you don’t have to.

IntelliJ IDEA analyzes code and looks for connections across all project files and languages, providing information for in-depth coding assistance, quick navigation, clever error analysis, and refactorings. Save time and maximize productivity with IntelliJ IDEA’s editor-centric environment, shortcuts for (nearly) everything, ergonomic user interface, and inline debugger. Other tools’ code completion features suggest the names of classes, methods, fields, and keywords; IntelliJ IDEA suggests only those that are expected in the current context. What’s not to love?

Appreciate this blog? Well, we can guarantee you’ll love our Java Developer’s Guide. You’ll get to read more about app performance tools, error and log tools, web extension tools, and messaging distribution tools, PLUS books, websites and blogs, YouTube channels, Twitter influencers, podcasts, events, and LinkedIn groups and influencers for Java developers.

Even if developing in Java isn’t in your wheelhouse, many of the tools and resources we’ve listed support multiple languages, so you’re bound to find something you can use – no matter what technology you’re developing in.

 

How to Evaluate Software Quality from Source Code

Erik Dietrich Developer Tips, Tricks & Resources, Insights for Dev Managers Leave a Comment

I’ll understand if you read the title of this post and smirked.  I probably would have done so, opening it up only to see what profound wisdom awaited me.  Review the code, Captain Obvious.  

So yes, rest assured, I understand the easy assumption that one can ascertain a codebase’s quality by opening it up and starting to review it.  But what does this really tell you?  What comes out of this activity?  Simply put, your opinion of the codebase’s quality comes out of this activity.

Data-Driven Ways to Evaluate Software Quality

Initially, the idea for this practice arose out of some observations I’d made a while back.  I watched consultants tasked with critical evaluations of codebases, and I found that they did exactly what I mentioned in the first paragraph.  They reviewed it, operating from the premise, I’m an expert, so I’ll record my anecdotal impressions and offer them as evidence.  That put a metaphorical pebble in my shoe and bothered me.  So I decided to chase a more empirical concept of code quality with my practice.

Don’t get me wrong.  The proprietary nature of source code and outcome data in the industry makes truly scientific experiments difficult.  But I can still automate the inquiries and use actual, relative data to compare properties.  So from that perspective, I’ll offer you more data-driven ways to evaluate software quality from source code.

Average Cyclomatic Complexity per Method

Perhaps you’ve heard of the term cyclomatic complexity.  Succinctly put, it describes the number of paths through a piece of code.  To picture this, consider the following trivial bits of code.

public int Add(int x, int y)
{
    return x + y;
}

The code above has a cyclomatic complexity of one, since only one path through the code exists.  But add a wrinkle, and you’ll have a cyclomatic complexity of two.

public int StrangeAdd(int x, int y)
{
    if (x < 0)
        return 0;

    return x + y;
}

You can use this to obtain a broad insight into software quality.  Compute the codebase’s cyclomatic complexity, normalized over the number of methods.  This tells you the complexity of the average method, which carries critical significance.  More paths through the code means more tests needed to verify the application’s behavior.  And this, in turn, increases the likelihood that developers and testers miss verification scenarios, letting untested situations into production.  Does that sound like a recipe for defects?  It should.

Cohesion and Coupling

Coupling and cohesion represent fairly nuanced code metrics.  I’ll offer an easy mnemonic at the risk of oversimplifying just a bit.  You can think of cohesion as the degree to which things that should change together occur together.  And you can think of coupling as the degree to which two things must change together.

Because software design forces endless trade-offs and few absolutes, we can’t simply declare good and bad here.  But some generalities do emerge.  Specifically, you want to maximize cohesion while minimizing unnecessary coupling.  You can never escape all coupling, or your code wouldn’t do anything.  And you can’t have total cohesion, either.  But you can work toward those goals.

Non-cohesive code creates the code smell affectionately known as shotgun surgery.  You may have experienced this in a codebase where doing something like adding a menu item to a GUI required you to modify 17 different files.  Software quality suffers as this leads to necessary changes slipping through the cracks.

Similarly, excessive coupling causes quality issues as well, albeit of a slightly different flavor.  It causes the application to behave in weird, embarrassing ways.  If changing the font on the login button causes some ETL process to fail, you have a problem caused by coupling.

Global State

In the wild, global state frequently becomes the complete bane of software quality.  With global state, you have information with a scope of the entire codebase.  In a small application, this might not matter, but have you ever seen this scale well?  If you have, please send me a link, because it would represent the first time I’ve ever encountered it.

With global state, you effectively give the software developers the ability to rip holes in the application’s metaphorical space-time.  They don’t need to think through object graphs, interfaces, or anything else.  They can simply reach into the ether and set a variable, and reach into it from somewhere else and read that variable.  Neat trick, until people set it in 45 different places and read it in 332 others.  Then, good luck reasoning about the variable’s value at any point in time and in any situation.
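
A tiny, contrived sketch of that trick in action – all names here are made up for illustration:

using System;

// mutable global state: visible to, and writable from, the entire codebase
public static class Globals
{
    public static int RetryCount;
}

class Configurator
{
    public void Startup()
    {
        Globals.RetryCount = 5;
    }
}

class Worker
{
    public void DoWork()
    {
        // the value observed here depends entirely on who wrote to it last
        Console.WriteLine(Globals.RetryCount);
    }
}

class Demo
{
    static void Main()
    {
        new Configurator().Startup();
        new Worker().DoWork(); // prints 5 – unless some other code got there first
    }
}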

Applications with significant global state tend to exhibit specific symptoms of low software quality.  They tend to encourage “defect whack-a-mole” wherein fixing one bug just creates another one.  This happens because developers can only address production issues through trial and error, reasoning having gone out the window.

Source Control Commit Hot Spots

Go take a look through a codebase’s commit history.  While there, look for something specific.  Do you see clustering around certain files or modules as hot spots?  You can easily quantify this and even plot a distribution graph of how frequently developers touch which files.

In codebases with a relatively even and low distribution, you tend to see good software quality as measured by outcome.  This happens because these codebases abide by something known as the open/closed principle.  This principle offers the wisdom that you should design source code elements to be closed for modification, but open for extension.

If that sounds curious to you, think of it this way.  Modifying existing code represents a relatively violent operation on a codebase compared to adding new code.  Modifying existing code runs the risk of breaking dependent code.  Adding code carries no such risk.  So the open/closed principle steers you toward adding frequently, but touching existing stuff as infrequently as possible.

Contrast this with a codebase that has commit hot spots.  This means people modify the code in question a lot.  Do enough assessments, and you’ll start to accurately perceive these hot spots as defect factories.  Luckily, you can steer your way toward better situations by modifying the design to eliminate these hot spots.

Reasoning about Software Quality

I’ve offered a series of ways to gain empirical insight into your expected software quality.  With all of these, you can easily quantify across entire codebases.  You can then compare that quantification with other codebases and observe differences in outcomes that matter to the business.  I submit that this beats scanning through source files and saying things like “this guy didn’t even know how to implement a hash code on his object!”

But I think you should take more away than a handful of application-wide metrics.  You should take away a preference for statistical and empirical consideration.  Figure out how to quantify trends you observe and form hypotheses about how they impact code quality.  Then do your best to measure them in terms of outcomes with the application.  We lack a laboratory and we lack non-proprietary data, but that doesn’t stop us from taking the lessons of the scientific method and applying them as best we can.

Top Source Code Repository Hosts: 50 Repo Hosts for Team Collaboration, Open Source, and More

Angela Stringfellow Developer Tips, Tricks & Resources, Insights for Dev Managers Leave a Comment

Every developer’s toolkit needs a good source code repository host; a good host for your code is especially useful for DevOps team collaboration and working with Open Source projects.

There are many source code repository hosts available online, from the widely-used GitHub and Bitbucket to the lesser-known yet useful repo hosts catering to specific needs. Each one appeals to different users and their specific projects: open source projects, multi-developer projects, and more. While having many diverse hosts to choose from is a luxury, the problem lies with determining the perfect source code repository host for your needs.

Some hosts boast features that others don’t have. In order to make it easier for you to choose the right host, we’ve put together this list of 50 popular source code repository hosts. NOTE: The following 50 source code repository hosts are listed in no particular order.

1. Bitbucket
@bitbucket

Bitbucket comes from the widely known Atlassian and offers unlimited private code repositories for Git or Mercurial. It is one of the most popular source code repository hosts in the development community.

Key Features:

  • Approve code review more efficiently with pull requests
  • Inline comments allow you to hold discussions within the source code
  • Flexible deployment models
  • Unlimited private repositories
  • Secured workflow
  • Integration with other tools that can help your team
  • JIRA integration
  • Custom domains
  • Code reviews

Cost:

  • Small teams – FREE up to 5 users
  • Large teams – $10/month (10 users), $25/month (25 users), $50/month (50 users), $100/month (100 users)
  • Unlimited users – $200 per month

2. SourceForge
@sfnet_ops

SourceForge is an open-source community resource. It’s been around longer than many of the other hosts on this list and is considered a top choice among open source developers. Open source projects for Linux, Windows, and Mac are typically hosted on SourceForge.

Key Features:

  • Host code for Git, Mercurial, and Subversion
  • Features threaded discussion forums and integrated issue tracking
  • View commit history as a graph

Cost: FREE

3. ProjectLocker
@ProjectLockerHQ

ProjectLocker is an enterprise-grade code repository host that comes with a private source code repository (Subversion hosting or Git hosting). These are compatible with standard clients.

Key Features:

  • Web-based management console for managing users
  • Automatic data backups
  • Fine-grained directory-based permission
  • BuildLocker continuous integration

Cost:

  • 15-day free trial for all plans
  • FREE – 1 user, 1 project, 50 MB storage
  • Venture – $19/month, 5 users, 5 GB storage, 5 projects
  • Equity – $49/month, 20 users, 10 GB storage, unlimited projects
  • IPO – $99/month, 50 users, 25 GB storage
  • Enterprise – Contact for a quote

4. GitLab
@gitlabhq

GitLab has a lot of features and tools and offers a variety of source code repository hosting options. One of its unique features is the ability to install GitLab on your own server, which allows you to use GitLab with custom domains and custom hosts.

Key Features:

  • Includes Git repository management, issue tracking, code review, an IDE, activity streams, wikis, and more
  • Allows you to install GitLab on your own server
  • Covers your bases by adding control over the development process
  • Built-in Continuous Integration and Continuous Deployment to test, build, and deploy code.

Cost:

  • Community Edition – FREE, unlimited users
  • Enterprise Edition Starter – $3.25 per user per month ($39 annually)
  • Enterprise Edition Premium – $16.59 per user per month ($199 annually)

5. CloudForge
@CloudForgeHQ

CloudForge from CollabNet offers Subversion Hosting and Git Hosting. You can choose between the two source code repository hosts. They have a wide selection of tools and features.

Key Features:

  • Version Control Hosting
  • Bug & Issue Tracking – Create, rank, assign and track issues with TeamForge tracker
  • Wikis, discussion forums, and document management
  • Granular permissions, project access, and security
  • 99.9% uptime, backups, support and global datacenters

Cost:

  • FREE trial for 30 days on all plans
  • Standard: $2/user/month (packs of 5), for small teams and non-critical projects
  • Professional: $10/user/month (packs of 5), for small business and enterprise workgroups

6. Fog Creek Kiln
@kilnfc

Kiln, from Fog Creek Software, is a paid source code host for Git and Mercurial. It is known for collaboration tools and its ability to keep code organized and secure. Kiln was created by the same company behind Trello and Stack Overflow.

Key Features:

  • Work in any part of your code
  • Branch, merge, clone, push, or pull with ease
  • HTTPS and SSH support and flexible user permission
  • Monitor updates across projects, repos, and commits
  • Save time when searching changesets, files, and code

Cost:

  • FREE 7-day trial
  • Up to 5 users: $20/month on monthly plan or $18/month on yearly plan
  • Up to 10 users: $100/month on monthly plan or $90/month on yearly plan
  • Up to 20 users: $200/month on monthly plan or $180/month on yearly plan
  • Up to 50 users: $400/month on monthly plan or $360/month on yearly plan
  • Up to 100 users: $500/month on monthly plan or $450/month on yearly plan
  • Up to 150 users: $700/month on monthly plan or $630/month on yearly plan
  • Up to 250 users: $900/month on monthly plan or $810/month on yearly plan
  • Up to 500 users: $1200/month on monthly plan or $1080/month on yearly plan
  • For 501+ users: Contact them for a quote
  • Pick optional add-ons to enhance FogBugz (for all plans): Time Tracking, Agile, Wiki, or Dev Hub

7. Launchpad
@launchpad_net

Launchpad is a software collaboration platform that provides bug tracking, code hosting using Bazaar, code reviews, a mailing list, and more. They use the Bazaar version control system to host project source code and import more than 2000 CVS, SVN and Git projects.

Key Features:

  • Bug tracking
  • Code hosting using Bazaar
  • Code reviews
  • Ubuntu package building and hosting
  • Translations
  • Mailing lists
  • Specification tracking

Cost: FREE

8. Codeplane
@codeplane

Codeplane is a paid service with Git as their VCS of choice. They offer up to 2GB for repositories with no limits on users or number of repositories per month. It’s a great choice for small companies or freelance teams.

Key Features:

  • 2GB for Git repositories
  • Unlimited users
  • Command-line
  • You can invite anyone you want without limits
  • Simple interface

Cost:

  • FREE 30-day trial
  • $9 per month

9. Assembla
@assembla

Assembla is the perfect host for Apache Subversion and Git. It is known for getting projects up and running quickly. Documentation, code reviews, and task management can all be handled through this app.

Key Features:

  • Task management
  • Features like Ticket Views and Milestones
  • Built for Agile – very customizable, and by default, all projects are set up for Agile development: from time-tracking on tickets to custom fields through code reviews
  • Can be shared or private install

Cost:

  • 10 users: $7.50 per user, $75 monthly price
  • 15 users: $6.75 per user, $101 monthly price
  • 20 users: $6.75 per user, $135 monthly price
  • 30 users: $5.70 per user, $171 monthly price
  • 50 users: $5.70 per user, $285 monthly price
  • 70 users: $5.45 per user, $381 monthly price
  • 100 users: $5.45 per user, $545 monthly price
  • 150 users: $4.98 per user, $747 monthly price
  • 200 users: $4.98 per user, $996 monthly price
  • 200+ users: Custom plan (contact for a quote)

10. CodePlex
@codeplex

CodePlex is a free and open source project hosting offering from Microsoft. CodePlex allows you to create projects that you can share. You can also collaborate on projects with others and download open source software.

Key Features:

  • Source code control
  • Wiki pages and project discussions
  • Issue tracking

Cost: FREE

11. Beanstalk
@beanstalkapp

Beanstalk is a Git and SVN hosting service that doesn’t require a client. You can add files, create branches, and edit directly in your browser for instant gratification.

Key Features:

  • Get total control of both teams and individuals with repository and branch-level permissions
  • Keep your entire team on the same page with notifications
  • Email digest, compare view, detailed history
  • Fluid code review
  • Issue tracker and statistics

Cost:

  • FREE for the first two weeks
  • Bronze: $15/month, 3GB of storage, 10 repositories, 5 users, 3 servers/repository
  • Silver: $25/month, 6GB of storage, 25 repositories, 20 users, 5 servers/repository
  • Gold: $50/month, 12 GB of storage, 50 repositories, 40 users, 10 servers/repository
  • Platinum: $100/month, 24GB of storage, 120 repositories, 100 users, 20 servers/repository
  • Diamond: $200/month, 60GB of storage, 300 repositories, 200 users, 40 servers/repository

12. Savannah

Savannah is becoming one of the most popular source code repository hosts, allowing you to host free projects running on free operating systems without proprietary software dependencies.

Key Features:

  • Maintenance and distribution of official GNU software
  • Host projects that aren’t a part of GNU but support free software with savannah.nongnu.org

Cost: FREE

13. CCPForge

CCPForge started as a collaborative software development environment tool for the Collaborative Computational Projects (CCP) community. It has now broadened its scope to all UK computational research and development projects. It aims to be as user-friendly as possible.

Key Features:

  • Choose from CVS, SVN, or Git
  • Bug tracking
  • Functionality fixing
  • Developers and user forums
  • Feature request and other support request tracking

Cost: FREE

14. RepositoryHosting.com
@rephosting

RepositoryHosting.com aims to make developing and completing source code projects as simple as possible. Developers can choose their repositories and create as many Subversion, Git, and Mercurial projects as they want.

Key Features:

  • Choose repositories
  • Browse your code
  • Organize projects and users
  • Comprehensive permission management
  • Host open source projects
  • Tickets, Milestones, Wikis, Blogs, and Discussion Forums

Cost:

  • FREE 30-day trial
  • $6/month

15. Codebase
@codebase

Codebase is great for teams keeping track of code and managing projects to ensure the continuous delivery of excellent software. It’s professional code hosting for developers, and it allows software teams to choose their repositories.

Key Features:

  • Git, Mercurial, Subversion repositories
  • Code hosting
  • Tickets, issues, and milestones
  • File sharing
  • Time tracking
  • Discussions
  • Wikis/notebooks

Cost:

  • FREE for 15 days on all plans
  • Hobbyist Plan: £9/month, 6 active projects, unlimited archived projects, 4GB disk space, 10 users
  • Freelancer Plan: £19/month, 20 active projects, unlimited archived projects, 8GB disk space, unlimited users
  • Studio Plan: £29/month, 45 active projects, unlimited archived projects, 14GB disk space, unlimited users
  • Agency Plan: £59/month, 110 active projects, unlimited archived projects, 30GB disk space, unlimited users

16. Unfuddle
@unfuddle

Unfuddle is a full-stack software project management tool that provides bug and issue tracking, Git and Subversion hosting, and collaboration tools in one central place to streamline your workflow.

Key Features:

  • Track tasks, issues, bugs
  • Feature requests
  • View, pivot, and organize tasks with drag and drop convenience
  • First-class Git hosting with an unlimited number of repositories
  • Code review

Cost:

Personal Projects: FREE

Organizations:

  • $19/month for up to 5 people
  • $59/month for up to 10 people
  • $99/month for up to 15 people
  • $249/month for up to 25 people
  • $499/month for up to 50 people, plus $3/person over 50

17. Jenkins
@jenkinsci

Jenkins is a leading open source automation server. It provides hundreds of plugins for supporting, building, and automating projects.

Key Features:

  • Continuous integration and delivery
  • Easy installation and configuration
  • Plugins
  • Extensible
  • Distributed

Cost: FREE

18. SourceRepo
@sourcerepo

Run by and for developers, SourceRepo provides an easy-to-use control panel for running Git, Subversion, and Mercurial, with a free project management solution.

Key Features:

  • Easy to use Control Panel
  • Unlimited Users/Developers for each Repository
  • Secure Access and Hourly Backups
  • Project Management Software
  • Hook Script Integration
  • Free 24/7 personal technical support

Cost:

  • Level One: $3.95 per month, 500 MB Storage, 1 Git, SVN, or HG repository, 1 Trac Instance and 1 Redmine Project, Unlimited Developers/Committers
  • Level Two: $6.95 per month, 1 GB Storage, Unlimited Git, SVN, or HG Repository, Unlimited Trac Instances and Redmine Projects, Unlimited Developers/Committers
  • Level Three: $12.95 per month, 3GB Storage, Unlimited Git, SVN, or HG Repository, Unlimited Trac Instances and Redmine Projects, Unlimited Developers/Committers

19. kforge 0.20
@pypi

An enterprise software application for project hosting, KForge enables you to control access with a robust, role-based, single sign-on access controller. Their service also includes version control systems such as Git, Mercurial, and Subversion.

Key Features:

  • Control access with robust, role-based, single sign-on access controller
  • Project frameworks with features to help you plan and track work
  • Wikis and a mailing list
  • Content management systems and blogs
  • Version control systems such as Git, Mercurial, and Subversion

Cost: FREE

20. Deveo
@deveoteam

Deveo is a repository management platform that can support Git, Mercurial, SVN, WebDAV and more.

Key Features:

  • Repository management
  • Collaboration tools
  • Code review tools and Kanban type issue tracking
  • Git-powered Wiki for documentation and file sharing
  • Cloud or on-premise; works on most Linux operating systems
  • Support for common cloud providers

Cost:

  • Cloud: FREE, unlimited users, unlimited projects, unlimited repositories, use from the cloud, 1GB of storage for free, 1€/GB/month after 1GB
  • ON-PREMISES: 36€ / year / user
  • ENTERPRISE: Contact for a quote

21. Phabricator
@phabricator

Phabricator

Phabricator is an integrated set of powerful tools that aim to help build higher quality software.

Key Features:

  • Apps that can help manage tasks and sprints, review code, host Git, SVN, or Mercurial repositories
  • Build with continuous integration
  • Discuss in integral chat channels
  • Fast, scalable and fully open source
  • No local limitations

Cost:

  • For Projects: FREE, Phacility hosted, up to 5 users
  • For Business: Phacility hosted, $20 per user / month, no added cost after 50 users
  • For Enterprise: Private cluster, Phacility hosted, $500 per host / month, unlimited users

22. Review Board
@reviewboard

Review Board

Review Board was designed to support talking to multiple source code repositories of different types. A single server can be configured with an unlimited number of repositories, and you can also link up a repository with a supported hosting service.

Key Features:

  • They provide a fast and easy way to configure the repository without having to figure out specific paths
  • Bug tracker
  • Easy configuration for working with different hosting services
  • Can generate an SSH key to be used with repositories

Cost: FREE

23. SVNRepository.com

SVNRepository.com

SVNRepository.com is a Subversion hosting company run by and for developers. Its easy-to-use control panel helps you get Subversion and Trac running in no time.

Key Features:

  • Intuitive control panel
  • Unlimited users/developers for repositories
  • Trac and Redmine automatically installed for each repository
  • Automatic project setup for each repository
  • Hourly backups
  • Secure HTTPS access to SVN repository and Trac instances

Cost:

  • Level One: $3.95/month, 500 MB Space, 1 SVN or Git or Mercurial Repository, 1 Trac Instance, 1 Redmine Project, Unlimited Developers/Committers
  • Level Two: $6.95/month, 1 GB Space, Unlimited SVN, Git, and Mercurial Repositories, Unlimited Trac Instances, Unlimited Redmine Projects, Unlimited Developers/Committers
  • Level Three: $12.95/month, 3 GB Space, Unlimited SVN, Git, and Mercurial Repositories, Unlimited Trac Instances, Unlimited Redmine Projects, Unlimited Developers/Committers

24. Gna!

Gna!

Gna! is a source code repository that serves as a central point for the development, distribution, and maintenance of Libre Software (Free Software) projects. It provides source code repositories including CVS, GNU Arch, and Subversion.

Key Features:

  • Source Code Repositories (CVS, GNU Arch, Subversion)
  • Download area
  • Web pages
  • Mailing lists and trackers (bugs, tasks, support requests, patches)

Cost: FREE

25. Pikacode
@pikacode

Pikacode

Pikacode is a source code repository host for Git and Mercurial.

Key Features:

  • Git and Mercurial hosting
  • Public and private repositories
  • Bug tracker
  • Roadmap and milestones
  • Durable backup
  • Sponsors code hosting for Hoa project

Cost:

  • FREE – 100MB storage, unlimited public repositories and collaborators
  • 14.99€/year – 1GB storage, unlimited private repositories and unlimited private collaborators

26. Planio
@planio

Planio

Planio is a host for Subversion and Git repositories. A Planio account comes with unlimited hosted SVN and Git repositories which are integrated, highly available, and secured.

Key Features:

  • Task management and workflows solution
  • Agile project management
  • Roles and permissions to manage access control
  • Effective communication through blog, forums, and comments
  • Time tracking

Cost:

  • FREE trial for 30 days on all plans
  • Platinum: 99 € / month, 40 active Projects, 45 active Users, 50 GB Storage
  • Diamond: 59 € / month, 15 active Projects, 20 active Users, 30 GB Storage
  • Gold: 39 € / month, 7 active Projects, 10 active Users, 15 GB Storage
  • Silver: 19 € / month, 3 active Projects, 5 active Users, 2 GB Storage
  • Enterprise: from 199 €* / mo, 100 Projects or more, 100 Users or more, 100 GB Storage or more

27. RhodeCode
@rhodecode

RhodeCode

RhodeCode is an enterprise open source code repository platform. It can host Mercurial, Git, and Subversion.

Key Features:

  • Team collaboration
  • Workflow automation
  • Integrate an existing code base with new tools and issue trackers
  • Secured repositories
  • Audit and report code compliance

Cost:

  • Community Edition (CE): Free & Open Source, unlimited users, hosted on-premises
  • Enterprise Edition (EE): (30-day trial) $75 per user/year, minimum 10 users, seats offered in 10-packs

28. Pulp

Pulp

Pulp is a platform for managing repositories that makes it easy to fetch, upload, host, publish, and apply software packages.

Key Features:

  • Supported types: RPM, Python, Puppet, Docker, OSTree
  • Free and open-source
  • Can locally mirror all or part of another code repository
  • Host your own software packages in repositories
  • Manage content from multiple sources in one place

Cost: FREE

29. TuxFamily

TuxFamily

A non-profit organization offering free services for projects working under the free software philosophy, TuxFamily provides “free hosting for free people.”

Key Features:

  • Web hosting (PHP5 is supported)
  • MySQL and PostgreSQL databases
  • CVS, Subversion, Git, and Mercurial repositories
  • Download area of up to 1 GB (can be increased if more space is needed)
  • 200 MB quota for all groups

Cost: FREE

30. Versionshelf
@versionshelf

Versionshelf

Versionshelf is a secure service for effortless hosting of Git, Subversion, and Mercurial.

Key Features:

  • Use Git, Subversion, and Mercurial
  • Assign user accounts, teams, and permissions
  • Track commit log activity with RSS feeds
  • Use post-commit hooks to integrate with your issue tracker
  • Trigger web hooks after each commit
  • Web repository access for all users

Cost: 

  • All plans have 30-day FREE trial
  • Premium Plan: $79.00 /month, Accounts: unlimited, Repositories: unlimited, Storage capacity: 18 GB
  • Plus Plan: $39.00 /month, Accounts:  45, Repositories: 30, Storage capacity: 8 GB
  • Basic Plan: $19.00 /month, Accounts: 20, Repositories: 15, Storage capacity: 3 GB
  • Personal Plan: $6.95 /month, Accounts: 5, Repositories: 6, Storage capacity: 600 MB

31. Pastebin
@pastebin

Pastebin

Pastebin is a source code repository host providing an online space to store text for a certain period of time. It’s mostly used by developers to store source code or configuration information.

Key Features:

  • Available API
  • Syntax highlighting available for almost any language
  • Control your pastes as public or private

Cost:

  • Available FREE plan (limited features)
  • PRO ACCOUNT:
    • Monthly: $2.95 (per month)
    • Yearly: $23.95 (per year)

32. Eclipse
@EclipseFdn

Eclipse

An open-source community of tools, projects, and collaborative working groups, Eclipse is a popular service among developers.

Key Features:

  • IDE and other tools
  • Community of projects
  • Collaborative working groups

Cost: FREE

33. TurnKey GNU
@turnkeylinux

TurnKey GNU

TurnKey GNU offers revision control as part of an all-in-one code repository appliance. It combines several open source version control systems in one place, which makes setup easier for developers.

Key Features:

  • Revision control systems supported: Git, Bazaar, Mercurial, Subversion
  • SSL support out of the box
  • Webmin module for configuring Apache2
  • Includes TurnKey web control panel

Cost: FREE

34. Transifex
@transifex

Transifex

Transifex is a localization platform to power global content, aiming to drive international growth with translation tools and a central location for automating localization.

Key Features:

  • Manage translation
  • Translate content
  • Collaborate with translators
  • Automate localization process from one central place
  • Build personalized multilingual experiences
  • Translate website without coding

Cost:

  • STARTER: $139 per month, billed annually ($179/mo billed monthly)
  • GROWTH: $369 per month billed annually ($449/mo billed monthly)
  • ADVANCED: $749 per month billed annually ($899/mo billed monthly)
  • PRO: $1,350 per month billed annually
  • ENTERPRISE: Contact for a quote

35. Tigris
@tigrisdotorg

Tigris Source code tool

Tigris is a mid-sized open source community focused on building tools for collaborative software development.

Key Features:

  • Informational resources for software engineering professionals and students
  • Every project fits into the Tigris mission
  • Produces a number of very powerful and useful software development tools

Cost: FREE

36. GitHub
@github

GitHub

GitHub is a development platform. Using this source code repository host, you can review code, manage projects, and build software together with other developers.

Key Features:

  • Write better code
  • Collaborations
  • Conversations and code reviews
  • Project management alongside code in issues and projects
  • Code security
  • Access controlled
  • Hosted where you need it

Cost:

  • Developer: $7 per month – unlimited public repositories, unlimited private repositories, unlimited collaborators
  • Team: $9 per user / month – unlimited public repositories, unlimited private repositories, team and user permissions
  • Business: $21 per user / month – hosted on GitHub.com, free trial available

37. Perforce
@perforce

Perforce

Perforce is known for its “Helix” platform that offers a complete software collaboration system with issue tracking, code review, and advanced features like Threat Detection.

Key Features:

  • Version control that keeps customers like Scania on the road to compliance
  • Perforce Helix, which supports both centralized and distributed workflows with enterprise-grade scalability
  • Additional developer resources

Cost:

  • Free for a small team
  • 12-Month Subscription and Perpetual Use licenses: request a quote for other plans

38. Chisel

Chisel

A Fossil SCM host offering unlimited Fossil repositories, Chisel is free software licensed under the ISC license.

Key Features:

  • Weekly repositories backup
  • Public or private distinctions to control access to repositories
  • Submit issues and suggestions at any time

Cost: FREE

39. Buddy
@BuddyGit

Buddy Source Code Repository Host

Buddy is a Git host that allows users to build, test, and deploy code in seconds.

Key Features:

  • Build apps and run commands in isolated Docker containers
  • Deploy to FTP/SFTP, Amazon S3, Elastic Beanstalk, DigitalOcean, Heroku, Azure, and more
  • Set up custom developer environments with Docker images
  • Automate development
  • Flexible deployments

Cost: All plans have a free trial available

Buddy Cloud:

  • PLAY: FREE, 1 concurrent run, 1 project
  • FREELANCER: $49/MO, 1 concurrent run, 25 projects
  • TEAM: $99/MO, 2 concurrent runs, 50 projects
  • SOFTWARE HOUSE: $199/MO, 4 concurrent runs, 100 projects
  • ENTERPRISE: $299/MO, 6 concurrent runs, unlimited projects

BUDDY GO:

  • FREE, Up to 10 users
  • Enterprise: $75 for every 5 users, per month, unlimited users

40. Subversion

Subversion Source Code Repository Host

Subversion is an open source software developed as a project of the Apache Software Foundation. It is part of a rich community of developers and users.

Key Features:

  • Most CVS features
  • Directories are versioned
  • Copying, deleting, and renaming are versioned
  • Free-form versioned metadata
  • Atomic commits
  • Branching and tagging are cheap operations
  • Merge tracking

Cost: FREE

41. Gogs – Go Git Service
@GogsHQ

Gogs - Go Git Service Source Code Repository Host

Gogs is a self-hosted Git service. It is 100% open source and free of charge. All source code is available under the MIT License on GitHub.

Key Features:

  • Easy to install
  • Cross-platform: runs anywhere Go can compile for, including Windows, Mac, Linux, and ARM
  • Lightweight

Cost: FREE

42. Kallithea

Kallithea Source Code Repository Host

Kallithea, a member project of the Software Freedom Conservancy, is a GPLv3-licensed, free software source code management system. It supports two leading version control systems: Mercurial and Git.

Key Features:

  • Built-in push/pull server
  • Easy to integrate
  • Code review
  • Contribute online
  • VCS visualized

Cost: FREE

43. Microsoft Visual Studio Team Services

Microsoft Visual Studio Team Services

Visual Studio Team Services offers an open platform for any development stack, including code hosting as well as a Continuous Integration service and Agile planning tools.

Key Features:

  • Agile tools
  • Git
  • Continuous integration
  • Release management
  • Tools for Java Teams
  • Centralized version control system with free private repos
  • DevOps
  • Enterprise ready
  • Cloud-based load testing

Cost:

Small teams:

  • Free – 5 Users with access to Basic features like unlimited Git repos, Agile tools, exploratory testing, release management, and more.
  • Unlimited users with access to work items; 1 Private Pipeline to run builds and deploy releases from your own server; 1 Hosted Pipeline (4 hours per month) to run builds and deploy releases in the cloud.

Growing teams:

  • $30/month – 10 users
  • $110/month – 20 users
  • $350/month – 50 users
  • $750/month – 100 users
  • $1150/month – 200 users
  • $4350/month – 1000 users
  • Pay only for the users on your team who need access

44. Gitolite

Gitolite Source Code Repository Host

Gitolite hosts Git repositories and allows you to set up Git hosting on a central server, making it possible to control access to many Git repos (see the configuration sketch below).

Key Features:

  • Setup Git hosting on a central server
  • Fine-grained access control
  • Can be installed without root access
  • Control access to many Git repositories

Cost: FREE
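
For a sense of how Gitolite’s access control works, here is a minimal, hypothetical gitolite.conf sketch; the group, user, and repository names are invented for illustration:

# hypothetical gitolite.conf -- users, groups, and repos are placeholders
@developers = alice bob

repo gitolite-admin
    RW+ = admin

repo projectx
    RW+ = @developers
    R   = carol

Each permission line grants read (R), read/write (RW), or read/write plus rewind (RW+) access to the named users or groups.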

45. Springloops
@springloops

Springloops Source Code Repository Host

Springloops is a useful web development tool with lightning-quick deployments. It offers SVN/Git version control combined with web development features.

Key Features:

  • Lightning-fast deploy
  • Load revisions of your project at any time
  • No limit on the number of users and servers
  • Eliminate the risk of error by copying server settings, then sharing them with your team

Cost:

PERSONAL:

  • FREE, 100 MB of space, 1 repository
  • $15, 3 GB of space, 10 repositories
  • $25, 6 GB of space, 25 repositories

BUSINESS:

  • $50, 12 GB of space, 50 repositories
  • $100, 24 GB of space, 120 repositories
  • $200, 60 GB of space, unlimited repositories

46. XP-Dev
@xpdev

XP-Dev Source Code Repository Host

XP-Dev is an all-in-one enterprise-grade private code hosting solution for collaborating on projects, as well as sharing and deploying code.

Key Features:

  • Git hosting
  • Subversion hosting
  • Mercurial hosting
  • Repository deployments
  • Repository integrations
  • Global project and repository hosting
  • Trac hosting
  • Agile project management
  • Real-time backups

Cost:

  • Pro Small: $5/month or $48/year, 2GB storage
  • Pro MSmall: $10/month or $96/year, 5GB storage
  • Pro Medium: $15/month or $144/year, 10GB storage
  • Pro Large: $30/month or $288/year, 20GB storage
  • Enterprise Small: $50/month or $480/year, 40GB storage
  • Enterprise Medium: $100/month or $960/year, 90GB storage

47. GerritForge
@gerritforge

GerritForge Source Code Repository Host

GerritForge offers development services and enterprise-grade support. It is one of the main contributors to Gerrit Code Review and provides LDAP integration, single sign-on, and more.

Key Features:

  • LDAP integration
  • Single-Sign-On
  • Role-Based Access Control
  • Lifecycle integration with enterprise-grade support on a 24/7 basis

Cost:

  • Base Package – up to 100 users – $250/month
  • Silver Package – up to 500 users – $1,175 /month
  • Gold Package – up to 1000 users – $2,225 /month
  • Platinum Package – up to 5000 users – $8,075 /month

48. Alioth / FusionForge
@fusionforge

Alioth / FusionForge Source Code Repository Host

FusionForge aims to foster better team collaboration, offering tools such as message forums, mailing lists, and overall management of the entire development lifecycle.

Key Features:

  • Control access to source code management repositories such as CVS and Subversion
  • Manage file releases
  • Document management
  • News announcements
  • Surveys for users and admins
  • Issue tracking with “unlimited” numbers of categories, text fields, and more

Cost: FREE

49. Git

Git Source Code Repository Host

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. Git’s source code is hosted on GitHub, and the project is a member of the Software Freedom Conservancy.

Key Features:

  • Easy to learn
  • Has a tiny footprint with lightning fast performance
  • Outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase
  • Cheap local branching (see the example below)
  • Multiple workflows

Cost: FREE
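
To see why cheap local branching matters, here is a minimal illustration; the branch name and commit message are examples:

git checkout -b feature/login    # create and switch to a new local branch instantly
git commit -am "Add login form"  # commit work on the branch
git checkout master              # switch back to the main branch
git merge feature/login          # merge locally; no server round-trip required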

50. Java.net

Java.net Source Code Repository Host

Java.net by Oracle is a source for Java technology collaboration, designed to make it easy for people to create projects.

Key Features:

  • Each project has a project owner who can monitor the project
  • Owner can grant roles and permissions to users
  • Spam elimination through the project creation process
  • Scalable

Cost: FREE

What Is Function-as-a-Service? Serverless Architectures Are Here!

Matt Watson Developer Tips, Tricks & Resources Leave a Comment

It has never been a better time to be a developer! Thanks to cloud computing, deploying our applications is much easier than it used to be. How we deploy our apps continues to evolve thanks to cloud hosting, Platform-as-a-Service (PaaS), and now Function-as-a-Service.

What is Function-as-a-Service (FaaS)?

FaaS is the concept of serverless computing via serverless architectures. Software developers can leverage it to deploy an individual “function”, action, or piece of business logic. Functions are expected to start within milliseconds, process an individual request, and then shut down.

Principles of FaaS:

  • Complete abstraction of servers away from the developer
  • Billing based on consumption and executions, not server instance sizes
  • Services that are event-driven and instantaneously scalable

Timeline of moving to FaaS

At the basic level, you could describe them as a way to run some code when a “thing” happens. Here is a simple example from Azure Functions that shows how easy it is to process an HTTP request as a “Function”:

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Get request body
    dynamic data = await req.Content.ReadAsAsync<object>();

    return req.CreateResponse(HttpStatusCode.OK, "Hello " + data.name);
}

Benefits & Use Cases

Like most things, not every app is a good fit for FaaS.

We have been looking to use them at Stackify primarily for our very high volume transactions. We have some transactions that happen hundreds of times per second. We see a lot of value in isolating that logic to a function that we can scale.

  • Super high volume transactions – Isolate them and scale them
  • Dynamic or burstable workloads – If you only run something once a day or month, no need to pay for a server 24/7/365
  • Scheduled tasks – They are a perfect way to run a certain piece of code on a schedule

Function-as-a-Service Features

Types of Functions

There are a lot of potential uses for functions. Below is a simple list of some common scenarios. Support and implementation vary by provider.

  • Scheduled tasks or jobs
  • Process a web request
  • Process queue messages
  • Run manually

These functions could also be chained together. For example, a web request could write to a queue, which is then picked up by a different function.
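
For instance, here is a minimal sketch of a queue-triggered function in the same Azure Functions C# script style as the HTTP example above; the parameter name is an assumption, and the queue binding itself would be declared in the function’s configuration:

// Runs once per message on the bound queue -- for example, a message
// written by an HTTP-triggered function like the one shown earlier.
public static void Run(string queueMessage, TraceWriter log)
{
    log.Info("Processing queue message: " + queueMessage);
}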

FaaS Providers

AWS, Microsoft Azure, and Google Cloud all provide a solution.  A lot of innovation is still going on in this area and things are rapidly improving and changing.  Read our article on how AWS, Azure, and Google Cloud compare to determine which cloud best meets your needs.

Monitoring Challenges

One of the big challenges is monitoring your function apps. You still need to understand how often they execute, how long they take, and, potentially, why they are slow.

Since you don’t necessarily have a server or control the resources they are running on, you can’t install any monitoring software.

How we monitor these new types of apps is going to continue to evolve.

Comparing FaaS vs PaaS

Platform-as-a-Service greatly simplifies deploying applications. It allows us to deploy our app and the “cloud” worries about how to deploy the servers to run it. Most PaaS hosting options can even auto-scale the number of servers to handle workloads and save you money during times of low usage.

PaaS offerings like Azure App Services, AWS Elastic Beanstalk, and others, make it easy to deploy an entire application. They handle provisioning servers and deploying your application to the servers.

Function-as-a-Service (FaaS) provides the ability to deploy what is essentially a single function, or part of an application. FaaS is designed to be a potentially serverless architecture, although some providers, like Azure, also allow you to dedicate resources to a Function App.

When deployed as PaaS, an application is typically running on at least one server at all times. With FaaS, it may not be running at all until the function needs to be executed. It starts the function within a few milliseconds and then shuts it down.

Both provide the ability to easily deploy an application and scale it, without having to provision or configure servers.

FaaS Case Study

If you aren’t too sure if you are ready to take the plunge into FaaS, maybe a case study will help!

Netflix uses AWS Lambda to help manage their AWS infrastructure with event-based systems.

Summary

Developers hate servers. The idea of serverless architectures sounds like a panacea for developers. That said, I don’t see FaaS as being a complete replacement for normal application architectures. For a basic web application, it would take a lot of functions.

In my humble opinion, function-based apps are a perfect fit for replacing microservice-style architectures and background services.

Images from Building “Serverless” Integration Solutions with Azure Logic Apps

Mistakes Implementing APM Solutions

20 IT Leaders Reveal the Biggest Mistakes IT Management Teams Make When Implementing Application Performance Monitoring Solutions

Angela Stringfellow Developer Tips, Tricks & Resources, Insights for Dev Managers Leave a Comment

Application Performance Management (APM) solutions are a must-have for Agile development teams, and when implemented correctly, they can save substantial amounts of time, create a better end user experience, and improve overall development operations. (Naturally, we’re big on APM – it’s what we do.)

The key to success, though, is implementing systems and solutions that are aligned with larger business goals and knowing how to leverage your tools to your advantage. So, we rounded up some advice from developers and IT leaders to offer some insight on this question:

“What’s the biggest mistake IT management teams make when it comes to implementing application performance monitoring processes (and how can they fix it)?”

Meet Our Panel of Developers and IT Leaders:

Read on to find out how you can better leverage APM to your team’s strategic advantage.


Mark Hayford

@MorpheusData

Mark supports, administers, and helps improve Morpheus IT infrastructure. Prior to Morpheus, Mark served as a Cloud Architect for a data consultancy, Support and Solutions Engineer at VMware, and a Procurement Automation Administrator at Lockheed Martin.

“After about 15 years in IT, here’s a couple of the most common mistakes that I see all the time…”

  1. They address the error, not the root cause. It’s the job of IT managers to know what’s going on and to try catching problems before they get out of control. As a result, we sometimes become too reliant on third-party technology. Tools like Datadog and New Relic are great, but they are not going to tell you exactly what the problem is. Yeah, they can help point out bottlenecks, but unless you resolve those bottlenecks by finding the root cause, you are going to continue having problems. Don’t expect a tool to solve your problems—find the root cause, create new checks and error messages, and you’ll be able to deter potential problems in the future.
  2. They don’t consider the business impact. Sometimes we get a new tool because we think it will solve all our problems. While that may be the case sometimes, we usually have to do some groundwork before we can achieve true success. Here’s an example: We are tasked with monitoring the most important systems in our organization, so we put performance monitoring on everything. Good? No. I’ve seen so many teams use every single license they have, just for the sake of using them all. They think they need to put performance monitoring on everything, so they hook up the performance monitor to all their servers. What this inevitably does is open more servers and systems to bugs, and suddenly, Dev is underperforming because of a “good thing.” Focus on what’s most important and make it solid. Then build out from there. (This ties in with the idea that performance management is sometimes about catching what we need to care about, not just the problems at hand. Monitoring the performance of high-use, mission-critical systems is vital, and an issue there should not be treated with the same level of concern as issues elsewhere. I’ve seen groups say, “Well, we only had 4 errors last month—everything is fine.” Well, those 4 errors were all in production, but they had 30 errors/alerts in staging and QA that they’re afraid to mention. All errors are not created equal. We need more errors in staging and QA, and we need to recognize those as good things because they are preventing errors in Prod.)
  3. Start earlier. Build performance expectations and testing into the earliest parts of new app development. This is hard to do on legacy applications but is a great practice for Agile shops. By setting desired expectations on some early metrics as you build, those performance metrics become visible targets for the team. You can start dealing with the impact earlier and forecast performance for production settings. Again, this can help cross-functional teams avoid problems down the road and point out issues and bugs sooner.


Tapas Banerjee

@WebAgeSolutions

Tapas Banerjee is the CEO of Web Age Solutions.

“The single biggest mistake IT Management Teams make implementing Application Performance Monitoring is…”

Not having an enterprise monitoring strategy. This can stem from the mistaken notion that APM is server monitoring, or from bringing in solutions that someone on the team used before without considering the overall application set in play.

Fixing the gap means creation and implementation of a monitoring strategy. The monitoring strategy should cover the collection of monitoring data from all of the parts in the organization’s business solutions, so that you can proactively identify and resolve failures. Your monitoring plan should address, at a minimum, the following questions:

  • What business goals does your monitoring support?
  • What are the categories and specific resources you will monitor?
  • How will you measure the success of your monitoring?
  • How often will you review your monitoring plan?
  • What is the periodicity of the monitoring of applications and resources?
  • Who will perform the monitoring tasks?
  • Who should be notified when something goes wrong?
  • What monitoring tools do you use?
  • What is your organizational DevOps monitoring maturity?
  • What monitoring tools will you use?


Dan Rasband

@danrasband

Dan is the Development Team Lead for Objective in Salt Lake City, Utah. He holds a master’s degree in Linguistics from the University of Hawai’i and received bachelor’s degrees from Brigham Young University in Korean and Linguistics.

“The biggest mistake that IT management teams make when it comes to implementing application performance monitoring processes is that…”

They don’t implement them. This can be fixed by using services such as New Relic or Skylight for web application performance, and Crashlytics or similar for iPhone apps. There are similar services and/or libraries for pretty much every type of application.


David Lynch

@ITXcorp

David Lynch is a Marketing Specialist for ITXcorp.

“When IT management looks at application performance monitoring…”

They tend to think in terms of the systems that they manage and not the experience of the end user. This leads to a narrow focus on the underlying architecture (solving technical problems) first. Put another way, the focus of the performance monitoring is on the underlying systems, and this monitoring is used as a proxy for end user experience, such as “Is the disk performing well?” or, “Is the CPU under load?” Moving the target by asking, “Is the end user able to use the system to accomplish or advance their goals?” allows the system to be examined from a different context and helps to focus investment and iterative performance improvement tasks on those portions of the system which are most important.


Mihai Corbuleac

@UnigmaApp

Mihai Corbuleac is a Cloud Consultant at Unigma Monitoring Solution.

“The biggest mistake IT management teams make when it comes to implementing application performance monitoring processes is…”

The fact that people don’t usually give the performance monitoring app time to gather enough data to make forecasts and generate accurate suggestions for improving performance. Our tool monitors cloud-based apps, improves cloud costs, and also generates app performance suggestions, and I know that people are eager to optimize as soon as possible. We always share immediate suggestions, but the best optimization comes after a while.


Michael Reddy

@digitalacumen

Michael is the Founder & Chief Analytics Officer for Digital Acumen. He worked with Fortune 100 companies in these roles: media mix and statistical modeling, test & learn, web analytics, product, and program management.

“The biggest challenge to successful performance monitoring isn’t the vendor, the reports, or the actionable intelligence…”

It is executing on the intelligence.

We worked with a major B2B service provider who used a major performance monitoring tool but was not making substantial changes based on the information collected. Their challenge was getting buy-in that the changes were critical to the user experience, so they had significant difficulty getting the fixes prioritized.

The solution was to present evidence on the importance of site speed and uptime, and their effect on bounce rates and thus conversion. The data shows that a one-second delay in site load can drop conversions by 7%. Their survey results also showed page load as a top-five complaint. With this data, the team was able to prioritize performance issues on the backlog.


Brady Keller

@AtlanticNet

Brady Keller is a Digital Marketing Strategist at Atlantic.Net, a trusted hosting solution for businesses seeking the best enterprise-class data centers.

“One of the biggest challenges to implementing application performance monitoring is…”

That with the rise of virtual and cloud environments, it can be hard to monitor the performance of a process that may not always be running on the same server/node but could be hopping across a distributed cluster. Once IT teams are aware of the need for different metrics than those they used before, they can begin to split up those metrics into several categories, like virtual machine workload, per-server application performance, virtualization platform performance, and user-side experience and response time. It becomes less about one overall metric and more about segmenting and then weighing which metrics are the most important from a wide swath of metrics.


Daniel Lakier

@radware

Daniel Lakier is VP-ADC globally for Radware. Daniel has been in the greater technology industry for over 20 years. During that time he has worked in multiple verticals including the energy, manufacturing, and healthcare sectors. Prior to Radware, Daniel was president and CTO of a leading technology integrator.

“Two of the most common mistakes we see are intrinsically linked…”

First, the lack of monitoring and testing is often due to time pressure from production management and fast moving project timelines. However, the old adage, “more haste, less speed,” still rings true today. Do it right the first time and you save significant time on the backend because it’s much easier to optimize an application before it goes into production.

Moreover, too often we see people using two different systems or tools for performance monitoring and baselining: one in the test and development phase and a completely different tool in production. Switching tools throughout the process makes using the performance metrics for baselining purposes less than ideal and can cause a host of unforeseen challenges when trying to compare actual application performance and stability to expected performance and stability.

To minimize these challenges and avoid these mistakes, be sure to use the right tool for the right job and function. In many cases, an SLA manager, similar to those found in some ADCs, can give you a quick guide to whether the problem is internal, external, network, or application-based. These SLA managers can also be set to provide alerts on performance deviations and are an effective first tool for any application performance strategy. Network monitoring solutions can also help you get more granular on the connectivity layer by reviewing metrics like network latency, packet error rates, retransmits, and packet loss.

Lastly, a full APM can drill down into the application itself to do root-cause analysis for coding optimization to enhance code performance and stability.

If we have a clear strategy and build a good practice, then we can always stay ahead of the curve. By providing the application with the appropriate resources to handle the required task, we can provide a predictable and repeatable customer experience.


Swapnil Bhagwat

@swapnildigital

Swapnil Bhagwat is the Senior Manager – Design & Digital Media, implementing web, design, and marketing strategies for the group companies. He is an MBA graduate with work experience in the US, UK, and Europe. Swapnil has worked for more than a decade across a range of businesses for the global markets.

“Some of the crucial mistakes the IT Management team makes while monitoring the application performance are…”

  1. Not identifying the exact scenario before hiring external contractors
  2. Going ahead with the non-essential investments related to the application
  3. Over-employment of staff
  4. Appointing inexperienced or incompetent leaders
  5. Having more than adequate numbers of managerial positions in a team


Eric Christopher

@getzylo

Eric Christopher is co-founder and CEO of Zylo, the leading SaaS optimization platform that transforms how companies manage and optimize the vast and accelerating number of cloud-based applications organizations rely on today.

“With SaaS app purchases being made across the organization without IT’s involvement, CIOs have another issue on their hands…”

Lack of visibility. This “operating in the dark” can cause them to make decisions without having all of the important information.

Enter a cloud intelligence system of record. What could this type of platform actually do to help the CIO better manage SaaS and cloud applications? Let’s examine a few components:

Executive Dashboard: With all provider-specific data in one single platform, versus many siloed platforms, CIOs can see cloud metrics alongside spend and application trending detail to make data-driven decisions.

Renewals: With proactive visibility and alerting, as well as application level data ownership, CIOs are in a much better position to negotiate and get the best contract terms available.

Supplier Relationships: Effectively managing supplier relationships is now possible when the contact information, quotes, contracts and notes about the relationship are stored with the current contract spend and application utilization information. The days of asking a provider to share utilization detail to negotiate a deal with that same provider are over.


Benny Friedman

@bennyfr1

Benny Friedman is the Director, Israeli Development Center at Ruckus Wireless.

NOTE: The following information is excerpted from 10 totally avoidable performance testing mistakes via TechBeacon. 

“Some people schedule performance testing at the end of the life cycle, assuming they can’t test before the complete application is available…”

That is so wrong. If your continuous integration (CI) system already has some automated tests in it, you can start automated performance testing. Running performance tests should be part of your functional testing cycle.

When testing each build, you should also test for performance, and reuse the functional tests for performance testing if possible. You might want to have a few test configurations, just as with functional user interface (UI) tests. Have a short one to run for every build, a more thorough one to run nightly, and a longer, more comprehensive one to run at the end of each sprint. If your application relies on services that are expensive to use during testing, or if they’re not available at the time you need to test your application, use a service emulation tool to simulate them. This lets you test the performance of your core software as if it was consuming the actual services.
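
As a rough sketch of what reusing a functional check as a performance check might look like in C#, consider the snippet below; the endpoint URL, the 500 ms budget, and the class name are all placeholders, and any test framework could host the same logic:

using System;
using System.Diagnostics;
using System.Net.Http;

public static class CheckoutPerformanceCheck
{
    public static void Main()
    {
        using (var client = new HttpClient())
        {
            var timer = Stopwatch.StartNew();
            var response = client.GetAsync("http://localhost:5000/checkout").Result;
            timer.Stop();

            // The functional assertion: the endpoint must still work.
            response.EnsureSuccessStatusCode();

            // The performance assertion: fail the build if the budget is blown.
            if (timer.ElapsedMilliseconds > 500)
                throw new Exception("Checkout took " + timer.ElapsedMilliseconds + " ms; the budget is 500 ms");
        }
    }
}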


Beeye

@mybeeye

Beeye aligns people and projects through a collaborative planning and management tool so you can better reach your goals. With Beeye, managers know which projects are understaffed, which are running behind schedule, and which are most profitable. Managers and employees can manage their time and workload and analyze their performance. Beeye is a SaaS solution that gives organizations the capacity and profitability planning information they need, when they need it. All with a lightweight, low-cost, easy-to-use tool.

NOTE: The following information is excerpted from 14 Mistakes That Ruin Performance Management Every Time via Beeye.

Probably the most common misunderstanding about performance management is that it is the same thing as a performance review; it is not. The performance appraisal is only a part of the whole process outlined above.

Worse, performance management is often confused with the mostly outdated annual performance review. If it is to be taken seriously, performance should be monitored on an ongoing basis so that problems are fixed when they arise, and opportunities exploited as soon as possible. It is a continuous process, not an event.

Thinking this way is one of the things that gets organizations into trouble, because the yearly appraisal process, no matter how well designed, is not enough to ensure that employees perform at their best.

It is also not a purely administrative burden: performance management is about making people and organizations more efficient in a measurable way, not about filling forms and having meetings to collect data that will never be used.

Basic misunderstandings about performance management explain both why it is reviled, and why it is inefficient when companies decide to go through with it even though they are missing critical pieces of the puzzle.


Charles Araujo

@Intellyx

Charles Araujo is a Principal Analyst for Intellyx. Intellyx is the first and only industry analysis, advisory, and training firm focused on agile digital transformation. Intellyx works with enterprise digital professionals to cut through technology buzzwords and connect the dots between the customer and the technology – to provide the vision, the business case, and the architecture for agile digital transformation initiatives.

NOTE: The following information is excerpted from Slow is Smooth and Smooth is Fast: Application Performance Management and the New Development Mantra via Intellyx. 

“There is a well-known axiom in the development world that is synonymous with my father’s SWAT team mantra…”

“The best time to find bugs is when you’re creating them.”

Of course, development teams know this — or at least they pay lip service to it. Quality Assurance (QA) teams and their embedded testing procedures are almost universally a part of the software development lifecycle. But there are two corollary facts that are just as prevalent, if less discussed, in an ever-faster-moving development world: coders want to code (not test), and traditional testing won’t uncover the most common performance-related issues.

The problem is that traditional testing approaches primarily test code at a functional level — their aim: to identify code that just doesn’t work. But in today’s world, that’s a rarity. We’re long past the point in which consumers (internal or external) were tolerant of rampant and blatant bugs in the code. Today, the vast majority of issues that make the difference between perceived success or failure of a deployment come down to one thing: performance.

Unfortunately, developers discover most transactional performance issues only after they’re in production rather than at the point of development.

Stackify created Prefix to help close this gap. Prefix runs in your development environment and is a lightweight tool that shows real-time logs, errors, and queries, along with other real-time, performance-related information on the developers’ workstations. It helps them understand how long transactions take and can answer the key question, “What did my code just do?” while they can still do something about it.


Floyd Smith

@nginx

Floyd Smith is the Director of Content Marketing at NGINX.

NOTE: The following information is excerpted from 10 Tips for 10x Application Performance via NGINX. 

“If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processor, more RAM, a fast disk array, and so on…”

Then the new machine can run your WordPress server, Node.js application, Java application, etc., faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines, and a faster connection between them.)

Trouble is, machine speed might not be the problem. Web applications often run slowly because the computer is switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, and running application code, among others. The application server may be thrashing – running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O.

Instead of upgrading your hardware, you can take an entirely different approach: adding a reverse proxy server to offload some of these tasks. A reverse proxy server sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers is over a fast internal network.

Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks.

Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; if a server is down, it can easily be replaced.

Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:

  • Load balancing (see Tip 2) – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all.
  • Caching static files (see Tip 3) – Files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent directly to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster.
  • Securing your site – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected.

NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application health checks, specialized request routing, advanced caching, and support.
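
As a minimal sketch, a basic reverse proxy in NGINX takes only a few lines inside the http context of nginx.conf; the listen port and upstream address below are placeholders:

# NGINX listens on port 80 and forwards requests over the
# internal network to the application server.
server {
    listen 80;

    location / {
        proxy_pass http://10.0.0.10:8080;  # internal application server (placeholder)
        proxy_set_header Host $host;       # preserve the original Host header
    }
}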


Frank J. Ohlhorst

@FJO_Writes_Tech

Frank J. Ohlhorst is an award-winning technology journalist, professional speaker, and IT business consultant with over 25 years of experience in the technology arena. Frank has written for several leading technology publications, including ComputerWorld, TechTarget, PCWorld, ExtremeTech, and Tom’s Hardware. Frank has also contributed to business publications, including Entrepreneur and BNET, has contributed to multiple technology books, and has written several white papers, case studies, reviewers’ guides, and channel guides for leading technology vendors.

NOTE: The following information is excerpted from Application Control: How to Detect Performance Bottlenecks via Tom’s IT Pro. 

“Ultimately, the goal with Application Performance Management or Monitoring (APM ) is to leverage proactive management, succeeding in preventing problems and helping IT to plan for future needs…”

To accomplish that, an APM product should assist in managing certain elements – which can be broken down into:

  • Fault Monitoring: Primarily used to detect major errors related to one or more components. Faults can consist of errors such as the loss of network connectivity, a database server going offline, or the application suffering an out-of-memory situation. Faults are important events to detect in the lifetime of an application because they negatively affect the user experience.
  • Performance: Performance monitoring is specifically aimed at detecting less than desirable application performance, such as degraded servlet, database or other back-end resource response times. Generally, performance issues arise in an application as the user load increases. Performance problems are important events to detect in the lifetime of an application since they, like Fault events, negatively affect the user experience.
  • Configuration: Configuration monitoring is a safeguard designed to ensure that configuration variables affecting the application and the back-end resources remain at predetermined settings. Incorrect configurations can negatively affect application performance. Large environments with several machines, or environments where administration is performed manually, are candidates for mistakes and inconsistent configurations. Understanding the configuration of the applications and resources is critical for maintaining stability.
  • Security: Security monitoring detects intrusion attempts by unauthorized system users.
  • Accounting: In some cases, departments or users may be charged maintenance, usage and administration fees. Accounting monitoring measures usage so that, for example, organizations that have a centralized IT division with profit/loss responsibilities can appropriately bill its customers based on their usage.

Each of the above capabilities can be integrated into daily or weekly management reports for the application. If multiple application monitoring tools are used, the individual subsystems should be capable of either providing or exporting the collected data in different file formats that can then be fed into a reporting tool. Some of the more powerful application monitoring tools can not only monitor a variety of individual subsystems, but can also provide some reporting or graphing capabilities.

It is those reports and historical data that IT can use to prove their value in the enterprise and secure funding for additional projects, solutions, and capabilities.


Lady Coders

@LadyCoders

LadyCoders.com is run by a group of women who want to break down the stereotypes that women can’t code, aren’t good with computers, and somehow aren’t equal to men in this area. Nothing could be further from the truth! They offer a lot of tips that should help both men and women who aspire to build a web application or site in one of the more popular coding languages.

NOTE: The following information is excerpted from Tips for Maximizing and Monitoring Application Performance via LadyCoders.com. 

“When it comes to the performance of a web application, the amount of available bandwidth plays a direct role in how quickly the application operates…”

While you may have carefully planned for the amount of bandwidth an application will require, what happens when the application receives an unexpected amount of traffic? What if your site goes viral and thousands wish to access an application within a short amount of time? While many instances are outside of your control, such as unexpected spikes in application traffic, you can help sustain high performance by reducing the number of unnecessary high-resolution files, such as images and videos. When designing your web application, do so with the expectation of high bandwidth demands, which will naturally result in leaner, more functional applications.

Because of the dynamic environment most applications thrive in, it’s essential that you establish an application performance monitoring solution capable of continuously monitoring and addressing issues at all levels of an application. Remember, there’s no such thing as perfect application code. Errors and issues will arise; however, it’s how you tend to and repair these issues that truly determines overall performance. Remain vigilant in monitoring the performance of an application, and establish set guidelines for addressing and correcting any and all errors.


George Lawton

@TechTarget

George Lawton is a journalist based near San Francisco, Calif. Over the last 15 years, he has written over 2,000 stories for publications about computers, communications, knowledge management, business, health and other areas which interest him.

NOTE: The following information is excerpted from Web application performance tips from the wolves on Wall Street via TechTarget’s TheServerSide.com. 

“Many web application performance issues come down to the I/O in, compute, archive, encode, and then I/O out cycle…”

Normally developers use coarse-grained parallelism where the application logic will go through some activity and then respond back. This has interesting implications. In many cases, the application model has to deal with concurrency. It’s easy to create applications where multiple threads hit the same backend.

“Another issue is that the application will often deal with contention on data stores,” said Martin Thompson, founder of Real Logic. A queue helps manage that contention. Modern CPUs have caches that drive performance. The level 1 cache has a latency of about 1 nanosecond (ns), but if the application misses this window, latency rises to about 100 ns. If the data has to be pulled from another server, this can rise to over 1,000 ns. One of the properties of caches is that they work on a principle of least-recently-used data. In many cases, it’s possible to craft the application such that it doesn’t fundamentally leverage the best use of the hardware architecture.

Leveraging a pool of threads can make it easy to optimize cache performance, and subsequently improve web application performance as well. But a better approach is to assign a thread per stage, where each stage works on one part of the job, which keeps it simple. This is how manufacturing works. The model becomes single-threaded and there is reduced contention. This approach is also better at dealing with data stores in batches. Queues are used everywhere in modern applications. Thompson recommends making them explicit and then measuring cycle time and service time.
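
Here is a rough C# sketch of the thread-per-stage idea; the stage logic and names are invented. Each stage runs on its own thread and hands work to the next through an explicit, bounded queue, which also gives you a natural place to measure cycle time and service time:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class Pipeline
{
    public static void Main()
    {
        // Explicit, bounded queue between the two stages.
        var parsed = new BlockingCollection<string>(boundedCapacity: 1024);

        // Stage 1: a single thread transforms input and enqueues results.
        var producer = Task.Run(() =>
        {
            foreach (var raw in new[] { "a", "b", "c" })  // placeholder input
                parsed.Add(raw.ToUpperInvariant());
            parsed.CompleteAdding();
        });

        // Stage 2: a single thread consumes; no shared state, no lock contention.
        var consumer = Task.Run(() =>
        {
            foreach (var item in parsed.GetConsumingEnumerable())
                Console.WriteLine(item);
        });

        Task.WaitAll(producer, consumer);
    }
}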


Kieran Taylor

@kieran_taylor

Kieran Taylor, previously the Director of Product Marketing of Compuware’s APM Business Unit, is currently the Senior Director of Product and Solutions Marketing for CA Technologies.

NOTE: The following information is excerpted from The New World of Application Performance Management via Data Center Journal. 

“With so much going on beyond your own data center, the reality of modern web applications is that even if your tools inside the firewall indicate that everything is running okay, that’s no guarantee your end users are happy…”

You can no longer just manage the elements and application components inside your firewall because this gives you only partial coverage and leaves you with significant blind spots. Many aspects of the end-user experience will not be inferable from data collection points within the data center. The point at which the end user accesses a composite application is the only place where true application performance can be understood.

Today, it becomes more critical to assess the end-user experience as part of an overall performance management strategy canvassing all performance-impacting elements—from the end user’s browser all the way back to the multiple tiers of the data center and everything in between. This is the key to identifying and fixing any weak links. Some businesses believe they can’t manage the performance of the entire application delivery chain because many of these components are outside the firewall and beyond their direct control. But if you can manage application performance from the end user’s point of view and include the entire web application delivery chain, then you really are in a stronger position of control.


James Mancini

@Netreo_James

James Mancini is the Founder and Chief Technologist for Netreo.

NOTE: The following information is excerpted from Counting On The Cloud: 4 Tracking Tips For Cloud-Hosted Applications via Netreo. 

“Knowing the real-time status of cloud-based systems may give you time to prepare for the effects of an impending outage…”

You may be able to take corrective action, or at least communicate to affected users so they’re aware of the problem and can act accordingly. The ability to see historical information at a glance, and produce reports to document it, is also important. With this data in hand, you can hold your service providers accountable. If they’re not delivering on the service level requirements they’ve committed to, you need to show them what’s happening. If you’ve done the hard work of migrating bare metal services to the cloud, you’ve probably seen an increase in uptime, and that’s great. But the cloud’s dramatically increasing role in IT system infrastructure will likely create more complexity and more service issues. Prepare yourself now to handle emerging cloud service issues by monitoring cloud-hosted applications thoroughly.


Boris Dzhingarov

@BorisDzhingarov

Boris Dzhingarov graduated from the University of National and World Economy with a major in marketing. He writes for several sites online, such as Semrush, Tweakyourbiz, and Socialnomics.net. Boris is the founder of Tech Surprise and MonetaryLibrary.

NOTE: The following information is excerpted from 4 Tips to Improve Your .NET Application Performance via TG Daily. 

“When optimizing your .NET application performance, Profilers are a critical component in your troubleshooting arsenal, especially when dealing with poor CPU performance and memory resource issues…”

There are three types of Profilers, all of which are important for you to use: traditional, lightweight, and application performance management (APM) tools.

Traditional profilers track things like memory usage, method call frequency, and time spent per line of code. Lightweight profilers provide you with a high-level understanding of how your code is performing. And APM tools monitor your production servers.

  • Traditional .NET Profilers

While these profilers aren’t used very often, they come in handy when you’re dealing with problems stemming from poor CPU performance and memory resource issues.

Because traditional .NET profilers consume a hefty amount of resources, you want to avoid running them on the same computer as the database you’re profiling.

  • Lightweight .NET Profilers

These profilers are designed for frequent use and to track your application’s performance at a high level so you can see important data like page load times, successful database calls, and why pages are taking so long to load.

Lightweight profilers, like Stackify’s Prefix, are a great alternative to the often arduous task of debugging your code.

Since lightweight profilers don’t use much of your computer’s resources, you can let these run indefinitely.

  • APM tools

APM tools that run on your server need to be lightweight, so they don’t slow down your applications. Thankfully, they can collect details quickly to help you diagnose the problem faster.

Software Quality

How to Evaluate Software Quality from the Outside In

Natalie Sowards Live Queue Leave a Comment

In a sense, application code serves as the great organizational equalizer.  Large or small, complex or simple, enterprise or startup, all organizations with in-house software wrangle with similar issues.  Oh, don’t misunderstand.  I realize that some shops write web apps, others mobile apps, and still others hodgepodge line of business software.  But when you peel away domain, interaction, and delivery mechanism, software is, well, software.

And so I find myself having similar conversations with a wide variety of people, in a wide variety of organizations.  I should probably explain just a bit about myself.  I work as an independent IT management consultant.  But these days, I have a very specific specialty.  Companies call me to do custom static analysis on their codebases or application portfolios, and then to present the findings to leadership as the basis for important strategic decisions.

As a simple example, imagine a CIO contemplating the fate of a 10-year-old Java codebase.  She might call me and ask, “should I evolve this to meet our upcoming needs, or would starting from scratch prove more cost effective in the long run?”  I would then do an assessment where I treated the code as data and quantified things like dependence on outdated libraries (as an over-simplified example).  From there, I’d present a quantitatively-driven recommendation.

So you can probably imagine how I might call code a great equalizer.  It may do different things, but it has basic structural underpinnings that I can quantify.  When I show up, it also has another commonality.  Something about it prompts the business to feel dissatisfied.  I only get calls when the business has experienced adverse outcomes as measured by software quality from the outside in.

Defining Quality

You might wonder whether such calls to me always stem from issues of code quality.  The clients don’t necessarily think so.  But in my experience, they usually do.  Undesirable business outcomes, such as missed deadlines or high cost of change, can indeed arise from personnel or process issues.  But I rarely find these without also finding issues of code quality.

So, let’s define software quality, from the business’s perspective.  If you write code for a living, forget for a moment about how you might measure quality at a granular level.  Put aside your linting tool, static analyzers, notions about cyclomatic complexity and percent test coverage.  Forget even about defect counts for definition purposes.  The business only cares about these things if you make it care about them.  Enough milestone misses or disastrous releases, and someone comes along to mandate 70% unit test coverage.

So let’s zoom out to the business’s level of understanding and look at how it reasons about code quality.  I submit that all business concerns boil down to two main questions.

  • Does the software do what it’s supposed to do?
  • Can I easily change what the software does?

That’s it.  Other outcomes derive from these core concerns.  Huge defect lists happen because the software behaves in a way not intended.  Code gets the pejorative label of legacy when everyone fears to touch it.  It’s not doing what it’s supposed to, and it’s hard to change.

Working from this definition, let’s then look at some heuristics for evaluating software quality from the business’s perspective — from the outside in.  What can the business take as a sign of poor software quality?

Large, Diverse Defect Counts

Earlier, I described that defect counts represented a symptom of poor quality, rather than a defining element.  So it stands to reason that I should mention them when listing symptoms of underlying quality issues.  Of course, anyone with common sense will tell you that high defect rates correlate with poor quality.

But look beyond the obvious examples of “doesn’t do what it should.”  Also, consider how it does with so-called non-functional requirements.  Does it do the correct thing but exhibit abysmal performance?  Does it exhibit ridiculous behavior in the event of an unexpected hardware failure?  Teams tend to fix these things when they come up.  So, if you see many examples in the wild, you can take it as a sign that the team can’t.  And consider this as a sign of hard-to-change software.

Rising Cost of Features

Developers tend to think of feature implementation in units of time or effort.  And, reasonably so.  The business, however, translates these into money without a second thought.  So whether you quantify features by time or money, it all winds up in the same bucket.

You can often tell that a codebase has software quality issues by looking at the cost per feature as a function of time.  In a well crafted codebase, the cost of features remains relatively flat.  I would argue that complete flatness is pretty much impossible because of the inherent nature of complexity with scale.  But feature cost, for similarly sized features, should rise very gradually.

When you evaluate software quality for a less than stellar codebase, you will see sharp upticks in feature cost.  As time goes by, the expense of a feature will grow more than linearly.  In less than ideal situations, look for a polynomial rise.  In catastrophic codebases, you might see exponential cost growth.  Those codebases generally don’t last very long.

Team Reaction to Feature Requests

I’ll switch gears for a moment.  So far, I’ve talked about antiseptically quantifiable ways to evaluate software quality from the outside.  Now I’ll offer a very human-oriented one.  To get a feel for the nature of the codebase, see how the development team reacts to change.

Maybe you have a Scrum shop and developers pull cards out of the backlog.  Or, perhaps you have a more traditional assignment methodology and you assign them tasks.  In either case, reactions give valuable information.  If they refuse to give an estimate or give extremely high estimates, it signals a lack of confidence in accomplishing the task.  Likewise, if they balk or suggest not implementing that feature, you should worry.

I see a lot of codebases with large pockets of metaphorical Jenga code.  The developers understand this, and will react accordingly when you prompt them to touch the mess.  You’ll get a truer reaction in this scenario, on average, than by simply asking them.  I say this not because teams dissemble, but because they tend to feel the same sense of ownership and pride as anyone else.  They may not tell you outright that you have a software quality problem, but if they run from feature development like the plague, you can infer it.

Inexplicable or Weird Application Behaviors

When I do a codebase assessment, I always look at cohesion and coupling in detail.  I find these to represent a codebase’s overall quality quite well.  In layman’s terms, think of them this way.

  • Cohesion answers the question, “do things that belong together actually appear together?”
  • Coupling answers the question, “do things that don’t belong together appear together?”

From those working definitions, you can reasonably conclude that these two properties tell you how well (or poorly) the codebase is organized.  High quality means high cohesion and relatively small coupling.  Codebases exhibiting these properties lend themselves to easy change.

On the flip side, codebases with low cohesion and high coupling become expensive to change.  But, beyond that, they do weird things.  To understand why, imagine an inappropriate coupling in real life.  Say I took your oven and your car and turned them into a “super-device.”  This does you no good.  And now, when you have to take your car to the shop, you can’t make dinner.  If you tried to explain this to someone, they would think you insane.
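To make that concrete in code, here is a hypothetical Java illustration, not taken from any real codebase: the first class fuses two unrelated concerns and couples them through shared state, while the second pair keeps each concern cohesive and independently changeable.

// Low cohesion, high coupling: cooking logic depends on driving state.
class OvenCar {
    private int fuelLitres;
    private int ovenTempCelsius;

    void preheat() {
        if (fuelLitres > 0) {        // why should dinner depend on the fuel tank?
            ovenTempCelsius = 200;
        }
    }
}

// High cohesion, low coupling: each class owns exactly one concern.
class Oven {
    private int tempCelsius;
    void preheat() { tempCelsius = 200; }
}

class Car {
    private int fuelLitres;
    void refuel(int litres) { fuelLitres += litres; }
}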

Look for this in code as a sign of software quality issues.  “We changed the font on the login screen, and that somehow deleted a bunch of production customer accounts.”  Crazy stories like that indicate code quality problems.

Catch Code Defects Early

As I said before, software quality gives off signals (symptoms) to non-technical stakeholders.  Some, like software defects, are obvious.  But with others, you must look for subtle signs.  With something like fear of changing the code or weird defects, you may chalk them up to something else.

Resist this impulse.  Regardless of the symptoms, the underlying causes — wrongness and change resistance — calcify as time goes on.  As with planting a tree, the best time to fix it is years ago, and the second best time is now.  So when evaluating software quality from the outside in, don’t ignore symptoms.  At worst, everyone has a clarifying conversation and perhaps wastes a bit of time.  And, at best, you catch a minor problem before it becomes the sort of thing someone calls me to look at.

What are Docker Logs?

Docker Logging 101: Best Practices, Analytics Tools, Tutorials, and More

Angela Stringfellow Developer Tips, Tricks & Resources Leave a Comment

For troubleshooting code, few things are more valuable to developers than logs. That’s just one reason we built Retrace, which combines logs, errors, and code level performance in a single pane of glass to give you the insights you need to quickly identify and rectify the source of problems. With the widespread popularity of Docker’s container-based solution for apps, it’s important that you understand the ins and outs of Docker logs, so we put together this overview of Docker logging to bring you up to speed on the basics.

Definition of Docker Logs

Logging has always been a central part of application monitoring. Logs tell the full story of what is happening, or what happened at every layer of the stack. Whether it’s the application layer, the networking layer, the infrastructure layer, or storage – logs have all the answers. As the software stack has changed from hardware-centric infrastructure to Dockerized microservices-based apps, much has changed, but what’s remained unchanged is the importance of logging. Docker needs logging more than traditional apps, and there are many innovative solutions to help you get logging right for Docker.

Docker adds complexity to the software stack. Troubleshooting is very different for Dockerized applications. You can’t make do with just a few basic metrics like availability, latency, and errors per second. These worked for traditional apps that ran on a single node and needed very little troubleshooting. With Docker, you need to search far and wide to identify root causes, and the time it takes to resolve issues is critical to delivering an outstanding user experience.

Collecting Raw Log Data

Logging drivers collect container logs and make them available for analysis. The default logging driver is a JSON file to which log data is written, but there are many other logging drivers, like the following:

  • syslog: A traditional and popular standard for logging applications and infrastructure
  • journald: A structured alternative to the unstructured syslog. It is compatible with syslog
  • Fluentd: A log aggregation tool that can be easily integrated across your stack
  • awslogs: Sends log data to AWS CloudWatch Logs. This is a great option if you host your apps in AWS.
  • Splunk: The popular monitoring and logging tool, which can be used to integrate Docker logs with the rest of your monitoring process

As you can tell from this list, a logging driver can be used to share log data with external services. Running the docker logs command will return log data only if you’ve set json-file or journald as the logging driver. For the other services, you can view logs in each of their interfaces.
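For instance, to route a single container’s logs to a local syslog daemon instead of the default JSON file, you could start it with the syslog driver. This is just a sketch; the address below assumes a syslog listener on the local machine, so substitute your own endpoint:

$ docker run --log-driver=syslog \
--log-opt syslog-address=udp://127.0.0.1:514 \
alpine echo "hello syslog"

Note that once a container uses a non-default driver like this, docker logs will no longer return its output, per the point above.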

A Docker Logging Example

When you start the Docker daemon, you can specify logging attributes and options. Docker offers the following example command for manually starting the daemon with the json-file driver and setting a label and two environment variables:

$ dockerd \
--log-driver=json-file \
--log-opt labels=production_status \
--log-opt env=os,customer

Then, you’d run a container and specify values for the labels or env, using, for example: 

$ docker run -dit --label production_status=testing -e os=ubuntu alpine sh

This will add additional fields to the logging output if the logging driver supports it, such as the following output for json-file:

"attrs":{"production_status":"testing","os":"ubuntu"}
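Because this example keeps the json-file driver, you could then read the output back on the host with the docker logs command; the container ID below is a placeholder for your own:

$ docker logs --tail 100 <container-id>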

Analyzing the Log Data

To make log data useful, it needs to be analyzed. Most log data is monotonous, and poring over every line is enough to drive anyone crazy. When analyzing log data, you’re looking for a needle in a haystack. Out of thousands of lines of normal log entries, you’re often looking for that one line with an error. To get the true value of logs, you need a robust analysis platform.

ELK

The most popular open source log data analysis solution is ELK. It’s a collection of three different tools – ElasticSearch for storing log data, Logstash for processing the log data, and Kibana to present the data in a visual user interface. ELK is a great option for Docker log analysis as it provides a robust platform that is supported by a large community of developers and costs nothing. Despite being free, it’s a very capable data analysis platform.
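One common way to feed Docker logs into an ELK stack is the gelf logging driver, which Logstash can ingest via its GELF input plugin. As a minimal sketch, assuming Logstash is listening for GELF messages on UDP port 12201 of the local machine:

$ docker run --log-driver=gelf \
--log-opt gelf-address=udp://localhost:12201 \
alpine echo "hello elk"

From there, Logstash would forward the parsed events into ElasticSearch for Kibana to visualize.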

Fluentd

Another popular open source option is Fluentd. It tries to solve not just your Docker logging, but logging for your entire stack, including non-Docker services. It uses a hub and spoke model to collect log data from various sources and share that log data to log analysis tools as needed. Thus, it doesn’t require writing scripts for each integration and stitching together your entire logging layer.
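Docker includes a fluentd logging driver for exactly this hub and spoke model. As a sketch, assuming a Fluentd agent is listening on localhost:24224 (Fluentd’s default forward port), a container could be pointed at it like so, with a tag to identify the source:

$ docker run --log-driver=fluentd \
--log-opt fluentd-address=localhost:24224 \
--log-opt tag=docker.myapp \
alpine echo "hello fluentd"

Fluentd’s output plugins then decide where that record goes, whether that’s ElasticSearch, S3, or a commercial analysis tool.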

Specialized Log Analysis Tools

With the open source options, you need to set up your stack on your own and maintain it. This means provisioning the required resources and ensuring your tools are highly available and hosted on scalable infrastructure. This can take a lot of IT resources. The easier way is to opt for a hosted log analysis solution like Sumo Logic or Splunk. These vendors provide logging as a service. All you need to do is point your Docker logs to these services, and they automatically handle storage, processing, and presentation of the log data.

The advantage that commercial log analysis tools have over open source ones is that they have intelligence built into their platforms. For example, Sumo Logic offers predictive outlier detection. With this feature, it looks for anomalies that may escalate and alerts you of possible issues before they become real problems. These are still the early stages of intelligent log analysis, but for commercial log analysis tools, this is the way to differentiate themselves from the many powerful open source options available today.

Metrics, Events & Logs

Along with logs, metrics and events are an important part of the entire Docker monitoring process. Metrics are performance numbers for various parts of the Docker stack like memory, I/O, and networking. You use the docker stats command to view all container runtime metrics.
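For a quick one-off snapshot instead of a live stream, docker stats accepts a --no-stream flag:

$ docker stats --no-stream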

Events are more detailed than metrics, and report the activity stream, or change history, for various components of the Docker stack. There are events for containers, container images, plugins, volumes, networks, and daemons. Some sample events include create, delete, mount, unmount, start, stop, push, pull, and reload. You view events using the docker events command. Together, metrics, events, and logs give you the end-to-end visibility you need when running and troubleshooting applications in Docker.
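To watch that change history directly, you might tail the last hour of container events; both the filter and the relative --since window are standard docker events flags:

$ docker events --filter 'type=container' --since '1h'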

In conclusion, when running a Dockerized application, things get complicated. Using old log analysis methods will leave you flying blind. You need a modern approach to logging that is careful about how it collects log data from containers, and how it analyzes that data. There are many options both open source and commercial. Docker logging is a critical part of modern web-scale applications. Get it right, and you’re on your way to building highly available, scalable, and innovative applications.

Additional Resources and Tutorials on Docker Logs

For further reading on Docker logs, the ELK Stack, and other tools and logging information, visit the following resources and tutorials:

Featured image is by kyohei ito via Flickr, under Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0), and was cropped and optimized but not otherwise altered. 

Microsoft Build 2017 Review of Day Two – Windows News & Updates

Matt Watson Developer Tips, Tricks & Resources Leave a Comment

Welcome to Build and our Build 2017 review of day two! You can read our review of day one here:

Microsoft Build 2017 Review of Day One – Azure News & Updates

The theme of day two was all about love and engagement. New ways for developers to get their users to love their apps and increase engagement.

Build 2017 Review & Highlights for Day Two

Windows 10 Fall Creators Update

They announced the next major update for Windows, previously known by the codename Redstone 3. There seems to be a heavy focus on additional 3D capabilities, including the new Story Remix app, Fluent Design, and cross-platform capabilities.

More: https://www.onmsft.com/news/microsoft-announces-redstone-3s-official-name-windows-10-fall-creators-update-at-build-2017

Story Remix

Amazing new video editing app that allows users to mix video, photos, and 3D objects. They showed off some pretty amazing demos.

It appears they have leveraged a lot of the mixed reality technology to bring 3D to videos. They demoed pinning 3D objects into a video and the object moved and scaled within the video. #mindblown

More: https://techcrunch.com/2017/05/11/microsofts-windows-story-remix-uses-machine-learning-to-make-your-videos-look-awesome/

New Fluent Design

Microsoft showcased next-generation UI ideas, including 3D lighting, UI depth, motion, and more. Microsoft has enabled easy ways to use these new Fluent Design elements in XAML apps.

They also demoed some new capabilities around ink that were really awesome. I especially like the idea of being able to mark up PDFs with ink.

OneDrive Files On-Demand

As part of the Fall Creators Update, users will be able to store all their files in the cloud and only download files as they are needed. You no longer have to store gigabytes of files on your PC synced from the cloud. You can access your files ad-hoc as you need them.

More: http://www.windowscentral.com/microsoft-onedrive-placeholders-are-back

Windows Timeline

New Windows feature to track the recent apps and activities you’ve been using. This looks awesome for tracking down what you were doing earlier without having to leave that app running all the time. They also showcased how this can work across multiple devices. Including the ability to move data between devices to continue an activity. Cortana is Timeline aware and can help you complete activities across devices.

More: https://www.theverge.com/2017/5/11/15610612/microsoft-windows-10-timeline-feature

Cloud-powered Clipboard

New ability to copy data or a file across devices. This will be really awesome for moving photos from your phone to your PC, as well as many other use cases. I believe Apple recently released some similar functionality across MacOS and iOS.

Project Rome

Rome is a new SDK that helps power all of the cross-device capabilities, including the new Timeline and cloud clipboard. Being able to track user activity within our software and let users continue that activity on another device is pretty awesome.

More: https://github.com/Microsoft/project-rome

.NET Standard 2.0 for UWP

This fall UWP will now support .NET Standard. This will enable UWP developers to leverage more of their .NET code across all platforms via shared libraries.

XAML Standard 1.0

This will enable Xamarin Forms and UWP to share the same XAML UI markup across platforms. This will make creating apps that run across devices a little easier.

UWP for Visual Studio Mobile Center

New support for testing UWP apps across multiple devices. Including running tests and visually seeing how your app looks across devices.

More: https://blogs.msdn.microsoft.com/visualstudio/2017/05/11/more-platforms-more-choices-more-power-visual-studio-mobile-center-at-build/

Windows 10 S

Microsoft pitches it as a more secure platform that only allows Windows Store based apps. This likely would cut down on spyware which can run rampant across weird download websites.

This will also force more developers to make their apps available in the Windows Store or risk users not being able to use them. They announced Spotify and iTunes as good examples of this.

Ultimately, this will also help Microsoft make more money via the Windows Store. Google Chrome is also not supported for Windows 10 S because all apps must use Microsoft’s web browsing capabilities for security reasons. This is similar to WebKit on iOS.

More: https://www.microsoft.com/en-us/windows/windows-10-s

Linux on Windows Updates

Ubuntu is now part of the Windows Store. I believe previously you had to enable Windows 10 Developer Mode to get it. They announced that Fedora and openSUSE shells will also be supported.

More: https://blogs.msdn.microsoft.com/commandline/2017/05/11/new-distros-coming-to-bashwsl-via-windows-store/

Xamarin Live Player

You can now easily deploy an app from Visual Studio to an iOS or Android device and have full debugging capabilities. You can also edit XAML and see it update in real time on your device. This is an amazing enhancement!

More: https://arstechnica.com/information-technology/2017/05/xamarin-live-player-almost-takes-the-mac-out-of-ios-development/

Windows Narrator

A new developer mode makes it easier to test apps as a visually impaired user would experience them. This is great for helping developers perfect their apps for the Windows Narrator.

Motion Controllers for Mixed Reality

You can now use handheld controllers to create more types of gestures to interact with the virtual world.

More: https://techcrunch.com/2017/05/11/microsoft-demos-its-own-motion-controllers-for-mixed-reality/

New HP & Acer Headsets for $399 with Motion Controllers

Amazing to see this technology so affordable! Mixed reality will finally be affordable for everyone to use.

More: http://www.windowscentral.com/acers-399-mixed-reality-bundle-big-deal

Order yours: https://www.microsoftstore.com/store/msusa/en_US/pdp/productID.5106771200

Summary

We hope you enjoyed our Microsoft Build 2017 review for day two. If you couldn’t make it to Build, this should give you a good list of things to look up and learn more about. Microsoft continues to raise the bar with Windows and mixed reality. I’m excited to see how mixed reality continues to evolve.