We’ve talked a lot about Agile development and DevOps — particularly, the always-pressing need for teams to rapidly ship new versions of their code and update their products. But when it comes to evaluating the productivity of your development teams, what metrics matter most? Should you merely be concerned with deadlines and time sheets, or should you focus on customer satisfaction to evaluate team performance?
It’s a tough question to answer. There are lots of numbers you can track, but is there a single metric that accurately reflects software development productivity? To find out, we reached out to a panel of software developers and development leaders and asked them to weigh in on this question:
What are the best metrics to measure software development efficiency and productivity?
Meet Our Panel of Software Developers and Development Leaders:
Allison is a Product Owner and ScrumMaster at Ascendle. Allison’s career is focused on managing projects with grace and success, always with an eye on process improvement and quality assurance. She combines strategic vision and tactical execution to improve bottom lines, business systems, client satisfaction and team effectiveness.
“The most critical question to answer when measuring software development efficiency and productivity is…”
“Is the client ecstatic?” Below are the metrics and measurements that will get you to a “yes.”
Stick to standard time limits for Scrum meetings. If you find your team is extending the Standup meeting times on a regular basis, then the stories in the Sprint were not written or prepared sufficiently before the start of the Sprint. If your Sprint Planning meetings are taking longer than expected, the team needs to spend more time discussing stories during Backlog Grooming.
If all team members do not fully participate in all Scrum ceremonies, the length of a Scrum meeting is not a true indicator of the health of a project. Any concerns a team member has about meeting times or participation should be discussed in the Retrospective. During the next Sprint, the team can course correct.
Measuring meeting times: Provide a time-tracking tool that makes it easy to record time separately for different meeting types. At the end of a Sprint, review the meeting times for each meeting type. Address the good, bad and the ugly findings during Retrospective.
Time spent on a subtask
The best team requires little prodding from the ScrumMaster as the workday clicks by. An established team should understand the flow of a Sprint board. It’s like playing mini golf: if you putt and only make it halfway down the green, you’re going to have to putt again until you reach the hole. The ScrumMaster should not have to nudge a team member to make that next putt.
Team members should be clear on their role in advancing the subtasks in a story. A good rule of thumb is that any one subtask should never be estimated at more than 4 hours. That way the whole team isn’t waiting around until one person completes a subtask. The team should have standard rules for signaling that a subtask has been completed, so everyone knows when to take on the next one.
Measuring time spent on a subtask: Throughout the day the ScrumMaster can use a Burndown Chart to understand the progress of the Sprint. If things look “off,” they can head to the task board to see which subtasks may be holding up progress.
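To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how a ScrumMaster might compare actual remaining hours against the ideal burndown line and flag days where the Sprint looks “off”:

```python
# Hypothetical sprint burndown: remaining estimated hours recorded at the
# end of each day of a ten-day sprint (day 0 is the sprint start).
remaining_hours = [80, 70, 62, 52, 46, 45, 28, 20, 10, 0]

sprint_days = len(remaining_hours)
total_scope = remaining_hours[0]
flags = []

for day, actual in enumerate(remaining_hours):
    # The "ideal" line burns the scope down evenly across the sprint.
    ideal = total_scope * (1 - day / (sprint_days - 1))
    # A five-hour tolerance band is an arbitrary choice for illustration.
    status = "behind" if actual > ideal + 5 else "on track"
    flags.append(status)
    print(f"Day {day}: ideal {ideal:5.1f}h, actual {actual:3d}h -> {status}")
```

In this invented data, only day 5 drifts far enough above the ideal line to warrant a trip to the task board.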
Completing new features
The client does not need to have a pulse on the day-to-day productivity of the team. The true indicator of efficiency and productivity will be if new features are being introduced at or before the time the client expects them.
A successful team will provide the client with new features to test, play with and discuss on a regularly scheduled basis. Keep in mind that a client is likely using new features to advance the product within the company or the marketplace. They should be able to trust that at the end of every Sprint the team will demonstrate one or more features that will improve the client’s product pitch.
Measuring new feature completion: Maintaining a Version Report provides a clear picture of the progress of a team and the development of the product. The Product Owner can use the report to track the projected release date for a defined version. The Product Owner should work with the client to determine which new features are to be included in a version. The Version Report can then be used as a tool for discussion with the client at the end of every Sprint, to show progress and manage expectations.
Ross Smith is the Chief Architect for PITSS America. As an architect, Ross ensures that projects are appropriately scoped, planned, and documented. His goal is to make sure that the team understands exactly what the customer needs and that the customer understands what they are going to receive from engagement with PITSS.
“The most effective measure for efficiency is going to come from…”
Agile development practices.
- Burndowns, for instance, measure how many development tasks are completed over time. Time is usually measured in sprints, which typically last two weeks.
- Sprints are created with a set number of tasks, and the burndown shows whether tasks are completed to stay on the 2-week schedule.
- Agile burndowns help show ROI and progress in smaller bursts instead of long-term projects.
- Another efficiency measurement for applications in production is how frequently defects are raised and how long they remain unresolved.
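The defect measure mentioned above can be sketched in a few lines. The records below are hypothetical; the metric is simply how many defects were raised and how long each stayed (or has stayed) unresolved:

```python
from datetime import datetime

# Hypothetical defect records: (raised, resolved or None if still open).
defects = [
    (datetime(2024, 3, 1), datetime(2024, 3, 4)),
    (datetime(2024, 3, 2), datetime(2024, 3, 10)),
    (datetime(2024, 3, 7), None),  # still unresolved
]

# Open defects are aged against "now" (fixed here for reproducibility).
now = datetime(2024, 3, 14)
ages_days = [((resolved or now) - raised).days for raised, resolved in defects]
open_count = sum(1 for _, resolved in defects if resolved is None)

print(f"defects raised: {len(defects)}, still open: {open_count}")
print(f"average age: {sum(ages_days) / len(ages_days):.1f} days")
```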
Skot Carruth is the co-founder and CEO of Philosophie Group, Inc.
“To quote the Agile Manifesto…”
“Working software is the primary measure of progress.” But today that isn’t sufficient—shipping software that works but doesn’t create value is not a good measure. The best metrics to measure the productivity of your software development are the metrics you use to measure business results. And the best measure of your software development efficiency is how quickly your software improves those business results.
Jonathan D. Roger is the Operations Director at AndPlus, LLC, a custom software consultancy just outside of Boston. With experience in government, finance, and green tech, he has a laser focus on customer delivery and process improvement. He is a Certified ScrumMaster and loves coaching Agile teams to reach their full potential.
“Software team productivity is an inherently difficult thing to put metrics — at least, quantitative metrics — around…”
Lines of code, bug rates, etc., are not necessarily good indicators of how well or poorly your software team is doing, especially if they are working on very complex problems. At our firm, we use a combination of qualitative and quantitative metrics to see that our teams are performing efficiently and being productive. To wit:
- Customer satisfaction: The most important thing for us is that customers are happy with the work we are doing. Regular check-ins to ensure that the client feels that we are making adequate progress are crucial metrics for our team. The Scrum process that we use at AndPlus ensures that we demonstrate progress for clients every two weeks, and this gives us a perfect touch point with them.
- Peer Code Reviews: Every line of code that gets put into a project at our firm goes through a peer code review, and our senior-most members of the technical team will also spot-check projects to ensure that code quality is being maintained. We will also compare the amount and quality of code written to the amount of time spent and logged on an issue — this gives us a feeling of how productive (or not) an engineer is being.
- QA Kickback Rate: Once a ticket is dev-complete, we count on our engineers to ensure that the feature works. Once they are confident, they will push the issue to our QA team for review. Kickbacks from QA to the engineering team are common, but if we see a significant number of issues (especially simple issues) being kicked back more than once, that is a leading indicator of problems with the engineering team’s effectiveness and productivity.
- Time Logs versus Historical Data: After several years of writing custom software, we have thousands of completed issues in our JIRA instance, all associated with time logs. We can use this data to track historical time records for story point levels (e.g., the median 2-point user story takes n hours from birth to death). While any one user story may take far longer or far less time than the median, if we see a large number of stories taking longer than the median, it is an indicator that the team may not be as performant as it should be. On the other hand, if a team is consistently taking less time than the median, it indicates either a highly performant team or a team that is padding estimates.
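A rough sketch of that historical-median comparison, assuming hypothetical time logs (the story keys, hours, and the 1.5x flagging threshold are all invented for illustration):

```python
from statistics import median

# Hypothetical historical hours logged on completed 2-point stories.
historical_hours = [6, 7, 8, 8, 9, 10, 12, 7, 8, 11]
baseline = median(historical_hours)

# Current sprint's 2-point stories and the hours they actually took.
current = {"STORY-101": 7, "STORY-102": 16, "STORY-103": 9}

for key, hours in current.items():
    # Flag stories running well past the historical median.
    if hours > 1.5 * baseline:
        print(f"{key}: {hours}h vs median {baseline}h -> investigate")
```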
At the end of the day, our goal is to be fair to our engineering team and our clients — we know that every project and every issue within a project are different, and complexities arise even when the team is working on something that should be simple. As a custom software firm, everything we do is in some way novel, and we take that into account as well.
Vlad Giverts is the Head of Engineering for Clara Lending, a technology-enabled lending platform making financing a home easier, faster and more affordable. Before joining Clara, he was a partner and Sr. Director of Software Engineering at Workday Ventures. He holds a B.S. in Computer Science from the University of California, Berkeley.
“The truth is, there’s no good way to measure software development efficiency and productivity. But you can measure things that have a positive or negative effect on productivity….”
It’s very much dependent on the type of software and the structure of your team and organization.
That said, there are some common measures that can be useful to consider to see if they apply to your situation:
1. Bugs. Not how many, as there will always be some bugs, but rather how much time you’re spending on them each week. This includes both fixing issues once you’ve identified them and troubleshooting issues as they come up. If it’s more than 20% of your engineering time, you might have a quality/architecture problem that is a drain on your productivity.
2. Uptime. Related to the above. If you have a product on the internet, how much of the time is it unavailable to customers? Every time that happens it’s a distraction to the engineering team (and a cost to your business!).
3. Time from code complete to done. This is a tough one to measure, but incredibly impactful if you can improve it. When a developer finishes writing the code for a new feature, there are always some steps before that feature is available to your customers. Those steps could include a code review, running the build (including automated tests), QA and/or User Acceptance Testing, and the actual deployment/release process. Any one of those steps could surface an issue that requires the developer to go back into the code to fix or change something. Depending on how long this process takes and how many times the developer has to get back into the code, it can be a massive drain on productivity: the developer may have moved on to something else and has to context-switch back and forth, rebuilding the context every time.
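The first measure above, time spent on bugs, is straightforward to compute once time logs are categorized. A minimal sketch with hypothetical weekly totals, flagging weeks that cross the 20% threshold:

```python
# Hypothetical weekly time logs (hours) split by work category.
weekly_logs = [
    {"features": 120, "bugs": 22, "meetings": 18},
    {"features": 100, "bugs": 45, "meetings": 15},
]

shares = []
for week, log in enumerate(weekly_logs, start=1):
    total = sum(log.values())
    bug_share = log["bugs"] / total
    shares.append(bug_share)
    warning = " <- possible quality/architecture problem" if bug_share > 0.20 else ""
    print(f"Week {week}: {bug_share:.0%} of engineering time on bugs{warning}")
```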
Michael Mah teaches, writes, and consults with technology companies on estimating and managing software projects, whether in-house, offshore, waterfall, or agile. He is the managing partner at QSM Associates Inc. and director of the Benchmarking Practice at the Cutter Consortium, a US-based IT think-tank. With over 25 years of experience, Michael and his partners at QSM have derived productivity and quality patterns for thousands of projects collected worldwide. His work examines time-pressure dynamics of teams and its role in project success and failure. His degree from Tufts University is in physics and electrical engineering, and he is a mediator specializing in conflict resolution for technology projects, having completed his certification from the Program on Negotiation at Harvard Law School. Michael is also a private pilot and lives in the mountains of Western Massachusetts. His non-profit work is with Sea Shepherd Conservation Society.
“The global software industry is estimated at over US$400 billion, while adding an estimated US$525 billion to US GDP alone…”
Naturally, some leaders ask themselves, “What’s the best way to measure software development productivity and efficiency?” The truth is, there are no best metrics. That said, there are some that are more valuable than others.
The question is, what would you do with these measures in a perfect world? Would you want to diagnose and understand how to improve? Would you want to compare different teams? Would you want to more reliably estimate future projects and make management trade-offs?
Some might reply, “All of the above.” But another key factor could be missing. What is the value of the software that teams produce (an even trickier question)? Are you looking to cut costs, drive top-line revenue, and/or optimize your capabilities to grow the company and its staff?
Whatever your answer might be, you’ll have to understand that there are several dimensions which matter. Here are a few things to consider:
- Productivity is often seen as delivering functionality at a lower cost.
- It is also seen as delivering the same or more functionality, faster.
- Sometimes these are inversely related. For example, delivering at a lower cost, but taking more time.
- Alternatively, one might achieve a faster schedule, but they accomplish this with large teams, at a higher cost.
- Lastly, some might say that for a given schedule and fixed cost, a team delivers MORE than it did before.
- Hence any measure MUST take into account all of these factors: 1) speed, 2) effort/cost, and 3) volume of functionality delivered.
Since the beginning of software measurement, some metrics have looked at only two of the three, using measures like cost per unit function or velocity measures like story points per 2-week sprint. Each of these leaves out another dimension, and that’s where things get misleading. So now it becomes clear that ALL THREE have to be taken into account, without omitting either effort or schedule. Make sense?
But wait. There’s more. None of this holds water if teams deliver software at higher productivity, but at poorer quality (more defects or poor usability) or lower value.
In other words, higher productivity means nothing, if you’re delivering more junk that no one cares about, which has little value. That said, if you know what you’re doing, higher productivity can mean a whole lot. But that’s the subject of another article.
Swapnil Bhagwat is Senior Manager – Design & Digital Media, implementing web, design, and marketing strategies for the group companies. He is an MBA graduate with work experience in the US, UK, and Europe. Swapnil has worked for more than a decade across a range of businesses in global markets.
“Some of the Agile methodology metrics that are now being regularly used to measure software development efficiency and productivity include…”
Cycle time: The time taken to drive a change in the application and deliver it into production.
Lead Time: The duration between the formation of an idea and its delivery.
Fix rate: The time taken to open and close a specific production issue.
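All three of these metrics are differences between timestamps, so they are easy to compute once the relevant events are recorded. A minimal sketch with hypothetical dates:

```python
from datetime import datetime

# Hypothetical timestamps for one change, from idea to production,
# plus one production issue's open/close times.
idea_created = datetime(2024, 5, 1, 9, 0)
work_started = datetime(2024, 5, 6, 10, 0)
deployed     = datetime(2024, 5, 9, 16, 0)
issue_opened = datetime(2024, 5, 10, 8, 0)
issue_closed = datetime(2024, 5, 10, 14, 30)

lead_time  = deployed - idea_created   # idea -> delivery
cycle_time = deployed - work_started   # work started -> delivery
fix_time   = issue_closed - issue_opened

print(f"lead time:  {lead_time}")
print(f"cycle time: {cycle_time}")
print(f"fix time:   {fix_time}")
```

Tracked over many changes and issues, the distributions of these durations (not any single value) are what reveal trends.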
Rodolfo Justoni is a Project Manager (CSM) at Nearshore Systems, overseeing and defining Agile processes and team metrics with 15+ years background in the IT world in both technical and management roles. Rodolfo has experience managing agile teams and commercial engagements from the early stages and in leading teams within different locations and countries.
“The metrics we, at Nearshore Systems, use to measure development efficiency and productivity are the following…”
We measure efficiency as the percentage of an engineer’s contributed code that’s productive. The higher the efficiency rate, the longer that code is providing business value. A high churn rate reduces it. Code Churn is the percentage of a developer’s code representing an edit to their recent work. It’s typically measured as lines of code (LOC) that were modified, added and deleted over a short period such as a few weeks.
The primary purpose of measuring churn is to allow software managers and other project stakeholders to control the software development process, especially its quality.
The most prolific engineers contribute lots of small commits, with a modest churn rate, resulting in a high-efficiency rate. Understanding an engineer’s typical efficiency rate can help you understand their character and where they will fit in best.
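As a rough sketch of the arithmetic (the per-commit numbers here are hypothetical, and real churn tooling derives these counts from version-control history rather than hand-entered figures):

```python
# Hypothetical per-commit stats for one engineer over a few weeks.
# "reworked" counts lines that edited the engineer's own recent work
# (here, work less than a few weeks old).
commits = [
    {"added": 120, "reworked": 10},
    {"added": 60,  "reworked": 25},
    {"added": 90,  "reworked": 5},
]

total_lines = sum(c["added"] + c["reworked"] for c in commits)
churned     = sum(c["reworked"] for c in commits)

churn_rate = churned / total_lines
efficiency = 1 - churn_rate   # share of contributed code that "sticks"

print(f"churn rate: {churn_rate:.0%}, efficiency: {efficiency:.0%}")
```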
As for the productivity, we take into consideration Epic and release (or version) burndown charts that track the progress of development over a larger body of work than the sprint burndown and guide development for both scrum and kanban teams.
Finally, we also measure velocity. Velocity is the average amount of work a scrum team completes during a sprint, measured in either story points or hours, and is very useful for forecasting.
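Velocity-based forecasting reduces to simple arithmetic. A minimal sketch with hypothetical sprint history and backlog size:

```python
import math

# Hypothetical story points completed in the last five sprints.
completed_points = [23, 30, 27, 25, 30]
velocity = sum(completed_points) / len(completed_points)

# Points remaining in the release backlog (hypothetical).
backlog_points = 135
sprints_remaining = math.ceil(backlog_points / velocity)

print(f"average velocity: {velocity:.1f} points/sprint")
print(f"forecast: ~{sprints_remaining} sprints to finish the backlog")
```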
Doru Paraschiv, co-founder and VP of Engineering at IRON sheep TECH, is an engineer with more than 15 years of experience. He has recently collaborated with DZone on research about optimization and monitoring tools.
“If we’re talking about team productivity (and not individual productivity), the best metric is the number of on-spec, bug-free features delivered…”
By this, I mean that the way I know a team is doing the job is if:
- The team delivers the feature.
- The feature follows the specifications.
- The feature is bug-free.
However, the subject is touchy, as there are a plethora of takes on this issue. In reality, it is truly hard to measure software development productivity. It all depends on the type of business you are running. For certain businesses, delivery speed would be the most important. For others, following the specifications would be of greater importance. And for others, the absence of bugs is the best one. In the end, it all depends on the type of software you write.
For instance, for us, productivity doesn’t necessarily mean speed of development, but rather the robustness of delivered code/features. As an example, a team would write a feature, but then that feature would evolve over time, grow, and become more complex. Productivity means writing that feature in such a way that we do not find ourselves needing to re-write it in the future because we didn’t make it flexible.
So in our case, the following are most important: the flexibility and extensibility of code, which allow us to incrementally upgrade a feature without totally re-writing it.
Cristian Rennella is the Co-Founder of elMejorTrato.com. He has eight years of experience in online entrepreneurship in South America, where his team has developed the region’s biggest education company, now with 134 employees and more than 21,500,000 unique visitors, operating internationally in Brazil, Argentina, Chile, Mexico, and Colombia.
“The best ways to measure software development productivity and efficiency are…”
1) For me, being busy means doing stuff, being productive means getting stuff done. Every person in my company knows that everything is measured with things that are finished (it does not matter if it’s a huge product release or just adding a button).
2) We do pair programming, so a great way to measure software development is when each person agrees that there is nothing more to improve. This makes the quality of the code much better.
In the end, productivity (point 1) + quality (point 2) is the key to our success.
Tosho Trajanov is the co-founder of a tech startup, ADEVA, aiming to help startups and businesses build and manage their software development teams. Tosho is a software engineer with extensive experience in building teams and developing complex cloud solutions & enterprise web applications.
“In order to offer better service to our clients, I have spent the last few years researching software development productivity and efficiency. My findings are simple…”
There is no formal, objective measurement of software development efficiency and productivity that any organization could use straight away.
Instead of using KPIs by the book, at Adeva we started scheduling short meetings with the developers we are evaluating, listening closely to the problems they are facing. We also ask them how helpful and knowledgeable the rest of the team members are, to gauge team performance in general. We also encourage them to think about their own productivity and efficiency and come up with ideas that could help themselves as well as their teammates.
We also use tools to analyze how a developer’s code integrates with the codebase and to make a quantitative determination of the developer’s efficiency. We also review the code’s performance, its security, and whether it will have a lower cost of maintenance in the long term.
Andrew Ward is the 2016 winner of the ‘Most Influential Male’ award at the Silicon Canal Tech Awards, and Birmingham Chamber of Commerce’s ‘Future Face of Entrepreneurship.’ Andrew is the Managing Director of an app development agency called Scorchsoft, and the CTO of MODL, a disruptive platform for booking professional models.
“You need to be extremely careful when measuring efficiency and productivity within a tech business, and just because you are agile, doesn’t mean you are effective…”
To give you some context, I run two tech businesses in the UK, and though measuring effectiveness is critical for both, the way I approach this for each business is very different.
One company develops web and mobile apps for its customers. It’s service based, so delivering projects on time and to budget are the most important metrics. As most clients expect us to agree to deliver a product at a fixed cost, we usually have to define a clear, unambiguous specification right at the start. Productivity is measured in terms of development hours, and we use tools such as Toggl and Jira to meticulously track this on a daily basis.
A project is a success if the customer is satisfied and we deliver on or under the quoted number of development hours. If a project does go over, then we measure this in terms of the opportunity cost of not being able to spend that time on other paying projects – which works out to be very expensive. Though other metrics exist, such as lines of code, code quality, the number of tests passed, or the number of external libraries imported, none matter if the project goes over budget, as even small over-runs can consume the entire profit margin.
The second business is product based, meaning the success of the company is not tied directly to the time features take to develop. If a feature takes twice as long, then we may have to delay the launch of the next release, but the cost to the business is the base cost of someone’s salary; unlike the development business, there is no opportunity cost to compare this against.
With this business, we can work agile. We still record time and get developers to estimate hours to hold the team accountable, but you have to be careful about treating developers like machines. A shift in frame of mind can be the difference between fixing a bug in five minutes and being stuck on it for days. In this environment you stand to lose a lot more from management (myself or the other directors) making a poor decision. For example, if we develop a feature that customers do not want, we could waste months of development time with one bad decision. That has a much larger impact on the business than a developer taking slightly longer.
In this environment we follow the build, measure, learn approach proposed by Eric Ries in his book The Lean Startup. We focus more on marketing metrics such as conversion rate, user behaviors, or customer feedback scores. In working agile, there is more autonomy on the team to make decisions throughout the development process, and strategic decisions are made and tested on a daily basis.
Dawn Roberts is a business owner/entrepreneur of a consulting company, Dawn Roberts Consulting. Her niche is business and personal efficiency — specifically process streamlining, complex problem solving, efficient mindset development, and value leakage/waste removal. She’s saved well over $6 million in efficiency improvement projects she’s worked on thus far. She has also developed a 4-week online class targeted at empowering individuals to be more productive and effective, delivering a higher volume of value-based activities.
“I recommend ranking business software options based on four criteria…”
- Opportunity – This will include how much time and cost it will save for your business from automation. Not all processes will fit well with program automation. The opportunity should be quantified in clear terms up front.
- Potential Challenges – This will include any downsides of automating, or of the software application itself. Often, there may be incompatibilities that should be flagged up front.
- Cost to Implement – This will include the cost of the program and any associated IT costs of implementation. I encourage people to have an exhaustive list here. I’ve seen budgets blown because unexpected costs crept up during implementation. Sometimes these costs become so high that the project would have been reconsidered up front if they had been known.
- Time to Implement – This includes how much time it will take to fully integrate the program. Include staff time and support time from the automation company. Be realistic here if you work for a large company, as integrating IT software can take quite a while in some cases, and can impact project ROI.
These will all differ business to business and application to application. You can create a weighted ranking for these four criteria, or you can create a numbering system for each that will allow you to sort and filter the different options you are looking at.
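A weighted ranking like the one described above can be sketched in a few lines. The options, scores, and weights below are hypothetical; cost and time are scored inversely so that higher is always better:

```python
# Hypothetical 1-5 scores for two software options on the four criteria
# (cost and time scored inversely: a 5 means cheap/fast to implement).
options = {
    "Option A": {"opportunity": 5, "challenges": 3, "cost": 2, "time": 4},
    "Option B": {"opportunity": 4, "challenges": 4, "cost": 4, "time": 3},
}
# Weights reflect one business's priorities; they must sum to 1.
weights = {"opportunity": 0.4, "challenges": 0.2, "cost": 0.25, "time": 0.15}

def weighted_score(scores):
    return sum(weights[criterion] * s for criterion, s in scores.items())

ranked = sorted(options.items(), key=lambda item: weighted_score(item[1]),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: weighted score {weighted_score(scores):.2f}")
```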
When looking for software that is aimed to increase automation or efficiency, I recommend the following features:
Easy and intuitive to use:
- No programming required
- Drag and drop user interface
- Online training videos available for staff
Scalable and transferable:
- Remote deployment capabilities
- Ability to be scaled based on future process needs
- Ability to be easily modified based on future process needs
- Ability to be used in different business applications once introduced
- Reporting capabilities – Easily see process health metrics that you assign.
- Analytical capabilities – Easily analyze different parts of the process.
- Optimization capabilities – Program should automatically flag optimization areas.
- Technical support – Technical support should be readily available for the program.
Gady Pitaru is the Chief Technology Officer at Badger Maps. Badger Maps makes sales routing software that helps field salespeople be more successful.
“Measuring software development efficiency and productivity depends on the type of organization…”
Consulting firms will tend to measure efficiency and productivity per project more quantitatively since every hour will be billable. Software product companies might not be able to measure efficiency and productivity as easily, so different project management methodologies can help. We use the Scrum methodology at Badger, which includes a built-in way of measuring software development efficiency and productivity at the team level using a team’s velocity (Scrum stresses team collaboration). Velocity is basically how much work a team can do in a period, and over time its average becomes a good measure of how efficient or productive a team is.
With that said, any single metric will never perfectly reflect reality, so at best they are an estimate instead of a perfect measure. A better measure of software development efficiency and productivity is simply to look at how well the business goals are being met. Instead of counting hours or trying to squeeze every last drop from a single hour, you can instead look at how the software development efforts contribute to meeting the overall business goals. That measures the efficiency of a whole organization instead of just a single work group.
Steve Krzysiak is a developer/manager with over 16 years of experience. He’s had the opportunity to manage many remote employees, including offshore 3rd shift developers who require extra attention. He has always looked for ways to quantify developer output and has discovered some approaches that work well. Steve is the founder of On The Road Creative, an agency that travels to clients for discovery phases then finishes the work remotely.
“First and foremost, it is important to communicate to the development team that metrics are not a form of micromanagement, but a professional self-quantification that will help devs grow…”
I tell teams that there are no penalties, only introspection and voluntary behavior changes when unconstructive patterns emerge. If you’ve built a solid team then they’ll be on board with this; otherwise, they’ll see it as micromanagement and just game the system. The prerequisite is to build a passionate team of individuals that are inclined to individual growth.
Regarding metrics, there isn’t one metric to rule them all. There are many that I have looked at over the years to spot time wasters and anomalies. Time wasters are often areas where work can be optimized, while anomalies provide insight into a developer’s potential distractions (e.g., personal issues, professional mindset changes). I do not use nor encourage software metrics to be used to assess the quality of a developer.
Moving on to the metrics themselves, here is a list of ones I have found useful, along with some general productivity tips.
1. Lines of code (LOC) – Although this is often the most misleading metric, it can provide a baseline for the right project. Some projects are more code-intensive; some are more debugging-intensive. A manager will see within a few weeks whether or not LOC is a suitable metric for their project. If the whole team’s numbers are all over the map, then this metric is likely not suitable for the project.
2. Commits/Check-ins – This can be similarly misleading, but for the right project, once a predictable cadence is established early on, it can be a strong assessment of work done.
3. Time tracking – The buy-in from your team on this one can be hard. It has to be pitched as a way to grow individually and identify bottlenecks in the software, as opposed to identifying slow developers. With a good team, this buy-in comes easily. This works only if the manager trusts the team.
4. Pull Requests (i.e., code reviews) – Although not all shops have a pull request model in place, for the ones that do, it can provide insight into how to prevent code issues in the first place. After a while you will notice repetitive PR suggestions/comments that can be avoided in the future, before they waste another developer’s time.
Imagine a case where there are four suggestions to do something one way as opposed to another. If the suggestion is something that can be summed up in a code linting rule, then the developer will see their error before they make a future PR and not waste others’ time. Also, if people are nit-picky with their PR suggestions, it’s often a good indicator that there might be some ego issues on your team.
5. Version Control System (VCS) History – Provides a sense of past productivity. If a codebase is old enough, you can use a tool like CodeScene.io. CodeScene will look at the VCS history and identify ‘hotspots’ in the code, places where logic changes happen often, among other things. This can allow the team to break up the code and/or be mindful of the sensitive nature of those files. I encourage you to look into the CodeScene product; it is the most promising tool I have come across, and one of the founders gives a good talk about forensic psychology applied to codebases, which is what CodeScene is built on.
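For spotting anomalies of the kind described above, a simple approach is to flag weeks whose commit counts sit far from a developer’s own baseline. This sketch uses hypothetical weekly counts and a two-standard-deviation cutoff (one possible convention, not a prescription), in keeping with the introspection-not-penalty framing:

```python
from statistics import mean, stdev

# Hypothetical weekly commit counts for one developer.
weekly_commits = [14, 16, 15, 13, 17, 4, 15]

mu, sigma = mean(weekly_commits), stdev(weekly_commits)
# Flag weeks more than two standard deviations from the developer's mean.
anomalies = [
    (week, count)
    for week, count in enumerate(weekly_commits, start=1)
    if abs(count - mu) > 2 * sigma
]
print(f"mean {mu:.1f}, stdev {sigma:.1f}, anomalies: {anomalies}")
```

An anomalous week is a prompt for a conversation, not a conclusion; week 6 here might be vacation, a gnarly debugging session, or a distraction worth discussing.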
Rob Zuber serves as chief technology officer at CircleCI. Before CircleCI he was a co-founder of Utah Street Labs and was the chief mobile strategist of Critical Path. He works out of CircleCI’s headquarters in downtown San Francisco.
“The best metrics for measuring software development productivity and efficiency are…”
Commit-to-Deploy Time (CDT): This is the time it takes for code to go from commit to deploy. In between, it could go through testing, QA, and staging, depending on your organization. The goal of measuring CDT is to tell CEOs (and engineering managers) how long it's taking the code to get from one end of the pipeline to the other and what roadblocks you're encountering, if any.
Ideally, if you're following CI best practices, your tests are good quality and automated, and you can get from commit to deploy-ready status in mere minutes, even seconds for a microservice. If you have a largely manual QA process, your commit-to-deploy time will likely be longer, which can reveal where you have room to improve.
These improvements could be more on the technical side (e.g., our tests are flaky), more process-oriented (we use complex integration tests where only unit tests are needed), or some combination of the two. At any rate, your goal should be to improve your commit-to-deploy time incrementally.
Most fast-moving organizations (e.g., Facebook, Amazon) deploy hundreds of times a day. For smaller organizations, daily deployments would be a good goal. The smaller your commits are, the faster they can get into production, and the faster you’ll be able to fix things when they go wrong… and at some point, they definitely will go wrong. More frequent deployments will also get your team accustomed to doing so, which will hopefully mean they’ll get better and faster at doing it.
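Computing CDT itself is simple once you have the two timestamps per commit; the tricky part in practice is collecting them from your CI/CD system. A minimal sketch, assuming you can export (commit, committed-at, deployed-at) records:

```python
from datetime import datetime

def commit_to_deploy_times(events):
    """Given (commit_sha, committed_at, deployed_at) tuples, return the
    commit-to-deploy time in minutes for each commit."""
    times = {}
    for sha, committed_at, deployed_at in events:
        delta = deployed_at - committed_at
        times[sha] = delta.total_seconds() / 60.0
    return times

# Illustrative data: one commit deployed in 12 minutes, one in 4.5 hours.
events = [
    ("a1b2c3", datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 9, 12)),
    ("d4e5f6", datetime(2023, 5, 1, 10, 0), datetime(2023, 5, 1, 14, 30)),
]
cdt = commit_to_deploy_times(events)  # {'a1b2c3': 12.0, 'd4e5f6': 270.0}
```

Tracking the median of these values per week gives you the incremental-improvement target the panelist describes.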
David Attard is an established web designer and author on influential web design and development sites. He also manages the development of new software projects for various companies.
“I’ve found that the best way to measure development efficiency and productivity is…”
Team velocity as defined by Agile development.
There are various reasons why this works well.
1 – The unit of work is defined as necessary for the specific team. Whether this is an hour, a day of work, or the completion of a task, it’s the team who decides what best applies to them.
2 – The velocity takes into account vacations, sick leave, support work, and other things that eat away at development time.
3 – Averaging the velocity over the last three sprints makes for better prediction and measurement of efficiency (or the lack thereof). By monitoring whether the velocity is going up, going down, or remaining stable, one can better understand how the development team is performing.
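The three-sprint averaging described above amounts to a rolling mean over completed points per sprint. A small sketch (point values are illustrative):

```python
def rolling_velocity(sprint_points, window=3):
    """Average completed points over the last `window` sprints."""
    recent = sprint_points[-window:]
    return sum(recent) / len(recent)

def trend(sprint_points, window=3):
    """Compare the latest rolling average against the previous one."""
    if len(sprint_points) <= window:
        return "not enough data"
    current = rolling_velocity(sprint_points, window)
    previous = rolling_velocity(sprint_points[:-1], window)
    if current > previous:
        return "up"
    if current < previous:
        return "down"
    return "stable"

history = [21, 24, 18, 27, 30]
# rolling_velocity(history) -> (18 + 27 + 30) / 3 = 25.0
```

The trend value, not any single sprint's number, is what tells you whether the team's capacity is genuinely shifting or just fluctuating.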
“The best way to measure software development productivity and efficiency is…”
We found that the point system in Jira Agile Scrum is the best way to gauge the performance of individual devs and of the team as a whole. We use it to gauge the growth of the team's productivity and to assess deadlines.
Dan is the Development Team Lead for Objective in Salt Lake City, Utah. He holds a master’s degree in Linguistics from the University of Hawai’i and received bachelor’s degrees from Brigham Young University in Korean and Linguistics.
“The best metric to measure software development efficiency and productivity is…”
How often you ship new code and features. Ultimately if you’re not shipping often, you’re not efficient or productive.
Hristo Stalev is the CTO and co-founder of Kanbanize. Before starting the company with his partners, Hristo specialized in front-end development.
“As a Lean organization, we try to continuously improve our process efficiency. The most important metrics for us are…”
The weekly throughput of the dev team, the average cycle time of each assignment type, and waste time. The throughput is the number of cards that the team places in the Done section of their Kanban board. We pay special attention to how long each assignment takes to complete. From the cycle time, we calculate how much time was accumulated in waste activities and try to minimize it.
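These three Lean metrics are easy to derive once each Done card carries its cycle time and the portion of it spent blocked or waiting. A minimal sketch (the card fields here are illustrative, not Kanbanize's actual data model):

```python
from collections import defaultdict
from statistics import mean

def kanban_metrics(cards):
    """cards: dicts with 'type', 'cycle_days', and 'waste_days' (days the
    card sat blocked or waiting). Returns weekly throughput, average cycle
    time per assignment type, and waste as a share of total cycle time."""
    throughput = len(cards)  # cards reaching Done during the week
    by_type = defaultdict(list)
    for c in cards:
        by_type[c["type"]].append(c["cycle_days"])
    avg_cycle = {t: mean(v) for t, v in by_type.items()}
    total_cycle = sum(c["cycle_days"] for c in cards)
    total_waste = sum(c["waste_days"] for c in cards)
    waste_ratio = total_waste / total_cycle if total_cycle else 0.0
    return throughput, avg_cycle, waste_ratio

week = [
    {"type": "feature", "cycle_days": 4.0, "waste_days": 1.0},
    {"type": "bug", "cycle_days": 1.0, "waste_days": 0.5},
    {"type": "feature", "cycle_days": 6.0, "waste_days": 2.0},
]
```

Minimizing the waste ratio, rather than pushing raw throughput up, is the Lean-flavored goal the answer describes.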
Andrew Haller is a co-founder and co-CEO of AirDev, a San Francisco based startup that designs and develops custom software for websites and mobile apps. Haller, a native of Chicago, graduated from Stanford and Harvard Business School and lives in San Francisco.
“A good rule of thumb for software development efficiency is…”
How long work actually takes relative to original estimates. Sadly, the industry has a reputation for running over budget and past deadlines, so while all development shops can set ambitious targets, those that can actually deliver (or provide fixed pricing and money-back guarantees) are typically the most efficient.
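This rule of thumb reduces to tracking the ratio of actual effort to estimated effort across projects. A simple sketch (the hour figures are illustrative):

```python
def estimate_accuracy(projects):
    """projects: (estimated_hours, actual_hours) pairs. Returns the mean
    ratio of actual to estimated effort: 1.0 means on target on average,
    above 1.0 means the shop tends to run over."""
    ratios = [actual / estimated for estimated, actual in projects]
    return sum(ratios) / len(ratios)

history = [(40, 50), (80, 80), (20, 30)]
# ratios: 1.25, 1.0, 1.5 -> mean 1.25, i.e., 25% over estimate on average
```

A shop whose ratio hovers near 1.0 over many projects is the kind that can credibly offer the fixed pricing mentioned above.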
Steve Mezak is the founder and CEO of Accelerance, which connects companies that need software development services with the most qualified outsourcing firms around the globe. Mezak is a software development expert and co-author of Outsource or Else! How a VP of Software Saved His Company and author of Software Without Borders.
“There are multiple ways to measure software development efficiency and productivity, depending on the goals of the organization…”
To emphasize or measure software development efficiency and productivity, several of our clients focus on:
1) Hitting Release Dates – The team's ability to agree to a product roadmap and then hit the dates for releases. There is some give and take when using an agile development methodology. In some cases, the release date is the most important target because of related marketing, promotional, and publicity campaigns. In other cases, the implementation of a specific feature is required for a successful release, whether it ships on the target release date or later. The most important thing in this context is the ability of the development team to communicate with product management, which is best measured by hitting release dates.
2) Quality and Customer Satisfaction – Happy customers who are now promoters, willing to recommend the software to others because it delivers value and has no serious deficiencies or bugs.
3) Profitability – Increasing revenue and/or cutting costs. This could mean increasing the value of the software over time to support higher revenue per unit, such as the monthly subscription price for end users. You can also reduce expenses by increasing the team's productivity and possibly by outsourcing.