Metrics of success in Software Development
I am often asked: How do you do it? How do you deliver one successful project after another? The answer is simple: metrics. A success that cannot be measured is not repeatable.
What metrics are and what they are not
Some believe reporting is essential to success. It is, but not reporting for time control; reporting for client billing. These two are entirely different things. The former shows a no-trust mentality, which is prohibitive for quality deliveries, whereas the latter is a necessity to justify the time spent.
Time reports, in their entirety, are not a metric that guarantees success, nor do they impact success in any form. Time is a matter of capability and the will to succeed, and is therefore subjective, not a metric that could be applied in a data-ruled world.
Base Question: What metrics can be applied in Software Development?
I assume that software development in 2020 is carried out based on agile methodologies. Agile development, again in its entirety, describes a process. The critical question is: are there any metrics in a process? Yes, there are, but probably not the ones you are looking for. In software development, some might look at the number of commits to a code repository, or the number of lines of code written by a software engineer in a given period of time. Neither of these is a qualitative metric. Why can’t code quantity be a metric, some may ask? Because quantity cannot guarantee a qualitative outcome. Again, subjective metrics have no impact whatsoever on success.
Let me describe an idea: what if success were an outcome of innovation? Innovation could qualify as a metric, but again, only subjectively. Innovation can be compared with Einstein’s theory of relativity: depending on the viewer’s position in space, time moves slower or faster. Innovation is the same. To some it sounds innovative; to others, who do not grasp the fundamental changes, it is just a repetition of the “Old.”
Down the line, there are two groups of metrics we can apply:
- Software metrics and
- Code-based metrics.
To cover the full range of a project (even a long-term software commitment), we need to track both metric groups.
Now let’s take the code-based metrics one by one and see how to get some value out of them.
Code quality
Code quality metrics measure software health through an automated process of code review. In real-life situations, a low code quality score means that the code is too complicated: extending functionality will likely be too difficult, and even support activities might be difficult post-launch.
The most critical code quality metrics are:
- Maintainability index
- Cyclomatic complexity score
- Inheritance Depth score
- Class coupling
- Number of Lines of Code produced
Some tools, such as Microsoft Visual Studio, calculate these performance indicators automatically, out of the box. Using MS Visual Studio brings clear advantages when combined with a Microsoft code stack.
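To make these indicators tangible, here is a minimal, illustrative Python sketch of how a cyclomatic complexity score can be approximated from a function’s syntax tree. The simplified rule set (branching statements plus boolean operators) is an assumption for illustration, not how Visual Studio’s analyzers actually count:

```python
import ast

# Decision points that add an execution branch. This simplified rule set
# follows the common McCabe-style counting; real analyzers refine it further.
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(func_source: str) -> int:
    """Return 1 + the number of decision points found in the source."""
    complexity = 1  # a straight-line function has complexity 1
    for node in ast.walk(ast.parse(func_source)):
        if isinstance(node, ast.BoolOp):
            # 'a or b or c' adds one branch per extra operand
            complexity += len(node.values) - 1
        elif isinstance(node, _BRANCH_NODES):
            complexity += 1
    return complexity

sample = '''
def classify(n):
    if n < 0:
        return "negative"
    if n == 0 or n == 1:
        return "small"
    return "large"
'''
print(cyclomatic_complexity(sample))  # 4 = base 1 + two ifs + one 'or'
```

A score creeping toward 10 on such a scale is the usual signal to split a function into smaller units.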
Testing quality
Testing quality metrics refer to the maturity of the code and the production readiness of a software product. They also provide some feedback on a QA team’s productivity, since QAs contribute to minimizing software bugs and to high-quality software rollouts.
- Test coverage
The term “test coverage” refers to the percentage of software requirements that are covered by programmatic test cases (sometimes referred to as unit tests). Keeping test coverage high improves software compliance with the software requirements specification (SRS).
- UAT defects
Defects found during user acceptance testing (UAT) reflect software quality before the production launch. The number of bugs discovered at this stage should be significantly lower than in earlier stages. If it is close to the number of bugs found earlier, both the testing and the software development stages need improvement.
- Production defects
Bugs that slip through into production can cause revenue loss. You should ensure that at least 95% of all known bugs are fixed and ironed out before the final software release.
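As a minimal sketch, the coverage and release gates above can be expressed as simple ratios. The sample figures below are hypothetical, not from a real project:

```python
# Hypothetical sample data for the two testing-quality gates described above.

def coverage_ratio(covered_requirements: int, total_requirements: int) -> float:
    """Share of software requirements covered by programmatic test cases."""
    return covered_requirements / total_requirements

def ready_for_release(fixed_bugs: int, total_bugs: int, threshold: float = 0.95) -> bool:
    """Apply the '95% of all known bugs fixed before release' rule."""
    return fixed_bugs / total_bugs >= threshold

print(f"Test coverage: {coverage_ratio(164, 200):.0%}")     # 82%
print("Release gate passed:", ready_for_release(191, 200))  # True (95.5% fixed)
```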
Solution availability
Solution availability is an essential metric group, as end-users tend to abandon a software application that is problematic or difficult to use from a UX perspective. This metric group also showcases a development team’s efficiency in testing, troubleshooting, and improving performance, stability, and usability.
- MTBF - Mean time between failures
The MTBF metric can be used to predict the failures of a software product and to measure the work of a support team. A low MTBF indicates insufficient system performance monitoring or low quality of work done in the past (a computation sketch follows this list).
- MTTR - Mean time to recovery/repair
The MTTR metric indicates the average amount of time a team spends fixing software issues. There are two sub-metrics: the repair time, which covers only the active restoring period, testing, and the return to a functional state; and the recovery time, which starts from the initial issue detection or report, extends over the analysis, and runs until the final repair.
Especially when third parties are involved and an SLA for software maintenance is in place, both parties should agree on which metrics are used to uphold the agreement. A low MTTR score indicates a well-architected software product, avoiding long downtimes and potential revenue loss.
- Unavailability
Unavailability indicates how many times over a set timeframe an application has failed. This metric helps software engineers analyze and improve solution availability.
Achieving 100% uptime might also not be a good idea, especially when freshly launching a software product. You might want to account for some downtime while load testing, or while deliberately crashing the application and measuring what happens behind the scenes. This could help predict behavior in future situations, or even reveal performance issues that would otherwise never have been discovered.
- Page load time metric (only applicable to web)
Page load is a commonly used metric in web-based applications (including SPA and PWA apps). It measures how quickly an application becomes fully usable for an end-user. This metric is not set in stone and can differ from one end-user location to another.
It should also be continuously improved, as it directly affects the usability of a web app. Web apps should stay within a 2-3 second page load limit; anything faster is better and will increase conversion rates. Page load speed is also an index for search engines when ranking a web page in organic results.
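Here is the promised sketch of how MTBF, MTTR, and availability can be derived from an incident log. The incident timestamps and the one-month observation window are hypothetical sample data:

```python
from datetime import datetime

# Hypothetical incident log: (failure detected, service restored).
incidents = [
    (datetime(2020, 3, 2, 9, 15), datetime(2020, 3, 2, 10, 0)),
    (datetime(2020, 3, 14, 22, 30), datetime(2020, 3, 15, 1, 30)),
    (datetime(2020, 3, 28, 6, 0), datetime(2020, 3, 28, 6, 45)),
]
observation_hours = 31 * 24  # one month of monitoring

downtime_hours = sum(
    (restored - detected).total_seconds() / 3600 for detected, restored in incidents
)

mtbf = (observation_hours - downtime_hours) / len(incidents)  # mean uptime between failures
mttr = downtime_hours / len(incidents)  # mean detection-to-restore time (recovery flavor)
availability = (observation_hours - downtime_hours) / observation_hours

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.1f} h, availability: {availability:.3%}")
```

Whether MTTR starts at detection (recovery) or at the beginning of the active fix (repair) should be pinned down in the SLA, as the two readings can differ substantially.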
Security and Penetration
The security metrics group indicates which parts of the software could be vulnerable and will need management activities to keep unwanted guests out. While penetration testing is good practice ahead of a software release, this metric can never be bullet-proof. Generally, you should follow the trends set out in the security industry.
- Number of vulnerabilities found by regular penetration testing
This metric indicates the exposure to security risks. In a best-case scenario, the scores achieved should decrease as the project matures. An increasing number of discovered vulnerabilities means that specific release processes might not have been followed, or that unit-test coverage might have been skipped for newly added code/features.
- Number of open vulnerability issues
The number of already closed or patched vulnerabilities doesn’t give a full picture of a solution’s security. It needs to be directly compared with the number of security loopholes that are still unfixed. This metric keeps the focus on this critical aspect of software deployments. Security improvements should be a major concern in all software development projects.
- Quantitative severity of potential security incidents
This metric displays the general trend in solution security and helps software engineers prioritize the incidents and open loopholes that should be attended to first. The main criterion in a severity ranking is how strongly an event can affect software reliability.
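The three security metrics can be combined into one register. A minimal sketch, with hypothetical findings and illustrative severity weights chosen so that one open high-severity issue outweighs many low ones:

```python
from collections import Counter

# Hypothetical vulnerability register: status and severity per finding.
findings = [
    {"id": "VULN-101", "severity": "high", "open": False},
    {"id": "VULN-102", "severity": "mid", "open": True},
    {"id": "VULN-103", "severity": "low", "open": True},
    {"id": "VULN-104", "severity": "low", "open": False},
]

open_by_severity = Counter(f["severity"] for f in findings if f["open"])

# Illustrative weights, not an industry standard.
weights = {"high": 10, "mid": 3, "low": 1}
risk_score = sum(weights[sev] * count for sev, count in open_by_severity.items())

print("Open vulnerabilities:", dict(open_by_severity))  # {'mid': 1, 'low': 1}
print("Severity-weighted risk score:", risk_score)      # 4
```

Tracking the weighted score month over month shows the general trend the metric asks for, rather than a snapshot of closed tickets.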
User satisfaction
Once we let users in, we need to measure several factors. User satisfaction can be measured through surveys, screen recordings, heatmaps, click maps, and the like. A common best practice is to ask users to rate their experience. This metric helps us understand which functionality users actually use and appreciate, and it can be extended to which features users wish to see next. A user satisfaction survey should hunt for the following parameters:
- Did the application meet your expectations?
- Does existing functionality help in achieving the objective?
- Is the User interface convenient to achieve the objective?
- Is the software stable, and does it perform well?
- Which features would you like to see next?
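Turning such survey answers into a trackable score can be as simple as averaging per question. The responses below are hypothetical, on an assumed 1-5 scale:

```python
from statistics import mean

# Hypothetical survey responses on a 1-5 scale, keyed by question.
responses = {
    "Met expectations": [5, 4, 4, 5, 3],
    "Functionality helps achieve the objective": [4, 4, 5, 4, 4],
    "User interface convenience": [3, 4, 4, 3, 5],
    "Stability and performance": [5, 5, 4, 4, 4],
}

for question, scores in responses.items():
    print(f"{question}: {mean(scores):.1f} / 5")
```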
OKRs or KPIs?
What we haven’t defined yet is the system. OKRs behave well in business processes, measuring personal success or even the objectives of an entire organization, but they will not sustain fine-grained metrics like the ones above. KPI stands for Key Performance Indicator; the name itself indicates that the outcome is a measure or score.
KPI examples
To give an example of the above key performance indicators, I have summarized them in the synoptic table below. Each metric needs to be applied over a given timeframe. The table shows example monthly targets based on a sample software development project.
| KPI Group | Metric | Target |
| --- | --- | --- |
| Code Quality | Maintainability index | 20 to 100; the higher the value, the better |
| Code Quality | Cyclomatic complexity index | 1 < target < 10 |
| Code Quality | Depth of inheritance index | 0 |
| Code Quality | Class coupling | 0 |
| Code Quality | Lines of code | Depends on app complexity; the lower the score, the better |
| Testing Quality | Test coverage | 80%+, following the general trend |
| Testing Quality | UAT defects | < 30, track general trend |
| Testing Quality | Production defects | < 10% of all bugs, track general trend |
| Solution Availability | MTBF | General trend; the higher the number, the better |
| Solution Availability | MTTR | General trend; the lower the score, the better |
| Solution Availability | Page load | < 2 seconds |
| Solution Availability | No. of unavailability cases | 1-5, incl. deployments |
| Solution Security | Number of vulnerabilities discovered through regular pen testing | 0-3, track general trend |
| Solution Security | Number and severity of security incidents | 0-1 for high-severity incidents and 0-2 for mid- and low-severity incidents |
| User Satisfaction (survey) | Meeting expectations on functionality | < 5 |
| User Satisfaction (survey) | User interface convenience | < 5 |
| User Satisfaction (survey) | Stability and performance | < 4 |
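A sketch of how such monthly targets could be checked programmatically; the target predicates mirror the sample table above, and the measured values are hypothetical:

```python
# Each KPI maps to (target predicate, measured value for the month).
kpis = {
    "Maintainability index": (lambda v: 20 <= v <= 100, 68),
    "Cyclomatic complexity": (lambda v: 1 < v < 10, 7),
    "Test coverage (%)": (lambda v: v >= 80, 83),
    "UAT defects": (lambda v: v < 30, 21),
    "Page load (s)": (lambda v: v < 2, 1.7),
}

for name, (meets_target, measured) in kpis.items():
    status = "OK" if meets_target(measured) else "MISS"
    print(f"{name}: {measured} -> {status}")
```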
Success or Not?
The above indicators measure the quality of the produced code, the quality of software testing, broader system availability, application security, and, finally, user satisfaction. These factors alone, however, are quantitative: they help in understanding bottlenecks in the development process and can be used to establish the reliability of the implemented software development processes.
A final question remains: is this the key to success? The answer is no. This is only the quantitative part of the measurable success factors. The most important, and often overlooked, success factor in software development is the team working on the software solution itself.
Special Team needs and Setup
I pay special attention when hiring software engineers. While looking for highly specialized skill sets, I am also looking for a set of soft skills. If a software engineer doesn’t fit into the corporate culture, it is a no-go. When all team members undergo this personal scrutiny before being hired, the quantitative approach above becomes enhanced with the human factor of award-winning teams.