Key Performance Indicator
- Anand Nerurkar
- Nov 10, 2023
- 13 min read
Updated: Jul 5, 2024

Strategic KPI
These metrics are important for monitoring business and IT alignment and for reporting to the high-level stakeholders involved in the project. Strategic metrics can therefore track overall IT costs and the involvement of EA with business objectives over a longer timescale.
Total IT cost savings
One of the best indicators that your EA efforts are working is the total amount of money saved on IT services, applications, etc. Enterprise architecture is used to simplify the complex IT landscape. By evaluating capabilities, processes, applications, etc., enterprise architects are able to deliver cost savings on a regular basis.
Examples: Retiring legacy systems, consolidating licenses, modernizing and rationalizing applications, standardizing IT infrastructure, or migrating to the cloud.
IT portfolio Total Cost of Ownership
As previously mentioned, one way to lower costs is through application rationalization. IT application portfolios belonging to large organizations will consist of hundreds (or even thousands) of different intersecting applications, including SaaS, data storage, servers, hardware, etc.
Rationalizing these will reduce the IT complexity and consequently remove management, support, and training costs, as well as unnecessary communication to lower the total cost of ownership (TCO).
Examples: Lower TCO by eliminating unused apps, retiring redundant software, and standardizing common technology platforms.
Cost for annual IT projects
One of the important by-products of enterprise architecture is that the cost of annual IT projects will be reduced. The cost of each project will be lower because EA provides a single, comprehensive view of the IT landscape, which drastically reduces project preparation time.
Example: When business plans to invest in IT projects such as application rationalization, application modernization, post-merger integrations, or business transformations, it can use EA tools to gather affected IT components, visualize the IT roadmap, and manage changes.
Business objectives supported by IT roadmap
One way to measure the success of enterprise architecture is by creating IT roadmaps that support business objectives. The more objectives the IT roadmap supports, the more successful EA is within the business, and the higher the quality of the end result.
An IT roadmap is a visual way for a company to develop and share a strategy for IT initiatives. Such roadmaps are a key part of enterprise architecture to support the ongoing innovation and success of the business.
Examples:
How many of the current business objectives are supported by the IT stack?
How often is EA involved in business strategy?
Common Services Compliance Rate (CSCR)
Enterprise architecture often defines common services such as an ESB, BPM, and infrastructure platforms. The CSCR measures the percentage of new projects that are fully compliant with the common service roadmap.
Example: 67% of projects complied with EA's common service strategy this year.
Architectural Due Diligence Rate (ADDR)
The percentage of projects that are fully compliant with the EA governance process. An EA governance process involves steps such as updating EA blueprints, architectural reviews, and macro design.
ADDR is a good metric for reporting violations of the EA process. It is often helpful to report ADDR by business unit, technology silo or project manager — to highlight problem areas.
Example: 78% of operations department projects complied with EA governance but only 12% of sales department projects were in compliance.
Sunset Technology (ST)
Percentage of the technology stack that is considered sunset by EA. Measures IT's ability to introduce strategic technology and retire legacy systems.
Example: At the end of the year 54% of production systems were deemed sunset technologies. This compares with 62% last year.
Financial KPIs
Program managers might track some financial metrics for the business as a whole, as doing so can reflect the success of a broad and far-reaching program. They also may track financials specific to the program, such as the following example KPIs:
Capex Reduction
Capex reduction through the use of IaaS, PaaS, and SaaS solutions
Infrastructure Consolidation
Cloud Enablement
% of server consolidation
% of infrastructure running on demand
License Optimization
Adoption of open source
BYOL on cloud
% reduction in licenses
IT Asset Reuse - maximize IT asset reuse
--
Application reuse
% server infrastructure shared
% of shared services component
Opex Reduction
==
Portfolio Simplification --- simplification of application and server portfolio
Application portfolio rationalization
====
% of applications decommissioned
% of applications consolidated
% of legacy transformation
Shared Services
===
% of FTEs shared across resources
% of shared services components
Application health analysis
====
% of reduction in ticket volumes
Earned Value: Managers often track this KPI in projects, but it can also be useful to track this in a program. Earned value refers to the amount of money or budget that was authorized for work that’s been completed up until that point.
In a program, this might include the original amount of money designated for the completion of a certain number of projects within the program, which you complete in phases. If your teams have successfully completed three out of five projects within a program, and the budget to complete those three projects was set at $500,000, then the earned value of your program so far is $500,000.
Actual Cost: Managers track this metric in projects, and can also track it in programs. Using the above example, the actual cost to complete those three projects — regardless of the $500,000 budget — may have been $600,000.
Cost Performance Index: This KPI uses the earned value and actual cost to determine how well your program is completing work on budget.
The cost performance index is the ratio of earned value to actual cost. Using the above figures, this program’s cost performance index would be $500,000/$600,000, or 0.833. Any figure under 1.0 means that your program is currently over budget.
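The earned-value arithmetic above can be sketched as a small helper; the figures are the hypothetical ones from the example:

```python
def cost_performance_index(earned_value: float, actual_cost: float) -> float:
    """CPI = earned value / actual cost; a value below 1.0 means the
    program is currently over budget."""
    return earned_value / actual_cost

# Figures from the example: $500,000 earned value, $600,000 actual cost.
cpi = cost_performance_index(500_000, 600_000)  # ~0.833, over budget
```

Tracking CPI per reporting period, rather than only at program close, lets you catch budget drift while there is still time to correct it.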
Return on Investment for Program: Organizational leaders calculate return on investment (ROI) with many organizational investments. They can do the same with many programs.
A broad program, for example, might include many projects to improve customer experience on a company’s e-commerce website. After a year of operating the program, program leaders might track ROI on that program. For example, they might track increased sales or the decrease in customers abandoning their online carts and completing a sale.
Customer-Focused KPIs
Many programs will focus on customers: attracting and retaining more of them, increasing their satisfaction, or taking other actions that help customers.
Thus, the KPIs for these programs also must focus on customers in ways that can show whether operations are or aren’t improving.
Examples of customer-focused KPIs include the following:
Business Value due to cloud adoption
---
Business objectives supported by IT roadmap
% of business capabilities delivered vs proposed business capabilities
RAID log - Risk/Assumption/Issue/Dependency
Time To Market
==
MVP for new product segment
minimum cycle/delivery time
minimum decision-making cycle - business and IT alignment
Operational Integrity
---
Application availability statistics
trending of high/critical incidents - problem management
Application integration statistics
API based
Messaging based
Remote procedure calls
IT modernization/revamping legacy
==
improved user/customer experience
customer satisfaction
% of traffic usage
Success Rate: This is the measure of success in meeting any of the various customer-focused goals that a program may establish. The success rate might mean keeping the average customer on your website for 20 percent longer, or increasing the average sale of all customers by 10 percent because of how your website offers suggestions for accessory products. There could be a wide variety of goals, and you’ll want to track how you’re doing in meeting any of them.
Customer Satisfaction: Your team can measure customer satisfaction through various means. Two of the most common ways are customer reviews of your service and product ratings.
Measure these things before the beginning of a program focused on customer satisfaction, and then again as the program progresses.
Customer Retention: You can measure the percentage of customers you retain as repeat customers, both before and after a program starts, and make changes accordingly.
Customer Engagement: Your team might often measure customer engagement through digital measures, including how long a customer stays on your website or uses your software as a service (SaaS) tool. But, you can measure customer engagement in other ways, such as how often your customers or potential customers interact with your sales teams.
Technology Metrics
===
Open source usage
service availability
Automation
workload automation
resource health
bursting
Process KPI
Regulatory Compliance
no of resources the audit team consulted in the cloud repository for legal and regulatory compliance
% of policies in a compliant state
cost optimized per policy over time
% of permit to assess
% of permit to design
% of permit to build
% of permit to operate
Resource Optimization
size of instances, scale out/in on demand
no of instances during off-peak periods
Governance KPI
==
% of applications passed tollgate
% of certified hardware
% of expedited/exceptional approvals
% of senior management support
% of operating unit participation
Operational KPIs
You may also want to track a range of internal operational measures. These are not direct customer metrics, but ones that measure the efficiency and effectiveness of operations, insofar as they affect how you engage with customers.
Operational KPIs measure operations that can affect your revenue and profits, depending on how well or how poorly they are working.
Some examples of operational KPIs are as follows:
Number of rationalized applications
Application rationalization is the main operational metric for EA. When architects rationalize their applications, they will go through an organization’s portfolio and determine which applications need to be retired, upgraded, repurposed, or renegotiated.
By tracking the number of applications rationalized through EA efforts, stakeholders have a clear metric as to how the IT landscape has been improved.
Number of overlapping applications
Another operational metric enterprise architects can use is the number of overlapping applications identified and removed from their portfolio. Overlapping applications are applications that fit into the same category and provide the same or similar functions. An example of this would be using two similar apps within one business unit.
Overlapping applications increase wasted budgets and create unnecessary complexity if not uncovered. The goal is to remove as many overlapping applications as possible, without impacting the value created.
This makes it easier for architects to upgrade application landscapes, integrate new software, improve efficiency, etc.
Number of functionally unfit applications
Functionally unfit applications tend to emerge during mergers and acquisitions. When this happens, two unique IT landscapes integrate with each other — this is a perfect opportunity to employ enterprise architecture.
There may be applications taken on that no longer serve the new version of the company. This is also the case through software upgrades and cloud migration.
EA tools like the LeanIX EAM identify functionally unfit applications with surveys. The tool differentiates between Unreasonable, Insufficient, Appropriate, and Perfect. It can identify applications that need to be replaced or need to be worked on since they do not functionally fit their purpose.
Number of technically unfit applications
Technically unfit applications refer to applications in the end-of-life lifecycle stage and applications which do not satisfy your technical requirements.
It might be the application does not support SSO when IT requires it or that the application's underlying technology is outdated or not supported anymore. In this case, tracking these applications will tell which ones need to be replaced to support the roadmap.
LeanIX EAM differentiates between Inappropriate, Unreasonable, Adequate, and Fully Appropriate technical fit.
Number of tech obsolescence candidates
By monitoring tech obsolescence candidates, enterprise architects can plan replacements for obsolete technology before its lifecycle ends. Through this, EAs protect business processes from IT problems and shield the organization from security risks posed by outdated tech.
Scheduled Performance Index: This metric is similar to the cost performance index. It is a ratio of earned value to planned value. Earned value is what your program has completed. Planned value is what you expected the program to have completed by this point in the project.
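The schedule counterpart of CPI follows the same pattern; a minimal sketch with hypothetical figures:

```python
def schedule_performance_index(earned_value: float, planned_value: float) -> float:
    """SPI = earned value / planned value; a value below 1.0 means the
    program is behind schedule."""
    return earned_value / planned_value

# Hypothetical figures: $500,000 of work completed against $550,000
# planned to be complete by this point in the program.
spi = schedule_performance_index(500_000, 550_000)  # ~0.91, behind schedule
```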
Component Delivery Rates: This metric measures how well the projects and other components of your program are doing their work. Are projects delivering certain products and materials as expected? This level of success will have an effect down the line, as other parts of the program might depend on these completed products.
Timeliness of Component Delivery: This metric is related to component delivery rate. It focuses not only on whether components are delivered, but also on whether they are delivered on schedule.
Project Completion Rates: Your program might depend on projects being completed on time. Ask: What are your project completion rates? What percentage of your projects are completing their work on time? What percentage of your projects are failing to complete their work at all?
Communications Effectiveness: Your program and the projects within the program will be hurt by ineffective communications among team members. Ask: How effective are your communications in helping ensure clarity and progress? You can measure communications effectiveness in a number of ways, including timeliness and how knowledgeable team members are about communications that have occurred.
Team Performance: You can measure your team’s performance in a number of ways. For instance, you might measure productivity rates among team members in various ways, or team members’ adherence to project deadlines.
Business Capability KPIs
These KPIs focus on how your program is improving your organization’s overall capabilities. They are similar to operational KPIs, but focus more on the strategic value and goals of your organization.
The “Health” of Your Teams: Always monitor and assess how well your teams are working together. You’ll also want to assess how aligned their work is with the mission of the organization, how committed they are to the organization’s core principles, and a range of similar ideas and metrics.
Expected Program Benefits vs. Actual: While you’ll track specific metrics on how the program is doing, the program charter or management plan will set out some primary goals and expectations of benefits from the program that you will always be tracking. (You can download a template for a program charter in our roundup of program management templates.) Periodically, you should also monitor actual results — compared to the goals — in some of those primary areas, and make adjustments when needed.
Quality KPI
==
Defect leakage
Cost of quality (COQ)
Code Coverage
Agile KPI
==
Capacity utilization per sprint (hr)
= actual committed hours / team bandwidth * 100
Commitment reliability
= total no of story points accepted vs committed
= accepted / committed * 100
Backlog health --- no of story points available in comparison with average velocity
Team velocity/sprint - no of story points accepted in a sprint
Burndown chart --- work completed in a sprint vs total work remaining; it tracks the remaining amount of work.
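The sprint formulas above can be sketched as small helpers; the sprint figures below are hypothetical:

```python
def capacity_utilization(committed_hours: float, bandwidth_hours: float) -> float:
    """Capacity utilization per sprint = committed hours / team bandwidth * 100."""
    return committed_hours / bandwidth_hours * 100

def commitment_reliability(accepted_points: int, committed_points: int) -> float:
    """Commitment reliability = story points accepted / committed * 100."""
    return accepted_points / committed_points * 100

def backlog_health_sprints(ready_points: int, avg_velocity: float) -> float:
    """Sprints of ready backlog remaining at the current average velocity."""
    return ready_points / avg_velocity

# Hypothetical sprint: 120h booked of a 160h bandwidth, 36 of 40
# committed points accepted, 200 ready points against a velocity of 40.
util = capacity_utilization(120, 160)        # 75.0%
reliability = commitment_reliability(36, 40)  # 90.0%
runway = backlog_health_sprints(200, 40)      # 5.0 sprints of runway
```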

Risk report
Status report
Lessons learned
Milestone report
Scaled Agile KPI (SAFe)
==


Outcome Metrics
· Do our solutions meet the needs of our customers and the business?
· Outcome metrics are focused on results, such as customer satisfaction, product usage, revenue generation, employee engagement, iteration goals, PI objectives, and the impact of the work. These metrics provide a measure of the project’s success and help to determine if the project is meeting its goals.
Flow Metrics
· How efficient is the organization at delivering value to the customer?
· Flow metrics are focused on the movement of output and productivity.
· This can include metrics such as velocity, burndown charts, the average number of defects per week, rework generated due to defects, and cycle time. The specific metrics used will depend on the context and needs of the project.

Flow Distribution
Flow distribution measures the amount of each type of work in the system over time. This could include the balance of new business Features (or Stories, Capabilities, or Epics) relative to Enabler work, as well as the work to resolve defects and mitigate risks.
How is this measured? One simple comparison is to count the number of each type of work item at any point in time. A more accurate measure might consider the size of each work item. Agile Teams may measure flow distribution per iteration, but PI boundaries are commonly used to calculate this at the ART level and above, as shown in Figure 6.
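The size-weighted comparison described above can be sketched as follows; the work-item types and story-point sizes are hypothetical:

```python
from collections import Counter

def flow_distribution(work_items):
    """Share of total work size per work-item type (e.g. feature,
    enabler, defect, risk) for one iteration or PI."""
    sizes = Counter()
    for item_type, size in work_items:
        sizes[item_type] += size
    total = sum(sizes.values())
    return {item_type: size / total for item_type, size in sizes.items()}

# Hypothetical PI: (type, story points) for each completed work item.
items = [("feature", 8), ("feature", 5), ("enabler", 5), ("defect", 2)]
dist = flow_distribution(items)  # features 65%, enablers 25%, defects 10%
```

Comparing `dist` against a target capacity allocation (say, 20% reserved for enablers) makes the balance between new features and technical-debt work visible per PI.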

Why is this important? To balance both current and future velocity, it is important to be able to track the amount of work of each type that is moving through the system. Too much focus on new business features will leave little capacity for architecture/infrastructure work that addresses various forms of technical debt and enables future value. Alternatively, too much investment in technical debt could leave insufficient capacity for delivering new and current value to the customers. Target capacity allocations for each work type can then be determined to help balance these concerns. Returning to the portfolio example, tracking the distribution of funding across investment horizons provides a means to ensure a balanced portfolio that ensures both near- and long-term health.
Flow Velocity
Flow velocity measures the number of backlog items (stories, features, capabilities, epics) completed in a given timeframe, also known as the system’s throughput (Figure 7).
How is this measured?
As with flow distribution, the simplest measure of velocity is to count the number of work items completed over a time period such as an iteration or PI. Those items can be stories, features, capabilities, or even epics. However, since work items are not all the same size, a more common measure is the total number of completed story points for work items of a type over the timeframe.
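Both measures mentioned above, a plain item count and a story-point total, can be sketched in one helper; the completed items are hypothetical:

```python
def flow_velocity(completed_items, by_points=True):
    """Throughput over a timeframe: total story points of completed
    items, or a plain item count when sizes are not comparable."""
    if by_points:
        return sum(points for _, points in completed_items)
    return len(completed_items)

# Hypothetical items completed in one iteration: (type, story points).
done = [("story", 3), ("story", 5), ("feature", 8)]
points_velocity = flow_velocity(done)                 # 16 points
item_throughput = flow_velocity(done, by_points=False)  # 3 items
```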

Why is this important?
All other things being equal, higher velocity implies a higher output and is a good indicator that process improvements are being applied to identify and remove delays from the system. However, the system’s velocity will not increase forever, and over time stability of the system is important. Significant drops in velocity highlight problems that warrant investigation.
Flow Time
Flow time measures the total time elapsed for all the steps in a workflow and is, therefore, a measure of the efficiency of the entire system.
Flow time is typically measured from ideation to production, but it can also be useful to measure flow time for specific parts of a workflow, such as code commit to deployment, to identify opportunities for improvement.
How is this measured?
Flow time is typically measured by the average length of time it takes to complete a particular type of work item (stories, features, capabilities, epics). A histogram is a useful visualization of flow time (Figure 8) since it helps identify outliers that may need attention and supports the goal of reducing the overall average flow time.
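The average-plus-histogram view can be sketched with a simple bucketing function; the flow times below are hypothetical:

```python
from statistics import mean

def flow_time_stats(flow_times_days, bucket_size=5):
    """Average flow time plus a simple histogram (bucketed in days)
    to surface outliers that may need attention."""
    histogram = {}
    for t in flow_times_days:
        bucket = (t // bucket_size) * bucket_size
        histogram[bucket] = histogram.get(bucket, 0) + 1
    return mean(flow_times_days), histogram

# Hypothetical feature flow times (ideation to production, in days).
avg, hist = flow_time_stats([4, 6, 7, 9, 12, 31])
# avg is 11.5 days; the 30-34 day bucket flags one outlier.
```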

Why is this important?
Flow time ensures that organizations and teams focus on what is essential – delivering value to the business and customer in the shortest possible time. The shorter the flow time, the less time our customers spend waiting for new features and the lower the cost of delay incurred by the organization.
Flow Load
Flow load indicates how many items are currently in the system. Keeping a healthy, limited number of active items (limiting work in process) is critical to enabling a fast flow of items through the system (SAFe Principle #6).
How is it measured?
A Cumulative Flow Diagram (CFD) is one common tool used to effectively visualize flow load over time (Figure 9). The CFD shows the quantity of work in a given state, the rate at which items are accepted into the work queue (arrival curve), and the rate at which they are completed (departure curve). At a given point in time, the flow load is the vertical distance between the curves.
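Reading flow load off a CFD amounts to subtracting the departure curve from the arrival curve at each sample point; a minimal sketch with hypothetical weekly counts:

```python
def flow_load(arrivals_cumulative, departures_cumulative):
    """Flow load over time: the vertical gap between the CFD arrival
    and departure curves, i.e. items in process at each sample point."""
    return [a - d for a, d in zip(arrivals_cumulative, departures_cumulative)]

# Hypothetical weekly cumulative counts of items started vs. completed.
load = flow_load([5, 9, 14, 20], [2, 6, 10, 17])  # [3, 3, 4, 3]
```

A load that keeps growing while the departure curve stays flat is the classic signal that work in process needs to be limited.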

Flow Predictability
Flow predictability measures how well teams, ARTs, and Solution Trains can plan and meet their PI objectives.
How is it measured? Flow Predictability is measured via the ART Predictability Measure, Figure 11.

Why is this important?
Low or erratic predictability makes delivery commitments unrealistic and often highlights underlying problems in technology, planning, or organization performance that need addressing.
Reliable trains should operate in the 80 – 100 percent range; this allows the business and its stakeholders to plan effectively.
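A simplified sketch of this calculation, assuming predictability is computed as the ratio of achieved to planned business value averaged across teams (the per-team figures are hypothetical):

```python
def art_predictability(teams):
    """Average ratio of achieved to planned business value across the
    teams on a train, as a percentage; a reliable train should land
    in roughly the 80-100% range."""
    ratios = [achieved / planned * 100 for planned, achieved in teams]
    return sum(ratios) / len(ratios)

# Hypothetical (planned BV, achieved BV) per team for one PI.
score = art_predictability([(50, 45), (40, 36), (30, 27)])  # 90.0%
```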
Competency Metrics
· How proficient is the organization in the practices that enable business agility?
· Competency metrics are focused on the abilities of the project team.
· This can include assessments of team satisfaction, collaboration, trust levels, and other factors contributing to the project team’s sustainability.
Measuring Competency
Achieving business agility requires a significant degree of expertise across the Seven SAFe Core Competencies. While each competency can deliver value independently, they are also interdependent: true business agility can be present only when the enterprise achieves a meaningful state of mastery of all of them. Measuring the level of organizational competency is accomplished via two separate assessment mechanisms, designed for significantly different audiences and purposes. The SAFe Business Agility Assessment is designed for business and portfolio stakeholders to assess their overall progress toward the ultimate goal of true business agility, as shown in Figure 12.
