9 Software Architecture Metrics for Sniffing Out Issues
Software architecture doesn’t fall apart overnight. It slowly rots, module by module, method by method, until one day, your team is tangled in complexity, deadlines slip, and adding a new feature feels like defusing a bomb.
But it doesn’t have to be that way. Bad architecture leaves a trail! And if you know what to measure, you can sniff out software architecture issues long before they derail your project.
In this post, I’ll share nine simple software architecture metrics I use that act like a nose for detecting issues before they break your project. They’ve helped me pinpoint hidden coupling, unnecessary complexity, and code that’s quietly setting teams up for failure.
Here’s a quick cheat sheet of the nine software architecture metrics I reach for when evaluating a system. Don’t worry if some of these are new to you or seem obvious. I’ll break each one down in the sections that follow, including what to watch for, how to interpret the numbers, and where they often go wrong in real systems:
| Metric | What it Measures | High Value Implies… |
| --- | --- | --- |
| LCOM | Lack of cohesion | Low cohesion, class may need splitting |
| DIT | Inheritance depth | More complexity, more reuse |
| IFANIN | Inherited base classes | Greater reliance on inheritance |
| CBO | Coupling | More dependencies, tighter coupling |
| NOC | Derived classes | Core abstraction or possibly fragile base |
| RFC | Possible behaviors | Higher complexity/responsibility |
| NIM | Instance methods | Rich behavior; more to test |
| NIV | Instance variables | More state, potential complexity |
| WMC | Number of methods | Larger class, more complexity |
If your system feels more challenging to work on than it should, this list may help explain why. Let’s explore each metric in more detail.
Software Architecture Metric #1 – Lack of Cohesion of Methods (LCOM)
What? Yep. That metric is a mouthful.
LCOM is a software metric used to measure the cohesion of a class’s methods and attributes. In simpler terms, it tells you how well the methods of a class relate to each other and the class’s data.
A high LCOM value indicates that the class performs many unrelated tasks, which typically suggests poor cohesion. In other words, the class is doing too much and is a perfect candidate to be split up.
We usually measure LCOM in percentage (LCOM%). For example, if a class has 0% LCOM, the class (or if you’re using C, the module / file) has perfect cohesion. Every method uses every field.
On the other hand, if a class has 100% LCOM, that means it has no cohesion at all! We have a class with methods and data that don’t relate to each other very well. It’s a mish-mash.
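To make this concrete, here’s a minimal sketch of one common way to compute LCOM% (the Henderson-Sellers LCOM* variant, one of several definitions in use). The field and method counts are fed in by hand here purely for illustration; a real tool would extract them from your source code.

```python
# A rough sketch of Henderson-Sellers LCOM*, expressed as a percentage.
# 0% = every method touches every field (perfect cohesion);
# 100% = no field is shared between methods (no cohesion).

def lcom_percent(accesses, num_methods, num_fields):
    """accesses: dict mapping field name -> number of methods that use it."""
    if num_methods <= 1 or num_fields == 0:
        return 0.0  # metric is not meaningful for tiny classes; treat as cohesive
    avg = sum(accesses.values()) / num_fields   # average methods-per-field
    return 100.0 * (num_methods - avg) / (num_methods - 1)

# Every one of 3 methods uses both fields -> perfect cohesion.
print(lcom_percent({"x": 3, "y": 3}, num_methods=3, num_fields=2))   # 0.0

# Three methods, each touching its own private field -> a mish-mash.
print(lcom_percent({"x": 1, "y": 1, "log": 1}, num_methods=3, num_fields=3))  # 100.0
```

Notice how the second case scores 100% even though the class “works”: the methods simply don’t share any data, which is exactly the signal LCOM is designed to surface.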
Interpreting LCOM Metric Results
Remember, cohesion refers to how focused the responsibilities of a class are. High cohesion, low LCOM, means the code is easier to understand, maintain and reuse. (Note: You can still use this metric with C to understand module / file cohesion.)
Below are example ranges that I typically use when I’m evaluating LCOM and how they relate to the cohesion of the class (module/file):
| LCOM (%) | Interpretation |
| --- | --- |
| 0–10% | 🟢 Excellent cohesion — Methods are highly related, ideal design. |
| 10–30% | 🟢 Good cohesion — Mostly related methods, generally acceptable. |
| 30–50% | 🟡 Moderate cohesion — Potential design smells; keep an eye on it. |
| 50–75% | 🔶 Low cohesion — Methods are too independent; class might be doing too much. |
| 75–100% | 🔴 Poor cohesion — Class is likely doing unrelated things; strong candidate for refactoring. |
LCOM Users BEWARE!
Here’s the rub: LCOM by itself isn’t a helpful software architecture metric. If you have a small class with a few methods, you may get a misleading LCOM%. Don’t overreact if the LCOM% is high for a small data or utility class.
Keep in mind too that there is more than one way to calculate LCOM! Each method handles cohesion slightly differently. In theory, LCOM% is supposed to be a normalized version. Make sure you understand what method you are using and what it means for your source code!
So, if LCOM has these caveats, is it even useful? Absolutely! LCOM is most useful when paired with other metrics we’ll look at, such as CBO, WMC, and RFC. No single metric gives you the whole picture; it’s only when you combine them that one emerges.
If you insist on using LCOM alone, here are my recommendations on whether you can trust it or not:
| Scenario | Trust LCOM? | Notes |
| --- | --- | --- |
| Small utility class | 🚫 Not reliable | False positives likely |
| Domain entity or service | ✅ Very useful | Expect tight cohesion |
| Abstract base class | ⚠️ Use caution | High LCOM may be OK |
| Mixed responsibility class | ✅ Red flag | Use to guide refactoring |
| Stateless helper | 🚫 Ignore | Expected to have low cohesion |
| Large controller or manager | ✅ Trust and verify | May indicate need for decomposition |
Benefits of Low LCOM
When you keep LCOM% low, you’ll find that it helps you to:
- Increase class / module clarity.
- Reduce maintenance costs and mental fatigue.
- Encourage separation of concerns.
Let’s now look at the next software architecture metric of interest, DIT.
Software Architecture Metric #2 – Depth of Inheritance Tree (DIT)
The Depth of Inheritance Tree (DIT) software architecture metric won’t be much use to you if you are still writing your embedded software in C. It won’t help you if you have adopted Rust (Rust doesn’t support inheritance). However, if you’re using C++ or Python, then it can be helpful in identifying “inheritance hell”!
DIT measures the depth of a class in an inheritance hierarchy. The higher the number, the more ancestors there are between the class and the root.
Over the last 20 years, the software industry has come to recognize that inheritance isn’t the best mechanism to write good software. In fact, the deeper the inheritance tree goes, the more complex the software becomes, and the harder it is to reuse the code.
Interpreting DIT Results
In any modern software system, we would expect an architecture that has a low value for DIT. The values that I use to evaluate DIT can be found in the table below:
| DIT Value | Interpretation |
| --- | --- |
| 0 | ✅ No Inheritance – The class does not inherit from any other class (usually a root/base class). |
| 1 | ✅ Inherits directly from one base class — the most common case in well-structured OOP. |
| 2–3 | ✅ Good depth — moderate inheritance; promotes reuse without excessive complexity. |
| 4–5 | ⚠️ Deeper hierarchy — may indicate complex inheritance chains that are harder to understand and maintain. |
| >5 | 🚨 Very deep — likely over-engineered or suffering from inheritance misuse (fragile base class problem). Consider using composition instead. |
As you can see, we ultimately want to keep the inheritance tree shallow. Favoring composition over inheritance keeps the tree shallow and wide rather than narrow and deep.
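If you work in Python, you can eyeball DIT directly from the method resolution order. The sensor classes below are made up for illustration; the point is how quickly depth accumulates under inheritance, and how composition avoids it entirely.

```python
# Inspecting DIT in Python: count ancestors on the MRO, excluding the
# class itself and the implicit `object` root. Class names are illustrative.

def dit(cls):
    return len(cls.__mro__) - 2  # drop the class itself and `object`

class Sensor: ...                             # DIT 0 (a root class)
class TempSensor(Sensor): ...                 # DIT 1
class CalibratedTempSensor(TempSensor): ...   # DIT 2

print(dit(Sensor))                # 0
print(dit(CalibratedTempSensor))  # 2

# Composition keeps the tree shallow: wrap a Sensor instead of extending it.
class FilteredSensor:
    def __init__(self, inner):
        self.inner = inner        # has-a, not is-a

print(dit(FilteredSensor))        # 0
```

`FilteredSensor` gets the same reuse as a subclass would, but a reader never has to walk an ancestor chain to understand it.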
Benefits to using DIT
There are many advantages to keeping DIT low. For example, low DIT decreases the cognitive load on the developers because they don’t need to understand all the ancestors in the inheritance tree.
It’s a bit of a catch-22 because higher DIT numbers can indicate reuse, but at the same time it’s reuse that leads to additional complexity. The trick, like with anything in life, is balance!
If you decide to use DIT as a metric, be warned that framework code can often have a misleading DIT number. The reason for this is that as you add a large number of abstractions to your code through the use of base classes, DIT can become inflated. So don’t blindly leverage it.
As I mentioned before, make sure you pair any individual software architecture metric with others to get the full picture. If you want to learn how to combine metrics and properly architect your embedded software, consider the following on-demand course:

The Master Class teaches you not only how to successfully architect a modern embedded system, but also how to apply these metrics to tease out actionable tasks that improve your software, decreasing the cost and time to scale and maintain it.
Software Architecture Metric #3 – Inherited Base Classes (IFANIN)
If DIT tells us how deep the inheritance tree goes, IFANIN tells us how wide the roots spread.
IFANIN measures how many base classes a class directly inherits from. In C++ terms, we’re talking about multiple inheritance—those situations where a class pulls in behavior from two or more parents. It might seem like a neat way to share functionality across classes, but in reality, high IFANIN values often signal trouble.
The more base classes a single class inherits from, the harder it becomes to reason about its behavior. You now have to juggle multiple abstraction layers, manage overlapping responsibilities, and deal with subtle edge cases when two base classes define the same method name or interface. Sound familiar? You might be facing the dreaded diamond problem.
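Here’s a deliberately contrived Python diamond to show what a high IFANIN looks like in practice. The class names are hypothetical; the ambiguity they create is not.

```python
# IFANIN in Python terms: the number of direct base classes a class lists.
# Port inherits from two parents that both override id() -> the classic diamond.

class Device:
    def id(self): return "device"

class Readable(Device):
    def id(self): return "readable"

class Writable(Device):
    def id(self): return "writable"

class Port(Readable, Writable):   # IFANIN = 2
    pass

print(len(Port.__bases__))        # 2 direct bases
# Python resolves the clash via the MRO (left-to-right), but a reader still
# has to know that rule to predict which id() wins:
print(Port().id())                # "readable"
```

The code runs fine, which is exactly the problem: nothing fails loudly, yet the behavior of `Port.id()` is invisible at the call site unless you know the linearization rules.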
Interpreting IFANIN Results
While a low IFANIN (0 or 1) is typically safe and expected, anything above that should trigger a closer look.
Here’s how I usually interpret the values:
| IFANIN Value | Interpretation |
| --- | --- |
| 0 | No inheritance — often found in simple data or utility classes. |
| 1 | Inherits from a single base class — standard OOP usage. ✅ |
| 2 | Inherits from two base classes — possibly okay, but needs scrutiny. ⚠️ |
| 3+ | High reliance on multiple inheritance — increased complexity and coupling. 🚨 |
High IFANIN values can also make testing and mocking more difficult, especially in systems where tight memory constraints already limit flexibility. If you’ve ever tried to mock a class with a deep and wide inheritance structure in an embedded environment, you know it’s no walk in the park.
Benefits to Using IFANIN
So, when does IFANIN become truly useful?
- When auditing a class that seems overly complex.
- When you’re tracking down bugs with unclear origins.
- When you’re analyzing code for refactoring or re-architecture.
Look for ways to replace inheritance with composition or interface-based design. In embedded systems, composition tends to be more predictable and easier to unit test.
Ultimately, a high IFANIN count is like a code smell with a megaphone. You don’t have to rip out every instance of multiple inheritance, but you do need to understand why it’s there—and whether it’s doing more harm than good.
Software Architecture Metric #4 – Coupling Between Objects (CBO)
If there’s one metric that always catches my attention, it’s CBO.
CBO measures how many other classes a given class is connected to. The higher the number, the more tightly coupled that class is to the rest of the system. And the more tightly coupled your system is, the harder it becomes to maintain, test, or change anything without creating a ripple effect of unintended consequences.
Interpreting CBO Results
A high CBO value is like a giant red flag waving above your architecture, saying, “Change me at your own risk.”
Here’s how I usually interpret CBO values:
| CBO Value | Interpretation |
| --- | --- |
| 0 | ✅ Fully isolated — not coupled to anything (might be a utility class or a dead one). |
| 1–5 | ✅ Loosely coupled — ideal for maintainability and testability. |
| 6–10 | ⚠️ Moderate coupling — might be acceptable for core components. |
| 11+ | 🔥 High coupling — hard to change, hard to reuse, high risk of regression. |
High CBO often shows up in God classes—those oversized modules doing way too much. You’ll also see it in classes that act as coordinators or managers but aren’t carefully architected.
Why does it matter?
Here are a few reasons why you should keep CBO low:
- Testing becomes a nightmare. You need to mock or set up a dozen dependencies just to run one unit test.
- Changes become risky: Touching one class requires auditing ten others just to be safe.
- Reusability tanks. The class is so dependent on others that you can’t drop it into a new context without dragging half the codebase with it.
How to Bring Down Your CBO Metrics
To bring CBO down, try these approaches:
- Apply the Law of Demeter (don’t talk to strangers).
- Use dependency injection to decouple concrete implementations.
- Follow the interface segregation principle (for more info, see the SOLID workshop that is available on-demand)—split large interfaces into smaller, focused ones.
- And most importantly: stop shoving everything into one place just because it’s convenient!
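The dependency-injection point is worth a tiny sketch. The `Reporter`/`ListSink` names below are hypothetical; the idea is that the class stops naming its concrete collaborators, which drops its CBO and makes it trivially testable.

```python
# Minimal dependency injection: Reporter accepts anything with a write()
# method instead of constructing a concrete logger/UART/file class itself.

class Reporter:
    def __init__(self, sink):
        self.sink = sink          # injected; no concrete class named here

    def report(self, msg):
        self.sink.write(f"[report] {msg}")

class ListSink:                   # a test double: no real I/O required
    def __init__(self):
        self.lines = []
    def write(self, line):
        self.lines.append(line)

sink = ListSink()
Reporter(sink).report("boot ok")
print(sink.lines)                 # ['[report] boot ok']
```

In production you’d inject a UART or file-backed sink instead; `Reporter` itself never changes, which is the whole point of keeping its coupling low.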
Think of CBO like architectural cholesterol—low levels keep your system healthy. High levels might not cause an immediate crash, but over time, they’ll slow your project down and leave you vulnerable when you least expect it.
Software Architecture Metric #5 – Number of Children (NOC)
NOC measures how many classes inherit from a given base class. It’s a count of derived classes—a simple number, but one that can tell you a lot about how your software architecture is structured.
In theory, a high NOC means a base class is doing its job: capturing common behavior and being reused. That’s great … until it isn’t.
The problem is that a high NOC can also signal that your base class has become fragile. One small change to the base can cascade through every child, forcing updates, breaking builds, or causing side effects in places you didn’t expect. Sound familiar?
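In Python you can get a quick NOC reading without any tooling, since every class tracks its live direct subclasses. The message classes here are made up for illustration.

```python
# Counting NOC in Python: __subclasses__() lists the direct children
# currently defined. Class names are illustrative only.

class Message: ...
class TextMessage(Message): ...
class BinaryMessage(Message): ...
class AckMessage(Message): ...

print(len(Message.__subclasses__()))  # 3 direct children -> NOC = 3
```

Keep in mind `__subclasses__()` only sees classes that have actually been imported and defined, so in a large codebase a static-analysis tool will give you a more complete count.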
Interpreting NOC Results
Here’s how I usually interpret NOC values:
| NOC Value | Interpretation |
| --- | --- |
| 0 | ✅ No children — possibly a leaf class or unused base. |
| 1 | ✅ Limited use — simple, focused abstraction. |
| 2–4 | 🟡 Moderate use — may be a well-factored base class. |
| 5+ | 🔥 Heavy reuse — potential fragility and overreach. Proceed with caution. |
NOC doesn’t tell you if the base class is good—just that it’s being used. You still need to look at cohesion (LCOM) and responsibilities (RFC) to know if that base class is clean or bloated.
One trick I often use when a base class has many children is to scan through its public API. If it’s overloaded with utility methods, optional behavior, or strange default implementations, it’s probably trying to do too much.
In embedded systems, deep or wide hierarchies can be especially risky. They make your code harder to test, harder to simulate, and harder to port across different hardware configurations. If you’re building software that needs to scale across products or platforms, it’s often safer to favor composition and clear interfaces over inheritance and shared base classes.
NOC tells you how much reuse you’re getting—but not whether it’s healthy. If you see a high number, don’t panic—just dig deeper.
Software Architecture Metric #6 – Response For a Class (RFC)
RFC stands for Response For a Class, but I like to think of it as “how much this class might do when you call it.”
Technically, RFC measures the number of methods that can be executed in response to a message sent to an object. This includes the methods defined in the class plus any methods it calls. The higher the RFC, the more behavior is packed into the class—and the more you have to keep in your head to understand how it works.
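Here’s a crude RFC approximation using Python’s `ast` module: count the methods a class defines plus the distinct method names it calls. This sketch ignores inheritance and dynamic dispatch, so treat it as a rough signal rather than a precise figure; the `Uploader` class is hypothetical.

```python
# Rough RFC = |methods defined in the class  U  method names it invokes|.
import ast

SRC = """
class Uploader:
    def upload(self, data):
        chunks = self.split(data)
        for c in chunks:
            self.send(c)
    def split(self, data):
        return [data]
    def send(self, chunk):
        pass
"""

cls = ast.parse(SRC).body[0]
defined = {n.name for n in cls.body if isinstance(n, ast.FunctionDef)}
called = {node.func.attr for node in ast.walk(cls)
          if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)}
rfc = len(defined | called)       # union: own methods plus the responses they trigger
print(sorted(defined | called))   # ['send', 'split', 'upload']
print(rfc)                        # 3
```

Here the union collapses to 3 because every call is internal; the number climbs quickly once a class starts invoking methods on many collaborators, which is exactly the growth RFC is meant to flag.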
Interpreting RFC Results
A high RFC is often a warning sign. It tells you the class might be doing too much or is handling too many responsibilities. This increases the chance that changing one method breaks something unrelated, or that a new developer won’t spot the side effects until it’s too late.
Here’s how I typically interpret RFC:
| RFC Value | Interpretation |
| --- | --- |
| 0–10 | ✅ Very low complexity — likely a simple helper or data class. |
| 11–30 | 🟡 Moderate complexity — okay for most application logic. |
| 31–50 | 🔶 High complexity — harder to reason about and test. Review needed. |
| 51+ | 🔥 Very high — smells like a God class or architectural hotspot. |
RFC is especially helpful when combined with other metrics like WMC (Weighted Methods per Class) or CBO (Coupling). A class with high RFC and high coupling is a refactoring target. It’s probably taking on too much responsibility and interacting with too many other pieces of the system.
What is RFC Best Used For?
Here’s the thing: a high RFC might not always be bad—some controller or integration classes naturally coordinate a lot of behavior. But if every class in your system starts looking like that, you’ve lost cohesion and crossed into Big Ball of Mud territory.
Use RFC to:
- Spot classes that are too tightly packed with logic.
- Identify modules that may need decomposition.
- Prioritize candidates for simplification or delegation.
A high RFC is like a blinking light on your dashboard. It won’t crash the car, but ignore it long enough and you’ll regret it.
Software Architecture Metric #7 – Number of Instance Methods (NIM)
NIM tells you how many instance methods a class defines. That’s it. No magic. No complex formula. Just a straight count of how much behavior lives in a class (not including static methods or inherited methods).
Why does it matter?
Because instance methods are where the real behavior lives. They often touch internal state, rely on other parts of the system, and represent what the class does. A high NIM means the class is doing a lot. Maybe too much.
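Because NIM is just a count, it’s easy to check by hand. The snippet below sketches one way to do it in Python with the standard library, skipping statics, classmethods, and dunders; the `Motor` class is illustrative only.

```python
# Counting NIM: functions defined directly on the class, excluding
# staticmethods, classmethods, and dunder methods.
import inspect

class Motor:
    def start(self): ...
    def stop(self): ...
    @staticmethod
    def spec(): ...                 # not an instance method
    @classmethod
    def from_config(cls, cfg): ...  # not an instance method

nim = sum(1 for name, obj in vars(Motor).items()
          if inspect.isfunction(obj) and not name.startswith("__"))
print(nim)                          # 2
```

Note that `vars(Motor)` only sees what the class itself defines, which matches the usual NIM definition of excluding inherited methods.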
Interpreting NIM Results
Here’s how I interpret NIM when reviewing a codebase:
| NIM Value | Interpretation |
| --- | --- |
| 0–5 | ✅ Lean and focused — likely a value object, data holder, or utility. |
| 6–15 | 🟡 Reasonable — moderate functionality, may be worth a second look. |
| 16–30 | 🔶 High — class may be taking on too many responsibilities. |
| 31+ | 🔥 Very high — this class is probably a God class or needs refactoring. |
High NIM isn’t automatically bad, but it should make you pause.
How do you know if NIM is too High?
Sometimes a high value for NIM might be okay. So how do you determine whether it’s acceptable or something that needs to change in your software architecture? Ask yourself:
- Are these methods all related to a single responsibility?
- Could this class be split into smaller components or roles?
- Are we trying to “hide” complexity in one place to make other parts of the system look clean?
NIM also correlates strongly with test surface area. More instance methods usually means more code to test, more branching paths, and a higher risk of regression when changes are made. If your class has 25+ instance methods, your test suite had better be rock solid—or you’re walking a tightrope.
Pro tip: If you ever find yourself scrolling through a class wondering “where the actual logic is,” the NIM count is probably a clue that you need to refactor.
Track NIM to keep behavior manageable. When the number starts to climb, it’s time to zoom out and ask if you’ve crossed the boundary from “single responsibility” to “dumping ground.”
Software Architecture Metric #8 – Number of Instance Variables (NIV)
NIV counts how many instance variables (a.k.a. fields or members) a class owns. On the surface, it seems harmless—what’s wrong with a few extra variables?
The problem is that instance variables define the internal state of your class. And the more state you have, the more complex and fragile the class becomes.
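One quick way to eyeball NIV in Python is to instantiate the class and count what `__init__` assigned. The `PacketParser` class below is made up for illustration.

```python
# A quick NIV check: count the attributes a fresh instance carries.

class PacketParser:
    def __init__(self):
        self.buffer = b""       # raw bytes awaiting parsing
        self.state = "idle"     # parser state machine position
        self.crc = 0            # running checksum

niv = len(vars(PacketParser()))
print(niv)                      # 3
```

This only catches attributes set during construction; classes that sprinkle `self.x = ...` assignments throughout their methods (itself a smell) need static analysis to count fully.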
Interpreting NIV Results
Every variable adds surface area: for bugs, for unintended interactions, for extra setup in unit tests. A high NIV can lead to low cohesion (see LCOM), tight coupling, and behavior that depends on a delicate configuration of internal values.
Here’s how I typically break it down:
| NIV Value | Interpretation |
| --- | --- |
| 0–3 | ✅ Low state — simple, focused, easy to reason about. |
| 4–7 | 🟡 Moderate state — okay if variables are tightly related. |
| 8–12 | 🔶 High state — class likely mixing responsibilities or doing too much. |
| 13+ | 🔥 Excessive state — high risk of bugs, hard to test, needs refactor. |
Too many instance variables often show up in “manager” or “controller” classes—things that were meant to coordinate logic but gradually turned into a catch-all for everything that didn’t have a home.
Also, keep an eye on naming. If you see variables like flag1, flag2, or tempData, that’s not just a high NIV—it’s a confused high NIV. The class probably needs to be broken into smaller pieces with clearer responsibilities.
What does this mean in practice?

In embedded systems, high NIV can also lead to bloated memory usage—especially dangerous in resource-constrained environments. It hurts unit testing, too: the more state you have, the more setup you need, and the more fragile your tests become.
Track NIV to keep your classes lean and understandable. If you can’t describe the purpose of every variable in a sentence, it’s probably time to split the class.
Software Architecture Metric #9 – Weighted Methods per Class (WMC)
WMC counts the number of methods in a class, often with a twist: it can weight each method based on its complexity. Think of it as measuring not just how many things a class can do, but how hard they are to understand and maintain.
Some tools just count methods (simple WMC), while others apply weights based on cyclomatic complexity or similar measures. Either way, WMC gives you a feel for the overall heaviness of a class.
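Here’s a sketch of the weighted flavor using Python’s `ast` module: each method contributes 1 plus its branch points, a rough stand-in for cyclomatic complexity. The `Validator` class is hypothetical, and real analyzers are far more thorough than this.

```python
# Rough weighted WMC: sum over methods of (1 + number of branching nodes).
import ast

SRC = """
class Validator:
    def check(self, v):
        if v is None:
            return False
        if v < 0:
            return False
        return True
    def name(self):
        return "validator"
"""

BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def wmc(class_node):
    total = 0
    for node in class_node.body:
        if isinstance(node, ast.FunctionDef):
            # weight = 1 (the method itself) + its decision points
            total += 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(node))
    return total

cls = ast.parse(SRC).body[0]
print(wmc(cls))   # check() weighs 3 (two ifs), name() weighs 1 -> 4
```

Swap the weight function for a full cyclomatic-complexity computation and you get the classic Chidamber–Kemerer WMC; the simple method count is just the special case where every weight is 1.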
Interpreting WMC Results
Why does WMC matter? Classes with high WMC are harder to test, harder to reason about, and far more likely to contain bugs. The more behaviors packed into one place, the more likely they are to interact in surprising ways.
Here’s how I evaluate WMC when doing architecture reviews:
| WMC Value | Interpretation |
| --- | --- |
| 0–10 | ✅ Lightweight — easy to follow and maintain. |
| 11–20 | 🟡 Moderate — might be okay depending on the domain. |
| 21–40 | 🔶 Heavy — complex logic, probably doing too much. |
| 41+ | 🔥 Very heavy — likely a God class or maintenance hazard. |
High WMC almost always correlates with high cognitive load. If you open a class and it feels overwhelming—even before you’ve read a single line of code—WMC is probably why.
How to Reduce WMC
To reduce WMC:
- Apply the Single Responsibility Principle. One class, one reason to change.
- Break large methods into smaller ones with clear intent.
- Extract related methods into helper classes or components.
- Use meaningful abstractions to keep logic focused and reusable.
Keep in mind: WMC is not about punishing large classes—it’s about spotting hotspots that could trip up your team six months down the road. If you want sustainable software, keep your WMC in check.
WMC helps you measure the weight of your classes. And if something feels too heavy, it probably is.
Your Next Steps
Software architecture metrics alone won’t fix a broken architecture—but they will shine a light on where it’s starting to crack and where the issues might be. A modern software architecture today is more than just structure. It’s:
- Software Attributes
- Philosophies and Principles
- Architecture Decision Records
- Architectural Style and Structure
Even then, this only covers part of it!
Whether you are just starting a new project or are working through software architecture issues with your legacy firmware, there are a few things you can do to sniff out any issues.
The nine metrics we covered—LCOM, DIT, IFANIN, CBO, NOC, RFC, NIM, NIV, and WMC—aren’t just academic curiosities. They’re signals. When interpreted in context, they reveal architectural hotspots, design drift, and structural decisions that may be quietly sabotaging your system’s maintainability.
So what should you do next?
- Pick a subsystem and run these metrics across its classes. Don’t try to boil the ocean—start where you feel the most friction.
- Look for patterns, not outliers. One high metric value doesn’t mean much. But clusters of problematic scores in key components? That’s where your design needs attention.
- Use metrics to inform—not dictate—refactoring. Let the data spark your curiosity, then dig into the code to confirm the story.
And finally:
- Use these metrics as part of your regular architectural review process.
- Don’t wait for bugs or slowdowns to force your hand.
- Modernize early, and build systems that are easier to scale, test, and evolve.
Your architecture is talking. These metrics help you hear what it’s saying—before the whole system starts screaming.
Struggling to keep your development skills up to date or facing outdated processes that slow down your team, raise costs, and impact product quality?
Here are 4 ways I can help you:
- Embedded Software Academy: Enhance your skills, streamline your processes, and elevate your architecture. Join my academy for on-demand, hands-on workshops and cutting-edge development resources designed to transform your career and keep you ahead of the curve.
- Consulting Services: Get personalized, expert guidance to streamline your development processes, boost efficiency, and achieve your project goals faster. Partner with us to unlock your team's full potential and drive innovation, ensuring your project's success.
- Team Training and Development: Empower your team with the latest best practices in embedded software. Our expert-led training sessions will equip your team with the skills and knowledge to excel, innovate, and drive your projects to success.
- Customized Design Solutions: Get design and development assistance to enhance efficiency, ensure robust testing, and streamline your development pipeline, driving your project's success.
Take action today to upgrade your skills, optimize your team, and achieve success.