The Evolution of Design Debt: The Design Consistency Score

Design Debt is a concept derived from the term technical debt. Technical debt describes code that was developed by many different people over time (or even at the same time) without shared coding standards. Since each developer has their own style, the code for similar functions can differ widely.

So the "debt" in these terms means that you collect a lot of inconsistencies over time that should be cleaned up afterwards, which usually doesn't happen once the team moves on to their next projects. The same can happen with design elements. Think about the time after a website relaunch during which new sections, new landing pages etc. are created, leading to additional styles for buttons, teasers, form fields, tables etc. This can result in inconsistent (and possibly negative) user experiences (UX). In the past few years we started building Design Systems in an attempt to reduce Design Debt, but even with a Design System in place, Design Debt can still emerge.

“Design Debt affects the integrity of the user experience.” (Austin Knight)

Design Debt as a UX metric (KPI)

With the Design Debt method we can measure these inconsistencies as a quantitative UX metric and use it as a Key Performance Indicator (KPI) for the UX of our digital products. For example, we can use this metric to compare our digital products before and after a relaunch or even with other digital products from within the same industry.

Quantitative methods normally require many participants (users of the digital product) to get statistically significant results. The Design Debt method, however, is a heuristic expert evaluation that can be executed by just one or two UI designers, which lets us calculate a quantitative metric in a simple way. We used it in a recent project to supplement the quantitative data from user tests on a small budget.


How to measure Design Debt?

In order to measure the Design Debt of any digital user interface you have to:

  • define the most important elements within your digital product or your whole digital eco-system (e.g. atoms, molecules and maybe organisms according to Atomic Design)
  • count the number of variants for each element and the instances of each variant.

In other words, you will answer these questions:

  • How many variants of a single element exist within your digital product?
  • How many times does each variant show up (on all sub pages)?
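
In practice this can be as simple as a per-element tally; a minimal sketch in Python (the variant names are purely hypothetical):

```python
# Hypothetical tally for one element after browsing the site:
# 3 button variants were found, with the number of pages (instances)
# each variant appears on.
button_tally = {
    "primary button (blue, rounded)": 25,
    "primary button (blue, square)": 10,
    "ghost button (outlined)": 5,
}

variants = len(button_tally)            # -> 3 variants
instances = sum(button_tally.values())  # -> 40 instances
```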

Furthermore, the original Design Debt method states that only one variant of any element should ever be displayed to users. We started calling this the "perfect score".

Then you can use this formula to calculate the Design Debt Score for each element:

Design Debt (of element x) = (variants − 1) × (instances × 2)

As you can see, the perfect score (a single allowed variant) is subtracted from the total number of variants, and the instances are multiplied by two to give them a higher weight. More instances of any variant lead to a worse score (the higher the score, the worse the consistency). This places the focus of the original Design Debt method on the number of instances.

For example:

  • 3 different buttons across 40 pages = design debt score of 160
  • 2 different buttons across 20 pages = design debt score of 40
  • 1 consistent button across 20 pages = perfect design debt score of 0
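
A minimal sketch of the original formula in Python, reproducing the examples above:

```python
def design_debt(variants: int, instances: int) -> int:
    """Original Design Debt score for one element:
    (variants - 1) * (instances * 2)."""
    return (variants - 1) * (instances * 2)

print(design_debt(3, 40))  # 3 button variants across 40 pages -> 160
print(design_debt(2, 20))  # 2 button variants across 20 pages -> 40
print(design_debt(1, 20))  # 1 consistent button -> 0 (perfect score)
```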

After you have calculated the scores for each element, you get your total Design Debt as an absolute number.

We started experimenting with adjustments to the Design Debt method that would deliver the same meaningful results but make it easier to execute:

1. More than one allowed variant for each design element

In some cases, campaign landing pages for example, UI designers need a little more creative freedom. So we wanted to allow more than one variant while still awarding a perfect score. We also wanted to be able to define different perfect scores for different elements.

2. Define a maximum number of allowed variants

In order to make calculating the total score as a percentage easier, we then defined a maximum number of allowed variants per element and called it the "insufficient score".

Now we have three scores for each element:

  • Perfect score
  • Insufficient score
  • Actual score

The first two scores are defined once and, in an optimal state, are valid for all websites on earth. They only change with the size of the website: if there are more sub pages, more variants are allowed (we explain this in detail in the next chapter).

Example for a single element:

  • perfect score = 3 buttons
  • insufficient score = 5 buttons

Different actual scores result in these Design Consistency Scores:

  • 1 button → 100% consistency score
  • 3 buttons → 100% consistency score
  • 4 buttons → 50% consistency score
  • 6 buttons → 0% consistency score
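
The article never prints this second formula explicitly, but all of its examples are consistent with a linear interpolation between the two thresholds, clamped at 100% and 0%. A minimal sketch under that assumption:

```python
def consistency_score(actual: int, perfect: int, insufficient: int) -> float:
    # Linear interpolation between the two thresholds, clamped to [0, 1]:
    # at or below the perfect score -> 100%,
    # at or above the insufficient score -> 0%.
    span = insufficient - perfect
    return min(1.0, max(0.0, (insufficient - actual) / span))

# Perfect score = 3 button variants, insufficient score = 5:
for actual in (1, 3, 4, 6):
    print(f"{actual} buttons -> {consistency_score(actual, 3, 5):.0%}")
# -> 1 buttons 100%, 3 buttons 100%, 4 buttons 50%, 6 buttons 0%
```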

If an element is not used within the digital product we are studying, we don't score it (which lowers all total scores accordingly). To calculate the overall consistency score for a digital product, we add up the numbers for each of the three main scores (only for the elements that exist in that specific digital product) and calculate the overall score from the three totals (don't calculate the average of the per-element percentage scores!).

Example for an overall score:

  • total perfect score = 30
  • total insufficient score = 80

Different total actual scores result in these total Design Consistency Scores:

  • actual score = 9 → 100% consistency score
  • actual score = 37 → 86% consistency score
  • actual score = 99 → 0% consistency score
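
The same interpolation, applied to the summed thresholds rather than to per-element percentages, reproduces these totals (a self-contained sketch under the same linearity assumption):

```python
total_perfect, total_insufficient = 30, 80

def overall_score(total_actual: int) -> float:
    # Same linear interpolation as for single elements,
    # applied to the summed thresholds, clamped to [0, 1].
    span = total_insufficient - total_perfect
    return min(1.0, max(0.0, (total_insufficient - total_actual) / span))

for total_actual in (9, 37, 99):
    print(total_actual, f"-> {overall_score(total_actual):.0%}")
# -> 9 100%, 37 86%, 99 0%
```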

The formula shown as a graph:

[Graph: the Design Consistency Score (100% down to 0%) plotted against the total actual score, flattening at 0% once the total insufficient score is reached]

In this example, there is a lot of creative freedom: up to 30 variants are allowed across all elements. With each additional variant the Design Consistency Score decreases in small steps, until the total insufficient score (80) is reached and the score stays at 0%.

3. Integrating the total number of pages

Dropping the instance counts saved us a lot of time, but we still wanted to factor in the size of the websites we were looking at. Our reasoning: larger websites with a variety of different purposes and sections (e.g. product pages, job listings, blog posts, or company information) need a larger variety of elements than, say, a microsite with only 10 sub pages.

Basically, we defined that our perfect and insufficient scores for each element are valid for a website of 100 pages. Then we used an SEO crawling tool (e.g. Screaming Frog) to count the actual number of pages within a digital product.

Finally, we calculate the difference between the total number of pages and the 100-page baseline. We use this difference to adjust all perfect and insufficient scores: the more pages, the higher the perfect and insufficient scores for each element (and in total), and vice versa for websites with fewer than 100 pages. Let's call this the "adjustment score". We weight it by only 10%, so its impact on the final score is rather small.
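
The exact adjustment formula is not spelled out; one plausible reading, scaling each threshold by 10% of the relative deviation from the 100-page baseline, could look like this (an assumption, not the authors' exact calculation):

```python
def adjusted_threshold(base: float, pages: int, baseline: int = 100,
                       weight: float = 0.10) -> float:
    """Scale a perfect/insufficient score by the page count.

    ASSUMPTION: the article only says the deviation from 100 pages
    is weighted by 10%; this linear scaling is one plausible reading,
    not necessarily the authors' exact formula.
    """
    deviation = (pages - baseline) / baseline  # e.g. 250 pages -> +1.5
    return base * (1 + weight * deviation)

print(adjusted_threshold(30, 250))  # total perfect score 30 -> 34.5
print(adjusted_threshold(80, 50))   # total insufficient score 80 -> 76.0
```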

After we applied all these changes to the method, we decided to call it the "Design Consistency Score". The main advantages:

  • You don’t need to count each instance of all elements and variants on hundreds or thousands of sub pages (less effort)
  • Because of this, we can base our score on a larger number of design elements (we are using around 30)
  • The total number of a website's sub pages is considered as a factor (the more pages, the more variety and creative freedom are allowed)

Our workflow to calculate the Design Consistency Score:

Now that we know which numbers we need, the expert evaluation follows this workflow:

  • A designer browses through the website and collects all variants of all elements in a Sketch file (on separate artboards).
  • We count the total numbers of variants for each element (actual score).
  • We crawl the website to count the total number of sub pages.
  • We paste all numbers into our Excel template with the formulas above (including the pre-defined perfect and insufficient scores for each element).
  • We have our score!

Additionally, the designer is asked to write a short review of the design of the tested digital product to explain the final score, like a management summary highlighting the most important wins and challenges of the design consistency for this digital product.
