Author: Piotr Nowinski

Agile Assessment

Many organisations and teams that adopt Agile eventually come to the point where knowing how agile they really are becomes an important question. It’s especially important when the first stages of an Agile transformation prove successful, teams finally work at a sustainable pace, and it becomes more difficult to identify obvious areas for further improvement.

There are many agile assessments available (please refer to a great blog post by Ben Linders), but none of them fit my needs 😉 and therefore I ended up creating my own evaluation. It’s grounded in an assessment presented by Dean Leffingwell in Scaling Software Agility: Best Practices for Large Enterprises, but I made several adjustments based on my experience from working with various Agile teams and projects.

The assessment is a great input for the team to improve. Moreover, if you have several Agile teams working together, the assessment can be a vital source of information for Scrum Masters and management, enabling them to work on the organisation-level impediments that prevent the teams from working even more effectively.

The assessment consists of 66 statements/questions grouped into 7 areas: product ownership, agile process, team, quality, engineering practices, fun & learning, and integration. Every team member assesses each statement on a scale from 1 (the worst) to 5 (the best). As a result, you get an average assessment of each statement for the whole team and a resulting evaluation for each area. The latter is used to plot a radar diagram that graphically presents the results.
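
The arithmetic behind the spreadsheet can be sketched in a few lines of Python; the areas, statements and ratings below are made up for illustration:

```python
# Each team member rates every statement 1-5; statements are averaged
# across the team, then averaged again per area to produce the values
# plotted on the radar diagram.
from statistics import mean

# Hypothetical sample: {area: {statement: [ratings from each team member]}}
ratings = {
    "quality": {
        "Defects are fixed as soon as they are found": [4, 5, 3],
        "The team does not cut corners under pressure": [2, 3, 2],
    },
    "team": {
        "The team works at a sustainable pace": [5, 4, 4],
    },
}

statement_avg = {
    area: {stmt: mean(scores) for stmt, scores in stmts.items()}
    for area, stmts in ratings.items()
}
area_avg = {area: mean(avgs.values()) for area, avgs in statement_avg.items()}
# area_avg now holds one value per area - the input for the radar diagram
```
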

Please be aware that there is no single, correct interpretation of the assessment’s results. It depends heavily on your organisation, the maturity of the team and the people you’re working with. Furthermore, it’s not the final values that matter the most, but their relative comparison and how they change over time (progress/improvement).

Feel free to download the assessment and adjust it to your needs. Please don’t hesitate to let me know if you think that some changes should be implemented.

Download the Agile Assessment spreadsheet

Salary formula

This post was inspired by Management 3.0 and a very nice read about Buffer’s policy.

Salaries are classified

Employees’ compensation is a very sensitive topic. In most organisations it’s also the most secret and the least openly discussed one.

My experience says that people don’t necessarily need to know the salaries of their colleagues, but they definitely want to know the rules that determine their wages. Regrettably, only a few organisations have an established set of clear rules and processes for calculating employees’ salaries. In most companies it’s delegated to HR departments that come up with a complex structure of pay grades and pay ranges, but the rules governing climbing the career/salary ladder are usually still very fuzzy.

The truth is that the main reason why so many companies are reluctant to make employees’ salaries transparent is that it would immediately reveal how unfair the current compensation system is and that many people are basically screwed. Counter to this, research confirms that long-term pay secrecy only hurts a company’s culture and results in negative morale, decreased performance and higher turnover.

Transparency is motivating

Studies show that money is rarely a good motivator (my experience shows that it works only short-term or when somebody earns way below the market). On the other hand, money acts as a demotivator when an employee believes she isn’t treated fairly or feels underpaid compared to her peers. And a lack of transparency makes it very difficult to influence people’s feelings and, as a result, build a company-wide perception of fairness.

A traditional approach to compensation requires employees to individually negotiate their salaries and pay raises with their managers, which makes this process prone to politics, short-term budgeting issues or current market demand peaks. It also favours smooth-tongued and socially savvy people, which doesn’t sound like a fair solution that rewards all employees for their efforts and contribution to the success of the organisation.

Salary formula

A solution to the problems discussed above is an introduction of a salary formula – an objective, incorruptible and reliable system for calculating employees’ salaries. A system in which everyone can calculate her salary (and a salary of her colleagues) and everyone knows what to do to earn more. A salary formula that everyone understands is a great step towards a compensation plan that pays people fairly for the value they create in the organisation.

It has to be pointed out that there is no simple recipe for how exactly your salary formula should look, and it’s something you need to experiment with. Above all, you need to make sure that it’s transparent and easy to adapt in a number of further iterations. And of course, before introducing a brand new salary formula, it makes sense to compare the resulting financial projections with your budgets and other internal constraints.

A good salary formula should take into account several variables, including:

  • role (job category – please aim at having as few as possible)
  • job level (maturity/seniority at a given role)
  • loyalty (employment time)
  • total experience
  • relevant education
  • location (very important for geographically-spread companies; costs of living and job markets are different in different locations)

An example of a salary formula can be found in this Buffer’s blog post.
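
To make the idea concrete, here is a deliberately simplified sketch in Python. Every base figure and multiplier below is an assumption invented for illustration – not a recommendation and not Buffer’s actual numbers:

```python
# A made-up illustration of a transparent salary formula. Anyone in the
# company could recompute any salary from these public inputs; relevant
# education could be added as one more multiplier.

ROLE_BASE = {"engineer": 60_000, "designer": 55_000}      # role (job category)
LEVEL_MULT = {"junior": 0.8, "mid": 1.0, "senior": 1.25}  # job level
LOCATION_MULT = {"low-cost": 0.9, "average": 1.0, "high-cost": 1.3}

def salary(role, level, location, years_at_company, years_total):
    base = ROLE_BASE[role] * LEVEL_MULT[level] * LOCATION_MULT[location]
    loyalty = 1 + 0.02 * min(years_at_company, 10)   # +2% per year, capped
    experience = 1 + 0.01 * min(years_total, 20)     # +1% per year, capped
    return round(base * loyalty * experience)

salary("engineer", "senior", "average", years_at_company=3, years_total=8)
```

The point of the sketch is not the particular numbers but the shape: a handful of public inputs, a handful of public multipliers, and no room for negotiation skill.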

Don’t include performance metrics

It may look very tempting to include some performance metrics in the salary formula, but it should be avoided. It will almost certainly lead to dysfunctional behaviours and employees trying to game the system instead of doing actual work. A salary formula is about a steady, predictable monthly income, and compensation for employees’ contribution to the organisation should be resolved in a different way.

My thoughts

Personally, I like the idea of a salary formula. I’ve never worked in a company in which salaries weren’t secret, but I think that having some transparent rules around how employees’ compensation is calculated would definitely help.

However, I also understand that applying a salary formula is much easier in a young, small company than in an organisation that hires 50k+ people around the world and was successful at what it does before I was even born.

Nevertheless, even if you’re not allowed to implement a salary formula for the whole company, in my opinion, it makes sense to give it a try at the team or department level. At the beginning, you can use it as an advisory tool for checking salaries when calculating annual pay raises or hiring new team members. And if it works well, you can take another step and share it with your team(s) – I’m sure they will appreciate it.


Time-boxed sizing

Creating a long-term product roadmap or release plan for a large project is a challenge. Features that are going to be implemented many months ahead probably aren’t well defined and, therefore, it’s very difficult to size and estimate them reasonably. Moreover, if you’re laying out an early version of the plan, it doesn’t make any sense to invest significant time in scoping and sizing those features because they are going to change anyway.

To cut this Gordian knot, you can approach sizing by constraining (time-boxing) rather than estimating.

The idea is to look at business value rather than the details of the requirements. The team, instead of sizing a feature in points, agrees with the Product Owner on the amount of time (e.g. 50 days) the Product Owner is willing to invest in the feature, bearing in mind its relative business value. This approach enables the team to reduce the project’s uncertainty quickly and complete the initial planning session at a reasonable time and cost.

Of course, the ballpark provided as a result of time-boxed sizing is not set in stone and has to be revised on a regular basis (as should each and every estimate). The more the team knows about the feature, the easier it is to split it into smaller chunks and finally estimate it in points.

It’s worth mentioning that time-boxed sizing doesn’t guarantee that the ballpark provided is correct and absolutely feasible. On the other hand, it’s not to say that time-boxed sizing is about guesstimating – the value should be reasonable. But for early planning exercises, it gives a common understanding of the size of the feature and brings stakeholders’ expectations into line. Adaptability and change lie at the heart of agility, so it’s no surprise that after several grooming sessions the ballpark size may change.

Time-boxed sizing was suggested by Jim Highsmith in Agile Project Management for large and lengthy projects. I think it may also be useful in smaller projects, when the product management team is thinking about a new feature but doesn’t know how exactly it should work. Establishing a time-boxed size for such a feature sets clear boundaries and constraints for future discussions and makes the team plan within the ballpark. It’s not to say that the size of the feature can’t eventually exceed the initial value – it’s the Product Owner’s call after all. But it forces the team to make decisions relatively quickly and allows them to focus on agreed ballparks.



Relative Value Points

A product backlog should be DEEP – detailed appropriately, emergent, estimated and prioritised. Everyone who learns about Scrum should be familiar with these key attributes of a good product backlog.

But how is the product backlog ordered? Well, most Scrum practitioners would probably say that it’s the Product Owner’s decision, but business value seems like a fair attribute to use for product backlog prioritisation.

While that is a good generalisation, the reality is, unfortunately, a bit more complex. The reason is that a good product backlog consists not only of client-facing stories; there are also other items like design spikes, technical debt reduction stories, maintenance items, etc. A smart Product Owner knows that she can’t concentrate on business stories only, because she’s responsible for the product’s long-term adaptability and reliability.

In addition, things like an upcoming marketing event may also drive prioritisation and, therefore, short-term value and features of a lower overall value can take precedence over long-term gains.

Furthermore, risk mitigation is also a very important factor that influences how a product backlog is ordered. Actual risk reduction is a key objective that the team should concentrate on at the beginning of a complex project. For example, it makes sense to prioritise some technical stories first to check if a technology we want to use is right for the project before investing too much money into an endeavour that may eventually fail.

As you can see, ordering a product backlog is an act of balancing many different factors, and although business value should be the main driver, other aspects also influence the decisions made by a Product Owner. It’s worth remembering that there is a difference between value and priority.

OK, but how does a Product Owner know the business value of all the stories? Of course, it’s a Product Owner’s job to understand what set of features the product needs to delight its users, but how does a Product Owner make day-to-day decisions when a product backlog consists of hundreds or thousands of items? Relative Value Points (see Agile Project Management by Jim Highsmith) and ROI are the answer.

Value points are somewhat similar to story points, but they are calculated top-down instead of bottom-up. The idea is that value points (e.g. 1, 2, 3, 5, 8 and 13) are assigned to features first, and then a percentage of the total feature points is calculated for each feature (e.g. three features with 3, 5 and 8 value points assigned get 19%, 31% and 50% respectively). Next, value points are allocated to stories in the same way, but the percentages assigned to a set of stories are capped by the percentage of their parent feature (e.g. if the first feature consists of two stories with 2 and 8 value points assigned, the percentages for these stories are 4% and 15% respectively).
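
The worked example above can be reproduced in a few lines of Python:

```python
# Top-down allocation of relative value points: percentages per feature,
# then per story, with each story capped by its parent feature's share.

def shares(points, cap=100.0):
    """Convert a list of value points into percentages of `cap`."""
    total = sum(points)
    return [p * cap / total for p in points]

feature_points = [3, 5, 8]
feature_pct = shares(feature_points)            # [18.75, 31.25, 50.0]

# Stories of the first feature (2 and 8 value points) split its 18.75%:
story_pct = shares([2, 8], cap=feature_pct[0])  # [3.75, 15.0]
```

Rounded, these are the 19%/31%/50% and 4%/15% figures from the example.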

As a result of using this simple algorithm, all the stories in a product backlog have a percentage value assigned. The value of each story is relative to other stories and, therefore, allows for comparing them with regard to the business value they deliver to clients. Relative value points can be further used to allocate a revenue stream or net present value (NPV) to features/stories, but most organisations don’t go through this step (many organisations don’t do a good cost-benefit analysis at all).

Value points have another advantage – they increase transparency and understanding in the team. They can also be used to calculate a relative ROI (Return On Investment) for each story, which is another tool that can help the team make better decisions regarding the priority of backlog items. In this scenario, relative ROI can be defined as the ratio between the business value of a story (in value points) and its complexity (in story points). Please bear in mind that relative ROI can’t be converted to money, but it allows for comparing user stories – the chart below can help a Product Owner see which stories have the best ROI:

relative value points
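
A minimal sketch of that relative ROI calculation – the story names and numbers are made up for illustration:

```python
# Relative ROI per story: business value (here, the percentage computed
# top-down from value points) divided by complexity (story points).

stories = {          # name: (value %, story points)
    "checkout":  (15.0, 8),
    "wish list": (4.0, 2),
    "search":    (31.25, 5),
}

roi = {name: value / cost for name, (value, cost) in stories.items()}
best_first = sorted(roi, key=roi.get, reverse=True)  # highest ROI first
```
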


Performance appraisals

Traditional performance appraisals

Most organisations have a formal process for evaluating the performance of their employees. Usually, it takes the form of an annual performance review where an employee’s work performance and behaviours are assessed, rated and documented by direct managers.

The ultimate goal of a performance review system is to reward and retain capable employees by keeping them happy. Managers believe that this process:

  • Provides useful information for promotions and compensation decisions.
  • Motivates employees and enhances their involvement.
  • Improves overall performance of the teams.
  • Boosts communication and provides valuable feedback.

Unfortunately, while HR departments are happy with growing documentation records, most employees hate performance reviews. A majority of people find this top-down process useless, counterproductive and, most of all, destructive to teamwork.

They don’t see a performance review as an honest discussion based on trust, but rather as a place where a superior communicates a pre-determined story and judgement and, in the best scenario, comes up with an already-fixed salary raise proposal. Regrettably, it has little to do with performance, but is largely a result of budget limitations and politics.

360-degree feedback

One of the most popular solutions you can use to improve your performance reviews is the concept of 360-degree feedback. It’s based on the assumption that multiple points of view are required to correctly assess somebody’s performance. It means that peers are included in the process and, therefore, everyone gets a more comprehensive picture of the employee’s contribution to the organisation.


Changing from very top-down reviews to a 360-degree feedback model is highly recommended, but it’s not enough to make you succeed. Other parts of the process also have to be adjusted to make it effective.

Set the right atmosphere

You need to develop a feedback-rich culture and build trust among your employees. They need to believe that a performance review is about learning and improvement they can honestly benefit from.

Get rid of fixed scales, make it simple

Get rid of checklists and forms created by HR departments that force you to evaluate employees against a long list of predefined categories and a set of behaviours that competent people are assumed to show.

Please bear in mind that every person is different. Employees come with their own characteristics, including individual strengths and imperfections, and, therefore, applying the same fixed scale to different people makes little sense. Last but not least, it’s really unreasonable to measure employees with different roles and functions the same way, which, regrettably, takes place in far too many organisations. It’s far better to concentrate on individual objectives and perform a review based on a short but descriptive assessment.

Finally, make sure that the process of collecting feedback from peers is quick and easy. Please bear in mind that everyone is busy doing their work, and providing feedback to peers shouldn’t be seen as an additional burden.

Do it frequently

Conducting performance reviews annually is a waste of time. Personally, I can hardly remember what I was doing a month ago, thus expecting anybody to recall actions from 12 months ago makes little sense.

To make the whole process relevant, you need to start assessing performance and giving feedback regularly. You’ll soon find that doing it on a regular basis (every quarter is a good start) is easier and far more productive.

Encourage self-assessment

Peer reviews are really beneficial, but it also makes sense to match them with what employees think about themselves. People tend to have a pretty good idea of their own strengths and weaknesses – give them an open and positive opportunity to share it with you. Self-assessment can be a great start for a productive dialogue about goals and expectations.

Collect some quantitative data

Measurable goals and objectives are required, and you should collect and share all the metrics before the review. However, please make sure that the numbers don’t lie at the heart of the process – they should only be used to set the background for an honest discussion.

Qualitative data are more important

It’s the core of the review: a bidirectional, honest discussion about what was great and what the areas for potential coaching are. You should concentrate on accomplishments and strengths rather than failures.

Please be aware that if somebody isn’t performing as expected, it’s not necessarily their fault – the organisation should take responsibility for supporting them or helping them find a better fit if need be.

Foster teamwork and collaboration

The behaviours you should be encouraging are teamwork and collaboration, not individual achievements. Please take it into account while setting objectives and commenting on how your peers contributed to the organisation.

It’s not to say that exceptional individual achievements shouldn’t be appreciated, but it’s teamwork that you should value the most. Please note that in the absence of an understanding of how individual contributions compare to team achievements, self-preservation reigns supreme. On the other hand, the ability to link individual performance with team success increases job satisfaction and employees’ engagement.

Do not rank employees

Stack-ranking employees based on the results of their performance reviews may sound tempting, but it should be avoided. In large organisations, even if you get a good score (e.g. the second-highest rank possible), it may turn out that there are hundreds of people doing better than you. And it neither feels good nor increases your motivation. Above all, ranking employees destroys teamwork by making everyone concentrate on their individual goals and achievements.

Separate from compensation and career plans

In many organisations, the results of performance reviews are explicitly used to decide about bonuses and salary raises. At first sight, it may look reasonable, but the reality is that you should separate the discussions about performance from discussions about compensation and career plans.

The performance review should be about learning and the employee’s contribution to the team and organisation rather than a process narrowed down to getting a nice salary raise. It’s not to say that meeting goals or getting very positive feedback should not be taken into account, but a direct link between the performance review and compensation ends up with the employee concentrating only on the latter.

Instead of doing good work, many employees start focusing on getting a good review. They spend more time on their “career” than on the actual work at hand. Instead of energising people and promoting teamwork, such a process clearly leads to bogus activities, cynicism and employees spending time on cover-your-back actions. I’m sure it’s not what you’re aiming for.

Come up with actionable items

Identification of measurable goals and actionable commitments is critical to successful performance reviews. Open discussion lies at the heart of a good review, but in the end, some well-defined actions should be agreed on.

And the words “measurable” and “actionable” are significant. The accomplishment of measurable goals can be verified at the next review, and actionable commitments are well understood and have clear steps to completion and acceptance criteria.

Please bear in mind that actions may not relate only to the employee being reviewed – it can also be something that a manager has to take care of in order to support the employee, help them with their goals or help them improve their work.

Doesn’t it sound like a good retrospective?

When you look at the suggestions discussed above, it’s easy to spot that performance reviews are somewhat similar to… good retrospectives. I think it’s worth noticing that Agile principles (like adaptation, striving for continuous improvement, teamwork, transparency, etc.) relate not only to software development but to the whole organisation and its processes. It shows that you can’t have high-performing software development teams without transforming other parts of the business.

Next steps

Building empowered teams that take responsibility for their results requires transparency and trust. So the next logical step towards this goal is to make performance reviews… open and public.

It may be difficult at the company level, but you can try doing it within your team or department. The idea is to gather the team in one place and perform a peer assessment of each and every team member together. Despite appearances, such an approach reduces the time required to perform the review and has several advantages:

  • everyone is evaluated at the same time and in equal measure,
  • it’s less formal and, therefore, more open,
  • you can see if a majority of the team shares the same concerns,
  • everyone can ask questions and clarify problems,
  • everyone forces themselves to be fair, honest and more understanding.


Performance appraisals have a terrible track record. But the problem doesn’t lie in performance reviews themselves, but rather in the way they are implemented.

The process of how performance reviews are conducted has to change. It has to stop destroying intrinsic motivation and focus on teamwork instead. Most people seek better performance and strive for continuous improvement. They can do that by getting meaningful feedback from their peers and managers; therefore frequent, honest and transparent reviews are desired.

It makes sense to involve employees in designing and establishing your new performance review process. Please bear in mind that a system designed in collaboration serves everyone better and engages employees. What it boils down to is that employees want to know how they are being evaluated and want to know that they’re making conscious choices.



Meetings can be destructive

I’ve just read an interesting article about meetings, Maker’s Schedule, Manager’s Schedule, which reminded me of the 37signals essay Meetings Are Toxic.

While I understand that meetings can’t be eliminated I also agree that many of them require urgent changes. The main issues I find destructive in far too many meetings are:

  1. No clear purpose.
  2. No agenda (= no chance to prepare or decline the meeting).
  3. No owner.
  4. Not sticking to a schedule.
  5. No clear actions after the meeting (except for a need for calling another meeting).
  6. Too many attendees (“the more the better” mindset).

IMHO fixing the issues above would make most meetings far less destructive and stop them feeling like a waste of time.

Breaking the day into small, incoherent chunks can be solved with some simple policies, like not having any meetings right before/after lunch, having them at the beginning of the day, etc.

Of course, it won’t eliminate all the problems, but it’s easier to stand a couple of destructive meetings per month when all the others are either cancelled or scheduled and conducted properly.

No, I’m not against meetings. I’m against useless meetings.


Setting goals: OKRs

I’ve recently come across a great video by Rick Klau about setting goals at Google.

OKR stands for Objectives and Key Results. OKRs are used at Google for setting goals at the company, team and individual level. A great advantage of this system is that individual goals are aligned with the company’s goals and, therefore, everyone is rowing in the same direction.

Key points about OKRs at Google:

  • Objectives should be ambitious, not straightforward.
  • Key Results are measurable; they should be easy to grade.
  • OKRs are public (starting from CEO).
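
If I recall Rick Klau’s talk correctly, each key result is graded on a 0.0–1.0 scale at the end of the quarter, and the objective’s grade is roughly the average of its key results; consistently scoring near 1.0 suggests the objectives weren’t ambitious enough. A minimal sketch, with the objective, data and threshold being my own made-up assumptions:

```python
# Quarter-end OKR grading sketch: score each key result 0.0-1.0,
# average them into the objective's grade.
from statistics import mean

objective = "Grow the developer community"   # hypothetical objective
key_results = {
    "Publish 6 technical blog posts": 4 / 6,        # shipped 4 of 6
    "Reach 1,000 newsletter subscribers": 0.8,
    "Run 2 meetups": 1.0,
}

grade = mean(key_results.values())
ambitious_enough = grade < 0.9   # threshold is an assumption
```
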

Last but not least, OKRs are independent of employees’ performance evaluations. That is, promotions, salary discussions, bonuses, etc. should be separated from OKRs. They are used to make people contribute to the company’s goals rather than to evaluate how employees are performing.

More in Rick’s post: How Google sets goals: OKRs

Good Retrospectives


The retrospective is one of the inspect-and-adapt opportunities provided by the Scrum framework. It’s a time-boxed event for the team to analyse their way of working and to identify and plan potential improvements.

The sprint retrospective is one of the most important, but probably also the least appreciated, practices in the Scrum framework. Therefore, retrospectives have to be carefully taken care of, because inspection and adaptation lie at the heart of agility. They are an important mechanism that allows a team to continuously evolve and improve throughout the life of a project.


The sprint retrospective is a time to think about the process and practices, hence the full Scrum team has to attend it (including the Product Owner and the Scrum Master, who facilitates the discussion).

Assuming trust and safety are in place, everyone is encouraged to be completely honest and reveal all the difficult issues they have in mind. Team members are expected to present their opinions in an open, genuine, yet constructive atmosphere. It’s the facilitator’s responsibility to ensure that the discussion stays positive and professional, focusing on the improvement of the team as a whole. It’s critical for retrospectives’ long-term success that finger-pointing, assigning blame and personal criticism are absolutely avoided.

Quantitative Assessment

Before you jump into a hot discussion, some reporting of important metrics should be done. It’s required to set the context for the discussion and objectively present how the team has performed.

There is no one-size-fits-all set of metrics you should be using – they depend heavily on the business context, stakeholders, project, etc. In any case, it’s the team who should come up with the metrics they want to use to monitor and improve their work.

Last but not least, please bear in mind that whatever you finally produce shouldn’t be static and set in stone forever. People and teams get used to their own measurements; therefore, it’s good to try something else after a while. Replacing your metrics not only helps you cover other perspectives and uncover different unknowns, but it also keeps stagnation from creeping in.

Apart from velocity, risk assessment and product quality assessment, the metrics you can use may include:

  • # stories
  • # stories accepted
  • % accepted
  • # stories not accepted
  • # stories added
  • defect count
  • # new test cases
  • # new test cases automated
  • % test cases automated
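
Several of these are simple ratios of the raw counts; a quick sketch with illustrative numbers:

```python
# Deriving the percentage metrics above from raw sprint counts.
# All numbers are made up for illustration.

sprint = {
    "stories": 12,
    "stories_accepted": 10,
    "stories_added": 2,          # added mid-sprint
    "defects": 3,
    "new_test_cases": 40,
    "new_test_cases_automated": 34,
}

pct_accepted = 100 * sprint["stories_accepted"] / sprint["stories"]
stories_not_accepted = sprint["stories"] - sprint["stories_accepted"]
pct_automated = 100 * sprint["new_test_cases_automated"] / sprint["new_test_cases"]
```
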

Qualitative Assessment

The retrospective, though important, is not a heavy-weight ceremony. It’s an open discussion that should address three main questions:

  • What went well during the sprint?
  • What went wrong?
  • What can we do differently to improve our work?

All the improvement ideas identified in the brainstorming session should be noted down, but not all of them should be immediately converted into real actions for the upcoming sprint. It may sound disappointing to some team members, but they need to remember that delivering value, not improvement itself, is their primary goal. However, since iterations are short, over time all the issues will finally be addressed (or replaced with new findings during the next retrospective).

Actionable commitments

Identification of actionable commitments is the key to successful retrospectives. Open discussion is desirable, but in the end, some well-defined actions have to be agreed on. And the word “actionable” is significant. Actionable commitments are well understood by the team and have clear steps to completion and acceptance criteria, just like good stories.

Let’s have some fun

The way a retrospective meeting should be held is not pre-defined. There are numerous techniques for conducting retrospectives and none of them is objectively better than the others.

To be honest, trying different formats of the retrospective meeting may keep things fresh and interesting. After all, the team’s engagement is critical for a retrospective to succeed.

Personally, I remember a retrospective where the “Draw a problem” approach was suggested by the Scrum Master and it was a great success. The project was depicted as a ship, where good things were like wind blowing the sails while bad things were shown as an anchor that prevents the ship (the team) from sailing faster.

Potential problems

Retrospectives are critical for the team to improve, but they need to be run correctly to make it happen. The team has to see real benefits of spending time in retrospective meetings.

Some typical smells that a retrospective is not effective may include:

  • team members not attending retrospectives
  • unengaged attendees
  • no resulting actionable commitments
  • finger-pointing and assigning blame
  • lack of trust
  • complaint sessions and no desire to improve

The most demotivating and frustrating problem, however, is the team failing to follow through on improvement actions identified during previous retrospectives. It makes the meeting a waste of time with no real impact on day-to-day work. The team has to understand that the commitments they make matter and have to be met.

There is no silver bullet for the issues mentioned above, but it’s the Scrum Master’s task to realise that retrospectives are ineffective and to work towards addressing such issues. And if the team is new to Scrum, inviting an external facilitator to conduct the meeting is highly recommended.

Edit: A great source of retrospective fun: How to Make Your Retrospectives Fun


Managing Product Adaptability


Creating reliable and adaptable software is not straightforward, even if the team embraces Agile practices. In most cases, however, it’s not a lack of technical capability or poor performance that prevents the team from achieving this goal. My experience says that teams are aware that constant refactoring, automated tests, design spikes, etc. are required to keep the system reliable and adaptable – it’s senior stakeholders who usually fail to understand it.

The market is very demanding nowadays. Industry after industry demands continuous innovation. The pressure for frequent releases and responsiveness to constant change grows. And Agile seems a good solution for software teams to deliver valuable, high-quality products quickly. However, they need to be quick but should not cross the line and make mistakes. They “need to be quick but should not hurry”.

And unfortunately, senior stakeholders far too often add to this pressure and sacrifice long-term gains for short-term wins. Unreasonable deadlines are set and all the features are requested to be delivered. And in most cases the team manages to do that – velocity accelerates and the team delivers what they were asked for. But it’s a false acceleration, because corners are cut. And adaptability is the first thing that suffers from such an approach. Technical debt grows quickly and accumulates.


Technical debt

Technical debt is invisible, especially at first sight and in the short term. Software is being delivered and features get implemented, so why bother about it at all, right? What really suffers, however, is the system’s long-term reliability and, most of all, adaptability, but that usually doesn’t come to light quickly. It’s a real problem that in the long term results in dropping velocity, increased cost and time to delivery, a growing number of defects and, above all, decreasing customer satisfaction. The company is no longer ready and able to deliver valuable software and adapt to tomorrow’s client needs.


It’s especially frustrating from the team’s perspective because – if you ask them – chances are that they will directly mention the parts of the code that need urgent refactoring and that contribute to technical debt the most.

Product Quality Assessment

That being said, I suggest that one of the measures you can use to monitor whether the system’s adaptability isn’t deteriorating [too fast] is a Product Quality Assessment performed by the team:


This measure depicts the team’s sense of the project’s “feel” (“smell”, as it’s called in XP). It can be supported by a chart that plots the growth of the test code compared to the executable code over iterations (both should grow proportionally).
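
That supporting chart boils down to tracking the test-to-executable code ratio per iteration; a rough sketch with hypothetical line counts:

```python
# Test code vs executable code growth per iteration. If both grow
# proportionally, the ratio stays roughly flat; a falling ratio is an
# early warning that test code isn't keeping up. Counts are hypothetical.

executable_loc = [10_000, 12_000, 15_000, 19_000]  # lines per iteration
test_loc = [4_000, 4_800, 5_500, 5_600]

ratios = [t / e for t, e in zip(test_loc, executable_loc)]
falling = all(a >= b for a, b in zip(ratios, ratios[1:]))
```
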

Why, in my opinion, is this measurement so important when there are dozens of other KPIs available? It’s not to say that it’s the most important KPI you should use, but it’s a canary in a coal mine that gives you an advance warning of the system’s adaptability potentially going down before other measures notice it. It gives you a chance to have a closer look into the problem before velocity drops, the number of defects grows and the cost of getting rid of technical debt becomes [too] high.

Managing technical debt

Please be reminded that technical debt, like financial debt, has to be managed. It’s important to realise that there is no debt-free product, but you should be aiming at keeping technical debt low so that it doesn’t affect future product development. There are several possible ways of paying technical debt back, but the most important thing to remember is that it should be done regularly and incrementally. You need to avoid servicing technical debt with large balloon payments – they are far too costly, frustrating and time-consuming. By using some simple measures you can make technical debt visible – both at the business level and the technical level. And hopefully convince everyone that technical quality is critical to achieving long-term goals and shouldn’t be short-changed for current needs. Continuous value generation should be viewed from a whole product life cycle perspective after all.

To be fair, it has to be noted that technical debt is not bad by definition – it just has to be acknowledged and managed properly. There are situations when technical debt doesn’t need to be paid back at all, e.g. a product nearing end of life, a throwaway prototype or a product built for a short life.

Please bear also in mind that technical debt can be intentional, i.e. when a fixed deadline has to be met. However, rather than being hidden it should be well-thought-out decision. The decision that not only allows for technical debt, but most of all acknowledges the consequences of introducing it and involves a concrete plan for getting it paid back in future. Otherwise it would be living a lie and asking for trouble.