SHADE – A Value-Oriented Strategy to Handle Software Asset Degradation


Introduction and Background

Background: Change is unavoidable, and frequent, in a software project, and the ability to respond to change is therefore an essential quality of a software organization. The fast pace of the software development market has, however, resulted in an environment where changes need to be made under aggressive deadlines with short lead times. This is an environment in which the Swedish software industry excels, being competitive in terms of speed [1]. In this rapid context, however, design and operational decisions need to be taken quickly, based on just enough, or barely enough, information, which often comes at the price of degrading the value of the product, its assets (such as the source code, the tests, or documentation), its environment, or the development process itself. Hence, although rapid decisions help solve short-term problems, they often have detrimental long-term effects on the quality of the software being developed and on the development organization’s ability to respond to change.

For this project we have therefore coined the term Asset Degradation (AD), which we define as the loss of value that a software asset suffers due to intentional or unintentional sub-optimal decisions caused by technical or non-technical development of the asset, or associated assets, during all stages of the product’s lifecycle. The Technical Debt (TD) metaphor is usually used to explain the dichotomy between sub-optimal design decisions and their long-run quality or financial effects on a software project. For this project, however, the TD concept is too limiting, as it covers neither the chronological perspective of degradation nor the propagation effects of sub-optimal design from one asset to other technical, as well as non-technical (e.g., process), assets. As such, TD is only a sub-category of AD, the more generic concept that we address in this work.

Regardless, TD has been attracting attention from both researchers and practitioners who have, for nine years, been sharing their findings in the International Workshop on Managing Technical Debt, which will next year become a full conference co-located with ICSE in Gothenburg. Another example venue focused on this topic is the technical debt track at the SEAA conference. During this nine-year period, the visibility of both Swedish industry and Swedish researchers has been noticeable. TD as a concept represents the costs and negative impacts of sub-optimal design decisions and their long-term consequences [2]. According to Gartner [3], technical debt in the software industry was estimated at $500 billion in 2010 and was expected to grow to $1 trillion by 2015, and, although architecturally complex problems account for only 8% of defects, they absorb 52% of the effort spent repairing defects [4]. However, the financial consequences are only one dimension of the problem: asset degradation has negative effects on the organization’s ability to produce value and impacts the four main perspectives of the software value map discussed in [5], i.e., the financial, customer, internal business and innovation perspectives of value.

Related Work and Research Gaps

Gap 1 – Limited scope of TD research: The main deficiency of TD as a concept is its limited scope, which focuses on the source code and architectural perspectives [6]–[9], i.e., analyzing the source code of products [10], [11] with tools such as SonarQube [12] or DebtFlag [13], or identifying and understanding the sources of architectural technical debt and its evolution over time [14], [15] (although the evolutionary aspect is still one of the main research gaps in TD research [16], [17], which, in general, has disregarded historical data [18]). In a software development organization, however, all assets degrade: code and test cases, but also documentation, requirements, manuals, the design/architecture and the product’s APIs, to mention a few. These assets are generally interconnected, which means that degradation can propagate between them (e.g., code degradation propagating to testing artefacts [19]), another aspect of the problem not covered by TD research.
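As one concrete illustration of the tool-based TD measurement mentioned above, the following sketch pulls a few standard degradation indicators for a single project from SonarQube’s web API (sqale_index is the tool’s estimated remediation effort in minutes). The server URL, project key and token are hypothetical placeholders, and the snippet is meant only to show the kind of raw signal such tools expose, not the project’s own tooling.

```python
import requests

# Hypothetical SonarQube installation and project key, for illustration only.
SONAR_URL = "https://sonarqube.example.com"
PROJECT_KEY = "my-product"
TOKEN = "<user-token>"  # in practice read from an environment variable

METRICS = ["sqale_index", "code_smells", "duplicated_lines_density"]

resp = requests.get(
    f"{SONAR_URL}/api/measures/component",
    params={"component": PROJECT_KEY, "metricKeys": ",".join(METRICS)},
    auth=(TOKEN, ""),  # SonarQube accepts a user token as the basic-auth user
    timeout=30,
)
resp.raise_for_status()

for measure in resp.json()["component"]["measures"]:
    # e.g. sqale_index -> estimated remediation effort in minutes
    print(measure["metric"], measure["value"])
```

Such per-snapshot values only become useful for the purposes of this project when collected repeatedly over time and related to other assets than the source code.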

Even though TD has been studied in various areas of software engineering, there is limited research that takes a holistic perspective on the concept to identify definitive root causes, connecting development artifacts with the consequences of those root causes for the overall value-creation capabilities of an organization [18]. There is also limited understanding of how TD leads to degradation of software assets, as well as of its impact on the value-creation capabilities of the development process or organization. Consequently, software organizations have too little information to decide which asset degradations to address for the set of values, from the software value map, that they are striving for. Therefore, a more holistic view is required, connecting the technical and management perspectives [18], rooted in the more generic concepts of Asset Degradation and Asset Management (AM), where AM covers the identification, handling and mitigation of asset degradation rather than its symptoms shown as TD. Such fundamental research is vital for the continued health of the Swedish software industry: it preserves its competitive advantage by providing awareness of, and methods to manage, the asset degradation that inhibits speed, allowing organizations to maximize value creation.

In practice, asset degradation is unavoidable by nature and will grow as assets grow in size, number and complexity; it is sometimes introduced due to the trade-off between software implementation qualities and the need for speed, resulting in negative impacts on the productivity of the teams working on improving the end-user experience [20]. However, this does not mean that organizations cannot plan actions to keep it under control, either by deciding to plan and budget some re-design (sometimes referred to as refactoring) to mitigate it, or by trying to choose the design alternatives that have the least negative impact on the asset value [21].

Gap 2 – Limited (proactive) decision support during refactoring: Refactoring [22] has been applied as a solution to mitigate asset degradation at the source code and architectural levels. However, these refactoring activities are not always planned and carried out explicitly, but are instead embedded in the development of new features or change requests [23]. In addition, refactoring sometimes fails to achieve its target, introducing unintended consequences in the form of defects [24] or even degrading the architectural design further [25]. Other important aspects that have not been studied in detail are the propagation, chain reactions and ripple effects of asset degradation; only a few studies, e.g., [26], address this issue, and only from a code perspective. The primary efforts so far towards understanding source code degradation have analysed the presence of anti-patterns and code smells, e.g., [10], [11], which are manifestations of source code degradation. Many companies have strategies in place to handle asset degradation, but most of the time the solutions are applied ad hoc and rely on subjective expert opinion. The analysis, measurement and monitoring of asset degradation can guide critical management decisions [27], but deciding whether, when and which degradation should be in focus should be governed by the added value [21].

Minimization or removal of asset degradation is, however, not the primary objective of the project; rather, we are interested in enabling companies to actively measure, monitor and predict AD and, when needed, provide guidelines for its mitigation, e.g., through refactorings. An organization might decide to allow AD to grow, e.g., to increase release speed during a certain period. This is, however, not the same as being unable to predict the impact that this operational decision has on AD evolution: with such predictions, organizations can take informed decisions and plan the degradation mitigation when needed [21].
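To make the notion of predicting AD evolution tangible, the deliberately simple sketch below fits a linear trend to a short, invented history of a degradation indicator (e.g., remediation effort reported by a static-analysis tool) and extrapolates it to a planning horizon. The project’s actual prediction models may be considerably more elaborate; this only illustrates the kind of question an organization could answer before deciding to let AD grow for a while.

```python
# Invented example data: (week number, measured degradation indicator).
history = [
    (1, 120.0), (2, 126.5), (3, 131.0), (4, 140.2), (5, 147.8),
]

# Ordinary least-squares fit of a straight line y = intercept + slope * x.
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
        sum((x - mean_x) ** 2 for x, _ in history)
intercept = mean_y - slope * mean_x

horizon = 12  # week at which a release decision must be taken
predicted = intercept + slope * horizon
print(f"Projected indicator at week {horizon}: {predicted:.1f} "
      f"(growing roughly {slope:.1f} per week)")
```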

Overall, the main goal should not only be to provide decision support to project managers, architects and DevOps engineers when analysing asset degradation and handling its mitigation, but also to help developers identify potential degradation while the assets are being created.

Scientific Problem: Organizations are used to relying on accurate models to plan the budget and revenue for their software projects. However, these models hardly ever consider the value of the assets that participate in the development process, their degradation over time, or the eventual propagation and ripple effects to other artifacts. Some types of asset degradation are unavoidable as assets grow in number, size and complexity, but their evolution needs to be kept under control by means of Asset Management frameworks that mitigate negative consequences for the project schedule, the product’s feasibility and sometimes even the viability of the whole organization.

Scientific contribution: The main contribution (MC), and thus the outcome of this project, is an Asset Management framework, supported by tools, that allows software development organizations to measure and monitor asset value, as well as to predict, handle and mitigate its degradation. This will allow organizations to increase their awareness of the potential risks, ripple effects and economic impact associated with design and operational decisions, focusing on their organizational value map, and helping them to improve their competitiveness and value-creation capabilities. During the development of the framework we will consider the end-to-end development process through mitigation or removal of root causes of degradation, thereby raising value in terms of quality, improving customer satisfaction and preventing monetary or even fatal implications for the users. We also identify the following intermediate contributions:

  • Metrics to measure, and a method to monitor and predict, asset value and asset degradation (Gap 1)
  • A model that describes the relation between degradation and a subset of the value map (Gap 1)
  • An approach, supported by tools, to prioritise which degradation to focus on (Gap 1; see the sketch after this list)
  • Degradation mitigation activities and strategies, providing actionable advice to practitioners (Gap 2)
  • Real-time tools to monitor degradation (Gap 2)
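As a purely illustrative sketch of the prioritisation contribution (not the project’s actual method), one could rank degradations by their estimated impact on the four value-map perspectives, weighted by what the organization currently values, and normalised by the estimated mitigation cost. All names, weights and numbers below are invented.

```python
# Organization-specific weights over the value-map perspectives (assumed).
VALUE_WEIGHTS = {"financial": 0.4, "customer": 0.3,
                 "internal_business": 0.2, "innovation": 0.1}

degradations = [  # invented example data; cost in person-days, impact on a 0-10 scale
    {"name": "outdated API documentation", "cost": 5,
     "impact": {"financial": 2, "customer": 7, "internal_business": 4, "innovation": 1}},
    {"name": "cyclic dependencies in module X", "cost": 20,
     "impact": {"financial": 8, "customer": 3, "internal_business": 9, "innovation": 6}},
]

def score(d):
    # Weighted value impact per unit of mitigation effort.
    weighted_impact = sum(VALUE_WEIGHTS[p] * d["impact"][p] for p in VALUE_WEIGHTS)
    return weighted_impact / d["cost"]

for d in sorted(degradations, key=score, reverse=True):
    print(f"{d['name']}: {score(d):.2f}")
```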

Scientific method: The AM framework will be built in close collaboration with the two industry partners (Ericsson and Spotify), where researchers and practitioners will work in a team; such collaboration is a key success factor for the practical relevance of research results [28]. Different data sources (triangulation) will be used during the research, which is mainly driven by case studies in the organisations. To understand the effects of degradation (C2), we also must study the phenomenon over time, for instance to detect how degradation of a requirement propagates to the source code and tests, with potentially detrimental effects on the value-creation ability of the organization. This will allow us to keep our model (C1) continuously updated. The reason is that asset degradation evolves over time until the risks outweigh the cost of mitigation, forcing a trade-off between adding new characteristics and mitigating further degradation. Hence, there are thresholds in the development where prevention is no longer an option and direct action, e.g., refactoring, is required to mitigate or prevent further degradation.
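One inexpensive proxy for the propagation just described is co-change in version history: files that repeatedly change in the same commits as a versioned requirements document are candidates for being affected by its degradation. The sketch below mines a git log for such co-changes; the repository path and requirement file are hypothetical, and co-change is only a signal to be triangulated with other data.

```python
import subprocess
from collections import Counter

REPO = "/path/to/product-repo"             # hypothetical repository
REQ_FILE = "docs/requirements/payment.md"  # hypothetical versioned requirement

# One block per commit: a marker line with the hash, then the changed files.
log = subprocess.run(
    ["git", "-C", REPO, "log", "--name-only", "--pretty=format:@@%H"],
    capture_output=True, text=True, check=True,
).stdout

co_changed = Counter()
for commit in log.split("@@"):
    files = [f for f in commit.splitlines()[1:] if f.strip()]
    if REQ_FILE in files:
        co_changed.update(f for f in files if f != REQ_FILE)

# Files that most often change together with the requirement.
for path, count in co_changed.most_common(10):
    print(count, path)
```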

