Enduring Techniques from the Technology Radar | ThoughtWorks

From https://www.thoughtworks.com/insights/articles/enduring-techniques-technology-radar :


Agile IT

The core of Agile IT is, of course, Agile software delivery, and there’s a large amount of content on this from both ThoughtWorks and the wider industry. From the Radar, specific techniques we think are important include:

  • Test at the appropriate level, incorporating unit, functional, acceptance and integration tests to build an effective test pyramid, rather than testing everything through UI tests, which are often slow and brittle (a minimal test-pyramid sketch appears after this list).
  • Coding Architects who work with teams and actually write software, rather than existing as “the Architecture Department” in an ivory tower pontificating on the best ways to write software. This helps architects understand the full context of their recommendations and achieve their long-term technical vision.
  • Lightweight Architecture Decision Records provide a useful paper-trail on major decisions without becoming yet another piece of documentation on a wiki that no-one reads.
  • While code is malleable, the data storage layers are traditionally less so. Evolutionary database rejects the notion that a database schema is fixed and hard to change and applies refactoring techniques at the database level. This allows the DB to evolve in a similar way to code and avoids a schema that is mismatched to the application that uses it.
  • Software architecture has often been about predicting the future, but anyone who witnessed the explosion of Docker, Kubernetes, or the JavaScript-eats-the-world phenomenon will realize that making predictions further than about six months out is very difficult. Evolutionary Architecture accepts this reality and instead focuses on creating systems that are amenable to change in the future. Creating architectural fitness functions to describe the ideal system characteristics is the engine that drives this overall technique (a fitness-function sketch follows this list).
  • A significant trend across modern organizations, even beyond the IT department, is to treat assets such as software and services as products, rather than projects. This is a deep topic, and a good place to start is Sriram Narayan’s overview article. A follow-on technique is applying product management to internal platforms, and Evan Bottcher’s article has good detail on what this might mean for you.
  • We’ve gained more understanding over the last few years about how Conway’s Law applies across an organization and how it affects the systems and structures that we build. But what if you don’t have the architecture you want? One strategy is to apply the Inverse Conway Maneuver: structure teams to reflect the systems you aspire to have, and let Conway’s Law restructure those systems for you.
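
To make testing at the appropriate level concrete, here is a minimal pytest-style sketch of the pyramid’s layers; the `integration` and `e2e` markers, the `api_client` and `browser` fixtures, and the discount function are illustrative assumptions rather than anything prescribed by the Radar.

```python
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Tiny piece of domain logic used to illustrate the unit level."""
    return price * (1 - percent / 100)


def test_discount_is_applied():
    # Unit level: fast and isolated; the bulk of the pyramid lives here.
    assert apply_discount(price=100, percent=10) == 90


@pytest.mark.integration
def test_order_endpoint_persists_order(api_client):
    # Service/API level: exercises the HTTP layer and the database together,
    # but still avoids driving a browser. `api_client` is an assumed fixture.
    response = api_client.post("/orders", json={"sku": "ABC-1", "quantity": 2})
    assert response.status_code == 201


@pytest.mark.e2e
def test_checkout_happy_path(browser):
    # UI level: only a handful of critical journeys, because this is the
    # slowest and most brittle layer. `browser` is an assumed fixture.
    ...
```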
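
As one possible shape for an architectural fitness function, the sketch below turns a single desired characteristic, “domain code never imports from the web layer”, into an automated check that can run in the build; the package names are assumptions made for the example.

```python
import ast
import pathlib

DOMAIN_DIR = pathlib.Path("myapp/domain")   # hypothetical domain package
FORBIDDEN_PREFIX = "myapp.web"              # hypothetical web/UI package


def imported_modules(source: str):
    """Yield every module name imported by a Python source file."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                yield alias.name
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module


def test_domain_layer_does_not_depend_on_web_layer():
    # The fitness function: fail the build if the dependency rule is broken.
    offenders = [
        (str(path), module)
        for path in DOMAIN_DIR.rglob("*.py")
        for module in imported_modules(path.read_text())
        if module.startswith(FORBIDDEN_PREFIX)
    ]
    assert not offenders, f"Domain code imports web code: {offenders}"
```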

Continuous delivery

The premise of continuous integration was pretty radical 10 years ago — every change made by every software developer should be built and ‘integrated’ against all the other changes, all the time. Continuous delivery took this technique even further by saying that every change to the software should be deployable to production at any time. Jez Humble and David Farley wrote the canonical book on CD, and the approach has gained significant adoption.

  • Continuous delivery centers around the concept of an automated deployment pipeline that takes changes from developer workstations through to production release. Pipeline configuration ideally should be source controlled and versioned, using pipelines as code.
  • In order to achieve CD, a number of techniques need to come together:
  • Automate database deployment to ensure DB updates are correctly matched to the code.
  • Use Consumer-driven contract testing to allow many teams to collaborate without stalling each other or creating bottlenecks (a contract-test sketch appears after this list).
  • Decoupling deployment from release allows code to sit in production without being active, making the release of a feature a software switch (see the feature-toggle sketch after this list).
  • Continuous delivery for mobile devices allows us to apply CD techniques even for native applications.
  • Phoenix Environments is a technique to deliberately tear down and destroy an entire environment (dev, test or even production) before recreating it from scratch. This helps ensure that the new environment is created exactly as described within an infrastructure-as-code framework and contains no unexpected hangover or misconfiguration.
  • Blue-green deployment allows a team to roll out live upgrades to their software by first creating a clone of the production configuration, deploying the new release to the clone, cutting over live traffic and then cleaning up the ‘old’ release (a cutover sketch appears after this list).
  • Managing build-time dependencies can be tricky. One technique to make it easier is to use Docker for builds, running the compilation step in an isolated environment.
  • Many teams have mastered continuous delivery, but with continuous deployment every change that results in a passing build is also deployed automatically to production.
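
To illustrate consumer-driven contract testing, the sketch below keeps the idea tool-neutral rather than tying it to a specific framework such as Pact: the consumer publishes the fields it relies on, and the provider verifies its own service against that contract before deploying. The endpoint, field names and provider URL are assumptions for the example.

```python
import requests

# Contract published by the consuming team, for example in a shared repository.
ORDER_CONTRACT = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {
        "status": 200,
        "required_fields": {"id": int, "status": str, "total_pence": int},
    },
}

PROVIDER_BASE_URL = "http://localhost:8080"  # the provider service under test


def test_provider_honours_order_contract():
    expected = ORDER_CONTRACT["response"]
    resp = requests.get(PROVIDER_BASE_URL + ORDER_CONTRACT["request"]["path"])

    assert resp.status_code == expected["status"]
    body = resp.json()
    for field, field_type in expected["required_fields"].items():
        assert field in body, f"missing field {field!r} would break the consumer"
        assert isinstance(body[field], field_type)
```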
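
Decoupling deployment from release is commonly implemented with a feature toggle; the sketch below assumes a simple environment-variable flag and hypothetical checkout functions, whereas real systems often use a dedicated flag service.

```python
import os


def feature_enabled(name: str) -> bool:
    """Read a feature flag from the environment, defaulting to 'off'."""
    return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"


def checkout(cart):
    # The new code path is deployed to production but stays dark until the
    # flag is switched on; releasing the feature is then a configuration
    # change, not a deployment.
    if feature_enabled("new_pricing_engine"):
        return checkout_with_new_pricing(cart)
    return checkout_with_legacy_pricing(cart)


def checkout_with_new_pricing(cart):
    ...  # new implementation, safe to deploy while the flag is off


def checkout_with_legacy_pricing(cart):
    ...  # current behaviour, still the default
```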
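
A blue-green cutover might look roughly like the sketch below; the `Router` class and `deploy_release` function are hypothetical stand-ins for whatever actually directs traffic and provisions an environment (a load balancer API, DNS weighting or a reverse proxy).

```python
import requests


class Router:
    """Hypothetical traffic director with two named environments."""

    def __init__(self):
        self.live = "blue"

    def idle(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def switch_to(self, target: str) -> None:
        self.live = target  # in reality: update the load balancer or proxy


def deploy_release(environment: str, version: str) -> str:
    """Deploy `version` to the named environment and return its base URL."""
    ...  # push artefacts, run migrations, start services
    return f"https://{environment}.internal.example.com"


def blue_green_release(router: Router, version: str) -> None:
    target = router.idle()
    base_url = deploy_release(target, version)

    # Smoke-test the clone before it receives any live traffic.
    assert requests.get(base_url + "/health", timeout=5).status_code == 200

    router.switch_to(target)
    # The previous environment stays around for a fast rollback and is
    # cleaned up once the new release has proven itself in production.
```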

DevOps

Traditional software delivery faces distinct problems in the “last mile” of getting software out of development, into production and then running it operationally. The problems stem from entirely different teams doing development and operations with very different measures for their success, leading to a lot of friction and animosity between the two.

  • DevOps is a cultural movement that tries to bring the two sides together, with developers who think a little more like operations people, and operations people who think a little more like developers. There are a lot of subtleties to the right way to do this, and several antipatterns in the industry (such as a “DevOps team”), but the key is to create more empathy between the two.
  • The seminal book Accelerate identifies Four Key Metrics as measures of IT performance: lead time, deployment frequency, change fail rate and mean time to recovery. Improving these directly improves IT performance, and they are generally the exact metrics that good DevOps adoption within an organization improves (a sketch of computing them appears after this list).
  • Focus on Mean Time to Recovery is a specific instance of caring about the key metrics, and advice that we gave before Accelerate was released.
  • Structured logging is a technique to improve the data we get from log files by using a systematic, machine-readable format for log messages (see the sketch after this list).
  • Automation of technical tests allows an organization to take ‘operational’ or cross-functional style testing such as failover and recovery, and automate that testing.
  • Testing techniques for software are relatively well known and covered in the continuous delivery sections of this article. But organizations that are storing infrastructure configuration as code can test that infrastructure too: Pipelines for Infrastructure as Code is a technique that allows errors to be found before infrastructure changes are applied to production.
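
As a sketch of what measuring the Four Key Metrics from a team’s own deployment records could look like, the example below assumes a simple record format; in practice the data would come from pipeline and incident tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Deployment:
    committed_at: datetime                # when the change was committed
    deployed_at: datetime                 # when it reached production
    failed: bool                          # did it cause a production incident?
    restored_at: datetime | None = None   # when service was restored, if it failed


def four_key_metrics(deployments: list[Deployment], period_days: int) -> dict:
    lead_times = [d.deployed_at - d.committed_at for d in deployments]
    failures = [d for d in deployments if d.failed]
    recoveries = [d.restored_at - d.deployed_at for d in failures if d.restored_at]

    def mean(deltas: list[timedelta]) -> timedelta:
        return sum(deltas, timedelta()) / len(deltas) if deltas else timedelta()

    return {
        "lead_time": mean(lead_times),
        "deployment_frequency_per_day": len(deployments) / period_days,
        "change_fail_rate": len(failures) / len(deployments) if deployments else 0.0,
        "mean_time_to_recovery": mean(recoveries),
    }
```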
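
A minimal structured-logging sketch using only the Python standard library is shown below; each event is emitted as a single JSON object with named fields so that log tooling can query on them directly. The field names are illustrative, and many teams use a dedicated logging library instead.

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        event = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Carry across any structured fields passed via the `extra` argument.
        event.update(getattr(record, "fields", {}))
        return json.dumps(event)


handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits e.g. {"timestamp": "...", "level": "INFO", "message": "order placed",
#             "order_id": "A-1001", "duration_ms": 42}
logger.info("order placed", extra={"fields": {"order_id": "A-1001", "duration_ms": 42}})
```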

Cloud

It’s clear that the future of infrastructure is in the cloud. Organizations generally cannot compete with the world-class operational ability of the major cloud vendors, all of whom are racing to provide ever more convenience and value for their customers. While effective use of cloud is a big topic, we think there are at least two pieces of enduring advice in the Radar.

  • Microservices are the first “cloud native” architecture, allowing us to trade reduced development complexity of each component for higher operational complexity of the overall system. For microservices to be successful, they require cloud to make that tradeoff worth it. As with any popular architectural style there are many ways to misuse microservices, and we caution teams against microservice envy, but they are a reasonable architectural default for cloud.
  • With multiple client devices and consuming systems, it can be tempting to try to create a single API that will work for all of them. But client needs vary, and Backends for Frontends (BFF) is an approach that instead creates a simple translation layer, allowing many different kinds of clients to access an API efficiently (see the sketch below).
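
As an illustration, the sketch below is a tiny Backend for Frontends written with Flask: it aggregates two downstream services and reshapes the result for one specific client, here a mobile home screen. The downstream URLs, route and field names are assumptions for the example.

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)

ORDERS_SERVICE = "http://orders.internal.example.com"        # hypothetical
CUSTOMERS_SERVICE = "http://customers.internal.example.com"  # hypothetical


@app.get("/mobile/home/<customer_id>")
def mobile_home_screen(customer_id: str):
    # Aggregate downstream calls so the mobile client makes one request over
    # a slow network instead of several.
    customer = requests.get(f"{CUSTOMERS_SERVICE}/customers/{customer_id}").json()
    orders = requests.get(f"{ORDERS_SERVICE}/orders?customer={customer_id}").json()

    # Return only the fields this screen needs, in the shape it needs them.
    return jsonify({
        "greeting": f"Hello, {customer['first_name']}",
        "recent_orders": [
            {"id": o["id"], "status": o["status"]} for o in orders[:3]
        ],
    })
```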

Security

With ever-increasing reliance on technology, consumers and companies alike depend more and more on software systems, and create valuable, useful troves of data. Unfortunately, security has often been an afterthought and implemented poorly, in part because the costs of security measures are paid up front while only a nebulous benefit (did we get hacked?) is ever obtained from them. Organizations that have been hacked, or that have even deliberately allowed customer data to be used nefariously, have suffered few long-term consequences. While this might sound like doom and gloom, the good news is that consumer awareness of security and privacy issues is on the rise and governments are creating legislation to better protect data. We advocate an approach that “builds security in” to software products rather than treating it as an option or an afterthought. We think the following techniques are particularly important:

  • Threat Modeling is a specific process that teams can follow when creating requirements for the software that they build, identifying exactly which threats are most important and worth defending against.
  • Decoupling secret management from source code and using Secrets as a service helps avoid the situation where a developer inadvertently adds credentials to (say) a GitHub repository, making them visible where they should not be (a minimal sketch appears below).
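
The sketch below illustrates the idea of fetching a credential at runtime from a dedicated secrets store instead of keeping it in the repository; the service URL and API shape are hypothetical stand-ins for a real system such as HashiCorp Vault or a cloud secret manager.

```python
import os

import requests

SECRETS_URL = "https://secrets.internal.example.com"  # hypothetical service


def get_secret(name: str) -> str:
    # The service token is injected into the runtime environment by the
    # platform, so neither it nor the secret ever appears in source control.
    token = os.environ["SECRETS_SERVICE_TOKEN"]
    response = requests.get(
        f"{SECRETS_URL}/v1/secret/{name}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["value"]


db_password = get_secret("orders-db-password")  # fetched at startup, never committed
```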