Choice Vs. Complexity in Cloud-Native Applications

Simplicity is at the heart of our desire to use cloud-native application methodologies. Service-based applications are designed to decrease the complexity of individual service components, and cloud-native infrastructure narrows the set of infrastructure choices we must manage. Simplicity is core to virtually all cloud-native patterns.

But one of the fundamental tenets of modern application development (which is driving the cloud-native movement) is actively working against this desire for simplicity. You see, modern application architectures encourage team empowerment, and team empowerment pushes decision-making down to the lowest logical level of the organization.

But how much choice should you give your development teams in building their cloud-native applications? The answer may not be as simple as it seems.

Choice and Your Cloud-Native Teams

Deciding how much choice to give your teams is not an easy decision.

On the one hand, we want to give our development teams the freedom to decide how they design, develop and operate their applications. Empowered teams are innovative teams. The more choices you give your development teams, the greater their opportunity to innovate. This innovation can lead to many architectural and product advantages, including more customer-centric solutions and faster responses to change. The result is typically a shorter time to market, more competitive products, higher reliability and availability and, ultimately, happier, more engaged teams.

However, choice has a downside. The same characteristics that bring you innovative, customer-oriented solutions also work against simplicity. More choices mean more variation in the decisions made within your cloud-native applications, and more variation increases overall application complexity. Put simply, the more choices you give your teams, the more variations they will use. The more variations in use, the more complex your overall application becomes.

You see, choice creates complexity, and complexity comes at the cost of simplicity.

Early on, choice empowers your organization and fuels innovation through your cloud-native processes. But, as time goes on, the cloud-native processes that initially empowered your organization can work against it in the form of increased complexity.

The more you empower your team, the more complex your application becomes and the less supportable it is in the long term.

Obviously, this counterintuitive result is not what you expect nor what you want for your organization.

Effectively managing knowledge is fundamental to reducing complexity in any application. Managing knowledge is key to reducing cognitive load and ultimately improving maintainability. But, long-term knowledge management is often at odds with innovation and choice.

How do we enable our teams without hurting our long-term maintainability?

Managing Decisions with Sandboxing

This is where sandbox policies come in. A sandbox policy is a framework given to your service teams that defines the criteria for the decisions they are empowered to make.

In a sandbox model, your cloud-native service teams are encouraged to make any decision that meets their team’s needs and goals as long as the decision fits within a well-established set of sandbox policies.

What’s an example of a sandbox policy? A sandbox policy might be something like: “Your team can develop its applications in any programming language contained in the following list of languages.” By specifying an allowed list of programming languages, you are giving your teams a choice that encourages innovation. Yet restricting the size of the list keeps their decision from going so far away from the decisions made by other teams in the organization that it increases the overall application complexity. If most developers use Go or Python in your application, you may not want one team going off and developing a service using Perl or C#.

Sandbox policies can be created around any decisions that are pushed down within the organization:

  • What API methodology are we allowed to use in our service design? Procedural or asynchronous? Web-based? REST? REST light?
  • What execution environment can we use to operate our service? Serverless? Containers? Bare metal?
  • What third-party plugins can we use?
  • What can we use to monitor our service?
  • What are the required security policies and systems?
  • What testing strategy should we use?
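Decisions like these can be expressed as policy-as-code, so a proposed choice can be checked mechanically before a team commits to it. A minimal sketch of that idea follows; the policy values, category names and function are all hypothetical, not part of any standard tooling:

```python
# Minimal policy-as-code sketch (all policy values are hypothetical examples).
# A sandbox policy maps each decision category to the set of allowed choices;
# a proposed decision either fits the sandbox or is flagged for escalation.

SANDBOX_POLICY = {
    "language": {"go", "python"},             # allowed programming languages
    "api_style": {"rest", "grpc"},            # allowed API methodologies
    "runtime": {"containers", "serverless"},  # allowed execution environments
}


def check_decisions(decisions: dict) -> list:
    """Return the proposed decisions that fall outside the sandbox.

    An empty result means every decision fits the policy; anything returned
    needs approval from the higher-level decision-making authority.
    """
    violations = []
    for category, choice in decisions.items():
        allowed = SANDBOX_POLICY.get(category)
        if allowed is not None and choice.lower() not in allowed:
            violations.append(
                f"{category}={choice} not in sandbox {sorted(allowed)}"
            )
    return violations


# A team proposing Perl is flagged for escalation rather than silently allowed.
print(check_decisions({"language": "perl", "api_style": "rest"}))
```

The point of the sketch is the shape of the process, not the specific tool: the sandbox is explicit and checkable, decisions inside it need no approval, and anything outside it is surfaced for review rather than rejected outright.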

The team has many decisions to make, and sandboxing gives them choices and options but also provides boundaries and protections. As long as the decision they want to make is within the walled garden of the sandbox, all is good.

But what if a team wants to make a decision that goes outside of the sandbox? There certainly are cases where this can happen: A primarily Linux-based application may have a service that requires Microsoft Azure. A service team may want to bring in and use a new tool that’s never been used before. Exceptions do come up.

In these cases, the decision must be approved by a higher-level decision-making authority. In most companies, this is an architecture team, a technical policy board or steering committee, or perhaps an executive authority such as the CTO.

The decision is made in the context of other related decisions. Ultimately, the goal is to give the development teams the flexibility they require without allowing decisions that inappropriately increase technical debt, decrease the ability to manage the application or unduly increase long-term complexity.

In a typical organization, sandbox policies themselves are defined and created by the same decision-making authority. As teams request exceptions to the policy, the policy itself may be adjusted, changed and ultimately evolve into a better and more complete policy.

All of this serves the same goal: giving teams choices and options for innovation without compromising the application's long-term maintainability or letting its complexity grow unchecked.

Having sandbox policies is essential to keeping your cloud-native organization healthy and your application manageable and sustainable in the long term.

Lee Atchison

Lee Atchison is an author and recognized thought leader in cloud computing and application modernization with more than three decades of experience, working at modern application organizations such as Amazon, AWS, and New Relic. Lee is widely quoted in many publications and has been a featured speaker across the globe. Lee’s most recent book is Architecting for Scale (O’Reilly Media).
