Three Signs You’re Stuck in a Data Center Mentality

Do you think in terms of being cloud-native? Or are you still clinging to the traditional data center approach when it comes to managing your applications and data? In today’s world of fast-paced, cloud-native applications, the old way of thinking may be holding you back.

But how can you tell if you’re still stuck in a data center mentality? Here are three signs that it’s time to fully embrace cloud-native and leave the static, inflexible world of the data center behind.

Sign One: Requests for Production Resources Are a Bottleneck

If an engineering team is working on a new feature that requires additional production server resources to operate, what do they have to do to get these resources?

In most data center-focused organizations, requests for more server resources require submitting a formal request for new servers to be added to the data center. These servers have to be approved, purchased and shipped. They then have to be installed, configured and prepared. The process can take a long time to complete—in some cases, it can take months.

But in cloud-native applications, resource allocation is dynamic. If a new feature requires additional compute resources, those resources are made available to the application automatically and immediately.
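As a hedged sketch of what this looks like in practice (the function name, the requests-per-second metric and the capacity figures are hypothetical, not any specific cloud API), dynamic allocation boils down to deriving capacity from current demand rather than from a request queue:

```python
# Minimal sketch of demand-driven scaling logic. Managed services such as
# cloud auto scaling groups implement this idea as configurable policies;
# the numbers and names here are illustrative only.
import math

def target_instance_count(requests_per_sec: float,
                          capacity_per_instance: float,
                          min_instances: int = 2,
                          max_instances: int = 100) -> int:
    """Return how many instances the current load requires."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    # Clamp to configured bounds: never below the floor, never above the ceiling.
    return max(min_instances, min(needed, max_instances))
```

The point is that capacity becomes a computed output of observed demand, recalculated continuously, rather than a quantity someone has to request and approve.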

Yet many organizations that have moved to the cloud still require resource requests to be submitted for approval and configuration. Even though the resources are cloud-based, these organizations treat them like static entities that have to be protected.

Sometimes companies do this for cost control purposes. If resources can be dynamically added automatically, how can you maintain infrastructure cost controls?

While this is a valid concern, many ways exist to manage dynamic cloud costs: cost allocation, spending reports and budget alerts are all readily available. Yet companies that are still stuck in a data center mentality tend to fall back on the tried-and-true strategy: limit resource access statically and request changes manually.

Bottom line: Cloud-centric organizations focus on automating expansion, not on manually limiting usage.

Sign Two: Is Your Infrastructure Capital or COGS?

How does your organization manage the costs of your cloud infrastructure? Do you consider a new cloud server instance added to the pool of available servers to be a capital expenditure, or is it simply a part of the cost of doing business?

If you consider your cloud infrastructure to be a capital expenditure, then you consider your infrastructure as an investment. By adding a new server, you are investing in your future by growing your available infrastructure. This is how data centers are typically funded, but it does not make sense for cloud-native applications operating in a public cloud.

For a cloud-native application, the infrastructure you pay for is tied directly to the resources your application currently requires. Those requirements are some function of how many customers are using your application: the more customers you have, the more resources you require; the fewer customers, the fewer resources.

The amount of money you pay for a dynamic, cloud-native infrastructure typically varies in proportion to the number of users and the amount of business you are getting. As such, this is more aligned with COGS, or cost of goods sold. The costs of operating the infrastructure are directly related to the amount of business your company receives.
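A toy calculation (all figures are hypothetical, not real cloud prices) makes the COGS framing concrete: if infrastructure spend scales with the customer base, each customer carries a small, roughly constant unit cost instead of contributing to a fixed capital asset:

```python
# Hypothetical COGS-style cost model: infrastructure spend varies with usage.
# The per-customer figure is illustrative only.

def monthly_infra_cost(active_customers: int,
                       cost_per_customer: float = 0.05) -> float:
    """Variable infrastructure cost: grows and shrinks with the customer base."""
    return active_customers * cost_per_customer

# 10,000 customers cost five times what 2,000 customers cost; capacity is
# never purchased up front as a long-term capital investment.
```

Under this model, infrastructure spend moves in lockstep with revenue, which is exactly how cost of goods sold behaves on an income statement.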

Bottom line: Cloud-centric organizations consider infrastructure costs to be simply a part of the cost of supporting their customers, as opposed to a data center-focused organization, which considers them to be long-term investments.

Sign Three: Is Your Infrastructure Static or Dynamic?

For most online applications, traffic patterns vary throughout the day, the week and the month. Some days see heavier usage than others, and some hours of the day experience heavier usage than other hours. Most applications have cycles, whether daily, weekly, monthly or seasonal, and most have times that see substantially higher traffic than others.

Does your infrastructure automatically resize to meet your current traffic demands, or do you have excess capacity simply lying around in case you might need it someday?

Cloud-native applications use dynamic infrastructures that automatically resize resource allocations as needed. This means fewer excess resources sit unused during low-traffic periods, and peak demand can still be satisfied, even when it is higher than expected.
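A small sketch illustrates the difference (the hourly load numbers and per-instance capacity are made up), comparing static peak provisioning with hourly resizing, using instance-hours as a rough cost proxy:

```python
# Compare static peak provisioning with dynamic hourly resizing over a
# hypothetical daily traffic cycle. All numbers are illustrative.
import math

def instances_needed(load: float, per_instance: float = 100.0) -> int:
    """Instances required to serve the given load, at least one."""
    return max(1, math.ceil(load / per_instance))

# Hypothetical requests-per-second for each hour of one day.
hourly_load = [120, 80, 60, 50, 70, 150, 400, 800,
               950, 900, 850, 800, 780, 760, 800, 850,
               900, 980, 700, 500, 400, 300, 200, 150]

# Static mindset: provision for the daily peak, all day long.
static_hours = instances_needed(max(hourly_load)) * len(hourly_load)

# Dynamic mindset: resize each hour to match actual demand.
dynamic_hours = sum(instances_needed(load) for load in hourly_load)

# Dynamic resizing consumes roughly half the instance-hours of static
# peak provisioning for this traffic curve.
```

The exact savings depend on how peaked the traffic curve is, but the shape of the result is general: the flatter your provisioning relative to your demand, the more idle capacity you pay for.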

Dynamic infrastructures save money and reduce environmental waste, because idle static resources still consume power and cooling. Scaling down when capacity is not needed shrinks your company’s environmental footprint.

Meanwhile, data center-focused companies think statically. They allocate resources based on expected demand and make sure they have excess capacity to handle eventual needs. This results in waste. Worse, this mindset requires accurate forecasting of future demand, and forecasts are often wrong. Building an infrastructure to an inaccurate forecast can result in over-utilization that causes brownouts and application failures when demand exceeds expectations. This is unfortunate because, at the moment you are most successful, you end up delivering the worst customer experience.

Are your systems configured statically, or are they adjusted dynamically? Can they automatically adjust as demand increases, or does an increase in load require rethinking and reconfiguring your static resources?

Moving to Cloud-Native Thinking

Answer this question: How many servers can I assign to my application if I receive a sudden increase in traffic?

In data center terminology, the answer depends on the number of warm standbys you have lying around, how much spare capacity you can bring online quickly and how much network bandwidth is available.

In cloud terminology, the answer is simple—as many as are needed because there are essentially an unlimited number of servers available to you.

See the difference? Data center terminology is all about limits, not about expansion and growth. You are defining the limits of what you can accomplish, while the cloud is all about limitless growth.

When you think in terms of application limits, you are also accepting limits on users, customers, sales, revenue and availability.

This is the power of moving to cloud-native thinking.

Lee Atchison

Lee Atchison is an author and recognized thought leader in cloud computing and application modernization with more than three decades of experience, working at modern application organizations such as Amazon, AWS, and New Relic. Lee is widely quoted in many publications and has been a featured speaker across the globe. Lee’s most recent book is Architecting for Scale (O’Reilly Media). https://leeatchison.com
