Eliminate infrastructure complexity with serverless containers

January 20, 2018, by Mohamed Ahmed
You already use functions to streamline your infrastructure. Why not do the same for containers?

These days, many developers find it easier and quicker to write limited-scope services. Function-as-a-Service platforms like AWS Lambda, Azure Functions, and Google Cloud Functions provide a new serverless paradigm that hides the infrastructure entirely. In short: no servers, no pain!

Containers have already become the new standard for modular and efficient microservice applications. Instead of installing applications directly on servers, teams now package software once and deploy it across many different environments, a consistency that keeps developers and operators at peace.

But here’s the thing most people don’t talk about when it comes to containers:

Containers require your team to do a lot of heavy lifting before they’ll deliver any benefits. You’re going to be working 12-hour days for several weeks just to get your first cluster and orchestration tool up and running. Sure, the “Hello World” is simple enough — but then you’ve got to create the right network architecture and storage volumes, secure your cluster, and integrate it with the rest of your pipeline. When all is said and done, you’re looking at several months’ worth of effort before your containers deliver any measurable value.

Along the way, relationships between operators and developers can get tense. Containers offer a promise: to maintain a clean separation between developers and the operations team. But the reality is very different. Containers don’t run in isolation, which means there’s always some amount of necessary back-and-forth between developers and the infrastructure team. This back-and-forth typically centers on control and visibility. For example, developers want to control traffic routing to their containers — but since most orchestration tools don’t draw a clear boundary of ownership around each container, many of these discussions turn out to be unproductive. Check out this post to see what we’ve learned in this area.

Containers don’t automatically save money. Overspending often persists under the default resource management policies that orchestration tools like Kubernetes and ECS offer out of the box. Developers still want to make sure their containerized services have enough resources to handle spikes, and orchestration tools reserve resources for each container according to the developers’ specifications — which means you end up paying for a lot of idle virtual machines!
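To make this concrete, here is a minimal sketch of how that plays out in Kubernetes (the names, image, and numbers are all illustrative): the scheduler packs nodes according to each container’s resource *requests*, not its actual usage, so capacity sized for peak traffic sits reserved even when the service is idle.

```yaml
# Hypothetical Deployment: each of the 10 replicas reserves 1 CPU and 1 GiB,
# sized for peak traffic. The scheduler places pods based on these requests,
# not on real usage, so if typical load is a fraction of peak, most of the
# reserved capacity sits idle on VMs you are still paying for.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service        # illustrative name
spec:
  replicas: 10
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: example.com/checkout:1.0   # placeholder image
          resources:
            requests:
              cpu: "1"          # reserved whether used or not
              memory: 1Gi
            limits:
              cpu: "1"
              memory: 1Gi
```

Multiply that reservation across dozens of services and the gap between requested and consumed resources becomes the cloud bill nobody can explain.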

Okay, so what if my team runs serverless containers?

Well, “No servers, no pain,” right?

Before you start thinking about using containers to streamline your organization, you need to start by focusing on a specific end goal. Technical leaders want their teams to focus on core business development while keeping infrastructure costs under control — and in fact, many CxOs have told us they’ve been able to achieve both of these goals relatively quickly, by adopting the following three tactics:

  1. Choose a container management technology that provides clear separation between infrastructure and development.
    Clear separation requires a certain amount of bias in the design. By helping developers to focus on what matters most to them, and by being clear about all the tools and data that will help them to deploy, monitor, and debug their services, you’ll help them move faster. When you set up specific rules, instead of allowing each developer to follow their own methodology, you’ll end up with a much more secure, consistent architecture.
  2. Accept that many factors impacting services and infrastructure can’t be controlled.
    On one level, your team has control over their code — and hopefully over your infrastructure as well. But it’s not easy to manage the behavior of every user — or to predict what they’ll do next. This means it’s very difficult to predict how users’ behavior and needs will drive the development of your infrastructure. Study your users’ behavior, carefully and constantly — and accept the fact that it’s out of your hands.
  3. Use smart services to help with hard-to-control factors.
    Although smart services may not be necessary for simple tasks like deployment, testing, and monitoring, they’re highly useful for managing interactions between your software and infrastructure. For example, a wave of new product signups on your website should be available on your frontend database, because it’s going to impact your infrastructure. Far too many companies fall into the trap of over-provisioning, because they think it’s going to be cheaper in the long run. But the truth is, you’ll end up paying thousands of dollars for VMs, most of which will have very low utilization. Instead, try using a service like Magalix, which predicts and correlates workloads across containers and infrastructure, keeps infrastructure efficient, and protects you from factors beyond your control.

Magalix uses machine learning to help developers and companies focus on the most critical factors in their container deployments. Our machine learning algorithms automatically manage and scale your applications, containers, and infrastructure as your business’s needs change — saving you up to 50 percent on your monthly bills, and up to 25 percent of your team’s time.

For more information, or to sign up, visit www.magalix.com.