Even though the public cloud sparked a lot of innovation in scalable infrastructure, it failed to transform or reinvent how infrastructure and applications are built. We still deal with the same old constructs: virtual machines, complex networking topologies, and the plumbing that connects the different layers together.
We are now at a very interesting point in the history of computing, where infrastructure and applications are being redefined, and that redefinition is sparking a new generation of cloud infrastructure.
Machine learning in its different forms, such as probabilistic systems, neural networks, and linear regression, has long been used across many applications. However, we are now rediscovering what AI can do for us. The principle is simple: wherever there is more data, and hints of more value to be derived from it, AI becomes an interesting dimension to add. The field recently became hot thanks to significantly cheaper and faster compute, as well as the abundance of data. More insights and services can now be derived and learned with these conventional models. The question is now how and where you apply such AI models, and it becomes especially interesting when we apply them to cloud infrastructure and the applications running on top of it.
IT infrastructure, including the cloud, has no shortage of data. What it lacks is nimble and meaningful ways to control infrastructure and applications so that they work in harmony; that is where significantly higher value lies. Infrastructure today is controlled with hard-coded rules based on predefined conditions. For example: if CPU usage stays above 80% for 10 minutes, spin up a new virtual machine. How frequently do we need to revise these rules? Usually as fast as the application is evolving. If applications now evolve faster than before, and we expect that evolution to accelerate, do we have enough engineering hours for the tedious analysis of picking the right rules, parameters, components, and so on? Do we really want to spend time there?
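To make the fragility of such rules concrete, here is a minimal sketch of a hard-coded scaling rule in Python. The 80% threshold and 10-minute window mirror the example above; the `ThresholdScaler` class and its parameters are hypothetical names chosen for illustration, not any real autoscaler's API:

```python
from collections import deque

class ThresholdScaler:
    """A hard-coded scaling rule: every number here was picked by hand
    and must be revisited each time the application's behavior changes."""

    def __init__(self, threshold=0.80, window=10):
        self.threshold = threshold            # CPU fraction that triggers scaling
        self.samples = deque(maxlen=window)   # sliding window, one sample per minute

    def observe(self, cpu_fraction):
        """Record one CPU sample; return True when a new VM should be spun up."""
        self.samples.append(cpu_fraction)
        full_window = len(self.samples) == self.samples.maxlen
        return full_window and all(s > self.threshold for s in self.samples)
```

Note how brittle this is: a single dip below the threshold silences the rule for another full window, and nothing in the code learns from the application's actual behavior.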
Now, if we have AI systems that can tell us what to do, how can we give them enough control to keep our applications and services up to their key performance indicators (KPIs)? If such control became AI-powered, and we could make it economically sound and smart enough to learn quickly, a new era of smart infrastructure would emerge.
It is like finding a cheap and easy way to make all current cars driverless!
Containers emerged a couple of years ago as a great way to wrap software so that it behaves consistently from one environment to another. The technology has since rapidly evolved and is now considered the building block of modern applications, providing a simple, unified, and consistent view of applications running on different infrastructures. Containers offer the opportunity to redefine computing: rather than thinking of compute as objects you rent, such as virtual machines, we can think of it as a utility that applications and services draw on at any capacity and with very high precision. An application running inside a container does not care what is outside that container as long as it has the resources it needs, such as compute, memory, and the I/O to connect with its other components.
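As a sketch of that isolation, the snippet below shows how an application could discover its own resource envelope from inside a container without knowing anything about the host. It assumes the Linux cgroup v2 convention, where the memory limit is exposed as a `memory.max` file; the function name is ours, and the fallback covers environments where no such file exists:

```python
def container_memory_limit(path="/sys/fs/cgroup/memory.max"):
    """Return this container's memory limit in bytes, or None if no limit
    is imposed or we are not running under cgroup v2.

    The `path` default is the cgroup v2 convention; it is parameterized
    here so the parsing can be exercised against any file.
    """
    try:
        with open(path) as f:
            raw = f.read().strip()
    except OSError:
        return None          # file absent: not in a cgroup v2 environment
    if raw == "max":
        return None          # cgroup present but no limit set
    return int(raw)          # limit in bytes, e.g. 268435456 for 256 MiB
```

The application never asks how big the host is; it only asks what envelope it was handed, which is exactly the "compute as a utility" posture described above.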
Think of it as making compute as accessible as electricity!
Big players in the public cloud are creating a new abundance of public infrastructure that will keep driving prices down and enriching the services offered on top of it. Even so, basic compute (aka IaaS) will continue to be the biggest chunk of the cloud, since it is the common denominator for all applications and engineers. It is also apparent that, despite the abundance of basic compute capacity, we are still in the early days of consuming the cloud as a utility.
We need to normalize and decompose compute into its basic elements in order to offer it as a utility. Compute should be thought of like electricity: we consume electricity the same way, with the same side effects, regardless of the provider, and we pay for it with very high precision, based on what our devices actually consumed, without approximation or rounding.
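A rough Python sketch of what such utility-style metering could look like: bill for exactly the resources consumed, with no rounding up to VM-sized units. The function name and the per-second rates are made up for illustration, not any provider's actual pricing:

```python
def metered_bill(samples, cpu_rate=0.000011, mem_rate=0.0000012):
    """Price compute like electricity: pay for exactly what was drawn.

    samples: a list of (cpu_cores_used, mem_gb_used) readings taken once
    per second. Rates are illustrative: dollars per core-second and per
    GB-second. No minimum charge, no rounding to instance sizes.
    """
    cpu_seconds = sum(cpu for cpu, _ in samples)    # total core-seconds drawn
    gb_seconds = sum(mem for _, mem in samples)     # total GB-seconds drawn
    return cpu_seconds * cpu_rate + gb_seconds * mem_rate
```

Contrast this with renting a virtual machine: there, an application using half a core still pays for the whole box; here, two seconds at half a core costs exactly one core-second.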
Imagine compute being fed directly into applications, just like electricity, without affecting how applications are developed or where they are run. We can certainly see the opportunities if we can have compute offered in such a fluid format.
Hint: the solution includes containers!
Stay tuned for what Magalix is up to! Register now to get updates and early access to a new way of running your applications on the cloud!