Developers are forced to spend roughly three months each year on dull, repetitive tasks.
DevOps emerged as we began to manage infrastructure as code. In its early days, modifying infrastructure components was quick and easy; the goal was to unblock developers by giving them that control. However, it has grown more and more complex, far beyond the original definition of DevOps. In relatively small companies, developers still manage infrastructure themselves, wasting a significant portion of their time and losing more than 25% of their productivity, while many larger companies maintain a dedicated cloud infrastructure team instead.
DevOps, in its current complex form, drains time, money, and energy on dull, repetitive tasks. Think of how many teams and developers have had to repeat the same infrastructure provisioning work. How many times have developers woken up in the middle of the night to adjust an application's scalability parameters because of a change in deployed services or an unexpected shift in workload?
Although these tasks are relatively straightforward, they are repeated many, many times by every team running apps in the cloud. Developer time is valuable and costly, and companies lose a great deal of both by having developers spend it this way.
What will make DevOps die?
AI, machine learning, machine intelligence, or whatever shiny title you prefer, is held up by many as a profound technology that will help humanity reclaim wasted hours, or at least make people smarter at what they do. According to Andrew Ng, the AI guru, three conditions make this possible: abundant data, repetitive tasks that humans can do well in a series of short thoughts, and cheap compute power. For cloud native apps, all three conditions are well satisfied. However, two major factors are still missing before AI can be unleashed and DevOps can die: a clean separation between cloud infrastructure and applications, and an end-to-end AI-powered experience.
Clean separation between cloud infrastructure and applications
To unleash the power of AI, we need a clean abstraction between infrastructure and applications. Today, applications obtain the resources they need, such as CPU, memory, and I/O, with a heavy dependency on virtual machines. These resources should instead be available to applications as utilities: applications should consume CPU, memory, and I/O in a streamlined fashion, allocated according to the application's KPIs and expected SLA. For example, if a microservice needs more memory during a workload spike, it should get precisely the right amount of memory to maintain the application's expected performance through that spike. Once the spike has passed, that memory is reallocated. We need this to happen minute by minute, which requires one more step to achieve.
Hint: the answer to this is in containerization ;)
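The minute-by-minute reallocation described above can be sketched as a simple reconciliation loop. This is a hypothetical illustration, not any platform's real API: `observed_usage_mb` stands in for a metrics source, and the returned limit would be applied through whatever container runtime is in use.

```python
# Hypothetical sketch: track a container's memory limit to actual demand.
# The constants and function names are assumptions for illustration only.

HEADROOM = 1.2          # keep 20% spare capacity above observed usage
MIN_LIMIT_MB = 256      # never shrink below the service's baseline

def next_memory_limit(observed_usage_mb: float) -> int:
    """Pick a limit just above current usage, with a floor."""
    return max(int(observed_usage_mb * HEADROOM), MIN_LIMIT_MB)

def reconcile_once(observed_usage_mb: float, current_limit_mb: int) -> int:
    """One tick of the minute-by-minute loop: return the limit to apply."""
    desired = next_memory_limit(observed_usage_mb)
    # Only act when the gap is meaningful (>10%), to avoid churn.
    if abs(desired - current_limit_mb) / current_limit_mb > 0.1:
        return desired
    return current_limit_mb
```

Run once a minute against live metrics, this both grows the limit during a spike and hands memory back afterward, which is exactly the behaviour containers (unlike whole VMs) make cheap enough to do continuously.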
End-To-End AI Powered Experience
To make AI a true time saver, DevOps automation must become more accessible, and AI must be aware of, and have more control over, the knobs that affect an application's performance and health. The hard part is not having developers identify what needs to be done; it is quickly anticipating issues from the right signals, making a decision, executing it, and validating the result. The majority of the time is spent on the last two steps, because of the complex interdependencies between infrastructure and application performance.
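The four-step loop above can be sketched in a few lines. Everything here is a placeholder: a real system would plug a learned predictor into `anticipate`, a policy into `decide`, and infrastructure APIs into the `execute` and `validate` callbacks, which are exactly the two steps the text identifies as the expensive ones.

```python
# Hypothetical sketch of the loop: anticipate -> decide -> execute -> validate.
# Thresholds and field names are illustrative assumptions.

def anticipate(metrics: dict) -> bool:
    """Predict trouble from a leading indicator (simplified threshold)."""
    return metrics["p95_latency_ms"] > metrics["latency_slo_ms"] * 0.8

def decide(metrics: dict) -> dict:
    """Choose a corrective action; here, add one replica."""
    return {"replicas": metrics["replicas"] + 1}

def run_cycle(metrics: dict, execute, validate) -> bool:
    """Run one cycle; return True if an action was taken and validated."""
    if not anticipate(metrics):
        return False
    action = decide(metrics)
    execute(action)           # the costly step: applying the change
    return validate(action)   # the other costly step: confirming it worked
```

The point of the sketch is the shape, not the logic: once execute and validate are automated behind stable interfaces, the whole cycle can run without a developer in the loop.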
Now, the key is to make AI augment the end-to-end DevOps experience. There is no shortage of metrics and data in cloud native applications, and containers make it easier to streamline resources and let any developer define key performance indicators for an application or service.
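What might a developer-defined KPI look like? A minimal sketch, assuming an invented schema (none of these field names come from a real platform):

```python
# Hypothetical sketch: a developer-declared KPI spec for one service,
# plus a check that an AI-driven loop could act on. Field names are
# illustrative assumptions, not any specific platform's schema.

checkout_kpis = {
    "service": "checkout",
    "latency_slo_ms": 250,      # 95th-percentile response time target
    "error_rate_max": 0.01,     # at most 1% failed requests
}

def kpi_violations(kpis: dict, observed: dict) -> list:
    """Return which KPIs the observed metrics currently break."""
    out = []
    if observed["p95_latency_ms"] > kpis["latency_slo_ms"]:
        out.append("latency")
    if observed["error_rate"] > kpis["error_rate_max"]:
        out.append("errors")
    return out
```

A declaration like this is all the developer would need to write; anticipating, deciding, executing, and validating against it would be the AI's job.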
We are closer than ever to the brighter future of cloud computing!
It is going to happen... sooner than expected
We are bringing this vision to life! Join our waitlist to try the end-to-end AI-powered cloud experience! http://bit.ly/GoodbyeDevOps