Not rendering correctly? View this email as a web page here.

This week: Meet Magalix at KubeCon, Serverless Architecture, and more!

Hey there,

KubeCon 2020 is coming up, and Magalix will be there! Come talk to us about how Magalix can help you adopt team-based approaches to optimizing Kubernetes, ensure K8s security best practices, and more!

If you're going to KubeCon, pick a time to meet with us here.

In addition, last week we released our first eBook, the Kubernetes 101 Series! Grab that below if you haven't yet. 👇

Download the Kubernetes 101 eBook

This week, Magalix CEO Mohamed Ahmed covered Serverless Architectures with a focus on Kubernetes. Check that and other articles out below!


Intro to Serverless Architecture

Since its inception, cloud computing has been designed to solve one major problem: scalability. The number of people accessing online services (web applications, mobile APIs, etc.) keeps growing rapidly. About a decade ago, engineers started turning to the microservices architecture: breaking up large, complex applications into small, atomic, independently deployable components.

Implementing Proxy Server in Kubernetes with Sidecar


A sidecar is a seat attached to a bicycle or motorbike so that it runs on three wheels. The sidecar is often used to carry a passenger or equipment, and it has many uses in sports as well as the military. Wherever it is used, the concept is the same: an object that is attached to another and thus becomes part of it. The auxiliary object's main purpose is to aid the main one.
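
In Kubernetes terms, a sidecar is simply an extra container running alongside the main one in the same Pod. A minimal sketch of the pattern, using an nginx container as the reverse proxy (the names and images here are illustrative, not from the article):

```yaml
# A Pod with a main app container plus an nginx reverse-proxy sidecar.
# Both containers share the Pod's network namespace, so the sidecar
# can reach the app on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app               # the main container
    image: my-app:1.0       # hypothetical application image
    ports:
    - containerPort: 8080
  - name: proxy-sidecar     # the sidecar: receives traffic, forwards to the app
    image: nginx:1.25
    ports:
    - containerPort: 80
```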


From the K8s Community

Why Those Gaps in Kubernetes Are Really a Good Thing

Kubernetes allows users to choose from an open ecosystem of application management and infrastructure options, and under the stewardship of the Cloud Native Computing Foundation (CNCF), it has become the new Linux.

Read More...

Securely Access AWS Services from Google Kubernetes Engine (GKE)

It is not uncommon for an application running on Google Kubernetes Engine (GKE) to need access to Amazon Web Services (AWS) APIs. Maybe it needs to run an analytics query on Amazon Redshift, access data stored in an Amazon S3 bucket, convert text to speech with Amazon Polly, or use any other AWS service. This multi-cloud scenario is common nowadays, as companies work with multiple cloud providers.


Introducing NodePort Service in Kubernetes

Kubernetes revolves around Pods, so you have to know what they are. A Pod is the runtime environment in which we deploy applications, and the atomic unit of scheduling in Kubernetes: one or more containers deployed together on a single host. We will see how Pods are deployed and scaled inside a Kubernetes cluster, and then cover the NodePort Service, running containers in Pods, and connecting to running containers in Kubernetes.
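
As a taste of what the article covers, a NodePort Service opens the same port on every node in the cluster and routes traffic to matching Pods. A minimal sketch (the names, labels, and ports here are placeholders, not from the article):

```yaml
# A NodePort Service: reachable from outside the cluster at
# <any-node-ip>:30080, forwarding to Pods labeled app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app          # matches the target Pods' labels
  ports:
  - port: 80             # Service port inside the cluster
    targetPort: 8080     # container port on the Pod
    nodePort: 30080      # port opened on every node (default range 30000-32767)
```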

Read More...

Using Workload Identity to Handle Keys in Google Kubernetes Engine

Workload Identity is a modern way to provision credentials for pods running on Google Kubernetes Engine. It allows individual pods to use a service account with a suitable set of permissions, without manually managing Kubernetes secrets. In this article, we describe Workload Identity, compare it to other approaches, and show a real-world example of how to configure a Kubernetes cluster with Workload Identity enabled.
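
The heart of the setup is linking a Kubernetes service account to a Google service account via an annotation, so pods using the former automatically get the latter's permissions. A sketch of that piece, with placeholder account and project names (not taken from the article):

```yaml
# Kubernetes service account annotated to impersonate a Google service
# account under Workload Identity. The KSA, GSA, and project names below
# are placeholders; an IAM policy binding between the two accounts is
# also required.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com
```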

Read More...

If there is something you want us to include in a newsletter, please send it to

Ready to understand more about your K8s clusters? Check us out at the Azure and GCP Marketplaces below.

Azure Marketplace | GCP Marketplace

Find us on GitHub