Switching from traditional VMs to containerized applications can bring major gains in efficiency and reliability. But if you handle your containers incorrectly, you’ll throw all those benefits out the window – and frustrate your users in the process!
Here are three of the most common mistakes many organizations make when they implement Docker containers for the first time.
Storing important data in containers and registries
While containers are fine for holding non-sensitive data during an individual session, they’re not designed to store data that needs to persist across sessions, or any information that poses a security risk. If you store data within the container itself, you run the risk of losing it when that container is removed – and if you store secrets such as passwords in environment variables or container registries, you run a significant risk of being compromised, as Vine was in 2016.
Instead of storing your containerized data locally, it’s always a better idea to keep secrets, critical data – and in fact, all data other than container images and temporary files – in the cloud, and fetch it over SSH on an as-needed basis. That’s the only surefire way to keep private information secure, while also keeping your important data safe.
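As a sketch of the build-time side of this problem: BuildKit’s secret mounts expose a secret to a single RUN step without writing it into any image layer, unlike a build ARG or ENV. This assumes BuildKit is enabled; the secret id `api_token` and the commented-out variable names here are hypothetical.

```dockerfile
# syntax=docker/dockerfile:1

FROM alpine:3.19

# BAD: a build arg or ENV bakes the secret into the image history,
# where anyone with the image (or registry access) can read it:
# ARG API_TOKEN
# ENV API_TOKEN=${API_TOKEN}

# BETTER: mount the secret only for the duration of this one RUN step.
# It is readable at /run/secrets/api_token during the step, but is
# never written into a layer of the final image.
RUN --mount=type=secret,id=api_token \
    API_TOKEN="$(cat /run/secrets/api_token)" && \
    echo "use the token here to fetch private assets"
```

The secret is supplied at build time with `docker build --secret id=api_token,src=./api_token.txt .`, so the token lives on the build host, not in the image.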
Running too many services, especially in the same container
One common Docker newbie mistake is to treat a container as if it were a conventional Linux environment, and practically set up an entire OS inside it. Even in less extreme cases, many novices make the mistake of hosting several (seemingly) necessary services inside the same container. This is a major mistake, because every additional service widens your attack surface.
As Docker’s best practices document explains, it’s best to stick to the rule of “one process per container.” Even when it’s genuinely necessary to run two or more processes in the same container – for example, instances of cron and syslog – it’s better to let a baseline Linux image handle those services for you.
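In practice, “one process per container” means the image defines exactly one foreground process and delegates everything else. A minimal sketch (the filenames and base image here are illustrative, not from the original article):

```dockerfile
FROM python:3.12-slim

WORKDIR /app
COPY app.py .

# One foreground process per container: just the application.
# Scheduled jobs, log shipping, and other supporting services belong
# in separate containers or in the host/orchestrator, not bolted on
# here via cron or syslog running alongside the app.
CMD ["python", "app.py"]
```

If the container exits, the orchestrator knows exactly which service failed – something you lose as soon as a supervisor inside the container is babysitting several processes at once.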
Improperly handling Docker's build cache
It won’t take long to notice problems with the Dockerfile’s build cache, because your container images will suddenly start taking a very long time to build. If you’re using instructions like ADD, VOLUME, or RUN in the wrong places, they may be invalidating your cache. It’s also important to keep in mind that Dockerfile instructions execute sequentially, so a change to one layer invalidates every cached layer after it.
To keep your cache reasonably sized without losing critical data, try chaining related shell commands into a single RUN instruction, and ordering your Dockerfile deliberately so that the instructions most likely to change come last. Purging the cache should be a last-resort measure; a bit of careful selective invalidation can help you regain control of a troublesome cache when necessary.
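Both techniques fit in a short sketch: dependency installation is chained into one RUN layer and ordered before the frequently changing application source, so routine code edits never invalidate the expensive install step. The filenames (`requirements.txt`, `app.py`) are illustrative.

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Copy only the dependency manifest first: this layer, and the
# expensive install below it, stay cached for as long as
# requirements.txt itself is unchanged.
COPY requirements.txt .

# Chain related commands into one RUN so they form a single
# cacheable layer instead of several fragile ones.
RUN pip install --no-cache-dir -r requirements.txt

# Application source changes often, so it comes last: edits here
# invalidate only this layer, not the dependency install above.
COPY . .

CMD ["python", "app.py"]
```

Reversing the order – `COPY . .` before the install – would force a full dependency reinstall on every source change, which is exactly the slow-build symptom described above.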
Steer clear of these three common mistakes, and you’ll be well on your way to providing fast, efficient, reliable containerized apps for your users. But we know it’s not always easy to avoid mistakes like these – which is why we created Magalix, to simplify the whole containerization process, from end to end. Get in touch, and find out how we can make your life easier.
Magalix is the autopilot of the cloud. It runs on top of cloud container providers, helping companies solve the classic problems of complexity and overspending in cloud infrastructure. Magalix makes cloud applications self-healing and able to run anywhere, delivering maximum ROI with minimum maintenance.
“After years spent managing large infrastructure teams, we quickly realized the seriousness of over-provisioning. It results in overspending, lowered team productivity, and reduced ROI. Existing tools simply fail to solve these problems. That’s when we realized we needed to create a new solution.” —Mohamed Ahmed, CEO