February 27, 2018 in Training by Ell Marquez (5 minute read)
Is there a hotter technology than containers right now? Companies are incorporating containers into their cloud plans more than ever before, but there are still a lot of folks who are unsure how to use them effectively.
In fact, the more I talk about containers with others, the more I realize what we really need is a solid, real-world way to understand containers, their uses and their benefits.
That's what we offer with Rackspace's container training webinars. In the first, Introduction to Containerization, you'll come away understanding the evolution of containerization technology, virtualization and Linux containers, and you'll get an introduction to Docker. We'll also lay the groundwork for the basics of Kubernetes.
For those who already understand the basics of containerization, our follow-up course, the “Hello World” of Kubernetes webinar, introduces the common building blocks of Kubernetes and how its objects work together. By the end of this course, students will be able to install and run a simple application in Kubernetes.
If you're ready to prove that you have the skills needed to be a Kubernetes administrator, spend some time at Rackspace in our hands-on Certified Kubernetes Administrator Preparatory course, where you can test your skills managing a Kubernetes cluster in our lab environments.
Read on for a promo code to take this training for FREE during the entire month of March.
Most Linux administrators use Linux containers without ever thinking about the technology behind them. How they work is never a question we stop to ask; all we really care about is that they do work when we need them.
Then I was asked to help explain Kubernetes. That's simple, right? Container orchestration. Except I had an audience who greeted that answer with blank stares. So my next answer was, “Well, it helps organize your Docker containers across your environment.” This was followed by the next logical question: “But what's a Docker container?” I answered the way no one ever should: “Well, you know, Linux containers.” More blank stares.
Oops. I had violated one of my basic rules: never assume what someone knows. The technology sector is so vast that no one knows everything — and if they claim they do, it just means they haven't been tackling anything new recently.
I started asking others for their explanations, and their answers pretty much lined up with mine. I realized everyone was using overly technical jargon in their explanations. If we can't explain containers in simple language, maybe we don't know the topic as well as we think we do. So I started this journey just like I was taught in elementary school: in order to understand something you must know the who, what, when, where and why.
Who:
Containers. Since we are talking about technology, I'm going to take some creative liberties here, as containers could be considered a “what” instead of a “who.” When I think about containers, I think about Tupperware. Actually, I think of the cheap version of Tupperware you just throw away when you're done with it. But as it turns out, when it comes to Linux containers, this definition is actually not too far off the mark.
What:
Linux containers are an approach to operating system virtualization. But what does that even mean? Well, Linux containers might have more in common with your mama's Tupperware than I originally thought. Think about it: if you want to send a meal to someone, what's the easiest approach? Packing up a grocery bag with a whole head of lettuce, entire jars of spices and a set of instructions? Or planning ahead, measuring the ingredients and cooking the meal before shipping it off, ready to be enjoyed?
When we want to send an application from development to production, we don't want to have to spend the time building out a whole new environment when someone's already done the work once. Containers allow us to simply put the files our application needs into our virtual Tupperware, then ship them off ready for use.
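To make that concrete, here's a minimal sketch of what that virtual Tupperware can look like in practice: a Dockerfile for a hypothetical Python web app. The base image, file names and port are illustrative assumptions, not a prescription.

```dockerfile
# Start from a small base image instead of shipping a whole operating system.
FROM python:3-slim

WORKDIR /app

# Copy in only the files the application actually needs.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Define how the packaged application runs.
EXPOSE 8000
CMD ["python", "app.py"]
```

Build the image once and it becomes the pre-cooked meal: everything the application needs travels with it, and whoever receives it only has to run it.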
When:
This question was the one I enjoyed answering the most, because it gave me an excuse to take a journey back to 1979, when chroot gave Unix processes their first isolated view of the filesystem and the story of containers began.
Where:
We could be running Linux containers to debug provisioning scripts. We could use LXD, an open source container manager built on top of Linux containers (LXC), in our data center as part of our OpenStack deployment. Or we could have a full Kubernetes deployment running across multiple Raspberry Pis, sitting on a shelf above our desks. That's the great part about containers: the “where” is really only limited by our imaginations.
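As a rough sketch of that first use case, here's how debugging a provisioning script in a throwaway container might look with LXD (the container name and script are hypothetical):

```bash
# Launch a disposable Ubuntu container to act as a clean test machine.
lxc launch ubuntu:18.04 provision-test

# Copy the provisioning script in and run it inside the container.
lxc file push provision.sh provision-test/root/provision.sh
lxc exec provision-test -- bash /root/provision.sh

# Once the script behaves, throw the container away.
lxc delete --force provision-test
```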
Why:
Containers allow us to build our applications and ship them off containing only the files, binaries and libraries the application actually needs. This offers huge advantages in application development: with containers, it's suddenly easy to move an application from a development environment to production. Containers also let us isolate workloads from one another so we can make the best use of our environments. After all, you wouldn't send off a salad and a soup in the same container, would you?
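Sticking with the hypothetical Dockerfile from the “What” section, the build-and-ship workflow might look roughly like this (the image tag and registry address are placeholders):

```bash
# Build the image once, in development.
docker build -t myapp:1.0 .

# Push it to a registry so production pulls the exact same image.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0

# Run it anywhere. Each container gets its own isolated filesystem and
# process space, so the soup and the salad never share a bowl.
docker run -d --name myapp-web -p 8000:8000 registry.example.com/myapp:1.0
```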