Introduction and chatter.
I’ve been using Docker as a “hobbyist” for a few years now. I use it for everything from hosting this blog to testing out new ideas I plan to use at work. Using Docker to test new ideas is a good thing, especially if the tests have the potential to destroy something. Who cares if I screw up an ephemeral container that I can always rebuild?
Of course Docker has a serious side, and while so far I haven’t needed it in a professional setting (except to test Puppet modules, but that’s a topic for another post), learning Docker can’t hurt professionally. Having one’s own Docker environment is a good thing and a great way to get comfortable.
I am not going to go into great detail about setting up a hosting environment in this post. Just following my instructions, you will actually end up setting one up on your desktop or laptop. What I am talking about is hosting one on a remote machine in a data center, and there isn’t much difference; you can pretty well read between the lines to figure out the rest.
What You’ll Need
- An internet connection to download packages and scripts through
- A fairly “beefy” laptop or desktop, in this case a machine with enough memory (8 GB+) to support running your containers.
- docker-ce package
- docker-compose script
- docker-machine script
The docker-ce package is the main thing needed here. It has all the software infrastructure to create containers on your local host and the commands needed to create containers remotely.
The docker-compose script provides a layer on top of docker (and I’ll get into more of this in another post) that allows you to describe an entire infrastructure in one file. For instance, for a three-tier application you could define the build of a web server, an application middleware server, and a database server as a unit, and docker-compose would handle the tasks associated with spinning them up.
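To make that concrete, a hypothetical docker-compose.yml for such a three-tier stack might look something like this. The image names, ports, and password here are illustrative placeholders, not a working configuration:

```yaml
# Hypothetical three-tier stack: web, app, and database defined as one unit.
version: "3"
services:
  web:
    image: nginx:latest           # front-end web server
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: my-middleware:latest   # placeholder for your application image
    depends_on:
      - db
  db:
    image: postgres:10            # back-end database
    environment:
      POSTGRES_PASSWORD: example  # placeholder; don't ship this
```

With a file like that in place, `docker-compose up -d` would spin up all three containers as a unit, and `docker-compose down` would tear them down again.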
The docker-machine script provides an interface to redirect docker and docker-compose commands to a remote Docker hosting machine.
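The redirection works through environment variables: `docker-machine env <name>` prints a set of export lines, and eval-ing them points your local docker client at the remote host. A minimal sketch, where the machine name and address are made up and the export lines are simulated rather than fetched from a real machine:

```shell
# On a real setup you would run:
#   eval $(docker-machine env my-remote-host)
# Here we simulate the export lines that docker-machine env prints
# (the host name and IP below are invented for illustration):
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://203.0.113.10:2376"
export DOCKER_MACHINE_NAME="my-remote-host"'

eval "$env_output"

# From here on, plain docker commands in this shell talk to the remote host.
echo "docker client now pointed at: $DOCKER_HOST"
```

Run `docker-machine env -u` (and eval that) when you want your docker commands to go back to the local daemon.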
Most, if not all, of these instructions are geared to working on Linux, and Ubuntu in particular. My hosted Docker host machine is running Ubuntu 18.04 LTS at the moment and seems to do well. For those on the spawn of the Evil Empire there are of course differences, but I don’t do Windows, so those instructions won’t be here.
If you want to jump right in and get a Docker hosting system there are a few ways of doing that.
Build your own
Got a nice server of your own? Are you renting one from any of the myriad hosting providers that exist? No sweat: follow the instructions for installing docker-ce, add docker-machine setup to your list of short projects, and away you go.
Use a hosting provider
Digital Ocean, Google, Amazon, and Microsoft Azure all offer hosted Docker solutions. I’m sure there are more vendors in this space, but those are the ones I am aware of and have had experience with. So far I am most happy with Digital Ocean’s offering, and I use the heck out of it. Again, you’ll want docker-machine set up on your laptop or desktop to interact with it. Each provider has its own approach to setting up the credentials you need to access your Docker machine.
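As a sketch of what that looks like on Digital Ocean: docker-machine ships a digitalocean driver that takes an API token and provisions a droplet as a Docker host. The token variable and machine name below are placeholders, and the exact droplet defaults depend on the docker-machine version:

```shell
# Provision a Docker host on Digital Ocean (requires a valid API token
# from the DO control panel; $DO_TOKEN and docker-sandbox are placeholders):
docker-machine create \
    --driver digitalocean \
    --digitalocean-access-token "$DO_TOKEN" \
    docker-sandbox

# Point the local docker client at the new host:
eval $(docker-machine env docker-sandbox)
```

The other providers follow the same pattern with their own drivers and credential flags.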
Digital Ocean also has support for Kubernetes coming soon. Just saying…
Let’s Get to Work
Again, I’m covering how to do this on Ubuntu. First you’ll need to point apt at the official Docker repositories.
As root, execute the following:
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
$ add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
$ apt-get update
```
This imports the GPG key for the docker-ce package and installs the apt configuration needed to find the docker-ce repository. Now we want to install docker-ce along with its dependencies.
NOTE: if you have a previous version of docker, docker-ce, or docker-engine on your machine, it is advisable to uninstall it before proceeding any further.
```
$ apt-get install \
    apt-transport-https \
    ca-certificates \
    gnupg-agent \
    software-properties-common
$ apt-get install docker-ce docker-ce-cli containerd.io
```
Now that you have docker-ce installed, you might want to validate that everything is working for you:
```
$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
```
Now it is time to install docker-compose.
Become root and perform the following:
```
$ curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose
$ chmod 755 /usr/local/bin/docker-compose
```
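If you’re wondering about the `$(uname -s)-$(uname -m)` bit: the release URL embeds your OS and CPU architecture, which is why the same one-liner works on both Linux and macOS. A quick sketch of the expansion, using the version pinned above:

```shell
# Build the release URL the same way the download command does; uname -s
# gives the OS name and uname -m the machine architecture.
url="https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)"
echo "$url"
# On a 64-bit Linux box this ends in "docker-compose-Linux-x86_64".
```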
That’s it; it really is that simple.
If all you want is to run Docker locally, you are done. However, if you are planning on using Docker against a remote host, you’ll also need docker-machine. Very simply, become root and run the following:
```
$ base=https://github.com/docker/machine/releases/download/v0.16.0 && \
    curl -L $base/docker-machine-$(uname -s)-$(uname -m) > /tmp/docker-machine && \
    install /tmp/docker-machine /usr/local/bin/docker-machine
```
In my next article I’ll tie all this together in a neat package for you, so you can start spinning up containers to your heart’s content.