First steps with Azure Container Service
A few weeks ago, Microsoft released a new PaaS functionality, which is not really PaaS but a hybrid IaaS where you don’t own anything yet keep some freedom. Let’s take a look at it.
First of all, go to your Azure tenant and search the marketplace for “Azure Container Service”
After that, you will land on a new page where you can choose Azure Container Service.
The next screen explains what you get with this service, and at the bottom of the page sits the much-wanted Create button. Click it!
Now you can start the creation of your Docker cluster. But before that, I suggest you go to your Bash shell and generate a public/private key pair, because you can’t use a simple password to log in to your Docker cluster.
Building a public/private key pair for Azure is as easy as a single command:
ssh-keygen -b 2048
The command will prompt you for a file location and a passphrase.
Now you just have to copy/paste the content of the /home/fabien/.ssh/id_rsa.pub file into the correct field in the Azure portal.
cat /home/fabien/.ssh/id_rsa.pub
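If you’d rather keep a dedicated key pair for this cluster instead of your default one, you can name the output file explicitly. This is just a sketch; the path and file name below are illustrative.

```shell
# generate a dedicated 2048-bit RSA key pair (path is illustrative);
# you'll be prompted for a passphrase, just like with the default file
ssh-keygen -t rsa -b 2048 -f ~/.ssh/acs_rsa

# print the public half, ready to paste into the Azure portal
cat ~/.ssh/acs_rsa.pub
```

Remember to pass the same `-i ~/.ssh/acs_rsa` to ssh later if you go this route.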
Fill in every field with your values; for now, you have to create a new resource group to host your cluster.
The next screen asks which orchestrator you want. You have a choice between Swarm and Mesos/Marathon. In this post we’ll talk about Swarm, but to be clear, Mesos/Marathon is also a big name among container orchestrators.
On the next screen the fields are self-explanatory: how many agent nodes you want in your cluster, and how many masters you want to manage it. You can only choose an odd number of masters, which helps avoid any SPOF. Pay attention to the size of the VMs that will host the Docker Swarm nodes, and to the DNS prefix, for your governance plan.
Click OK once the validation step has passed, and let’s look at the next screen carefully. Because this is a PaaS, you need to purchase a few things. Azure itself doesn’t really know what you’ll be charged for, as many Azure services are used under the hood…
So let’s click Purchase and get ready to rumble! After a few minutes, your Swarm cluster is ready. Let’s take a look at what we have in our newly created resource group; that’ll help us understand how this PaaS service is built and what can be done (or not).
So let’s get into details, we have:
- An availability set for the Swarm masters. It means your masters are spread across multiple racks in Azure datacenters. Grouping VMs in an availability set tells the Fabric Controller which hosts the VMs are running on; Fabric then knows your machines serve the same goal, and updates are rolled out smoothly without bringing down all VMs at the same time.
- Swarm agents are hosted in a virtual machine scale set, but be careful: autoscaling is OFF. It also means you can’t connect to those VMs using SSH… (in fact you can, from your master).
- Two Azure load balancers with public IPs, to route traffic separately to agents and masters. By default, ports 80, 8080 and 443 are open.
- 1 Virtual Network, with two subnets:
- one for masters
- another for agents
- Storage for each agent and master.
I can’t understand how Microsoft can be so horrible with the naming convention of all the components in this resource group…
Let’s get back to our Bash shell and connect to our Swarm master. To do this you must open an SSH tunnel to your master. Just keep in mind that the master’s hostname is the one attached to the Azure load balancer.
Now that you have the DNS name, let’s open this tunnel in Bash.
ssh -L 2375:localhost:2375 -f -N <user>@<dns>.azure.com -p 2200
You’ll be asked to enter the passphrase you chose during key generation. To verify that your tunnel is correctly established, use the command below; you should see only one line in response.
netstat -ano | grep 127.0.0.1:2375
Now we’ll have to install Docker and docker-compose. Nothing difficult here, but we are not totally in a PaaS solution ^^ My Linux machine is a Debian 8 Jessie, so I followed the installation process described in Docker’s documentation; I won’t repeat it here, just follow their guide. Then run the command below to validate that Docker Engine is correctly installed on your host.
docker run hello-world
To get the Docker client to connect to the Swarm master port forwarded through the tunnel, you must declare an environment variable:
export DOCKER_HOST=:2375
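An empty host part in DOCKER_HOST means localhost, so the client ends up talking to the tunnel on port 2375. If you prefer not to export the variable, the same thing can be done per command with the `-H` flag:

```shell
# DOCKER_HOST with an empty host part targets localhost on the given port
export DOCKER_HOST=:2375

# equivalent one-off form, without the environment variable
docker -H localhost:2375 info
```

The `-H` form is handy when you also manage a local Docker daemon from the same shell and don’t want the variable to leak into other commands.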
So now, if you run the command below, you will see your Docker Swarm cluster.
docker info
A lot of useful information here:
- Role: primary. Means we are on the master that holds the token (hopefully the only primary master).
- Strategy: spread. Means that each time we ask our cluster to schedule another container, it will be started on the next node, round-robin style. The other strategies are binpack, where Swarm fills up the first host before moving on to the second, and so on; and random, which… I won’t explain this one 😉
- Filters: Divided in two categories, node filters and container configuration filters
- Node filters are:
- constraint
- health
- containerslots
- Container configuration filters are:
- affinity
- dependency
- port
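As a sketch of how these filters are used on a Swarm classic cluster (the node and container names below are illustrative; remember that on Azure Container Service you can’t relabel nodes):

```shell
# constraint (node filter): pin a container to a specific agent by node name
docker run -d -e constraint:node==swarm-agent-0 nginx

# affinity (container configuration filter): co-locate a new container
# with an already-running one
docker run -d --name frontend nginx
docker run -d -e affinity:container==frontend nginx
```

Swarm classic reads these `constraint:` and `affinity:` expressions from the container’s environment and uses them when picking a node.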
It doesn’t amount to much, because in Azure Container Service you currently can’t change labels on nodes… I’ll cover this in another post.
To get our Swarm cluster to scale as we want, we now need docker-compose installed. To do this, I suggest you follow Docker’s documentation.
Docker Compose uses a “Compose file”, a YAML file that defines services, networks and volumes for Docker. A service holds the configuration applied to each container started for that service, much like the arguments of a “docker run” command. The same goes for “docker network create” and “docker volume create”. If you need more information, you can follow the Compose file reference link.
I’ll show you the one built for a demo at SII.
version: '2'
services:
  web:
    image: macksize/web
    ports:
      - "80:80"
  worker:
    image: macksize/worker
  redis:
    image: redis
networks:
  default:
    external:
      name: my-net
Save it as docker-compose.yml on the machine where docker-compose is installed and where your SSH tunnel is established, and from that directory run these commands.
docker network create --driver overlay my-net
docker-compose up -d
The first command creates an overlay network so our containers can communicate with each other, and the second starts one instance of each service described in the docker-compose.yml file.
First, Docker pulls each image on each node, and then Swarm spreads the services among all hosts.
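To see where Swarm placed each container, you can list them; with Swarm classic, the full container names are prefixed with the node they landed on (names shown in comments are illustrative):

```shell
# list the containers started by compose, one per service for now
docker-compose ps

# the raw docker view shows Swarm-qualified names,
# e.g. swarm-agent-0/myproject_web_1
docker ps
```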
Now type
docker-compose scale web=3
And browse to the agents’ Azure load balancer DNS name… you should see something like this 🙂
Now play with docker-compose scale worker=xx to scale your Swarm cluster up and down, but remember that you can’t run more than three web containers, as we mapped port 80 on the host because we need to go through the Azure load balancer.
I hope you liked this post.
Thanks Christian Tritten for the Docker Scale demo web site !
Edit: to manage your Swarm cluster, you can also connect directly to your master using SSH. With this approach, you can SSH into all your nodes! Thanks Julien Corioland for the information.
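A quick sketch of that direct approach (the agent hostname below is illustrative; use the same user and DNS name as for the tunnel):

```shell
# SSH to the master through the load balancer's NAT port
ssh -p 2200 <user>@<dns>.azure.com

# from the master, hop to any agent on the private subnet
ssh swarm-agent-0
```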