-
Last week we delved deeper into Terraform, one of the newest projects developed by HashiCorp. We researched what it is all about, what its main advantages are, and how we could use it to set up our own infrastructure. Our goal with Terraform is to easily deploy our infrastructure and orchestrate our Docker environment.
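To give an idea of the shape this takes, a minimal Terraform configuration using the Docker provider might look like the sketch below. Attribute names differ between provider versions, so treat this as an illustration rather than a drop-in config.

```hcl
# Sketch only: Docker provider attribute names vary across versions.
provider "docker" {
  host = "unix:///var/run/docker.sock"
}

# Pull the nginx image on the Docker host
resource "docker_image" "nginx" {
  name = "nginx:latest"
}

# Run a container from that image, mapping port 80 to 8080 on the host
resource "docker_container" "web" {
  name  = "web"
  image = docker_image.nginx.image_id

  ports {
    internal = 80
    external = 8080
  }
}
```

Running `terraform apply` against a plan like this creates the image and container, and `terraform destroy` tears them down again.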
-
During week 7 & 8 at Small Town Heroes, we researched and deployed a centralized logging system for our Docker environment. We use Fluentd to gather the logs from all other running containers, forward them to a container running Elasticsearch, and display them using Kibana. The result is similar to the ELK (Elasticsearch, Logstash, Kibana) stack, only we use Fluentd instead of Logstash.
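As an illustration, a minimal `fluentd.conf` along these lines could accept logs forwarded by the containers and ship them to Elasticsearch. This is a sketch, not our exact config: it assumes the built-in `forward` input, the `fluent-plugin-elasticsearch` output plugin, and a placeholder host name for the Elasticsearch container.

```
# Accept logs forwarded by the containers (e.g. via Docker's
# fluentd logging driver) on the default forward port
<source>
  @type forward
  port 24224
</source>

# Ship everything to the Elasticsearch container;
# requires the fluent-plugin-elasticsearch output plugin
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>
```

With `logstash_format true`, Fluentd writes daily `logstash-*` indices, which is exactly the naming scheme Kibana expects by default.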
-
This week we researched and implemented a way to deploy all of our infrastructure containers automatically on booting or rebooting our CoreOS machine. By defining systemd units in our cloud-config document that will be supplied to our machine through user-data, we created an automatic and continuous Docker environment.
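To sketch the approach: a cloud-config document passed in through user-data can declare a systemd unit that (re)starts a container at every boot. The unit name and image below are hypothetical placeholders, not our actual units.

```yaml
#cloud-config
coreos:
  units:
    # Hypothetical unit: starts a Fluentd container at every (re)boot
    - name: fluentd.service
      command: start
      content: |
        [Unit]
        Description=Fluentd log collector container
        After=docker.service
        Requires=docker.service

        [Service]
        ExecStartPre=-/usr/bin/docker rm -f fluentd
        ExecStart=/usr/bin/docker run --name fluentd fluent/fluentd
        ExecStop=/usr/bin/docker stop fluentd
```

Because `command: start` is part of the unit definition, CoreOS brings the container up on first boot and on every reboot without any manual intervention.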
-
During our fourth week at Small Town Heroes, we were asked to dig deeper into benchmarking both the performance and the security of Docker containers.
This blog post will cover the performance part. We ran tests benchmarking our disk, CPU, and memory, as well as individual services.
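The post relies on dedicated benchmarking tools, but the idea can be sketched in a few lines of Python: time a CPU-bound loop and a synced disk write, run the script both on the host and inside a container, and compare the numbers. Function names and sizes here are illustrative, not the tooling we used.

```python
import os
import tempfile
import time


def bench_cpu(iterations=1_000_000):
    """Rough CPU benchmark: time a tight arithmetic loop."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start


def bench_disk(size_mb=16):
    """Rough disk benchmark: time writing size_mb MB and syncing to disk."""
    chunk = b"\0" * (1024 * 1024)  # 1 MB of zeroes
    start = time.perf_counter()
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    return time.perf_counter() - start


if __name__ == "__main__":
    # Run once on the host and once inside a container, then compare.
    print(f"cpu : {bench_cpu():.3f}s")
    print(f"disk: {bench_disk():.3f}s")
```

The `os.fsync` call matters: without it you mostly measure the page cache rather than the disk, and host and container numbers look deceptively identical.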
-
This week we got a request: what if the app in your container takes a while to start and shouldn’t be accessed yet?
In a development environment it doesn’t really matter if your applications take 2-3 minutes to load and the webpage isn’t accessible.
But when you’re patching live environments and your proxy forwards to a server that isn’t accessible, 2-3 minutes is quite a while.
-
In our third week at Small Town Heroes, we started setting up monitoring for our current environment. We use Datadog as a monitoring platform.
Datadog has a neat Docker integration where the Agent runs as an individual container on your Docker machine.
This container gathers metrics from the machine and from all containers running on it, and forwards them to the Datadog web interface.
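For reference, launching the Agent container follows the general shape below. The exact image name, mount points, and environment variables vary by Agent version, and the API key is a placeholder.

```
# Sketch: mounts give the Agent read-only access to the Docker socket
# and host metrics; <your-api-key> is a placeholder.
docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e DD_API_KEY=<your-api-key> \
  datadog/agent
```

The read-only mount of the Docker socket is what lets one Agent container discover and report on every other container on the machine.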
-
When we started using CircleCI for auto deployment, a problem arose. Every time code got pushed and deployed to our CoreOS server, new containers with new IP addresses were created. Because of this we had to go into the Nginx container to adjust the configuration by hand. This wasn’t very continuous, so we had to find a solution.
-
Our next step in learning Docker was setting up continuous integration. For this we used CircleCI and some other nifty features. This allowed the code of the other interns who were developing applications to be deployed automatically after they pushed it to GitHub.
-
To start getting familiar with Docker, we created a local environment consisting of two NodeJS containers, a Redis container as a database, and an Nginx container used as a load-balancer.
We based our setup on this sample workflow.
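A minimal Nginx configuration for load-balancing such a setup could look like the sketch below. The container names and port are hypothetical; the point is the `upstream` block, which Nginx round-robins requests over by default.

```nginx
# Hypothetical upstream: the two NodeJS containers, reachable by name
upstream node_app {
    server node1:3000;
    server node2:3000;
}

server {
    listen 80;

    # Round-robin incoming requests over the NodeJS containers
    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
    }
}
```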
-
Before beginning our internship at Small Town Heroes, we didn’t have any experience with Docker.
Since Rome wasn’t built in a day, we would learn this technology step by step, starting with the basics.