Docker nginx-proxy round robin…


So we use Docker for container services at work. One container that is part of our ‘dev’ toolset is an nginx reverse proxy bound to port 80. It allows us to run many projects at once using hostnames and port 80; just like in production.

So we are chugging along and I start an application’s HTTP service. Hit the service in the browser: all good. Reload: broken. Reload: good. Reload: broken. I am all like, wait a minute, that’s not cool. After about 5 minutes of debugging we make a realization: I have two instances of the HTTP service running.

Turns out the nginx reverse proxy round-robins requests when more than one instance of a service is running.
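For context, here is a minimal sketch of that kind of setup (service and image names are hypothetical; jwilder/nginx-proxy is a common image for this pattern, and it routes requests by the VIRTUAL_HOST environment variable):

```yaml
# docker-compose.yml sketch; names are hypothetical.
version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # lets the proxy watch the Docker API for containers to route to
      - /var/run/docker.sock:/tmp/docker.sock:ro
  app:
    image: my-http-service       # hypothetical application image
    environment:
      - VIRTUAL_HOST=app.dev.local
```

With two containers carrying the same VIRTUAL_HOST (say, a second instance started by hand), the generated nginx upstream contains both, and requests alternate between them; hence good / broken / good / broken on reload.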

*Note: This was all done without Docker Swarm enabled. Just plain docker-compose and docker run were used.

Is the argument of ‘application portability’ a valid one for containerization?

Preface: I am a huge supporter of containerized distributed applications. It gives me all sorts of nerd good feelings.

One of the biggest arguments for containerization of applications is that the application becomes portable: it no longer cares what it is running on, nor where, nor how many instances there are. A Windows server running a Unix FORTRAN application? No problem! A Windows app running on a CentOS host? Done it! And while these are cool, nerd-awesome configurations, how often does an application actually change its root host environment? If an application was/is made with .NET, it is to leverage the advantages .NET has over other options. If Rails, it’s for the advantages; if PHP, again, for the advantages over other offerings. Moving away from those advantages negates the reason for selecting the solution in the first place.

On a related note: are we not simply replacing one type of vendor lock-in with a lock at another layer of abstraction?

Out-of-the-box performance: PHP + PDO + MariaDB

Backstory:

Another week, another evening at the pub with some friends and colleagues. Somehow or other we got on the topic of database insert performance and how long it would take to reach the 32-bit max integer; that being 2.147-something billion. I wagered that the max signed int could be reached relatively quickly; my colleague, on the other hand, said ‘no no no; it would take hours. Days even’. And so, a wager was born.

The requirements:

PHP + PDO + a SQL database; all default configurations. No editing php.ini to allow higher memory usage, no disabling the *SQL disk flush in my.cnf, etc. Raw install, logic, go for the gold.

The process:

On the local development machines the container service manager is capped at 7 of the 2.4 GHz CPU cores and 15 GB of memory. For disk we run 256 GB SSDs, desktop models; nothing fancy.

On that hardware I set up a PHP 7:latest service and a MariaDB:latest service, then linked them. From there it was a matter of connection credentials and increasing the batch insert count until it was close to, but not over, the default memory usage per thread. Then, how to start up multiple threads? Easy enough; bash helped out there. So using bash I spun up 10 threads and let the process run for 1 to 2 minutes.
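The bash fan-out looked roughly like this (a sketch only; in the real run each worker was a PHP script doing the batched INSERTs, stubbed out here with a shell function so the pattern stands alone):

```shell
#!/usr/bin/env bash
# Spin up N background workers and wait for all of them to finish.
# In the real run the worker body was: php insert.php <batch args>
worker() {
  echo "worker $1 done"   # stub standing in for the PHP batch-insert script
}

THREADS=10
for i in $(seq 1 "$THREADS"); do
  worker "$i" &           # launch each worker in the background
done
wait                      # block until every background job exits
```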

Getting the max value after the given time frame, I was able to extrapolate how long it would take to fill the 2.14 billion rows.
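The extrapolation itself is simple rate math (the row count below is a hypothetical sample chosen only to illustrate; the real measurements are in the repo):

```shell
#!/usr/bin/env bash
# Extrapolate time-to-max-int from a measured insert window.
ROWS=23454000       # rows inserted during the window (hypothetical sample)
WINDOW=60           # measurement window in seconds
MAX_INT=2147483647  # signed 32-bit max

awk -v r="$ROWS" -v s="$WINDOW" -v m="$MAX_INT" 'BEGIN {
  rate = r / s
  printf "%.0f rows/s -> %.3f hours to reach max int\n", rate, m / rate / 3600
}'
```

A sustained rate in that ballpark (~390k rows/s) works out to roughly an hour and a half to fill the table.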

The Result:

At current, the fastest run extrapolates to 1.526 hrs to go from 0 to 2.14 billion row inserts. I know we can get faster, but I ran out of time today.

The Source:

If you are interested in the code / stats / etc., the repo is here: https://github.com/davidjeddy/full-up-the-db. Feel free to fork / PR the repo if you can get a faster speed. It would be really awesome to show the 32-bit max int can be reached in 5 minutes or less. (Remember, no editing of configurations.)

Another month, another set of continuing education courses.

A cornerstone of the IT/dev career field is this: ‘never stop learning’. I like to expand on it and include ‘when you stop learning, you start becoming worthless’. While some disagree with this, it has served me well. As such, last night my Udemy collection increased by another 8 courses, mainly focused on AWS associate certification training, but also a few on container / CI / CD services.

Hoping to sit down for the exams before the end of the year; here’s hoping ^_^.

When removing orphans is bad…

So Docker creates containers; when you stop a container it still exists, it is simply stopped.

And then we create more containers…and more containers…and more containers…until we look at docker ps -a and see dozens. As such I recently began using the --remove-orphans flag of docker-compose.

As the project progressed it got to the point where we were ready to integrate our micro-services. So I started up the API server, then the user frontend…and the server exited with code 137. Huh? Checked ports: OK. Checked logs: nothing. Checked Google / GitHub / Stack Overflow: not much explaining why starting one service would cause another to exit with code 137 (a SIGKILL).
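For reference, exit code 137 is 128 + 9, meaning the process was killed with signal 9 (SIGKILL). You can reproduce the code outside of Docker entirely:

```shell
#!/usr/bin/env bash
# Exit code 137 = 128 + 9: the process received SIGKILL.
sleep 30 &            # some long-running process
pid=$!
kill -9 "$pid"        # hard-kill it, as Docker does on forced removal
wait "$pid"           # wait reports the job's exit status
code=$?
echo "exit code: $code"
```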

Come to find out, when you start a stack based on a docker-compose.yml, docker-compose uses the directory name as the project name. Since we try not to clutter the root of projects, both/all projects have a structure of ./{app root}/docker/{service name}; so every stack ended up under the same project name, ‘docker’. Soooo --remove-orphans was removing the services not attached to the YML file of the current application. Essentially terminating everything not itself.

Well, I thought, we can name containers, so we should be able to name the project, right? Nope. As of 1.13, networks, containers, and swarm services can be named (iirc) in the docker-compose.yml configuration. But a project can only be named on the CLI when the stack is brought up.
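If I recall correctly, the two ways to pin the project name are the -p flag, or the COMPOSE_PROJECT_NAME variable (which docker-compose also reads from a .env file next to the YML). A sketch, assuming a hypothetical project named ‘frontend’:

```shell
# Pin the project name explicitly so same-named directories don't collide:
docker-compose -p frontend up -d

# or via the environment (also honored from a .env file):
export COMPOSE_PROJECT_NAME=frontend
docker-compose up -d
```

With distinct project names per stack, --remove-orphans only touches containers belonging to that project.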

:S there went 3 hours of my life under the assumption of consistency from a tool.

TL;DR: Never assume your toolset operates consistently, always verify.