Tutorial: Using Docker_Puppeteer_Jest to execute headless Chrome end-to-end (user/acceptance) testing suites.

The problem:

We know that unit testing is an essential part of software engineering (at least we should all know that). Integration testing assures us that all the pieces work together properly. At the very top of the pyramid is end-to-end (sometimes called user or acceptance) testing. This is the test set that loads the application, clicks buttons, submits data, and reads data. In general, it acts like a user and assures us the application works as intended in the eyes of the user. Unfortunately, in order to emulate the user experience, archaic and often complicated systems were used. These systems were expensive to set up, costly to maintain, and fragile to run.

Herein I will show you, in fewer than 6 steps, how to use headless Chrome to emulate a user visiting a number of social media services and provide visual feedback as to what was rendered in the browser.

Pre-flight requirements:

Basic CLI / Terminal abilities


How to do it:

1) Download the image from Docker Hub via the CLI: `docker pull davidjeddy/docker_puppeteer_jest`

2) Clone the source repository so we can use the example test suites: `git clone git@github.com:davidjeddy/docker_puppeteer_jest.git`

3) Now let's change into the newly created directory: `cd docker_puppeteer_jest`

4) Finally, we execute the image: `docker run -t -v $(pwd):/app --name dpr --rm davidjeddy/docker_puppeteer_jest`. In this example we are mounting the code repository into the container at the /app directory.

5) If all went well, we should see something like the following in the terminal:
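The exact output depends on the example suites in the repository, but a passing Jest run ends with a summary along these lines (the test file and test names here are illustrative, not taken from the repo):

```
PASS  tests/example.test.js
  ✓ loads the page and captures a screenshot (1284 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        3.21 s
Ran all test suites.
```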

Congratulations, you have just executed your first user acceptance test suite using headless Chrome in a container.

To make it even easier, let's make an alias that executes the custom docker run command. Something like `alias dta='docker run -t -v $(pwd):/app --name dpr --rm davidjeddy/docker_puppeteer_jest'`. Now type `dta` and press enter.

Next Steps:

To map your own project into the container, replace `-v $(pwd):/app` in the docker run command with `-v {your project's absolute path}:/app`.
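For example, assuming your project lives at /home/you/my-app (a hypothetical path), the command becomes:

```
docker run -t -v /home/you/my-app:/app --name dpr --rm davidjeddy/docker_puppeteer_jest
```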

Under the hood:

Docker starts a container with Chrome installed; Puppeteer then starts a headless Chrome session. All of this is isolated from the host machine, as is the nature of containers. The Jest testing framework is then triggered and the test suites are auto-detected thanks to their directory location and naming scheme. Jest then executes the tests, providing terminal output and screen capture images to ./tests/_output/, which is volume-mounted to the host machine.
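To give a feel for what such a suite looks like, here is a minimal sketch of a Puppeteer-driven Jest test. The file name, URL, and screenshot path are illustrative assumptions, not taken from the repository:

```javascript
// tests/example.test.js -- a minimal sketch; names and URL are illustrative.
const puppeteer = require('puppeteer');

let browser;
let page;

beforeAll(async () => {
  // --no-sandbox is commonly required when Chrome runs inside a container.
  browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

test('loads the page and captures a screenshot', async () => {
  await page.goto('https://example.com');

  // Assert on something a real user would actually see.
  expect(await page.title()).toBe('Example Domain');

  // Persist visual feedback; the mounted volume makes it visible on the host.
  await page.screenshot({ path: './tests/_output/example.png' });
}, 30000);
```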



This process can be used with a number of frontend architectures: React, Vue, jQuery, static content, DOM manipulation, and any other material rendered in a client browser. The world, as it is said, is your oyster.

I hope you enjoy your acceptance testing using a headless browser and all the assurances that come with it. Hopefully you found this useful for increasing assurance that your changes get to production without negatively affecting the quality of your projects.

Run your end-to-end tests using headless Chrome; the Docker_Puppeteer_Jest Docker image is announced!

About a year ago @ibotpeaces and I sat down for a couple of hours to put together a Docker image with headless Chrome that we could use for end-to-end (user acceptance) testing. At the time, the tooling of both Docker and Jest was not at a place where we could get a POC (proof of concept) functional given the constraints of the process: being a container service, integrating easily into existing projects, and using a well-adopted JavaScript testing framework.

Fast forward to March 2018: not only has the containerization tooling advanced significantly, but so have the headless Chrome control systems. As such, I sat down once again to look into this tool chain. I am happy to announce the `Docker Puppeteer Jest` Docker image. As the name suggests, running the image spins up a headless Chrome instance, controlled by Puppeteer, that triggers Jest test suites, outputting both terminal responses and image captures if so instructed.

Check out the image on the Docker Hub or the repo on GitHub. Let me know what you think or if you find it useful. I’d love to hear from you.


Docker nginx-proxy round robin…


So we use Docker for container services at work. One container that is part of our 'dev' tools is an nginx reverse proxy tied to port 80. It allows us to run many projects at once using hostnames and port 80, just like in production.

So we are chugging along and I start an application's HTTP service. Hit the service in the browser: all good. Reload: broken. Reload: good. Reload: broken. I am all like, wait a minute, that's not cool. After about 5 minutes of debugging we make a realization: I have two instances of the HTTP service running.

Turns out the nginx reverse proxy round-robins requests across all matching containers when more than one instance of a service is running.

*Note: This was all done w/o Docker Swarm enabled. Just plain docker-compose and docker run were used.
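You can reproduce the behavior deliberately. A minimal sketch, assuming the proxy is the common jwilder/nginx-proxy image, which routes by the VIRTUAL_HOST environment variable (the hostname and app image below are hypothetical):

```
# Two instances of the same app, registered under the same hostname.
docker run -d --name web1 -e VIRTUAL_HOST=myapp.local my-app-image
docker run -d --name web2 -e VIRTUAL_HOST=myapp.local my-app-image

# Repeated requests now alternate between web1 and web2.
# (Assumes myapp.local resolves to the proxy host, e.g. via /etc/hosts.)
curl http://myapp.local/
curl http://myapp.local/
```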

Is the argument of ‘application portability’ a valid one for containerization?

Preface: I am a huge supporter of containerized distributed applications. It gives me all sorts of nerd good feelings.

One of the biggest arguments for containerization of applications is that the application becomes portable. It no longer cares what it is running on, nor where, nor how many. Windows Server running a Unix FORTRAN application? No problem! A Windows app running on a CentOS host? Done it! And while these are cool, nerd-awesome configurations, how often does an application actually change root host environments? If an application was/is made with .NET, it is to leverage the advantages .NET has over other options. If Rails, it's for the advantages; if PHP, again, for the advantages over other offerings. To move away from those advantages negates the reason for selecting the solution.

On a related note: are we not simply replacing one type of vendor lock-in with a lock at another layer of abstraction?