Have we just replaced `Dependency Hell` with a `Hell of a lot of Dependencies`?

While recently working with a frontend engineer, it struck me that even the most basic of applications has a massive number of dependencies. Specifically, React configured as a simple UI for a CRUD app has over 100 packages that have to be pulled down, version resolved, compiled, and bundled into an app.js-type resource. Why so much for so little actual usage?

I feel that web package managers need to evolve to a point where individual classes/modules/components/etc. can be pulled into a project rather than the entire library. PHP, for comparison, lets you select which of the standard libraries to include when compiling the binary.

Thoughts?

A prime example of why automated testing works…

Today I got to a change request that read something to the order of 'move all this code from place X to the new namespace Y. By the way, it is referenced all over the place.' This could easily have been a daunting and overwhelming task. Moving a massive piece of logic that is core to the operation of the application…AND introducing no errors into production? Ugh…

I started out by moving the classes and find/replacing (f/r) the namespace declarations. Ran a test: fail; checked the error log: class not found. F/R the use statements, ran a test: fail; error log: class reference not found. F/R the inline usage references, ran a test: fail; error log: a weird error about not being able to instantiate something-something. Fixed that, ran the test: pass. Ran the entire test suite: passed.
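For concreteness, here is the shape of that change as a minimal sketch (the namespaces and class names below are made up for illustration, not the actual codebase):

```php
<?php
// 1. The class itself: find/replace the namespace declaration.
namespace App\NamespaceY;                  // was: namespace App\NamespaceX;

class CoreService
{
    public function run(): string
    {
        return 'still works';
    }
}

// 2. Every consumer: find/replace the use statements. Shown as comments
//    here; in reality these live in files all over the codebase.
// use App\NamespaceX\CoreService;         // old
// use App\NamespaceY\CoreService;         // new

// 3. Inline, fully-qualified references need the same find/replace:
$svc = new \App\NamespaceY\CoreService();  // was: new \App\NamespaceX\CoreService();
echo $svc->run();
```

Each find/replace pass maps to one of the test failures above: declaration, imports, then inline references.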

Imagine, if you will, that this change was requested on a massive system. The more complex a system gets, the more effort is required to execute a change. More effort = more time, more headaches, more hating your career choice.

  • Total time to execute the requested changes in this example: < 4 hours.
  • Confidence nothing broke: > 95%.
  • Happiness that tests saved some sanity and made the morning pleasant: > 100%.

TL;DR: Testing works. It's a level of assurance that your changes did not introduce breaks into a system that is known to operate correctly.

Is the argument of ‘application portability’ a valid one for containerization?

Preface: I am a huge supporter of containerized distributed applications. It gives me all sorts of nerdy good feelings.

One of the biggest arguments for containerizing applications is that the application becomes portable: it no longer cares what it is running on, nor where, nor on how many hosts. A Windows server running a Unix FORTRAN application? No problem! A Windows app running on a CentOS host? Done it! And while these are cool and nerd-awesome configurations, how often does an application actually change root host environments? If an application was/is made with .NET, it is to leverage the advantages .NET has over the other options. If Rails, it's for the advantages; if PHP, again, for the advantages over other offerings. To move away from those advantages negates the reason for selecting the solution in the first place.

On a related note: are we not simply replacing one type of vendor lock-in with lock-in at another layer of abstraction?

Some news from AWS…

…as of now(ish) all AWS accounts get a rolling 7 days of CloudTrail functionality as part of the free tier! While not much help to business/enterprise, it definitely helps the solo dev / small org. Check it out here.

Out of the box performance: PHP + PDO + MariaDB

Backstory:

Another week, another evening at the Pub with some friends and colleagues. Somehow or other we got onto the topic of database insert performance and how long it would take to reach the 32-bit max integer. That being 2.14-something-something billion (2,147,483,647, to be exact). I wagered that the max signed int could be reached relatively quickly; my colleague, on the other hand, said 'no no no; it would take hours. Days even.' And so, a wager was born.

The requirements:

PHP + PDO + a SQL database, all with default configurations. No editing php.ini to allow higher memory usage, no disabling the *SQL disk-flush settings in my.cnf, etc. Raw install, write the logic, go for the gold.

The process:

On the local development machines the container service manager limits hardware usage to 7 of the 2.4GHz CPU cores and 15GB of memory. For disk we run 256GB SSDs, desktop models; nothing fancy.

On that hardware I set up a PHP 7:latest service and a MariaDB:latest service, then linked them. From there it was a matter of setting connection credentials and increasing the batch insert count until it was close to, but not over, the default memory usage per thread. Then it was a question of starting multiple threads; easy enough, bash helped out there. Using bash I spun up 10 threads and let the process run for 1 to 2 minutes.
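The insert loop itself is nothing exotic. Here is a minimal sketch of the idea, assuming a throwaway `filler` table and made-up host/credentials (the real code is in the repo linked below):

```php
<?php
// Batched inserts over PDO: one multi-row INSERT per round trip instead
// of one row per statement. Host, credentials, and table are illustrative.
$pdo = new PDO('mysql:host=mariadb;dbname=filler', 'root', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$pdo->exec('CREATE TABLE IF NOT EXISTS filler (
    id  INT AUTO_INCREMENT PRIMARY KEY,
    val INT NOT NULL
)');

// Tuned upward until just under the default per-thread memory limit.
$batchSize = 10000;

// Build "INSERT ... VALUES (?),(?),...,(?)" once and reuse it.
$placeholders = rtrim(str_repeat('(?),', $batchSize), ',');
$stmt = $pdo->prepare("INSERT INTO filler (val) VALUES $placeholders");

// Run for the sample window; bash launched ~10 of these processes in parallel.
$deadline = time() + 120;
while (time() < $deadline) {
    $stmt->execute(array_fill(0, $batchSize, 1));
}

echo $pdo->query('SELECT MAX(id) FROM filler')->fetchColumn(), PHP_EOL;
```

One multi-row statement per round trip keeps the bottleneck on the database side rather than on network chatter between the containers.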

Taking the max id value after the given time frame, I was able to extrapolate how long it would take to fill 2.14 billion rows.

The Result:

As it stands, the fastest run would take 1.526 hrs to go from 0 to 2.14 billion row inserts. I know we can get faster, but I ran out of time today.
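For the curious, the extrapolation is a simple rate calculation. A sketch, where the sampled row count is an assumed, illustrative figure (not the actual measurement) chosen to land near the measured rate:

```php
<?php
// Estimate time-to-fill from a short timed sample. The sample numbers
// below are illustrative, not the real measurements from the run.
$maxInt32   = 2147483647;   // max signed 32-bit INT, the finish line
$sampleRows = 46900000;     // MAX(id) observed after the sample window (assumed)
$sampleSecs = 120;          // the 2-minute sample window

$rowsPerSec = $sampleRows / $sampleSecs;          // ~390k inserts/sec
$etaHours   = $maxInt32 / $rowsPerSec / 3600;     // ~1.53 hours

printf("%.0f rows/sec => ~%.3f hours to reach 2^31 - 1%s", $rowsPerSec, $etaHours, PHP_EOL);
```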

The Source:

If you are interested in the code / stats / etc., the repo is here: https://github.com/davidjeddy/full-up-the-db. Feel free to fork / PR the repo if you can get a faster speed. It would be really awesome to show the 32-bit max int can be reached in 5 minutes or less. (Remember, no editing of configurations.)

AWS Lambda and automated testing (pt2).

Oops; forgot to circle back to this…

So we finally got it working: local and Jenkins automated testing that couples with Lambda for media processing! What we ended up doing was creating an SSH tunnel from our DEV to UAT, and from UAT to the remote DEV DB. Oh man, was it slow. Like, slow enough that we dumped it after 3 days. Do not recommend this type of setup if you can at all help it.

The next week this repo burst onto the scene; it would have saved us a lot of headache: https://github.com/atlassian/localstack. Unfortunately we were past the point where it would have helped instead of hurt the engineering pipeline.

PSA: Do not use Codeception DB and Yii2 modules together…

…specifically the Yii2 ORM, the Db module, and transactions. The Yii2 $I->seeRecord() and related methods do NOT use the same connection ID as the Db module. So actions such as importing fixtures and executing actions via the ActiveRecord abstraction happen on the framework's connection.

Trying, then, to do things like $I->seeInDatabase() and related Db module actions will fail almost always. Why? The Db module uses a separate connection, as defined in the suite .yml (acceptance/functional/unit) files.
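A sketch of the failure mode in a functional Cest (the model and table names are made up):

```php
<?php
// Both modules enabled in the suite: Yii2 (with cleanup/transactions on)
// and Db. Each holds its OWN database connection.
class ConnectionMismatchCest
{
    public function demo(\FunctionalTester $I)
    {
        // Yii2 module: runs on the framework's connection, inside the
        // per-test transaction the module wraps around each test.
        $I->haveRecord(\app\models\User::class, ['username' => 'jdoe']);

        // Same connection, same transaction: this passes.
        $I->seeRecord(\app\models\User::class, ['username' => 'jdoe']);

        // Db module: a separate PDO connection built from the suite .yml
        // dsn. It cannot see the uncommitted row above, so this fails.
        $I->seeInDatabase('user', ['username' => 'jdoe']);
    }
}
```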

So, from here on out I will not be using the two modules together. Either use Yii2 + fixtures or use Db + dump.sql. Both together is problematic at best.

Another month, another set of continuing education courses.

A cornerstone of the IT/dev career field is this: 'never stop learning'. I like to expand on it to include 'when you stop learning, you start becoming worthless'. While some disagree with this, it has served me well. As such, last night my Udemy collection increased by another 8 courses, mainly focused on AWS associate certification training but also a few on container / CI / CD services.

Hoping to sit down for the exams before the end of the year; here’s hoping ^_^.