Hackathons…an honest opinion.

When taken as a whole, a hackathon is a marketing, networking, and possibly hiring event. Engineers and developers of all levels attend with the reward of getting to demo a cool / neat / fresh solution to (generally) an age-old problem. Hackathons do not exemplify planning, fitment, trial and error, depth of knowledge, or thoroughness of execution. If you want to shake hands and rub elbows, definitely attend. If you want to demonstrate your ability with a specific technology, attend. If you want to actually solve the problem, pass. If you want to win, read on.

To win a hackathon contest your team will need three main things to stand a chance.

1) Get a public speaker on your team. This person should spend the entire time planning, practicing, and refining the pitch/demo. You will have a very limited amount of time to convince one to dozens of NON-technical people why your (possible) solution is THE best. One chance, make it count. It MUST be the best pitch anyone has seen.

2) Have a technical wow factor; the more it looks like magic the better. Voice commands, robotics, haptic feedback, AR, VR, machine learning, computer vision, or any other 'WOW' factor. Simply making a web app (for example: MongoDB + NodeJs + Bootstrap + Google Maps) will not get you into the semi-finals.

3) If you have items 1 and 2 covered, then, and only then, should you attempt to solve the challenge. The solution does not have to actually work, just look like it could. The majority of the time you are selling the idea, not the actual implementation.

The hackathons I have attended were fun and definitely an interesting experience. But I'm too old to survive on 3 hours of sleep day after day. Giving up entire weekends, sleeping on couches, eating take-out every meal, and sitting in front of a screen for 18+ hours for days on end is not fun to me anymore. Sure, I write software for a paycheck, but I am not at the keys more than I need to be. Solving problems takes thinking, lots of thinking; planning, then execution. Hackathons flip this backwards and put the participants under pressure to perform.

TL;DR: Attend a hackathon at least once and see if it is your type of event. For me though, I am too old for programming benders anymore; the body just can't hang with the 20-somethings.

Have we just replaced `Dependency Hell` with a `Hell of a lot of Dependencies`?

While recently working with a frontend engineer it struck me that the most basic of applications have massive amounts of dependencies. Specifically, React configured as a simple UI for a CRUD app has over 100 packages that have to be pulled down, version-resolved, compiled, and generated into an app.js-type resource. Why so much for so little actual usage?

I feel that web package managers need to evolve to a point where singular classes/modules/components/etc. can be pulled into a project rather than the entire library. PHP, for comparison, has the ability to select which standard libraries to include when compiling the binary.

Thoughts?

A prime example of why automated testing works…

Today I got a change request that read something to the effect of 'move all this code from place X to the new namespace Y. By the way, it is referenced all over the place.' This could easily have been a daunting and overwhelming task. Moving a massive piece of logic that is core to the operation of the application… AND introducing no errors into production? Ugh…

I started out by moving the classes and find/replacing (F/R) the namespace declarations. Ran the tests: fail; checked the error log. Log states class not found. F/R the use statements, ran the tests: fail; checked the error log. Log states class reference not found. F/R the inline usage statements, ran the tests: fail; checked the error log. Log shows a weird error about not being able to instantiate something-something. Fixed that, ran the tests: pass. Ran the entire test suite: passed.
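
As a sketch of why the tests catch this class of mistake (hypothetical class names, and PHPUnit standing in for whatever suite the project actually uses): any missed declaration, use statement, or inline reference surfaces as a failing test instead of a production error.

```php
<?php

use PHPUnit\Framework\TestCase;
use App\Billing\InvoiceCalculator; // moved here from App\Legacy\InvoiceCalculator

final class InvoiceCalculatorTest extends TestCase
{
    public function testTotalsStillAddUpAfterTheMove(): void
    {
        // If the namespace move missed a declaration or reference,
        // this line blows up with a "class not found" style error
        // long before anything reaches production.
        $calculator = new InvoiceCalculator();

        // Behavior should be unchanged; only the namespace moved.
        $this->assertSame(300, $calculator->total([100, 200]));
    }
}
```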

Imagine, if you will, that this change was requested on a massive system. The more complex a system gets, the more effort is required to execute a change. More effort = more time, more headache, more hating your career choice.

  • Total time to execute the requested changes in this example: < 4 hours.
  • Confidence nothing broke: > 95%.
  • Happiness that tests saved some sanity and made the morning pleasant: > 100%.

TL;DR: Testing works. It's a level of assurance that your changes did not introduce breaks in a system that is known to operate correctly.

Is the argument of ‘application portability’ a valid one for containerization?

Preface: I am a huge supporter of containerized distributed applications. It gives me all sorts of nerdy good feelings.

One of the biggest arguments for containerization of applications is that the application becomes portable. It no longer cares what it is running on, where, or how many instances there are. Windows server running a Unix FORTRAN application? No problem! Windows app running on a CentOS host? Done it! And while these are cool and nerd-awesome configurations, how often does an application actually change its root host environment? If an application was/is made with .NET, it is to leverage the advantages .NET has over other options. If Rails, it's for the advantages; if PHP, again, for the advantages over other offerings. To move away from those advantages negates the reason for selecting the solution in the first place.

On a related note: are we not simply replacing one type of vendor lock-in with lock-in at another layer of abstraction?

Some news from AWS…

…as of now(ish) all AWS accounts get a rolling 7 days of CloudTrail functionality as part of the free tier! While not helpful to business / enterprise, it definitely helps the solo developer / small org. Check it out here.

Out of the box performance: PHP + PDO + MariaDB

Backstory:

Another week, another evening at the pub with some friends and colleagues. Somehow or other we got on the topic of database insert performance and how long it would take to reach the 32-bit max integer. That being 2.14-something-something billion. I wagered that the max signed int could be reached relatively quickly; my colleague, on the other hand, said 'no no no; it would take hours. Days even'. And so, a wager was born.

The requirements:

PHP + PDO + a SQL database; default configurations. No editing php.ini to allow higher memory usage, no disabling *SQL disk_flush in my.cnf, etc. Raw install, logic, go for the gold.

The process:

On the local development machines we configure the container service manager to limit hardware usage to 7 of the 2.4GHz CPU cores and 15GB of memory. For disk we run 256GB SSDs, desktop models; nothing fancy.

On that hardware I set up a PHP 7:latest service and a MariaDB:latest service, then linked them. From there it was a matter of connection credentials and increasing the batch insert count until it was close to, but not over, the default memory usage per thread. Then, how to start up multiple workers? Easy enough: bash helped out there. Using bash I spun up 10 workers and let the process run for 1 to 2 minutes.
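
The per-worker logic is roughly this (host, database, and table names below are placeholders; the actual code lives in the repo linked further down): build one multi-row INSERT sized to stay under the default per-connection limits, then execute it in a tight loop while a bash wrapper launches ~10 such workers in parallel.

```php
<?php
// Placeholder host/db/table names ('mariadb', 'filler') for illustration only.
$pdo = new PDO('mysql:host=mariadb;dbname=filler', 'root', 'root', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Batch size was raised until memory use sat just under the default limits.
$batchSize = 10000;

// One placeholder group per row: (?), (?), (?), ...
$placeholders = implode(',', array_fill(0, $batchSize, '(?)'));
$stmt = $pdo->prepare("INSERT INTO filler (val) VALUES {$placeholders}");

// Each worker loops until killed; bash ran ~10 of these for a minute or two.
while (true) {
    // The data itself is throwaway; the auto-increment id is what counts.
    $stmt->execute(array_fill(0, $batchSize, 1));
}
```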

Taking the max id value after the given time frame, I was able to extrapolate how long it would take to reach 2.14 billion rows.
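
The extrapolation itself is just arithmetic; something along these lines, with illustrative numbers standing in for the measured ones:

```php
<?php
// Illustrative numbers only; plug in the real max id and run length.
$maxInt      = 2147483647; // max signed 32-bit integer
$rowsWritten = 46800000;   // highest auto-increment id after the timed run (hypothetical)
$runSeconds  = 120;        // length of the timed run

$rowsPerSecond = $rowsWritten / $runSeconds;       // ~390,000 rows/s in this example
$hoursToMaxInt = $maxInt / $rowsPerSecond / 3600;  // ~1.5 hours at that rate

printf("%.0f rows/s => %.3f hours to hit %d\n", $rowsPerSecond, $hoursToMaxInt, $maxInt);
```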

The Result:

At current, the fastest run extrapolates out to 1.526 hours to go from 0 to 2.14 billion row inserts. I know we can get faster, but I ran out of time today.

The Source:

If you are interested in the code / stats / etc., the repo is here: https://github.com/davidjeddy/full-up-the-db. Feel free to fork / PR the repo if you can get a faster time. It would be really awesome to show the 32-bit max int can be reached in 5 minutes or less. (Remember, no editing of configurations.)

AWS Lambda and automated testing (pt2).

Oops; forgot to circle back to this…

So we finally got it working: local and Jenkins automated testing that couples with Lambda for media processing! What we ended up doing was creating an SSH tunnel from our DEV to UAT, and from UAT to the remote DEV DB. Oh man, was it slow. Like, slow enough that we dumped it after 3 days. I do not recommend this type of setup if you can at all help it.

The next week this repo burst onto the scene; it would have saved us a lot of headache: https://github.com/atlassian/localstack. Unfortunately we were past the point where it would have helped rather than hurt the engineering pipeline.

PSA: Do not use Codeception DB and Yii2 modules together…

…specifically the Yii2 module's ORM helpers, the Db module, and transactions. The Yii2 $I->seeRecord() and related methods do NOT use the same connection as the Db module. So actions such as importing fixtures and executing queries via the ActiveRecord abstraction happen on the framework's connection.

Trying, then, to use $I->seeInDatabase() and related Db module actions will fail, almost always. Why? The Db module uses a separate connection, as defined in the suite.yml (acceptance/functional/unit) files.
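
A minimal sketch of the mismatch (hypothetical model and table names): the first two calls go through the Yii2 module and the framework's connection, while the last goes through the Db module's own connection and usually cannot see the uncommitted row.

```php
<?php

class UserCest
{
    public function createAndCheckUser(\FunctionalTester $I)
    {
        // Yii2 module: writes through the framework's DB connection,
        // typically inside the module's wrapping transaction.
        $I->haveRecord(\app\models\User::class, ['username' => 'dave']);

        // Yii2 module: same connection, so this passes.
        $I->seeRecord(\app\models\User::class, ['username' => 'dave']);

        // Db module: separate connection from the suite config,
        // so this tends to fail even though the record "exists".
        $I->seeInDatabase('user', ['username' => 'dave']);
    }
}
```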

So, from here on out I will not be using the two modules together. Either use Yii2 + fixtures, or use Db + a dump.sql. Both together is problematic at best.

Another month, another set of continuing education courses.

A cornerstone of the IT/dev career field is this: 'never stop learning'. I like to expand on it and include 'when you stop learning, you start becoming worthless'. While some disagree with this, it has served me well. As such, last night my Udemy collection increased by another 8 courses, mainly focused on AWS associate certification training but also a few on container / CI / CD services.

Hoping to sit down for the exams before the end of the year; here’s hoping ^_^.