An example of when being too clever can come back to bite you…

As part of a test I had to ensure only two alphabetical characters would be allowed. So I used `chr(rand(97,122))`, which on an OSX machine yields the letters a->z. However, this character code sequence (to the best of my knowledge NOW) does not translate to other architectures. Four hours later I replaced the above `chr()` usage with:
```php
$letterArray = ['a', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'p', 'q', 's', 't', 'u', 'v', 'w', 'x', 'z'];
$key = \array_rand($letterArray);
// ...
$letterArray[$key]
```
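
For context, here is a minimal sketch of the two approaches side by side; the helper function names are mine, purely for illustration:

```php
<?php

// Original approach: pick a letter by ASCII code point.
// chr(97) is 'a' and chr(122) is 'z' on ASCII-based systems.
function randomLetterByCharCode(): string
{
    return chr(rand(97, 122));
}

// Replacement approach: pick from an explicit whitelist of letters,
// so the result never depends on the underlying character table.
function randomLetterFromWhitelist(): string
{
    $letterArray = ['a', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l',
                    'm', 'n', 'p', 'q', 's', 't', 'u', 'v', 'w', 'x', 'z'];
    $key = \array_rand($letterArray);

    return $letterArray[$key];
}

echo randomLetterByCharCode() . PHP_EOL;
echo randomLetterFromWhitelist() . PHP_EOL;
```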

After three runs through the application's CI process it has not failed once…yet.

…Here’s hoping it continues to go as planned.

Docker nginx-proxy round robin…


So we use Docker for container services at work. One container that is part of our 'dev' tools is an nginx reverse proxy tied to port 80. It allows us to run many projects at once using hostnames and port 80, just like in production.

So we are chugging along and I start an application's HTTP service. Hit the service in the browser: all good; reload: broken; reload: good; reload: broken. I am all like, wait a minute, that's not cool. After about 5 minutes of debugging we make a realization: I have two instances of the HTTP service running.

Turns out the nginx reverse proxy round-robins requests when more than one instance of a service is running.

*Note: This was all done w/o Docker Swarm enabled. Just plain docker-compose and docker run were used.

Economy of Scale: when big is a waste.

Some short musings. I was thinking about why larger organizations tend to be less 'agile' or 'nimble' than smaller organizations, even though larger organizations are typically broken into smaller and smaller groups. After thinking about it, the phrase administrative abstraction came to mind. Here's a quick breakdown of what it means to me:

Large orgs: Typically expensive, but with a low wait time once work starts. Getting the work authorized can take time due to approvals, setup, or negotiations.

Medium orgs: Reasonably effective; not too many levels of authority and red tape to work through when changes are requested. However, you may have to wait in line behind other client projects before your request is worked on.

Small orgs: Usually very dedicated and willing, but often without enough internal resources to complete the task at the level of expertise it requires.

What do you think, am I missing anything here?

Hackathons…an honest opinion.

When taken as a whole, a hackathon is a marketing, networking, and possibly hiring event. Engineers and developers of all levels attend with the reward of getting to demo a cool / neat / fresh solution to (generally) an age-old problem. Hackathons do not exemplify the planning, fitment, trial and error, depth of knowledge, or thoroughness of execution. If you want to shake hands and rub elbows, definitely attend. If you want to demonstrate your ability with a specific technology, attend. If you want to actually solve the problem, pass. If you want to win, read on.

To stand a chance of winning a hackathon contest, your team will need three main things.

1) Get a public speaker on your team. This person should spend the entire time planning, practicing, and refining the pitch/demo. You will have a very limited amount of time to convince one to dozens of NON-technical people why your (possible) solution is THE best. One chance, make it count. It MUST be the best pitch anyone has seen.

2) Have a technical wow factor; the more it looks like magic, the better. Voice commands, robotics, haptic feedback, AR, VR, machine learning, computer vision, or any other 'WOW' factor. Simply making a web app (for example, MongoDB + NodeJS + Bootstrap + Google Maps) will not get you into the semi-finals.

3) If you have items 1 and 2 covered, then, and only then, should you attempt to solve the challenge. The solution does not have to actually work, just look like it could. You're selling the idea, not the actual implementation, the majority of the time.

The hackathons I have attended were fun and definitely an interesting experience. But I'm too old to survive on 3 hours of sleep day after day. Giving up entire weekends, sleeping on couches, eating take-out every meal, and sitting in front of a screen for 18+ hours for days on end is not fun to me anymore. Sure, I do software for a paycheck, but I am not at the keys more than I need to be. Solving problems takes thinking, lots of thinking; planning, then execution. Hackathons flip this backwards and put the participants under pressure to perform.

TL;DR: Attend a hackathon at least once and see if it is your type of event. For me though, I am too old for programming benders; the body just can't hang with the 20-somethings.

Have we just replaced `Dependency Hell` with a `Hell of a lot of Dependency`?

While recently working with a frontend engineer, it struck me that the most basic of applications have massive numbers of dependencies. Specifically, React configured as a simple UI for a CRUD app has over 100 packages that have to be pulled down, version-resolved, compiled, and generated as an app.js-type resource. Why so much for so little actual usage?

I feel that web package managers need to evolve to a point where singular classes/modules/components/etc. can be pulled into a project, rather than the entire library. PHP, for comparison, lets you select which standard libraries to include when compiling the binary.

Thoughts?

A prime example of why automated testing works…

Today I got a change request that read something to the effect of 'move all this code from place X to the new namespace Y. By the way, it is referenced all over the place.' This could easily have been a daunting and overwhelming task. Moving a massive piece of logic that is core to the operation of the application…AND causing no errors in production? Ugh…

I started out by moving the classes and find/replacing (f/r) the namespace declarations. Ran the tests: failed; checked the error log: class not found. F/R'd the use statements, ran the tests: failed; checked the error log: class reference not found. F/R'd the inline usage statements, ran the tests: failed; checked the error log: a weird error about not being able to instantiate something something. Fixed that, ran the tests: passed. Ran the entire test suite: passed.
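
To make concrete what each find/replace pass was touching, here is a minimal sketch with hypothetical class and namespace names (nothing from the actual codebase):

```php
<?php

// Pass 1: the namespace declaration at the top of each moved class
// changes from the old location to the new one (hypothetical names).
// namespace OldPlace\Billing;
namespace NewPlace\Billing;

class InvoiceCalculator
{
    public function total(array $lineItems): float
    {
        return (float) array_sum($lineItems);
    }
}

// Pass 2: every caller's use statement is rewritten.
// use OldPlace\Billing\InvoiceCalculator;   // before
// use NewPlace\Billing\InvoiceCalculator;   // after

// Pass 3: any fully-qualified inline references are rewritten.
// $calc = new \OldPlace\Billing\InvoiceCalculator();   // before
// $calc = new \NewPlace\Billing\InvoiceCalculator();   // after
```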

Imagine, if you will, that this change was requested on a massive system. The more complex a system gets, the more effort is required to execute a change. More effort = more time, more headache, more hating your career choice.

  • Total time to execute the requested changes in this example: < 4 hours.
  • Confidence nothing broke: > 95%.
  • Happiness that tests saved some sanity and made the morning pleasant: > 100%.

TL;DR: Testing works. It's a level of assurance that your changes did not introduce breaks in a system that is known to operate correctly.