Some news from AWS…

…as of now(ish) all AWS accounts get a rolling 7 days of CloudTrail functionality as part of the free tier! While not much help to business / enterprise accounts, it definitely helps the solo dev / small org. Check it out here.

AWS Lambda and automated testing (pt2).

Oops; forgot to circle back to this…

So we finally got it working: local and Jenkins automated testing that couples with Lambda for media processing! What we ended up doing was creating an SSH tunnel from our DEV to UAT, then from UAT to the remote DEV DB. Oh man, was it slow. Like, slow enough we dumped it after 3 days. Do not recommend this type of setup if you can at all help it.
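For the curious, the two-hop tunnel looked roughly like this (a sketch; the hostnames, user, and a MySQL-style DB on port 3306 are assumptions, not our real values):

```shell
# Hop 1, run on the DEV box: forward local port 3307 to UAT's loopback 3307.
ssh -N -L 3307:localhost:3307 user@uat-host &

# Hop 2, run on UAT: forward its 3307 on to the remote DEV database.
ssh -N -L 3307:remote-dev-db:3306 user@uat-host &

# Back on DEV, the test suite then points its DB connection at localhost:3307.
```

Every query crosses two SSH hops, which is exactly where the painful latency came from.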

The next week this repo burst onto the scene; it would have saved us a lot of headache: https://github.com/atlassian/localstack. Unfortunately we were past the point where it would have helped instead of hurt the engineering pipeline.
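If you are at the start of that road instead, the quick-start is basically this (a sketch; in the early localstack each mocked service had its own port, with S3 on 4572 if I recall correctly, and the image name has changed across releases, so treat both as assumptions to verify):

```shell
# Spin up the mock-AWS container locally.
docker run -d -p 4572:4572 localstack/localstack

# Point the normal AWS CLI at the local endpoint instead of real AWS.
aws --endpoint-url=http://localhost:4572 s3 mb s3://test-bucket
aws --endpoint-url=http://localhost:4572 s3 ls
```

Your test suite then targets the local endpoint and never touches a real account.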

Another month, another set of continuing education courses.

A cornerstone of the IT/Dev career field is this: ‘never stop learning’. I like to expand on it and include ‘when you stop learning, you start becoming worthless’. While some disagree with this, it has served me well. As such, last night my Udemy collection increased by another 8 courses. Mainly focused on AWS associate certification training, but also a few container / CI / CD services.

Hoping to sit down for the exams before the end of the year; here’s hoping ^_^.

Let’s Encrypt on Amazon Linux.

So after switching some domain names around I wanted to add a Let’s Encrypt SSL cert to the blog here. Simple enough, right? Log into the box, follow the instructions (https://coderwall.com/p/e7gzbq/https-with-certbot-for-nginx-on-amazon-linux) and that should be it? Nope; as always, an error occurred when running the

certbot-auto certonly --standalone -d davidjeddy.com

command. Turns out Amazon Linux does NOT add `/usr/local/bin` to the $PATH. So instead I moved the binary to `/usr/sbin` and all was well with the world.
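Another option, rather than relocating the binary, is just appending the missing directory to $PATH (a minimal sketch; add the export to your shell profile to make it stick):

```shell
# Append /usr/local/bin to the current shell's PATH...
export PATH="$PATH:/usr/local/bin"

# ...and confirm it stuck before re-running certbot-auto.
echo "$PATH" | grep -q "/usr/local/bin" && echo "PATH includes /usr/local/bin"
```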

A couple of minutes later I was in the nginx config adding the cert; a quick restart and away we went into the great beyond of encrypted awesomeness.
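For reference, the relevant bits of the nginx config ended up looking something like this (a sketch, assuming certbot's default live/ paths for the domain):

```nginx
server {
    listen 443 ssl;
    server_name davidjeddy.com;

    # Paths certbot writes by default for this domain
    ssl_certificate     /etc/letsencrypt/live/davidjeddy.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/davidjeddy.com/privkey.pem;
}
```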

AWS Lambda, S3, and VPC

Found this awesome bit of information floating around the internet: https://aws.amazon.com/blogs/aws/new-access-resources-in-a-vpc-from-your-lambda-functions/

AWS Lambda and automated testing.

So as you can see from my previous posts, I recently developed an AWS Lambda function that pulls data from S3, processes it, and updates a database. Now the problem: how to automate a test for this :S. I’ll keep you updated on the progress…

The pain with AWS Lambda…

…I realize I am a bit new to the AWS platform and some things are just impossible, but here is the desired workflow:

  • POST/PUT image to S3
  • Run the Node.js app in a container that emulates an EC2 machine (and thus a Lambda environment)
  • See logs of the service as if it were running in a Lambda ENV.

In reality my current workflow is:

  • Update logic
  • Run the package script to build the Node.js app and upload it to Lambda
  • POST/PUT data on S3
  • Go to CloudWatch and press refresh until the logs populate with the related request ID

Anyone know how to make this more efficient? Average turnaround is 3 to 5 minutes per change :S.
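The best I've managed so far is scripting the whole loop so at least no console clicking is involved (a sketch; the function name and file names are placeholders):

```shell
# Package and push the new code...
zip -r build.zip index.js node_modules
aws lambda update-function-code \
    --function-name media-processor \
    --zip-file fileb://build.zip

# ...then pull recent log events from the CLI instead of the CloudWatch console.
aws logs filter-log-events \
    --log-group-name /aws/lambda/media-processor \
    --limit 20
```

Still a round trip to AWS per change, but it turns four manual steps into one command.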

The `aws s3 sync` command…

From the manual: ‘Syncs directories and S3 prefixes. Recursively copies new and updated files from the source directory to the destination. Only creates folders in the destination if they contain one or more files.’

OMFG, static copy from a directory to an S3 bucket. No middle man / process required. #winning

http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html


Process:

  • Create an AWS API user and an access key
  • aws configure --profile {your-profile-name}
  • aws s3 sync ./ s3://{existing bucket name}/{prefix} --profile {your-profile-name}
  • Watch your local contents sync up to the bucket.
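Two flags worth knowing before trusting it with anything important (bucket and profile names are placeholders):

```shell
# Preview what would be transferred without touching anything:
aws s3 sync ./ s3://my-bucket/prefix --profile my-profile --dryrun

# Also delete objects in the bucket that no longer exist locally,
# making the bucket a true mirror of the directory:
aws s3 sync ./ s3://my-bucket/prefix --profile my-profile --delete
```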


The down side of micro-service architecture…

…so at my employer we are attempting to deploy a micro-service style application and thus enjoy all the new hotness that comes with it. The technical issue we are now running into is that each facet of the application requires a different deployment strategy. A quick breakdown is as follows:

  • Middleware: After tests pass, push the artifact to S3. Trigger CodeDeploy to deploy to EC2 instances as part of an auto-scaling group.
  • Frontend: Static content (HTML/JS/CSS) wherein API calls are made against the middleware layer. Push the artifact to S3, trigger Lambda to pull the package, unpack it to S3, and invalidate the CloudFront cache.
  • Backend: See frontend, plus handling authentication of user accounts.
  • Media Processing: When media is PUT onto S3, trigger Lambda to process the media and update the database once complete.
  • Application Monitoring: Via an in-house managed installation of Sentry (they are awesome, by the way; you should check it out).
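As an example of how different these pipelines feel, the frontend path done by hand boils down to something like this (a sketch; the bucket name and distribution ID are placeholders):

```shell
# Push the build artifact up for the unpack Lambda to grab...
aws s3 cp build.tar.gz s3://artifact-bucket/frontend/build.tar.gz

# ...and once unpacked, invalidate the CDN so clients see the new assets.
aws cloudfront create-invalidation \
    --distribution-id E123EXAMPLE \
    --paths "/*"
```

The middleware path shares none of these steps, which is exactly the problem.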

All of this, and we have not even gotten into the roles and actions that have to be performed for each environment. The cherry on top: this is just for the v1.x release of the project. While it enables our clients, and in turn their clients, to access the application data layer via the middleware and skin the user experience (UX) as desired, the implementation has a learning curve.

Lesson: If you’re going micro-service, plan your deployment process VERY carefully ahead of time.