The cheapest way to implement a devops flow in AWS

JRichardsz.java
10 min read · Mar 19, 2023


Originally published at https://jrichardsz.github.io on March 19, 2023.

After having implemented devops on several cloud platforms (GCP, Azure, AWS, Heroku, buddy.works, Huawei, etc.) with several tools (Jenkins, Kubernetes, Travis, Bamboo, GitLab CI, etc.), I have in mind many ways to implement a devops flow: from scratch to ready-to-use platforms, from manual to automated, and from cheapest to most expensive.

In this post I will show you how to implement the cheapest devops flow for startups, PoCs, or very limited production environments.

Not for enterprises

If you are part of an enterprise that earns money and has real users, employees, etc., you should invest in your digital infrastructure if you want to guarantee a good user experience and not lose to the competition.

If you are the CEO, CTO, etc but don’t have any knowledge about software engineering, please hire someone who knows.

If you opt to save money in this department, you will regret it and you could lose a lot of money because of it.

What is the most expensive thing in AWS?

The most expensive items are storage (Elastic Block Store, EBS) and the number of hours a compute resource (Amazon Elastic Compute Cloud, EC2) stays online. Everything is billed hourly.
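For intuition, here is a quick back-of-the-envelope calculation. The hourly rate is an assumption (roughly the on-demand price of a t2.nano in us-east-1; check the AWS pricing page for your region), but it shows why replacing an always-on CI server with a Lambda that only runs per git push saves money:

```javascript
// Rough monthly cost estimate for an always-on EC2 instance.
// The hourly rate is an assumption; real prices vary by region and over time.
const hourlyRate = 0.0058;   // USD per hour (assumed, t2.nano on-demand)
const hoursPerMonth = 730;   // average hours in a month
const monthlyCost = hourlyRate * hoursPerMonth;
console.log(monthlyCost.toFixed(2)); // prints "4.23"
```

A Lambda that runs for a few seconds per push, by contrast, costs cents per month.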

Most basic devops flow

In this example, we will not have a testing environment. Just one: production.

Listen for the git push

Real enterprises have a 24/7 server called a "Continuous Integration Server", which is responsible for listening for git pushes from GitHub, GitLab, Bitbucket, etc.

Here is an example with Jenkins: https://jrichardsz.github.io/devops/devops-with-git-and-jenkins-using-webhooks

To notify you about a git push, the git platform provider (GitHub, GitLab, Bitbucket, etc.) sends a large JSON payload to a previously configured HTTP URL. The JSON contains a lot of information about the git event: repository name, target branch, commit message, commit hash, author, etc.

That feature is called Webhooks. Check this https://jrichardsz.github.io/devops/configure-webhooks-in-github-bitbucket-gitlab to understand how to configure the webhook.

To save money here we will use AWS Lambda, which only charges us for the time it takes to execute the function.

AWS Lambda with Node.js

This is the serverless feature of AWS. Basically, it is just a function, written in one of several supported languages, that can be triggered from several sources.

We will execute this function on every git push using the webhook feature, which requires a public HTTP URL (if you understood what a webhook is).

According to https://docs.aws.amazon.com/lambda/latest/dg/urls-invocation.html , every Lambda function can be invoked through its HTTP URL:

curl -v -X POST \
  'https://abcdefg.lambda-url.us-east-1.on.aws/?message=HelloWorld' \
  -H 'content-type: application/json' \
  -d '{ "example": "test" }'

And the incoming body could be read inside the lambda function like this:

exports.handler = async (event) => {
  const body = JSON.parse(event.body)
  console.log('data: ', body)
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};

And this is where the incoming body will arrive.
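Before wiring the webhook, you can exercise a handler like the one above locally with a fake event. This sketch inlines the same handler, so no AWS account is needed:

```javascript
// Invoke a Lambda-style handler locally with a fake webhook event.
const handler = async (event) => {
  const body = JSON.parse(event.body);
  console.log("data: ", body);
  return { statusCode: 200, body: JSON.stringify("Hello from Lambda!") };
};

(async () => {
  const fakeEvent = { body: JSON.stringify({ example: "test" }) };
  const response = await handler(fakeEvent);
  console.log(response.statusCode); // prints 200
})();
```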

Finally, to extract values from your real git provider's webhook, check your provider's webhook payload documentation.
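As a hint of what that extraction looks like: for a GitHub push event, the branch comes from the ref field (refs/heads/&lt;branch&gt;) and the clone URL from repository.ssh_url. The helper name below is made up for illustration:

```javascript
// Extract the fields this devops flow needs from a GitHub push payload.
// Field names follow the GitHub push webhook event shape.
function extractGitInfo(payload) {
  return {
    repositoryName: payload.repository.name,
    branchName: payload.ref.split("/").pop(), // "refs/heads/main" -> "main"
    sshGitUrl: payload.repository.ssh_url
  };
}

const sample = {
  ref: "refs/heads/main",
  repository: { name: "my_app", ssh_url: "git@github.com:user/my_app.git" }
};
console.log(extractGitInfo(sample).branchName); // prints "main"
```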

At this point we have a cheap replacement for an entire continuous integration server like Jenkins, Travis, AWS CodeBuild, etc.

Webhook

If you were able to create the Lambda and test it with a POST HTTP invocation (curl, Postman, Insomnia, etc.), let's use that HTTP URL to register it as our GitHub webhook.

Get the URL from the Lambda home page

Then paste it into GitHub

Create the server

The AWS REST APIs are powerful. You could use the following snippet to create an EC2 instance with Linux, ready to use:

https://gist.github.com/jrichardsz/f3ec44a044293b54af3dbff309fe5c83

This snippet should live inside the AWS Lambda function.

Build

We will perform the build using Docker, so your app should be dockerized. If not, contact me!! I want to dockerize any language or framework in the multiverse, except Microsoft technologies.

Also, in this cheapest approach, we will perform the docker build … inside the same EC2 machine created by the AWS Lambda function. This is not recommended for real enterprises.

Deploy

Similar to the previous paragraph, the deploy (docker run …) will be performed inside the same EC2 machine created by the AWS Lambda function.

Devops on each machine reboot

The only way to have the cheapest devops flow is to build and deploy on the same server. So we need to execute several Linux bash commands when the EC2 instance (created by the AWS Lambda function) starts.

Again, the AWS APIs are powerful: you can use a feature called user data to attach a bash script to the EC2 instance. This script will be executed when the EC2 machine starts.

var instanceParams = {
  ImageId: 'ami-0b9064170e32bde34',
  InstanceType: 't2.micro',
  KeyName: 'some_key',
  UserData: Buffer.from(script_as_string_here).toString('base64'),
  MinCount: 1,
  MaxCount: 1
};

For more details, check the AWS documentation about EC2 user data.

The steps

Now, if you got to this line and understood everything (GitHub webhook, AWS Lambda, Docker, etc.), these are the required steps.

Step #1 : Configure a github ssh key

ssh-keygen
cat $HOME/.ssh/id_rsa.pub

Register the public key in your GitHub account. This will allow the devops script to clone your GitHub repository.

Save your private key as an env variable named SSH_PRIVATE_KEY_BASE64 in the AWS Lambda. I hate files on the server and I love env variables, so I will convert the private key file into a base64 string, \n characters included:

cat ~/.ssh/id_rsa | base64 -w 0
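If you want to sanity-check that value before pasting it into the Lambda, decoding the base64 string must give back the exact multi-line key. A quick round-trip sketch in Node.js, with a placeholder key:

```javascript
// Round-trip check: the base64 env variable must decode back to the
// original multi-line private key, \n characters included.
// The key content here is a placeholder, not a real key.
const fakeKey = "-----BEGIN OPENSSH PRIVATE KEY-----\nabc123\n-----END OPENSSH PRIVATE KEY-----\n";
const encoded = Buffer.from(fakeKey).toString("base64");      // what you store in SSH_PRIVATE_KEY_BASE64
const decoded = Buffer.from(encoded, "base64").toString("utf8");
console.log(decoded === fakeKey); // prints true
```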

Step #2 : Choose your aws machine AMI , type and zone

Every machine on AWS has a kind of id called an AMI. You could choose ami-0557a15b87f6559cf for your application, which is part of the free tier.

Or you can search and find whatever you need.

Also choose a type for your EC2 instance. I recommend t2.nano, which is the cheapest.

Finally, add them as new env variables:

  • EC2_AMI
  • EC2_TYPE
  • AWS_ZONE
  • EC2_SECURITY_GROUP : to allow access to port 80 (inbound rule)

Optionally, if you want to connect to the machine using SSH, add:

  • EC2_KEY_NAME
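Since a forgotten variable silently arrives as undefined inside the Lambda, a small guard at startup helps. This is just a sketch; the helper name is made up, and the variable names match the list above:

```javascript
// Fail fast if a required env variable is missing, instead of creating
// a half-configured EC2 instance. EC2_KEY_NAME is intentionally optional.
function checkRequiredEnv(env) {
  const required = ["EC2_AMI", "EC2_TYPE", "AWS_ZONE",
    "EC2_SECURITY_GROUP", "SSH_PRIVATE_KEY_BASE64"];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error("Missing env variables: " + missing.join(", "));
  }
}

// Example call with placeholder values:
checkRequiredEnv({
  EC2_AMI: "ami-0557a15b87f6559cf", EC2_TYPE: "t2.nano",
  AWS_ZONE: "us-east-1", EC2_SECURITY_GROUP: "sg-123",
  SSH_PRIVATE_KEY_BASE64: "LS0tLS0..."
});
```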

Step #3: The scripts

main.sh, to be executed on the Linux machine at startup:

#! /bin/bash
set -e

#start-readme
: '

# Description

# Variables

- _SSH_PRIVATE_KEY_BASE64
- _GITHUB_REPOSITORY_URL
- _INTERNAL_PORT
- _RUNTIME_APP_VARIABLES_REPOSITORY_URL

'
#end-readme

SRC_ABSOLUTE_LOCATION=~/src
_SSH_PRIVATE_KEY_BASE64="$SSH_PRIVATE_KEY_BASE64"
_GITHUB_REPOSITORY_URL=$GITHUB_REPOSITORY_URL
_INTERNAL_PORT=$INTERNAL_PORT
_RUNTIME_APP_VARIABLES_REPOSITORY_URL="$RUNTIME_APP_VARIABLES_REPOSITORY_URL"

docker_detect() {
  prompt=$(docker -v > /dev/null 2>&1)
  status=$?
  echo $status
}

ssh_git_config_detect() {
  prompt=$(ssh -o "StrictHostKeyChecking no" -T git@github.com > /dev/null 2>&1)
  status=$?
  echo $status
}

docker_install(){
  echo "installing docker"
  # install docker if it doesn't exist
  sudo apt-get update
  sudo apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --batch --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update
  sudo apt-get -y install docker-ce docker-ce-cli
}

ssh_git_configure(){
  echo "configuring ssh git connection"
  # ensure the .ssh folder exists before writing the key
  mkdir -p ~/.ssh
  decoded_file=$(echo $_SSH_PRIVATE_KEY_BASE64 | base64 -d)
  echo "$decoded_file" > ~/.ssh/id_rsa
  chmod 400 ~/.ssh/id_rsa
  eval `ssh-agent` && ssh-add ~/.ssh/id_rsa
}

docker_safe_prune(){
  echo "cleaning docker"
  all_containers=$(docker ps -aq)
  if [ ! "$all_containers" == "" ]; then
    docker stop $(docker ps -aq)
    docker rm $(docker ps -aq)
  fi

  all_images=$(docker images -q)
  if [ ! "$all_images" == "" ]; then
    docker rmi $(docker images -q)
  fi
  docker volume prune --force
}

download_source_code(){
  echo "downloading source code"
  rm -rf $SRC_ABSOLUTE_LOCATION
  git clone $_GITHUB_REPOSITORY_URL $SRC_ABSOLUTE_LOCATION
  ls -la $SRC_ABSOLUTE_LOCATION
}

## bash script starts here

if [ "$(docker_detect)" -eq "0" ]; then
  echo "docker is already installed"
else
  echo "docker is not installed"
  docker_install
fi

if [ "$(ssh_git_config_detect)" -eq "0" ]; then
  echo "ssh git is already configured"
else
  echo "ssh git is not configured"
  ssh_git_configure
fi

# docker delete everything
docker_safe_prune

# download the new source code
download_source_code

# build
cd $SRC_ABSOLUTE_LOCATION
docker build -t my_app .

# deploy
docker run -d --name my_app -p 80:$_INTERNAL_PORT -e RUNTIME_APP_VARIABLES_REPOSITORY_URL="$_RUNTIME_APP_VARIABLES_REPOSITORY_URL" my_app

AWS Lambda function to receive the GitHub webhook payload:

const fs = require("fs");
const AWS = require('aws-sdk');
const https = require('https');
const url = require('url');

var uuidExecution = (Math.random() + 1).toString(36).substring(7);

async function getGithubInformation(object) {

  var commitRef = object.ref;
  var branchName = commitRef.split("/").pop();

  return {
    name: object.repository.name,
    pushOwnerName: object.repository.owner.name,
    pushOwnerMail: object.commits[0].author.email,
    branchName: branchName,
    sshGitUrl: object.repository.ssh_url
  }
}

async function createOrRestartEc2(gitParams) {

  AWS.config.update({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region: process.env.AWS_ZONE
  });

  // Create EC2 service object
  var ec2 = new AWS.EC2({
    apiVersion: '2016-11-15'
  });

  var scriptText = await fs.promises.readFile("main.sh", "utf8");
  console.log("script loaded");
  scriptText = scriptText.replace("$SSH_PRIVATE_KEY_BASE64", process.env.SSH_PRIVATE_KEY_BASE64);
  scriptText = scriptText.replace("$GITHUB_REPOSITORY_URL", gitParams.sshGitUrl);
  scriptText = scriptText.replace("$INTERNAL_PORT", 80);
  scriptText = scriptText.replace("$RUNTIME_APP_VARIABLES_REPOSITORY_URL", process.env.RUNTIME_APP_VARIABLES_REPOSITORY_URL);

  var instanceParams = {
    ImageId: process.env.EC2_AMI,
    InstanceType: process.env.EC2_TYPE,
    KeyName: process.env.EC2_KEY_NAME,
    UserData: Buffer.from(scriptText).toString('base64'),
    MinCount: 1,
    MaxCount: 1,
    SecurityGroupIds: [process.env.EC2_SECURITY_GROUP]
  };

  var instanceDetails = await ec2.runInstances(instanceParams).promise();
  var instanceId = instanceDetails.Instances[0].InstanceId;
  console.log("Created instance", instanceId);
  const params = {
    InstanceIds: [
      instanceId
    ]
  };

  var isReadyToUse = false;
  do {
    console.log("instance is not ready yet...");
    try {
      var instanceStatusDetails = await ec2.describeInstanceStatus(params).promise();
      if (instanceStatusDetails.InstanceStatuses.length > 0) {
        isReadyToUse = instanceStatusDetails.InstanceStatuses[0].InstanceStatus.Status == "ok" &&
          instanceStatusDetails.InstanceStatuses[0].SystemStatus.Status == "ok";
      }
    } catch (err) {
      console.log(err.toString());
    }
    await sleep(5000);
  } while (isReadyToUse === false);

  console.log("instance is ready to use");
  var publicDns;
  do {
    console.log("getting dns...");
    try {
      var instanceDescription = await ec2.describeInstances(params).promise();
      publicDns = instanceDescription.Reservations[0].Instances[0].PublicDnsName;
      console.log(publicDns);
    } catch (err) {
      console.log(err.toString());
    }
    await sleep(5000);
  } while (typeof publicDns === 'undefined');

  return publicDns;
}

async function sleep(millis) {
  return new Promise(resolve => setTimeout(resolve, millis));
}

function sendMailNoWait(urlString, mailParams) {

  if (typeof urlString === 'undefined') {
    return;
  }

  var q = url.parse(urlString, true);

  var postData = JSON.stringify(mailParams);

  var options = {
    hostname: q.hostname,
    path: q.path,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // byteLength, not string length, in case the body has multi-byte characters
      'Content-Length': Buffer.byteLength(postData)
    }
  };

  var req = https.request(options, (res) => {

    if (res.statusCode === 301 || res.statusCode === 302) {
      return sendMailNoWait(res.headers.location, mailParams);
    }

    if (process.env.LOG_LEVEL === "debug") console.log('mail statusCode:', res.statusCode);

    res.on('data', (d) => {
      if (process.env.LOG_LEVEL === "debug") process.stdout.write(d);
    });
  });

  req.on('error', (e) => {
    console.error(e);
  });

  req.write(postData);
  req.end();
}

function msToTime(duration) {
  var milliseconds = Math.floor((duration % 1000) / 100),
    seconds = Math.floor((duration / 1000) % 60),
    minutes = Math.floor((duration / (1000 * 60)) % 60),
    hours = Math.floor((duration / (1000 * 60 * 60)) % 24);

  hours = (hours < 10) ? "0" + hours : hours;
  minutes = (minutes < 10) ? "0" + minutes : minutes;
  seconds = (seconds < 10) ? "0" + seconds : seconds;

  return `${hours} Hours : ${minutes} Minutes: ${seconds} Seconds ${milliseconds} Millis`;
}

exports.handler = async (event) => {

  var startTime = new Date().getTime();

  var object = JSON.parse(event.body);
  var githubParams = await getGithubInformation(object);

  sendMailNoWait(process.env.SEND_MAIL_SERVICE_URL, {
    recipient: githubParams.pushOwnerMail,
    subject: `#${uuidExecution} build has started`,
    body: "A new build has started with these parameters: " + JSON.stringify(githubParams)
  });

  var publicDns = await createOrRestartEc2(githubParams);

  sendMailNoWait(process.env.SEND_MAIL_SERVICE_URL, {
    recipient: githubParams.pushOwnerMail,
    subject: `#${uuidExecution} build has been completed`,
    body: "A build has been completed. Public url: " + publicDns
  });

  const response = {
    statusCode: 200,
    body: JSON.stringify({ message: "success" })
  };
  var endTime = new Date().getTime();

  console.log(msToTime(endTime - startTime));

  return response;
};
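One caveat about the placeholder substitution in createOrRestartEc2: String.prototype.replace with a string pattern replaces only the first occurrence. That is enough for main.sh as written (each placeholder is read once into a _-prefixed variable), but if a placeholder ever appears twice, prefer split/join (or replaceAll on newer Node.js runtimes):

```javascript
// replace() with a string pattern touches only the first match:
const tpl = "port=$INTERNAL_PORT # app on $INTERNAL_PORT";
console.log(tpl.replace("$INTERNAL_PORT", "80"));
// prints "port=80 # app on $INTERNAL_PORT"

// split/join substitutes every occurrence:
console.log(tpl.split("$INTERNAL_PORT").join("80"));
// prints "port=80 # app on 80"
```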

Step #4 : Start the build

To start the build, just perform a classic git push to the configured git repository.

If you are able to implement the mail notification, add a new env variable named SEND_MAIL_SERVICE_URL to the AWS Lambda.

If everything works, you will see these mails:

If you cannot configure the required settings to send emails, you should go to AWS CloudWatch to see the logs in case of an error.


Conclusion

If you are using AWS and you need to save money for your startup, PoC, etc., you could use this approach. It is very limited, but cheap, and only two AWS services are required.

I will update the script and try to reduce the manual steps in a future post.

Until the next,
JRichardsz
