Capstone Project – Hotel-Side Hospital

Objective: To create an automated provisioned infrastructure using Terraform, EKS cluster, EC2 instances, and Jenkins server.

Tools to use:

  1. Jenkins
  2. Terraform
  3. AWS EC2
  4. AWS EKS

Description

Hotel-Side Hospital, a globally renowned hospital chain headquartered in Australia, aims to streamline its operations by setting up infrastructure within its premises. To maintain seamless functioning and scalability, it requires fully managed virtual machines (VMs) on the Amazon Web Services (AWS) platform.

The organization seeks an automated infrastructure-provisioning solution that enables it to effortlessly create new Amazon Elastic Kubernetes Service (EKS) clusters whenever required and promptly delete them when they are no longer needed. This will optimize resource allocation and enhance operational efficiency.

Task (Activities)

  1. Validate if Terraform is installed in the virtual machine
  2. Install AWS CLI
  3. Navigate to AWS IAM service, and get AWS Access key and Secret Key to connect AWS with the AWS CLI
  4. Export the AWS Access Key, Secret Key, and Security Token to configure AWS CLI connectivity with AWS Cloud
  5. Create terraform scripts to create a new VM using autoscaling which includes the following files: autoscaling.tf, VPC.tf, internetgateway.tf, subnets.tf (public subnet), routetable.tf, Route_table_association_with_public_subnets.tf
  6. Execute terraform scripts
  7. Connect to an instance and install the stress utility (The stress files are provided along with the problem statement document.)
  8. Validate if autoscaling is working by putting load on autoscaling group

Steps performed:

1. Validate if Terraform is installed in the virtual machine:

To check whether Terraform is installed, we can use the command:

terraform --version

As we can see it is already installed:

Since the installed Terraform is an old version, we need to install the latest one. The Terraform website gives the following commands to download and install Terraform on our machine:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install terraform

Now if we check, we can see that Terraform is at the latest version.

2. Install AWS CLI

To install the AWS CLI, run the commands below (all taken from Amazon's official documentation):

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

unzip awscliv2.zip

sudo ./aws/install

But since the AWS CLI was already pre-installed on the system, this gave the error below.

We can check the installed version with the command:

aws --version

We will upgrade this too, using the update option:

sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update

As you can see it is upgraded.

3. Navigate to AWS IAM service, and get AWS Access key and Secret Key to connect AWS with the AWS CLI

We will go to AWS IAM and then Users, where we will create a user by clicking the Add user button:

We give a name:

In permissions we will give administrator access:

Then you create the user.

Once the user is created, go to the user's Security credentials tab:

Here you should see the Access keys option; click Create access key:

Choose your use case:

Once you create it, you will get the access key and the secret access key.

Copy both the access key and the secret access key and save them securely.

4. Export the AWS Access Key and Secret Key to configure AWS CLI connectivity with AWS Cloud

Now we can configure AWS on our system with the command:

aws configure

We can see that it is configured.
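Task 4 also mentions exporting a security token; an alternative to `aws configure` is exporting the credentials as environment variables. A sketch (the values are placeholders — substitute your own keys from the IAM console):

```shell
# Placeholder values - replace with the keys generated in IAM.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_SESSION_TOKEN="..."   # only needed for temporary credentials

# Verify that the CLI can reach AWS with these credentials.
aws sts get-caller-identity
```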

Please note that you also have to configure this for the jenkins user, since Jenkins runs commands as that user:

5. Set up Git and GitHub for storing the files.

First we create a new GitHub repository to store the script files:

Now we will clone this repository and upload all the script files to it:

Create a directory:

We received an access error while cloning, so to solve it we will generate an SSH key pair and add the public key to GitHub.
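Generating the key pair is a standard two-step process (the email comment is a placeholder):

```shell
# Generate an SSH key pair (press Enter to accept the default path).
ssh-keygen -t ed25519 -C "you@example.com"

# Print the public key so it can be pasted into GitHub's SSH keys settings.
cat ~/.ssh/id_ed25519.pub
```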

Add the key in the SSH keys section of the GitHub settings:

Once added we can clone:

Now you can see that we have a local working repo for the remote repo.

6. Create Terraform scripts to create a new VM using autoscaling:

First we will create the providers.tf file and add the code below:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/providers.tf
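The actual file is at the link above; as a rough sketch, a providers.tf for this setup typically looks like the following (the region and version constraint are illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # illustrative version constraint
    }
  }
}

provider "aws" {
  region = "us-east-1"     # illustrative; use your own region
}
```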

Now we create vpc.tf for the VPC:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/vpc.tf
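A minimal vpc.tf sketch (the CIDR and names are illustrative, not necessarily those in the linked file):

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"   # illustrative CIDR
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "capstone-vpc"
  }
}
```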

Now let us create three subnets with the file name subnets.tf:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/subnets.tf
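Three public subnets can be created with `count`; a sketch along these lines (CIDRs are illustrative, and it assumes the VPC resource is named `aws_vpc.main`):

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "public" {
  count                   = 3
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet("10.0.0.0/16", 8, count.index)  # 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true   # public subnets assign public IPs

  tags = {
    Name = "public-subnet-${count.index + 1}"
  }
}
```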

So far we have created the VPC and subnet Terraform files.

Now let us create the internet gateway Terraform file, internetgateway.tf:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/internetgateway.tf
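A sketch of the internet gateway file (names illustrative, VPC assumed to be `aws_vpc.main`):

```hcl
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "capstone-igw"
  }
}
```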

Now let us create the route table, routetable.tf.

In the code below we add a route for all internet traffic (0.0.0.0/0):

https://github.com/kotianrakshith/CapstoneProject3/blob/main/routetable.tf
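A sketch of that route table (assuming the gateway resource above is named `aws_internet_gateway.igw`):

```hcl
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  # Send all non-local traffic to the internet gateway.
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}
```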

Now we have to create the route table associations, since the route table needs to be connected to all the public subnets.

We will use the file:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/Route_table_association_with_public_subnets.tf
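With the subnets created via `count`, the associations can be sketched as:

```hcl
# Associate each public subnet with the public route table.
resource "aws_route_table_association" "public" {
  count          = 3
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}
```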

Now we have to create the security group before we get to autoscaling:

securitygroup.tf

https://github.com/kotianrakshith/CapstoneProject3/blob/main/securitygroup.tf
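A minimal security group sketch (rules illustrative — at least SSH inbound, so we can connect to the instances later, and all traffic outbound):

```hcl
resource "aws_security_group" "web" {
  name   = "capstone-sg"
  vpc_id = aws_vpc.main.id

  # SSH access for connecting to instances later.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```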

Now we will create autoscaling.tf

This will have the launch template and autoscaling group required for creating and scaling the VMs:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/autoscaling.tf
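The shape of such a file can be sketched as follows. The min/max sizes match the behaviour observed in the stress test later (one instance at rest, three at peak); the AMI ID and the 50% CPU target are placeholders, not values from the linked file:

```hcl
resource "aws_launch_template" "app" {
  name_prefix            = "capstone-lt-"
  image_id               = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}

resource "aws_autoscaling_group" "app" {
  min_size            = 1
  max_size            = 3   # matches the maximum seen during the stress test
  desired_capacity    = 1
  vpc_zone_identifier = aws_subnet.public[*].id

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}

# Scale out/in to keep average CPU near the target (threshold illustrative).
resource "aws_autoscaling_policy" "cpu" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```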

Now that we have created all the files:

Let's push them to GitHub:

Push:

Now we can see all the files in the github repository:

This covers only the highly available EC2 instances.

Now we need to write the files for the EKS cluster.

For the EKS cluster to work, we first need to create roles for both the EKS cluster and the nodes, and then attach the proper policies to them.

So first we will create a Terraform file for this, rolepolicy.tf:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/rolepolicy.tf
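The roles and policy attachments can be sketched as below. The role names are illustrative; the policy ARNs are the standard AWS-managed policies EKS requires:

```hcl
# IAM role the EKS control plane assumes.
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# IAM role for the worker nodes (EC2 instances).
resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "node_worker" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "node_cni" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "node_ecr" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}
```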

Then we will create the EKS cluster and node group in the file eks.tf:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/eks.tf
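A sketch of the cluster and node group, assuming the role and subnet resources sketched earlier (names and sizes illustrative):

```hcl
resource "aws_eks_cluster" "main" {
  name     = "capstone-eks"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = aws_subnet.public[*].id   # EKS needs subnets in at least two AZs
  }

  # Ensure the cluster policy is attached before the cluster is created.
  depends_on = [aws_iam_role_policy_attachment.eks_cluster]
}

resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "capstone-nodes"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = aws_subnet.public[*].id

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 2   # illustrative sizing
  }

  depends_on = [
    aws_iam_role_policy_attachment.node_worker,
    aws_iam_role_policy_attachment.node_cni,
    aws_iam_role_policy_attachment.node_ecr,
  ]
}
```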

Now we will add, commit, and push this to GitHub as well.

Now we have all the files in GitHub:

7. Execute terraform scripts:

We will use Jenkins to check out the GitHub repository and execute the Terraform commands.

First, in Jenkins, we will install the Terraform plugin:

Also, in Global Tool Configuration, add the Terraform details:

Now we can write the checkout and apply steps in the pipeline.

Create a new pipeline project in Jenkins.

Give a proper description and provide the GitHub project URL:

To get the checkout script we will use the Pipeline Syntax generator.

We add the generated script to the checkout stage:

Now we add the init and apply stages to the pipeline as well.

Now we have the final script:

https://github.com/kotianrakshith/CapstoneProject3/blob/main/Jenkinsfile
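The actual Jenkinsfile is at the link above; a minimal declarative pipeline along these lines would work, assuming the repository's default branch is main:

```groovy
pipeline {
    agent any

    stages {
        stage('Checkout') {
            steps {
                // Generated via the Pipeline Syntax helper.
                git branch: 'main', url: 'https://github.com/kotianrakshith/CapstoneProject3.git'
            }
        }
        stage('Terraform Init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('Terraform Apply') {
            steps {
                sh 'terraform apply -auto-approve'
            }
        }
    }
}
```

If the Terraform binary is managed by the Terraform plugin rather than installed system-wide, a `tools` block referencing the name set in Global Tool Configuration can put it on the PATH.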

We can save this as a Jenkinsfile in the Git repository so it can easily be reused in the future.

Once saved, we click Build Now to start the pipeline:

We can see that it has run successfully:

8. Checking the deployment in AWS:

In Auto Scaling groups we see that there are two autoscaling groups, one for EKS and one for EC2, as we correctly deployed:

Each has one instance.

9. Connect to an instance and install the stress utility:

We will connect to one of the instances:

Here we will install the stress tool:

sudo yum install stress -y

10. Validate if autoscaling is working by putting load on autoscaling group:

Now we will run the stress command to put load on the system:

sudo stress --cpu 8 -v --timeout 3000s

After running it for some time, let us check the CPU utilization:

CPU utilization is above our threshold.

Now if we check the autoscaling group:

We can see that 3 instances have been deployed, which is our maximum limit.

Now let us stop the stress test and wait:

We can see that CPU utilization eventually falls to zero.

Now the instance count has decreased to two:

Eventually we will have only one:

So we have confirmed that autoscaling works.

That concludes our project. As planned, we deployed EC2 instances and an EKS cluster with autoscaling, and we verified that autoscaling works by performing a stress test.

Contributors

  • kotianrakshith
