AWS Docker
Overview
A large company needs to run multiple servers across multiple regions. What if a server goes down? Configuring servers from scratch takes time, which might result in losing customers. To tackle this, servers are launched in the form of containers. A container holds all the necessary configuration and dependency files, making it possible to launch a server within seconds. Docker is a container-based technology that allows us to build and run containers, and AWS Docker means running Docker containers on top of the AWS Cloud.
Introduction to AWS Docker
Docker is a software platform. It makes building, testing, and deploying applications easier and quicker.
You might have heard a phrase that is quite common among developers: 'The code was working on my machine. I don't know what went wrong.' In short, code runs on one machine but not on another because of dependency or version compatibility issues. Docker works to eliminate these issues.
Usually, setting up a new server takes a lot of time (downloading dependencies, checking versions, trial and error, etc.), whereas Docker brings this down to seconds. Docker packages software into units called containers that hold all the files needed to run your software. These containers can be launched on any host machine that has Docker, with no extra setup required.
How Does It Work?
Let's understand the working of Docker with the help of the following points:
- Docker creates software units called containers.
- A container is a bundle that contains everything needed to run the software: configuration files, dependencies, libraries, code, etc.
- You can run these containers on any machine with Docker installed, and you don't need any extra environment configuration for it.
- Using containers, your software is up and running within seconds; companies use them to deploy web servers and save time.
- A container is created from an image, which in turn is built from a script (a Dockerfile) specifying everything needed inside the container. The same image can launch multiple container servers in no time (see the sketch after this list).
- The containers use underlying resources, i.e., the resources of the machine on which they are hosted.
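As a rough sketch of this workflow, the commands below pull a public image and start a container from it; the nginx image and the port numbers are used purely for illustration.

```bash
# Download a public image from Docker Hub (nginx is just an example).
docker pull nginx:latest

# Start a container from the image: -d runs it in the background,
# -p maps port 8080 on the host to port 80 inside the container.
docker run -d --name web -p 8080:80 nginx:latest

# List the running containers to confirm the server is up.
docker ps
```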
The picture below compares the architecture of virtual machines with that of containers.
Why Use Docker?
Using Docker to run applications is entirely optional, but it comes with several advantages. Let's discuss some of the reasons why we should use Docker.
Rapid Application Deployment
- You don't need to set up a new environment when launching an application with Docker containers. Instead, you manage a Docker image that is downloaded and can be run in any environment. This saves a lot of time.
- A Docker image is built from a script (a Dockerfile) containing the requirements for running the application. These images are small and speed up application deployment. A simple example of such a file is sketched after this list.
- Dockerized applications are a preferred choice for Continuous Integration and Continuous Deployment (CI/CD).
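A minimal sketch of such a file, assuming a hypothetical Python application (the base image, file names, and start command are placeholders):

```bash
# Write a minimal, illustrative Dockerfile; app.py and requirements.txt
# are placeholders for a hypothetical application.
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
EOF

# Build an image from the Dockerfile and tag it.
docker build -t myapp:1.0 .
```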
Cost-Effective
- Containerized applications require less memory and fewer resources than virtual machines or bare-metal servers.
- We don't need to add more infrastructure every time we want to run another container for our applications, which keeps costs down.
- A single physical server can host multiple containers that share its memory and resources.
Highly Scalable and Flexible
- Adding a new container is much easier than adding a new virtual machine to the production environment, as the VMs need to be configured to run the application.
- Updating the containers to a new version of the application, or rolling back to the previous version, can be done with a single command (see the sketch after this list).
- Docker allows you to update and clean up containerized applications with zero downtime.
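For example, on Amazon ECS (covered later in this article), a rolling update is a single command that points the service at a different task definition revision; the cluster, service, and revision names below are placeholders.

```bash
# Roll the service forward to revision 2 of the task definition;
# ECS replaces containers gradually, so traffic keeps being served.
aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --task-definition myapp-task:2

# Rolling back is the same command pointed at the previous revision.
aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --task-definition myapp-task:1
```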
Secure and Isolated Environment
- Although we can run multiple containers on a single physical server, these containers are completely isolated from each other. Each container accesses resources in isolation, without being affected by the other containers.
- You can remove an application by deleting its containers, and a container keeps no file backups by default.
- No container can access another container's data without the right authorization.
Better Portability
- Docker is highly portable, as the entire application and its dependencies are packed into a single unit.
- This enhanced portability saves time and speeds up the development and deployment process.
When to Use Docker?
Some of the scenarios where using Docker will yield high results are:
Application Runs in Different Environments
- Normally, an application runs in different environments within an organization. We usually have a testing environment and a production environment.
- Moreover, the code is expected to run on different developers' machines with different operating systems. It is often seen that the code works well in the developers' environment but fails to run in the testing environment.
- With Docker containers, the application runs in the desired environment configuration, isolated from the outside world. This environment does not need to be configured separately for every new machine, and it works the same everywhere.
Increase Scalability and Handle More Users
- What if your application became popular overnight? How would you handle the immense amount of traffic? A slight delay in such cases might lead to application crashes and, eventually, lost customers.
- If we are unsure of the amount of traffic we will receive and want to scale up when traffic increases, then using Docker containers on AWS is an excellent solution.
- Docker containers on AWS save time by configuring and launching new servers within seconds, and these containers can handle traffic just like a normal application server. Moreover, the auto scaling features AWS provides for containers make the task even easier (a sketch follows this list).
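As a sketch of how this could be configured with the AWS CLI, the commands below attach a target-tracking scaling policy to an ECS service; the cluster and service names, capacity limits, and CPU target are placeholders.

```bash
# Register the ECS service as a scalable target (between 2 and 10 tasks).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/demo-cluster/demo-service \
  --min-capacity 2 \
  --max-capacity 10

# Target-tracking configuration: keep average CPU utilization around 70%.
cat > scaling-policy.json <<'EOF'
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  }
}
EOF

# Attach the policy so the service scales out and in automatically.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/demo-cluster/demo-service \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration file://scaling-policy.json
```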
Not Sure of Hosting Infrastructure
- For small businesses that are still deciding which infrastructure they will use in the long run, Docker on AWS is an ideal way to launch their application.
- Since Docker containers are highly portable, they can easily be moved to a new infrastructure environment with zero impact on the containerized application.
Software is Growing
- If the software is in the growing stage, the developers keep on adding new functionalities. This also means new libraries and packages get added.
- Keeping track of these changes for every environment becomes difficult, and communicating and documenting them also takes time. This can lead to a situation where a particular code version has stopped working and the developers do not know why.
- This entire scenario never arises if we use Docker. The software requirement changes are updated directly in the Dockerfile (the script that contains the environment specification for launching the containerized application), and the change is automatically reflected in every environment.
Run Docker on AWS
Multiple AWS services can be used to run Docker containers on AWS. Let's discuss each of them briefly.
Amazon ECS
- Amazon ECS (Elastic Container Service) can be used to deploy containerized applications in the AWS environment.
- It is used for managing, scaling, and deploying containerized applications.
AWS Fargate
- AWS Fargate is a serverless service designed for containers.
- Using AWS Fargate, the developer can focus on their code instead of managing the infrastructure.
- It can be combined with Amazon ECS and Amazon EKS to run containers without managing any servers.
Amazon EKS
- Amazon EKS (Elastic Kubernetes Service) is an AWS service that allows us to use Kubernetes in the AWS environment.
- Kubernetes is a tool for managing Docker containers. It monitors containers, auto-scales them, performs health checks, etc., to ensure the containers keep running.
Amazon ECR
- Amazon ECR (Elastic Container Registry) serves as a private repository for storing Docker images.
- Amazon ECR lets you push, pull, and manage versions of images (a brief sketch follows this list).
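A brief sketch of that workflow with the AWS CLI; the repository name, account ID, and region are placeholders.

```bash
# Create a private repository for the image.
aws ecr create-repository --repository-name myapp

# Authenticate the local Docker client against the registry.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the repository URI and push it.
docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
```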
AWS Batch
- AWS Batch is an AWS service used by developers and data scientists to run batch workloads in Docker containers.
- It allows you to run jobs such as machine learning workloads without installing software or setting up servers.
AWS Copilot
- AWS Copilot provides a command line interface for building and managing containerized applications on AWS.
- Containerized applications can be built, and the supporting infrastructure can be set up easily, using code templates.
- It makes running containerized applications via Amazon ECS, AWS Fargate, and AWS App Runner considerably more straightforward (a rough sketch follows this list).
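A rough sketch of how this might look; the application name, service name, and Dockerfile path are placeholders, and the exact flags may vary between Copilot versions.

```bash
# Initialize a load-balanced web service from a local Dockerfile and
# deploy it to a test environment in one step.
copilot init \
  --app demo \
  --name api \
  --type "Load Balanced Web Service" \
  --dockerfile ./Dockerfile \
  --deploy
```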
How to Deploy Docker Containers on AWS?
We have learned about Docker and why we use it. Now let's learn how to deploy Docker containers on AWS.
Create a Task Definition with Amazon ECS
- Log in to your AWS account and search for ECS in the search bar. You will see the service named Elastic Container Service; click on it.
- We are taken to the Task definitions page, where we can see the list of existing task definitions, if any. On clicking the Create new task definition button, you will get two options:
- Create new task definition
- Create new task definition with JSON
We will select the Create new task definition option.
- First, we need to configure task definitions and containers. Fill in the details like task family name, Image URI, etc.
- After filling in the required details and selecting the port and protocol, click on the Next button.
- Next, we configure the environment on the Configure environment, storage, monitoring, and tags page.
- Fill in the amount of storage.
- Leave all the other fields as default, scroll down, and click on Next.
- The next page is for reviewing all the configurations.
- Scroll down and check all the task definition configurations. Click on Create.
- Our task definition is created, and a success message is displayed. The status of the task definition shows as ACTIVE (an equivalent CLI sketch follows this list).
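For reference, a roughly equivalent task definition could be registered from the AWS CLI as sketched below; the family name, execution role, image URI, and CPU/memory sizes are placeholders.

```bash
# Task definition for a single container running on Fargate; all names,
# ARNs, and sizes below are placeholders.
cat > taskdef.json <<'EOF'
{
  "family": "myapp-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
EOF

# Register the task definition with Amazon ECS.
aws ecs register-task-definition --cli-input-json file://taskdef.json
```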
Configure the Cluster
We need to create a Cluster to run our containerized application using the following steps:
- Move to the Clusters section on the Amazon Elastic Container Service page. Here we will see the list of existing clusters, if any. Click on the Create Cluster button to create a new cluster.
- We need to fill in the details required for creating a cluster, like the cluster name, infrastructure, etc.
- The optional configurations can be kept as default; then click on the Create button.
- Our cluster is created successfully, along with a success message. For now, we will see no tasks running, and the number of services in the cluster reads zero (a CLI equivalent is sketched below).
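The same cluster could also be created from the AWS CLI with a single command; the cluster name is a placeholder.

```bash
# Create an ECS cluster; Fargate tasks can then be launched into it.
aws ecs create-cluster --cluster-name demo-cluster
```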
Next, we must create the security groups for the service we are going to deploy.
Security
We need to create two security groups: one for the Application Load Balancer that allows inbound traffic on the application port, and another for the cluster's tasks that grants access only to the load balancer's security group.
- Go to the AWS Console, search EC2, and click on EC2 service.
- Click on the security groups.
- Click on Create security group.
- This security group is for the Application Load Balancer and allows inbound traffic.
- Edit the inbound rule to allow traffic from any IP address. Click on Create.
- We created the first security group successfully.
- Again, click on Create security group and fill in a name for the second group, which is for the cluster's tasks.
- In the inbound rule, set the source to Custom and select the previously created security group. Click on Create.
- Our security groups are created (an equivalent CLI sketch follows this list).
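A CLI sketch of the same two security groups; the VPC ID, group names, and port are placeholders.

```bash
# Security group for the Application Load Balancer: allow HTTP from anywhere.
ALB_SG=$(aws ec2 create-security-group \
  --group-name demo-alb-sg \
  --description "ALB security group" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

aws ec2 authorize-security-group-ingress \
  --group-id "$ALB_SG" --protocol tcp --port 80 --cidr 0.0.0.0/0

# Security group for the cluster's tasks: accept traffic only from the ALB.
TASK_SG=$(aws ec2 create-security-group \
  --group-name demo-task-sg \
  --description "ECS task security group" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

aws ec2 authorize-security-group-ingress \
  --group-id "$TASK_SG" --protocol tcp --port 80 --source-group "$ALB_SG"
```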
We successfully created the security groups. Let's now deploy the service on the cluster.
Deploy
We can either deploy a service or deploy a standalone task. A service is more useful in this case because if any task goes down, the service automatically creates a new task.
Let's create the service.
- Go to the ECS cluster console. Under the Services tab, click on Deploy.
- In the Compute configuration, select Launch type.
- Select Service as the Application type. Choose the task definition family we created earlier and fill in the Service name.
- Set the desired number of tasks to 2.
- In Networking, keep the default VPC, and for the security group, choose an existing security group: select the one we created for the cluster's tasks.
- Under Load balancing, create a new load balancer and fill in its name.
- In the target group section, set a target group name, set the health check path to /, and set the health check period to 20 seconds. Click on Deploy.
- In a few minutes, it shows the service is deployed successfully.
- In the Services tab, we can see that the service is active.
- Click the Tasks tab to verify that two tasks are running.
If we stop a task here, the service will automatically start a new one in its place (a CLI equivalent of this deployment is sketched below).
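A CLI sketch of the same deployment, assuming the load balancer and target group already exist (created as in the console steps above); the subnet IDs, security group ID, and target group ARN are placeholders.

```bash
# Create a service that keeps two copies of the task running behind the
# load balancer; ECS replaces a task automatically if one stops.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition myapp-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/demo-tg/0123456789abcdef,containerName=myapp,containerPort=80"
```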
We have successfully deployed the Docker container on AWS.
AWS Docker Use Cases
There are numerous use cases of AWS Docker. Let's explore some of the common ones.
Microservices-Based Apps
- Applications based on a microservices architecture are best suited to being launched with Docker containers on AWS.
- The different services can be launched in isolated environments using containers and then integrated to work together as a single application (see the sketch after this list).
- The low resource consumption of Docker containers makes this a cost-effective approach.
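A minimal sketch of that idea with plain Docker: two hypothetical services are placed on one user-defined network, so they stay in separate containers but can reach each other by name (the image names are placeholders).

```bash
# Create a network shared by the application's services.
docker network create shop-net

# Each microservice runs in its own container; within the network they can
# reach each other by container name (e.g., http://orders).
docker run -d --name orders   --network shop-net orders-service:1.0
docker run -d --name payments --network shop-net payments-service:1.0
```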
Multi-Cloud or Hybrid Cloud Applications
- AWS Docker containers are highly portable.
- If you are looking to set up a multi-cloud or hybrid-cloud application, where one part of the application is hosted with one cloud service provider and another part with a different provider, then using Docker containers is highly beneficial.
- They allow you to move your containerized application easily from one cloud service provider to another, and back.
Infrastructure as Code
- AWS Docker allows the developers to focus on application development and frees them from the worries of setting up the infrastructure to host the application server.
- The infrastructure configuration specifications are mentioned in the form of a script which can be used anytime and anywhere to set up a whole new architecture within seconds.
- The monitoring of infrastructural resources can also be managed via a script, automating the entire flow.
Multi-Tenancy
- Multi-tenancy is a cloud deployment model in which a single application serves numerous users/customers while keeping their data completely isolated from each other.
- There are four different kinds of multi-tenancy models, whose names are self-explanatory:
- Shared Database – Isolated Schema
- Shared Database – Shared Schema
- Isolated Database – Shared App Server
- Docker-based Isolated Tenants
- In each kind of multi-tenancy, the application server remains the same, although the users' data remain isolated.
- Using Docker containers, we can maintain a separate container for each user/tenant; the code belonging to a particular tenant runs only in its respective container.
Disaster Recovery
- Ensuring business continuity without any data loss is the goal of every business organization. Experiencing downtimes can prove to be a huge loss to the business.
- Docker containers are launched from an image built from a script, so a replacement application server can be ready to serve traffic within seconds.
- A data volume attached to the container stores the container's data and can be backed up as a precaution for disaster recovery (a minimal sketch follows this list).
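As a minimal sketch, a named volume keeps the application's data outside the container's writable layer so that it can be backed up independently; the image name and paths are placeholders.

```bash
# Create a named volume and mount it into the container; the data under
# /var/lib/myapp survives even if the container itself is deleted.
docker volume create myapp-data
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:1.0
```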
Conclusion
- AWS Docker helps to deploy and scale applications quickly in any environment.
- Docker packages software into units called containers, which are created from images built from a script (a Dockerfile).
- Docker on AWS enables rapid application deployment and is a cost-effective way to build and run applications in a secure, isolated environment at any time.
- We can run Docker on AWS using multiple AWS services:
- Amazon ECS
- AWS Fargate
- Amazon EKS
- Amazon ECR
- AWS Batch
- AWS Copilot
- AWS Docker has numerous use cases, such as multi-cloud or hybrid cloud applications, microservices-based apps, Infrastructure as Code, etc.
- AWS Docker is very efficient as it helps to build, test, and deploy the application quickly.