IT Solutions Provider

Categories
Blogs

How To Create an AWS SQS Queue and Send Messages



In the dynamic realm of cloud computing, efficient communication between components is paramount for building robust and scalable applications. Amazon Simple Queue Service (SQS) emerges as a beacon in this landscape, offering a reliable and scalable message queuing service.

Understanding Amazon SQS Queue

Amazon SQS is a fully managed message queuing service that offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components.

Key Features of Amazon SQS

Scalability: SQS seamlessly scales with the increasing volume of messages, accommodating the dynamic demands of applications without compromising performance.
Reliability: With redundant storage across multiple availability zones, SQS ensures the durability of messages, minimizing the risk of data loss and enhancing the overall reliability of applications.
Simple Integration: SQS integrates effortlessly with various AWS services, enabling developers to design flexible and scalable architectures without the need for complex configurations.
Fully Managed: As a fully managed service, SQS takes care of administrative tasks such as hardware provisioning, software setup, and maintenance, allowing developers to focus on building resilient applications.
Different Message Types: SQS supports both standard and FIFO (First-In-First-Out) queues, providing flexibility for handling different types of messages based on their order of arrival and delivery requirements.

Use Cases of Amazon SQS Queue

Decoupling Microservices: SQS plays a pivotal role in microservices architectures by decoupling individual services, allowing them to operate independently and asynchronously.
Batch Processing: The ability of SQS to handle large volumes of messages makes it well-suited for batch processing scenarios, where efficiency and reliability are paramount.
Event-Driven Architectures: SQS integrates seamlessly with AWS Lambda, making it an ideal choice for building event-driven architectures. It ensures that events are reliably delivered and processed by downstream services.

There are two types of Queues:

Standard Queues (default)

SQS offers the standard queue as the default queue type. It supports a nearly unlimited number of transactions per second and guarantees that a message is delivered at least once. However, occasionally more than one copy of a message might be delivered, and messages may arrive out of order. It provides best-effort ordering, which means messages are generally delivered in the same order as they are sent, but this is not guaranteed.

FIFO Queues (First-In-First-Out)

The FIFO queue complements the standard queue. It guarantees ordering: messages are received in the same order in which they are sent.

The most important features of a FIFO queue are ordering and exactly-once processing: a message is delivered once and remains available until the consumer processes and deletes it. A FIFO queue does not allow duplicates to be introduced into the queue.

FIFO queues are limited to 300 transactions per second (more with batching) but have all the capabilities of standard queues.
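As a sketch of how these FIFO semantics surface in the API: a FIFO send requires a MessageGroupId (the ordering scope) and, unless content-based deduplication is enabled on the queue, a MessageDeduplicationId. The queue URL and IDs below are hypothetical, and the boto3 call itself is shown only in a comment.

```python
def fifo_send_params(queue_url, body, group_id, dedup_id=None):
    """Build the kwargs for SQS send_message against a FIFO queue.

    Messages that share a MessageGroupId are delivered in order;
    MessageDeduplicationId lets SQS drop retransmitted duplicates
    within the deduplication interval.
    """
    params = {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,
    }
    if dedup_id is not None:
        params["MessageDeduplicationId"] = dedup_id
    return params

# With boto3 (not imported here), this would be used as:
#   sqs = boto3.client("sqs")
#   sqs.send_message(**fifo_send_params(url, "order #1", "orders", "msg-001"))
```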

Dead-Letter Queues

This is not a type of queue but rather a use case and configuration: a standard or FIFO queue that has been designated as a dead-letter queue. The main task of the dead-letter queue is handling message failure. It lets you set aside and isolate messages that can’t be processed correctly, so you can determine and analyze why they couldn’t be processed.
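Wiring up a dead-letter queue means attaching a RedrivePolicy attribute to the source queue. Here is a minimal sketch of building that attribute; the ARN used below is a placeholder, and the boto3 call is shown only in a comment.

```python
import json

def redrive_policy(dlq_arn, max_receives=5):
    """Return the RedrivePolicy attribute for a source queue.

    After a message has been received max_receives times without
    being deleted, SQS moves it to the dead-letter queue at dlq_arn.
    """
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": max_receives,
    })

# Applied with boto3 (not imported here) via:
#   sqs.set_queue_attributes(
#       QueueUrl=source_queue_url,
#       Attributes={"RedrivePolicy": redrive_policy(dlq_arn)})
```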

Let’s get our hands dirty.

Log in to the Management Console, type SQS in the search box, select SQS under Services, then click Create Queue in the SQS console.
On the Create queue page, in the Details section under Type, choose your queue type. The default is Standard, so I will proceed to create a standard queue. Under Name, call it mydemoqueue, then scroll down.
In the Configuration section, you can specify the visibility timeout, delivery delay, receive message wait time, and message retention period. I will keep the default settings. Scroll down.
Keep the default encryption and scroll down.
Leave all the other settings as default, scroll down, and click Create Queue.
We will now send and poll for messages with the queue we just created. Select the queue, then click Send message.
In the Send and receive messages section, enter any message under Message body, then click Send message.

The message was sent successfully.

Let’s now poll for our message. Scroll down and click poll for messages.

Under Receive messages we can see one message, and if you click on it, it’s the message we sent.
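The same send-and-poll flow can also be scripted. Below is a minimal sketch of the receive parameters involved; the queue URL is a placeholder, and the boto3 calls are shown in comments since they require AWS credentials.

```python
def receive_params(queue_url, wait_seconds=10, max_messages=1):
    """Parameters for SQS receive_message. A WaitTimeSeconds greater
    than 0 enables long polling, which cuts down on empty responses
    compared with the default short polling."""
    return {
        "QueueUrl": queue_url,
        "WaitTimeSeconds": wait_seconds,
        "MaxNumberOfMessages": max_messages,
    }

# Full flow with boto3 (not imported here):
#   sqs = boto3.client("sqs")
#   sqs.send_message(QueueUrl=url, MessageBody="hello from mydemoqueue")
#   resp = sqs.receive_message(**receive_params(url))
#   for msg in resp.get("Messages", []):
#       print(msg["Body"])
#       # Messages must be deleted explicitly, or they reappear after
#       # the visibility timeout expires.
#       sqs.delete_message(QueueUrl=url, ReceiptHandle=msg["ReceiptHandle"])
```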

This brings us to the end of this blog.


Stay tuned for more.


If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].


Thank you!



Amazon Elastic Container Service (ECS)


Introduction:

In the ever-evolving landscape of cloud computing, managing containerized applications efficiently has become a paramount concern for businesses seeking agility, scalability, and ease of deployment. Amazon Elastic Container Service (Amazon ECS) emerges as a powerful solution within the Amazon Web Services (AWS) ecosystem, providing a robust platform for orchestrating and managing Docker containers at scale.

What is a Container?

In the world of software, a container can be thought of as a compact, self-sufficient unit that holds everything a piece of software needs to run. Just like a shipping container in the real world, which holds all the goods needed for transportation, a software container encapsulates the necessary components for a program to function: the code itself plus any libraries, dependencies, and environment settings it requires.

Overview of Amazon ECS

Amazon ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications using Docker containers. Moreover, it eliminates the need for manual intervention in provisioning and scaling infrastructure, allowing developers to focus on writing code and building applications rather than managing the underlying infrastructure. This automated approach streamlines the development process, making it more efficient and conducive to rapid application deployment.

ECS architecture

ECS Terminology:

Task Definition

This is a blueprint that describes how a Docker container should launch. If you are already familiar with AWS, it is like a Launch Template, but tailored for a Docker container instead of an instance. It contains settings such as the exposed ports, Docker image, CPU shares, memory requirements, command to run, and environment variables.
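As an illustration of what such a blueprint contains, here is a minimal task-definition dict in the shape accepted by boto3's ecs.register_task_definition. The family name, image, and CPU/memory sizes below are hypothetical.

```python
def web_task_definition(image):
    """A minimal Fargate-compatible task definition: one 'web'
    container exposing port 80, with CPU and memory declared at the
    task level (required for Fargate) and one example environment
    variable."""
    return {
        "family": "web",
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": "256",     # 0.25 vCPU
        "memory": "512",  # 512 MiB
        "containerDefinitions": [{
            "name": "web",
            "image": image,
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "environment": [{"name": "APP_ENV", "value": "prod"}],
        }],
    }

# Registered with boto3 (not imported here) via:
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**web_task_definition("nginx:latest"))
```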

Task

This is a running container with the settings defined in the Task Definition. Consequently, it can be thought of as an “instance” of a Task Definition.

Service

Defines long-running tasks of the same Task Definition. This can be one or multiple running containers, all using the same Task Definition.

Cluster

A logical grouping of EC2 instances. When an instance launches, the ECS agent running on it registers the instance with an ECS cluster.

Container Instance

This is just an EC2 instance that is part of an ECS cluster and has Docker and the ECS agent running on it.

Amazon ECS is a service we can use for running Docker containers on AWS, either in a serverless manner or with the underlying infrastructure under our control.

Amazon Elastic Container Registry (ECR) is where we can store the images for our containers.

Images

A container image is essentially a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Docker, the popular containerization platform, defines container images with a layered file system and metadata.

Amazon ECS (Elastic Container Service) provides flexibility in how you launch and manage containers, offering two primary launch types:

1. EC2 (Elastic Compute Cloud)

2. Fargate.

Each launch type caters to different use cases, allowing users to choose the one that aligns with their specific requirements.

Amazon ECS EC2 Launch Type:

The EC2 launch type enables you to run containers on a cluster of Amazon EC2 instances that you manage. This launch type is suitable for users who want more control over the underlying infrastructure and require customization of EC2 instances.

Key Features and Considerations:

Infrastructure Control:

Users have direct control over the EC2 instances, allowing customization of the instances to meet specific requirements, such as installing specific software.

Legacy Applications:

Well-suited for migrating legacy applications that require access to features not available in Fargate or applications that need specific networking configurations.

Cost Management:

Provides more granular control over EC2 instance types, allowing users to optimize costs based on their specific workload requirements.

Custom Networking:

Users can leverage Amazon VPC (Virtual Private Cloud) to define custom networking configurations, including subnet placement and security group settings.

Amazon ECS Fargate Launch Type:

Fargate is a serverless launch type that allows you to run containers without managing the underlying infrastructure. Furthermore, with Fargate, AWS takes care of provisioning and scaling the infrastructure, allowing users to focus solely on defining and running their containerized applications.

Key Features and Considerations:

Serverless Deployment:

Fargate abstracts away the underlying infrastructure, providing a serverless experience for running containers.

Simplified Operations:

Reduces operational overhead as users don’t need to worry about patching, updating, or scaling EC2 instances. Fargate takes care of these tasks automatically.

Resource Isolation:

Containers run in an isolated environment, ensuring resource allocation and utilization are managed effectively. This isolation provides a high level of security and performance.

Cost Efficiency:

Fargate charges based on the vCPU and memory used by your containers, allowing for precise cost management without the need to manage and pay for underlying EC2 instances.

Networking Simplification:

Fargate simplifies networking by abstracting away the complexities of Amazon VPC. Users define task-level networking, and Fargate handles the rest.
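The Fargate points above can be made concrete with a sketch of launching a task via boto3's ecs.run_task. The cluster, task definition, subnet, and security group IDs below are placeholders.

```python
def fargate_run_task_params(cluster, task_def, subnets, security_groups):
    """Parameters for ecs.run_task with the Fargate launch type.

    With awsvpc networking, each task gets its own elastic network
    interface in one of the given subnets, so networking is defined
    at the task level rather than per EC2 instance.
    """
    return {
        "cluster": cluster,
        "taskDefinition": task_def,
        "launchType": "FARGATE",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "securityGroups": security_groups,
                "assignPublicIp": "ENABLED",
            }
        },
    }

# e.g. (boto3 not imported here):
#   ecs.run_task(**fargate_run_task_params(
#       "my-cluster", "web:1", ["subnet-0abc"], ["sg-0def"]))
```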


How To Create Amazon Route 53


Amazon Route 53
In the dynamic landscape of cloud computing, efficient and reliable domain name system (DNS) management is crucial for the seamless operation of web applications and services. One powerful solution at the forefront of DNS services is Amazon Route 53. As a scalable and highly available cloud-based domain registration and routing service, Route 53 plays a pivotal role in ensuring the accessibility, performance, and resilience of your web infrastructure.

What is Amazon Route 53?

Amazon Route 53 is not just a catchy name: it refers to port 53, traditionally assigned to DNS. Route 53 is AWS’s DNS service and one of the most well-known, reliable, and cost-effective services for managing domains. It is a highly available and scalable cloud Domain Name System (DNS) that provides domain registration, DNS routing, and health checking for your applications. Whether you’re launching a new website, configuring subdomains, or optimizing the performance of your web applications, Route 53 has you covered.

Some Amazon Route 53 useful terminology:

Domain: Domains are your standard URLs like amazon.com and google.com.
Subdomains: A subdomain is a unique URL that lives on your purchased domain as an extension in front of your regular domain, like www.google.com and docs.google.com.

Hosted Zone: It’s the way AWS describes the information you provide to define how traffic aimed at your domain name will be managed. A hosted zone is a container for records, and records contain information about how you want to route traffic for a specific domain, such as example.com, and its subdomains (web.example.com, admin.example.com). A hosted zone and the corresponding domain have the same name. When we create a public hosted zone, it automatically creates an SOA record and NS records that are unique to each hosted zone.

DNS Records: DNS records are what contain the actual information that other browsers or services need to interact with, like your server’s IP address. Nameservers, on the other hand, help store and organize those individual DNS records. Nameservers are the physical phone book itself and DNS records are the individual entries in the phone book.
Start of Authority (SOA): The type of resource record that every DNS zone must begin with. It contains the following information:

1. Contains the owner’s info (email id).
2. Contains info of the authoritative server.
3. Serial number which is incremented with changes to the data zones. (In case of updates).
4. Stores the name of the server supplying the data.
5. Stores the admin zone.
6. Current version of the data file.
7. Time to live.

Name Server (NS) records: As discussed earlier, nameservers are the phone book itself. They play an important role in connecting a URL with a server IP address in a much more human-friendly way. Nameservers look like any other domain name. When you look at a website’s nameservers, you’ll typically see a minimum of two (though you can use more). Here’s an example of what they look like:

  • ns-380.awsdns-47.com
  • ns-1076.awsdns-06.org

They are used by top-level domain servers to direct traffic to the content DNS server; an NS record specifies which DNS server is authoritative for a domain. More broadly, DNS servers come in four types: recursive resolvers, root nameservers, TLD nameservers, and authoritative nameservers.

Time To Live (TTL): The length of time the DNS record is cached on the resolving server, in seconds. The default is 48 hours.
Canonical Name (CNAME): A CNAME, or Canonical Name record, points to another domain name rather than an IP address. For example, say you have several subdomains, like www.mydomain.com, mail.mydomain.com, etc., and you want these subdomains to point to your main domain name, mydomain.com.
Alias Record: You use an ALIAS record when you want the domain itself (not a subdomain) to “point” to a hostname. The ALIAS record is similar to a CNAME record, which points subdomains to a hostname; since a CNAME record can only be used for subdomains, the ALIAS record fills this gap. Ex: @ 10800 IN ALIAS example.example.com. Please note the final dot (.) at the end is necessary for the record to work correctly.
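To see what an alias record looks like through the Route 53 API (boto3's change_resource_record_sets), here is a sketch; the hosted zone IDs and names below are placeholders. Unlike a CNAME, an alias record carries no TTL of its own and can sit at the zone apex.

```python
def alias_record_change(zone_id, record_name, target_dns, target_zone_id):
    """Build a ChangeBatch that UPSERTs an alias 'A' record pointing
    record_name at an AWS resource (for example, a load balancer's
    DNS name)."""
    return {
        "HostedZoneId": zone_id,
        "ChangeBatch": {
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "AliasTarget": {
                        "DNSName": target_dns,
                        "HostedZoneId": target_zone_id,  # zone of the target resource
                        "EvaluateTargetHealth": False,
                    },
                },
            }],
        },
    }

# Applied with boto3 (not imported here) via:
#   route53 = boto3.client("route53")
#   route53.change_resource_record_sets(**alias_record_change(
#       "Z111", "example.com.", "my-alb-123.us-east-1.elb.amazonaws.com.", "Z222"))
```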

REGISTER A NEW DOMAIN NAME IN ROUTE 53

One of the initial steps in establishing your online presence is securing a memorable and relevant domain name. With Amazon Route 53, this process becomes a breeze. Let’s see the steps.

Right away, go to the Management Console, type Route 53 in the search box, and select Route 53 under Services.

In the Route 53 dashboard, we first have to check whether the domain name is available. So, under Register domain, type the domain name; I call it viktechsolutions.com. Once you’ve typed your domain name, click checkout.

If the domain name you are trying to register is available, select it. I am trying to register viktechsolutions.com, and it is available, so I will select it. Then, on the right side of the Register domain navigation pane, the selected domain and its price will appear; click Proceed to checkout.

You will then be brought to a new page where you need to enter your contact information.  Fill in your details.  Under privacy protection, make sure it’s enabled to hide your contact details, and then click next.

On this page review your contact information, tick the box on “terms and conditions” and then click submit.

This is all we need to do to register a domain name.

Domain registration can take up to 3 days to complete. For me, it took about 20 minutes, and the domain is now available for use.

Types of Routing policies:

Simple Routing policy.

This is the default routing policy. When a record has multiple values, Route 53 returns them in random order; it does not take resource status (health) into account. It can be used across regions.

Failover Routing

AWS Failover Routing Policy for Route 53

It allows us to route traffic to a resource when that resource is healthy, or to a different resource when the first resource is unhealthy. We can associate health checks with this type of policy.

Latency Routing Policy

Amazon Latency Routing Policy for Route 53
It is mainly used when a website is hosted in multiple AWS regions. It routes users to the region that gives them the lowest latency, which improves performance for a worldwide user base. The response to a request is determined purely by measured latency, not by geographic distance to the region of the resource.

Weighted Routing

Amazon Weighted Routing Policy
This routes a single name to multiple resources and controls the percentage of requests that go to each endpoint. This approach is heavily used in blue/green deployments, where you release a software product from the dev stage into live production. Depending on your requirements, you can shift traffic between endpoints at any given point.
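A sketch of a blue/green split using weighted record sets; the names and IP addresses below are placeholders. Each endpoint gets its own record set with a SetIdentifier, and it receives roughly weight / total-weight of the traffic.

```python
def weighted_records(name, endpoints):
    """Build weighted 'A' record sets. endpoints is a list of
    (set_identifier, ip_address, weight) tuples; Route 53 answers
    each endpoint in proportion to weight / sum of all weights."""
    return [{
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,
        "Weight": weight,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    } for set_id, ip, weight in endpoints]

# A 90/10 blue/green split during a rollout:
records = weighted_records("app.example.com.", [
    ("blue", "192.0.2.10", 90),
    ("green", "192.0.2.20", 10),
])
```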

Geo Location Routing Policy

Amazon Geolocation Policy for Route 53
Geolocation routing policy refers to the practice of directing network traffic based on the geographical location of the user or the destination server. This approach is often employed by organizations to optimize the performance and efficiency of their network services.

Multi-Value Routing Policy

Unlike the simple routing policy, where we specify multiple IP addresses in a single “A” record set, with the multi-value routing policy we create a separate “A” record set for each IP address we want to define. With this approach we can monitor each endpoint better than with simple routing, by attaching a health check to each record set.

This brings us to the end of this blog. Stay tuned for more.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!



How To Configure Application Load Balancers in AWS

What is an Application Load Balancer?

The Application Load Balancer is a feature of Elastic Load Balancing. Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones.


It monitors the health of its registered targets, and routes traffic only to the healthy targets.

Key Features and Benefits.

High Availability: A load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. If one server fails or becomes overloaded, the load balancer redirects traffic to healthy servers, preventing service interruptions.


A listener checks for client connection requests, using the protocol and port you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets.

SSL Termination: Application Load balancer can handle SSL/TLS termination, offloading the decryption process from the backend servers. This not only reduces the compute burden on servers but also simplifies certificate management.
Health Checks: Application Load Balancers continuously monitor the health of backend servers through health checks. If a server fails a health check, the load balancer automatically redirects traffic to healthy servers, ensuring optimal performance and reliability.
Content-Based Routing: Application Load balancer can route traffic based on the content of the requests. This feature is valuable for applications with multiple services or microservices, allowing for granular control over how traffic is distributed.

Let’s do a little bit of hands-on.

To create a target group, log in to the Management Console at aws.amazon.com. On the left side of the navigation pane, scroll down; under Load Balancing, click Target Groups, then click Create target group.
Under Basic configuration, choose the target type; we will go with Instances, then scroll down.

Under Target group name, give it a name; call it prod-target-group. The protocol is going to be HTTP on port 80. Under VPC, select the dropdown and make sure you select your VPC; I had created a custom VPC called prod-VPC, so I will select it. Leave the protocol version on HTTP1, then scroll down.

Under Health checks, click the Advanced health check settings dropdown and scroll down. Under the success status codes, add 301 and 302; we need these status codes for when we redirect traffic from HTTP to HTTPS. Scroll down and click Next.
We will now add our instance to the target group. I have one instance running in my account, called webserverAZ1, and we can see it under Available instances. To add an instance to the target group, select it, then click Include as pending below.
When you click Include as pending, the instance is added to the targets list. Once you see your EC2 instance listed as pending, click Create target group.
Next, we will create an application load balancer to route internet traffic to this target group.
To create an application load balancer, on the left side of the navigation pane, scroll down, and under Load Balancing, select Load Balancers, then click Create load balancer.
On this page, scroll down; remember, we are creating an application load balancer, so under Application Load Balancer, click Create.
Under Basic configuration, give your load balancer a name; call it prod-application-load-balancer. Scroll down. The application load balancer is going to be internet-facing, so select the radio button next to Internet-facing, and select the radio button for IPv4.
Scroll down to Network mapping. In the VPC section, select the dropdown and choose your VPC.
Under Mappings, we will select the us-east-1a Availability Zone, and in us-east-1a we want to make sure the application load balancer has reach into public subnet az1.

So, in us-east-1a, make sure you’ve selected public subnet az1.

Then scroll down again, and in us-east-1b, make sure you have again selected public subnet az2. Remember, the application load balancer always works in the public subnets.
Scroll down to the security group, remove the default security group, select the dropdown, and choose your security group. I created a security group that allows traffic on ports 80 and 443 (HTTP and HTTPS) and called it the application-load-balancer security group; I will select it.
Then scroll down. Under Listeners and routing, the first listener we will create is on port 80: the protocol is HTTP and the port is 80. Under Default action, select the dropdown and choose the target group.
Scroll down; we will leave all the other options as default, then click Create load balancer.
We have successfully created the application load balancer; click View load balancer.
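For reference, the same target group could be created with boto3's elbv2.create_target_group. The name and VPC ID below are placeholders, and the Matcher mirrors the 301/302 success codes we added in the console.

```python
def target_group_params(name, vpc_id):
    """Parameters for elbv2.create_target_group matching the console
    walkthrough: HTTP on port 80, instance targets, and 200/301/302
    accepted as healthy so HTTP-to-HTTPS redirects pass health checks."""
    return {
        "Name": name,
        "Protocol": "HTTP",
        "Port": 80,
        "VpcId": vpc_id,
        "TargetType": "instance",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/",
        "Matcher": {"HttpCode": "200,301,302"},
    }

# e.g. (boto3 not imported here):
#   elbv2 = boto3.client("elbv2")
#   elbv2.create_target_group(**target_group_params("prod-target-group", "vpc-0abc"))
```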
This brings us to the end of this blog. Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


How To Configure AWS Transit Gateway


AWS Transit Gateway

The network architecture in the cloud can quickly become complex as the number of VPCs, accounts, regions, on-premises data centers, and edge locations grows. Transit Gateways allow customers to simplify the network layout and connectivity between all these environments.

What is AWS Transit Gateway?

AWS Transit Gateway serves as a central hub that connects VPCs and on-premises networks, eliminating the need for individual connections.

With Transit Gateway, you only need to create connections from the VPCs, VPNs, and Direct Connect links to the Transit Gateway. Transit Gateway will then dynamically route traffic between all the connected networks.

Why Choose Amazon Transit Gateway?

  • Simplifies Connectivity: Transit Gateway lets you quickly connect to one central gateway, making it easier to interconnect all VPCs and on-site networks, regardless of how many connected accounts there are. Transit Gateway also supports dynamic and static layer-3 routing between Amazon VPCs and VPNs.
  • Facilitates Greater Control and Monitoring: AWS Transit Gateway allows users to monitor and manage all their Amazon VPC and edge connections in one place. The service makes it easier to find issues and handle events as they come. You may also enable Equal Cost Multipath (ECMP) between these connections to load balance between paths and increase bandwidth.
  • Bandwidth On Demand: Obtain the network bandwidth you need to move terabytes of data at a time for your applications, or even migrate into the cloud. You may add Amazon VPCs to your network without needing to provision extra connections from on-site networks.
  • Highly Secure: With its integration with IAM, users may control who can access Transit Gateway. Create and manage user accounts and groups, and establish permissions for them centrally.

Pricing

This service charges you per number of connections you attach to the Transit Gateway per hour, and also for each GB of data processed. The owner is billed per hour their Amazon VPCs or VPN are connected, starting from the instant the VPC is connected to Transit Gateway and until the VPC is disconnected. Note that a portion of an hour is still billed as one full hour.

So, as we can see, AWS Transit Gateway is an awesome service. AWS further describes it as a cloud router that connects VPCs and on-premises locations together using a central hub. To show how Transit Gateway simplifies connectivity, we will look at different scenarios. First, we will consider a fully meshed architecture without AWS Transit Gateway; this will help us understand the problem Transit Gateway is trying to solve.

When we have many VPC and on-premises connections without Transit Gateway, the peering connections we set up with VPC peering can become extremely complex.
Mesh Architecture without Transit Gateway

Examining the above architecture, we find four distinct Virtual Private Clouds (VPCs), denoted A, B, C, and D, interconnected through VPC peering links. The complexity is already apparent: six peering links just to connect four VPCs. As the number of VPCs increases, the number of links grows quadratically.
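The growth is easy to quantify: a full mesh of n VPCs needs one peering link per unordered pair, i.e. n(n-1)/2 links.

```python
def peering_links(n):
    """Number of peering links needed to fully mesh n VPCs:
    one link per unordered pair of VPCs, i.e. n*(n-1)/2."""
    return n * (n - 1) // 2

# 4 VPCs need 6 links, as in the diagram; 10 VPCs would already need 45.
```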


In this complicated setup, we have six connections between the VPCs, and the corporate office is linked in through a customer gateway. Now, here’s where it gets tricky: connecting the corporate office to each VPC using Site-to-Site VPNs. This involves having a virtual gateway in each VPC and making a separate secure connection (VPN) to the customer gateway for each VPC. So, in the end, we’re dealing with four of these VPN connections, and it gets even more complicated if we want a backup plan (redundancy).


If we dive a bit deeper into the problem, adding redundancy means we need an extra customer gateway and twice the number of those VPN connections. The more we look into it, the more complex it becomes, turning our setup into a really tangled network.


Now, let’s check out the same setup but using a Transit Gateway. This other option can make all the connections simpler and easier to deal with.

Mesh Architecture with Transit Gateway

In this situation, as seen from the above architecture, we have the same four VPCs and a corporate office. Now, let’s simplify things by putting a Transit Gateway in the middle. It acts like the main hub that connects all the VPCs and the on-premises networks.

So, each of these VPCs gets linked to the Transit Gateway. You choose a subnet from each availability zone, which helps in directing traffic within that zone for other subnets. It’s like giving each area its own route.

Now, there’s also the customer data center: the corporate office has a customer gateway that connects to the Transit Gateway as well. That’s pretty much the setup. This service allows us to connect, through this central cloud router hub, to any of these VPCs.

Transit Gateways (TGWs) can be attached to VPNs, Direct Connect Gateways, third-party appliances, and even other Transit Gateways in different regions or accounts.

Explore how AWS Transit Gateway seamlessly integrates with Direct Connect Gateway, enabling transitive routing for growing companies with multiple VPCs.

AWS Transit Gateway and Direct Connect Gateway
Instead of using a complicated Site-to-Site VPN, our corporate office has a customer router. We connect to a DX (Direct Connect) location using a DX Gateway.

Now, the DX Gateway has a connection with the Transit Gateway. This connection is called an association. We then physically connect back to the corporate office from Direct Connect, creating something called a Transit VIF. This is like a special connection used only when you’re connecting a DX Gateway to a Transit Gateway.

This setup supports full transitive routing between on-premises, the Transit Gateway, and all those connected VPCs. When your company gets bigger and uses more VPCs in different areas, and you want them all to connect smoothly, Transit Gateway becomes super helpful.
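As a sketch of the attachment step described above: attaching a VPC to a Transit Gateway with boto3 takes the gateway ID, the VPC ID, and one subnet per Availability Zone. All IDs below are placeholders.

```python
def tgw_vpc_attachment_params(tgw_id, vpc_id, subnet_ids):
    """Parameters for ec2.create_transit_gateway_vpc_attachment.

    One subnet per Availability Zone gives the Transit Gateway a
    network interface in that AZ, through which it routes traffic
    for all subnets in the zone.
    """
    return {
        "TransitGatewayId": tgw_id,
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
    }

# e.g. (boto3 not imported here):
#   ec2 = boto3.client("ec2")
#   ec2.create_transit_gateway_vpc_attachment(**tgw_vpc_attachment_params(
#       "tgw-0abc", "vpc-0def", ["subnet-01", "subnet-02"]))
```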

Conclusion.

AWS Transit Gateway makes cloud network setups simpler. It acts like a hub connecting your VPCs, VPNs, and data centers, making things easy to manage. It does away with confusing mesh setups, provides easy scalability, and keeps your network organized and secure.

As your cloud presence grows, Transit Gateway is the key to keeping your network simple, efficient, and secure.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!