Accend Networks San Francisco Bay Area Full Service IT Consulting Company


Find Out What Decoupling Workflows in AWS Is

Decoupling workflows involves breaking down the components of a system into loosely connected modules that can operate independently. This not only enhances scalability and flexibility but also improves fault tolerance, as failures in one component do not necessarily impact the entire system. AWS provides a variety of services that facilitate decoupling, such as Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and AWS Step Functions.

To explain this, we will use the following scenario.

Direct Integration in Application Components

Consider the front end, also known as the web tier, where user interactions take place. This internet-facing layer is where crucial data, such as customer orders, originates. Moving seamlessly to the next layer, we encounter the app tier. This tier is responsible for processing the incoming orders and managing other relevant information received from the web tier.

In a direct integration scenario, the web tier and the app tier are connected without intermediary components. This approach comes with significant challenges.

One significant drawback arises when the app tier is required to keep pace with the incoming workload. In the event of a sudden surge in demand, such as an influx of customer orders, the app tier must be capable of scaling up rapidly to handle the increased load. This real-time scalability requirement is essential to prevent any system failures and ensure a seamless customer experience.

We can use auto-scaling mechanisms in such scenarios. Auto-scaling, although efficient, involves the automatic launch of instances to meet the rising demand. However, the time taken for these instances to become operational may introduce delays, potentially leading to the loss of critical information. In the context of customer orders, this delay could result in a lost customer order.

Decoupled Workflows

Instead of having the web tier and the app tier directly connected, we’ll put an SQS queue in the middle.

The web tier now talks to the queue, placing orders in it as messages. The app tier, in turn, keeps an eye on the queue, checking whether there are any messages to process. If there is a sudden flood of orders, the queue handles it easily: the extra orders simply wait in the queue until the app tier is ready to process them, so the app tier is under no pressure to keep up with the entire workload at once. The app tier may still need to scale, but it no longer has the direct-integration problem of losing orders, because messages can sit in the queue for quite some time and be processed as soon as the app tier is ready.
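The pattern above can be sketched with Python's standard-library queue module standing in for SQS. This is a local simulation of the idea, not the real service:

```python
import queue
import threading

# A local stand-in for an SQS queue: the web tier enqueues orders,
# the app tier consumes them at its own pace.
order_queue = queue.Queue()
processed = []

def web_tier(orders):
    # Producer: a burst of orders never blocks; they simply wait in the queue.
    for order in orders:
        order_queue.put(order)

def app_tier():
    # Consumer: drains the queue whenever it is ready.
    while True:
        try:
            order = order_queue.get(timeout=0.5)
        except queue.Empty:
            break
        processed.append(order)
        order_queue.task_done()

web_tier([f"order-{i}" for i in range(100)])  # sudden surge of 100 orders
worker = threading.Thread(target=app_tier)
worker.start()
worker.join()
print(len(processed))  # all 100 orders eventually processed, none lost
```

Even though the producer finished its burst before the consumer started, no order was lost, which is exactly the property direct integration cannot guarantee.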

We can also see decoupling in Lambda invocations, where we have synchronous and asynchronous invocation of Lambda functions.

When you invoke a function synchronously, Lambda runs the function and waits for a response. With this model, there are no built-in retries. You must manage your retry strategy within your application code.

On the other hand, by decoupling Lambda functions and using asynchronous invocation, we gain several advantages: our Lambda function does not have to keep up with a surge of events, since the events are queued and processed as the function becomes available.

A destination can send records of asynchronous invocations to other services. You can configure separate destinations for events that fail processing, such as a dead-letter queue that holds failed messages for later analysis of why they could not be processed. With destinations, you can address errors and successes without needing to write more code.
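As a sketch of how such destinations are wired up, the parameters below could be passed to boto3's `put_function_event_invoke_config` call. The function name and ARNs are placeholders, not real resources:

```python
def build_invoke_config(function_name, on_success_arn, on_failure_arn):
    # Parameters for lambda_client.put_function_event_invoke_config(**params).
    # OnFailure could point at an SQS dead-letter queue for later analysis.
    return {
        "FunctionName": function_name,
        "MaximumRetryAttempts": 2,  # Lambda's default for async invocations
        "DestinationConfig": {
            "OnSuccess": {"Destination": on_success_arn},
            "OnFailure": {"Destination": on_failure_arn},
        },
    }

params = build_invoke_config(
    "process-orders",                                     # placeholder name
    "arn:aws:sns:us-east-1:123456789012:order-success",   # placeholder ARN
    "arn:aws:sqs:us-east-1:123456789012:order-failures",  # placeholder ARN
)
# import boto3
# boto3.client("lambda").put_function_event_invoke_config(**params)
```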

Benefits of Decoupling Workflows on AWS Cloud

Fault Tolerance: By reducing dependencies, decoupled workflows make systems more resilient to failures in individual components.
Improved Performance: Decoupling can lead to improved performance, especially in scenarios where synchronous invocations might introduce bottlenecks.
Enhanced Maintainability: Independent components are easier to maintain, update, and replace without affecting the entire system.

Conclusion

Decoupling workflows on the AWS Cloud is a fundamental architectural approach that enhances the scalability, reliability, and flexibility of systems. With these building blocks, developers can design systems that meet specific performance requirements and effectively manage workloads.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


How To Create AWS Certificate Manager

AWS Certificate Manager

The world of digital security is complex and ever-evolving, requiring businesses and organizations to deploy various mechanisms to secure their digital assets. A significant component of this digital security spectrum is SSL/TLS X.509 certificates. Let’s start our deep dive into AWS Certificate Manager by first understanding these.

Understanding SSL/TLS X.509 Certificates

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates are digital files that adhere to the X.509 standard, a format for public-key certificates. A certificate establishes a secure connection by pairing a public key with the identity of a hostname, organization, or individual.

These certificates serve two primary functions:

1. Authentication: They validate and confirm the identity of a host or site, enhancing the trust factor for users.

2. Data Encryption: They protect data transferred to and from a website, ensuring it can only be read by the intended recipient.

These SSL/TLS X.509 certificates are issued by a trusted Certificate Authority (CA), which is responsible for verifying the credentials of the entity requesting the certificate.

Introduction to AWS Certificate Manager

AWS Certificate Manager (ACM) is a service designed to streamline and automate the management of public and private SSL/TLS X.509 certificates and keys. ACM offers an integrated solution to protect your AWS websites and applications. It can issue certificates directly or import third-party certificates and can be used to secure singular domain names, multiple specific domain names, wildcard domains, or combinations thereof.

ACM also provides wildcard certificates, capable of protecting unlimited subdomains. For enterprise customers, ACM offers two main options:
1. AWS Certificate Manager (ACM): Ideal for those requiring a secure web presence using TLS.

2. ACM Private Certificate Authority (CA): For those aiming to build a Public Key Infrastructure (PKI) for private use within an organization.

Services Integrated with Certificate Manager

AWS Certificate Manager is integrated with several AWS services, providing seamless SSL/TLS certificate management. To mention a few:
1. ELB: ACM deploys certificates on the Elastic Load Balancer to serve secure content.

2. CloudFront: ACM integrates with CloudFront, deploying certificates on the CloudFront distribution for secure content delivery.

3. Elastic Beanstalk: You can configure the load balancer for your application to use ACM.

4. API Gateway: Set up a custom domain name and provide an SSL/TLS certificate using ACM.

5. CloudFormation: ACM certificates can be used as a template resource, enabling secure connections.

Additional Concepts in Certificate Manager

Remember that ACM certificates are regional resources. To use a certificate with ELB for the same fully qualified domain name (or set of names) in more than one region, you must request or import a certificate in each region. Also, to use an ACM certificate with CloudFront, you must request or import the certificate in the US East (N. Virginia) region, us-east-1.

We will register for a free SSL certificate from the AWS certificate manager.

To register for a free SSL certificate, in the management console, type certificate manager in the search box, then select Certificate Manager under Services.
Remember, certificates intended for use with CloudFront must be issued in the us-east-1 (N. Virginia) region, so ensure it's selected.

Then in the certificate manager console, click request certificate.

We will request a public certificate, so click the radio button, then click Next.

Under Domain name, enter the domain name you want to request the certificate for, then click Add another name to this certificate to add a wildcard for your domain. The wildcard (*.yourdomainname.com) covers any subdomain, such as www.yourdomainname.com.

In the domain name field, type *.yourdomainname.com, then scroll down.
Under Validation method, select DNS validation, which is the recommended method. Click Request.

Click view certificate.

The status is Pending validation because the certificate has not yet been validated. To validate it, we have to create a record set in Route 53 for our domain name.

AWS has made creating the record in Route 53 very easy: all you have to do is click Create records in Route 53.

On this page, select the domain name you are creating a record set for in route 53. So make sure you have checked your domain name and wild card then click Create Record.
We have successfully created a DNS record in route 53, for our domain name validation.
Now click the refresh button and you will see that the SSL certificate status has changed to Issued.

And for the two domain names we requested a certificate for, the status is Success.

This is how you request a free SSL certificate from AWS Certificate Manager.
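The same request can also be made through the API; here is a hedged boto3 sketch of the console steps above (the domain name is a placeholder, and the client is created in us-east-1 so the certificate could be used with CloudFront):

```python
def build_certificate_request(domain):
    # Parameters for acm_client.request_certificate(**params).
    # DNS validation is the recommended method.
    return {
        "DomainName": domain,
        "SubjectAlternativeNames": [f"*.{domain}"],  # wildcard covers subdomains
        "ValidationMethod": "DNS",
    }

params = build_certificate_request("yourdomainname.com")  # placeholder domain
# import boto3
# acm = boto3.client("acm", region_name="us-east-1")
# response = acm.request_certificate(**params)
```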
This brings us to the end of this blog.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


How To Create AWS SQS Sending Messages

In the dynamic realm of cloud computing, efficient communication between components is paramount for building robust and scalable applications. Amazon Simple Queue Service (SQS) emerges as a beacon in this landscape, offering a reliable and scalable message queuing service.

Understanding Amazon SQS Queue

Amazon SQS is a fully managed message queuing service that offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components.

Key Features of Amazon SQS

Scalability: SQS seamlessly scales with the increasing volume of messages, accommodating the dynamic demands of applications without compromising performance.
Reliability: With redundant storage across multiple availability zones, SQS ensures the durability of messages, minimizing the risk of data loss and enhancing the overall reliability of applications.
Simple Integration: SQS integrates effortlessly with various AWS services, enabling developers to design flexible and scalable architectures without the need for complex configurations.
Fully Managed: As a fully managed service, SQS takes care of administrative tasks such as hardware provisioning, software setup, and maintenance, allowing developers to focus on building resilient applications.
Different Message Types: SQS supports both standard and FIFO (First-In-First-Out) queues, providing flexibility for handling different types of messages based on their order of arrival and delivery requirements.

Use Cases of Amazon SQS Queue

Decoupling Microservices: SQS plays a pivotal role in microservices architectures by decoupling individual services, allowing them to operate independently and asynchronously.
Batch Processing: The ability of SQS to handle large volumes of messages makes it well-suited for batch processing scenarios, where efficiency and reliability are paramount.
Event-Driven Architectures: SQS integrates seamlessly with AWS Lambda, making it an ideal choice for building event-driven architectures. It ensures that events are reliably delivered and processed by downstream services.

There are two types of Queues:

Standard Queues (default)

SQS offers a standard queue as the default queue type. It allows you to have an unlimited number of transactions per second. It guarantees that a message is delivered at least once. However, sometimes, more than one copy of a message might be delivered out of order. It provides best-effort ordering which ensures that messages are generally delivered in the same order as they are sent but it does not provide a guarantee.

FIFO Queues (First-In-First-Out)

The FIFO queue complements the standard queue. It guarantees ordering: messages are received in the same order in which they are sent.

The most important features of a FIFO queue are ordering and exactly-once processing: a message is delivered once and remains available until the consumer processes and deletes it. A FIFO queue does not allow duplicates to be introduced into the queue.

FIFO Queues are limited to 300 transactions per second but have all the capabilities of standard queues.
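These FIFO guarantees can be illustrated with a small in-memory model (a local simulation of the semantics, not the SQS API):

```python
class FifoQueueModel:
    # Minimal model of FIFO semantics: strict ordering plus
    # deduplication by message deduplication ID.
    def __init__(self):
        self._messages = []
        self._seen_dedup_ids = set()

    def send(self, body, dedup_id):
        if dedup_id in self._seen_dedup_ids:
            return False  # duplicate silently rejected
        self._seen_dedup_ids.add(dedup_id)
        self._messages.append(body)
        return True

    def receive_all(self):
        # Messages come out in exactly the order they went in.
        out, self._messages = self._messages, []
        return out

q = FifoQueueModel()
q.send("first", "id-1")
q.send("second", "id-2")
q.send("first", "id-1")  # retry of id-1 is deduplicated
print(q.receive_all())   # ['first', 'second']
```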

Dead Letter Queue

This is not a queue type but rather a use case and configuration: a standard or FIFO queue that has been designated as a dead-letter queue. The main task of the dead-letter queue is handling message failure. It lets you set aside and isolate messages that can't be processed correctly, so you can determine and analyze why they couldn't be processed.
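In practice, you attach a dead-letter queue to a source queue through the source queue's RedrivePolicy attribute. A minimal sketch (the queue ARN is a placeholder):

```python
import json

def build_redrive_policy(dlq_arn, max_receive_count=5):
    # After max_receive_count failed receives, SQS moves the message
    # to the dead-letter queue for later analysis.
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": max_receive_count,
    })

policy = build_redrive_policy("arn:aws:sqs:us-east-1:123456789012:my-dlq")  # placeholder ARN
# import boto3
# boto3.client("sqs").set_queue_attributes(
#     QueueUrl=source_queue_url, Attributes={"RedrivePolicy": policy})
print(policy)
```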

Let’s get our hands dirty.

Log into the management console, type SQS in the search box, select SQS under Services, then click Create queue in the SQS console.
In the Create queue dashboard, in the Details section under Type, choose your queue type. The default is Standard, so I will proceed to create a standard queue. Under Name, call it mydemoqueue, then scroll down.
In the Configuration section, you can specify the visibility timeout, delivery delay, receive message wait time, and message retention period. I will proceed with the default settings. Scroll down.
Keep the default encryption settings and scroll down.
Leave all the other settings as default. Scroll down and click Create queue.
We will now send and poll for messages with our newly created queue. Select the queue you just created, then click Send and receive messages.
In the Send and receive messages section, under Message body, enter any message and then click Send message.

The message was sent successfully.

Let’s now poll for our message. Scroll down and click poll for messages.

Under Receive messages we can see one message, and if you click on it, it is the message we sent.
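The console steps above map to two API calls; here is a hedged boto3 sketch (the queue URL is a placeholder):

```python
def build_send(queue_url, body):
    # Parameters for sqs.send_message(**params).
    return {"QueueUrl": queue_url, "MessageBody": body}

def build_receive(queue_url):
    # Parameters for sqs.receive_message(**params); long polling for up to
    # 10 seconds reduces empty responses.
    return {"QueueUrl": queue_url, "MaxNumberOfMessages": 1, "WaitTimeSeconds": 10}

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/mydemoqueue"  # placeholder
send_params = build_send(queue_url, "Hello from the web tier")
receive_params = build_receive(queue_url)
# import boto3
# sqs = boto3.client("sqs")
# sqs.send_message(**send_params)
# messages = sqs.receive_message(**receive_params)
```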

This brings us to the end of this blog.


Stay tuned for more.


If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].


Thank you!



Amazon Elastic Container Service (ECS)

Introduction:

In the ever-evolving landscape of cloud computing, managing containerized applications efficiently has become a paramount concern for businesses seeking agility, scalability, and ease of deployment. Amazon Elastic Container Service (Amazon ECS) emerges as a powerful solution within the Amazon Web Services (AWS) ecosystem, providing a robust platform for orchestrating and managing Docker containers at scale.

What is a Container?

In the world of software, a container can be thought of as a compact, self-sufficient unit that holds everything a piece of software needs to run. Furthermore, just like a shipping container in the real world, which contains all the goods needed for transportation, a software container encapsulates the necessary components for a program to function. Additionally, these components include the code itself and any libraries, dependencies, and environment settings it requires.

Overview of Amazon ECS

Amazon ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications using Docker containers. Moreover, it eliminates the need for manual intervention in provisioning and scaling infrastructure, allowing developers to focus on writing code and building applications rather than managing the underlying infrastructure. This automated approach streamlines the development process, making it more efficient and conducive to rapid application deployment.

ECS Terminology:

Task Definition

This is a blueprint that describes how a Docker container should launch. Additionally, if you are already familiar with AWS, it is like a LaunchTemplate; however, it is tailored for a Docker container instead of an instance. Notably, it contains settings such as exposed port, Docker image, CPU shares, memory requirements, command to run, and environmental variables.
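As a rough illustration of those settings, a task definition might look like the following sketch (the family name, image URI, and environment values are all hypothetical):

```python
# A hypothetical task definition: image, CPU/memory, port mapping,
# command, and environment variables, as described above.
task_definition = {
    "family": "web-app",  # placeholder name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # placeholder image
            "cpu": 256,
            "memory": 512,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "environment": [{"name": "STAGE", "value": "production"}],
            "command": ["nginx", "-g", "daemon off;"],
        }
    ],
}
# import boto3
# boto3.client("ecs").register_task_definition(**task_definition)
```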

Task

This is a running container with the settings defined in the Task Definition. Consequently, it can be thought of as an “instance” of a Task Definition.

Service

Defines long-running tasks of the same Task Definition. This can be one or multiple running containers all using the same Task Definition.

Cluster

A logical grouping of EC2 instances. When an instance launches, the ECS agent software running on it registers the instance to an ECS cluster.

Container Instance

This is just an EC2 instance that is part of an ECS Cluster and has Docker and the ECS agent running on it.

Amazon ECS is a service we can use for running Docker containers on AWS, either in a serverless manner or with the underlying infrastructure under our control.

Amazon Elastic Container Registry (ECR) is where we can store the images for our containers.

Images

A container image is essentially a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Docker, the popular containerization platform, defines container images with a layered file system and metadata.

Amazon ECS (Elastic Container Service) provides flexibility in how you launch and manage containers, offering two primary launch types:

1. EC2 (Elastic Compute Cloud)

2. Fargate.

Each launch type caters to different use cases, allowing users to choose the one that aligns with their specific requirements.

Amazon ECS EC2 Launch Type:

The EC2 launch type enables you to run containers on a cluster of Amazon EC2 instances that you manage. This launch type is suitable for users who want more control over the underlying infrastructure and require customization of EC2 instances.

Key Features and Considerations:

Infrastructure Control:

Users have direct control over the EC2 instances, allowing customization of the instances to meet specific requirements, such as installing specific software.

Legacy Applications:

Well-suited for migrating legacy applications that require access to features not available in Fargate or applications that need specific networking configurations.

Cost Management:

Provides more granular control over EC2 instance types, allowing users to optimize costs based on their specific workload requirements.

Custom Networking:

Users can leverage Amazon VPC (Virtual Private Cloud) to define custom networking configurations, including subnet placement and security group settings.

Amazon ECS Fargate Launch Type:

Fargate is a serverless launch type that allows you to run containers without managing the underlying infrastructure. Furthermore, with Fargate, AWS takes care of provisioning and scaling the infrastructure, allowing users to focus solely on defining and running their containerized applications.

Key Features and Considerations:

Serverless Deployment:

Fargate abstracts away the underlying infrastructure, providing a serverless experience for running containers.

Simplified Operations:

Reduces operational overhead as users don’t need to worry about patching, updating, or scaling EC2 instances. Fargate takes care of these tasks automatically.

Resource Isolation:

Containers run in an isolated environment, ensuring resource allocation and utilization are managed effectively. This isolation provides a high level of security and performance.

Cost Efficiency:

Fargate charges based on the vCPU and memory used by your containers, allowing for precise cost management without the need to manage and pay for underlying EC2 instances.

Networking Simplification:

Fargate simplifies networking by abstracting away the complexities of Amazon VPC. Users define task-level networking, and Fargate handles the rest.


How To Create Amazon Route 53

Amazon Route 53
In the dynamic landscape of cloud computing, efficient and reliable domain name system (DNS) management is crucial for the seamless operation of web applications and services. One powerful solution at the forefront of DNS services is Amazon Route 53. As a scalable and highly available cloud-based domain registration and routing service, Route 53 plays a pivotal role in ensuring the accessibility, performance, and resilience of your web infrastructure.

What is Amazon Route 53?

Amazon Route 53 is not just a catchy name: it refers to port 53, the port traditionally assigned to DNS. Route 53 is the DNS service provided by AWS and one of the most well-known, reliable, and cost-effective services for managing domains. It is a highly available and scalable cloud Domain Name System (DNS) that provides domain registration, DNS routing, and health checking for your applications. Whether you’re launching a new website, configuring subdomains, or optimizing the performance of your web applications, Route 53 has you covered.

Some Amazon Route 53 useful terminology:

Domain: Domains are your standard URLs like amazon.com and google.com.
Subdomains: Subdomains are unique URLs that live on your purchased domain as an extension in front of your regular domain, like www.google.com and docs.google.com.

Hosted Zone: It’s the way AWS describes the information you provide to define how traffic aimed at your domain name will be managed. A hosted zone is a container for records, and records contain information about how you want to route traffic for a specific domain, such as example.com, and its subdomains (web.example.com, admin.example.com). A hosted zone and the corresponding domain have the same name. When we create a public hosted zone, it automatically creates SOA and NS records that are unique to each hosted zone.

DNS Records: DNS records are what contain the actual information that other browsers or services need to interact with, like your server’s IP address. Nameservers, on the other hand, help store and organize those individual DNS records. Nameservers are the physical phone book itself and DNS records are the individual entries in the phone book.
Start of Authority (SOA): The type of resource record that every DNS zone must begin with. It contains the following information:

1. Contains the owner’s info (email id).
2. Contains info of the authoritative server.
3. Serial number which is incremented with changes to the data zones. (In case of updates).
4. Stores the name of the server supplying the data.
5. Stores the admin zone.
6. Current version of the data file.
7. Time to live.

Name Server (NS) records: As discussed earlier, nameservers are the phone book itself. They play an important role in connecting a URL with a server IP address in a much more human-friendly way. Nameservers look like any other domain name. When you look at a website’s nameservers, you’ll typically see a minimum of two (though you can use more). Here’s an example of what they look like:

  • ns-380.awsdns-47.com
  • ns-1076.awsdns-06.org

They are used by top-level domain servers to direct traffic to the content DNS server, specifying which DNS server is authoritative for a domain. DNS servers come in four types: recursive resolvers, root nameservers, TLD nameservers, and authoritative nameservers.

Time To Live (TTL): Length of time the DNS record is cached on the server, in seconds. The default is 48 hours.
Canonical Name (CNAME): A CNAME, or Canonical Name record, is a record that points to another domain address rather than an IP address. For example, say you have several subdomains, like www.mydomain.com, mail.mydomain.com, etc., and you want these subdomains to point to your main domain name, mydomain.com.
Alias Record: You will use an ALIAS record when you want the domain itself (not a subdomain) to “point” to a hostname. The ALIAS record is similar to a CNAME record, which is used to point subdomains to a hostname. Because the CNAME record can only be used for subdomains, the ALIAS record fills this gap. Ex: @ 10800 IN ALIAS example.example.com. Please note the final dot (.) at the end is necessary for the record to work correctly.
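A hedged sketch of creating such a CNAME record through the Route 53 API (the hosted zone ID and record names below are placeholders):

```python
def build_cname_change(record_name, target):
    # Change batch for route53.change_resource_record_sets(...):
    # point the subdomain at the target hostname.
    return {
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": target}],
                },
            }
        ]
    }

change_batch = build_cname_change("www.mydomain.com", "mydomain.com")  # placeholder names
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000000000",  # placeholder zone ID
#     ChangeBatch=change_batch)
```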

REGISTER A NEW DOMAIN NAME IN ROUTE 53

One of the initial steps in establishing your online presence is securing a memorable and relevant domain name. With Amazon Route 53, this process becomes a breeze. Let’s see the steps.

Right away, go to the management console, type route 53 in the search box, and select route 53 under services.

In the Route 53 dashboard, we first have to check whether the domain name is available. Under Register domain, type the domain name; I will call it viktechsolutions.com. Once you’ve typed your domain name, click Check.

If the domain name you are trying to register is available, select it. I am trying to register viktechsolutions.com, and it is available, so I will select it. Then, on the right side of the Register domain navigation pane, the selected domain you want to register and its price will appear; click Proceed to checkout.

You will then be brought to a new page where you need to enter your contact information. Fill in your details. Under Privacy protection, make sure it’s enabled to hide your contact details, and then click Next.

On this page review your contact information, tick the box on “terms and conditions” and then click submit.

This is all we need to do to register a domain name.

The domain name registration can take up to three days to complete. For me, it took about 20 minutes, and now it is available for use.

Types of Routing policies:

Simple Routing policy.

This is the default routing policy. When a record has multiple values, Route 53 returns them in random order; it does not take resource status (health) into account. It can be used across regions.

Failover Routing

It allows us to route traffic to a resource when the resource is healthy, or to a different resource when the first resource is unhealthy. We can associate health checks with this type of policy.

Latency Routing Policy

It is mainly used when a website is hosted in multiple AWS regions. It directs users to the server with the lowest latency, which is helpful when user latency is the priority and performance needs to hold up for users worldwide. The response to a request is determined purely by measured latency, not by the distance to the region of the resource.

Weighted Routing

This routes a single name to multiple resources and controls the percentage of requests that go to each specific endpoint. This approach is heavily used in blue/green deployment, where you release a software product from the dev stage to live production. Depending on your requirements, you can shift traffic between the endpoints at any given point.
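The share of traffic each endpoint receives is its weight divided by the sum of all weights; a quick sketch (the endpoint names are placeholders):

```python
def traffic_share(weights):
    # Route 53 sends weight / sum(weights) of requests to each record.
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Blue/green deployment: shift 10% of traffic to the new (green) stack.
shares = traffic_share({"blue-endpoint": 90, "green-endpoint": 10})
print(shares)  # {'blue-endpoint': 0.9, 'green-endpoint': 0.1}
```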

Geo Location Routing Policy

Geolocation routing policy refers to the practice of directing network traffic based on the geographical location of the user or the destination server. This approach is often employed by organizations to optimize the performance and efficiency of their network services.

Multi-Value Routing Policy

Unlike Simple Routing Policy, where we can specify multiple IP addresses for a single “A” record set, with multi-value routing policy we can create multiple “A” record sets for each IP address that we want to define. With this approach, we can monitor each endpoint better than the simple routing policy by having a health check attached to each record set.

This brings us to the end of this blog. Stay tuned for more.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!