How To Create Serverless Computing with AWS Lambda


In the ever-evolving landscape of cloud computing, AWS Lambda has emerged as a revolutionary service, paving the way for serverless computing. This paradigm shift allows developers to focus on building and deploying applications without the burden of managing servers.

What is AWS Lambda?

AWS Lambda is a compute service that lets you run code without provisioning or managing servers.


Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. With Lambda, all you need to do is supply your code in one of the language runtimes that Lambda supports.


You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. You only pay for the compute time that you consume — there is no charge when your code is not running.

What Can Trigger a Lambda Function

AWS service triggers (DynamoDB streams, S3 events, SQS messages, and so on)

API endpoints (REST calls through Amazon API Gateway)

Key Features of AWS Lambda:

Event-driven: AWS Lambda is designed to respond to events from various AWS services or custom events, e.g., changes to data in an Amazon S3 bucket or updates to a DynamoDB table.
Multiple Programming Languages: Lambda supports multiple programming languages, including Node.js, Python, Java, Go, Ruby, and .NET Core.
Automatic Scaling: Lambda automatically scales based on the number of incoming requests.
Cost-Efficient: AWS Lambda follows a pay-as-you-go pricing model. You are charged only for the compute time consumed by your code.
Built-in Fault Tolerance: AWS Lambda provides built-in fault tolerance by automatically distributing the execution of functions across multiple availability zones.

Use Cases for AWS Lambda:

Real-time File Processing: AWS Lambda can be used to process files uploaded to an S3 bucket in real-time.
Microservices Architecture: Lambda functions are well-suited for building microservices, allowing developers to break down large applications into smaller, manageable components promoting agility and maintainability.
API Backend: With the help of API Gateway, AWS Lambda can be used to build scalable and cost-effective API backends. This allows developers to focus on building the application’s logic without worrying about managing servers.
Data Transformation and Analysis: Lambda functions can process and analyze data from various sources, providing a serverless solution for tasks like log processing, data transformation, and real-time analytics.

Create a Lambda function with the console

Log into the management console, type lambda in the search box, then select Lambda under Services.
In the Lambda dashboard, on the left side of the navigation pane, select Functions, then click Create function.
In the Create function dashboard, select Author from scratch, then in the Basic information pane, for Function name, enter mytestfunction.

Then for Runtime, choose Node.js 20.x.

Then scroll down and leave the architecture set to x86_64.
Remember, by default, Lambda will create an execution role with permissions to upload logs to Amazon CloudWatch Logs.
These are the only settings we need to create our function, so scroll down and click Create function.

Lambda creates a function that returns the message Hello from Lambda!

Lambda also creates an execution role for your function. An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources.

To see the role, scroll down and select the Configuration tab, then select Permissions. In the Execution role pane, under Role name, you can see the role.
When you select it, it will take you to the IAM console, where you can see the policy.

Now, back in Lambda under the Code tab, we can see the Hello from Lambda! code.
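With the Node.js 20.x runtime we selected earlier, the starter code the console generates looks like this:

```javascript
// Default starter code the Lambda console generates for Node.js runtimes.
export const handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};
```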

We will now replace this code with different code. Choose the Code tab.

In the console's built-in code editor, you should see the function code that Lambda created. We will replace this code with our own, as shown below.
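The original screenshot of the replacement code is not reproduced here, so as an illustration only, a handler along these lines reads two numbers from the incoming event and returns a computed result:

```javascript
// Hypothetical replacement handler, for illustration only: it reads two
// numbers from the incoming test event and returns their product.
export const handler = async (event) => {
  const { length, width } = event;
  const area = length * width;
  return {
    statusCode: 200,
    body: JSON.stringify(`The area is ${area}`),
  };
};
```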
Select Deploy to update your function’s code. When Lambda has deployed the changes, the console displays a banner letting you know that it’s successfully updated your function.

Invoke the Lambda function using the console

To invoke our lambda function using the Lambda console, we first create a test event to send to our function. The event is a JSON-formatted document.

To create the test event

In the Code source pane, choose Test; you will be taken to the Configure test event dialog.

Select Create new event, then for Event name, enter myTestEvent. In the Event JSON panel, paste in the test event as shown below, then choose Save.
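The event JSON from the original screenshot is likewise not reproduced; a test event matching the illustrative handler above would be:

```json
{
  "length": 6,
  "width": 7
}
```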
We will now test our function and use the Lambda console and CloudWatch Logs to view records of our function’s invocation.

To test our function, in the Code source pane, choose Test, then wait for the function to finish running. The response and function logs are displayed in the Execution results tab as shown below. This confirms that our Lambda function was invoked successfully.
This brings us to the end of this blog.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


Find Out What Decoupling Workflows in AWS Is

Decoupling workflows involves breaking down the components of a system into loosely connected modules that can operate independently. This not only enhances scalability and flexibility but also improves fault tolerance, as failures in one component do not necessarily impact the entire system. AWS provides a variety of services that facilitate decoupling, such as Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and AWS Step Functions.

To explain this, we will use this scenario.

Direct Integration in Application Components

Consider the front end, also known as the web tier, where user interactions take place. This internet-facing layer is where crucial data, such as customer orders, originates. Moving seamlessly to the next layer, we encounter the app tier. This tier is responsible for processing the incoming orders and managing other relevant information received from the web tier.

In a direct integration scenario, the web tier and the app tier are connected without intermediary components. This approach comes with significant challenges.

One significant drawback arises when the app tier is required to keep pace with the incoming workload. In the event of a sudden surge in demand, such as an influx of customer orders, the app tier must be capable of scaling up rapidly to handle the increased load. This real-time scalability requirement is essential to prevent any system failures and ensure a seamless customer experience.

We can use auto-scaling mechanisms in such scenarios. Auto-scaling, although efficient, involves the automatic launch of instances to meet the rising demand. However, the time taken for these instances to become operational may introduce delays, potentially leading to the loss of critical information. In the context of customer orders, this delay could result in a lost customer order.

Decoupled Workflows

Instead of having the web tier and the app tier directly connected, we’ll put an SQS queue in the middle.

The web tier now talks to the queue and puts orders in as messages. The app tier, on the other hand, keeps an eye on the queue, checking whether there are any messages to be processed. If there is a sudden flood of orders, the queue can handle it easily: more orders just wait in the queue until the app tier is ready to process them, so the app tier is under no pressure to keep up with the whole workload at once. If a huge amount of information comes in and lots of orders are placed, the queue scales very easily; we simply end up with more orders in the queue awaiting processing. The app tier may still need to scale, but it avoids the direct-integration problem of lost orders, because messages can sit in the queue for quite some time and the app tier can process them as soon as it is ready.
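As a minimal sketch of this pattern with the AWS SDK for JavaScript v3 (the queue URL and order fields are placeholders), the web tier enqueues an order and the app tier polls for messages when it is ready:

```javascript
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"; // placeholder

// Web tier: put an order into the queue as a message.
await sqs.send(new SendMessageCommand({
  QueueUrl: queueUrl,
  MessageBody: JSON.stringify({ orderId: "1234", item: "widget", quantity: 2 }),
}));

// App tier: poll the queue and process messages when ready.
const { Messages } = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: queueUrl,
  MaxNumberOfMessages: 10,
  WaitTimeSeconds: 20, // long polling reduces empty responses
}));

for (const message of Messages ?? []) {
  const order = JSON.parse(message.Body);
  // ... process the order ...
  // Delete the message so it is not delivered again.
  await sqs.send(new DeleteMessageCommand({
    QueueUrl: queueUrl,
    ReceiptHandle: message.ReceiptHandle,
  }));
}
```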

We can also see decoupling in Lambda invocations, where we have synchronous and asynchronous invocation of Lambda functions.

When you invoke a function synchronously, Lambda runs the function and waits for a response. With this model, there are no built-in retries. You must manage your retry strategy within your application code.

On the other hand, by decoupling Lambda functions and using asynchronous invocation, we see many added advantages: our Lambda function does not have to keep up with a surge of events, but can simply poll the queue.

A destination can send records of asynchronous invocations to other services. You can configure separate destinations for events that fail processing, such as a dead-letter queue that holds failed messages for later analysis of why they could not be processed. With destinations, you can address errors and successes without needing to write more code.
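Here is a sketch of both ideas with the AWS SDK for JavaScript v3, assuming the function name and destination ARN are placeholders: invoking a function asynchronously, and configuring an on-failure destination so failed events are captured without extra code:

```javascript
import {
  LambdaClient,
  InvokeCommand,
  PutFunctionEventInvokeConfigCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" });

// Asynchronous invocation: Lambda queues the event and returns immediately;
// it does not wait for the function to finish.
await lambda.send(new InvokeCommand({
  FunctionName: "mytestfunction",
  InvocationType: "Event", // "RequestResponse" would be synchronous
  Payload: Buffer.from(JSON.stringify({ orderId: "1234" })), // placeholder payload
}));

// Send events that fail all retry attempts to an SQS queue for later analysis.
await lambda.send(new PutFunctionEventInvokeConfigCommand({
  FunctionName: "mytestfunction",
  MaximumRetryAttempts: 2,
  DestinationConfig: {
    OnFailure: { Destination: "arn:aws:sqs:us-east-1:123456789012:failed-events" }, // placeholder ARN
  },
}));
```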

Benefits of Decoupling Workflows on AWS Cloud

Fault Tolerance: By reducing dependencies, decoupled workflows make systems more resilient to failures in individual components.
Improved Performance: Decoupling can lead to improved performance, especially in scenarios where synchronous invocations might introduce bottlenecks.
Enhanced Maintainability: Independent components are easier to maintain, update, and replace without affecting the entire system.

Conclusion

Decoupling workflows on the AWS Cloud is a fundamental architectural approach that enhances the scalability, reliability, and flexibility of systems. By leveraging services such as SQS, SNS, and Step Functions, developers can design systems that meet specific performance requirements and effectively manage workloads.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How To Create AWS Certificate Manager


The world of digital security is complex and ever-evolving, requiring businesses and organizations to deploy various mechanisms to secure their digital assets. A significant component of this digital security spectrum is SSL/TLS X.509 certificates. Let’s start our deep dive into AWS Certificate Manager by first understanding these.

Understanding SSL/TLS X.509 Certificates

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificates are digital files that adhere to the X.509 standard, a public-key certificate format. A certificate establishes a secure connection by pairing a public key with the identity of a hostname, organization, or individual.

These certificates serve two primary functions:

1. Authentication: They validate and confirm the identity of a host or site, enhancing the trust factor for users.

2. Data Encryption: They protect data transferred to and from a website, ensuring it can only be read by the intended recipient.

These SSL/TLS X.509 certificates are issued by a trusted Certificate Authority, responsible for verifying the credentials of the entity requesting the certificate.

Introduction to AWS Certificate Manager

AWS Certificate Manager (ACM) is a service designed to streamline and automate the management of public and private SSL/TLS X.509 certificates and keys. ACM offers an integrated solution to protect your AWS websites and applications. It can issue certificates directly or import third-party certificates and can be used to secure singular domain names, multiple specific domain names, wildcard domains, or combinations thereof.

ACM also provides wildcard certificates, capable of protecting unlimited subdomains. For enterprise customers, ACM offers two main options:
1. AWS Certificate Manager (ACM): Ideal for those requiring a secure web presence using TLS.

2. ACM Private Certificate Authority (CA): For those aiming to build a Public Key Infrastructure (PKI) for private use within an organization.

Services Integrated with Certificate Manager

AWS Certificate Manager is integrated with several AWS services, providing seamless SSL/TLS certificate management. To mention a few:
1. ELB: ACM deploys certificates on the Elastic Load Balancer to serve secure content.

2. CloudFront: ACM integrates with CloudFront, deploying certificates on the CloudFront distribution for secure content delivery.

3. Elastic Beanstalk: You can configure the load balancer for your application to use ACM.

4. API Gateway: Set up a custom domain name and provide an SSL/TLS certificate using ACM.

5. CloudFormation: ACM certificates can be used as a template resource, enabling secure connections.

Additional Concepts in Certificate Manager

Remember that ACM certificates are regional resources. To use a certificate with ELB for the same fully qualified domain name (or set of fully qualified domain names) in more than one region, you must request or import a certificate in each region. Also, to use an ACM certificate with CloudFront, you must request or import the certificate in the US East (N. Virginia) region, us-east-1.

We will register for a free SSL certificate from the AWS certificate manager.

To register for a free SSL certificate, in the management console search box, type certificate manager, then select Certificate Manager under Services.
Remember, certificates used with CloudFront must be issued in the us-east-1 (N. Virginia) region, so ensure it's selected.

Then in the certificate manager console, click request certificate.

We will request a public certificate so click the radio button then click next.

Under Domain name, enter the domain name you want to request the certificate for. Then click Add another name to this certificate to add a wildcard for your domain. The wildcard (*.yourdomainname.com) covers any subdomain, such as www.yourdomainname.com.

In the second domain name field, enter *.yourdomainname.com, then scroll down.
Under Validation method, select DNS validation, which is the recommended method. Click Request.
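The console flow above corresponds to a single API call. As a sketch with the AWS SDK for JavaScript v3 (domain names are placeholders):

```javascript
import { ACMClient, RequestCertificateCommand } from "@aws-sdk/client-acm";

// Certificates intended for CloudFront must be requested in us-east-1.
const acm = new ACMClient({ region: "us-east-1" });

const { CertificateArn } = await acm.send(new RequestCertificateCommand({
  DomainName: "yourdomainname.com",                   // placeholder
  SubjectAlternativeNames: ["*.yourdomainname.com"],  // wildcard for all subdomains
  ValidationMethod: "DNS",
}));
console.log(CertificateArn);
```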

Click view certificate.

The status is Pending validation because the domain has not yet been validated. To validate the certificate, we have to create a record set in Route 53 for our domain name.

To create the record in Route 53, AWS has made it very easy: all you have to do is click Create records in Route 53.

On this page, select the domain names you are creating record sets for in Route 53. Make sure you have checked both your domain name and the wildcard, then click Create records.
We have successfully created the DNS records in Route 53 for our domain name validation.
Now click the refresh button and you will see that the SSL certificate status has changed to Issued.

And for the two domain names we requested the certificate for, the validation status is Success.

This is how you request a free SSL certificate from AWS Certificate Manager.
This brings us to the end of this blog.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How To Create AWS SQS Sending Messages


In the dynamic realm of cloud computing, efficient communication between components is paramount for building robust and scalable applications. Amazon Simple Queue Service (SQS) emerges as a beacon in this landscape, offering a reliable and scalable message queuing service.

Understanding Amazon SQS Queue

Amazon SQS is a fully managed message queuing service that offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components.

Key Features of Amazon SQS

Scalability: SQS seamlessly scales with the increasing volume of messages, accommodating the dynamic demands of applications without compromising performance.
Reliability: With redundant storage across multiple availability zones, SQS ensures the durability of messages, minimizing the risk of data loss and enhancing the overall reliability of applications.
Simple Integration: SQS integrates effortlessly with various AWS services, enabling developers to design flexible and scalable architectures without the need for complex configurations.
Fully Managed: As a fully managed service, SQS takes care of administrative tasks such as hardware provisioning, software setup, and maintenance, allowing developers to focus on building resilient applications.
Different Message Types: SQS supports both standard and FIFO (First-In-First-Out) queues, providing flexibility for handling different types of messages based on their order of arrival and delivery requirements.

Use Cases of Amazon SQS Queue

Decoupling Microservices: SQS plays a pivotal role in microservices architectures by decoupling individual services, allowing them to operate independently and asynchronously.
Batch Processing: The ability of SQS to handle large volumes of messages makes it well-suited for batch processing scenarios, where efficiency and reliability are paramount.
Event-Driven Architectures: SQS integrates seamlessly with AWS Lambda, making it an ideal choice for building event-driven architectures. It ensures that events are reliably delivered and processed by downstream services.

There are two types of Queues:

Standard Queues (default)

SQS offers the standard queue as the default queue type. It allows an unlimited number of transactions per second and guarantees that a message is delivered at least once. However, occasionally more than one copy of a message might be delivered, and messages may arrive out of order. Standard queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they are sent, but this is not guaranteed.

FIFO Queues (First-In-First-Out)

The FIFO queue complements the standard queue. It guarantees ordering: messages are received in exactly the same order in which they are sent.

The most important features of FIFO queues are ordering and exactly-once processing, i.e., a message is delivered once and remains available until the consumer processes and deletes it. A FIFO queue does not allow duplicates to be introduced into the queue.

FIFO Queues are limited to 300 transactions per second but have all the capabilities of standard queues.

Dead-Letter Queues

This is not a separate queue type but rather a use case and configuration: a standard or FIFO queue that has been designated as a dead-letter queue. The main task of the dead-letter queue is handling message failure. It lets you set aside and isolate messages that can't be processed correctly, so you can determine and analyze why they couldn't be processed.
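As a sketch with the AWS SDK for JavaScript v3, assuming both the source queue and the dead-letter queue already exist (the URL and ARN are placeholders), the RedrivePolicy attribute tells SQS to move a message to the dead-letter queue after it has been received unsuccessfully maxReceiveCount times:

```javascript
import { SQSClient, SetQueueAttributesCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

await sqs.send(new SetQueueAttributesCommand({
  QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/mydemoqueue", // placeholder
  Attributes: {
    // After 5 failed receives, SQS moves the message to the dead-letter queue.
    RedrivePolicy: JSON.stringify({
      deadLetterTargetArn: "arn:aws:sqs:us-east-1:123456789012:mydemoqueue-dlq", // placeholder
      maxReceiveCount: "5",
    }),
  },
}));
```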

Let's get our hands dirty.

Log into the management console and in the search box, type SQS, then select SQS under Services and click Create queue in the SQS console.
In the Create queue dashboard, in the Details section under Type, choose your queue type. The default is Standard, so I will proceed to create a standard queue. Under Name, call it mydemoqueue, then scroll down.
The Configuration section is where you specify the visibility timeout, delivery delay, receive message wait time, and message retention period. I will keep the default settings. Scroll down.
Keep the default encryption and scroll down.
Leave all the other settings as default. Scroll down and click Create Queue.
We will now send and poll for messages with our newly created queue. Select the queue you just created, then click Send and receive messages.
In the Send and receive messages section, under Message body, enter any message, then click Send message.

Send message is successful.
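The same send can also be done programmatically. A minimal sketch with the AWS SDK for JavaScript v3 (the queue URL is a placeholder):

```javascript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

const { MessageId } = await sqs.send(new SendMessageCommand({
  QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/mydemoqueue", // placeholder
  MessageBody: "Hello from SQS!",
}));
console.log(`Sent message ${MessageId}`);
```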

Let's now poll for our message. Scroll down and click Poll for messages.

Under Receive messages, we can see one message, and if you click on it, we can confirm it's the message we sent.
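Polling can likewise be done with the SDK. A short sketch using long polling (same placeholder queue URL):

```javascript
import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

const { Messages } = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/mydemoqueue", // placeholder
  MaxNumberOfMessages: 1,
  WaitTimeSeconds: 10, // long polling: wait up to 10 seconds for a message
}));

console.log(Messages?.[0]?.Body); // "Hello from SQS!"
```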

This brings us to the end of this blog.


Stay tuned for more.


If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.


Thank you!



Amazon Elastic Container Service (ECS)


Introduction:

In the ever-evolving landscape of cloud computing, managing containerized applications efficiently has become a paramount concern for businesses seeking agility, scalability, and ease of deployment. Amazon Elastic Container Service (Amazon ECS) emerges as a powerful solution within the Amazon Web Services (AWS) ecosystem, providing a robust platform for orchestrating and managing Docker containers at scale.

What is a Container?

In the world of software, a container can be thought of as a compact, self-sufficient unit that holds everything a piece of software needs to run. Furthermore, just like a shipping container in the real world, which contains all the goods needed for transportation, a software container encapsulates the necessary components for a program to function. Additionally, these components include the code itself and any libraries, dependencies, and environment settings it requires.

Overview of Amazon ECS

Amazon ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications using Docker containers. Moreover, it eliminates the need for manual intervention in provisioning and scaling infrastructure, allowing developers to focus on writing code and building applications rather than managing the underlying infrastructure. This automated approach streamlines the development process, making it more efficient and conducive to rapid application deployment.

ECS architecture (diagram)

ECS Terminology:

Task Definition

This is a blueprint that describes how a Docker container should launch. Additionally, if you are already familiar with AWS, it is like a Launch Template; however, it is tailored for a Docker container instead of an instance. Notably, it contains settings such as exposed ports, the Docker image, CPU shares, memory requirements, the command to run, and environment variables.
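For example, a minimal Fargate task definition, registered as JSON, might look like the following (the family name, image URI, and role ARN are placeholders):

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "environment": [{ "name": "ENV", "value": "production" }]
    }
  ]
}
```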

Task

This is a running container with the settings defined in the Task Definition. Consequently, it can be thought of as an “instance” of a Task Definition.

Service

Defines long-running tasks of the same Task Definition. This can be one or multiple running containers, all using the same Task Definition.

Cluster

A logical grouping of EC2 instances. When an instance launches, the ECS agent software on the server registers the instance to an ECS Cluster.

Container Instance

This is just an EC2 instance that is part of an ECS Cluster and has Docker and the ECS agent running on it.

Amazon ECS is a service we can use for running Docker containers on AWS, either in a serverless manner or with the underlying infrastructure under our control.

Amazon Elastic Container Registry (ECR) is where we can store the images for our containers.

Images

A container image is essentially a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Docker, the popular containerization platform, defines container images with a layered file system and metadata.

Amazon ECS (Elastic Container Service) provides flexibility in how you launch and manage containers, offering two primary launch types:

1. EC2 (Elastic Compute Cloud)

2. Fargate.

Each launch type caters to different use cases, allowing users to choose the one that aligns with their specific requirements.

Amazon ECS EC2 Launch Type:

The EC2 launch type enables you to run containers on a cluster of Amazon EC2 instances that you manage. This launch type is suitable for users who want more control over the underlying infrastructure and require customization of EC2 instances.

Key Features and Considerations:

Infrastructure Control:

Users have direct control over the EC2 instances, allowing customization of the instances to meet specific requirements, such as installing specific software.

Legacy Applications:

Well-suited for migrating legacy applications that require access to features not available in Fargate or applications that need specific networking configurations.

Cost Management:

Provides more granular control over EC2 instance types, allowing users to optimize costs based on their specific workload requirements.

Custom Networking:

Users can leverage Amazon VPC (Virtual Private Cloud) to define custom networking configurations, including subnet placement and security group settings.

Amazon ECS Fargate Launch Type:

Fargate is a serverless launch type that allows you to run containers without managing the underlying infrastructure. Furthermore, with Fargate, AWS takes care of provisioning and scaling the infrastructure, allowing users to focus solely on defining and running their containerized applications.

Key Features and Considerations:

Serverless Deployment:

Fargate abstracts away the underlying infrastructure, providing a serverless experience for running containers.

Simplified Operations:

Reduces operational overhead as users don’t need to worry about patching, updating, or scaling EC2 instances. Fargate takes care of these tasks automatically.

Resource Isolation:

Containers run in an isolated environment, ensuring resource allocation and utilization are managed effectively. This isolation provides a high level of security and performance.

Cost Efficiency:

Fargate charges based on the vCPU and memory used by your containers, allowing for precise cost management without the need to manage and pay for underlying EC2 instances.

Networking Simplification:

Fargate simplifies networking by abstracting away the complexities of Amazon VPC. Users define task-level networking, and Fargate handles the rest.