
How To Configure AWS EC2 Auto Scaling


Amazon Web Services (AWS) offers a robust solution in Auto Scaling Groups (ASGs), which automatically adjust the number of EC2 instances to match demand.

Auto Scaling Group Fundamentals

In real-world scenarios, the demand for websites and applications can fluctuate significantly. AWS Auto Scaling Groups allow servers to be created or terminated rapidly to accommodate these changes. The key objectives of ASGs include dynamically adding or removing EC2 instances based on demand, setting limits on instance counts, and automatically registering newly launched instances with load balancers. ASGs also replace terminated instances to ensure continuous availability, all while remaining cost-effective, since you pay only for the underlying EC2 instances.

How Do Auto Scaling Groups Work?

The size of your Auto Scaling group is maintained according to a pre-defined number of instances, which you configure as the desired capacity. You can resize groups manually or automatically according to application requirements.

Initially, an Auto Scaling group launches enough instances to reach the desired capacity. By default, it maintains this number of instances by performing regular health checks, identifying unhealthy instances, terminating them, and launching replacements.

Now let's look at the available scaling policies.

Auto Scaling and CloudWatch Alarms

Target Tracking Scaling

Target Tracking Scaling adjusts capacity to keep a chosen metric at a target value. For example, consider a scenario where you want to maintain an average CPU utilization of 40%.
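As an illustrative sketch, such a policy could be created with boto3; the group name "my-asg" and the policy name are hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization at roughly 40%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # hypothetical group name
    PolicyName="keep-cpu-at-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
)
```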

Simple/Step Scaling

For a more granular approach, consider Simple/Step Scaling, where CloudWatch alarms trigger explicit capacity adjustments; for example, add two instances when average CPU exceeds 70% and remove one when it falls below 30%.

Scheduled Actions

Next, Scheduled Actions can be employed where scaling needs can be anticipated; for example, increasing the minimum capacity to 10 at 5 PM every Friday.
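A scheduled action like the Friday example above could be created with boto3 along these lines (the group name is a placeholder; the Recurrence field is a cron expression, evaluated in UTC by default):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Raise the minimum capacity to 10 at 17:00 every Friday.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",  # hypothetical group name
    ScheduledActionName="friday-peak",
    Recurrence="0 17 * * 5",        # cron: 17:00 every Friday (UTC)
    MinSize=10,
)
```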

Predictive Scaling

Predictive Scaling takes a proactive approach, continuously analyzing historical data to forecast load and schedule capacity ahead of time.

Metrics for Scaling

Choosing appropriate metrics is crucial. Consider metrics such as average CPU utilization, requests per EC2 instance, average network in/out, or custom metrics pushed to CloudWatch.

Scaling Cooldowns

To prevent rapid, unnecessary scaling activity, cooldown periods are essential: after a scaling action, the group waits for the cooldown to elapse before triggering another, giving new instances time to start handling load.

Instance Refresh

Instance Refresh allows you to update the launch template of your Auto Scaling Group and then gradually replace instances with the new configuration. This process helps you seamlessly apply changes to your instances, ensuring that your application remains available and responsive during the update.
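An instance refresh can also be started programmatically. Here is a minimal sketch with boto3; the group name and preference values are illustrative:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Gradually replace instances while keeping at least 90% of the
# group healthy; new instances get 300 seconds to warm up.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="my-asg",  # hypothetical group name
    Strategy="Rolling",
    Preferences={
        "MinHealthyPercentage": 90,
        "InstanceWarmup": 300,
    },
)
```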

Auto Scaling Group Configuration

To create an Auto Scaling group, start by creating a launch template. The launch template specifies how to configure the EC2 instances that the Auto Scaling group will launch. We will do this practically.


Log in to the Management Console, type EC2 in the search box, and select EC2 under Services.


In the EC2 console, on the left side of the navigation pane under Instances, click Launch Templates, then click Create launch template.


Give your template a name; I will call it Auto-scaling-Template. You can skip the version description for now and scroll down.

Next, configure the launch template contents. Under Application and OS images, select the Quick Start tab, then select Amazon Linux. Under Amazon Machine Image (AMI), open the dropdown and select Amazon Linux 2 AMI. Scroll down.

Under Instance type, open the dropdown and select t2.micro, because it is Free Tier eligible. Then, under Key pair (login), open the dropdown and select your key pair. Scroll down.

In the Network settings section, leave the subnet as it is. Under Firewall (security groups), choose Select existing security group and pick your security groups. I created one security group with port 22 open for SSH, called SSH-security-group, and another with ports 80 and 443 open for HTTP and HTTPS traffic, called web-traffic. Scroll down, leave all other settings as default, and click Create launch template.
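If you would rather script the launch template than click through the console, here is a rough boto3 equivalent of the settings above; the AMI ID, key pair name, and security group IDs are placeholders you would substitute with your own (AMI IDs are region-specific):

```python
import boto3

ec2 = boto3.client("ec2")

# Equivalent of the console steps above; all IDs are placeholders.
ec2.create_launch_template(
    LaunchTemplateName="Auto-scaling-Template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # your Amazon Linux 2 AMI ID
        "InstanceType": "t2.micro",
        "KeyName": "my-key-pair",            # your key pair
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # SSH / web-traffic groups
    },
)
```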
Go back to the EC2 dashboard, and at the bottom left of the navigation pane, select Auto Scaling Groups.
In the Create Auto Scaling group wizard, give your Auto Scaling group a name. Then, in the Launch template section, open the dropdown and choose the template you just created. Scroll down and click Next.

Under Network, select your VPC (I will use the default VPC), and under Availability Zones, open the dropdown and select your AZs; I will select us-east-1a and us-east-1b. Then scroll down and click Next.

On the Configure advanced options page, because we don't have a load balancer, skip the page and click Next. Then, in the Configure group size and scaling policies section, set the desired capacity to 2 and scroll down.
For scaling limits, set the minimum capacity to 1 and the maximum capacity to 4. Leave the other options and click Next.
We will not add notifications; click Next.

Leave tags optional and click next.

On this page, review your settings and click Create Auto Scaling group.
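For reference, the same group could be created with boto3 roughly as follows; the subnet IDs are placeholders for subnets in us-east-1a and us-east-1b:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Mirrors the console settings: desired 2, min 1, max 4,
# spread across two AZs via their subnets (placeholder IDs).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchTemplate={
        "LaunchTemplateName": "Auto-scaling-Template",
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)
```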

Back on the EC2 dashboard under Instances, we can confirm that Auto Scaling has already provisioned our desired capacity of instances.
This brings us to the end of this demo. Tear everything down to avoid charges. Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How Does Amazon CloudWatch Work?

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS. It enables real-time monitoring of resources such as EC2 instances, RDS database instances, load balancers, and AWS Lambda functions. CloudWatch lets you collect and track metrics, monitor log files, set alarms, and automate reactions to changes in your AWS resources.
It automatically provides metrics for CPU utilization, latency, and request counts, and it can also monitor other vital metrics such as memory usage and error rates.

CloudWatch Metrics

CloudWatch metrics give users visibility into resource utilization, application performance, and operational health, helping you resolve technical issues, streamline processes, and keep applications running smoothly.

How does Amazon CloudWatch work?

Amazon CloudWatch primarily performs the following four actions:

Collect metrics and logs

First, CloudWatch gathers metrics and logs from your AWS services, such as EC2 instances, and stores them in a repository. The repository can also hold custom metrics that you publish yourself.
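For instance, a custom metric could be published with boto3 along these lines; the namespace and metric name here are made up for illustration:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a hypothetical application metric to a custom namespace.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[
        {
            "MetricName": "PageLoadTime",
            "Value": 1.2,
            "Unit": "Seconds",
        }
    ],
)
```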

Monitor and visualize the data

Next, CloudWatch monitors and visualizes this data using CloudWatch dashboards. These dashboards provide a unified view of all your AWS applications, resources, and services, whether on premises or in the cloud. In addition, you can correlate metrics and logs, which facilitates visual analysis of your resources' health and performance.

Act on an automated response to any changes

In this step, CloudWatch executes automated responses to operational changes using alarms. For example, you can configure an alarm to stop or terminate an EC2 instance once it meets specific conditions, or use alarms to trigger services such as Amazon EC2 Auto Scaling or Amazon SNS notifications.
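As a sketch, an alarm that watches an instance's CPU and notifies an SNS topic might look like this in boto3; the instance ID and topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```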

Analyze your metrics

The final step is analyzing and visualizing your collected metric and log data for better insight. You can perform real-time analysis using CloudWatch Metric Math, which helps you dive deeper into your data.

Amazon CloudWatch Logs

CloudWatch Logs helps users access, monitor, and store log files from EC2 instances, CloudTrail, Lambda functions, and other sources. With the help of CloudWatch Logs, you can troubleshoot your systems and applications. It offers near real-time monitoring, and users can search for specific phrases, values, or patterns.

CloudWatch Logs is a managed service available from within your AWS account without any extra purchases, and it is easy to work with from the AWS console or the AWS CLI. It integrates deeply with other AWS services, and it can trigger alerts when certain entries appear in the logs.

For log collection, AWS provides both a newer unified CloudWatch agent and an older CloudWatch Logs agent; AWS recommends using the unified agent. When you install a CloudWatch Logs agent on an EC2 instance, it automatically creates a log group. Alternatively, you can create a log group directly from the AWS console.

For the demonstration, I have the following Lambda functions that I created.
Next, we will view the CloudWatch logs of my destination test function. To do so, select it, navigate to the Monitoring tab, and click “View CloudWatch logs,” as shown below.
After clicking “View CloudWatch logs,” you land on the CloudWatch dashboard, where you can select one of the log streams under Log streams.
Selecting the first one shows the log events below.
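The same log events can also be searched programmatically. Here is a minimal boto3 sketch; the log group name is a placeholder for your function's group:

```python
import boto3

logs = boto3.client("logs")

# Search a Lambda function's log group for ERROR entries.
response = logs.filter_log_events(
    logGroupName="/aws/lambda/destination-test",  # placeholder log group
    filterPattern="ERROR",
)
for event in response["events"]:
    print(event["timestamp"], event["message"])
```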

CloudWatch Events

CloudWatch Events allows users to consume a near real-time stream of events as changes to their AWS environment occur, and these changes can trigger notifications or other actions. For example, CloudWatch Events can monitor EC2 instance launches and shutdowns, detect Auto Scaling events, and detect when AWS services are provisioned or terminated.

What are the benefits of Amazon CloudWatch?

Access all monitoring data from a single dashboard

Essentially, Amazon CloudWatch allows you to monitor data from different services using a single dashboard.

Collects and analyzes metrics from AWS and on-premise applications

Thanks to its seamless integration with over 70 AWS services, CloudWatch can collect and publish metric data automatically.

Using this metric and log data, you can then optimize your AWS services and resources.

Improve your operational efficiency and optimize your available resources

The Amazon CloudWatch service provides real-time insights into cloud operations, enabling you to optimize operational efficiency and reduce costs.

Improve operational visibility

With the Amazon CloudWatch service, you gain operational visibility across all your running applications.

Extract valuable insights

Ultimately, Amazon CloudWatch enables you to extract valuable and actionable insights from generated logs.

Conclusion

Using the Amazon CloudWatch service, you can monitor cloud-based applications and other AWS services, which helps you troubleshoot performance issues. With its centralized dashboard, AWS administrators gain complete visibility into applications and services across AWS Regions. This brings us to the end of this blog. Stay tuned for more.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us at sales@accendnetworks.com or leave a comment below. Thank you!

How To Create a Network Load Balancer in AWS

Extreme Performance with Network Load Balancers

In today’s fast-paced digital era, where every millisecond counts, minimizing latency and optimizing network performance have become paramount for businesses. Network load balancing plays a crucial role in achieving these goals. By distributing incoming network traffic across multiple servers, network load balancing ensures efficient resource utilization, enhances scalability, and reduces latency.

As the diagram above shows, choose a Network Load Balancer if you need ultra-high performance.

What is a Network Load Balancer?

A Network Load Balancer operates on the Transport Layer (Layer 4) of the Open Systems Interconnection (OSI) model rather than the application layer, making it ideal for Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic. It is suitable for applications anticipating an unexpected spike in traffic because it can handle millions of concurrent requests per second.

Network load balancing is the process of evenly distributing incoming network traffic across multiple servers or resources. This intelligent traffic management technique helps to eliminate overload on individual servers and optimizes resource utilization.

Components of AWS NLB

A load balancer serves as the single point of contact for clients. The following are the two main components of the AWS NLB:
Listeners. Before an AWS NLB can be used, an admin must add one or more listeners. A listener is a process that uses the configured protocol and port number to look for connection requests. The rules defined for a listener dictate how an NLB routes traffic to the target groups.
Target groups. A target group consists of multiple registered targets to which the listener can route traffic, such as Amazon EC2 instances, IP addresses, microservices, and containers. A target can be registered with multiple target groups, which increases the availability of the application, especially if demand spikes.

How does load balancing work in AWS?

The network load balancer performs health checks on targets to ensure traffic is routed to only high-performing resources. When a target becomes slow or unresponsive, the NLB routes traffic to a different target.

Features of Network Load Balancer

Network Load Balancer serves over a million concurrent requests per second while providing extremely low latencies for applications that are sensitive to latency.

The Network Load Balancer allows the back end to see the client’s IP address by preserving the client-side source IP.

Network Load Balancer also provides static IP support per subnet.

To provide a fixed IP, Network Load Balancer also gives the option to assign an Elastic IP per subnet.

Other AWS services such as Auto Scaling, Elastic Container Service (ECS), CloudFormation, Elastic Beanstalk, and CloudWatch can be integrated with Network Load Balancer.

To communicate with other VPCs, Network Load Balancers can be used with AWS PrivateLink. AWS PrivateLink offers secure and private access between on-premises networks, AWS services, and VPCs.

Network load balancing offers several key advantages:

Improved Scalability: By distributing incoming traffic across multiple servers, network load balancing ensures that your system can handle increasing demands without compromising performance.

Enhanced Redundancy: Network load balancing introduces redundancy into your network infrastructure. If one server fails or experiences a high load, the load balancer automatically redirects traffic to the healthy servers, eliminating downtime.

Minimized Latency: Network load balancing helps minimize latency by dynamically directing requests to the server with the lowest latency or optimal proximity.

How to Create a Network Load Balancer?

To create a Network Load Balancer, log in to the Management Console, type EC2 in the search box, and select EC2 under Services. In the EC2 console, under Load Balancing, select Load Balancers, then click Create load balancer and choose Network Load Balancer.
Fill in your load balancer details. Under Load balancer name, give it a name; leave the scheme as Internet-facing and the IP address type as IPv4, then scroll down to the networking section.

Select your VPC, then under Mappings select the Availability Zones; make sure to select the AZs where your EC2 instance targets will reside. Then, under Security groups, select the security group for your load balancer and scroll down.

Under Listeners, we will keep TCP on port 80. Then, for the default action, click Create target group. (Remember, you can also create the target group beforehand.)
In the target group console, under Target type, we will keep Instances, and for the name, call it NLB-Target. Leave it on TCP port 80, select your VPC, then scroll down and click Next.
Under Register targets, select your instances; I had already created two instances for this demo, so I will select them. Click Include as pending below, then click Create target group.
Go back to the Network Load Balancer setup and select your target group; it will now show up.
Scroll down to review the summary, then click Create load balancer.
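For readers who prefer the API, the same load balancer, target group, and listener can be created with boto3 roughly as follows; the subnet, VPC, and instance IDs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create the internet-facing Network Load Balancer (placeholder subnets).
nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)["LoadBalancers"][0]

# Create the TCP:80 target group and register the two demo instances.
tg = elbv2.create_target_group(
    Name="NLB-Target",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-aaaa1111"}, {"Id": "i-bbbb2222"}],
)

# Forward TCP:80 from the listener to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```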
This is how we create a Network Load Balancer, which brings us to the end of this blog. Make sure to clean up.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How To Configure Serverless Computing

Serverless computing has emerged as a revolutionary paradigm in the world of cloud computing, transforming the way developers build and deploy applications. Unlike traditional server-centric models, serverless computing abstracts away infrastructure management, allowing developers to focus solely on writing code and delivering value to end-users.

What is serverless computing?

Serverless computing is a cloud computing model in which the cloud provider manages the underlying infrastructure required to run an application.

Comparison with traditional architecture

Traditional architecture and serverless computing represent two different approaches to building and deploying applications. Here are some differences between the two:

Infrastructure management: In traditional architecture, developers manage the underlying infrastructure, such as servers, storage, and networking. In serverless computing, the cloud provider manages the infrastructure, allowing developers to focus on writing code.

Scaling: In traditional architecture, scaling is typically achieved by adding more servers or resources as needed. In serverless computing, the cloud provider automatically scales the resources needed to handle the workload.
Cost: Traditional architecture can be expensive, as it requires the purchase and management of hardware and software. Serverless computing, on the other hand, is typically billed on a usage basis, which can be more cost-effective for variable workloads.
Cold starts: In serverless computing, functions may experience a cold start when they are invoked for the first time or after a period of inactivity. This can lead to longer response times, whereas in traditional architecture, the infrastructure is typically always running and ready to respond to requests.
Control: With traditional architecture, developers have full control over the infrastructure and can customize it to meet their specific needs. With serverless computing, the cloud provider manages the infrastructure and developers have less control over the environment.

Benefits of serverless computing

Serverless computing offers the following benefits, to mention a few:
Cost savings: Serverless applications are cost-effective because developers only pay for the resources used during the function’s execution, rather than paying for the entire infrastructure.
Scalability: Serverless applications can automatically scale up or down based on demand, ensuring that the application can handle sudden spikes in traffic or other events.
Reduced operational complexity: Serverless computing eliminates the need for developers to manage infrastructure and server-side resources, reducing operational complexity and allowing developers to focus on writing code.
Improved fault tolerance and availability: The cloud provider manages the infrastructure required to run the application, including monitoring, scaling, and failover, ensuring that the application is always available and can handle sudden spikes in traffic or other events. This provides a high level of fault tolerance and availability.

Best practices

Here are some best practices for developing serverless applications:
Function design: Design functions to be small, stateless, and focused on a single task (see the sketch after this list). This will help ensure that they can be easily tested, deployed, and scaled independently.
Use event-driven architectures: Use events to trigger functions in response to changes in the system. This can help reduce the cost of running your application, as functions only execute when needed.
Minimize cold starts: Cold starts occur when a function is invoked for the first time or when it has been idle for a while. These can lead to longer response times for users. To minimize cold starts, consider using provisioned concurrency or keeping functions warm by periodically invoking them.
Optimize resource usage: Because serverless applications are charged based on usage, it’s important to optimize resource usage to reduce costs. Consider using a CDN to cache static content, and use serverless databases to minimize the amount of server resources needed.
Use security best practices: Serverless applications are still vulnerable to security threats, so use security best practices such as encrypting sensitive data, limiting access to resources, and regularly patching software.
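To make the "small, stateless, single task" guidance concrete, here is a minimal sketch of a Python Lambda handler; the event shape and field names are hypothetical:

```python
import json

def handler(event, context):
    """Small, stateless function focused on a single task:
    greet the caller named in the (hypothetical) event payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```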

When to use serverless applications?

Serverless computing can be a good choice for certain types of applications. Here are some scenarios where serverless computing may be a good fit:
Event-driven workloads: Serverless computing is well-suited for event-driven workloads that are triggered by events, such as HTTP requests, changes to a database, or messages from a queue.
Variable workloads: Serverless computing is also well-suited for workloads that have variable demand, as the cloud provider can automatically scale the resources needed to handle the workload. This can reduce the cost of running the application during periods of low demand.
Rapid development: Serverless computing can be a good choice for applications that require rapid development and deployment. By removing the need to manage infrastructure, developers can focus on writing code and deploying features quickly.
Data processing: Serverless computing can also be a good choice for data processing workloads that can be broken down into smaller, independent tasks. This can help reduce the cost and complexity of managing the infrastructure needed to process large amounts of data.

Conclusion:

Serverless computing on AWS marks a paradigm shift, empowering developers to focus on creating innovative applications without the burden of managing infrastructure.
Stay tuned for more.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.
Thank you!


How to Configure VPC Endpoints to Enhance Security and Efficiency in AWS Cloud Networking

What is a VPC Endpoint?

Many AWS customers run their applications within a VPC for security or isolation reasons.

For example, previously, if you wanted your EC2 instances in your VPC to be able to access DynamoDB, you had two options.

You could use an Internet Gateway (with a NAT Gateway, or by assigning your instances public IPs).

You could route all of your traffic to your local infrastructure via VPN or AWS Direct Connect and then back to DynamoDB.

Both of these solutions had security and throughput implications, and it could be difficult to configure NACLs or security groups to restrict access to just DynamoDB.

A VPC endpoint is a feature of Amazon VPC that lets customers privately connect to supported AWS services and to VPC endpoint services powered by AWS PrivateLink.

VPC endpoints are virtual devices: horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and supported services without imposing availability risks or bandwidth constraints on your network traffic.

With VPC endpoints, Amazon VPC instances do not require public IP addresses to communicate with a service's resources, and the network traffic between the Amazon VPC and the AWS service never leaves the Amazon network, which is our exact requirement.


In other words, VPC endpoints enable you to privately connect your VPC to supported AWS services and VPC endpoint services powered by Private Link without requiring an IGW, NAT instance, VPN connection, or Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Types of VPC Endpoint

There are two types of VPC endpoints:

1. Gateway Endpoints

2. Interface Endpoints

Gateway Endpoints

A VPC Gateway Endpoint is a way to connect your VPC to an AWS service like S3 or DynamoDB without going through the public internet or needing to set up a VPN connection. This helps improve security and can also improve network performance since the traffic stays within the AWS network.

So, if we want to use S3 or DynamoDB from inside a VPC, a Gateway Endpoint is recommended over an Internet Gateway, NAT, or any other route, as it also improves security and latency for the application traffic.

Interface Endpoints

Interface endpoints enable connectivity to services over AWS Private Link. These services include some AWS managed services, services hosted by other AWS customers and partners in their own Amazon VPCs (referred to as endpoint services), and supported AWS Marketplace partner services. The owner of a service is a service provider. The principal creating the interface endpoint and using that service is a service consumer.

A Case Study — Connecting to S3 via VPC Gateway Endpoint

The following scenario connects to an AWS S3 bucket from an EC2 instance (within a public subnet) via a VPC endpoint.

We are going to create a VPC gateway endpoint. We will use policies to see how we can control traffic access to the AWS S3 bucket. According to our reference architecture, we have a VPC, a public subnet, and an instance running.

When we create our endpoint, a route is added to our route table whose destination resolves to the IP address ranges of Amazon S3. Because this route is more specific than any other route, such as the default route to the internet, connections to the S3 endpoints are routed through the gateway endpoint and never use the public internet.

Now let’s head over and create the VPC endpoint.

Log in to the Management Console with admin privileges, and in the search box, type VPC, then select VPC under Services.

In the VPC dashboard on the left side of the navigation pane select Endpoints.

On the Endpoint dashboard, click Create Endpoint.

In the Create endpoint dashboard, enter the settings for the endpoint.

Under the name tag, give your endpoint a name; I will call it Demogatewayendpoint.

Then, under the service category, leave it on AWS services and scroll a little bit down to Services.

Under Services, type S3 in the search box, then select the radio button for the Gateway type endpoint. Scroll a little bit down to the VPC section.

In the VPC section, select the VPC in which to create this endpoint. I have a VPC created already; if you need to create one, refer to our blog post on creating a custom VPC from scratch.

So, under VPC, I will open the dropdown and select prod-VPC. The endpoint lets our resources reach the service over AWS's private network instead of the public internet.

In the Route table section, I will select the public route table. Then, under Policy, I will select Full access.

These are the only settings we need to create our Endpoint, scroll down and click Create Endpoint.

And there we go; our endpoint has been successfully created.
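The console steps above map to a single API call. Here is a minimal boto3 sketch; the VPC and route table IDs are placeholders, and the service name assumes us-east-1:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3, attached to the public route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder: prod-VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # adjust for your region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table
)
```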

When you create an endpoint, an endpoint route is added to the route table you selected during creation. Let's verify this: on the left side of the VPC dashboard, click Route tables.

On the route table dashboard, select your public route table, then move down to the Routes tab; there you will see the VPC endpoint route added to this route table.

With all this done, we will now verify and test our endpoint using an EC2 instance. I have already created an instance; if you don't have one, create it and then proceed.

Next, I will create an IAM role that allows the EC2 instance to access S3. Let's proceed to the IAM dashboard, select Roles on the left side of the navigation pane, and click Create role.

For the trusted entity, select AWS service, then under use case, open the dropdown and select EC2, then click Next.

In the Add permissions dashboard, type S3 in the search box, tick the box for the S3 full access policy, then click Next.

Review the policy then click Create Role.

Next, we will attach this role to our EC2 instance. So, let’s go to the EC2 instance console.

I have one instance running.

Select it, choose the Actions dropdown, move down to Security, then select Modify IAM role.

In the Modify IAM role dashboard, select your S3 full access role, then click Update IAM role.

Next, let's head to the S3 console. I had already created a bucket and uploaded files to it, so go ahead and create a bucket for yourself.

Now let's SSH into our EC2 instance and try to list the buckets to see whether we can get access via the endpoint. Success: as we can see, we can list all the buckets.

Again, if I try to list the contents of a bucket, which I call my demo bucket, we get success.
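If boto3 is available on the instance, the same verification can be scripted. Here is a minimal sketch; the bucket name is a placeholder for your demo bucket:

```python
import boto3

s3 = boto3.client("s3")

# List all buckets, then the objects in the demo bucket.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

response = s3.list_objects_v2(Bucket="my-demo-bucket")  # placeholder name
for obj in response.get("Contents", []):
    print(obj["Key"])
```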

To verify that we were accessing the bucket via the endpoint, let's go back to the endpoint and modify its policy to deny. Select your endpoint, move to the Policy tab, change the policy from Allow to Deny, and then click Save.

Now come back to your terminal and try to list the contents of your buckets; you will get Access Denied.

This demonstrates that we were accessing our bucket and its contents only via the endpoint.

This brings us to the end of this demo. Clean up to avoid surprise bills.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!