

How To Configure AWS Transit Gateway

Network architecture in the cloud can quickly become complex as the number of VPCs, accounts, regions, on-premises data centers, and edge locations grows. Transit Gateway allows customers to simplify the network layout and the connectivity between all of these environments.

What is AWS Transit Gateway?

AWS Transit Gateway serves as a central hub that connects VPCs and on-premises networks, eliminating the need for individual point-to-point connections between every pair of networks.

With Transit Gateway, you only need to create connections from the VPCs, VPNs, and Direct Connect links to the Transit Gateway. Transit Gateway will then dynamically route traffic between all the connected networks.

Why Choose Amazon Transit Gateway?

  • Simplifies Connectivity: Transit Gateway lets you quickly connect to one central gateway, making it easier to interconnect all your VPCs and on-premises networks, regardless of how many connected accounts there are. Transit Gateway also supports dynamic and static layer 3 routing between Amazon VPCs and VPNs.
  • Facilitates Greater Control and Monitoring: AWS Transit Gateway allows users to monitor and manage all their Amazon VPC and edge connections in one place. The service makes it easier to find issues and handle events as they come. You may also enable Equal Cost Multipath (ECMP) between these connections to load balance between paths and increase bandwidth.
  • Bandwidth On Demand: Obtain the network bandwidth you need to move terabytes of data at a time for your applications, or even migrate into the cloud. You may add Amazon VPCs to your network without needing to provision extra connections from on-site networks.
  • Highly Secure: Through its integration with IAM, you can control who can access Transit Gateway. Create and manage user accounts and groups and establish permissions for them centrally.

Pricing

This service charges per connection attached to the Transit Gateway per hour, plus a fee for each GB of data processed. The owner is billed for each hour an Amazon VPC or VPN is connected, from the moment it is attached to the Transit Gateway until it is detached. Note that a partial hour is billed as one full hour.
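As a back-of-the-envelope illustration, here is a short Python sketch of how that billing model adds up. The rates are hypothetical placeholders, not actual AWS prices; check the AWS pricing page for your region.

```python
# Rough Transit Gateway cost estimate. The rates below are
# hypothetical examples, not real AWS prices.
ATTACHMENT_HOURLY = 0.05   # USD per attachment per hour (placeholder)
PER_GB_PROCESSED = 0.02    # USD per GB of data processed (placeholder)

attachments = 3            # three VPCs attached to the Transit Gateway
hours = 730                # roughly one month; partial hours round up
gb_processed = 500         # data processed through the gateway

monthly = attachments * hours * ATTACHMENT_HOURLY + gb_processed * PER_GB_PROCESSED
print(f"Estimated monthly cost: ${monthly:.2f}")  # -> $119.50
```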

So, as we can see, AWS Transit Gateway is a powerful service. AWS further describes it as a cloud router: it connects VPCs and on-premises locations together through a central hub. To show how Transit Gateway simplifies connectivity, we will look at two scenarios. First, we will consider a fully meshed architecture without AWS Transit Gateway. This will help us understand the problem Transit Gateway is trying to solve.

When we have many VPCs and on-premises connections without Transit Gateway, the peering connections we set up with VPC peering can become extremely complex.
Mesh architecture without Transit Gateway

Examining the above architecture, we have four Virtual Private Clouds (VPCs), denoted A, B, C, and D, interconnected through VPC peering links. The complexity is already apparent: six peering links are needed for just four VPCs. In general, a full mesh of n VPCs requires n(n-1)/2 peering links, so the setup grows quadratically; ten VPCs would need 45 links.


In this complicated setup, we have six connections between the VPCs, and the corporate office is linked in through a customer gateway. Here is where it gets tricky: connecting the corporate office to each VPC using Site-to-Site VPNs means having a virtual private gateway in each VPC and making a separate VPN connection to the customer gateway for each VPC. So, in the end, we are dealing with four VPN connections, and it gets even more complicated if we want a backup plan (redundancy).


If we dive a bit deeper into the problem, adding redundancy means we need an extra customer gateway and twice the number of those VPN connections. The more we look into it, the more complex it becomes, turning our setup into a really tangled network.


Now, let's look at the same setup using a Transit Gateway. This option makes all the connections simpler and easier to manage.

Mesh Architecture with Transit Gateway

In this situation, as seen from the above architecture, we have the same four VPCs and a corporate office. Now, let’s simplify things by putting a Transit Gateway in the middle. It acts like the main hub that connects all the VPCs and the on-premises networks.

So, each of these VPCs gets attached to the Transit Gateway. For each attachment you choose one subnet per Availability Zone, and the Transit Gateway uses it to direct traffic to the other subnets within that zone. It's like giving each zone its own route.

There is also the corporate office's data center, whose customer gateway likewise connects to the Transit Gateway. That's pretty much the setup: this service lets us route through a central cloud router to any of these VPCs.

Transit Gateways (TGWs) can be attached to VPNs, Direct Connect Gateways, third-party appliances, and even other Transit Gateways in different regions or accounts.
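For readers who script their infrastructure, here is a minimal boto3 sketch of creating a Transit Gateway and attaching a VPC to it. The VPC and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the Transit Gateway, the central hub for all attachments.
tgw = ec2.create_transit_gateway(
    Description="Central hub for VPCs and on-premises networks",
    Options={
        "AmazonSideAsn": 64512,                    # default private ASN
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC, choosing one subnet per Availability Zone; the
# Transit Gateway routes that zone's traffic through it. In practice,
# wait for the gateway state to become 'available' before attaching.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    SubnetIds=["subnet-0aaaa1111bbbb2222c", "subnet-0ccc3333dddd4444e"],
)
```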

Explore how AWS Transit Gateway seamlessly integrates with Direct Connect Gateway, enabling transitive routing for growing companies with multiple VPCs.

AWS Transit Gateway and Direct Connect Gateway
Instead of using a complicated Site-to-Site VPN, our corporate office has a customer router. We connect to a DX (Direct Connect) location using a DX Gateway.

Now, the DX Gateway has a connection with the Transit Gateway. This connection is called an association. We then physically connect back to the corporate office from Direct Connect, creating something called a Transit VIF. This is a special virtual interface used only when you're connecting a DX Gateway to a Transit Gateway.

This setup supports full transitive routing between on-premises, the Transit Gateway, and all those connected VPCs. When your company gets bigger and uses more VPCs in different areas, and you want them all to connect smoothly, Transit Gateway becomes super helpful.

Conclusion

AWS Transit Gateway makes cloud network setups simpler. It acts as a hub connecting your VPCs, VPNs, and data centers, making everything easier to manage. It does away with confusing mesh setups, provides easy scalability, and keeps your network organized and secure.

As your cloud presence grows, Transit Gateway is the key to keeping your network simple, efficient, and secure.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!

 


How To Configure AWS EC2 Auto Scaling

Amazon Web Services (AWS) offers a robust solution through Auto Scaling Groups, facilitating automatic adjustments to instance numbers based on demand.

Auto Scaling Group Fundamentals

In real-world scenarios, the demand for websites and applications can fluctuate significantly. AWS Auto Scaling Groups allow servers to be created or terminated rapidly to accommodate these changes. The key objectives of ASGs include dynamically adding or removing EC2 instances based on demand, setting limits on instance counts, and automatically registering newly launched instances with load balancers. ASGs also replace terminated instances, ensuring continuous availability, and they are cost-effective because you pay only for the underlying EC2 instances.

How Do Auto Scaling Groups Work?

The size of your Auto Scaling group is maintained according to a predefined number of instances, which you configure as the desired capacity. You can resize the group manually or automatically according to application requirements.

Initially, an Auto Scaling group launches enough instances to meet its desired capacity. By default, it maintains this number of instances by performing regular health checks, identifying unhealthy instances, terminating them, and launching replacements.
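Manual sizing is the simplest form of resizing. Here is a minimal boto3 sketch (the group name is a placeholder) that changes the desired capacity directly:

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

# Manual sizing: Auto Scaling launches or terminates instances
# until the group matches the new desired capacity.
asg.set_desired_capacity(
    AutoScalingGroupName="my-asg",   # placeholder group name
    DesiredCapacity=3,
    HonorCooldown=True,              # wait out any in-progress cooldown
)
```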

Now let's look at Auto Scaling policies.

Auto Scaling and CloudWatch Alarms

Target Tracking Scaling

First, consider a scenario where you want to maintain an average CPU utilization of 40%.
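With target tracking, you state the target value and Auto Scaling creates and manages the underlying CloudWatch alarms for you. A minimal boto3 sketch, with placeholder names:

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

# Keep the group's average CPU utilization near 40%.
asg.put_scaling_policy(
    AutoScalingGroupName="my-asg",        # placeholder group name
    PolicyName="keep-cpu-at-40",          # placeholder policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,
    },
)
```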

Simple/Step Scaling

Then, for a more granular approach, consider Simple/Step Scaling, where you define the CloudWatch alarms yourself and specify the scaling adjustments to apply when they fire.

Scheduled Actions

Next, Scheduled Actions can be employed in scenarios where scaling needs can be anticipated, for example, increasing the minimum capacity to 10 at 5 pm every Friday.
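A minimal boto3 sketch of exactly that scheduled action; the group name is a placeholder, and the recurrence is a cron expression evaluated in UTC by default:

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

# Raise the minimum capacity to 10 at 17:00 every Friday.
asg.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",             # placeholder group name
    ScheduledActionName="friday-evening-ramp-up",
    Recurrence="0 17 * * 5",                   # cron: 17:00 UTC, Fridays
    MinSize=10,
)
```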

Predictive Scaling

Now, Predictive Scaling introduces a proactive approach by continuously analyzing historical data to forecast demand and schedule capacity ahead of it.

Metrics for Scaling

Choosing appropriate metrics is crucial. Consider metrics such as average CPU utilization, requests per EC2 instance, average network in/out, or custom metrics pushed using CloudWatch.

Scaling Cooldowns

To prevent rapid and unnecessary scaling activities, cooldown periods are essential: after a scaling activity completes, the group waits for the cooldown to expire before starting another one.

Instance Refresh

Instance Refresh allows you to update the launch template of your Auto Scaling Group and then gradually replace instances with the new configuration. This process helps you seamlessly apply changes to your instances, ensuring that your application remains available and responsive during the update.
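A minimal boto3 sketch of kicking off such a refresh; the group name and preference values are placeholders:

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

# Gradually replace instances with ones built from the latest launch
# template version, keeping at least 90% of the group healthy.
asg.start_instance_refresh(
    AutoScalingGroupName="my-asg",     # placeholder group name
    Strategy="Rolling",
    Preferences={
        "MinHealthyPercentage": 90,
        "InstanceWarmup": 300,         # seconds before a new instance counts as ready
    },
)
```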

Auto Scaling Group Configuration

To create an Auto Scaling group, first create a launch template. The launch template specifies how to configure the EC2 instances that the Auto Scaling group will launch. We will do this practically.


Log in to the management console, type EC2 in the search box, and select EC2 under Services.


In the EC2 console, on the left side of the navigation pane under Instances, click Launch Templates, then click Create launch template.


Give your template a name; I will call it Auto-scaling-Template. You can skip the version description for now and scroll down.

Now we will configure our launch template. Under Launch template contents, in Application and OS images, select the Quick Start tab, then select Amazon Linux. Under Amazon Machine Image (AMI), open the drop-down and select the Amazon Linux 2 AMI. Scroll down.

Under Instance type, open the drop-down and select t2.micro, because it is free-tier eligible. Then, under Key pair (login), open the drop-down and select your key pair. Scroll down.

In the Network settings section, leave the subnet as it is. Then move to Firewall and security and choose Select existing security group. For security groups, select yours. I created a security group with port 22 open for SSH and called it SSH security group, and another with ports 80 and 443 open for HTTP and HTTPS traffic, called web traffic. Scroll down, leave all the other settings as default, and click Create launch template.
Come back to the EC2 dashboard and, at the bottom left of the navigation pane, select Auto Scaling Groups.
In the Create Auto Scaling group wizard, give your Auto Scaling group a name. Then, in the Launch template section, open the drop-down and choose the template you just created. Scroll down and click Next.

Under Network, select your VPC; I will go with the default VPC. Then, under Availability Zones, open the drop-down and select your AZs; I will select us-east-1a and us-east-1b. Scroll down and click Next.

On the Configure advanced options page, because we don't have a load balancer, we will skip it and click Next. Then, in the Configure group size and scaling section, set the desired capacity to 2 and scroll down.
For scaling limits, set the minimum capacity to 1 and the maximum capacity to 4. Leave the other options and click Next.
We will not add notifications; click Next.

Leave tags optional and click next.

On this page, review your settings and click Create Auto Scaling group.

Back in the EC2 dashboard, under Instances, we can already confirm that Auto Scaling has provisioned our desired capacity of instances.
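If you prefer to script it, here is a minimal boto3 sketch mirroring these console steps; the group name and subnet IDs are placeholders:

```python
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

# Same settings as the console walkthrough: the launch template,
# desired capacity 2, and limits of 1 to 4 instances.
asg.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",                 # placeholder name
    LaunchTemplate={
        "LaunchTemplateName": "Auto-scaling-Template",
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    # Comma-separated subnets in us-east-1a and us-east-1b (placeholders).
    VPCZoneIdentifier="subnet-0aaaa1111bbbb2222c,subnet-0ccc3333dddd4444e",
)
```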
This brings us to the end of this demo. Remember to tear everything down to avoid charges. Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How Does Amazon CloudWatch Work?

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS. It enables real-time monitoring of resources such as EC2 instances, RDS database instances, load balancers, and AWS Lambda functions. CloudWatch allows you to collect and track metrics, monitor log files, set alarms, and automate reactions to changes in AWS resources.
It automatically provides metrics for CPU utilization, latency, and request counts, and it can monitor other vital metrics such as memory usage and error rates.

CloudWatch Metrics

CloudWatch metrics give users visibility into resource utilization, application performance, and operational health. They help you resolve technical issues, streamline processes, and keep your applications running smoothly.

How does Amazon CloudWatch work?

Amazon CloudWatch primarily performs the following four actions:

Collect metrics and logs

In the first step, CloudWatch gathers metrics and logs from your AWS services, such as EC2 instances, and stores the metrics in a repository. This repository may also contain custom metrics that you publish yourself.
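For example, here is a minimal boto3 sketch of publishing a custom metric; the namespace, metric, and dimension names are illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a custom metric into the repository alongside AWS metrics.
cloudwatch.put_metric_data(
    Namespace="MyApp",                      # placeholder namespace
    MetricData=[{
        "MetricName": "ActiveSessions",
        "Dimensions": [{"Name": "Service", "Value": "checkout"}],
        "Value": 42.0,
        "Unit": "Count",
    }],
)
```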

Monitor and visualize the data

Next, CloudWatch monitors and visualizes this data using CloudWatch dashboards. These dashboards provide a unified view of all your AWS applications, resources, and services, whether on-premises or in the cloud. In addition, you can correlate metrics and logs, which facilitates visual analysis of your resources' health and performance.

Act with an automated response to any changes

In this step, CloudWatch executes an automated response to operational changes using alarms. For example, you can configure an alarm to stop or terminate an EC2 instance once it meets specific conditions, or use alarms to trigger services such as Amazon EC2 Auto Scaling or Amazon SNS.
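As a concrete illustration, here is a minimal boto3 sketch of an alarm on EC2 CPU utilization; the instance ID and SNS topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the instance averages over 80% CPU for two
# consecutive 5-minute periods; the action notifies an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```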

Analyze your metrics

The final step is analyzing and visualizing your collected metric and log data for better insight. You can perform real-time analysis using CloudWatch Metric Math which helps you dive deeper into your data.

Amazon CloudWatch Logs

CloudWatch Logs helps users access, monitor, and store log files from EC2 instances, CloudTrail, Lambda functions, and other sources. With the help of CloudWatch Logs, you can troubleshoot your systems and applications. It offers near real-time monitoring, and users can search for specific phrases, values, or patterns.

CloudWatch Logs is a managed service available from within your AWS account without any extra purchases, and it is easy to work with from the AWS console or the AWS CLI. It integrates deeply with AWS services, and it can trigger alerts when certain entries occur in the logs.

For log collection, AWS provides both a new unified CloudWatch agent and an older CloudWatch Logs agent; AWS recommends using the unified agent. When you install a CloudWatch Logs agent on an EC2 instance, it automatically creates a log group. Alternatively, you can create a log group directly from the AWS console. For the demonstration, I have the following Lambda functions that I created.
Next, we will proceed to view the CloudWatch logs of my destination test function. To do so, select it and navigate to the monitoring tab. Then, click on “View CloudWatch logs,” as shown below.
After clicking “View CloudWatch logs,” the system takes you to the CloudWatch dashboard. And under log streams, you can select one of the log streams to view.
Selecting the first one shows the log events below.
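The same events can be fetched programmatically. A minimal boto3 sketch, assuming a hypothetical log group name for the demo function:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Search a Lambda function's log group for error lines.
response = logs.filter_log_events(
    logGroupName="/aws/lambda/destination-test",   # placeholder log group
    filterPattern="ERROR",
    limit=20,
)
for event in response["events"]:
    print(event["timestamp"], event["message"].strip())
```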

CloudWatch Events

CloudWatch Events allows users to consume a near real-time stream of events as changes to their AWS environment occur. These event changes can then trigger notifications or other actions. For example, CloudWatch Events can monitor EC2 instance launches and shutdowns, detect Auto Scaling events, and detect when AWS services are provisioned or terminated.

What are the benefits of Amazon CloudWatch?

Access all monitoring data from a single dashboard

Essentially, Amazon CloudWatch allows you to monitor data from different services using a single dashboard.

Collects and analyzes metrics from AWS and on-premises applications

Thanks to its seamless integration with over 70 AWS services, CloudWatch can collect and publish metric data automatically. Using this metric and log data, you can then optimize your AWS services and resources.

Improve your operational efficiency and optimize your available resources

The Amazon CloudWatch service provides real-time insights into cloud operations, enabling you to optimize operational efficiency and reduce costs.

Improve operational visibility

With the Amazon CloudWatch service, you gain operational visibility across all your running applications.

Extract valuable insights

Ultimately, Amazon CloudWatch enables you to extract valuable and actionable insights from generated logs.

Conclusion

Using the Amazon CloudWatch service, you can monitor cloud-based applications and other AWS services, which helps you troubleshoot any performance issues. With its centralized dashboard, AWS administrators have complete visibility into applications and services across AWS Regions. This brings us to the end of this blog. Stay tuned for more.
For questions or AWS project assistance, contact us at sales@accendnetworks.com or leave a comment below. Thank you!

How To Create a Network Load Balancer in AWS

Extreme Performance with Network Load Balancers

In today’s fast-paced digital era, where every millisecond counts, minimizing latency and optimizing network performance have become paramount for businesses. Network load balancing plays a crucial role in achieving these goals. By distributing incoming network traffic across multiple servers, network load balancing ensures efficient resource utilization, enhances scalability, and reduces latency.

As the diagram above shows, choose a Network Load Balancer if you need ultra-high performance.

What is a Network Load Balancer?

A Network Load Balancer operates on the Transport Layer (Layer 4) of the Open Systems Interconnection (OSI) model rather than the application layer, making it ideal for Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic. It is suitable for applications anticipating an unexpected spike in traffic because it can handle millions of concurrent requests per second.

Network load balancing is the process of evenly distributing incoming network traffic across multiple servers or resources. This intelligent traffic management technique helps to eliminate overload on individual servers and optimizes resource utilization.

Components of AWS NLB

A load balancer serves as the single point of contact for clients. The following are the two main components of the AWS NLB:
Listeners. Before an AWS NLB can be used, an admin must add one or more listeners. A listener is a process that uses the configured protocol and port number to look for connection requests. The rules defined for a listener dictate how an NLB routes traffic to the target groups.
Target groups. A target group consists of multiple registered targets to which the listener can route traffic, such as Amazon EC2 instances, IP addresses, microservices, and containers. A target can be registered with multiple target groups, which increases the availability of the application, especially if demand spikes.

How does load balancing work in AWS?

The Network Load Balancer performs health checks on targets to ensure traffic is routed only to healthy resources. When a target becomes slow or unresponsive, the NLB routes traffic to a different target.

Features of Network Load Balancer

Network Load Balancer serves over a million concurrent requests per second while providing extremely low latencies for applications that are sensitive to latency.

The Network Load Balancer allows the back end to see the client’s IP address by preserving the client-side source IP.

Network Load Balancer also provides static IP support per subnet.

To provide a fixed IP, Network Load Balancer also gives the option to assign an Elastic IP per subnet.

Other AWS services such as Auto Scaling, Elastic Container Service (ECS), CloudFormation, Elastic Beanstalk, and CloudWatch can be integrated with Network Load Balancer.

To communicate with other VPCs, Network Load Balancers can be used with AWS PrivateLink, which offers secure and private access between on-premises networks, AWS services, and VPCs.

Network load balancing offers several key advantages:

Improved Scalability: By distributing incoming traffic across multiple servers, network load balancing ensures that your system can handle increasing demands without compromising performance.

Enhanced Redundancy: Network load balancing introduces redundancy into your network infrastructure. If one server fails or experiences a high load, the load balancer automatically redirects traffic to the healthy servers, eliminating downtime.

Minimized Latency: Network load balancing helps minimize latency by dynamically directing requests to the server with the lowest latency or optimal proximity.

How to Create a Network Load Balancer?

To create a Network Load Balancer, log in to the management console, type EC2 in the search box, and select EC2 under Services. In the EC2 console, under Load balancing, select Load balancers, then click Create load balancer and choose Network Load Balancer.
Fill in your load balancer details. Under Name, give it a name; leave it internet-facing with the IPv4 address type, then scroll down to the networking section.

Select your VPC, then under Mappings select the Availability Zones; make sure to choose the AZs where your EC2 instance targets will reside. Under Security, select the security group for your load balancer, then scroll down.

Under Listeners, we will stay with TCP on port 80. Then, for the default action, click Create target group. (Remember, you can also create it beforehand.)
In the target group console, under Target type, we will go with Instances, and for the name, call it NLB-Target. Leave it on TCP port 80, select your VPC, then scroll down and click Next.
Under Register targets, select your instances. I had already created two instances for this demo, so I will select them, click Include as pending below, then click Create target group.
Come back to the Network Load Balancer page and select your target group; it will now show up.
Scroll down to review the summary, then click Create load balancer.
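For completeness, here is a minimal boto3 sketch of the same flow; all IDs and names are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create the internet-facing Network Load Balancer.
nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaaa1111bbbb2222c", "subnet-0ccc3333dddd4444e"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group for TCP traffic on port 80.
tg = elbv2.create_target_group(
    Name="NLB-Target",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the two demo instances.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaaa1111bbbb2222c"}, {"Id": "i-0ccc3333dddd4444e"}],
)

# Listener that forwards TCP:80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```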
This is how we create a Network Load Balancer. This brings us to the end of this blog. Make sure to clean up.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How To Configure Serverless Computing

Serverless computing has emerged as a revolutionary paradigm in the world of cloud computing, transforming the way developers build and deploy applications. Unlike traditional server-centric models, serverless computing abstracts away infrastructure management, allowing developers to focus solely on writing code and delivering value to end-users.

What is serverless computing?

Serverless computing is a cloud computing model in which the cloud provider manages the underlying infrastructure required to run an application.
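To make that concrete, here is a minimal sketch of a serverless function as it might look on AWS Lambda; the event fields are illustrative:

```python
# handler.py: a minimal AWS Lambda function. The cloud provider
# provisions, scales, and patches the servers that run it; you
# deploy only this code.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (an HTTP request, an S3
    # notification, a queue message); 'context' holds runtime info.
    name = event.get("name", "world")   # illustrative event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```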

Comparison with traditional architecture

Traditional architecture and serverless computing represent two different approaches to building and deploying applications. Here are some differences between the two:

Infrastructure management:

In traditional architecture, developers manage the underlying infrastructure, such as servers, storage, and networking. In serverless computing, the cloud provider manages the infrastructure, allowing developers to focus on writing code.

Scaling: In traditional architecture, scaling is typically achieved by adding more servers or resources as needed. In serverless computing, the cloud provider automatically scales the resources needed to handle the workload.
Cost: Traditional architecture can be expensive, as it requires the purchase and management of hardware and software. Serverless computing, on the other hand, is typically billed on a usage basis, which can be more cost-effective for variable workloads.
Cold starts: In serverless computing, functions may experience a cold start when they are invoked for the first time or after a period of inactivity. This can lead to longer response times, whereas in traditional architecture, the infrastructure is typically always running and ready to respond to requests.
Control: With traditional architecture, developers have full control over the infrastructure and can customize it to meet their specific needs. With serverless computing, the cloud provider manages the infrastructure and developers have less control over the environment.

Benefits of serverless computing

Serverless computing offers the following benefits, to mention just a few.
Cost savings: Serverless applications are cost-effective because developers only pay for the resources used during the function’s execution, rather than paying for the entire infrastructure.
Scalability: Serverless applications can automatically scale up or down based on demand, ensuring that the application can handle sudden spikes in traffic or other events.
Reduced operational complexity: Serverless computing eliminates the need for developers to manage infrastructure and server-side resources, reducing operational complexity and allowing developers to focus on writing code.
Improved fault tolerance and availability: The cloud provider manages the infrastructure required to run the application, including monitoring, scaling, and failover, ensuring that the application is always available and can handle sudden spikes in traffic or other events. This provides a high level of fault tolerance and availability.

Best practices

Here are some best practices for developing serverless applications:
Function design: Design functions to be small, stateless, and focused on a single task. This will help ensure that they can be easily tested, deployed, and scaled independently.
Use event-driven architectures: Use events to trigger functions in response to changes in the system. This can help reduce the cost of running your application, as functions only execute when needed.
Minimize cold starts: Cold starts occur when a function is invoked for the first time or when it has been idle for a while, and they can lead to longer response times for users. To minimize cold starts, consider using provisioned concurrency (see the sketch after this list) or keeping functions warm by periodically invoking them.
Optimize resource usage: Because serverless applications are charged based on usage, it’s important to optimize resource usage to reduce costs. Consider using a CDN to cache static content, and use serverless databases to minimize the amount of server resources needed.
Use security best practices: Serverless applications are still vulnerable to security threats, so use security best practices such as encrypting sensitive data, limiting access to resources, and regularly patching software.
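Here is the provisioned concurrency sketch referenced above, using boto3; the function name and alias are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Keep five execution environments initialized for a published
# version or alias (not $LATEST), so invocations skip the cold start.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",        # placeholder function name
    Qualifier="prod",                  # placeholder alias or version
    ProvisionedConcurrentExecutions=5,
)
```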

When to use serverless applications?

Serverless computing can be a good choice for certain types of applications. Here are some scenarios where serverless computing may be a good fit:
Event-driven workloads: Serverless computing is well-suited for event-driven workloads that are triggered by events, such as HTTP requests, changes to a database, or messages from a queue.
Variable workloads: Serverless computing is also well-suited for workloads that have variable demand, as the cloud provider can automatically scale the resources needed to handle the workload. This can reduce the cost of running the application during periods of low demand.
Rapid development: Serverless computing can be a good choice for applications that require rapid development and deployment. By removing the need to manage infrastructure, developers can focus on writing code and deploying features quickly.
Data processing: Serverless computing can also be a good choice for data processing workloads that can be broken down into smaller, independent tasks. This can help reduce the cost and complexity of managing the infrastructure needed to process large amounts of data.

Conclusion

Serverless computing on AWS marks a paradigm shift, empowering developers to focus on creating innovative applications without the burden of managing infrastructure.
Stay tuned for more.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.
Thank you!