

How Does Amazon CloudWatch Work?

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS. It provides real-time monitoring of AWS resources such as EC2 instances, RDS database instances, load balancers, and AWS Lambda functions. CloudWatch lets you collect and track metrics, monitor log files, set alarms, and automate reactions to changes in your AWS resources.
It automatically provides metrics for CPU utilization, latency, and request counts, and it can also monitor other vital metrics such as memory usage and error rates.

CloudWatch Metrics

CloudWatch metrics give users visibility into resource utilization, application performance, and operational health. They help you resolve technical issues, streamline processes, and keep your applications running smoothly.

How does Amazon CloudWatch work?

Amazon CloudWatch primarily performs the following four actions:

Collect metrics and logs

In the first step, CloudWatch gathers metrics and logs from your AWS services, such as EC2 instances, and stores them in a repository. This repository can also hold custom metrics that you publish yourself.
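
For example, you can publish a custom metric to CloudWatch programmatically. Here is a minimal boto3 sketch; the namespace, metric name, and dimension values are hypothetical examples, not part of the demo above:

```python
# A minimal sketch of publishing a custom metric with boto3.
# The namespace, metric name, and dimension are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="MyApp/Demo",  # hypothetical custom namespace
    MetricData=[
        {
            "MetricName": "ProcessedOrders",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 42,
            "Unit": "Count",
        }
    ],
)
```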

Monitor and visualize the data

Next, CloudWatch monitors and visualizes this data using CloudWatch dashboards. These dashboards provide a unified view of all your AWS applications, resources, and services, whether on premises or in the cloud. In addition, you can correlate metrics and logs, which facilitates visual analysis of your resources’ health and performance.

Act with automated responses to changes

In this step, CloudWatch executes an automated response to operational changes using alarms. For example, you can configure an alarm to stop, terminate, reboot, or recover an EC2 instance after it meets specific conditions. You can also use alarms to trigger services such as Amazon EC2 Auto Scaling or Amazon SNS; when an alarm fires, it can kick off automated actions such as scaling.
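
As an illustration, the sketch below creates an alarm that notifies an SNS topic when average CPU utilization on an EC2 instance stays above 80% for two consecutive 5-minute periods. The instance ID and SNS topic ARN are hypothetical placeholders:

```python
# A hedged sketch of creating a CloudWatch alarm with boto3.
# The instance ID and SNS topic ARN below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # evaluate in 5-minute windows
    EvaluationPeriods=2,       # two consecutive breaching periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```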

Analyze your metrics

The final step is analyzing and visualizing your collected metric and log data for better insight. You can perform real-time analysis using CloudWatch Metric Math, which helps you dive deeper into your data.
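
For instance, Metric Math can derive an error rate from two raw metrics. The following is a minimal sketch using two hypothetical Application Load Balancer metrics; a real query would also include the load balancer dimension, omitted here for brevity:

```python
# A minimal Metric Math sketch: compute an error rate (%) from two metrics.
# The query IDs "errors" and "requests" feed the math expression "rate".
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

response = cloudwatch.get_metric_data(
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    MetricDataQueries=[
        {"Id": "rate", "Expression": "100 * errors / requests", "Label": "ErrorRate%"},
        {
            "Id": "errors",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/ApplicationELB",
                    "MetricName": "HTTPCode_Target_5XX_Count",
                    # Dimensions (e.g. the LoadBalancer) omitted for brevity.
                },
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,  # only return the computed expression
        },
        {
            "Id": "requests",
            "MetricStat": {
                "Metric": {"Namespace": "AWS/ApplicationELB", "MetricName": "RequestCount"},
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
    ],
)
print(response["MetricDataResults"][0]["Values"])
```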

Amazon CloudWatch Logs

CloudWatch Logs helps users access, monitor, and store log files from EC2 instances, CloudTrail, Lambda functions, and other sources. With the help of CloudWatch Logs, you can troubleshoot your systems and applications. It offers near real-time monitoring, and users can search for specific phrases, values, or patterns.

You can provision CloudWatch Logs as a managed service from within your AWS account, without any extra purchases. CloudWatch Logs is easy to work with from the AWS console or the AWS CLI, and it integrates deeply with other AWS services. Furthermore, CloudWatch Logs can trigger alerts when certain entries occur in the logs.

For log collection, AWS provides both a new unified CloudWatch agent and an older CloudWatch Logs agent; AWS recommends using the unified CloudWatch agent. When you install the agent on an EC2 instance, it automatically creates a log group. Alternatively, you can create a log group directly from the AWS console.

For the demonstration, I have the following Lambda functions that I created.
Next, we will view the CloudWatch logs of my destination test function. To do so, select it and navigate to the Monitoring tab, then click “View CloudWatch logs,” as shown below.
After clicking “View CloudWatch logs,” the system takes you to the CloudWatch dashboard, where, under Log streams, you can select one of the log streams to view.
Selecting the first one shows the log events below.
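
The same search can be done programmatically. Here is a minimal boto3 sketch that filters a Lambda function’s log group for a pattern; the log group name follows the conventional /aws/lambda/<function-name> path and is a hypothetical stand-in for the function above:

```python
# A small sketch of searching a log group for a pattern with boto3.
# The log group name is hypothetical for this demo.
import boto3

logs = boto3.client("logs", region_name="us-east-1")

response = logs.filter_log_events(
    logGroupName="/aws/lambda/destination-test-function",
    filterPattern="ERROR",  # match events containing the literal term ERROR
    limit=20,
)
for event in response["events"]:
    print(event["timestamp"], event["message"].strip())
```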

CloudWatch Events

CloudWatch Events allows users to consume a near real-time stream of events as changes to their AWS environment occur. These events can then trigger notifications or other actions. For example, CloudWatch Events can monitor EC2 instance launches and shutdowns, detect Auto Scaling events, and detect when AWS services are provisioned or terminated.
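
As a sketch of this, the rule below matches EC2 instance state changes and forwards them to an SNS topic. The rule name and topic ARN are hypothetical placeholders:

```python
# A hedged sketch of a CloudWatch Events rule for EC2 state changes.
# The rule name and SNS topic ARN are hypothetical.
import json

import boto3

events = boto3.client("events", region_name="us-east-1")

events.put_rule(
    Name="ec2-state-change-demo",
    EventPattern=json.dumps(
        {
            "source": ["aws.ec2"],
            "detail-type": ["EC2 Instance State-change Notification"],
            "detail": {"state": ["running", "terminated"]},
        }
    ),
)
events.put_targets(
    Rule="ec2-state-change-demo",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}],
)
```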

What are the benefits of Amazon CloudWatch?

Access all monitoring data from a single dashboard

Essentially, Amazon CloudWatch allows you to monitor data from different services using a single dashboard.

Collect and analyze metrics from AWS and on-premises applications

Thanks to its seamless integration with over 70 AWS services, CloudWatch can collect and publish metric data automatically.

Using this metric and log data, you can then optimize your AWS services and resources.

Improve your operational efficiency and optimize your available resources

The Amazon CloudWatch service provides real-time insights into cloud operations, enabling you to optimize operational efficiency and reduce costs.

Improve operational visibility

With the Amazon CloudWatch service, you gain operational visibility across all your running applications.

Extract valuable insights

Ultimately, Amazon CloudWatch enables you to extract valuable and actionable insights from generated logs.

Conclusion

Using the Amazon CloudWatch service, you can monitor cloud-based applications and other AWS services, which helps you troubleshoot performance issues. With its centralized dashboard, AWS administrators have complete visibility into applications and services across AWS Regions. This brings us to the end of this blog; stay tuned for more.
For questions or AWS project assistance, contact us at sales@accendnetworks.com or leave a comment below. Thank you!

How To Create a Network Load Balancer in AWS

Extreme Performance with Network Load Balancers

In today’s fast-paced digital era, where every millisecond counts, minimizing latency and optimizing network performance have become paramount for businesses. Network load balancing plays a crucial role in achieving these goals. By distributing incoming network traffic across multiple servers, network load balancing ensures efficient resource utilization, enhances scalability, and reduces latency.

As the diagram above shows, choose a Network Load Balancer if you need ultra-high performance.

What is a Network Load Balancer?

A Network Load Balancer operates on the Transport Layer (Layer 4) of the Open Systems Interconnection (OSI) model rather than the application layer, making it ideal for Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic. It is suitable for applications anticipating an unexpected spike in traffic because it can handle millions of concurrent requests per second.

Network load balancing is the process of evenly distributing incoming network traffic across multiple servers or resources. This intelligent traffic management technique helps to eliminate overload on individual servers and optimizes resource utilization.

Components of AWS NLB

A load balancer serves as the single point of contact for clients. The following are the two main components of the AWS NLB:
Listeners. Before an AWS NLB can be used, an admin must add one or more listeners. A listener is a process that uses the configured protocol and port number to look for connection requests. The rules defined for a listener dictate how an NLB routes traffic to the target groups.
Target groups. A target group consists of multiple registered targets to which the listener can route traffic, such as Amazon EC2 instances, IP addresses, microservices, and containers. A target can be registered with multiple target groups, which increases the availability of the application, especially if demand spikes.

How does load balancing work in AWS?

The network load balancer performs health checks on targets to ensure traffic is routed to only high-performing resources. When a target becomes slow or unresponsive, the NLB routes traffic to a different target.

Features of Network Load Balancer

Network Load Balancer serves over a million concurrent requests per second while providing extremely low latencies for applications that are sensitive to latency.

The Network Load Balancer allows the back end to see the client’s IP address by preserving the client-side source IP.

Network Load Balancer also provides static IP support per subnet.

To provide a fixed IP, Network Load Balancer also gives the option to assign an Elastic IP per subnet.

Other AWS services, such as Auto Scaling, Elastic Container Service (ECS), CloudFormation, Elastic Beanstalk, and CloudWatch, can be integrated with Network Load Balancer.

To communicate with other VPCs, Network Load Balancers can be used with AWS PrivateLink. AWS PrivateLink offers secure and private access between on-premises networks, AWS services, and VPCs.

Network load balancing offers several key advantages:

Improved Scalability: By distributing incoming traffic across multiple servers, network load balancing ensures that your system can handle increasing demands without compromising performance.

Enhanced Redundancy: Network load balancing introduces redundancy into your network infrastructure. If one server fails or experiences a high load, the load balancer automatically redirects traffic to the healthy servers, eliminating downtime.

Minimized Latency: Network load balancing helps minimize latency by dynamically directing requests to the server with the lowest latency or optimal proximity.

How to Create a Network Load Balancer?

To create a Network Load Balancer, log in to the management console, type EC2 in the search bar, and select EC2 under Services. On the EC2 console, under Load Balancing, select Load Balancers, click Create load balancer, and choose Network Load Balancer.
Fill in your load balancer details: under Name, give it a name; leave the scheme as internet-facing and the IP address type as IPv4; then scroll down to the networking section.

Select your VPC, then under Mappings, select the Availability Zones, making sure to choose the AZs where your EC2 instance targets will reside. Under Security, select the security group for your load balancer, then scroll down.

Under Listeners, we will keep TCP on port 80. Then, for the default action, click Create target group. (You can also create the target group beforehand.)
In the target group console, under Target types, we will choose Instances, and for the name, call it NLB-Target. Leave it on TCP port 80, select your VPC, then scroll down and click Next.
Under Register targets, select your instances. I had already created two instances for this demo, so I will select them. Then click Include as pending below and click Create target group.
Come back to the Network Load Balancer page and select your target group; it will now show up.
Scroll down to review the summary, then click Create load balancer.
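
If you prefer to script these steps, here is a minimal boto3 sketch of the same workflow; the subnet, VPC, and instance IDs are hypothetical placeholders for the resources in the walkthrough above:

```python
# The console steps expressed programmatically: create the NLB, the target
# group, register two instances, and attach a TCP:80 listener.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

nlb = elbv2.create_load_balancer(
    Name="demo-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

tg = elbv2.create_target_group(
    Name="NLB-Target",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```
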
This is how we create a network load balancer. This brings us to the end of this blog. Make sure to clean up.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


How To Configure Serverless Computing

Serverless computing has emerged as a revolutionary paradigm in the world of cloud computing, transforming the way developers build and deploy applications. Unlike traditional server-centric models, serverless computing abstracts away infrastructure management, allowing developers to focus solely on writing code and delivering value to end-users.

What is serverless computing?

Serverless computing is a cloud computing model where the cloud provider manages the underlying infrastructure required to run an application.

Comparison with traditional architecture

Traditional architecture and serverless computing represent two different approaches to building and deploying applications. Here are some differences between the two:

Infrastructure management: In traditional architecture, developers manage the underlying infrastructure, such as servers, storage, and networking. In serverless computing, the cloud provider manages the infrastructure, allowing developers to focus on writing code.

Scaling: In traditional architecture, scaling is typically achieved by adding more servers or resources as needed. In serverless computing, the cloud provider automatically scales the resources needed to handle the workload.
Cost: Traditional architecture can be expensive, as it requires the purchase and management of hardware and software. Serverless computing, on the other hand, is typically billed on a usage basis, which can be more cost-effective for variable workloads.
Cold starts: In serverless computing, functions may experience a cold start when they are invoked for the first time or after a period of inactivity. This can lead to longer response times, whereas in traditional architecture, the infrastructure is typically always running and ready to respond to requests.
Control: With traditional architecture, developers have full control over the infrastructure and can customize it to meet their specific needs. With serverless computing, the cloud provider manages the infrastructure and developers have less control over the environment.

Benefits of serverless computing

Serverless computing offers the following benefits, to mention just a few.
Cost savings: Serverless applications are cost-effective because developers only pay for the resources used during the function’s execution, rather than paying for the entire infrastructure.
Scalability: Serverless applications can automatically scale up or down based on demand, ensuring that the application can handle sudden spikes in traffic or other events.
Reduced operational complexity: Serverless computing eliminates the need for developers to manage infrastructure and server-side resources, reducing operational complexity and allowing developers to focus on writing code.
Improved fault tolerance and availability: The cloud provider manages the infrastructure required to run the application, including monitoring, scaling, and failover, ensuring that the application is always available and can handle sudden spikes in traffic or other events. This provides a high level of fault tolerance and availability.

Best practices

Here are some best practices for developing serverless applications:
Function design: Design functions to be small, stateless, and focused on a single task. This will help ensure that they can be easily tested, deployed, and scaled independently (see the sketch after this list).
Use event-driven architectures: Use events to trigger functions in response to changes in the system. This can help reduce the cost of running your application, as functions only execute when needed.
Minimize cold starts: Cold starts occur when a function is invoked for the first time or when it has been idle for a while. These can lead to longer response times for users. To minimize cold starts, consider using provisioned concurrency or keeping functions warm by periodically invoking them.
Optimize resource usage: Because serverless applications are charged based on usage, it’s important to optimize resource usage to reduce costs. Consider using a CDN to cache static content, and use serverless databases to minimize the amount of server resources needed.
Use security best practices: Serverless applications are still vulnerable to security threats, so use security best practices such as encrypting sensitive data, limiting access to resources, and regularly patching software.
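
To make the function-design guidance concrete, here is a minimal sketch of a small, stateless, single-task Lambda handler. The event shape and field names are hypothetical:

```python
# A minimal Lambda handler: stateless, one job, easy to test in isolation.
# The "order" event shape below is a hypothetical example.
import json


def lambda_handler(event, context):
    """Validate a single order record passed in the triggering event."""
    order = event.get("order", {})
    if not order.get("id") or order.get("total", 0) <= 0:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid order"})}
    # Do exactly one job, then hand off; persistence belongs to another service.
    return {"statusCode": 200, "body": json.dumps({"accepted": order["id"]})}
```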

When to use serverless applications?

Serverless computing can be a good choice for certain types of applications. Here are some scenarios where serverless computing may be a good fit:
Event-driven workloads: Serverless computing is well-suited for event-driven workloads that are triggered by events, such as HTTP requests, changes to a database, or messages from a queue.
Variable workloads: Serverless computing is also well-suited for workloads that have variable demand, as the cloud provider can automatically scale the resources needed to handle the workload. This can reduce the cost of running the application during periods of low demand.
Rapid development: Serverless computing can be a good choice for applications that require rapid development and deployment. By removing the need to manage infrastructure, developers can focus on writing code and deploying features quickly.
Data processing: Serverless computing can also be a good choice for data processing workloads that can be broken down into smaller, independent tasks. This can help reduce the cost and complexity of managing the infrastructure needed to process large amounts of data.

Conclusion:

Serverless computing on AWS marks a paradigm shift, empowering developers to focus on creating innovative applications without the burden of managing infrastructure.
Stay tuned for more.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.
Thank you!


How to Configure VPC Endpoints to Enhance Security and Efficiency in AWS Cloud Networking

What is a VPC Endpoint?

Many AWS customers run their applications within a VPC for security or isolation reasons.

For example, previously, if you wanted your EC2 instances in your VPC to be able to access DynamoDB, you had two options.

You could use an Internet Gateway (with a NAT Gateway or assigning your instances public IPs)

You could route all of your traffic to your local infrastructure via VPN or AWS Direct Connect and then back to DynamoDB.

Both of these solutions had security and throughput implications, and it could be difficult to configure NACLs or security groups to restrict access to just DynamoDB.

A VPC endpoint is a feature of Amazon VPC that lets customers privately connect to supported AWS services and to VPC endpoint services powered by AWS PrivateLink.

VPC endpoints are virtual devices: horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and supported services without imposing availability risks or bandwidth constraints on your network traffic.

By using VPC endpoints, your Amazon VPC instances do not require public IP addresses to communicate with the resources of the service, and the network traffic between your VPC and the AWS service does not leave the Amazon network, which is exactly our requirement.

In other words, VPC endpoints enable you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT instance, VPN connection, or Direct Connect connection.

Types of VPC Endpoint

There are two types of VPC endpoints:

1. Gateway Endpoints

2. Interface Endpoints

Gateway Endpoints

A VPC Gateway Endpoint is a way to connect your VPC to an AWS service like S3 or DynamoDB without going through the public internet or needing to set up a VPN connection. This helps improve security and can also improve network performance since the traffic stays within the AWS network.

So, if we want to use S3 or DynamoDB from inside a VPC, a gateway endpoint is recommended over an internet gateway, NAT, or any other option, as it also improves security and latency for application traffic.

Interface Endpoints

Interface endpoints enable connectivity to services over AWS PrivateLink. These services include some AWS managed services, services hosted by other AWS customers and partners in their own Amazon VPCs (referred to as endpoint services), and supported AWS Marketplace partner services. The owner of a service is the service provider; the principal creating the interface endpoint and using that service is the service consumer.

A Case Study — Connecting to S3 via VPC Gateway Endpoint

The following scenario connects to an AWS S3 bucket from an EC2 instance (within a public subnet) via a VPC endpoint.

We are going to create a VPC gateway endpoint. We will use policies to see how we can control traffic access to the AWS S3 bucket. According to our reference architecture, we have a VPC, a public subnet, and an instance running.

When we create our endpoint, a route is added to our route table with a destination that resolves to the IP address ranges of Amazon S3. Because this route is more specific than other routes, such as the default route to the internet, any connections to the S3 endpoints are routed through the gateway endpoint and never use the public internet.

Now let’s head over and create the VPC endpoint.

Log into the management console with admin user privileges, and in the search box, type VPC, then select VPC under Services.

In the VPC dashboard on the left side of the navigation pane select Endpoints.

On the Endpoint dashboard, click Create Endpoint.

In the create endpoint dashboard, we will enter the settings to create the endpoint.

Under the name tag, give your endpoint a name, and call it Demogatewayendpoint.

Then, under the service category, leave it on AWS services, and scroll a bit down to the Services section.

Under Services, type S3 in the search box, then select the radio button with the Gateway endpoint type. Scroll a bit down to the VPC section.

In the VPC section, we will select the VPC in which to create this endpoint. I have a VPC created already; if you need to create one, refer to our blog post on creating a custom VPC from scratch.

So, under VPC, I will select the drop-down button and select prod-VPC. The endpoint enables our resources to reach the service over AWS’s private network rather than the public internet.

In the Route table section, I will select the public route table. Then, under Policy, I will select Full access.

These are the only settings we need to create our Endpoint, scroll down and click Create Endpoint.

And there we go; our endpoint has been successfully created.

When you create an endpoint, an endpoint route is added to the route table you selected during creation. Let’s verify this: on the left side of the VPC dashboard, click Route tables.

On the route table dashboard, select your public route table, then move down to the Routes tab, where you will see the VPC endpoint route added to this route table.
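
If you prefer automation, the same gateway endpoint can be created programmatically. A minimal boto3 sketch follows; the VPC and route table IDs are hypothetical placeholders, and the service name follows the regional com.amazonaws.<region>.s3 convention:

```python
# The console steps above expressed as a boto3 call. Creating the endpoint
# also adds the endpoint route to the route table automatically.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # prod-VPC in this demo
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # the public route table
    TagSpecifications=[
        {
            "ResourceType": "vpc-endpoint",
            "Tags": [{"Key": "Name", "Value": "Demogatewayendpoint"}],
        }
    ],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```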

With all this done, we will now move to the verification section and test our endpoint. We will do this with an EC2 instance; I have already created one. If you don’t have one, create it and then proceed.

Next, we will create an IAM role to allow the EC2 instance to access the S3 bucket. Straight away, let’s proceed to the IAM dashboard, select Roles on the left side of the navigation pane, and click Create role.

For the trusted entity, select AWS service; then, under Use case, select EC2 from the drop-down and click Next.

In the Add permissions dashboard, type S3 in the search box, tick the box for AmazonS3FullAccess, then click Next.

Review the settings, then click Create role.

Next, we will attach this role to our EC2 instance. So, let’s go to the EC2 instance console.

I have one instance running.

Select it, then choose the Actions drop-down button, move down to Security, and select Modify IAM role.

In the Modify IAM role dashboard, select your S3 full-access role, then click Update IAM role.

Next, let’s head to the S3 console. I had already created a bucket and uploaded files to it, so go ahead and create a bucket for yourself.

Now let’s SSH into our EC2 instance and try to list the buckets to see whether we can get access via the endpoint. Success: as we can see, we can list all the buckets.

Again, if I try to list the contents of a bucket, which I call my demo bucket, we get success.

To verify that we were accessing the bucket via the endpoint, let’s go back to the endpoint and modify its permissions to deny. Go back to your endpoint, select it, move to the Policy tab, change the policy from allow to deny, and then click Save.

Now come back to your terminal and try to list the contents of your buckets; you will get access denied.

This demonstrates that we were accessing our bucket and its contents only via the endpoint.
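
For reference, the policy flip used in this test can also be scripted. Here is a hedged sketch that replaces the endpoint’s full-access policy with a deny-all document; the endpoint ID is hypothetical:

```python
# A sketch of swapping the endpoint policy from full access to deny-all,
# matching the console test above. The endpoint ID is a placeholder.
import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

deny_all = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Principal": "*", "Action": "*", "Resource": "*"}],
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(deny_all),
)
```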

This brings us to the end of this demo. Clean up to avoid surprise bills.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!



Exploring the Power of AWS Direct Connect: Bridging the Gap Between On-Premises and the Cloud.

How To Configure AWS Direct Connect.

AWS Direct Connect is a high-speed, low-latency connection that allows you to access public and private AWS Cloud services from your local (on-premises) infrastructure. 

The connection is enabled via dedicated lines and bypasses the public Internet to help reduce network unpredictability and congestion. This can give you significant advantages in terms of bandwidth as well as latency, and it ensures a consistent network experience. It costs more than establishing an AWS-managed VPN across the Internet.

Let’s look at the configuration of Direct Connect.

According to our reference architecture above, we have an AWS Region with a VPC, and we’ve got a corporate data center, which could be an office. What we want to do is connect that corporate data center to AWS.

To do that, we connect via what is called an AWS Direct Connect location, which can be found in many cities around the world.

In the Direct Connect location, there is something called an AWS cage, which is where AWS has its networking equipment.

Then there is a customer or partner cage. Customer means that you own a rack in that data center with your own networking equipment.

A partner is an APN partner of AWS: they have their own cage of networking equipment, you get a connection into their cage, and they handle the connectivity into AWS.

The way it works is that there are routers in these cages in the Direct Connect location. A DX port (Direct Connect port) must then be allocated in the Direct Connect location; this is the port into which the cross-connect is plugged.

The cross-connect is a cable that runs from the customer or partner cage to the DX port that has been allocated in the AWS cage.

We then have a customer router in the corporate data center, and we need to connect that to the DX router in the Direct Connect location. That is how we complete the end-to-end connection.

So, AWS has its connection from its cage into the AWS Region, but you must connect from your corporate data center to the customer or partner cage.

This is where some expense and some challenges may arise: you need a connection from your data center to the cage location, and if you don’t have a pre-existing link, that connection can cost a fair amount of money and take quite a bit of time to provision.

It’s not something you can set up very quickly; Direct Connect typically takes weeks to months to provision.

The actual DX connection is a physical fiber connection to AWS and runs at either 1 Gbps or 10 Gbps.

Key Benefits of Direct Connect.

Private connectivity between AWS and your data center or office.

A consistent network experience, meaning increased speed and lower, more predictable latency.

It can lower the cost for organizations to transfer large volumes of Data.

Let’s look at a bit more detail.

Once we have established a physical connection, we then have to establish something called a virtual interface (VIF), as shown above in our reference architecture.

There are public and private virtual interfaces.

A private virtual interface connects to a single VPC in the same AWS Region using a virtual private gateway (VGW).

A VIF is essentially a virtual interface that uses an 802.1Q VLAN and a BGP session.

NOTE: 802.1Q VLAN (Virtual Local Area Network) and BGP (Border Gateway Protocol) are two distinct networking concepts.

The next virtual interface type is public, which connects you to AWS public services in any Region. So, a private VIF goes into the same Region as the physical connection, but a public VIF can connect to a public service in any Region. It doesn’t connect you to the Internet, so you can’t use it for general internet access.
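
For illustration, a private VIF can be provisioned on an existing connection with boto3. A minimal sketch follows; the connection ID, VLAN tag, BGP ASN, and virtual private gateway ID are hypothetical placeholders:

```python
# A hedged sketch of creating a private VIF on a Direct Connect connection.
# All IDs and numbers below are placeholders, not a working configuration.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

vif = dx.create_private_virtual_interface(
    connectionId="dxcon-abc123ef",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "corp-dc-private-vif",
        "vlan": 101,                # the 802.1Q VLAN tag for this VIF
        "asn": 65000,               # your BGP ASN for the session
        "virtualGatewayId": "vgw-0123456789abcdef0",
    },
)
print(vif["virtualInterfaceState"])
```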

What if you have multiple VPCs in the Region?

In this case, you will have multiple VGWs and multiple private VIFs to connect to those VGWs. So, you are connecting to multiple VPCs, each over a separate private VIF.

A VIF can also be shared with other AWS accounts; when you do that, it is known as a hosted VIF.

You can get sub-1 Gbps speeds, such as 50 Mbps or 100 Mbps, if you use an APN partner. An APN partner already has a connection to AWS, and essentially they are giving you a subset of that connection.

This can be implemented either via a hosted VIF or a hosted connection.

A hosted VIF is a single VIF that is shared with other customers; in that case, the bandwidth is shared.

Alternatively, you can use a hosted connection, which is a DX connection with a single VIF dedicated to you. There is obviously a cost implication to this, but if you go with the cheaper option, the hosted VIF, make sure you understand that the bandwidth is shared.

NOTE: DX connections are not encrypted hence this could be a security risk.

So, the question is what can we do to make sure our data is encrypted?

We can run an IPsec site-to-site (S2S) VPN over a VIF to add encryption in transit. Essentially, you run a VPN on top of Direct Connect, encrypting the traffic that traverses the Direct Connect link with an AWS-managed VPN connection.

Link aggregation groups (LAGs) can be used to combine multiple physical connections into a single logical connection using LACP (the Link Aggregation Control Protocol). This is just for improved speed; it does not give you high availability.

DX design for high availability.

On the subject of high availability, let’s look at how you can design DX to build high availability into these connections. When we look at our first architectural diagram, we can identify many single points of failure. For high resiliency on critical workloads, use two single connections terminating in more than one location. The design below helps protect the connection against device, connectivity, and complete location failure.

This is a considerably more expensive setup than one with very low levels of redundancy. As you add redundancy, you always add cost, but you have to consider your business and the impact of an outage.

Another way to add redundancy is, rather than adding a second Direct Connect connection, to add an IPsec VPN, as shown below.

In this architecture, the DX connection is the primary path, and a VGW is connected back to the corporate data center using a site-to-site VPN, which is the backup path. We give priority to the DX link, but if the DX link fails for some reason, we have a backup path over the internet. Since the internet has disadvantages, such as lower bandwidth and considerably higher latency, this may have an impact on your business. So, consider the option that is best for your business; this last option can be significantly cheaper than a fully redundant DX configuration.

A recommendation from AWS is that you should not use this architecture if your speeds are over 1 Gbps: if you need more than 1 Gbps of bandwidth, don’t use this setup, because of limitations in AWS VPN.

This brings us to the end of this blog. Thanks for your attention.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at info@accendnetworks.com.

Thank you!