
Unlocking the Power of AWS EBS Volumes: A Comprehensive Introduction.

Amazon Elastic Block Store (EBS) is a popular cloud-based block storage service offered by Amazon Web Services (AWS).

Designed for mission-critical systems, EBS stores data in blocks and scales easily to petabytes of data.

What Is EBS?

Elastic Block Store (EBS) is a block storage service in the AWS cloud. EBS stores large amounts of data in blocks, which work like hard drives (called volumes). You can use it to store any type of data, including file systems, transactional data, NoSQL and relational databases, backup instances, containers, and applications.

EBS volumes are virtual disk drives that can be attached to Amazon EC2 instances, providing durable block-level storage.

What Is an EBS Volume?

An EBS volume is a network drive (not a physical drive) that you can attach to EC2 instances while they run. It behaves like a hard drive, but it is attached to one EC2 instance at a time.

Because the volume is a network drive, communication between the EC2 instance and the EBS volume goes over the network.

Because they are network drives, EBS volumes can be detached from one EC2 instance and attached to another quickly.
They allow EC2 instances to persist data, meaning the data continues to exist even after the instance is terminated.
EBS volumes can be mounted to one instance at a time (at the Certified Cloud Practitioner level of detail).
EBS volumes are bound to a specific Availability Zone: an EBS volume in us-east-1a cannot be attached to an instance in us-east-1b. However, by taking a snapshot we can move an EBS volume across Availability Zones (see the CLI sketch below).
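As a quick illustration, the detach/attach and snapshot workflow can also be performed from the AWS CLI. This is only a sketch; the volume, instance, and snapshot IDs below are placeholders you would replace with your own.

# Detach a volume from its current instance, then attach it to another instance in the same AZ.
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf

# To "move" a volume to another AZ, snapshot it and create a new volume from the snapshot.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "move to us-east-1b"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1b --volume-type gp2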

Common use cases for EBS volumes:

Frequent updates — storage for data that needs frequent updates, for example database applications and instances’ system drives.

Throughput-intensive applications — that need to perform continuous disk scans.

EC2 instances — once you attach an EBS volume to an EC2 instance, the EBS volume serves the function of a physical hard drive.

Types of EBS Volumes

The performance and pricing of your EBS storage will be determined by the type of volumes you choose. Amazon EBS offers four types of volumes, which serve different functions.

Solid State Drives (SSD)-based volumes

General Purpose SSD (gp2) — the default EBS volume type, which balances price and performance. Recommended for low-latency interactive apps and for dev and test workloads.

Provisioned IOPS SSD (io1) — configured to provide high performance for mission-critical applications. Ideal for NoSQL databases, I/O-intensive relational loads, and application workloads.

What is IOPS?

IOPS, which stands for Input/Output Operations Per Second, is a measure of the performance or speed of an EBS (Elastic Block Store) volume in Amazon Web Services (AWS). In simple terms, it represents how quickly data can be read from or written to the volume.

Think of IOPS as the number of read and write operations the EBS volume can complete each second. The higher the IOPS, the more operations it can service, resulting in faster data transfers. IOPS is particularly important for applications that require a lot of data access, such as databases or applications that deal with large amounts of data.
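For example, with a Provisioned IOPS volume you declare the IOPS you need when the volume is created. A minimal CLI sketch (the size and IOPS values are only examples):

# Create a 100 GiB io1 volume provisioned with 4,000 IOPS in us-east-1a.
aws ec2 create-volume \
    --volume-type io1 \
    --iops 4000 \
    --size 100 \
    --availability-zone us-east-1a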

Hard Disk Drives (HDD)-based volumes

Throughput Optimized HDD (st1) — provides low-cost magnetic storage. Recommended for large, sequential workloads where performance is defined by throughput.

Cold HDD (sc1) — uses a burst model to adjust capacity, making it the cheapest magnetic storage. Ideal for large, sequential workloads that are accessed infrequently (cold data).

The Beginner’s Guide to Creating EBS Volumes

Prerequisite: an AWS account.

If you don’t have an AWS account, you can follow the steps explained here.

How to Create a New (Empty) EBS Volume via the Amazon EC2 Console

Go to the Amazon EC2 console.

Locate the navigation bar, then select a Region. Region selection is critical: an EBS volume is restricted to its Availability Zone (AZ), which means you won’t be able to move the volume or attach it to an instance in another AZ. Additionally, each Region is priced differently, so choose your Region carefully before creating the volume.

In the console, type EC2 in the search box and select EC2 under Services.

In the EC2 dashboard, on the left side under Elastic Block Store, select Volumes, then click Create volume.

Choose the volume type. If you know which volume type you need, this is where you can choose it. If you’re not sure, or you’re just experimenting, go with the default option (which is set to gp2).

Under Availability Zone, select the dropdown and choose your Availability Zone. Keep in mind that you can attach EBS volumes only to EC2 instances located in the same AZ. We will proceed with us-east-1a.

EBS volumes are not encrypted automatically. If you want to encrypt your volume, now is the time.

For default EBS encryption, tick the Encrypt this volume box, then choose the default CMK for EBS encryption. This type of encryption is offered at no additional cost.

For customized encryption, choose Encrypt this volume, then choose a different CMK under Master Key. Note that using your own key is a paid service and you’ll incur additional costs.

Tag your volume. This is optional, and you’ll be able to create your EBS volume without tagging it. We will leave this section empty.

Choose Create Volume.

Success! You now have a new, empty EBS volume. You can now use it to store data or attach it to an EC2 instance.
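The console steps above can also be scripted. Here is a rough CLI equivalent, assuming the default gp2 type, default EBS encryption key, and an example size and tag:

# Create a 10 GiB encrypted gp2 volume in us-east-1a with a Name tag.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 10 \
    --volume-type gp2 \
    --encrypted \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=demo-ebs-volume}]'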

 

Conclusion:

Amazon EBS volumes are a fundamental component of the AWS ecosystem, providing scalable and durable block storage for a wide range of applications. By understanding the features, use cases, and best practices associated with EBS volumes, users can make informed decisions to meet their specific storage needs in the AWS cloud environment.

Pull everything down (delete the resources you created) and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


Understanding AWS Security Groups and Bootstrap Scripts: Enhancing Cloud Security and Automation.

In the realm of AWS, achieving a balance between robust security and streamlined automation is essential for efficient cloud management. AWS Security Groups and Bootstrap Scripts play pivotal roles in this endeavor. In this article, we’ll delve into these two AWS components and provide a hands-on demo to illustrate how to leverage them effectively.

AWS Security Groups:

What are AWS Security Groups?

AWS Security Groups are a fundamental element of AWS’s network security model. They act as virtual firewalls for your instances to control inbound and outbound traffic. Security Groups are associated with AWS resources like EC2 instances and RDS databases and allow you to define inbound and outbound rules that control traffic to and from these resources.

Key Features of AWS Security Groups:

Stateful: Security Groups are stateful, meaning if you allow inbound traffic from a specific IP address, the corresponding outbound traffic is automatically allowed. This simplifies rule management.

Default Deny: By default, Security Groups deny all inbound traffic. You must explicitly define rules to allow traffic.

Dynamic Updates: You can modify Security Group rules anytime to adapt to changing security requirements.

Use Cases:

Web Servers: Security Groups are used to permit HTTP (port 80) and HTTPS (port 443) traffic for web servers while denying other unwanted traffic.

Database Servers: For database servers, Security Groups can be configured to only allow connections from known application servers while blocking access from the public internet.

Bastion Hosts: In a secure architecture, a bastion host’s Security Group can be set up to allow SSH (port 22) access only for specific administrators.

Demo: Creating a security group to open port 22 for SSH, port 80 for HTTP, and port 443 for HTTPS.

To create the security group, click on the link below and follow our previous demo on security groups.

https://accendnetworks.com/comprehensive-guide-to-creating-and-managing-security-groups-for-your-amazon-ec2-instances/
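For reference, a similar security group can be created from the CLI. This is only a sketch: the VPC ID is a placeholder, and 0.0.0.0/0 opens the ports to the whole internet, which you may want to narrow for SSH.

# Create a security group and capture its ID.
SG_ID=$(aws ec2 create-security-group --group-name web-traffic-sg \
    --description "Allow SSH, HTTP and HTTPS" \
    --vpc-id vpc-0123456789abcdef0 \
    --query GroupId --output text)

# Open ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) for inbound traffic.
for PORT in 22 80 443; do
  aws ec2 authorize-security-group-ingress \
      --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr 0.0.0.0/0
done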

Bootstrap Scripts

What are Bootstrap Scripts?

A bootstrap script, often referred to as user data or initialization script, is a piece of code or script that is executed when an EC2 instance is launched for the first time. This script automates the setup and configuration of the instance, making it ready for use. Bootstrap scripts are highly customizable and allow you to install software, configure settings, and perform various tasks during instance initialization.

Key Features of Bootstrap Scripts:

Automation: Bootstrap scripts automate the instance setup and configuration process, reducing manual intervention and potential errors.

Flexibility: You have full control over the contents and execution of the script, making it adaptable to your specific use case.

Idempotent: Bootstrap scripts can be designed to be idempotent, meaning they can be run multiple times without causing adverse effects.

Use Cases:

Software Installation: You can use bootstrap scripts to install and configure specific software packages on an instance.

Configuration: Configure instance settings, such as setting environment variables or customizing application parameters.

Automated Tasks: Run scripts for backups, log rotation, and other routine maintenance tasks.

Combining Security Groups and Bootstrap Scripts

The synergy between Security Groups and Bootstrap Scripts offers a robust approach to enhancing both security and automation in your AWS environment.

Security Controls: Security Groups ensure that only authorized traffic is allowed to and from your EC2 instances. Bootstrap scripts can automate the process of ensuring that your instances are configured securely from the moment they launch.

Dynamic Updates: In response to changing security needs, Bootstrap Scripts can automatically update instance configurations.

Demo: Bootstrapping an AWS EC2 instance to update packages and install and start the Apache HTTP server (HTTP is on port 80).

Sign in to your AWS Management Console, type EC2 in the search box, then select EC2 under Services.

In the EC2 dashboard, select Instances, then click Launch instances.

In the launch instance dashboard, under Name and tags, give your instance a name; call it bootstrap-demo-server.

Under application and OS images, select the QuickStart tab then select Amazon Linux. Under Amazon Machine Image (AMI), select the drop-down button and select Amazon Linux 2 AMI. Scroll down.

Under Instance type, make sure it is t2.micro because it is free-tier eligible. Under Key pair (login), select the dropdown and choose your key pair. Scroll down.

We will leave all the other options as default. Move all the way to the Advanced details section, then click the drop-down button to expand it.

Move all the way down to the User data section, then copy and paste this script there.

#!/bin/bash
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd

Tip: if you need to go back and modify the User Data after launch, stop the instance, then click Actions > Instance settings > Edit user data.

Under the instance summary, review your settings and click Launch instance.

Click on the instance ID, then wait for the status checks to complete. Copy the Public IPv4 DNS and paste it into your browser.

The page fails to load because HTTP port 80 is not open. We will go back to our instance, modify its security group, and open port 80.

Select your instance, click the Actions dropdown, move to Security, then click Change security groups.

Under Associated security groups, use the search box to find the web traffic security group (which opens ports 80 and 443), select it, then click Save.

Now go back to your instance, copy its public DNS, and paste it into your browser.

Congratulations, we can now access our HTTP web traffic on port 80. This clearly shows how security groups can allow or deny internet traffic.

Again, remember that this instance was bootstrapped to install Apache at launch.
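For completeness, the same bootstrapped launch can be done from the CLI by passing the script as user data. The AMI ID, key pair, and security group ID below are placeholders, and bootstrap.sh is assumed to contain the script shown earlier.

# Launch a t2.micro instance with the Apache bootstrap script as user data.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --user-data file://bootstrap.sh \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=bootstrap-demo-server}]'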

Pull everything down (terminate the instance) and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com

Thank you!


Leveraging AWS IAM Roles for Secure and Efficient Cloud Resource Management.

With millions of active users accessing AWS, the AWS Identity and Access Management (IAM) service sits at the helm of security by governing who is authenticated (signed in) and who is authorized (has permissions) to use AWS resources.

What are AWS IAM Roles?

IAM roles are entities that grant access to AWS services based on the permissions they carry, which makes them similar to IAM users. However, roles do not have passwords or access keys associated with them. Instead, a role provides temporary security credentials to whoever is allowed to assume it. Roles eliminate the overhead of managing users and their long-lived credentials by generating temporary credentials whenever required.

Any of these entities can assume a role to use its permissions:

  • AWS user from the same account
  • AWS user from a different account
  • AWS service
  • Federated identity

Structure of an IAM role

There are two essential aspects to an IAM role:

    1. Trust policy: who can assume the IAM role.
    2. Permission policy: what the role allows once you assume it.

Trust policy

These are policies that define and control which principals can assume the role based on the specified conditions. Trust policies are used to prevent the misuse of IAM roles by unauthorized or unintended entities.

Permission policy

These policies define what the principals who assume the role are allowed to do once they assume it.

Creating IAM Role

Pre-requisite: An AWS account with an IAM admin user.

In this tutorial, we will create a custom IAM role with the trusted entity as the AWS user and the permission policy to allow admin access to AWS EC2.

Log into the Management Console and type IAM in the search box, then select IAM under Services.

In the IAM dashboard, on the left side of the navigation pane under Access Management, select Roles.

You might notice some pre-created roles in your account on the Roles panel; ignore them.

To create a new role, click on the “Create role” button.

As previously stated, a role has two core aspects, the trust policy and the permission policy. So, the first thing that we have to specify is who can assume this role.

We will begin by selecting the “Custom trust policy” option.

Upon selection of the “Custom trust policy”, AWS automatically generates a JSON policy with an empty principal field. The principal field specifies who can assume this role. If we keep it empty then this policy cannot be assumed by any principal.

We will add the Amazon Resource Name (ARN) of the AWS IAM user who should be allowed to assume this role. The ARN can be obtained from the user details page in the IAM dashboard.

Copy the ARN and paste it as a key-value pair, with the key being “AWS” as shown below.
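Since the screenshot is not reproduced here, this is roughly what the resulting trust policy looks like, together with the CLI calls that mirror the console steps that follow. The account ID is a placeholder; the user name matches the one used later in this demo.

# Write the trust policy: only the named IAM user may assume the role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/Edmond" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the EC2 full-access managed policy.
aws iam create-role --role-name AWSIAMroleforEC2 \
    --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name AWSIAMroleforEC2 \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess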

Once you have reviewed the trust policy, click on the next button to move on to the next page.

The next step is to choose a permission policy for this role. We can either use an existing policy or create a new one. We will choose an existing managed policy by the name AmazonEC2FullAccess to grant our role full access to the AWS EC2 service.

Remember that AWS denies everything by default. We are only granting the EC2 access to this role by attaching this policy. Leave all the other settings unchanged and click on next.

We have already taken care of the two essential aspects of a role. All that is left is to give the role a name and a description.

In the role details, under Role name, give your role a name; call it AWSIAMroleforEC2.

Under Description, enter something like “Provides full access to EC2.” Then review all the details and click on the Create role button.

Congratulations! You have successfully created your first role. Click on view role.

Security learning: The user ARN in the trust policy is transformed into a unique principal ID when the policy is saved to prevent escalation of privilege by removing and recreating a different user with the same name. As a result, if we delete and recreate a user, the new user’s unique ID is different, rendering the trust policy ineffective.

Assuming IAM Roles

There are multiple ways to assume IAM roles. We can assume a role using the console, the CLI, or even the SDK.
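For example, assuming the role from the CLI goes through AWS STS; the account ID in the role ARN below is a placeholder.

# Request temporary credentials for the role.
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/AWSIAMroleforEC2 \
    --role-session-name ec2-admin-session

# The response contains an AccessKeyId, SecretAccessKey, and SessionToken that
# can be exported as environment variables to act as the role.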

Switching to a role through the console

Log in as an IAM user allowed to assume that role.

We are already logged in as the user Edmond.

To switch to a different role, select the drop-down user information panel in the top right corner. When we click on it, the “Switch role” button appears next to the “Sign out” button.

Upon clicking on the “Switch role” button, we are greeted by a form requesting information about the role we wish to switch to.
 If you are switching your role for the first time you would be greeted by a different screen explaining the details about switching a role. Click on the “Switch role” button.

Fill in the role details with the role we created earlier and click on the “Switch role” button.

Upon switching to the IAM role, we arrive at the IAM dashboard again with the principal as the AWSIAMRoleforEC2@123456777

Because this role only grants EC2 access, if we try creating an S3 bucket we will get Access Denied.

Congratulations! You have successfully assumed a role.

Pull everything down (delete the role if you no longer need it) and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


Turbocharge Your Amazon S3: Performance Enhancement Guidelines.

INTRODUCTION

Amazon S3, Amazon Web Services’ versatile storage service, is the backbone of modern data management. In this blog, we present a concise set of guidelines and strategies to boost your Amazon S3 storage performance.

AWS-recommended best practices to optimize performance:

Using prefixes

A prefix is nothing but an object path. When we create a new S3 bucket, we define the bucket name; within the object’s path (or URL) we can then have directories, for example a directory dirA and a subdirectory subB, followed last by the object name, such as documentA.pdf:

bucketName/dirA/subB/documentA.pdf

The S3 prefix is just the folder portion of the path inside the bucket. So, in the example above, the S3 prefix is /dirA/subB.

How can the S3 prefix give us a better performance?

Amazon S3 has remarkably low latency: you can get the first byte out of S3 within approximately 200 ms, and you can also achieve a high request rate. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket, and there is no limit on the number of prefixes. So, the more prefixes you use inside your buckets, the more aggregate performance you can get.

The essential number to look at is 5,500 GET requests per second per prefix. If we access objects in a specific S3 bucket with GET requests, we get up to 5,500 requests per second for each prefix. So, if we want better GET performance out of S3, we should spread our reads across multiple folders (prefixes).

If we spread reads across four different prefixes, we would get 5,500 requests per second times 4, giving us 22,000 requests per second, which is far better aggregate performance.

Optimize S3 performance on uploads, use multipart upload:

For files larger than 5 GB, multipart upload is mandatory, and it is recommended for files larger than 100 MB as well. What does a multipart upload do? It cuts a big file into pieces and uploads those pieces simultaneously; this parallel upload improves efficiency and speeds up transfers.
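When you use the high-level aws s3 commands, multipart upload is handled for you once a file crosses a size threshold, and both the threshold and part size can be tuned. A small sketch; the values shown are only examples:

# Use multipart uploads for files larger than 100 MB, with 16 MB parts uploaded in parallel.
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 16MB

# A large upload is now transferred as parallel parts behind the scenes.
aws s3 cp ./big-backup.tar.gz s3://my-bucket/backups/big-backup.tar.gz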

Implement Retries for Latency-Critical Applications:

Given Amazon S3’s vast scale, a retried request is likely to take an alternative path if the initial request is slow, leading to quicker success.

Utilize Both Amazon S3 and Amazon EC2 within the Same Region:

Access S3 buckets from Amazon EC2 instances in the same AWS Region whenever feasible. This reduces network latency and data transfer costs, optimizing performance.

S3 Transfer Acceleration:

Transfer Acceleration uses the globally distributed edge locations in CloudFront to accelerate data transport over geographical distances.

For the transfer to the edge location, data travels over the public internet; from the edge location to the S3 bucket, it travels over the fast AWS private network. This reduces the use of public networks and maximizes the use of the AWS private network to improve S3 performance.

To implement this, open the S3 console and select your bucket. Click on the Properties tab, then find the Transfer acceleration section. All you have to do is choose Enabled and save.
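The same setting can be applied from the CLI, and accelerated transfers then go through the dedicated accelerate endpoint. The bucket name is a placeholder:

# Enable Transfer Acceleration on the bucket.
aws s3api put-bucket-accelerate-configuration \
    --bucket my-bucket \
    --accelerate-configuration Status=Enabled

# Uploads can then be sent through the accelerate endpoint.
aws s3 cp ./data.zip s3://my-bucket/data.zip \
    --endpoint-url https://s3-accelerate.amazonaws.com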


Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) that transparently caches data from Amazon S3 in a large set of geographically distributed points of presence (PoPs).

When objects might be accessed from multiple Regions, or over the internet, CloudFront allows data to be cached close to the users that are accessing the objects. This results in high-performance delivery of Amazon S3 content.

Evaluate Performance Metrics:

Employ Amazon CloudWatch request metrics for Amazon S3, as they include a metric designed to capture 5xx status responses.

Leverage the advanced metrics section within Amazon S3 Storage Lens to access the count of 503 (Service Unavailable) errors.

By enabling server access logging for Amazon S3, you can filter and assess all incoming requests that trigger 503 (Service Unavailable) responses.

Keep AWS SDKs Up to Date: The AWS SDKs embody recommended performance practices. They offer a simplified API for Amazon S3 interaction, are regularly updated to adhere to the latest best practices, and incorporate features like automatic retries for HTTP 503 errors.

Limitations with S3 and KMS

Suppose we use the Key Management Service (KMS), Amazon’s encryption service, and have enabled SSE-KMS to encrypt and decrypt objects in S3. Remember that there are built-in limits on the KMS API: each upload calls the GenerateDataKey operation and each download calls the Decrypt operation, and both count against the KMS request quota. The quota is Region-specific, but it is typically around 5,500, 10,000, or 30,000 requests per second, so upload and download throughput depends on your KMS quota, and you cannot simply ask for a quota expansion of KMS.

If you need performance and encryption simultaneously, consider using the native S3-managed encryption that is built in (SSE-S3) rather than KMS.

If you are troubleshooting a performance issue while using KMS, it could be that you are reaching the KMS limit, which could be what is slowing down your uploads, downloads, or requests.

Stay tuned for more.

If you have any questions concerning this article, or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


Mastering Data Redundancy: Amazon S3 Replication Demystified.

In today’s data-driven world, safeguarding your digital assets is paramount. As the preferred choice for scalable and secure data storage, Amazon S3 has revolutionized the way we manage and protect our valuable information. However, the true power of data resilience lies in Amazon S3 Replication, an often-underutilized feature that holds the key to an unshakeable data strategy.

INTRODUCTION

Replication is a process of automatically copying objects between buckets in the same or different AWS Regions.
This copying happens when you create new objects or update existing ones. Amazon Simple Storage Service (S3) replication allows you to have an exact copy of your objects stored in other buckets.

PURPOSE OF S3 REPLICATION

Reduce Latency.
Enhance availability.
Disaster Recovery.
Copy Objects to cost-effective storage class.
Data redundancy.
Meet compliance requirements.

REPLICATION OPTIONS:

AMAZON S3 SAME REGION REPLICATION

S3 Same-Region Replication (SRR) automatically replicates objects from a source bucket to a destination bucket within the same Region (the buckets can be in the same or different AZs). It uses asynchronous replication, which means objects are not copied to the destination bucket the instant they are created or modified, but shortly afterwards.

AMAZON S3 CROSS-REGION REPLICATION

S3 Cross-Region Replication (CRR) automatically replicates objects from the source bucket to a destination bucket in a different Region. It minimizes latency for data access in other geographic Regions.

S3 BATCH REPLICATION:

With CRR and SRR, Amazon S3 protects your data by automatically replicating new objects that you upload. S3 Batch Replication, on the other hand, lets you replicate existing objects using S3 Batch Operations, which are managed jobs.

Let’s get Practical

We will show the steps for setting up S3 CRR.

Conditions for enabling Cross-Region Replication:

Versioning must be enabled on both the source and the destination bucket.
Amazon S3 must have permission to replicate objects on your behalf, which is granted through an IAM role.
For CRR, the source and destination buckets must be in different AWS Regions.

Setting up CRR:

Log in to the AWS console, type S3 in the search box, then select S3 under Services.
In the S3 console, select Buckets, then click Create bucket.
In the create bucket console we will name our bucket sourcedemobucket11 and keep it in the Asia Pacific (Mumbai) ap-south-1 Region. Also note that S3 bucket names need to be globally unique. Scroll down.
Under Bucket Versioning, versioning is disabled by default; enable it by checking the radio button. Then leave everything else as default, scroll down, and click Create bucket.
Now, following the same steps, create a destination bucket, destinationdemobucket12, with versioning enabled, but this time place it in us-east-1.
After versioning is enabled, leave everything else as default and click Create bucket.
Next, click on your source bucket, head over to the Management tab, then scroll down to Replication rules.
Now, click on “Create replication rule”.
Give your replication rule a name, such as “replicatedemo11”.
Choose the destination bucket as “destinationdemobucket12”.

Notice that you have an option to choose a destination bucket in another account.

To replicate objects from the source bucket to the destination bucket, you need an IAM role. Select the drop-down button under IAM role and click Create new role.
If you want your S3 objects to be replicated within 15 minutes, check the “Replication Time Control (RTC)” box. Note that you will be charged extra for this, so we will move forward without enabling it for now and click on Save.
As soon as you click on Save, a screen will pop up asking if you want to replicate existing objects in the S3 bucket. That would incur charges, so we will proceed without replicating existing objects and click on Submit.
After completing this setup, you can see a screen saying “Replication configuration successfully updated”.
Upload a test file to the source bucket, then head over to the destination bucket, destinationdemobucket12, to check whether the uploaded file has been replicated. You can see that the uploaded file has been successfully copied to the destination bucket; if you are not seeing it, just click the refresh button.
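For reference, the same replication setup can be done from the CLI. This is a sketch only; the role ARN and account ID are placeholders, while the bucket and rule names mirror the demo above.

# Versioning must be enabled on both buckets before replication can be configured.
aws s3api put-bucket-versioning --bucket sourcedemobucket11 \
    --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket destinationdemobucket12 \
    --versioning-configuration Status=Enabled

# Replication rule: copy every new object to the destination bucket.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-crr-demo-role",
  "Rules": [
    {
      "ID": "replicatedemo11",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::destinationdemobucket12" }
    }
  ]
}
EOF

aws s3api put-bucket-replication --bucket sourcedemobucket11 \
    --replication-configuration file://replication.json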
Note: pull everything down (delete the objects and buckets) when you are done to avoid charges.

Facts about CRR:

Replication is asynchronous, so there can be a short delay before objects appear in the destination bucket.
Only objects created or updated after the replication rule is enabled are replicated; existing objects require S3 Batch Replication.
Versioning must remain enabled on both the source and destination buckets.

Thanks for your attention and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!