

Leveraging AWS IAM Roles for Secure and Efficient Cloud Resource Management

With millions of active users accessing AWS, the AWS Identity and Access Management (IAM) service sits at the helm of security, governing who is authenticated (signed in) and who is authorized (has permissions) to use AWS resources.

What are AWS IAM Roles?

IAM roles are entities that provide access to AWS services based on the permissions attached to them, which makes them similar to IAM users. However, roles do not have passwords or access keys associated with them. Instead, a role provides temporary security credentials to whoever is allowed to assume it. Roles eliminate the overhead of managing users and their long-lived credentials by generating temporary credentials whenever they are required.

Any of these entities can assume a role to use its permissions:

  • AWS user from the same account
  • AWS user from a different account
  • AWS service
  • Federated identity

Structure of an IAM role

There are two essential aspects to an IAM role:

    1. Trust policy: Who can assume the IAM role
    2. Permission policy: What the role allows once you assume it.

Trust policy

These are policies that define and control which principals can assume the role based on the specified conditions. Trust policies are used to prevent the misuse of IAM roles by unauthorized or unintended entities.

Permission policy

These policies define what the principals who assume the role are allowed to do once they assume it.

Creating an IAM Role

Prerequisite: an AWS account with an IAM admin user.

In this tutorial, we will create a custom IAM role with an AWS IAM user as the trusted entity and a permission policy that grants full access to Amazon EC2.

Log in to the AWS Management Console, type IAM in the search box, then select IAM under Services.

In the IAM dashboard, in the navigation pane on the left under Access Management, select Roles.

You might notice some pre-created roles in your account on the Roles panel; ignore them.

To create a new role, click on the “Create role” button.

As previously stated, a role has two core aspects: the trust policy and the permission policy. So, the first thing we have to specify is who can assume this role.

We will begin by selecting the “Custom trust policy” option.

Upon selecting “Custom trust policy”, AWS automatically generates a JSON policy with an empty Principal field. The Principal field specifies who can assume this role; if we leave it empty, the role cannot be assumed by any principal.

We will add the Amazon Resource Name (ARN) of the AWS IAM user who should be allowed to assume this role. The ARN can be obtained from the user details page in the IAM dashboard.

Copy the ARN and paste it into the Principal block as the value of the “AWS” key, as shown below.
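For reference, the finished trust policy might look like the sketch below; the account ID and user name are placeholders, so substitute the ARN of your own IAM user:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Edmond"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }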

Once you have reviewed the trust policy, click on the next button to move on to the next page.

The next step is to choose a permission policy for this role. We can either use an existing policy or create a new one. We will choose an existing managed policy by the name AmazonEC2FullAccess to grant our role full access to the AWS EC2 service.

Remember that AWS denies everything by default; by attaching this policy, we grant this role access to EC2 only. Leave all the other settings unchanged and click Next.

We have already taken care of the two essential aspects of a role. All that is left is to give the role a name and a description.

In the role details, under Role name, give your role a name; call it AWSIAMRoleforEC2.

Under Description, describe it as “Provide full access to EC2”. Then review all the details and click the Create role button.

Congratulations! You have successfully created your first role. Click on view role.
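For readers who prefer scripting, the same role could also be created with the AWS SDK. Below is a minimal boto3 (Python) sketch of the two aspects we just configured in the console; the account ID and user ARN are placeholders:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: who can assume the role (placeholder user ARN).
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/Edmond"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Create the role with the trust policy...
    iam.create_role(
        RoleName="AWSIAMRoleforEC2",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        Description="Provide full access to EC2",
    )

    # ...then attach the managed permission policy (what the role allows).
    iam.attach_role_policy(
        RoleName="AWSIAMRoleforEC2",
        PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    )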

Security learning: The user ARN in the trust policy is stored internally as a unique principal ID when the policy is saved. This prevents privilege escalation by deleting a user and recreating a different one with the same name: the recreated user gets a different unique ID, so the saved trust policy no longer matches it and becomes ineffective.

Assuming IAM Roles

There are multiple ways to assume IAM roles. We can assume a role using the console, the CLI, or even the SDK.
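As a quick illustration of the CLI/SDK path, here is a minimal boto3 (Python) sketch that assumes the role and then calls EC2 with the temporary credentials; the role ARN is a placeholder:

    import boto3

    sts = boto3.client("sts")

    # Request temporary credentials for the role (placeholder ARN).
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/AWSIAMRoleforEC2",
        RoleSessionName="ec2-admin-session",
    )
    creds = response["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken

    # Use the temporary credentials to call EC2 with the role's permissions.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(ec2.describe_instances()["Reservations"])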

Switching to a role through the console

Log in as an IAM user allowed to assume that role.

In this example, we are already logged in as the IAM user Edmond.

To switch to a different role, select the drop-down user information panel in the top-right corner. When we click on it, the “Switch role” button appears next to the “Sign out” button.

Upon clicking the “Switch role” button, we are greeted by a form requesting information about the role we wish to switch to. If you are switching roles for the first time, you will see a different screen explaining the details of switching roles; click the “Switch role” button there.

Fill in the role details with the role we created earlier (AWSIAMRoleforEC2) and click the “Switch role” button.

Upon switching to the IAM role, we arrive at the IAM dashboard again, this time with the principal shown as AWSIAMRoleforEC2@123456777.

Because this role grants access only to EC2, if we try to create an S3 bucket, we will get an Access Denied error.

Congratulations! You have successfully assumed a role.

Clean up any resources you created for this demo, and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


Turbocharge Your Amazon S3: Performance Enhancement Guidelines.

INTRODUCTION

Amazon S3, Amazon Web Services’ versatile storage service, is the backbone of modern data management. In this blog, we present a concise set of guidelines and strategies to boost your Amazon S3 storage performance.

AWS-recommended best practices to optimize performance.

Using prefixes

A prefix is nothing but an object path. When we create a new S3 bucket, we define the bucket name; the rest of the object's path (or key) can then contain directories, for example a directory dirA, a subdirectory subB, and finally the object name, such as documentA.pdf:

bucketName/dirA/subB/documentA.pdf

The S3 prefix is simply the folder portion inside the bucket, so in the example above the prefix is /dirA/subB.

How can the S3 prefix give us a better performance?

Amazon S3 has remarkably low latency: you can get the first byte out of S3 within approximately 200 ms, and it supports a very high request rate. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket, and there is no limit on the number of prefixes. So, the more prefixes you have inside your bucket, the more request throughput you can achieve.

The essential number to look at is the 5,500 GET requests per second. When we access objects in a specific S3 bucket with GET requests, we get 5,500 requests per second for each prefix. This means that if we want better GET performance out of S3, we should spread our reads across multiple folders (prefixes).

If we were to use four different prefixes, we would get 5,500 requests per second times 4, i.e. 22,000 requests per second, which is far better performance.
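To illustrate the idea, here is a hedged boto3 (Python) sketch that spreads GET requests across several hypothetical prefixes, so each prefix contributes its own request-rate allowance; the bucket and key names are made up for this example:

    from concurrent.futures import ThreadPoolExecutor

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-demo-bucket"  # placeholder bucket name

    # Four different prefixes, each with its own 5,500 GET/s allowance
    # (roughly 22,000 GET/s combined).
    keys = [
        "reports/2023/documentA.pdf",
        "images/raw/photo1.jpg",
        "logs/app1/2023-10-01.log",
        "exports/csv/batch42.csv",
    ]

    def fetch(key):
        # Each GET counts against the request rate of its own prefix.
        return s3.get_object(Bucket=BUCKET, Key=key)["ContentLength"]

    with ThreadPoolExecutor(max_workers=4) as pool:
        sizes = list(pool.map(fetch, keys))
    print(sizes)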

Optimize S3 upload performance with multipart upload:

For files larger than 5 GB, multipart upload is mandatory, and for files larger than 100 MB it is recommended. What does multipart upload do? It cuts a big file into pieces and uploads those pieces in parallel, which improves efficiency and speeds up transfers.
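Here is a minimal boto3 (Python) sketch of multipart upload, assuming placeholder bucket and file names; once the file size crosses the configured threshold, the SDK splits it into parts and uploads them in parallel:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Use multipart upload for anything over 100 MB, in 100 MB parts,
    # with up to 10 parts uploaded in parallel.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=100 * 1024 * 1024,
        max_concurrency=10,
    )

    s3.upload_file("big-backup.tar", "my-demo-bucket", "backups/big-backup.tar", Config=config)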

Implement Retries for Latency-Critical Applications:

Given Amazon S3’s vast scale, a retried request is likely to take an alternative path if the initial request is slow, leading to quicker success.
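The AWS SDKs already retry many failed requests automatically; the boto3 (Python) sketch below simply shows how that retry behavior could be tuned, and the retry values are illustrative rather than prescriptive:

    import boto3
    from botocore.config import Config

    # Retry throttled or failed requests automatically; a retried request
    # often takes a different path through S3's front end and succeeds faster.
    retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

    s3 = boto3.client("s3", config=retry_config)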

Utilize Both Amazon S3 and Amazon EC2 within the Same Region:

Access S3 buckets from Amazon EC2 instances in the same AWS Region whenever feasible. This reduces network latency and data transfer costs, optimizing performance.

S3 Transfer Acceleration:

Transfer Acceleration uses the globally distributed edge locations in CloudFront to accelerate data transport over geographical distances.

Data travels over the public internet only as far as the nearest edge location; from the edge location to the S3 bucket, it travels over the fast AWS private network. Transfer Acceleration therefore minimizes the use of public networks and maximizes the use of the AWS private network, improving S3 transfer performance.

To implement this, open the S3 console and select your bucket. Click the Properties tab, scroll to the Transfer acceleration section, click Edit, select Enabled, and save.
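The same setting can also be applied programmatically. Below is a boto3 (Python) sketch, assuming a placeholder bucket and file name:

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")

    # Enable Transfer Acceleration on the bucket.
    s3.put_bucket_accelerate_configuration(
        Bucket="my-demo-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Then use a client that sends requests to the accelerate endpoint.
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_accel.upload_file("big-backup.tar", "my-demo-bucket", "backups/big-backup.tar")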


Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) that transparently caches data from Amazon S3 in a large set of geographically distributed points of presence (PoPs).

When objects might be accessed from multiple Regions, or over the internet, CloudFront allows data to be cached close to the users that are accessing the objects. This results in high-performance delivery of Amazon S3 content.

Evaluate Performance Metrics:

Employ Amazon CloudWatch request metrics specific to Amazon S3, as they include a metric designed to capture 5xx status responses.

Leverage the advanced metrics section within Amazon S3 Storage Lens to access the count of 503 (Service Unavailable) errors.

By enabling server access logging for Amazon S3, you can filter and assess all incoming requests that trigger 503 (Service Unavailable) responses.

Keep AWS SDKs Up to Date: The AWS SDKs embody recommended performance practices. They offer a simplified API for Amazon S3 interaction, are regularly updated to adhere to the latest best practices, and incorporate features like automatic retries for HTTP 503 errors.

Limitations with SSE-KMS

Suppose you use the AWS Key Management Service (KMS), Amazon's encryption service, and have enabled SSE-KMS to encrypt and decrypt your objects in S3. In that case, remember that there are built-in limits on the KMS API: every upload calls the GenerateDataKey operation, every download calls the Decrypt operation, and both count against your KMS request quota. The quota is Region-specific, typically around 5,500, 10,000, or 30,000 requests per second, so upload and download throughput ultimately depends on your KMS quota. Check that quota before relying on SSE-KMS for high-throughput workloads.

If you need performance and encryption simultaneously, consider using the native S3-managed server-side encryption (SSE-S3) instead of SSE-KMS.
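For example, here is a hedged boto3 (Python) sketch of uploading an object with SSE-S3 (AES-256) instead of SSE-KMS; the bucket, key, and file names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # SSE-S3 (AES256) is handled entirely by S3 and does not consume KMS request quota.
    with open("documentA.pdf", "rb") as f:
        s3.put_object(
            Bucket="my-demo-bucket",
            Key="reports/documentA.pdf",
            Body=f,
            ServerSideEncryption="AES256",  # use "aws:kms" only if you need KMS-managed keys
        )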

If downloads or requests seem slow when using SSE-KMS, it may be because you are hitting the KMS request limit.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!


Mastering Data Redundancy: Amazon S3 Replication Demystified.

In today’s data-driven world, safeguarding your digital assets is paramount. As the preferred choice for scalable and secure data storage, Amazon S3 has revolutionized the way we manage and protect our valuable information. However, the true power of data resilience lies in Amazon S3 Replication, an often-underutilized feature that holds the key to an unshakeable data strategy.

INTRODUCTION

Replication is the process of automatically copying objects between buckets in the same or different AWS Regions. This copying happens when you create new objects or update existing ones. Amazon Simple Storage Service (S3) replication lets you keep an exact copy of your objects in other buckets.

PURPOSE OF S3 REPLICATION

  • Reduce latency
  • Enhance availability
  • Disaster recovery
  • Copy objects to a cost-effective storage class
  • Data redundancy
  • Meet compliance requirements

REPLICATION OPTIONS:

AMAZON S3 SAME REGION REPLICATION

S3 Same-Region Replication (SRR) automatically replicates objects from a source bucket to a destination bucket within the same Region. Replication is asynchronous, which means objects are not copied to the destination bucket the instant they are created or modified; they appear shortly afterwards.

AMAZON S3 CROSS-REGION REPLICATION

S3 Cross-Region Replication (CRR) automatically replicates objects from the source bucket to a destination bucket in a different Region. It minimizes latency for data access from different geographic regions.

S3 BATCH REPLICATION:

With CRR and SRR, Amazon S3 protects your data by automatically replicating new objects that you upload. S3 Batch Replication, on the other hand, lets you replicate existing objects using S3 Batch Operations, which are managed jobs.

Let’s get Practical

We will show the steps for setting up S3 CRR.

Conditions for enabling cross-Region replication: versioning must be enabled on both the source and destination buckets, the buckets must be in different AWS Regions, and Amazon S3 must have permission (through an IAM role) to replicate objects on your behalf.

Setting up CRR:

Log in to the AWS Management Console, type S3 in the search box, then select S3 under Services.
In the S3 console, select Buckets, then click Create bucket.
In the Create bucket console, we will name our bucket sourcedemobucket11 and keep it in the Asia Pacific (Mumbai) ap-south-1 Region. Also note that S3 bucket names must be globally unique. Scroll down.
Bucket versioning is disabled by default; enable it by selecting the Enable radio button. Leave everything else as default, scroll down, and click Create bucket.
Now, following the same steps, create a destination bucket, destinationdemobucket12, with versioning enabled, but this time place it in us-east-1.
After versioning is enabled, leave everything else as default and click Create bucket.
Next, click on your source bucket, head over to the Management tab, then scroll down to Replication rules.
Now, click “Create replication rule”.
Give your replication rule the name “replicatedemo11”.
Choose “destinationdemobucket12” as the destination bucket.

Notice that you have an option to choose a destination bucket in another account.

To replicate objects from the source bucket to the destination bucket, Amazon S3 needs an IAM role. Select the drop-down under IAM role and click Create new role.
If you want your S3 objects to be replicated within 15 minutes, check the “Replication Time Control (RTC)” box; note that you will be charged for it. We will move forward without enabling it for now and click Save.
As soon as you click Save, a screen pops up asking whether you want to replicate existing objects in the S3 bucket. That would incur charges, so we will proceed without replicating existing objects and click Submit.
After completing this setup, you will see a message saying “Replication configuration successfully updated”.
Upload a test object to the source bucket, then head over to the destination bucket, destinationdemobucket12, to check whether the uploaded file has been replicated. You should see that it has been copied to the destination bucket; if you don't see it, click the refresh button.
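For completeness, the same replication rule could also be applied programmatically. Below is a boto3 (Python) sketch; the account ID and replication role name are placeholders:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="sourcedemobucket11",
        ReplicationConfiguration={
            # IAM role that allows S3 to replicate objects on your behalf (placeholder ARN).
            "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicatedemo11",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # empty filter = apply to all objects
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::destinationdemobucket12"},
                }
            ],
        },
    )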
Note: remember to clean up by deleting the demo buckets and objects when you are done, to avoid unnecessary charges.

Facts about CRR

Replication is asynchronous; versioning must be enabled on both buckets; by default, only new objects created after the rule is enabled are replicated (use S3 Batch Replication for existing objects); and delete markers are not replicated by default.

Thanks for your attention and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!

Maximizing Efficiency and Cost Savings with AWS S3 Lifecycle Management

Introduction

In the age of cloud computing, managing vast amounts of data efficiently and cost-effectively is a top priority for organizations of all sizes. Amazon Web Services (AWS) offers a robust solution for this challenge through Amazon S3 (Simple Storage Service). AWS S3 is a scalable and durable cloud storage service used to store and retrieve data. To further optimize data management, AWS provides a feature known as S3 Lifecycle Management. This powerful tool automates the management of objects stored in S3 buckets, allowing users to optimize their storage costs, meet compliance requirements, and simplify data management. In this article, we will delve into the world of AWS S3 Lifecycle Management, exploring its benefits, configuration options, and real-world applications.

Understanding AWS S3 Lifecycle Management

AWS S3 Lifecycle Management is a set of rules and policies applied to S3 objects to automate their lifecycle. This automation is particularly valuable when managing large volumes of data, as it helps keep storage costs in check, ensures data durability, and streamlines data management.

The key components of S3 Lifecycle Management include two types of actions:

Transition actions

These rules define when objects should transition from one storage class to another. You can specify actions such as moving objects from the standard storage class to the infrequent access (IA) storage class after a certain number of days or changing the storage class to Glacier for archiving purposes.

Expiration actions

Expiration rules define when objects should be deleted from S3. You can set expiration based on a specific number of days since the object’s creation or the date when it was last modified.
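To make the two action types concrete, here is a hedged boto3 (Python) sketch of a lifecycle configuration that combines one transition action and one expiration action; the bucket name and day counts are illustrative only:

    import boto3

    s3 = boto3.client("s3")

    # One rule: move objects to Standard-IA after 30 days, delete them after 365 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-demo-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "demolifecyclerule",
                    "Status": "Enabled",
                    "Filter": {},  # empty filter = apply to all objects
                    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )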

Benefits of AWS S3 Lifecycle Management

Cost Optimization.

Data Durability.

Compliance and Data Retention.

Simplified Data Management.

Real-World Applications

Data Backup and Archiving: you can set up rules to transition objects to Glacier after 90 days, ensuring data durability and cost savings.
Log Data Management: Organizations can automatically move logs from the standard storage class to Glacier after a certain period, saving on storage costs while maintaining compliance with data retention policies.
Content Delivery: In scenarios where you store content for your website or application in S3, you can use S3 Lifecycle Management to move old or infrequently accessed content to a more cost-effective storage class.

Creation of Lifecycle rule

Sign in to your AWS Management Console.
Type S3 in the search box, then select S3 under Services.
In the Buckets list, choose the name of the bucket for which you want to create a lifecycle rule.
The bucket is currently empty; you can create a lifecycle rule before uploading any objects to the bucket.
Choose the Management tab, then choose Create lifecycle rule.
Under Lifecycle rule name, enter a name for your rule; call it demolifecyclerule. Remember that the name must be unique within the bucket.
Choose the scope of the lifecycle rule. I will apply it to all objects in the bucket, so select that radio button, then scroll down.
Under Lifecycle rule actions, choose the actions that you want your lifecycle rule to perform; depending on the actions you choose, different options appear.
I will start by transitioning current versions of objects between storage classes.
Under Choose storage class transitions, choose Standard-IA. Under Days after object creation, a minimum of 30 days is required, so enter 30 days, then click Add transition to add another transition.
For the second transition, I will move to Intelligent-Tiering; here a minimum of 90 days is required, so I will enter that.
Next, I will choose Transition noncurrent versions of objects between storage classes: move to that tab, then choose the storage class to transition to.
Then, to permanently delete previous versions of objects, under Permanently delete noncurrent versions of objects, enter the number of days after objects become noncurrent. You can optionally specify the number of newer versions to retain.
Review, then click Create rule.
Success! The lifecycle rule has been created.
Finally, clean up by deleting the rule and any demo resources you no longer need, to avoid unnecessary charges.
If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.
Thank you!

Mastering Data Protection: AWS Backup.

In the ever-evolving digital landscape, where data is the lifeblood of businesses and organizations, the concept of data backup has become paramount. In essence, a backup is a secure, duplicate copy of your critical data and information, ensuring that in the face of unexpected calamities or data corruption, your valuable information remains intact and recoverable. A well-implemented backup strategy is akin to a safety net for your digital assets, providing assurance that, even in worst-case scenarios, your data can be resurrected and business operations can continue with minimal disruption.

What is AWS Backup Service and Why It Matters?

AWS Backup is a fully managed service offered by Amazon Web Services (AWS). It makes it easy to centralize and automate data protection across AWS services. AWS Backup is designed to simplify the backup process, making it easier to create, manage, and restore backups for critical data.

Key components of AWS backup service.

Backups

A backup or recovery point represents the content of a resource, such as an Amazon Elastic Block Store (Amazon EBS) volume at a specified time. It is a term that refers generally to the different backups in AWS services, such as Amazon EBS snapshots and DynamoDB backups.

Backup vaults

Vaults are simply logical containers that help you organize and manage your backups effectively. You can create multiple vaults to categorize and store backups based on your requirements.

Backup plan

These are at the heart of AWS Backup and define your backup policies and schedules. Within a backup plan, you specify settings such as backup frequency, retention periods, and lifecycle rules.
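As an illustration, a backup plan could also be defined programmatically. Below is a minimal boto3 (Python) sketch that assumes the default backup vault and uses illustrative names, schedule, and retention values:

    import boto3

    backup = boto3.client("backup")

    # A daily backup rule that keeps recovery points for 35 days.
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "daily-backup-plan",
            "Rules": [
                {
                    "RuleName": "daily-at-5am-utc",
                    "TargetBackupVaultName": "Default",
                    "ScheduleExpression": "cron(0 5 * * ? *)",
                    "Lifecycle": {"DeleteAfterDays": 35},
                }
            ],
        }
    )
    print(plan["BackupPlanId"])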

Lifecycle Rules:

Lifecycle rules determine the retention period of your backups and when they should be deleted. You can configure rules to automatically transition backups to cold storage or remove them when they’re no longer needed.

Backup Jobs:

Once a backup is scheduled, you can monitor the status of the backup job along with other details, such as backup, restore, and copy activity. Backup job statuses include pending, running, aborted, completed, and failed.

Recovery Points:

These are specific states of your resources captured by backup jobs at particular points in time. AWS Backup retains multiple recovery points based on the retention settings in your backup plan.

Vault Lock:

Vault lock provides an additional layer of security for your backups. When enabled, it prevents the deletion of backup data for a specified retention period, ensuring data integrity and compliance with retention policies.

The importance of AWS Backup in today’s data-driven world.

Data resilience.

Data loss can be catastrophic for any organization. AWS Backup ensures that your critical data is protected and can be quickly recovered in case of accidental deletions, hardware failures, or data corruption.

Security and Compliance

AWS Backup integrates with AWS Identity and Access Management (IAM) and AWS Key Management Service (KMS) to provide secure, encrypted backups. This is crucial for meeting regulatory requirements and maintaining data privacy.

Simplicity and Automation

AWS Backup simplifies the backup process with automated policies, making it easy to create, schedule, and manage backups without the need for complex scripting or manual interventions.

Centralized Management

With AWS Backup, you can manage backups for multiple AWS services from a single console, streamlining backup operations and reducing management overhead.

Cross-Region and Cross-Account Backups

AWS Backup enables you to create backups that span regions and AWS accounts, enhancing data resilience and disaster recovery capabilities.

Cross-Account and Cross-Regional Backups

Cross-account and cross-regional backups form the cornerstone of a resilient data protection strategy. Cross-account backups involve replicating critical data from one AWS account to another, mitigating the risk of accidental data loss, and enhancing security by adhering to the principle of least privilege.

Cross-regional backups extend this protection by replicating data across different AWS regions, guarding against region-specific outages or unforeseen disruptions.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.

Thank you!