Accend Networks San Francisco Bay Area Full Service IT Consulting Company

Categories
Blogs

AWS Elastic Network Interface

Understanding AWS Elastic Network Interface (ENI)

In the AWS environment, ENIs offer many features and benefits. They enable you to assign multiple private IP addresses to your instances. Additionally, ENIs can be attached or detached from instances on the fly, providing flexibility in managing your infrastructure. This article dives deep into ENIs—what they are and how they work—and provides a hands-on demo on how to create and attach an ENI.

What is an Elastic Network Interface (ENI)?

An Elastic Network Interface (ENI) is a virtual network card that you can attach to instances that you launch in the same Availability Zone. You can create, attach, detach, and manage ENIs independently of EC2 instances.

When you move a network interface from one instance to another, network traffic is redirected from the original instance to the new instance.

Key ENI attributes:

Primary Private IP Address: Each ENI must have a primary private IP, which cannot be removed.

Secondary Private IP Addresses: Optionally, ENIs can have multiple secondary private IPs for handling different workloads.

Elastic IP Addresses: You can associate an Elastic IP (EIP) with the ENI’s primary or secondary private IP, allowing external access.

MAC Address: Each ENI comes with its own unique MAC address.

Security Groups: ENIs can have one or more security groups that define inbound and outbound traffic rules.

Attachment to Subnet: ENIs must belong to a specific subnet within a VPC.

Let us look at the types of ENI Configurations

Primary ENI (Default): Each EC2 instance has a primary ENI by default, which is created when the instance is launched. This primary ENI is tied to the instance for its entire lifecycle and cannot be detached.

Secondary ENI (Additional): Secondary ENIs can be created and attached to instances. These are useful when an EC2 instance requires multiple network interfaces. For instance, a web server that must handle traffic from multiple subnets.

Detached ENI: An ENI can be detached from one instance and reattached to another. This capability allows you to transfer the network configuration of one EC2 instance to another without network downtime, an advantage in failover scenarios.

Here are some of the benefits of using Elastic Network Interfaces

High availability: If an instance fails, you can move its ENI to a standby instance; the private IP and MAC addresses move with it, so traffic resumes with minimal interruption.

Scalability: ENIs are scalable. You can easily add or remove ENIs from your EC2 instances as needed.

Flexibility: ENIs can be used to connect to a variety of AWS services and networks.

Let’s dive into the Demo.

Step 1: Launch Two EC2 Instances:

For this demo, make sure you have two EC2 instances up and running. I have two instances running: server1 and server2.

Step 2: Check Network Interfaces:

Go to Instances under the EC2 dashboard.

Select each instance, and go to the Networking tab.

Scroll down to check the Network Interfaces section to see the attached ENIs. For server1 we can see the ENI. Each instance has an ENI with a primary private IPv4.

Repeating the same steps for server2, we can see its ENI.

For the above ENIs, we can observe that no Elastic IPs are attached; you could associate an Elastic IP address with either ENI if external access is needed.

You can also view the ENIs by selecting Network Interfaces on the left side of the EC2 dashboard.
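The same information is available from the AWS CLI; here is a quick sketch (the instance ID below is a placeholder):

```shell
# List the ENIs attached to a specific instance (placeholder instance ID)
aws ec2 describe-network-interfaces \
  --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" \
  --query "NetworkInterfaces[].{Id:NetworkInterfaceId,PrivateIp:PrivateIpAddress,Subnet:SubnetId}" \
  --output table
```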

Step 3: Create a New ENI:

Click Create network interface.

Set Description

Select a Subnet (same AZ as instances, e.g., us-east-1a).

Enable Auto-assign private IPv4.

Attach a Security group then click Create network interface.
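If you prefer the CLI, an equivalent ENI can be created with `aws ec2 create-network-interface` (the subnet and security-group IDs below are placeholders); a private IPv4 address is auto-assigned when none is specified:

```shell
# Create a secondary ENI in a chosen subnet with a security group attached
aws ec2 create-network-interface \
  --subnet-id subnet-0abc12345678def90 \
  --description "demo ENI" \
  --groups sg-0abc12345678def90
```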

Step 4: Attaching new ENI to an Instance:

Select the newly created ENI.

Click Actions > Attach.

Choose the VPC and the instance to attach it to from the drop-down menus (e.g., the first instance).

Confirm attachment.

Check the instance’s Networking tab to see the new ENI. We can see the demo ENI under Network Interfaces.

Step 5: Network Failover Demonstration:

Detach the new ENI from the first instance.

Click Actions > Detach (use Force detach if necessary).

Attach the ENI to the second instance. Click Actions, then select Attach.

Check the second instance’s Networking tab to see the new ENI.
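For reference, the detach/attach failover in Steps 4 and 5 maps to three CLI calls (all IDs below are placeholders):

```shell
# Look up the attachment ID of the ENI on the first instance
aws ec2 describe-network-interfaces \
  --network-interface-ids eni-0abc12345678def90 \
  --query "NetworkInterfaces[0].Attachment.AttachmentId"

# Detach it (add --force only if a normal detach hangs)
aws ec2 detach-network-interface --attachment-id eni-attach-0abc12345678def90

# Attach it to the second instance at device index 1
aws ec2 attach-network-interface \
  --network-interface-id eni-0abc12345678def90 \
  --instance-id i-0222222222222222b \
  --device-index 1
```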

Step 6: Terminate Instances and Observe ENIs

Terminate both instances.

Check Network Interfaces.

The ENIs created with the instances will be deleted.

The manually created ENI will remain, as shown.

Manually delete the newly created ENI to clean up.

Key Points:

ENI: Virtual network card providing network connectivity.

Flexibility: Attach/detach ENIs between instances for failover.

High Availability: Move ENIs between instances for minimal downtime.

Persistence: Manually created ENIs persist after instance termination.

Conclusion

Understanding how ENIs function and how to implement them is crucial for optimizing AWS network configurations. ENIs can improve the fault tolerance of your network setup. For instance, if you attach an ENI to a backup EC2 instance, the backup instance can take over immediately in case the primary instance fails.

Thanks for reading and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.


Thank you!


Protecting Sensitive Data with AWS Macie

Protecting Sensitive Data in AWS: A Practical Demo with AWS Macie

In today’s digital landscape, keeping sensitive data safe has become essential. As cloud environments grow more complex, it’s crucial to protect your sensitive data from unauthorized access and leaks. Amazon Macie is a powerful service designed to automatically discover, classify, and protect sensitive data stored in AWS. In this article, we will provide a hands-on demo to walk you through the process of using AWS Macie to enhance your data security and explore its features.

Challenges Amazon Macie is addressing

With a wide range of data residing across multiple AWS services and regions, identifying and safeguarding it manually can be a daunting task. AWS Macie steps in as a guardian, utilizing advanced machine learning algorithms and pattern recognition to automate the process of data discovery and classification.

AWS Macie and Its Functions

Discover Sensitive Data: AWS Macie thoroughly scans data repositories, such as Amazon S3 buckets, to identify a broad spectrum of sensitive information. Doing so helps organizations gain full visibility into their data landscape.

Intelligent Threat Detection: Macie not only discovers sensitive data but also acts as an ever-watchful sentry. It proactively alerts organizations to any data access or movement that may pose security risks or violate compliance policies.

Compliance and Reporting: Macie streamlines this process by generating detailed reports on data security and access, helping organizations demonstrate adherence to industry-specific compliance requirements.

Let’s dive into the hands-on demo.

Step 1: Creating an S3 bucket.

Log in to your AWS Management Console, type S3 in the search bar, then select S3 under Services.

In the S3 console, click Create Bucket.

In the create bucket console, fill in your bucket details by entering a unique bucket name.

Make sure the Block all public access checkbox is ticked. Leave all other settings as default, then scroll down and click Create Bucket.

Step 2: Uploading sensitive data into the S3 bucket.

Our bucket is successfully created, and we will now upload our sensitive data to it. You can get this sample sensitive data from this GitHub link.

Select your bucket, move to the object tab then click upload.

Select the sensitive data you downloaded then click upload.

Step 3: Enabling and configuring Amazon Macie.

Let’s now head to the Macie console: type Macie in the search bar and select Macie under Services.

In the Macie console, click Get Started.

In the Get Started dashboard click Enable Macie.

When you enable Macie, it starts automated discovery. We will also create a job for Macie: click Create job.

Select the S3 bucket that you intend to run audits on, then click Next again to go to Step Three.

In a real-world environment, we would want to continuously audit for sensitive data on a schedule. For this demo, however, we only need to scan on demand. Select a One-time job and click Next.

Let us refine the job.

We will create a custom identifier. In the job-creation wizard, click Manage Custom Identifiers; this takes you to a window where you can define a regex-based identifier.

Click Create.

Name the custom data identifier ICD-10 Diagnosis Code and paste the following into the regular expression field:

[A-TV-Z][0-9][0-9AB]\.?[0-9A-TV-Z]{0,4}

To confirm our regex works, let’s test it on some dummy sample data. In the Sample data field, paste the data below and hit Test.

id,first_name,last_name,email,gender,ip_address,diagnosis_code,ein,favorite_movie,favorite_movie_genre
1,Moshe,Tolefree,mtolefree0@imageshack.us,Male,111.207.126.6,G3185,49-6935923,A Flintstones Christmas Carol,Animation|Children|Comedy
2,Jacqui,Harbour,jharbour1@home.pl,Non-binary,152.80.84.54,T43693,44-6915050,Forgotten Silver,Comedy|Documentary
3,Koo,Readitt,kreaditt2@tripadvisor.com,Female,25.241.0.38,S061X2A,49-5315541,”Alamo, The”,Drama|War|Western
4,Annetta,Moultrie,amoultrie3@msu.edu,Female,214.224.120.104,H6123,62-5428600,Nine Ways to Approach Helsinki (Yhdeksän tapaa lähestyä Helsinkiä),Documentary
5,Oralie,Halversen,ohalversen4@networksolutions.com,Female,239.220.166.49,S52501S,79-7959398,Those Awful Hats,Comedy

Once the test is successful, scroll down and click submit.
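You can also sanity-check the same pattern outside the console with `grep -E`, which uses the same extended-regex syntax:

```shell
# The ICD-10 pattern used for the Macie custom identifier
ICD10_REGEX='[A-TV-Z][0-9][0-9AB]\.?[0-9A-TV-Z]{0,4}'

# diagnosis_code values taken from the sample rows above
printf '%s\n' G3185 T43693 S061X2A H6123 S52501S > /tmp/codes.txt

# Count how many sample codes match the pattern
grep -cE "$ICD10_REGEX" /tmp/codes.txt   # prints 5
```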

Return to the original window. Hit the refresh button, select the newly created identifier, and click Next.

Continue clicking Next; under General settings, enter a job name.

Click Next, review your settings, then hit the Submit button.

At this point, the job will run for roughly 10–15 minutes. When the status moves from Active (Running) to Complete, proceed to Findings on the sidebar to see Macie’s findings.

This brings us to the end of this demo. Make sure to clean up your resources.

Conclusion

Amazon Macie offers a robust solution for securing sensitive data in the cloud by combining advanced machine learning with automated data discovery and classification. By following the above-outlined steps, you’ve learned how to set up and utilize Amazon Macie to protect your sensitive data.


AWS Recycle Bin

AWS Recycle Bin: Your Key to Enhanced Data Protection and Recovery

Introduction

As more and more companies depend on cloud infrastructure to run their businesses, they need a strong strategy to protect and recover their data; accidental deletions or unexpected failures can lead to critical data loss, resulting in downtime and potential financial losses. AWS, well known for its wide range of services, provides various tools to keep data safe and retrievable. Among these, the AWS Recycle Bin service stands out as a powerful feature for improving data recovery. This blog explores the AWS Recycle Bin service, what it offers, and how to use it to protect your important resources.

Getting to Know AWS Recycle Bin

Recycle Bin is an AWS data-protection feature that lets you recover deleted Amazon EBS snapshots and EC2 AMIs. You enable it by creating retention rules that specify a resource type, the resources the rule applies to, and a retention period.

Once a matching resource is deleted, it isn’t immediately gone forever. Instead, it is retained in the Recycle Bin for the period you set, where it remains recoverable before being permanently deleted. This serves as a safety net, allowing users to easily recover accidentally deleted resources without needing to go through complicated backup and recovery processes.

The AWS Recycle Bin is particularly useful when mistakes happen or automated systems accidentally delete resources. By enabling the Recycle Bin, you ensure that even if a resource is deleted, it can still be restored, preventing data loss and avoiding service interruptions.

Benefits of AWS Recycle Bin

Enhanced Data Protection: It allows you to recover deleted resources within a specified period, reducing the risk of permanent data loss.

Compliance and Governance: It ensures that data is not permanently lost due to accidental deletions, which is essential for maintaining audit trails and adhering to data retention policies.

Cost Management: By setting appropriate retention periods, you can manage storage costs effectively.

Let’s now get to the hands-on.

Implementation Steps

Make sure you have an EC2 instance up and running.

Get EBS Volume Information

AWS Elastic Block Store (EBS) is a scalable block storage service provided by Amazon Web Services (AWS), offering persistent storage volumes for EC2 instances. To view your instance’s block storage, select the instance in the EC2 dashboard and move to the Storage tab.

Take a Snapshot of the Volume

An AWS EBS snapshot is a point-in-time backup of an EBS volume stored in Amazon S3. It captures all the data on the volume at the time the snapshot is taken, including data in use and any pending writes. EBS snapshots are commonly used for data backup, disaster recovery, and creating new volumes from existing data.

On the left side of EC2 UI, click Snapshots then click Create Snapshot.

In the Create snapshot UI, under resource types, select volumes. Then under volume ID, select the drop-down button and select your EBS volume.

Scroll down and click Create Snapshot.

Success.

Head to the Recycle Bin console and click Create retention rule.

Fill in retention rule details.

Under Retention settings, open the Resource type drop-down and select EBS Snapshots, tick the box to apply the rule to all resources, then for the retention period, select one day.

For the Rule lock settings, select Unlock.

Rule Created
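The same retention rule can be created from the AWS CLI with the `rbin` commands; a sketch, assuming your CLI is configured for the intended region:

```shell
# One-day Recycle Bin retention rule for deleted EBS snapshots,
# applied to all snapshots in this region
aws rbin create-rule \
  --resource-type EBS_SNAPSHOT \
  --retention-period RetentionPeriodValue=1,RetentionPeriodUnit=DAYS \
  --description "Keep deleted EBS snapshots for one day"
```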

Now go ahead and delete the snapshot.

Open the Recycle Bin.

Click on the snapshot present in the Recycle Bin and recover it.

Objective achieved: the snapshot is recovered successfully.
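The recovery itself can also be scripted (the snapshot ID below is a placeholder):

```shell
# See which snapshots are currently sitting in the Recycle Bin
aws ec2 list-snapshots-in-recycle-bin

# Restore a specific snapshot back to a usable state
aws ec2 restore-snapshot-from-recycle-bin --snapshot-id snap-0abc12345678def90
```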

Conclusion

The AWS Recycle Bin service offers a valuable layer of protection against accidental deletions, ensuring that critical resources like EBS snapshots and AMIs can be recovered within a defined period. Whether you’re protecting against human error or looking to strengthen your disaster recovery strategy, AWS Recycle Bin is an essential tool in your AWS toolkit.

This brings us to the end of this article.


Scaling Big Data with Amazon Redshift

Scaling Big Data with Amazon Redshift: Insights into Managing Large Databases

In today’s world driven by data, companies face a flood of information, making it essential to analyze and understand this data. This is where Amazon Redshift, a fully managed cloud data warehouse service, comes into play. Designed to handle large-scale data analytics and processing tasks, it helps businesses gain deeper insights, faster query performance, and cost-effective scalability.

What is Big Data?

Big Data is a term used to describe extremely large and complex datasets that cannot be easily processed or analyzed using traditional data processing tools.

With the boom of digital technologies, we generate vast amounts of data from various sources such as social media, sensors, online transactions, etc. Big Data encompasses all this information, and it continues to grow rapidly.

Importance of a Data Warehouse

For organizations that need to manage and analyze large amounts of data, a data warehouse is essential. It enables them to make informed decisions based on the data, by providing a comprehensive view of the organization’s data in one place.

Centralized Data Storage: With a data warehouse, all data is stored in one place, making it easier to manage and analyze. This eliminates the need for businesses to search through multiple sources to find the data they need.

Data Integration: A data warehouse allows businesses to integrate data from various sources, including applications, relational databases, and external sources. This makes it possible to combine data from different systems and gain a more complete view of business operations.

Efficient Analysis: With a data warehouse, businesses can perform complex queries, data analysis, and reporting to derive actionable insights. This enables them to make informed decisions based on their data.

Scalability and Performance: A data warehouse can handle large datasets and provide high-performance processing. This makes it possible for businesses to store and analyze vast amounts of data, even as their needs grow over time.

Traditional Data Warehouse Challenges

Traditional data warehousing solutions had many challenges that made them insufficient for managing and analyzing Big Data. Some of these challenges include:

  • Lack of Scalability
  • Lack of Data Integration
  • High Cost
  • Low Performance
  • Lack of Real-time Processing and Analysis

Introduction to AWS Redshift

What is AWS Redshift?

AWS Redshift is a cloud-based data warehousing service that allows businesses to store and analyze large amounts of structured and semi-structured data in a scalable and cost-effective manner. It is designed to handle petabyte-scale data processing and analysis tasks and is a fully managed data warehouse service provided by Amazon Web Services (AWS).

Redshift Architecture and Components

A Redshift cluster consists of one or more nodes. Each cluster has a leader node and one or more compute nodes.

Leader Node: Manages communication with client applications and coordinates query execution.

Compute Nodes: Execute queries and store data. Each compute node has its own CPU, memory, and storage.

Nodes and Node Types

Redshift offers different node types based on your performance and storage requirements:

Dense Compute (DC): Optimized for high performance with SSD storage.

Dense Storage (DS): Optimized for large storage capacity with HDDs.

Redshift Spectrum and Data Lake Integration

Redshift Spectrum allows you to query data directly from Amazon S3 without having to load it into Redshift. This feature enables seamless integration with your data lake, allowing you to extend your data warehouse to exabytes of data in S3.

Use Cases for AWS Redshift

Data Warehousing: Redshift can be used as a centralized repository for all enterprise data, enabling organizations to store and manage large volumes of structured and unstructured data.

Business Intelligence: Redshift can help organizations process and analyze large volumes of data to uncover insights that can inform business decisions.

Machine Learning: Redshift can be used as a data source for machine learning applications, providing access to large volumes of structured and unstructured data that can be used to train machine learning models.

Data Analytics: With Redshift, organizations can analyze large volumes of data to identify patterns, trends, and anomalies.

Getting Started with AWS Redshift

Note: In this demo, we will focus on navigating the console and exploring its features rather than creating a Redshift cluster, as provisioning a cluster could incur significant costs.

Access the Redshift Console: Navigate to the AWS Management Console and search for Redshift in the search bar then select Redshift under services.

Creating and Configuring a Redshift Cluster

Click on Create cluster.

Choose a cluster identifier, database name, and master user credentials.

Select the node type and the number of nodes based on your needs.

Configure Cluster Settings:

Choose the VPC and subnet group for network settings.

Configure the security settings, including setting up security groups for network access control.

Launch the Cluster:

Review your settings and click on Create cluster.
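For completeness, the equivalent provisioning from the AWS CLI looks roughly like this; all values are illustrative, and actually running it will incur cluster charges:

```shell
# Provision a small two-node Redshift cluster (illustrative values only)
aws redshift create-cluster \
  --cluster-identifier demo-cluster \
  --node-type dc2.large \
  --number-of-nodes 2 \
  --db-name dev \
  --master-username awsuser \
  --master-user-password 'ChangeMe123!'
```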

This brings us to the end of this article.


AWS Code Commit Obsolete

AWS CodeCommit Obsolete: Transitioning from AWS CodeCommit and Steps for a Seamless Migration

AWS CodeCommit, Amazon Web Services’ fully managed version control service, has been a leading solution for developers and organizations seeking a scalable, secure, and reliable version control system. However, AWS recently announced that it will no longer accept new customers for CodeCommit, effective June 6, 2024.

In this article, we’ll examine the impact of this phase-out, examine alternative version control systems, and offer tips on seamlessly transitioning your repositories.

Adapting to AWS CodeCommit’s Shutdown: Key Impacts and Your Next Step

AWS’s decision to wind down CodeCommit is part of a broader plan to simplify its offerings and reduce duplicate services. The growing popularity of platforms like GitHub and GitLab, which provide advanced features and strong community backing, heavily influenced this change. If you’re still using CodeCommit, the takeaway is clear: you can still access your repositories, but it’s time to start planning a move. AWS provides helpful documentation to guide you through the switch to a new platform.

Exploring Alternative Version Control Systems

With CodeCommit being phased out, organizations need to explore alternative version control systems, and here are some of the top options.

GitHub: It’s the world’s largest Git repository hosting service and offers extensive features, including GitHub Actions for CI/CD, a vibrant community, and seamless integration with many third-party tools.

GitLab: It stands out for its built-in DevOps capabilities, offering robust CI/CD pipelines, security features, and extensive integration options.

Bitbucket: It is well-suited for teams already using Atlassian products like Jira and Confluence.

Self-Hosted Git Solutions: This is for organizations with specific security or customization requirements.

Migrating your AWS CodeCommit Repository to a GitHub Repository

Before you start the migration, make sure you have set up a new repository and the remote repository should be empty.

The remote repository may have protected branches that do not allow force push. In this case, navigate to your new repository provider and disable branch protections to allow force push.

Log into the AWS Management Console and navigate to the CodeCommit console. In the AWS CodeCommit console, select the clone URL for the repository you will migrate. The correct clone URL (HTTPS, SSH, or HTTPS (GRC)) depends on which credential type and network protocol you have chosen to use.

In my case, I am using HTTPS.

Step 1: Clone the AWS CodeCommit Repository
Clone the repository from AWS CodeCommit to your local machine using Git. If you’re using HTTPS, you can do this by running the following command:

git clone https://your-aws-repository-url your-aws-repository

Replace your-aws-repository-url with the URL of your AWS CodeCommit repository.

Change the directory to the repository you’ve just cloned.

Step 2: Add the New Remote Repository

Navigate to the directory of your cloned AWS CodeCommit repository. Then, add the repository URL from the new repository provider.

git remote add <provider name> <provider-repository-url>

Step 3: Push Your Repository to the New Provider

Push your local repository to the new remote repository.

This will push all branches and tags to your new repository provider’s repository. The provider’s name must match the provider’s name from step 2.

git push <provider name> --mirror

I use SSH keys for authentication, so I will run git remote set-url with the SSH URL to authenticate with my SSH keys. Then, lastly, I will run the git push command.

Step 4: Verify the Migration

Once the push is complete, verify that all files, branches, and tags have been successfully migrated to the new repository provider. You can do this by browsing your repository online or cloning it to another location and checking it locally.
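If you want to rehearse Steps 1–4 safely before touching real repositories, the whole flow can be simulated locally, with bare repositories standing in for CodeCommit and the new provider:

```shell
set -e
work=$(mktemp -d)

# Bare repos standing in for CodeCommit ("origin") and the new provider
git init --bare "$work/origin.git"
git init --bare "$work/provider.git"

# Seed the stand-in CodeCommit repository with one commit
git clone "$work/origin.git" "$work/seed"
cd "$work/seed"
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -m "initial"
git push origin HEAD

# Step 1: clone the "CodeCommit" repository
git clone "$work/origin.git" "$work/migrated"
cd "$work/migrated"

# Step 2: add the new remote; Step 3: mirror-push everything
git remote add provider "$work/provider.git"
git push provider --mirror

# Step 4: verify all commits arrived at the new provider
git --git-dir="$work/provider.git" log --oneline --all
```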

Step 5: Update Remote URLs in Your Local Repository

If you plan to continue working with the migrated repository locally, you may want to update the remote URL to point to the new provider’s repository instead of AWS CodeCommit. You can do this using the following command:

git remote set-url origin <provider-repository-url>

Replace <provider-repository-url> with the URL of your new repository provider’s repository.

Step 6: Update CI/CD Pipelines

If you have CI/CD pipelines set up that interact with your repositories, such as GitLab, GitHub, or AWS CodePipeline, update their configuration to reflect the new repository URL. If you disabled protected-branch permissions before migrating, you may want to add them back to your main branch.

Step 7: Inform Your Team

If you’re migrating a repository that others are working on, be sure to inform your team about the migration and provide them with the new repository URL.

Step 8: Delete the Old AWS CodeCommit Repository

This action cannot be undone. Navigate back to the AWS CodeCommit console and delete the repository that you have migrated.

Conclusion

By carefully evaluating your options and planning your migration, you can turn this transition into an upgrade for your development processes. Embracing a new tool not only enhances your team’s efficiency but also ensures you stay aligned with current industry standards.

This brings us to the end of this blog.
