Accend Networks San Francisco Bay Area Full Service IT Consulting Company


AWS CodeCommit Obsolete: Transitioning from AWS CodeCommit and Steps for a Seamless Migration

AWS CodeCommit, Amazon Web Services’ fully managed version control service, has been a leading solution for developers and organizations seeking a scalable, secure, and reliable version control system. However, AWS recently announced that it will no longer accept new customers for CodeCommit, effective June 6, 2024.

In this article, we’ll examine the impact of this phase-out, explore alternative version control systems, and offer tips for a seamless repository migration.

Adapting to AWS CodeCommit’s Shutdown: Key Impacts and Your Next Steps

AWS’s decision to retire CodeCommit is part of a broader plan to streamline its offerings and reduce duplicate services. The growing popularity of platforms like GitHub and GitLab, which provide advanced features and strong community backing, played a significant role in this change. If you’re still using CodeCommit, the takeaway is clear: your repositories remain accessible, but it’s time to start planning a move. AWS has published documentation to guide you through the switch to a new platform.

Exploring Alternative Version Control Systems

With CodeCommit being phased out, organizations need to explore alternative version control systems, and here are some of the top options.

GitHub: It’s the world’s largest Git repository hosting service and offers extensive features, including GitHub Actions for CI/CD, a vibrant community, and seamless integration with many third-party tools.

GitLab: It stands out for its built-in DevOps capabilities, offering robust CI/CD pipelines, security features, and extensive integration options.

Bitbucket: It is well-suited for teams already using Atlassian products like Jira and Confluence.

Self-Hosted Git Solutions: This is for organizations with specific security or customization requirements.

Migrating your AWS CodeCommit Repository to a GitHub Repository

Before you start the migration, make sure you have created a new repository with your new provider, and that the remote repository is empty.

The remote repository may have protected branches that do not allow force pushes. If so, navigate to your new repository provider and temporarily disable branch protection so that a force push is allowed.

Log in to the AWS Management Console and navigate to the CodeCommit console. In the AWS CodeCommit console, copy the clone URL for the repository you will migrate. The correct clone URL (HTTPS, SSH, or HTTPS (GRC)) depends on which credential type and network protocol you have chosen to use.

In my case, I am using HTTPS.
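If you prefer the command line, you can also look up the clone URLs with the AWS CLI; the repository name below is a placeholder:

```shell
# Returns repository metadata, including the HTTPS and SSH clone URLs
aws codecommit get-repository --repository-name my-app-repo \
  --query 'repositoryMetadata.[cloneUrlHttp,cloneUrlSsh]'
```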

Step 1: Clone the AWS CodeCommit Repository
Clone the repository from AWS CodeCommit to your local machine using Git. If you’re using HTTPS, you can do this by running the following command:

git clone https://your-aws-repository-url your-aws-repository

Replace your-aws-repository-url with the URL of your AWS CodeCommit repository.

Change the directory to the repository you’ve just cloned.

Step 2: Add the New Remote Repository

Navigate to the directory of your cloned AWS CodeCommit repository. Then, add the repository URL from the new repository provider.

git remote add <provider-name> <provider-repository-url>

Step 3. Push Your Repository to the New Provider

Push your local repository to the new remote repository. This pushes all branches and tags to your new provider’s repository. The provider name must match the one you used in Step 2.

git push <provider-name> --mirror

I use SSH keys for authentication, so I will first run git remote set-url to point the remote at the SSH URL, and then run the git push command.
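Putting steps 1 through 3 together, the migration boils down to a handful of commands. The URLs and the remote name below are placeholders; substitute your own:

```shell
# Clone the CodeCommit repository and enter it
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-app-repo my-app-repo
cd my-app-repo

# Add the new provider as a remote (here named "github")
git remote add github git@github.com:my-org/my-app-repo.git

# Mirror-push all branches and tags to the new provider
git push github --mirror
```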

Step 4. Verify the Migration

Once the push is complete, verify that all files, branches, and tags have been successfully migrated to the new repository provider. You can do this by browsing your repository online or cloning it to another location and checking it locally.
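One quick way to check, assuming both remotes are still configured locally (the remote name "github" is a placeholder from the earlier step), is to compare the refs on each side:

```shell
# List branches and tags on the old and new remotes; after a mirror push they should match
git ls-remote origin | sort > old-refs.txt
git ls-remote github | sort > new-refs.txt
diff old-refs.txt new-refs.txt && echo "Refs match"
```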

Step 5: Update Remote URLs in Your Local Repository

If you plan to continue working with the migrated repository locally, you may want to update the remote URL to point to the new provider’s repository instead of AWS CodeCommit. You can do this using the following command:

git remote set-url origin <provider-repository-url>

Replace <provider-repository-url> with the URL of your new repository provider’s repository.

Step 6: Update CI/CD Pipelines

If you have CI/CD pipelines set up that interact with your repositories, such as GitLab, GitHub, or AWS CodePipeline, update their configuration to reflect the new repository URL. If you disabled branch protection before the migration, you may want to re-enable it on your main branch.

Step 7: Inform Your Team

If you’re migrating a repository that others are working on, be sure to inform your team about the migration and provide them with the new repository URL.

Step 8: Delete the Old AWS CodeCommit Repository

This action cannot be undone, so verify the migration first. Then navigate back to the AWS CodeCommit console and delete the repository that you have migrated.
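The deletion can also be done from the AWS CLI; the repository name below is a placeholder:

```shell
# Irreversible: permanently deletes the CodeCommit repository
aws codecommit delete-repository --repository-name my-app-repo
```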

Conclusion

By carefully evaluating your options and planning your migration, you can turn this transition into an upgrade for your development processes. Embracing a new tool not only enhances your team’s efficiency but also ensures you stay aligned with current industry standards.

This brings us to the end of this blog.

Thanks for reading and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


Deep Dive into CloudFront: Understanding Internal Caching Mechanisms and Implementing Websites on S3 with Region Failover Part One

Amazon CloudFront, a Content Delivery Network (CDN) provided by AWS, is key in ensuring that content is delivered swiftly to users across the globe. When paired with S3, it’s perfect for hosting fast, secure, and reliable static websites. In this article, we will explore CloudFront’s internal caching mechanisms and discuss how to implement an S3-hosted website with region failover capabilities.

What is CloudFront?

CloudFront is a Content Delivery Network (CDN) service. CloudFront caches content such as HTML, CSS, and even dynamic content in a worldwide network of data centers called edge locations and regional edge caches. It boosts your website’s performance by serving content from locations closer to users all over the world.

How does it work?

CloudFront caches content at edge locations around the world. Caching refers to storing frequently accessed data in high-speed hardware, allowing for faster retrieval; this hardware is known as a cache. However, caches have limited capacity, and it is not practical to store everything in them because the underlying hardware is relatively expensive. We therefore use caching strategically to maximize performance.

Cache Hierarchy in CloudFront

Regional Edge Caches: Before content reaches the edge locations, it may pass through regional edge caches. These are a middle layer that provides additional caching, helping to reduce the load on the origin server and improve cache hit ratios.

Cache Hit: This refers to a situation where the requested data is already present in the cache. It improves performance by avoiding the need to fetch the data from the source such as a disk or server. Cache hits are desirable because they accelerate the retrieval process and contribute to overall system efficiency.

Cache Miss: This occurs when the requested data is not found in the cache. When a cache miss happens, the system needs to fetch the data from the source, which can involve a longer retrieval time and higher latency compared to a cache hit. The data is then stored in the cache for future access, improving subsequent performance if the same data is requested again. Cache misses are inevitable and can happen due to various reasons, such as accessing new data or when the data in the cache has expired.
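You can observe hits and misses yourself by inspecting the X-Cache response header that CloudFront adds; the distribution domain below is a placeholder:

```shell
# The first request for an object is typically a miss; repeating it soon after usually returns a hit
curl -sI https://d1234abcd.cloudfront.net/index.html | grep -i '^x-cache'
```

A miss shows up as "Miss from cloudfront" and a hit as "Hit from cloudfront".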

How CloudFront uses caching to reduce latency and improve performance

When a user requests a website, DNS resolves the request to the CloudFront distribution, which routes the user to the nearest edge location, and the user receives the response from that edge location. However, there are instances when the requested data is not present at the edge location, resulting in a cache miss. In that case, the request is forwarded to the regional edge cache, and if the data is available there, the user receives it from that layer, indicating a cache hit. This process takes slightly longer.

In situations where the data is not present in the regional edge location either, retrieving the data becomes a lengthier process. In such cases, the data needs to be fetched from the origin server, which, in our case, is the S3 bucket. This additional step of fetching the data from the origin server can introduce latency and increase the overall response time for the user.

CloudFront origin failover

For high-availability applications where downtime is not an option, CloudFront origin failover ensures that your content remains accessible even if the primary origin server becomes unavailable. By setting up multiple origins (like two S3 buckets in different regions) and configuring CloudFront to switch to a backup origin when the primary one fails, we can maintain uninterrupted service for users, enhancing our website’s reliability and resilience.

For origin failover to work, we create an origin group with two origins: a primary and a secondary. If the primary origin is unavailable or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin.

To set up origin failover, you must have a distribution with at least two origins. Next, you create an origin group for your distribution that includes two origins, setting one as the primary. Finally, you create or update a cache behavior to use the origin group. We will demonstrate this with a hands-on in the second part of this blog.

Thanks for reading and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


Enhancing Cloud Security with AWS Security Hub

Introduction

In the era of cloud computing, security remains paramount for organizations around the world. With the rise of sophisticated cyber threats, organizations must adopt robust security measures to safeguard their data and infrastructure. AWS Security Hub emerges as a comprehensive solution to these challenges, providing a centralized platform for managing security across the AWS cloud.

What is AWS Security Hub?

AWS Security Hub provides you with a comprehensive view of your security state. It provides a centralized, aggregated, and prioritized overview of security findings and compliance status in a standard format for a single AWS account and multiple AWS accounts. It helps you analyze your security trends and identify the highest-priority security issues.

Key Features of AWS Security Hub

  • Centralized security monitoring
  • Continuous security assessment
  • Prioritized alerting
  • Custom insights and compliance checks
  • Integration with third-party security tools
  • Automation
  • Security scores and summary dashboards

Benefits of AWS Security Hub

  • Simplified security operations: It provides a centralized view, simplifying security operations, and enabling faster response to threats.
  • Enhanced threat visibility: By integrating with various AWS security services and third-party tools, it provides a wide range of security insights, ensuring comprehensive visibility into potential threats and vulnerabilities.
  • Proactive risk mitigation: The continuous and automated compliance checks of AWS Security Hub allow organizations to proactively identify and remediate security gaps, reducing the risk of breaches, data leaks, and compliance violations.
  • Simplified compliance management: AWS Security Hub simplifies compliance management by aligning with industry-standard frameworks and providing pre-built compliance checks. It simplifies reporting and audits and ensures compliance with regulatory requirements.
  • Efficient collaboration: AWS Security Hub enables seamless collaboration between security teams by providing a centralized and shared view of security findings, allowing them to work together on analysis, remediation, and incident response.

Demo: How to Enable AWS Security Hub

Sign in to the Management Console and navigate to the Security Hub console. Then click Go to Security Hub.

Before you can enable Security Hub, you must first enable recording for the relevant resources in AWS Config.

Then select the relevant Recording strategy and Recording frequency as per your requirements.

Configure Override settings to override the recording frequency for specific resource types or to exclude specific resource types from recording, and create a new IAM role or select an existing IAM role for AWS Config under Data governance.

Remember, AWS Config needs an S3 bucket to store configuration history and configuration snapshots. Configure the S3 bucket details, then click Next.

AWS Config Managed Rules provide a set of predefined rules that you can use to evaluate the compliance of your AWS resources according to best practices and security standards. Select the AWS-managed rules as per your requirements and click on Next.

Review AWS Config details and click on Confirm to finish the AWS Config setup.

Select the Security standards you require from the built-in security standards, then click Enable Security Hub to finish the setup.
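For scripted setups, the same enablement can be done with the AWS CLI, assuming AWS Config recording is already turned on:

```shell
# Enable Security Hub with the default security standards
aws securityhub enable-security-hub --enable-default-standards

# Confirm which standards are now enabled
aws securityhub get-enabled-standards
```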

Once setup is complete, you’ll be directed to the Security Hub dashboard. Here, you can access a unified view of security findings, compliance status, and actionable insights across your AWS accounts. Explore the dashboard in detail and familiarize yourself with the available features and navigation options.

Once you enable AWS Security Hub, it will take some time to complete the initial analysis and for the results to appear on the dashboard. This is because Security Hub needs to scan your entire AWS environment to identify all the resources relevant to the standard.

After the initial analysis is done, AWS Security Hub will continue to scan your AWS environment regularly to identify any new resources or modifications to existing resources. The results will be posted on the dashboard in real time. You can then check the findings and prioritize the remediation of the threats/vulnerabilities detected.

Below are some sample reports from the AWS Security Hub dashboard.

Security score from AWS Security Hub summary.

Findings from all linked Regions are visible from the aggregation Region

Track New findings over time by severity and the provider, and see the top resources at risk across multiple resource types.

Security score for specific security standards

Conclusion

AWS Security Hub is an essential component in securing AWS cloud infrastructure by providing a comprehensive and centralized view of security posture. As the cloud landscape evolves, AWS Security Hub remains a pivotal tool for enhancing cloud security posture, enabling organizations to proactively identify and mitigate security risks.

This brings us to the end of this blog. Remember to clean up the resources you created to avoid unnecessary charges.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


Unveiling the Power of AWS Systems Manager: Simplifying Management and Automation.


Managing and maintaining a fleet of virtual machines and services in the era of cloud computing can be a daunting task. This is where AWS Systems Manager comes in: a powerful suite of tools designed to simplify operational tasks, automate workflows, and enhance security across your AWS infrastructure. In this blog article, we will delve into the capabilities and benefits of AWS Systems Manager.

What is AWS Systems Manager?

According to AWS documentation, AWS Systems Manager is the operations hub for your AWS applications and resources and a secure end-to-end management solution for hybrid and multicloud environments that enables secure operations at scale. AWS Systems Manager (SSM) is an agent-based service for managing servers on any infrastructure: AWS, on-premises, and other clouds.

SSM Agent

The AWS Systems Manager Agent (SSM Agent) is Amazon software that runs on Amazon EC2 instances, edge devices, and on-premises servers and virtual machines (VMs). Systems Manager can update, manage, and configure these resources through the SSM Agent. The agent receives requests from the Systems Manager service in the AWS Cloud and executes them as specified in the request. The SSM Agent then uses the Amazon Message Delivery Service to send status and execution information back to the Systems Manager service.

AWS Systems Manager Features

Automation

With SSM Automation we have something called a document, which defines the actions to perform and is written in YAML or JSON. For instance, we can have a document that creates a snapshot of an RDS database. The documents are fed into SSM Automation, which then automates IT operations and management tasks across your AWS resources.
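As an illustration, an Automation document can be started from the CLI. AWS-RestartEC2Instance is one of the AWS-managed documents; the instance ID below is a placeholder:

```shell
# Start an automation execution against a specific instance
aws ssm start-automation-execution \
  --document-name "AWS-RestartEC2Instance" \
  --parameters "InstanceId=i-0123456789abcdef0"
```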

Run Commands

Run Command is similar in that it also uses documents, which include things such as commands, automation steps, and packages. For example, there is a Run Command document that lists missing Microsoft Windows updates, so you can find out what they are and patch them.
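As a sketch, a minimal Run Command invocation on a Linux instance might look like this; the instance ID is a placeholder:

```shell
# Run a shell command on a managed instance via the AWS-RunShellScript document
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
  --parameters 'commands=["uptime"]'
```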

Inventory

This capability gives you an inventory of the resources that you are managing. Once the information is collected, we can gather all that data, visualize it, and then drill down into the various components of the inventory.

Patch Manager

Helps you select and deploy operating system and software patches across large groups of Amazon EC2 and on-premises instances.

We have something called patch baselines, where we can set rules to auto-approve select categories of patches to be installed, and specify groups of patches that override these rules and are automatically approved or rejected.

We can also specify maintenance windows for patches so that they are only applied during predefined times.

Patch Manager helps ensure that your software systems are up to date and meet any compliance policies you might have in your organization.

SSM also helps you scan your managed instances for patch compliance and configuration inconsistencies.

Session Manager

Allows you to connect to a command line on your instances, enabling secure management of instances at scale without logging in to your servers. It replaces the need for bastion hosts, SSH, or remote PowerShell.

This means you don’t need to open the ports these protocols typically require. It also integrates with IAM for granular permissions, and all actions taken can be seen in AWS CloudTrail. You can store your session logs in Amazon S3 and send output to Amazon CloudWatch Logs as well.

For this to work, the EC2 instance needs IAM permissions to access SSM, S3, and CloudWatch Logs.
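With the agent and IAM permissions in place, opening a session is a single CLI call (this also requires the Session Manager plugin for the AWS CLI; the instance ID is a placeholder):

```shell
# Open an interactive shell session without SSH or open inbound ports
aws ssm start-session --target i-0123456789abcdef0
```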

Parameter Store

This is a service that allows you to store configuration data and secrets, such as passwords, database connection strings, and license codes. Data can be stored in plain text or ciphertext.
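For example, a secret can be stored encrypted as a SecureString and read back decrypted; the parameter name and value below are placeholders:

```shell
# Store a database password encrypted with the default AWS-managed KMS key
aws ssm put-parameter \
  --name "/myapp/db-password" \
  --type SecureString \
  --value "s3cr3t-value"

# Retrieve and decrypt it
aws ssm get-parameter --name "/myapp/db-password" --with-decryption \
  --query 'Parameter.Value' --output text
```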

How does Systems Manager work?

Let’s understand this with a general example of a Systems Manager process flow.

  1. Access Systems Manager – The AWS Console provides access to Systems Manager. You can also use the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs to manage resources programmatically. You can use Systems Manager to configure, schedule, automate, and execute operations on your AWS resources and managed nodes.
  2. Choose a Systems Manager capability – Systems Manager includes more than two dozen capabilities to help you perform actions on your resources. Only a handful of the features that administrators employ to configure and manage their resources are shown in the illustration.
  3. Verification and processing – Systems Manager verifies configurations, including permissions, and sends requests to the AWS Systems Manager Agent (SSM Agent) running on the instances, edge devices, or servers and VMs in your hybrid environment. SSM Agent then applies the specified configuration changes.
  4. Reporting – SSM Agent reports the status of the configuration changes and actions to the user, to Systems Manager in the AWS Cloud, to Systems Manager operations management capabilities, and to other AWS services, if configured.
  5. Systems Manager operations management capabilities – In response to events or issues with your resources, operations management capabilities such as Explorer, OpsCenter, and Incident Manager aggregate operations data or create artifacts such as operational work items (OpsItems) and incidents, if enabled. These capabilities can help you investigate and troubleshoot issues.

This brings us to the end of this blog, thanks for reading and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


Slash AWS Expenses: Automate EC2 Idle Instance Shutdown with CloudWatch Alarms.


Introduction

Effective management of cloud resources is important for anyone using cloud services, especially when it comes to controlling costs. A common issue is forgetting to stop EC2 instances used for purposes such as development, testing, and temporary work, which can lead to unexpectedly high costs.

There are several scenarios in which you might want to automatically stop or terminate your instance. For example, you might have instances dedicated to batch payroll processing jobs or scientific computing tasks that run for some time and then complete their work. Rather than letting those instances sit idle (and accrue charges), you can stop or terminate them, which helps you to save money.

Forgetting to stop an EC2 instance used for brief testing can lead to unnecessary charges. To solve this, create a CloudWatch alarm to automatically shut down the instance after 1 hour of inactivity, ensuring you only pay for what you use. In this article, I’ll share how to set up this solution using the AWS Management Console.

CloudWatch Alarm

Amazon CloudWatch is a monitoring service for AWS. It serves as a centralized repository for metrics and logs that can be collected from AWS services, custom applications, and on-premises applications. One of its important features is CloudWatch Alarms, which allows you to configure alarms based on the collected data.

A CloudWatch alarm watches the value of a single metric, either simple or composite, over a period you specify, and launches the actions that you define once the metric crosses a threshold that you set.

Key Components of CloudWatch Alarms

  • Metric: Performance data that you monitor over time.
  • Threshold: The value against which the metric data is evaluated.
  • Period (in seconds): The frequency at which the value of the metric is collected.
  • Statistic: How the metric data is aggregated over each period. Common statistics include Average, Sum, Minimum, and Maximum.
  • Evaluation Periods: The number of recent periods considered when evaluating the state of the alarm, based on the metric values during those periods.
  • Datapoints to Alarm: The number of evaluation periods during which the metric must breach the threshold to trigger the alarm.
  • Alarm Actions: Actions taken when the alarm state changes. These can include sending notifications via Amazon SNS, and stopping, terminating, or rebooting an EC2 instance.
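These components come together in a single put-metric-alarm CLI call, an alternative to the console walkthrough below. The instance ID, Region, and SNS topic ARN are placeholders:

```shell
# Stop the instance (and notify via SNS) after 1 hour of average CPU below 5%
aws cloudwatch put-metric-alarm \
  --alarm-name "stop-idle-ec2" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions "Name=InstanceId,Value=i-0123456789abcdef0" \
  --statistic Average \
  --period 3600 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator LessThanThreshold \
  --treat-missing-data missing \
  --alarm-actions "arn:aws:automate:us-east-1:ec2:stop" "arn:aws:sns:us-east-1:123456789012:my-topic"
```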

Setting Up a CloudWatch Alarm to Automatically Stop Inactive Instances.

Solution with Console

Open the CloudWatch console. In the navigation pane, choose Alarms, then All alarms. Then choose Create alarm.

Choose Select metric.

For AWS namespaces, choose EC2.

Choose Per-Instance Metrics.

Select the check box in the row with the correct instance and the CPUUtilization metric, then choose Select metric.

For the statistic, choose Average. Choose a period (for example, 1 Hour).

For the threshold type, select Static, then select Lower than the threshold. Enter the threshold value and the datapoints to alarm, select Treat missing data as missing, and then click Next.

The first action is to send a notification to an SNS topic with an email subscription. This ensures that you will be notified when the alarm stops the instance. You can create the SNS topic at this step, or you can reference an existing one. In my case, I had already created an SNS topic.

The second action is to stop the EC2 instance. Under the alarm state trigger, select In alarm, then select Stop this instance, and click Next.

Provide a name for the alarm, and you can also add a description then click next.

Review a summary of all your configurations. If everything is correct, confirm the alarm creation.

The alarm is successfully created, and we can see that the alarm state is OK.

You can either wait for the alarm to enter the ALARM state on its own, or use the AWS CLI to set the alarm state manually for testing.
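For testing, the alarm state can be forced with the AWS CLI; the alarm name below is a placeholder:

```shell
# Temporarily force the alarm into the ALARM state to trigger its actions
aws cloudwatch set-alarm-state \
  --alarm-name "stop-idle-ec2" \
  --state-value ALARM \
  --state-reason "Testing the stop action"
```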

Our alarm has gone into the ALARM state, and if you check the state of the EC2 instance, we can see that our objective has been achieved: the instance has been stopped.

Additionally, a notification has also been sent to my email via SNS.

This brings us to the end of this demo. Remember to clean up. Thanks for reading, and stay tuned for more.

Conclusion

Automating idle EC2 instance shutdown with CloudWatch Alarms cuts AWS costs and ensures efficient resource use, preventing unnecessary charges and optimizing cloud spending.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!