Accend Networks San Francisco Bay Area Full Service IT Consulting Company


Cisco Umbrella Monitoring and Logging Best Practices

How to: Validate Cisco Umbrella Configuration
Cisco Umbrella offers a range of URLs to validate and confirm the successful configuration of Umbrella on your network. These URLs enable you to perform various tests to confirm the functionality of Umbrella's DNS resolution, security settings, content filtering, and Intelligent Proxy feature. An additional test for the Intelligent Proxy follows the main tests below.

Umbrella/OpenDNS Test URLs

The first step in using Umbrella is to point your DNS addresses to Umbrella's anycast IP addresses (208.67.222.222 and 208.67.220.220).

Once you've done that, test whether you are using Umbrella/OpenDNS for DNS resolution by visiting:
http://welcome.opendns.com

If you've correctly configured the DNS settings on your router, computer, or mobile device to use Umbrella, the page will confirm it. If not, check the settings on your device again to ensure it's correctly configured.
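Before running the browser tests, you can do a quick local sanity check that your resolver configuration lists Umbrella's anycast addresses. This is a sketch against a sample file; on a real Linux host you would point it at /etc/resolv.conf (the file path and sample contents here are illustrative).

```shell
# Quick local check: does a resolv.conf-style file list Umbrella's
# anycast resolvers (208.67.222.222 / 208.67.220.220)?
uses_umbrella() {
  grep -Eq '^nameserver[[:space:]]+208\.67\.(222\.222|220\.220)[[:space:]]*$' "$1"
}

# Sample file standing in for /etc/resolv.conf:
cat > /tmp/sample-resolv.conf <<'EOF'
nameserver 208.67.222.222
nameserver 208.67.220.220
EOF

if uses_umbrella /tmp/sample-resolv.conf; then
  echo "Umbrella resolvers configured"
else
  echo "Umbrella resolvers NOT configured"
fi
```

Note that this only checks the local resolver list; upstream devices (a router or DHCP-assigned resolver) can still override where queries actually go, which is why the welcome page test above remains the authoritative check.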

To test the Security Settings of your configuration, we recommend using one of the following test sites, depending on what you want to test.

All of the test sites below are blocked with the default Umbrella Security Settings.

To test blocking the Security setting for Phishing:

http://www.internetbadguys.com

To test blocking the Security Setting for Malware:

http://www.examplemalwaredomain.com

or

http://malware.opendns.com/

To test blocking the Security Setting for Command and Control Callback:

http://www.examplebotnetdomain.com

An Umbrella block page should appear if you are correctly configured. With Security Settings, each of the block pages will vary based on your settings and could include custom block pages.

If this page appears, check your settings, including the order of policies and which identity you are appearing as in the logs.
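From the command line, you can run the same security tests by resolving a test domain directly against Umbrella and checking whether the answer lands in the block-page range. The 146.112.61.104/29 range used below is a commonly cited Umbrella block-page range — treat it as an assumption and verify it against your own deployment.

```shell
# Resolve a test domain through Umbrella and inspect the answer.
# (The dig query needs network access, so it is shown for reference:)
#   dig +short examplemalwaredomain.com @208.67.222.222

# Classify an answer IP: 146.112.61.104/29 is assumed to be the
# Umbrella block-page range -- verify this for your own deployment.
is_block_page_ip() {
  case "$1" in
    146.112.61.10[4-9]|146.112.61.11[01]) echo "blocked" ;;
    *) echo "not blocked" ;;
  esac
}

is_block_page_ip 146.112.61.104   # a block-page answer
is_block_page_ip 93.184.216.34    # an ordinary answer
```

If a test domain resolves to an ordinary address instead of the block-page range, the relevant security setting is not being applied to the identity you are testing from.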

To test Content Settings for your configuration, we recommend using the following test site to test blocking pornography sites. However, not every individual Content Setting has an Umbrella block page for it.

Instead, if you have created your own block page (or added one to a policy) and applied it to the policy with a blocked Content Setting, you should see that block page appear.

To test blocking for pornographic websites:

http://www.exampleadultsite.com

An Umbrella block page should appear if you are correctly configured. With Content Settings, each of the block pages will vary based on your settings and could include custom block pages.

If this page appears, check your settings, including the order of policies and which identity you are appearing as in the logs.


Additional Test: Intelligent Proxy

To validate the Intelligent Proxy feature:

  • Enable the Intelligent Proxy policy for an identity, such as your laptop or mobile device.
  • Visit http://proxy.opendnstest.com/ and follow the instructions to test image blocking and website blocking using the Intelligent Proxy.
  • Ensure that the identity you’re using has the Intelligent Proxy enabled in the applicable policy.

If any test results differ from the expected outcomes, further troubleshooting may be necessary. Consider reaching out to your ISP to confirm compatibility with third-party DNS services like Umbrella’s global DNS or Google DNS.

By following these steps, you can effectively validate your Cisco Umbrella configuration and ensure optimal performance of your network security measures.

How to Monitor Umbrella Service Health and System Status

Monitoring Cisco Umbrella’s health and status is key for network security. Bookmark system status pages and subscribe to the Cisco Umbrella Service Status page for notifications. Stay informed with service updates, notifications, and announcements. Regularly check the “Message Center” on the Umbrella Dashboard for alerts.

  1. Bookmark System Status Pages:
    • Keep the Umbrella system status pages bookmarked so you can check them quickly.
  2. Subscribe to Service Status Updates:
    • Subscribe to the Cisco Umbrella Service Status page at https://146.112.59.2/#/ to receive notifications regarding Service Degradations, Outages, Maintenance, and Events.
  3. Monitor the Cisco Umbrella Dashboard:
    • Periodically check the Cisco Umbrella Dashboard's "Message Center" for product alerts and notifications.

Following these steps will help you stay informed about the health and status of your Cisco Umbrella service, ensuring timely action and awareness of any potential issues.

Network Registration:

Ensure all IP addresses and CIDR ranges associated with your organization are registered with Umbrella. For more information, refer to https://docs.umbrella.com/product/umbrella/protect-your-network/.

Logging:

Umbrella retains detailed logs for 30 days before converting them into aggregated report data. To preserve detailed data beyond 30 days, configure an Amazon S3 bucket for data export at “Settings -> Log Management”.

How to Contact and Work with the Umbrella Support Team:

  1. Submit a Support Request:
  2. Telephone Support:
    • If you have purchased telephone support from Cisco, you will see a telephone icon at the top right-hand corner of the Umbrella dashboard screen.
    • Clicking the telephone icon will display the telephone number for Support.
  3. Provide Detailed Information:
    • When contacting support, provide as much detail as possible about your issue or question.
  4. Use the Diagnostic Tool:

By following these steps, you can effectively contact and work with the Umbrella support team to resolve any issues or questions you may have regarding the Umbrella service.

Feel free to reach out to us if you have any questions at info@accendnetworks.com and we’ll be glad to assist you.

Happy DNS Security!


Optimizing AWS ECR Costs: Effective Use of Lifecycle Policies

Amazon Elastic Container Registry (ECR) is a highly secure, scalable, and reliable managed Docker registry service from AWS that simplifies your development and deployment workflows. However, as your container usage increases, so do the costs associated with storing container images. One effective way to manage and reduce these costs is to implement ECR lifecycle policies. In this article, we'll explore what an ECR lifecycle policy is, how it works, and how to use it to optimize your ECR costs.

What is AWS ECR?

Amazon ECR is an AWS-managed Container image registry that is secure, scalable, and reliable. We can create public and private repositories.

What is a lifecycle policy?

A lifecycle policy consists of one or more sets of rules where each rule defines the action that needs to be taken on an ECR repository.

With the help of this lifecycle policy, we can automate the cleanup of expired application images in our ECR repository based on age or count.

Why use a lifecycle policy?

Cost Reduction: By automatically deleting old and unused images, you can significantly reduce your storage costs.

 

Improved Repository Management: Lifecycle policies help in maintaining a clean and organized repository, making it easier to manage and locate necessary images.

Enhanced Security: Regularly deleting old images can reduce the attack surface, thereby enhancing security.

 

Automated Management: Lifecycle policies automate the image deletion process, reducing the manual effort required to manage the repository.

Implementation

Log in to the management console and in the search box, type ECR then select Elastic Container Registry under services.

On the left side of the ECR UI, select repositories then click your repo. I had already created a repository called ecr-repo, as a prerequisite for this blog.

 

On the left side of the repository UI, select life cycle policy. Then click Create Rule.

Specify the following details for each lifecycle policy rule.

 

For Rule priority, type a number for the rule priority. The rule priority determines in what order the lifecycle policy rules are applied.

For Rule description, type a description for the lifecycle policy rule.

 

For Image status, choose Tagged (wildcard matching), Tagged (prefix matching), Untagged, or Any.

 

Image status options

Here is the explanation for each of these image statuses:

 

Tagged (wildcard matching)

 

Here we specify a comma-separated list of image tag patterns that may contain wildcards (*) on which to take action with your lifecycle policy.

For example, if our images are tagged as prod, prod1, prod2, and so on then you can use the tag pattern as prod* to specify all the prod images.

 

Note: If you specify multiple tag patterns, only images matching all of the patterns are selected.

For example, if we specify the tag pattern list prod*, prod*web, then images tagged prod1web and prod2web will be selected, while images tagged prod1, prod2, and so on will not.
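The AND semantics above can be illustrated with plain shell globbing. This is a sketch of the matching behavior, not ECR's actual matcher: a tag is selected only if it matches every pattern in the list.

```shell
# A tag is "selected" only if it matches ALL of the supplied glob
# patterns, mirroring how ECR treats a multi-entry tag pattern list.
matches_all() {
  tag="$1"; shift
  for pat in "$@"; do
    case "$tag" in
      $pat) ;;                               # matches this pattern, keep going
      *)    echo "not selected"; return 1 ;; # fails one pattern, rejected
    esac
  done
  echo "selected"
}

matches_all prod1web 'prod*' 'prod*web'   # matches both patterns
matches_all prod1    'prod*' 'prod*web'   # matches prod* but not prod*web
```

This is why prod1 is excluded in the example above: it satisfies prod* but not prod*web, and one failed pattern is enough to reject the tag.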

 

Tagged (prefix matching).

 

Here we need to specify the comma-separated list of image tag prefixes on which action will be taken by lifecycle policy.

For example, if we have images tagged with prod, prod1, prod2, and so on then specify the tag prefix prod to target all these images.

 

Untagged

 

This is used when we have untagged images in our ECR repository and want to apply a lifecycle policy rule to them. We don't have to specify any matching rule, and this rule has no impact on tagged images.

 

Any

 

This image status is specified when we want to target all the images residing in our repository irrespective of whether they are tagged or not.

This rule must be assigned a higher priority number so that it can be evaluated at the end by the lifecycle policy rule evaluator.

Choose Save.

Objective achieved.
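The same result can be achieved without the console. This sketch writes a lifecycle policy document and validates the JSON locally; the aws call is shown commented since it needs AWS credentials, and the repository name ecr-repo matches the one created earlier (adjust to your own). The two rules are illustrative, covering the untagged and prefix-matching cases discussed above.

```shell
# Write a lifecycle policy document: expire untagged images after 14
# days, and keep only the 10 most recent prod-prefixed images.
cat > /tmp/lifecycle-policy.json <<'EOF'
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    },
    {
      "rulePriority": 2,
      "description": "Keep only the 10 most recent prod images",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": ["prod"],
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}
EOF

# Validate the JSON locally before applying it:
python3 -m json.tool /tmp/lifecycle-policy.json > /dev/null && echo "policy JSON OK"

# Apply it (requires AWS credentials; shown for reference):
# aws ecr put-lifecycle-policy \
#   --repository-name ecr-repo \
#   --lifecycle-policy-text file:///tmp/lifecycle-policy.json
```

Keeping the policy in a file like this also makes it easy to version-control alongside your infrastructure code.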

 

Conclusion

In conclusion, lifecycle policies are a practical way to reduce ECR storage costs. They let you automate the removal of old, unused images, keeping your repository well-organized and cost-effective. Reviewing and adjusting your policies as needs change will help keep your ECR usage optimized in the long run.

 

This brings us to the end of this blog. Remember to clean up your resources.

 

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.


Thank you!


Rotating SSH Keys: Adding or Removing a Public Key on Your AWS EC2 Instance

Managing SSH keys is a crucial aspect of maintaining the security and accessibility of your AWS EC2 instances. Whether adding a new user, granting temporary access, or revoking permissions, understanding how to add or remove public keys is essential. This guide will walk you through the process of adding and removing a public key on your AWS EC2 instance.

When you launch an instance, you can specify a key pair.

If you specify a key pair at launch, when your instance boots for the first time, the public key material is placed on your Linux instance in an entry within ~/.ssh/authorized_keys.

You can change the key pair that is used to access the default system account of your instance by adding a new public key on the instance, or by replacing the public key (deleting the existing public key and adding a new one) on the instance. You can also remove all public keys from an instance.

Reasons for Changing a Public Key

Several reasons might lead to changing the public key of our EC2 instance.

Compromised Key: If someone has a copy of the private key (.pem file) and you want to prevent them from connecting to your instance (for example, if they’ve left your organization), you can delete the public key on the instance and replace it with a new one.

Adding a New User: When a new team member needs access to the instance, you must add their public key.

Key Rotation Policy: As part of your security best practices, regularly rotating keys helps mitigate the risk of key compromise.

Revoking Access: When a user no longer requires access, removing their public key ensures they cannot connect to the instance.

Temporary Access: Granting temporary access to a user for a specific task or duration, after which the key is removed.

Lost Key: If you’ve lost access to your private key, you’ll need to add a new key pair to regain access.

If a user in your organization requires access to the system user using a separate key pair, you can add the public key to your instance.

To achieve this goal, Let’s proceed as follows.

Launch an EC2 instance.

Log in to the AWS management console as an admin user. Search for EC2 in the search bar then select EC2.

Click instances in the left UI of the EC2 dashboard then click Launch instance.

Fill in the instance details, select the Ubuntu image, and stay within the Free Tier with the t2.micro instance type.

Select your key pairs then scroll down.

Leave networking as default and select Create New Security Group, with port 22 open for SSH.

Leave other settings as default, scroll down, review then click launch instance.

Click the launched instance's ID, copy the instance's public IP, and then let's proceed to SSH into our instance.

Type in the following command to ssh into your server.

ssh -i <keyname.pem> ubuntu@<public-ip>

Successfully logged into our server.

Let’s move to the .ssh directory where the authorized_key file is located.

List the contents of the .ssh directory, then cat the contents of authorized_keys. You will see your public key.

To add or remove the public key, this is the file we have to edit.

I have the following keys, in my AWS account.

Retrieve the public key material using the command below.
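A sketch of retrieving the public key material from a private key file with ssh-keygen -y, so it can be pasted into authorized_keys. A throwaway key in /tmp stands in for your .pem here; on your machine you would run ssh-keygen -y against the .pem file itself.

```shell
# Generate a throwaway key to stand in for your downloaded .pem file:
key=$(mktemp -u /tmp/demo-key.XXXXXX)
ssh-keygen -t ed25519 -f "$key" -N '' -q

# Derive the public key line from the private key -- this is the exact
# line you would append to ~/.ssh/authorized_keys on the instance:
ssh-keygen -y -f "$key"
```

The -y flag works the same way against an RSA .pem downloaded from the EC2 console.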

Using a text editor of your choice, open the .ssh/authorized_keys file on the instance. Delete the old public key information, add the new one then save the file.
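The edit itself can also be done non-interactively. This sketch works against a scratch file, with placeholder key material and comment fields; on the instance you would operate on ~/.ssh/authorized_keys instead.

```shell
# Scratch file standing in for ~/.ssh/authorized_keys on the instance:
AK=$(mktemp)
printf '%s\n' 'ssh-ed25519 AAAA...OLD old-user' > "$AK"

# Add the new public key:
printf '%s\n' 'ssh-ed25519 AAAA...NEW new-user' >> "$AK"

# Remove the old key by matching its trailing comment field:
grep -v ' old-user$' "$AK" > "$AK.tmp" && mv "$AK.tmp" "$AK"

cat "$AK"   # only the new key remains
```

Keeping a second SSH session open while you edit the real file is a sensible precaution: if a typo locks out the key you are using, the open session lets you repair it.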

Let’s disconnect from our instance, and test if we can connect back using the new private key file.

Success, we have now logged back to our EC2 instance using the new key pair.

Clean up.

Conclusion

Managing SSH keys on your AWS EC2 instances is a straightforward yet vital task to ensure secure access. By following the steps outlined above, you can easily add or remove public keys, thus maintaining control over who can access your servers.

Thanks for reading and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.


Thank you!


Securing Your S3 Buckets: Preventing Unauthorized Access with CloudFront OAI

In today’s cloud computing world, keeping data safe is crucial. AWS does a great job storing data in its data centers, but it’s up to us to set up who can access it.

Many companies use Amazon S3 buckets. These offer a way to store lots of data through Amazon Web Services (AWS). But making sure your S3 buckets are secure is important. Even a minor error can allow unauthorized access to your private information.

This blog post explores how to prevent unauthorized access to your S3 buckets using CloudFront Origin Access Identity (OAI).

Understanding the Issue

By default, S3 buckets are private. However, when hosting a static frontend website, it’s common to grant public access to your bucket and enable the static web hosting property. Even with security policies applied to your bucket objects, a small mistake or incorrect configuration can lead to unauthorized access to sensitive data.

This is where CloudFront Origin Access Identity (OAI) comes to the rescue!

CloudFront Origin Access Identity

AWS CloudFront is a content delivery network (CDN) service that distributes content globally while providing a security layer. CloudFront OAI is a feature that helps secure your S3 buckets by restricting access to your data to your CloudFront distributions only.

CloudFront OAI does this by updating the bucket policy to only allow access from CloudFront.
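The policy CloudFront writes looks roughly like the sketch below. The bucket name and OAI ID are placeholders; in the walkthrough that follows, the console writes the real values for you when you choose to update the bucket policy.

```shell
# The shape of the bucket policy CloudFront applies for an OAI: only
# the OAI principal may read objects, so all direct access is denied.
cat > /tmp/oai-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
EOF

# Validate the JSON locally:
python3 -m json.tool /tmp/oai-bucket-policy.json > /dev/null && echo "policy JSON OK"
```

Because the bucket otherwise blocks all public access, this single Allow statement is the only read path left, which is exactly the property the rest of this post demonstrates.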

Let’s look into how we can configure CloudFront on top of our S3 bucket.

Setup S3 bucket

Create an S3 bucket with Block all public access enabled. By default, bucket ACLs are disabled. Upload all your front-end static content into the bucket.

Note: If you already have a static frontend hosted on S3, make sure to disable the static website hosting property, since we will leverage CloudFront. Additionally, remove any existing bucket policies and disable public access to your bucket. I already have a bucket with my web files uploaded; if I try accessing my objects directly, I get Access Denied. Now let's see how to leverage CloudFront to securely access our objects.

Log into the AWS management console then in the search bar search for CloudFront then select it.

On the left side of CloudFront UI, select distributions, then click create distributions.

For the Origin domain, choose your S3 bucket; the name will be filled in automatically.

Let’s configure the origin access control settings.

Select Legacy access identities, then use the drop-down to choose an Origin Access Identity; if you don't have one, click Create new OAI. Then select the radio button for Yes, update the bucket policy. This will automatically update your bucket policy.

 

Leave the remaining settings as default.

Keep the default cache behaviour settings.

Functional associations will be kept as default.
Under Web Application Firewall (WAF), enable WAF for additional security if needed; otherwise, select do not enable security protections.

Under settings, add index.html for the default root object and click create distribution.

Our CloudFront distribution has been successfully created, as we can see.

Additionally, we can see our bucket policy updated to only allow access from CloudFront.

Retrieve your CloudFront distribution domain name and verify your website’s availability.

You can also assess the security of your index.html object by accessing its object URL. We can see access denied.

This indicates that your bucket is exposed only to your created CloudFront distribution and cannot be accessed directly, which ensures any unauthorized or direct access to your S3 bucket is denied.

This brings us to the end of this demo. Always make sure you clean resources to avoid surprise bills.

Conclusion

To wrap up, you now know how to use Amazon S3 to host static websites, set up CloudFront to deliver content, and keep sensitive data in your S3 bucket safe with CloudFront Origin Access Identity (OAI). By putting these methods into action, you can stop direct access to your content and make content delivery much faster. This way, you keep your data secure and give your users a smooth and quick experience.

Thanks for reading and stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.


Thank you!


Enhancing AWS Security: Implementing Root Account Login Alerts

When it comes to cloud computing, security is a big deal. Luckily, Amazon Web Services (AWS) has got your back with some awesome tools and services to help keep your infrastructure safe. But it’s still up to you to monitor and protect your AWS accounts. One of the most important things you need to take care of is controlling access to your account’s Root user.

In this article, I’m going to walk you through a step-by-step guide on how to configure AWS services to send you alerts whenever someone logs in with your Root user. By following these instructions, you can beef up your AWS security.

Account Root User

The root user is the account’s primary user with full administrative privileges, similar to the root user in Linux systems.
Root user credentials are the email address and password used during account creation. The root user has full control over the account, and most of its permissions cannot be restricted. For that reason, the root user should not be used for everyday work.

The Root user is like the master key to your AWS kingdom. It has full access to everything in your AWS environment, which means it’s a prime target for bad actors.

Now what do we do instead?

Leverage IAM (Identity and Access Management) to create users for our daily tasks and activities.

Admin User: Use IAM to create a user with administrative privileges for daily administrative tasks instead of using the root user.

Other Users: Use IAM to create individual users for day-to-day tasks. Each IAM user can be assigned specific permissions, ensuring they only have access to the resources they need.

Step 1: Create a CloudTrail trail with CloudWatch Logs enabled.

Let’s Create a Trail in the CloudTrail console.

In the search box, type CloudTrail, then select CloudTrail. On the left side of CloudTrail UI, select trails, then click Create Trail.

Fill in the required details under the general information.

Under the CloudWatch Logs, enable the CloudWatch logs by ticking the check box. Fill in the required details.

Click Next and keep the things as default on the next page.

Click Next Review and Click Create trail.

The Trail is created. Copy the log group name for later use.

Step 2: Create an SNS topic and create a subscription on it

Now, jump to the SNS service.

Under the Topics, click Create Topic.

Fill in the details as shown below.

Keep the remaining as default and click Create topic. At the bottom, Under the subscription, click Create subscription.

Provide the required details, for protocol, select the drop-down button and select email, endpoint key in your email account then Click Create subscription.

You will receive an email at the mentioned email address to confirm the subscription. Make sure you confirm your subscription.

Step 3: Create a Metric filter on the CloudWatch

Now, jump to the CloudWatch. Click on the Log Groups and search for the log group name you have copied in the previous step.

Click the log group name. Click on the Metric filters. Click Create metric filter.

In the Filter pattern, put the following pattern:

{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }

Click Next

Provide a Filter name. Fill in the Metric details.

Click Next and Click Create metric filter.

You can find the created metric filter.
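Before relying on the filter, you can sanity-check what the pattern's logic matches against a sample event. This sketch mimics the filter's three conditions in a small script — a root console sign-in that was not invoked by a service; the sample event is illustrative, not a real CloudTrail record.

```shell
# A sample event shaped like a root console sign-in:
cat > /tmp/root-event.json <<'EOF'
{"userIdentity": {"type": "Root"}, "eventType": "AwsConsoleSignIn"}
EOF

# Reproduce the metric filter's three conditions locally:
#   type == "Root", no invokedBy field, eventType != "AwsServiceEvent"
python3 - <<'EOF'
import json
e = json.load(open("/tmp/root-event.json"))
ui = e.get("userIdentity", {})
matches = (
    ui.get("type") == "Root"
    and "invokedBy" not in ui
    and e.get("eventType") != "AwsServiceEvent"
)
open("/tmp/filter-result", "w").write("match" if matches else "no match")
print("filter result:", "match" if matches else "no match")
EOF
```

The invokedBy exclusion is what keeps AWS services acting on the account's behalf from triggering the alarm; only interactive root activity should match.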

Step 4: Create a CloudWatch Alarm.

Tick the check box at the upper right corner of the metric and click Create alarm.

You will be redirected to the CloudWatch alarm dashboard.
Change the Condition to Greater/Equal and define the threshold value as 1. Click Next.

Under the Configure actions, select the SNS topic you have created in the previous step. Click Next.

Provide a name to the alarm. Click Next.

Review and click Create alarm.

That's it! The alarm is created and its state shows OK.

Let's use the CloudWatch set-alarm-state command to push the alarm into the ALARM state for testing.
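A sketch of that test with the AWS CLI: set-alarm-state forces the alarm into ALARM without waiting for a real root login. The alarm name below is a placeholder for whatever you named yours, and the call requires valid AWS credentials.

```shell
# Force the alarm into ALARM state to verify the SNS email fires.
# "root-login-alarm" is a placeholder -- use your alarm's actual name.
aws cloudwatch set-alarm-state \
  --alarm-name "root-login-alarm" \
  --state-value ALARM \
  --state-reason "Testing the root account login alert"
```

The alarm will return to OK on its next evaluation period, so this is a safe, repeatable way to test the notification path end to end.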

Our alarm has gone into the ALARM state, and checking our email account, we find the notification.

Conclusion:

Setting up alerts for Root account logins adds an important layer of security to your AWS environment. This allows you to respond quickly to potential threats.

With these alerts in place, you can rest assured that you have an additional layer of protection safeguarding your AWS environment. Monitoring Root account activity not only helps prevent unauthorized access but also promotes best practices by encouraging the use of IAM users for everyday operations.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at sales@accendnetworks.com.


Thank you!