The adoption of cloud services as storage solutions has become the new normal for many businesses. The flexible scalability of services such as Amazon Web Services (AWS) gives you complete control over cost and redundancy, with options to suit all budgetary, compliance and disaster recovery requirements, depending on the needs of the business.
We could harp on all day about how a cloud storage solution is essential to any business looking to scale up its digital transformation strategy, and Amazon S3 is the perfect tool to start with.
If you would like to read more about Amazon S3, please read our blog on Top 5 Practices to Secure Data in Amazon S3.
The problem with using such powerful and flexible tools when building or reinforcing your digital environment is that, with so many features and settings to choose from, some can easily be overlooked when architecting your storage solution. Today we would like to talk about Intelligent-Tiering in Amazon S3, a feature that has flown under the radar since its inception in 2018.
What are storage classes in Amazon S3?
You may already be familiar with the three most popular storage classes: S3 Standard, S3 Standard-Infrequent Access and S3 Glacier. You may already have segregated your hot, warm and cold data into these tiers, but if you haven't, it's important to understand what they are before seeing how Intelligent-Tiering can be a huge asset to your environment.
S3 Standard – This is your 'does what it says on the tin' storage class for your hot data – data that might need to be accessed frequently and therefore carries a standard (but more expensive) monthly cost, with full flexibility to access the data on demand without any financial penalty.
S3 Standard-Infrequent Access – This is where you might store your warm data – data you need occasionally, but not often enough to justify the higher cost of S3 Standard. For a significant monthly saving you can store your warm data in S3 Standard-IA and only pay a retrieval premium when the data is accessed.
S3 Glacier – If you have data that you don't plan on retrieving (except in a disaster recovery scenario) and want to keep it safe and secure at minimal cost, then Glacier is perfect for your cold storage. For an extremely low monthly cost, your cold data can sit quite comfortably. The only downside is that if this data does need to be accessed, you will pay the price in retrieval charges.
Many businesses and technicians have already mastered the art of the lifecycle policy: a set of rules that automatically moves data into a lower tier after a set amount of time. Still, data access patterns are rarely that binary, with different kinds of data needing different treatment.
Manual lifecycle policies are often 'catch-all' rules, meaning that data can end up in the wrong class at the wrong time. For example, some backups of applications or servers might need to be accessed more often than standard backups, but because they are grouped together they may be moved into a lower storage class while they still need to be accessed – meaning that your business ends up footing the bill.
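To make the 'catch-all' problem concrete, here is a minimal sketch of such a rule, built as a plain Python dict in the shape that boto3's `put_bucket_lifecycle_configuration` accepts. The rule ID is a hypothetical placeholder, and the empty prefix is what makes it a catch-all:

```python
# A 'catch-all' lifecycle rule: every object, regardless of how often it is
# actually accessed, is demoted on a fixed schedule.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "catch-all-tiering",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},    # empty prefix = applies to everything
            "Transitions": [
                # Move everything to Standard-IA after 30 days...
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # ...and to Glacier after 90, whether or not it is still hot.
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With boto3 installed and credentials configured, you would apply it with:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",  # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_configuration,
#   )
```

A frequently-read backup under this rule is still pushed into Glacier on day 90 and starts incurring retrieval fees – exactly the scenario described above.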
What is Intelligent Tiering in Amazon S3?
Because access patterns for different types of data can be so varied and complex, Intelligent-Tiering addresses this directly to save unnecessary complexity and cost. If you use the S3 Intelligent-Tiering storage class, AWS focuses on ensuring your data is moved into the correct access tier at the right time on a per-object basis, rather than blindly grouping all of your S3 data together.
It works out the access frequency of each object and moves it into the correct tier. For example, if an object is not accessed for 30 consecutive days, it is automatically moved into an infrequent access tier priced like S3 Standard-IA. If that object is then accessed again, it is automatically moved back to the frequent access tier, priced like S3 Standard.
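The decision above can be sketched as a deliberately simplified model. The real service tracks access per object automatically and you never write this logic yourself; the 30-day threshold is the documented boundary between the frequent and infrequent access tiers:

```python
# Simplified model of Intelligent-Tiering's per-object decision: only days
# since last access matter, and any access resets the clock.
def tier_for(days_since_last_access: int) -> str:
    if days_since_last_access >= 30:
        return "INFREQUENT_ACCESS"   # priced like S3 Standard-IA
    return "FREQUENT_ACCESS"         # priced like S3 Standard

print(tier_for(5))    # read recently, stays in the frequent tier
print(tier_for(45))   # untouched for 45 days, infrequent access pricing
print(tier_for(0))    # just accessed, straight back to the frequent tier
```

Unlike a lifecycle policy, there are no retrieval fees when an object moves back up a tier – the movement itself is free and automatic.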
In 2020, AWS also announced optional Archive Access and Deep Archive Access tiers for Intelligent-Tiering, which let you take advantage of the cost savings of the S3 Glacier storage classes as part of Intelligent-Tiering, meaning that your archive data can also be managed as part of this process.
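The archive tiers are opt-in and configured per bucket. Below is a sketch of that configuration, again as a plain dict in the shape boto3's `put_bucket_intelligent_tiering_configuration` accepts; the configuration ID is a hypothetical placeholder and the 90/180-day thresholds are the minimums AWS allows, tunable within documented limits:

```python
# Opt-in archive configuration for Intelligent-Tiering: objects untouched
# for 90 days drop to Archive Access, and after 180 days to Deep Archive
# Access, both priced comparably to the Glacier storage classes.
intelligent_tiering_config = {
    "Id": "archive-after-90-days",   # hypothetical configuration name
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

# With boto3 and credentials configured, this would be applied as:
#   boto3.client("s3").put_bucket_intelligent_tiering_configuration(
#       Bucket="example-bucket",  # hypothetical bucket name
#       Id=intelligent_tiering_config["Id"],
#       IntelligentTieringConfiguration=intelligent_tiering_config,
#   )
```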
Because this feature is automatic once enabled, it removes the need to monitor data access yourself, it removes the reliance on a manual process to categorise data correctly, and it removes the fear that your business is not controlling its costs effectively. This allows true Total Cost of Ownership to be achieved when managing your storage data.
Where do we go from here?
Before you go ahead and move everything into the S3 Intelligent-Tiering model, there are a few cases where S3 Intelligent-Tiering may not suit your business:
- Data which only needs to be stored temporarily (under 30 days), as it would not benefit from being moved.
- Data which is small (under 128KB), as these objects are not automatically moved between tiers.
- Data which is predictable – if your objects' access patterns are in fact very straightforward, you may want to save yourself the per-object monitoring charge of S3 Intelligent-Tiering.
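A quick back-of-the-envelope check shows why the small-object and monitoring caveats matter. The figures below are illustrative us-east-1 prices at the time of writing (always check the AWS pricing page for current numbers): monitoring is charged per object, so for tiny objects it can exceed anything the infrequent tier saves.

```python
# Illustrative prices only – not authoritative, and they vary by region.
STANDARD_PER_GB = 0.023               # $/GB-month, S3 Standard
INFREQUENT_PER_GB = 0.0125            # $/GB-month, infrequent access tier
MONITORING_PER_OBJECT = 0.0025 / 1000 # $/object-month monitoring charge

def monthly_saving(object_size_gb: float) -> float:
    """Net saving per object-month once it sits in the infrequent tier."""
    storage_saving = object_size_gb * (STANDARD_PER_GB - INFREQUENT_PER_GB)
    return storage_saving - MONITORING_PER_OBJECT

print(monthly_saving(1.0) > 0)             # 1 GB object: a clear net saving
print(monthly_saving(100 / 1024**2) > 0)   # 100 KB object: monitoring costs
                                           # more than the tiering saves
```

At these example prices a 100KB object loses money in Intelligent-Tiering, which is exactly why objects under 128KB are excluded from automatic movement.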
However, if you think that your data would benefit from the cost savings and stress-free process of Intelligent-Tiering, the following AWS resources and articles may help:
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
https://aws.amazon.com/blogs/aws/s3-intelligent-tiering-adds-archive-access-tiers/
You can also model S3 Intelligent-Tiering costs in the AWS Pricing Calculator.
Finally, 3Gi Technology is an AWS Advanced Consulting Partner with specialisms in server and storage solutions. We have a team of Solutions Architects and Consultants who are here to help you demystify your digital transformation strategy and work out which AWS services could benefit your plans for growth.