How to Optimize AWS Costs with FinOps

Nesh (Steven Puddephatt), Senior Solutions Engineer @ GlobalDots

Amazon is locked in a race against rising energy and infrastructure costs. Its per-instance pricing has halted its freefall after years of relentless reductions; at the same time, external pressures are threatening a big squeeze on organizations’ budgets. Identifying and understanding AWS spend is a largely analytical exercise – controlling it is wholly cultural. Uniting the two requires a fluency in FinOps that many organizations still need to develop.

What Causes AWS Costs to Increase?

AWS and other hyperscalers drive an immense amount of innovation, with total cloud expenditure now predicted to account for $84.6 billion of corporate spend. Having placed itself at the cornerstone of emerging technologies such as artificial intelligence, AWS grants teams the tools to incorporate its immense compute power into everyday business ops. Yet despite the performance increases and reduced latency on offer, even AWS’ market dominance does not make it immune to external factors.


Heavy investment in infrastructure around Europe sits uncomfortably close to the war in Ukraine, highlighting a dependency on Russian energy supplies. And even after heavy investment in its high-efficiency Graviton3 chip – which Amazon CFO Brian Olsavsky credited with a billion-dollar drop in expenses in the first quarter of 2022 – international tensions surrounding Taiwan’s chip industry still cast a shadow over cost prospects.

With cloud vendors battling increased spend, it’s up to the customer to navigate increasingly complex cost models. With over 160 cloud services on offer, the sheer variety – alongside the nitty-gritty demands of each unique integration – can threaten even the most well-intentioned cost-saving aspirations.

There are three fundamental drivers of cost within AWS: compute, storage, and outbound data transfer. Understanding the impact of each is one of the primary ways that organizations can begin to reclaim control over their cloud costs.

Compute 

The first cost driver within AWS is compute: Amazon Elastic Compute Cloud (EC2), for example, provides secure, resizable compute to developers, offering a variety of virtual hardware across CPU, memory, storage, and network capabilities. Each instance type carries its own cost, which also depends on the geographic region it’s placed within – the cost is proportional to the energy and storage requirements of the associated hardware, alongside the overall demand currently being placed on that datacenter.
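To make this concrete, per-instance pricing can be queried programmatically through the AWS Price List API. The boto3 sketch below is a minimal illustration – the instance type, region, and filter values are assumptions, and the Pricing API itself is only served from a handful of regions such as us-east-1:

```python
import json
import boto3

# The Price List API is only available in select regions (e.g. us-east-1).
pricing = boto3.client("pricing", region_name="us-east-1")

# Illustrative filters: an on-demand Linux m5.large in N. Virginia.
resp = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m5.large"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
    MaxResults=1,
)

# Each PriceList entry is a JSON document; dig out the on-demand USD rate.
product = json.loads(resp["PriceList"][0])
for term in product["terms"]["OnDemand"].values():
    for dim in term["priceDimensions"].values():
        print(dim["description"], dim["pricePerUnit"]["USD"])
```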

While the wide selection of instance types gives DevOps unmatched flexibility, it also provides the first clue as to how AWS costs can increase. With DevOps granted direct access to cloud compute power – and therefore cost – a siloed development team can easily disregard budget constraints in favor of higher availability or power. With no guardrails for cloud spending, this newfound autonomy can result in unexpected and unexplainable costs. Compute is one area that quickly accrues extra cost, partly thanks to the speed with which AWS introduces new product families – building up a technical debt of outdated compute instances – and partly thanks to the sheer speed with which DevOps can spin up and release new projects.
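One lightweight guardrail is an AWS Budgets alert that warns the team before spend runs away. A minimal boto3 sketch, assuming illustrative values – the account ID, budget limit, and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# Placeholder account ID, limit, and notification address.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-compute-guardrail",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Email the FinOps alias once actual spend crosses 80% of the limit.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```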

Storage

With compute power outsourced, many organizations choose to make further use of AWS’ data storage. Amazon Simple Storage Service (S3) grants endpoints the ability to access stored files from any location. Now an established backbone of cloud-native applications and AI data lakes, S3 lends incredible configurability to data storage. One of the more intuitive cost drivers within S3 is the size of the files being stored; this facet of cloud cost is often simply assumed to be a cost of doing business.

However, many DevOps teams overlook or underestimate the importance of access frequency. AWS offers storage classes based on how often data is accessed – S3 Standard’s low latency and high throughput make it perfect for rapid-access fields such as content distribution, gaming applications, and big data analytics. At the other end of the range sits Glacier Deep Archive – built for low-cost storage of data that is very rarely accessed; for example, backups held against the event of widespread compromise.

Lifecycle policies can be implemented to automatically transition objects between storage classes, and S3 Intelligent-Tiering will even do this for you. The assumption that storage classes are simply set-and-forget can be a major driver of cost.
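As a sketch of what such a policy can look like, the boto3 rule below ages objects down the tiers over time – the bucket name, prefix, and transition windows are assumptions for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Illustrative rule: age objects under "logs/" down the storage tiers.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```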

Outbound Data

With storage and compute costs accounted for, the last major consideration is AWS data transfer fees. One of the more overlooked areas of cost impact, the quantity of data being transferred is as important as its destination. Keep in mind that it is generally only outbound transfers that incur fees – inbound data is free.

The first ‘layer’ of cost impact is based on whether your data is going from AWS out to the public internet or to another Amazon-based workload. Compute is more than just remote servers: once the remote server has processed a request, each byte of the response must be sent back to the user’s device over the public internet. As the standard AWS compute service, outbound EC2 pricing sheds some light on the sheer variety of cost. Up to one gigabyte can be transferred for free every month. From there, the next 9.9 terabytes in that month are charged at $0.108 per GB. Given that the average AWS customer manages a total of 883 terabytes of data, today’s enterprise demands are light-years beyond even the most inexpensive tier. To reflect this, AWS offers economy-of-scale tiered pricing, with each successive tier offering a better price per GB of transferred data. This is yet another factor in the sheer unpredictability of AWS cost: even if you rely on one service in one region, changes in demand mean that your cloud spend is subject to constant fluctuation.
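To make the tiering concrete, here is a back-of-the-envelope estimate. The first two bands reflect the figures quoted above; the rate beyond 10 TB is a hypothetical stand-in, since the exact schedule varies by region and changes over time:

```python
# Tiered egress estimate. Band sizes are in GB; the final rate is a
# hypothetical placeholder for the cheaper high-volume tiers.
TIERS = [
    (1, 0.0),               # first 1 GB each month is free
    (9.9 * 1024, 0.108),    # next 9.9 TB at $0.108/GB (quoted above)
    (float("inf"), 0.085),  # hypothetical lower rate beyond that
]

def egress_cost(gb: float) -> float:
    cost = 0.0
    for band, rate in TIERS:
        taken = min(gb, band)
        cost += taken * rate
        gb -= taken
        if gb <= 0:
            break
    return cost

print(f"50 TB out: ${egress_cost(50 * 1024):,.2f}")  # roughly $4,585
```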

As we zoom out and look at services within their surrounding networks, the view of data transfer costs becomes even more cluttered. Consider the architectural best practice of setting up multiple availability zones: with a primary RDS database in place, both the ingress and egress of data to the second availability zone is charged. This means that – should your organization suddenly have to rely on that secondary availability zone – getting data to your consumers could cost a great deal more.

What is Cost Optimization in AWS?

Cost optimization starts with visibility. In this way, controlling AWS cost differs significantly from the traditional approach used for physical servers and on-premises software licenses. The traditional spending model can be neatly segmented into a few major roles: finance teams authorize budgets; procurement teams oversee vendor relationships; and IT teams handle installation and provisioning. In contrast, cloud computing has enabled virtually any end user from any business sector to independently and rapidly acquire technology resources. As IT and development teams are pushed to the end of this procurement chain, there’s very little incentive left to optimize resources that have already been acquired.

AWS cost optimization recognizes that this approach leaves IT teams chronically unprepared. One on-the-ground consequence is the popularity of on-demand instances – the highest-cost form of resource fulfillment. If siloed teams are the barrier to individual cost responsibility, then democratized access to real-time cost information is a major key. While efficient DevOps teams help the organization remain agile and competitive, a solid foundation of Financial DevOps (FinOps) is vital as cloud service adoption expands. Without it, an aspiring cost optimization project risks slipping back into cloud confusion.

AWS Cost Optimization Best Practices

Since its inception in 2006, Amazon Web Services (AWS) has cut its compute prices over 67 times. This may come as a surprise to more than a few customers, as real-world spend has increased astronomically. The reason is that cloud usage has grown to the point where the cost reductions are negated – expenses have steadily eaten up ever-greater swathes of revenue. Regaining control over AWS costs demands visibility, resource reservation, and a new level of team cohesion.

To effectively manage and optimize AWS costs, consider the following AWS cost optimization best practices and strategies:

Decommission Intelligently

First and foremost among cost management best practices is a proactive approach to provisioning and decommissioning resources. Regularly identify and remove unneeded applications, instances, EBS volumes, snapshots, and unattached Elastic IP addresses.

Making this a challenge are fluctuating monthly expenditures and the speed with which new environments are created. Discovering unused resources demands continuous refinement and a commitment to discipline. Amazon CloudWatch offers one way to identify unused DynamoDB tables: measure active reads and writes on each table and its global secondary indexes (GSIs). Identifying which tables have seen no activity over the last 30 days gives you a decent idea of what you can axe. Decommissioning them can place new strain on lean teams, so make sure to set reasonable KPIs.
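A rough boto3 sketch of that check – the 30-day window and zero-activity threshold mirror the heuristic above, and the scan only covers table-level metrics on the first page of tables:

```python
from datetime import datetime, timedelta, timezone

import boto3

dynamodb = boto3.client("dynamodb")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

def monthly_activity(table: str) -> float:
    """Sum consumed read and write capacity for a table over the window."""
    total = 0.0
    for metric in ("ConsumedReadCapacityUnits", "ConsumedWriteCapacityUnits"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/DynamoDB",
            MetricName=metric,
            # GSIs report separately; add a GlobalSecondaryIndexName
            # dimension to cover them as well.
            Dimensions=[{"Name": "TableName", "Value": table}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # one datapoint per day
            Statistics=["Sum"],
        )
        total += sum(dp["Sum"] for dp in stats["Datapoints"])
    return total

# list_tables returns up to 100 names per call; paginate for larger accounts.
for table in dynamodb.list_tables()["TableNames"]:
    if monthly_activity(table) == 0:
        print(f"{table}: no reads or writes in 30 days - decommission candidate")
```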

Support Devs to Decommission

Further decommissioning can be supported at the development level – for instance, orphaned EBS volumes can be avoided by selecting the Delete on Termination box when EC2 instances are created. Implementing guidelines at the development level helps prevent the need for decommissioning entirely.
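Programmatically, the same flag lives in the instance’s block device mapping. A minimal boto3 sketch – the AMI ID, device name, and volume size are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance whose root EBS volume is deleted along with the
# instance, so no orphaned volume lingers on the bill after termination.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # root device name varies by AMI
            "Ebs": {
                "DeleteOnTermination": True,
                "VolumeSize": 20,
                "VolumeType": "gp3",
            },
        }
    ],
)
```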

Be Proactive About Performance Tuning

FinOps isn’t all about reducing cloud resources – it also covers how you issue more. Waiting until application performance drops is a surefire way to overspend on cloud resources. Instead, use this as an opportunity to implement robust tracking tools – Cost Explorer’s built-in forecasting can be a great first step to seeing how much your application will need.

This allows you to be judicious with the resources you add, setting up a solid foundation for the following best practices. 
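Cost Explorer’s forecast is also exposed through its API, which makes it easy to feed into dashboards or alerts. A minimal boto3 sketch, with an illustrative date range:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Forecast next month's unblended spend; the dates are illustrative.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2024-07-01", "End": "2024-08-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)

total = forecast["Total"]
print(f"Forecasted spend: {total['Amount']} {total['Unit']}")
```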

Reserve to Save

Now that you know how many resources are truly necessary, Reserved Instances are a cost management best practice with immediate effect.

For specific services such as Amazon EC2 and Amazon RDS, Reserved Instances can achieve savings of up to 75% compared to the equivalent on-demand capacity. RIs are offered in three payment varieties: all upfront, partial upfront, and no upfront.

Purchasing Reserved Instances involves a trade-off between upfront payment and discount. A larger initial payment results in a bigger discount: for maximum savings, paying entirely upfront grants the highest discount, while partial-upfront Reserved Instances provide lesser discounts but require less initial expenditure.
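A toy comparison of the three options makes the trade-off visible. Every price below is hypothetical – real RI rates depend on instance family, region, and term:

```python
HOURS_PER_YEAR = 8_760

# Hypothetical one-year prices for a single instance, illustration only.
ON_DEMAND_HOURLY = 0.10
OPTIONS = {
    "No upfront":      {"upfront": 0.0,   "hourly": 0.065},
    "Partial upfront": {"upfront": 280.0, "hourly": 0.032},
    "All upfront":     {"upfront": 540.0, "hourly": 0.0},
}

on_demand_total = ON_DEMAND_HOURLY * HOURS_PER_YEAR
for name, o in OPTIONS.items():
    total = o["upfront"] + o["hourly"] * HOURS_PER_YEAR
    saving = 100 * (1 - total / on_demand_total)
    print(f"{name:16s} ${total:7.2f}/yr  ({saving:4.1f}% saved vs on-demand)")
```

The larger the upfront share, the lower the effective annual cost – exactly the discount gradient described above.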

Sell Unused Reservations

Should you find yourself hanging onto unused reservations, it’s crucial to utilize them – or else waste your investment. When you identify a reserved instance that is not being fully used, consider deploying it for a new application or an existing one currently operating on costlier on-demand instances. Alternatively, you have the option to sell your Reserved Instances in the RI Marketplace.

Prioritize Optimization Strategies

When first exploring FinOps adoption, the sheer scale of change can be intimidating. One way around this analysis paralysis is to evaluate and rank optimization techniques based on their impact and the effort required. Tackle them systematically, with the following framework:

Seek out the right stakeholders

Consider their cost optimization blockers, the impact of their unoptimized spending, and what tools they would need for optimal visibility and cost reduction.

Map out the change network

By visualizing the change process, it becomes possible to identify which teams would be impacted by the potential optimization method, alongside whether new communication structures are required.

Put a cost to the project

Balance the time and tools that each optimization project will require against the potential savings. This helps provide a clear metric for project prioritization.

Regularly Review Your Architectural Choices

Annually or quarterly, reassess applications for architectural efficiency. Seek assistance from your AWS account team for an AWS Well-Architected Review. Some of its recommendations may, for instance, help you keep on top of infrequently accessed data by moving it to S3 Glacier, and long-term archival data to Glacier Deep Archive.

Upgrade to Latest-Generation Instances

Stay informed about AWS updates and new features. Upgrading to the latest generation of instances can offer improved performance and functionality, often at an even lower cost.

FinOps Adoption Success Stories

When GlobalDots partnered with a major eCommerce giant, the client’s cloud operations were spread across 74 different accounts, and some inefficient development habits had snowballed into a multi-million-dollar cloud spend. GlobalDots spent several months consolidating the client’s cloud resources, doubling the number of machines running on reservation contracts, and systematically streamlining outdated architecture. The eCommerce giant went on to enjoy a 20% reduction in its cloud bill – all while growing by a third thanks to increased feature development and web traffic.

Large-scale cultural changes can feel glacially slow at times. However, FinOps adoption sits at the divide between analysis and action – with just a few major stakeholders on board, it becomes possible to transform your organization’s approach from the inside out.

