AWS Data Transfer Cost Optimization: Everything You Need to Know

While AWS provides a wealth of mission-critical services – storing over 2.2 trillion objects in S3 alone – many organizations are left floundering in its complex pricing structures. Spanning transfer types and geographies, data transfer costs can be hugely unpredictable and rapidly get out of hand. 

Below, we leverage decades of industry experience to examine the major drivers behind uncontrolled AWS spend. From there, we establish a comprehensive list of best practices for reducing AWS data transfer costs. 

What Are AWS Data Transfer Costs? 

Cloud computing enables businesses to transfer vast swathes of data between users and applications. The data lifecycle begins with data ingress – where gigabytes and terabytes of information are sent to the cloud's secure, global platform (for example, Amazon S3). However, the value of data is only truly realized when it is used outside of storage. Data egress is the data transfer lifecycle's final step, and given its importance, data exiting the cloud is one of the biggest influences on your AWS bill.

While the AWS pricing calculator gives businesses a helping hand, many customers find their real-world data transfer charges to be hugely unpredictable. Amazon lists over a dozen reasons why costs can vary from the calculator’s quotes. Due to the sheer quantity of variables, keeping control over data transfer costs requires a thorough understanding of just how your data is being transferred – as well as the charges associated with each type of transfer. Starting with the broader architecture, here are the three primary transfer routes – and their unique pricing quirks. 

Between AWS and the Public Internet

The public internet provides rapid object delivery between Amazon S3 buckets and the millions of apps and websites available today. AWS provides 100GB of free data transfer out each month. Beyond that, additional fees apply: the first 50TB is charged at $0.09 per GB. This means an organization transferring 50TB of data out of AWS via the public internet every month would face a theoretical cost of $4,491. For large corporations handling even more data, the next 100TB is charged at $0.07 per GB.
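To make the arithmetic concrete, here's a minimal sketch of a tiered egress calculator using only the rates quoted above (100GB free, $0.09 per GB up to 50TB, then $0.07 per GB for the next 100TB – real AWS tier boundaries may differ and should be checked against the current pricing page):

```python
# Sketch: estimate monthly data transfer out (egress) cost from the
# tiered rates quoted above. Boundaries and rates are illustrative.
FREE_TIER_GB = 100
TIERS = [
    (50_000, 0.09),   # first 50TB (decimal GB) at $0.09/GB
    (100_000, 0.07),  # next 100TB at $0.07/GB
]

def monthly_egress_cost(total_gb: float) -> float:
    billable = max(total_gb - FREE_TIER_GB, 0.0)
    cost = 0.0
    for tier_size, rate in TIERS:
        in_tier = min(billable, tier_size)
        cost += in_tier * rate
        billable -= in_tier
        if billable <= 0:
            break
    return cost

print(monthly_egress_cost(50_000))  # 49,900 billable GB x $0.09 = 4491.0
```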

Between AWS Services

Data egress to sites and applications is only one small part of the pricing puzzle. While many organizations rely on the free data transfer AWS offers between its services, it's vital to note that these transfers are only free of charge when they remain within the same Availability Zone and under the same account. 

This pricing structure directly contradicts architectural best practice: always-on availability relies on workloads being spread across different regions in case of a localized outage. Adding further weight to AWS' pricing complexity is the fact that a single region encompasses multiple Availability Zones. AWS' highly geographical pricing is discussed in further detail below. These surprise costs add up rapidly: data transfer fees between different regions start at $0.02 per GB, and transfers between different accounts within the same region cost $0.01 per GB. For example, replicating just 10TB a month between two regions adds $200 to the bill before any other charges. As every layer stacks up, so does the total bill for each month.

Between AWS and On-Premises 

AWS offers a number of different methods for transferring data between the cloud and on-premises infrastructure – and to make things even more interesting, each has its own unique pricing model. The three most common options are Direct Connect, Storage Gateway, and Snowball; a rough cost comparison sketch follows the list below.

  • AWS Direct Connect establishes a dedicated connection between on-prem infrastructure and AWS – customers pay not only for the amount of data transferred, but also for the connection speed and the hours connected. These last two factors create a fee that ranges from $0.03 to $2.25 per hour, with data transfer charged at $0.02 per GB on top. 
  • AWS Storage Gateway offers a hybrid service that lets organizations store data in the cloud while retaining on-prem access. Its fee structure is based on storage usage, data transfer amounts, and request quantity. Each month, data storage costs $0.01 per GB and request handling up to $0.10 per thousand requests – and Storage Gateway also charges for data transferred via the internet, to Amazon S3, or to other AWS regions. 
  • AWS Snowball provides a physical device that facilitates large-scale data transfer between on-prem and cloud. Its fee structure is based on rental fees, quantities of data transferred, and an extra import or export fee. In the US East region, the service fee for a Snowball device is $300 for up to 10 days' usage. Data egress clocks in at $0.03 per GB, with increasing pricing tiers based on edge storage and optimized EC2 instances. 
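As a rough back-of-the-envelope comparison – using only the rates quoted above, and ignoring port speeds, regions, storage classes, and duration – a sketch like the following can help weigh options for a one-off bulk transfer (the $0.30 per port-hour figure is an assumed mid-range value from the $0.03–$2.25 span above):

```python
# Sketch: compare on-prem transfer options using the rates quoted above.
# Real pricing depends on port speed, region, storage class, and duration.

def direct_connect_cost(gb: float, hours: float, port_rate: float = 0.30) -> float:
    # Assumed mid-range port-hour rate; $0.02/GB data transfer on top.
    return hours * port_rate + gb * 0.02

def snowball_cost(gb: float) -> float:
    # $300 service fee for up to 10 days, plus $0.03/GB egress (US East).
    return 300 + gb * 0.03

ten_tb = 10_000  # decimal GB
print(f"Direct Connect (1 month): ${direct_connect_cost(ten_tb, hours=720):,.2f}")
print(f"Snowball (one job):       ${snowball_cost(ten_tb):,.2f}")
```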

While the monthly charges for data transfer seem fair in a vacuum, it’s vital that your cost review process takes a macro view of your entire architecture. By understanding the context of each transfer, it becomes possible to answer the following core question.

Why Are AWS Data Transfer Costs So High?  

With data transfer fees ranging from $0.025 to $0.85 per gigabyte – and the average company handling petabytes of data – it's easy to see how costs can spiral out of control. One of the biggest factors in AWS' unpredictability is its concept of Regions. Regions are physical locations around the world where AWS' data centers are clustered. Drilling down deeper, each Region is composed of at least three Availability Zones (AZs), each containing one or more data centers. These AZs are physically separated to help ensure high availability. São Paulo is the most expensive AWS region, with a basic transfer fee that reaches $0.15 per GB.

On AWS' side, building and maintaining data centers is hugely expensive, with each region fronting its own labor, real estate, and energy costs. The demand placed on each location is unique and fluctuating, leading to prices that vary widely. On the other hand, organizations are increasingly adopting a global mindset; in the pursuit of rapid global growth, data transfer patterns can become a tangled, unmanaged mess, leading to uncontrolled AWS pricing. With Regions holding major sway over the cost of every single AWS service, they also offer a unique opportunity to bring AWS costs back under control.

How to Reduce Data Transfer Costs

Reducing data transfer costs is a multi-phase process. From building a foundation of visibility to actioning immediate cost savings, here are the first 10 steps toward AWS cost optimization.

#1. Utilize Cost Allocation Tags

Cost allocation tags in AWS are a way to categorize and track resource usage and costs across your AWS account. They allow you to tag your AWS resources (such as EC2 instances, S3 buckets, and RDS databases) with custom metadata that describes their purpose, owner, or any other relevant attribute.

Taking the extra time to assign tags to instances and load balancers enables a culture of cloud visibility. By tracking the resources used by each project, it becomes possible to discover over-utilization and spot the workloads driving the most spend.
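As a minimal sketch of tagging in practice – the instance ID and tag values below are hypothetical – tags can be applied programmatically with boto3, then activated as cost allocation tags in the Billing console so they appear in Cost Explorer:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag an instance with its owning project and team
# (IDs and values are hypothetical).
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "project", "Value": "checkout-service"},
        {"Key": "team", "Value": "payments"},
    ],
)
```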

#2. Review Data Transfer Usage

Actively reviewing your AWS cost distribution doesn't have to be a painstaking process. The most basic tool in your arsenal is AWS' built-in Cost Explorer. By navigating to its usage page, you get a view of how much data each AWS service is transferring – and it's then possible to rejig your architecture to reduce it. For example, many larger organizations make use of NAT gateways, which allow instances in a private subnet to communicate with the internet without being directly exposed. NAT gateways charge based on how much data is transferred; if this is eating away at your cloud spend, NAT instances can offer the same functionality at far lower cost.

For a wider view of how much data is flowing throughout an organization, AWS CloudTrail logs the API calls made to AWS services. Together, these tools grant insight into which apps and services are the most data-transfer-hungry, creating a heatmap of inefficient hotspots throughout your organization's stack and defining your priority list. 
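The same numbers can be pulled programmatically. Here's a minimal sketch using the Cost Explorer API – the usage-type group name in the filter is an assumption and varies by account, so adjust it to match the groups your Cost Explorer actually shows:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer is served from us-east-1

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost", "UsageQuantity"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    # Restrict results to internet egress; this group name is illustrative.
    Filter={
        "Dimensions": {
            "Key": "USAGE_TYPE_GROUP",
            "Values": ["EC2: Data Transfer - Internet (Out)"],
        }
    },
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```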

#3. Reduce Outbound Data 

Once you’ve gained a clear understanding of where your current AWS budget is going, it’s time to begin making changes. First and foremost: the quantity of data leaving your cloud. Data compression can offer immediate benefits to suitable use-cases, with reduced latency and bandwidth requirements. If data compression isn’t suitable, then optimizing application code – with a focus on reducing file sizes and minimizing reliance on external libraries – can also help drop transfer costs.
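For instance, compressing objects before they leave AWS shrinks the egress bill roughly in proportion to the compression ratio. A minimal sketch – the bucket and key names are hypothetical:

```python
import gzip
import boto3

s3 = boto3.client("s3")

with open("report.json", "rb") as f:
    compressed = gzip.compress(f.read())

# Store (and later serve) the gzipped body; clients that send
# Accept-Encoding: gzip receive fewer bytes, cutting transfer out.
s3.put_object(
    Bucket="example-bucket",           # hypothetical
    Key="reports/report.json.gz",
    Body=compressed,
    ContentType="application/json",
    ContentEncoding="gzip",
)
```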

#4. Optimize S3 Storage

Not all S3 buckets are created equal. Optimization can start at the level of individual requests – S3 Select, for instance, allows a workload to retrieve only the data it needs from an object, rather than retrieving the entire object. At the same time, S3 Intelligent-Tiering automatically moves each object between access tiers as its access patterns change in real time, optimizing storage costs without any impact on performance. Finally, lifecycle policies can automate the process of transitioning objects between storage classes or deleting them when no longer needed. Comprehensively addressing unoptimized S3 storage allows you to pay only for the storage you actually need.
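To illustrate the last point, a lifecycle rule that moves objects to an infrequent-access class and eventually expires them might look like this – the bucket name, prefix, and timings are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                # After 30 days, move to Standard-IA; after a year, delete.
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```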

#5. Use Amazon CloudFront 

CloudFront caches frequently accessed content at edge locations, reducing the need to retrieve content from the origin server on every request. Caching rules can be tuned based on criteria such as URL patterns, HTTP headers, and cookies, further optimizing cache utilization and cutting the amount of data transferred over the internet. 
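One simple lever here is the Cache-Control header on origin objects, which tells CloudFront how long it may serve content from the edge before going back to the origin. A minimal sketch, with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")

# A long max-age lets CloudFront serve this asset from edge caches for a
# day, so repeat requests never reach the origin or trigger origin fetches.
with open("app.css", "rb") as f:
    s3.put_object(
        Bucket="example-origin-bucket",  # hypothetical
        Key="static/app.css",
        Body=f.read(),
        ContentType="text/css",
        CacheControl="public, max-age=86400",
    )
```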

CloudFront integrates with Amazon S3 and EC2, allowing users to serve content directly from these services – and data transferred from these origins to CloudFront edge locations is free of charge. Its pay-as-you-go pricing means you only pay for the data transfer and requests you actually use. It's worth mentioning that as an Advanced Tier Services Partner of AWS, GlobalDots can provide superior CloudFront pricing.

And if you're considering alternatives to Amazon CloudFront, GlobalDots also partners with Akamai and Cloudflare, two of the largest CDN providers worldwide. With bargain rates for both, customers can benefit from efficient architecture and reduced bandwidth demands at an even lower cost than AWS CloudFront. 

#6. Keep Within a Single Region

AWS charges data transfer fees for data transferred between different regions or availability zones. By keeping data transfer within a single region, these fees can be avoided. One way to support this is with region-specific endpoints: with S3, for example, use the endpoint specific to the region in which your data is stored. In the same vein, Virtual Private Cloud (VPC) peering is a networking feature that allows you to connect two VPCs within the same region, routing traffic directly between them without going over the public internet.
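As a sketch, peering two same-region VPCs with boto3 looks like this – the VPC IDs are hypothetical, and in practice both sides also need route table entries for the peered CIDR ranges:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Request a peering connection between two VPCs in the same region.
# Peered traffic stays on the AWS network rather than the public internet.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111bbbb2222c",      # hypothetical
    PeerVpcId="vpc-0ddd3333eeee4444f",  # hypothetical
)

# The owner of the peer VPC must accept the request before traffic flows.
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)
```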

Note that high-availability and mission-critical functions may require a higher degree of fault tolerance. This is typically achieved by sharing these workloads across multiple regions. Make sure to identify which multi-region workloads are necessary, and restrict those that aren’t. 

#7. Keep EC2 Data Transfer within a Single Availability Zone

When you launch EC2 instances, ensure that they are in the same VPC and the same subnet – a subnet always resides within a single Availability Zone. To help reinforce this boundary, consider configuring security groups: these act as virtual firewalls and let you specify which traffic is allowed to and from each instance. 
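A minimal sketch of pinning instances to one subnet – and therefore one AZ – at launch time; the AMI, subnet, and security group IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launching both instances into the same subnet keeps them in one AZ,
# so traffic between them avoids cross-AZ data transfer charges.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # hypothetical
    InstanceType="t3.micro",
    MinCount=2,
    MaxCount=2,
    SubnetId="subnet-0aaa1111bbbb2222c",         # hypothetical; subnet pins the AZ
    SecurityGroupIds=["sg-0ddd3333eeee4444f"],   # hypothetical
)
```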

Furthermore, some AWS services are specific to particular regions, and keeping them close to your compute can result in lower costs. For an S3-specific example, using a Standard-Infrequent Access (Standard-IA) storage class in the same region as the EC2 instance can reduce the data transfer costs associated with infrequently accessed data.

#8. Make Use of Less Expensive AWS Regions

AWS regions can vary dramatically in price. A major benefit of cloud computing, however, is its global accessibility – make the most of it by factoring regional pricing into your cost optimization approach. With a cohesive cloud restructure that consolidates services into a cheaper region, cost savings can be truly transformative. 

It's important to note that choosing the cheapest region isn't always the best option: latency, data transfer costs, proximity to users, compliance requirements, and service availability all play a role in choosing a region. Perform a cost-benefit analysis to determine the most cost-effective solution for you.
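To make that cost-benefit analysis concrete, here's a toy comparison sketch; the per-GB egress rates below are illustrative placeholders rather than current quotes, so substitute real numbers from the AWS pricing pages or the Pricing API:

```python
# Toy sketch: compare monthly egress spend across candidate regions.
# Rates are illustrative placeholders, NOT current AWS quotes.
ILLUSTRATIVE_EGRESS_RATES = {
    "us-east-1": 0.09,  # per GB
    "eu-west-1": 0.09,
    "sa-east-1": 0.15,  # São Paulo, quoted above as the most expensive
}

monthly_egress_gb = 20_000

for region, rate in sorted(ILLUSTRATIVE_EGRESS_RATES.items(), key=lambda kv: kv[1]):
    print(f"{region}: ${monthly_egress_gb * rate:,.2f}/month")
```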

#9. Automate for Off-Peak Hours

Too many AWS accounts provision resources on a worst-case-scenario basis. This leads to chronic overprovisioning, and some of the biggest savings come from identifying off-peak hours. Take Amazon Redshift: its data warehousing offers tremendous computing power, but when Redshift clusters run on-demand, idle clusters are still being paid for. Off-hour shutdown is vital for Redshift and EC2 alike. This change can run in parallel with a culture of cloud cost optimization, where DevOps teams are shown the true cost-saving potential they wield. 
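A minimal sketch of an off-hours shutdown job – the cluster and instance identifiers are hypothetical, and in practice this would run on a schedule (for example via EventBridge and Lambda):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Pause a Redshift cluster outside business hours; compute billing stops
# while it is paused (storage is still charged).
redshift.pause_cluster(ClusterIdentifier="analytics-cluster")  # hypothetical

# Stop non-production EC2 instances overnight (hypothetical ID).
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])
```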

#10. Monitor and Set Up Billing Alerts

Although it’s normal to experience small seasonal fluctuations in cloud usage, it’s important to remain vigilant for any abrupt and unforeseen spikes in usage. To manage cost drivers proactively, set benchmarks and monitor them regularly with AWS Cost Explorer. Make accurate spending forecasts to stay on track with your KPIs and avoid any unpleasant surprises.
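As a sketch, the classic CloudWatch billing alarm on estimated charges looks like this – the threshold and SNS topic ARN are hypothetical, and billing metrics must first be enabled in the Billing console (they are only published in us-east-1):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-5000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5000.0,  # hypothetical monthly budget
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical
)
```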

How GlobalDots Cut $1.5 Million From eCommerce Giant’s Cloud Bill 

While you can independently begin laying a cultural foundation of cost optimization, projects executed by FinOps specialists can ensure deeper cost reduction thanks to accurate planning and implementation that stretches across the entire organization. 

For example, one large eCommerce company had been shifting its operations to the cloud, but this transition was done without any clear methodology or proper architectural planning. Consequently, the company was suffering a disproportionate cloud bill and limited visibility into their own resource usage. 

Counting a total of 74 AWS accounts across 16 business units, this optimization process represented a complex challenge. Alongside this, there was – understandably – some resistance to external intervention. Working to gain the trust of the established teams, GlobalDots was soon able to lead fruitful, open conversations around cloud usage and implementation decisions. From there, action could be taken to address the most egregious inefficiencies. See here for a full run-down of GlobalDots' approach and techniques.

Cloud Cost Optimization FAQ

What Is Cost Optimization in the Cloud?

Cost optimization in the cloud refers to the process of minimizing the amount of money spent on cloud services while ensuring that the organization’s needs are met. It involves identifying opportunities to reduce costs and then taking steps to implement cost-saving measures.

Why Is Cloud Cost Optimization Important?

Cloud cost optimization is a crucial component of organizational resilience, particularly in today's wider context of financial instability. By helping organizations forecast and budget for their cloud spending more accurately, the risk of unexpected costs is drastically reduced. The benefits go deeper, too, as organizations are empowered to scale up or down without incurring unnecessary expenses, allowing full architectural agility. Ultimately, cloud cost optimization is crucial for maximizing the value of cloud services and achieving business goals while minimizing expenses.

What Is Cloud Cost Management?

Cloud Cost Management refers to the set of practices and tools that organizations use to monitor, analyze, and optimize their cloud spending. It involves tracking cloud usage and spending, identifying areas of inefficiency or waste, and implementing strategies to reduce costs – all while maintaining a high level of performance and functionality. 

How Are Enterprises Optimizing Cloud Cost?

Cutting-edge solutions allow for close tracking and monitoring of all cost drivers, while insight-rich tools can support your team via automated changes in provisioning. 

How Should Organizations Prioritize Cloud Spending?

Prioritizing cloud spending requires identifying and categorizing all cloud services used by the organization, evaluating their importance, and allocating resources accordingly. 
