Imagine a championship-caliber NBA team losing its three best players to serious injuries halfway through the season. Devastated fans would immediately lower their expectations from a championship run to bottom-feeder status, and management would essentially write off the season and focus on rebuilding the franchise. But while a sports team can still bounce back after losing its top assets, in the business world, losing one of your most valuable assets, data, is very often a killing blow.
According to the University of Texas, 94% of companies that suffer catastrophic data loss do not survive, and 43% never reopen at all. And despite the perception that the cloud has our back for backup, data loss through user error, overwritten data, or malicious actors is actually quite prevalent.
The past year saw a spike in data breaches, with the number of incidents rising 68 percent year over year to a record high. Ransomware attacks, which in a best-case scenario cause unwanted downtime and at worst can wipe out a company's entire data pool, also surged in 2021. The trend shows no sign of reversing any time soon.
These and other data-security trends put mounting pressure on companies not just to secure their networks as thoroughly as possible, but also to properly back up every crucial service in a way that fits their business needs.
Assessing your data
Before developing a comprehensive backup strategy and business continuity plan (BCP), any company must start with a strategic evaluation of its digital assets. The company must establish how crucial each service and dataset is for its operations, how long it can maintain activities without access to them, how much data it can lose without going under, and how hard it could be hit in the event of a leak.
Based on this assessment, the company must establish which data it must back up, how often it should do so, and how to secure the backups. Where applicable, it must also work out a BCP that factors in a failover solution. Ultimately, the assessment comes down to calculating the financial risk — damage times probability.
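The "damage times probability" calculation above can be sketched in a few lines of Python. The asset names, damage estimates, and loss probabilities below are illustrative placeholders, not figures from any real assessment; the point is how ranking assets by expected annual loss tells you where backup frequency and spend should go first.

```python
def expected_annual_loss(damage: float, probability: float) -> float:
    """Financial risk of losing an asset: damage times probability."""
    return damage * probability

# Hypothetical assets: (name, damage in USD if lost, annual probability of loss)
assets = [
    ("crm_records",      500_000, 0.02),
    ("billing_database", 900_000, 0.01),
    ("marketing_lists",   50_000, 0.05),
]

# Rank assets by financial risk to decide backup frequency and budget.
ranked = sorted(
    ((name, expected_annual_loss(damage, prob)) for name, damage, prob in assets),
    key=lambda item: item[1],
    reverse=True,
)

for name, risk in ranked:
    print(f"{name}: ${risk:,.0f} expected annual loss")
```

Note that a smaller asset with a high loss probability can outrank a bigger one here, which is exactly why the assessment has to come before the backup schedule, not after.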
Modern organizations lean heavily on software-as-a-service (SaaS) solutions to handle much of their day-to-day operations. Countless companies rely on services like Salesforce for CRM, HubSpot to automate their marketing campaigns, and Mailchimp to run their email outreach, and use dozens of other services for hiring, customer support, development, and more. Furthermore, they rely on these SaaS solutions to take care of backups and expect high availability and security at all times. However, most SaaS systems operate in the cloud, making many business applications publicly accessible to a certain degree, and therefore increasingly vulnerable. This includes their backup options. The lesson here is that a company can't fully rely on a vendor, or on a single cloud storage solution in general.
Best backup practices
The Well-Architected Framework concept, which contains design principles and architectural best practices for building and running cloud workloads, is essential knowledge for any DevOps engineer or leader. It is also a good starting point for developing concrete backup measures and protocols. Working to improve their data-breach safeguards, companies must explore a wide variety of measures, ranging from cloud-risk assessments and updated security protocols to smarter authentication policies and counter-phishing exercises.
A thorough review of the company's SaaS stack is certain to reveal services that are more valuable than others. These require extra care when it comes to backups, as losing them could mean losing the entire business, not just missing out on a winning year. Most popular SaaS solutions, such as Salesforce and Google Workspace, work with third-party SaaS businesses that can manage users' backups. That is a good option, if it is available.
However, a third-party service can go down as well, so companies must go the extra mile in ensuring the safety of their data. Backups need backups too. This is why companies would be wise to prioritize backup solutions equipped with robust application programming interfaces (APIs). With a good API, setting up another backup of the same data in a separate cloud environment is just a matter of writing a few custom scripts.
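A minimal sketch of what such a "backups need backups" script might look like, under stated assumptions: `fetch_snapshot` is a hypothetical stand-in for a vendor's export API call, and the targets are local directories standing in for what would, in practice, be separate cloud buckets or accounts. The same snapshot is fanned out to every secondary target so no single storage provider holds the only copy.

```python
import shutil
from pathlib import Path

def fetch_snapshot(workdir: Path) -> Path:
    """Placeholder for pulling an export from the primary backup API."""
    workdir.mkdir(parents=True, exist_ok=True)
    snapshot = workdir / "snapshot.json"
    snapshot.write_text('{"records": 42}')  # illustrative payload
    return snapshot

def replicate(snapshot: Path, targets: list[Path]) -> list[Path]:
    """Copy one snapshot to every secondary storage target."""
    copies = []
    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        # copy2 preserves timestamps, useful for retention policies
        copies.append(Path(shutil.copy2(snapshot, target)))
    return copies

work = Path("backup_demo")
snap = fetch_snapshot(work)
copies = replicate(snap, [work / "cloud_b", work / "offsite"])
print([c.exists() for c in copies])
```

In a real deployment, the copy step would be an upload to a second provider's SDK rather than `shutil.copy2`, but the fan-out structure is the same.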
Less developer-friendly solutions mean more manual work for their clients. With these, the user has to proactively pull the backup data from the service and push it to safe storage, whether that means another cloud environment or even an air-gapped archive holding a copy of the organization's data offline. While this process involves more headaches, its sheer value in securing the integrity of the entire business operation cannot be overstated.
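For that manual pull-and-push flow, one safeguard worth scripting is a checksum manifest: hashing every file lets you confirm that an offline (air-gapped) copy hasn't silently diverged from the source before you rely on it. A minimal sketch, with illustrative directory names and file contents:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to handle large archives."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

# Demo: a source tree and its archived copy should yield equal manifests.
src = Path("archive_src"); src.mkdir(exist_ok=True)
(src / "export.csv").write_text("id,name\n1,alice\n")
dst = Path("archive_dst"); dst.mkdir(exist_ok=True)
(dst / "export.csv").write_text("id,name\n1,alice\n")

print(build_manifest(src) == build_manifest(dst))
```

Storing the manifest alongside the archive also gives you a quick integrity check months later, when the question is no longer "did the copy succeed" but "has anything rotted since".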
Backups are a crucial safeguard that every company must take seriously. It is not the sexiest component of CloudOps, and getting it right may take a toll on the operational budget. But with critical data backup, it’s better to be safe than sorry. Not to mention, data backups are essential to mitigating ransomware attacks.
As our digital future becomes ever cloudier, effective data backup, including preparing for the threat of ransomware, must be aligned with the company's architecture; done right, it proves pivotal for business continuity in the long run. Companies that roll the dice and skip these measures won't just spend a few seasons rebuilding, like an NBA franchise; more likely, it will mean the end of the business.
Originally published on VentureBeat