If you are looking to kick off 2022 strongly, make sure your backup solution has the six features we recommend below. Safeguard your growth by truly understanding what your future solution should look like.
Ransomware has been a buzzword over the last couple of years. Suddenly you find your data getting encrypted, and you rush to isolate it from the rest of your network. Someone demands a ransom, yet very few victims get their data back, even after paying. Many enterprises are satisfied that their antivirus is good protection, but that is not necessarily true.
A lot therefore relies on the backup copies you make of your data, as the primary corrupted/encrypted data is generally not recoverable.
In a very basic protection plan, backups are the most important aspect. However, many of you face challenges with your backups as well: attackers have managed to reach backup repositories too and corrupt them. So, how can you rely on backups?
Cloud backup strengthens this basic level of protection because the backed-up data moves out of your own network. Even if your network is infected, the off-site copies remain protected.
In one case, the cloud backups of an Ace Data enterprise customer suddenly started taking heavier loads. There was more data to back up, and neither deduplication nor compression seemed to be working on it. Thanks to our 24x7 monitoring, we analyzed the jobs and found that these were new files with an unfamiliar extension. We immediately blocked the backup and alerted the IT administrators. It was through this mechanism that IT realized they had been attacked and their files had been encrypted; the backup indirectly helped detect the encrypted files. We formatted the server and recovered the older versions of the files to get it operational quickly. The encrypted copies stayed isolated from the real data.
This detection works especially well with cloud backups, which treat the encrypted files as entirely new data, fail to reduce them, and raise a flag. The enterprise was also using a traditional virtual tape library-based backup, which simply dumped the encrypted files and backed them up "successfully". We deleted the corrupted backup versions immediately, but until then they carried the risk of corrupting other backup versions on the same network.
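The detection mechanism described above can be sketched as a simple monitoring rule: flag a backup job when the deduplication/compression ratio collapses and files carry unknown extensions at the same time. A minimal illustration in Python; the extension list, thresholds, and field names are hypothetical, not from any specific product:

```python
# Hypothetical sketch: flag a backup job that looks like a ransomware event.
# Thresholds and field names are illustrative only.

KNOWN_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".txt", ".jpg", ".sql"}

def looks_like_ransomware(job):
    """job: dict with 'files' (list of file names) and 'dedup_ratio' (float)."""
    unknown = [f for f in job["files"]
               if "." in f and "." + f.rsplit(".", 1)[1] not in KNOWN_EXTENSIONS]
    # Encrypted data is high-entropy, so dedup/compression stops working.
    ratio_collapsed = job["dedup_ratio"] < 1.1
    # Many brand-new unknown extensions plus no data reduction => alert.
    return ratio_collapsed and len(unknown) > 0.5 * len(job["files"])

job = {"files": ["report.docx.locky", "db.sql.locky", "notes.txt"],
       "dedup_ratio": 1.0}
print(looks_like_ransomware(job))  # True: mostly unknown extensions, no dedup
```

A real monitoring pipeline would feed this from the backup catalog and page an administrator instead of printing, but the two signals shown are the ones that surfaced in the incident above.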
We have been able to help you recover the last good copy of your files, for both file servers and MS SQL databases, without the need to pay a ransom.
To be able to recover quickly from an attack through backups, two key things should be kept in mind:
Yes, it is very important that the recovered backups do not re-introduce malware into the production data. Attackers silently plant an attack-loop agent in the file system and leave it there to activate on a particular date. When you restore the backups, you end up restoring the attack-loop agent as well.
One of the key things to watch for in your backups in 2022 is protection against the ransomware attack loop. Your backup application should scan the data being backed up and the data being restored. It should be able to detect and isolate malicious code and alert the administrators of an infection. Before the actual restoration, legacy recovery files should be scanned again to prevent the attack loop.
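Conceptually, pre-restore scanning means checking every object in the restore set against known-bad signatures and quarantining matches instead of restoring them. A minimal sketch, assuming a hash-based signature feed (the function names are illustrative, and the sample hash below is simply the SHA-256 of empty input, used as a stand-in):

```python
# Hypothetical sketch of pre-restore scanning to break an "attack loop":
# hash each object in the restore set and quarantine any known-bad match.
import hashlib

MALICIOUS_SHA256 = {
    # Stand-in entry: this is the SHA-256 of empty input, used for illustration.
    # In practice this set would be fed by a threat-intelligence source.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_restore_set(objects):
    """objects: iterable of (name, bytes). Returns (clean, quarantined)."""
    clean, quarantined = [], []
    for name, data in objects:
        digest = hashlib.sha256(data).hexdigest()
        if digest in MALICIOUS_SHA256:
            quarantined.append(name)   # isolate and alert, never restore
        else:
            clean.append((name, data))
    return clean, quarantined
```

Real products combine signatures with heuristics and behavioral analysis, but the gate is the same: nothing leaves the backup store for production without passing the scan.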
The other important things to consider while backing up include:
Be careful with your backup solution and deployment strategies. Backups are going to help you recover from ransomware incidents, so they need to be configured to do exactly that.
Seven years ago, not many agreed on the importance of backing up Salesforce data. We moved ahead and integrated Salesforce backup alongside other SaaS application data, including Office 365 and G Suite.
For a long time, enterprises saw these as good-to-have rather than required features, largely because adoption of cloud SaaS applications was very slow. It has taken a while for enterprises to realize the benefits of using these SaaS applications. On top of that, many believe that data stored on a SaaS platform is always safe, which is not true.
There have been continuous debates on whether data loss is even possible given the high availability all the cloud service providers deliver; you can hardly imagine equipment or a DC failure at that level. However, accidental deletions due to user behavior cannot be ignored.
Most mail applications keep deleted mail recoverable for only 14 to 30 days. What about going beyond that? For file data, they allow you to enable versioning. However, compliance needs may still require you to produce a year-old version of a file, so how do you handle that, and how many versions do you plan for?
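As a back-of-the-envelope check, you can work out how far back a "keep N versions" policy actually reaches. A minimal sketch, assuming a file changes a fixed number of times per week (both numbers are illustrative):

```python
# Hypothetical sketch: how far back does a version cap actually reach?
def retention_days(versions_kept, changes_per_week):
    """Approximate age (in days) of the oldest recoverable version."""
    return versions_kept / changes_per_week * 7

# A busy spreadsheet edited 5 times a week under a 100-version cap:
print(retention_days(100, 5))  # 140.0 days -- far short of a one-year need
```

The takeaway: a version cap alone does not satisfy a time-based compliance requirement; only a backup with explicit time-based retention does.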
Therefore it is very important to consider backing up this data. Your backup application must be ready to back up cloud-based applications like Google Apps, Office 365, etc. The application should work within the respective service provider's environment and send data to a different environment, preferably a different provider, to be additionally safe.
Not all backup applications support backing up cloud applications, and some have only limited support, such as backing up Office 365 mail but not SharePoint and OneDrive. Be careful and prepared for this. With the ease of handling data offered by cloud service providers, you will start using them soon, so be ready.
Also, don't treat your files on a service provider's app as a backup of your on-premise data. Use these apps for what they are meant for to get real value from them.
There is a lot of stress around compliance. Personally, I believe that even if there are no mandatory requirements, we should all endeavor to follow the compliance guidelines defined for our industry. For many of you, adherence to a set of compliance standards is mandatory; many others follow them to add to their own market value. I believe this is important, as it brings a lot of discipline to your own work.
I have seen so many people struggling to create audit reports and extract data from their historical logs when the auditor comes in. A lot of reporting is required to meet the standards and comply with them; GDPR is the latest of them.
These may not impact us directly today, as they may not yet be applicable, but we must keep ourselves prepared. One of our customers has a restriction that the data backups of a particular regional office must be managed by a citizen of the country to which the data belongs.
As with so many other things, backups have not been exempted from compliance standards, yet we often ignore the very basics. Many tape technologies still don't support encryption. I know one where the encryption key is visible to anyone who can log into the system, and every time it is entered or modified, it is stored in a plain-text notepad file.
You need to invest in backup solutions that follow global compliance standards and their guidelines. Who should perform backups? What rights and permissions do they require? And, most importantly, how is the reporting being done? Make sure your backup application is compliant with global standards and reports as per the guidelines, so that you don't have to manually pull out data to fulfill audit report requirements.
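To make the reporting point concrete, here is a minimal sketch of turning raw backup-job logs into an audit summary, so nothing is pulled together by hand when the auditor arrives. The log format, field names, and values are all hypothetical:

```python
# Hypothetical sketch: summarize backup-job logs for an audit report.
# The CSV layout and field names are illustrative, not from any product.
import csv
import io
from collections import Counter

LOG = """timestamp,operator,job,status
2021-12-01T02:00,backup-svc,fileserver-daily,success
2021-12-02T02:00,backup-svc,fileserver-daily,failed
2021-12-02T09:15,jsmith,adhoc-restore,success
"""

def audit_summary(log_text):
    rows = list(csv.DictReader(io.StringIO(log_text)))
    return {
        "jobs_total": len(rows),
        "by_status": dict(Counter(r["status"] for r in rows)),   # success/failure counts
        "operators": sorted({r["operator"] for r in rows}),      # who touched backups
    }

print(audit_summary(LOG))
```

A compliant backup application produces this kind of report natively; the sketch only shows the minimum an auditor typically wants: what ran, what failed, and who had the rights to run it.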
Go for a compliant backup infrastructure and environment.
The virtual world is ever-changing. Containers are being adopted now, and applications are being developed and hosted in containerized environments. It is therefore time to ensure the safety of these container environments. While evaluating backup strategies, you should now check that your backup infrastructure supports backing up containers.
You may debate whether containers need backing up at all, since a container is more like an image with no data inside it. High availability is built in; containers are stateless and are spawned and killed off as needed.
Many people confuse high availability with the ability to recover from a disaster. What would happen if multiple container nodes fail, or the associated persistent storage fails? You cannot run away from planning for a disaster situation. You may also want to replicate your environment when you move from test/dev to production, or stage it to test an upgrade before deploying to production. Last but not least, backing up Kubernetes helps you migrate Kubernetes clusters easily.
You should look for backup applications that are capable of backing up containers. Docker was the first widely adopted member of the container family; Docker deployments are increasing, and critical applications are running on them. Back up their images and the associated storage and databases.
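As an illustration of what container backup configuration can look like, the sketch below builds a Velero-style Kubernetes `Schedule` manifest as a plain Python dict, covering a namespace and its persistent volumes nightly. The namespace, schedule, and retention values are hypothetical; treat this as a sketch of the shape of such a resource, not a definitive manifest:

```python
# Hypothetical sketch: a Velero-style Schedule manifest that backs up a
# namespace (including persistent volume snapshots) every night.
# All values are illustrative; apply real YAML with your cluster tooling.
def nightly_backup_schedule(namespace, ttl_hours=720):
    return {
        "apiVersion": "velero.io/v1",
        "kind": "Schedule",
        "metadata": {"name": f"{namespace}-nightly"},
        "spec": {
            "schedule": "0 1 * * *",           # cron: every night at 01:00
            "template": {
                "includedNamespaces": [namespace],
                "snapshotVolumes": True,        # capture persistent storage too
                "ttl": f"{ttl_hours}h0m0s",     # 720h = 30-day retention
            },
        },
    }

manifest = nightly_backup_schedule("payments")
print(manifest["metadata"]["name"])  # payments-nightly
```

The point to carry over when evaluating a product: a container backup must cover both the cluster objects and the persistent volumes behind them, or the restore will come up empty.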
Backup applications should also be capable of protecting hyper-converged environments; backing up traditional virtual environments is already supported by most enterprise-class backup applications, so that hardly needs mentioning. Check whether they can replicate virtual machines to a remote site from within the backup application, so that you don't have to build a separate replication environment.
Data volumes are growing every minute, and the need to protect them is growing alongside. The growth comes fast, especially with the quick rollout of applications and their wide-scale deployments leveraging the elastic nature of cloud computing and on-prem virtualization.
You need to make your infrastructure a lot more scalable and flexible. Growth of data inevitably means growth of the overall infrastructure, including backup infrastructure. Traditional backup methods looked good: they seemed able to handle unlimited data, since the storage was on recyclable and new tapes. However, performance degraded unless you kept upgrading the tape infrastructure regularly. I met someone a couple of days back who said it felt like waking up to find you needed the next generation of LTO infrastructure, leaving tapes more than two generations old useless.
You also need to retain a lot of these backups. A media house with data available since 2003 will hold it forever. They are not even governed by compliance, but they use the archive when required to monetize old news and clips by integrating them with the latest versions of the same stories.
You need a backup infrastructure that leverages the key data-reduction technologies available now. Compression is available in all backup applications, and deduplication has made it easier to handle extended data growth. Tape infrastructure, however, does not handle deduplication at all.
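The idea behind deduplication can be shown in a few lines: split the data into blocks, hash each block, and store each unique block only once, keeping an ordered "recipe" of hashes to rebuild the original. A minimal sketch with fixed-size blocks (real products typically use variable-size, content-defined chunking):

```python
# Hypothetical sketch of block-level deduplication: fixed 4-byte blocks,
# each unique block stored once, plus a recipe to reconstruct the data.
import hashlib

def dedup_store(data, block_size=4):
    store = {}    # hash -> block bytes (each unique block stored once)
    recipe = []   # ordered list of hashes to rebuild the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)
        recipe.append(h)
    return store, recipe

store, recipe = dedup_store(b"ABCDABCDABCDEFGH")
print(len(recipe), len(store))  # 4 logical blocks, only 2 stored physically
```

Backup data is highly repetitive across daily runs, which is why deduplication ratios in practice dwarf what plain compression achieves; the sketch shows the mechanism, not the ratios.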
You need to make the infrastructure more scalable by adopting the cloud methodology. Even if you are still on legacy backup applications that are yet to adopt a direct cloud strategy, look for the option to stage older backup versions out to the cloud. Start using the public cloud more for hosting, and if you are relatively new to cloud adoption, begin by adopting the public cloud for backing up your data. Legacy applications are also integrating their APIs with cloud service providers so that you can move older versions to the cloud. Your recovery SLA for old retention is longer than for online data anyway.
Moving it to the Public cloud will reduce the overall cost of retention as well since you don’t need to retain them on online storage. Low-cost archival storage can be used for these backups. They also offer more reliability and durability than you can think of achieving on your traditional tape infrastructure.
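The cost argument is simple arithmetic. A minimal sketch with illustrative per-GB prices; real prices vary by provider, tier, and region, and retrieval fees apply to archival tiers:

```python
# Hypothetical sketch: monthly retention cost, online vs. archival tier.
# Per-GB prices below are illustrative only; check your provider's list.
def monthly_cost(gb, price_per_gb):
    return gb * price_per_gb

old_backups_gb = 50_000                        # 50 TB of aged backup versions
online = monthly_cost(old_backups_gb, 0.023)   # e.g. standard object storage
archive = monthly_cost(old_backups_gb, 0.002)  # e.g. cold/archival storage
print(f"online ${online:.0f}/mo vs archive ${archive:.0f}/mo")
```

Since old retention is rarely restored and tolerates a longer recovery SLA, paying the slower-retrieval price of archival storage for it is usually the right trade.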
Remember: if you are re-evaluating your backup strategy, explore cloud-based backup applications that also offer local storage for faster recoveries. If you still need to continue with your legacy applications, push for public cloud staging of the older backups.
Endpoints need to be protected. This has been well known for a decade now; there is nothing new in it. All backup applications back up endpoint devices like desktops and laptops. Agent deployment, backing up only critical data, auto-scheduling, and auto-retention are all standard features in endpoint backup applications.
What is becoming more relevant is making endpoints more secure. Look out for features like geolocation and remote wipe. Geolocation can help you locate lost devices. Remote wipe can be configured to ensure that data on a lost laptop is deleted if whoever has the laptop manages to boot it into the OS: the backup agent contacts the backup server, and the backup server signals the data deletion. You can rest assured that your lost data is not being misused.
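The remote-wipe handshake described above boils down to a check-in protocol: the agent polls the server on boot, and the server answers with a wipe order if the device has been reported lost. A minimal sketch; the class, method names, and device IDs are all hypothetical:

```python
# Hypothetical sketch of the remote-wipe check-in described above.
class BackupServer:
    def __init__(self):
        self.lost_devices = set()

    def report_lost(self, device_id):
        """Administrator marks a device as lost/stolen."""
        self.lost_devices.add(device_id)

    def check_in(self, device_id):
        """Agent polls on boot; 'wipe' tells it to delete protected data."""
        return "wipe" if device_id in self.lost_devices else "ok"

server = BackupServer()
server.report_lost("laptop-042")
print(server.check_in("laptop-042"))  # wipe
print(server.check_in("laptop-007"))  # ok
```

Note the dependency this implies: the wipe only fires if the device comes online and the agent can reach the server, which is why it complements, rather than replaces, disk encryption.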
The definition of endpoints has also changed. It is no longer restricted to laptops and smartphones: you are deploying applications on new endpoints like edge devices, cloud, and SaaS platforms. They all need protection and backups, so don't ignore them. Go for a backup strategy that is capable of taking care of them and ensure they are backed up.
Equipment today often comes with an integrated PC/server, where diagnostic tools and research data reside from the moment they are created. The adoption of POS applications and IoT-enabled applications running on WiFi devices is growing rapidly. Therefore, you should ensure that your backup infrastructure can take care of these devices too.