Multi-drive servers and NAS / SAN systems
Data recovery from RAID arrays
We recover data from RAID arrays of all levels, NAS and SAN devices, and servers. We handle both single drive failures and complex cases involving lost configuration, damaged controllers, or multiple simultaneous drive failures.
Failure classification
Most common failures in RAID arrays and servers
Typical arrays
- RAID 0, 1, 5, 10 – logical damage
- Array in Degraded state
- Controller or NAS failure (Synology, QNAP)
- Lost RAID configuration / metadata
Complex configurations
- RAID 6, 50, 60 – multiple drive failure
- Arrays with more than 8 drives
- SAN, ZFS, Hyper-V, VMware systems
- Physical damage to drives in the array
Process
How does data recovery work?
Send the hardware
Drives only (with slot order labelled) or the full device — in person or by courier.
Diagnosis and report
We assess each drive, reconstruct the RAID configuration, and classify the case with a quote.
Recovery
You decide whether to proceed based on the technical report and list of recoverable files.
Collection
We copy the recovered data to a new drive or make it available for download from our server.
Video from the lab
Failed RAID 5 array
See how the process works in practice
We film our work to show the real service process. The video below covers the complete procedure of drive preparation, imaging, and array reconstruction: after two of four drives in a RAID 5 configuration failed, the system entered Fail mode. The recovery yielded 99% of the data intact, with one corrupted file.
Technical equipment
Tools for specialist work
Working with RAID arrays requires both hardware for individual drive handling and software that understands RAID structures, file systems, and virtualisation. We handle configurations that automated solutions cannot process.
Hardware and software
- AceLab PC-3000 Express with RAID extension
- AceLab PC-3000 Express with SSD extension
- UFS Explorer Technician
Supported RAID levels
- RAID 0, 1, 1E, 10, 0+1, 3, 4, 5, 51, 5E, 5EE, 6, 60, JBOD
- Synology SHR / SHR2
- mdadm / LVM, ZFS / QZFS (mirror, RAIDZ)
- Proprietary manufacturer configurations
Systems, manufacturers, and file systems
- Synology, QNAP, Asustor, TerraMaster, Buffalo
- Dell, HP, IBM, Fujitsu, Lenovo, Netgear
- NTFS, ext4, XFS, Btrfs, HFS+, APFS, exFAT
- VMFS (VMware), Hyper-V, Proxmox
Logical damage in a RAID array
Not every RAID failure is caused by physical drive damage. Often the problem lies in the logical layer.
This includes deleted or overwritten files, corrupted partitions, file system errors after a power failure, and the effects of incorrect array reconfiguration.
It can also be an accidental overwrite of RAID controller settings, a configuration reset during a NAS firmware update, or incorrect drive-to-volume assignment.
Do not use chkdsk or fsck — automatic "repairs" often delete metadata and eliminate any chance of recovery.
Drive failures in the array
The failure of one or more drives is the most common cause of RAID array problems.
This can be mechanical damage (in HDDs), electronics failure, memory-cell degradation (in SSDs), or growing sector errors that disrupt the consistency of the entire array.
The key point is that RAID 5 tolerates only one drive failure. If another drive fails during reconstruction — and the risk is significant since drives from the same batch often fail simultaneously — total array failure results.
Do not hot-swap drives without consultation — incorrect operation order can overwrite data on healthy drives.
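On Linux-based software RAID (mdadm), which many NAS devices use internally, a degraded array is visible in /proc/mdstat: a `_` inside the member-status brackets marks a missing drive. A minimal sketch of spotting this, using a hard-coded sample rather than a live read:

```python
import re

# Sample /proc/mdstat output for a degraded RAID 5: one of three
# members is gone, so the status brackets read [UU_] instead of [UUU].
# Illustration text only, not read from a live system.
MDSTAT_SAMPLE = """\
Personalities : [raid5]
md0 : active raid5 sdb1[0] sdc1[1]
      1953260544 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>
"""

def degraded_arrays(mdstat_text: str) -> list[str]:
    """Return names of md arrays whose status line shows a missing
    member ('_' inside the trailing [..] status brackets)."""
    arrays = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        status = re.search(r"\[([U_]+)\]\s*$", line)
        if current and status and "_" in status.group(1):
            arrays.append(current)
    return arrays

print(degraded_arrays(MDSTAT_SAMPLE))  # ['md0']
```

Checking state this way is read-only and safe; the danger starts with rebuild or repair commands, not with looking.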
RAID controller problems
Failures are not limited to drives — a damaged controller or array management software can block data access even when all drives are working correctly.
Corrupted firmware, a configuration reset, or hardware errors can prevent the device from correctly assembling the array — and sometimes it will offer to create a new one, which means overwriting metadata and data loss.
Replacing the controller with an identical model is not safe without making sector-by-sector copies first — firmware version differences can trigger array reinitialisation.
Complete system and server failures
In more complex environments the problem can affect not just individual drives but the entire server or storage system.
Loss of volume access, software failures, or errors in the SSD cache can completely halt the array, even when individual drives are functioning correctly.
In virtualisation environments (VMware, Hyper-V, Proxmox) an array failure often means simultaneous loss of access to multiple virtual machines. If the environment is still running — back up critical data immediately before taking any corrective action.
What not to do with an array after failure
After a RAID problem occurs, it is very easy to make things worse and permanently lose data.
- Initiating a rebuild — any reconstruction attempt in a damaged state can overwrite existing data
- Swapping the controller or device "to try it" — firmware and configuration differences often overwrite RAID metadata
- Hot-swapping drives — inserting or swapping drives in hot-swap mode can cause additional errors and loss of consistency
- Using chkdsk, fsck, or file system repair tools — automatic "repairs" often delete or overwrite data
- Reinstalling the NAS or server system — overwrites the RAID configuration and metadata, making recovery harder or impossible
- Ignoring system warnings — if the array is still running, copy critical data immediately before taking any corrective action
FAQ
Frequently asked questions
Can I continue working on an array in "degraded" state?
No. Power off the device, do not initiate a rebuild, and do not hot-swap drives. A degraded array operates without parity protection — another failure means total data loss.
Can I rebuild the array myself after a failure?
We strongly advise against it. It is easy to overwrite metadata and permanently destroy recovery chances. The correct process always starts with sector-by-sector copies of all drives — only then is proper reconstruction performed.
How many failed drives can RAID 5 and RAID 6 tolerate?
RAID 5 tolerates one drive failure; RAID 6 tolerates two. However, additional logical or sector errors, or drives from the same batch (which often fail simultaneously), can prevent the array from starting despite these "theoretical" limits.
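The one-failure limit for RAID 5 follows directly from how its parity works: each stripe stores the XOR of its data blocks as parity, so any single missing block can be recomputed from the survivors, but two missing blocks cannot. A toy illustration:

```python
# RAID 5 stores, per stripe, the XOR of the data blocks as parity.
# One missing block is rebuilt by XOR-ing everything that remains;
# with two blocks gone, the equation has no unique solution.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1 = b"RAID", b"DATA"          # data blocks on drives 0 and 1
parity = xor_blocks(d0, d1)        # parity block on drive 2
assert xor_blocks(d1, parity) == d0  # drive 0 lost: rebuilt from the rest
assert xor_blocks(d0, parity) == d1  # drive 1 lost: rebuilt from the rest
```

RAID 6 adds a second, independent parity block per stripe, which is why it survives two failures where RAID 5 survives one.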
Do I need to bring the whole server or just the drives?
In most cases the drives alone are sufficient, along with information about their slot order. For complex SAN configurations, non-standard hardware controllers, or active encryption, bringing the entire device may be necessary or helpful.
Does drive order matter?
Yes — label each drive with its slot number before removal and do not mix the order. Incorrect drive order can prevent RAID configuration reconstruction or lead to data overwriting during a rebuild attempt.
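Drive order matters because striping deals data across the drives in strict rotation; put the drives back in the wrong slots and every stripe is reassembled from the wrong pieces. A simplified RAID 0-style sketch (the helper names here are hypothetical, for illustration only):

```python
# Striping deals chunk-sized pieces round-robin across the drives.
# Reassembling with the drives in the wrong order scrambles the data.
def stripe(data: bytes, n_drives: int, chunk: int) -> list[list[bytes]]:
    """Split data into chunks dealt round-robin to n_drives (RAID 0 layout)."""
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    return [pieces[d::n_drives] for d in range(n_drives)]

def assemble(drives: list[list[bytes]]) -> bytes:
    """Read the drives back stripe by stripe, in the given slot order."""
    return b"".join(b"".join(s) for s in zip(*drives))

data = b"THE QUICK BROWN FOX JUMPS OVER THE DOG!!"
d = stripe(data, n_drives=2, chunk=4)
assert assemble(d) == data        # correct slot order: data intact
assert assemble(d[::-1]) != data  # drives swapped: every stripe scrambled
```

With parity levels like RAID 5 the same applies, plus the parity rotation pattern must match, which is why reconstruction starts from the original drive order.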
Do you support Synology SHR, mdadm, ZFS, VMFS, XFS, NTFS, etc.?
Yes. We support SHR/SHR2, mdadm/LVM, hardware RAID, NTFS, exFAT, ext4, XFS, Btrfs, HFS+, APFS, VMFS (VMware), Hyper-V, Proxmox, as well as ZFS and QZFS (mirror/RAIDZ).
The array is encrypted — what is needed for recovery?
The encryption password or key is required, or a key export from the device. On NAS devices (Synology, QNAP) this often means an exported key file or the admin password. Without the key, recovering encrypted content is impossible.
Are Hot Spare drives counted in the price?
No. Hot Spare drives are not included in the service price.
Do you guarantee 100% data recovery?
No. Our goal is maximum, safe recovery — and the outcome depends on drive condition, failure history, and actions taken beforehand. We always provide a detailed results report before handing over the data.
Do you need the array configuration details?
It is not required, but it is helpful. If you can, please record and provide: RAID type, block/stripe size, drive order, volume number, and event logs from the time of the failure.
Get in touch
Describe your RAID problem
You don't need to know the configuration or technical parameters of your server. Based on the information you provide, our engineer will analyse the possible scenarios and prepare a plan of action.
During business hours we reply within 20 minutes. Outside business hours we respond as quickly as possible.