Data recovery from arrays and servers
RAID, NAS and servers
We recover data from RAID arrays at all levels, NAS and SAN devices, and servers. We handle single-drive failures as well as complex cases involving lost configuration, controller damage, or multiple simultaneous drive failures.
What we fix
What is happening to your array?
Identify your device's situation. Each scenario below requires a different service approach.
No data access or files disappeared
Array is visible but the volume won't mount, files are inaccessible, or the partition asks to be formatted? This is a sign of logical damage or a file system error.
Logical damage
Controller fails to assemble the array
Server or NAS cannot assemble the array, proposes re-initialisation, or reports a controller error? Swapping the controller "to test" can overwrite the configuration and destroy data.
Controller failure
Array invisible or asking for configuration
After a reset, firmware update, or device replacement the array is no longer recognised? Loss of RAID metadata is one of the most serious failures — do not initialise the system.
Lost configuration
Array in Degraded state
System reports a drive failure and the array has gone into degraded mode? Do not start a rebuild — any reconstruction attempt with a damaged drive increases the risk of data loss.
Drive damage
Multiple simultaneous drive failures
Array has exceeded its fault tolerance — two drives failed in RAID 5, or three in RAID 6? Even in this situation recovery is often possible — as long as no repair or rebuild attempts were made.
Critical failure
Pricing
How much does RAID data recovery cost?
The price is calculated per drive in the array (including failed and parity drives). Hot Spare drives are not counted. The exact cost is provided after the analysis — it depends on the number of drives, RAID level, and extent of damage.
Standard
Typical RAID arrays
from 600 PLN / drive
- RAID 0, RAID 1, RAID 5, RAID 10
- SATA, SAS, NVMe interfaces
- NAS devices: Synology, QNAP, Asustor
- Hot Spare drives not counted in price
Advanced
Advanced configurations
from 900 PLN / drive
- RAID 6, RAID 50, RAID 60
- Arrays with more than 8 drives
- SAN, ZFS, Hyper-V, VMware systems
- Non-standard hardware controllers
Process
How does data recovery work?
Ship your hardware
Just the drives (with bay order marked) or the entire device — in person or by courier.
Analysis & quote
We examine every drive, reconstruct the array configuration, classify the case, and provide a quote.
File verification
You decide whether to proceed based on the technical report and the list of recoverable files.
Data delivery
Recovered data is copied to a new drive or made available for download from our server.
Video from the lab
Failed RAID 5 array
See what the process looks like in practice
We record our work to show the real service process. This video covers the full procedure of preparing drives, imaging, and array reconstruction — two of four drives in a RAID 5 configuration had failed, putting the system into Fail mode. The recovery result: 99% of data intact, 1 file corrupted.
Technical capabilities
Equipment built for the toughest jobs
Working with RAID arrays requires both hardware for individual drive recovery and software that understands RAID structures, file systems, and virtualisation. We handle configurations that automated solutions cannot cope with.
Hardware & software
- AceLab PC-3000 Express with RAID extension
- AceLab PC-3000 Express with SSD extension
- UFS Explorer Technician
Supported RAID levels
- RAID 0, 1, 1E, 10, 0+1, 3, 4, 5, 51, 5E, 5EE, 6, 60, JBOD
- Synology SHR / SHR2
- mdadm / LVM, ZFS / QZFS (mirror, RAIDZ)
- Vendor-specific configurations
Systems & vendors
- Synology, QNAP, Asustor, TerraMaster, Buffalo
- Dell, HP, IBM, Fujitsu, Lenovo, Netgear
- NTFS, ext4, XFS, Btrfs, HFS+, APFS, exFAT
- VMFS (VMware), Hyper-V, Proxmox
Logical damage in a RAID array
Not every array failure stems from physical drive damage. The logical layer is often the problem.
Common causes include accidentally deleted or overwritten files, corrupted partitions, file system errors after a power failure, or the effects of incorrect array reconfiguration.
It may also involve accidentally overwriting controller RAID settings, a configuration reset during a NAS firmware update, or incorrect drive assignment to a new volume.
Do not run chkdsk or fsck — automatic "repairs" frequently delete metadata and eliminate any chance of recovery.
Drive failure in an array
One or more failed drives is the most common cause of RAID array problems.
This may be mechanical damage (in HDDs), electronics failure, NAND cell degradation (in SSDs), or accumulating bad sectors that undermine the consistency of the entire system.
The key point is that RAID 5 tolerates only one failed drive. If another drive fails during a rebuild — a significant risk, since drives from the same production batch often fail around the same time — the array fails completely.
Do not hot-swap drives without consulting a specialist — performing operations in the wrong order can overwrite data on healthy drives.
Problems with the RAID controller
Failures are not limited to drives — a damaged controller or array management software can block data access even when all drives are working correctly.
Corrupted firmware, a configuration reset, or hardware faults can prevent the device from assembling the array properly, and sometimes it proposes creating a new one — which means overwriting metadata and losing data.
Replacing the controller with an identical model is not safe without first making sector-by-sector images — firmware version differences can trigger array re-initialisation.
System-wide failures and servers
In more complex environments the problem may affect not just individual drives but the entire server or storage system.
Loss of volume access, software failures, or errors in the SSD cache can completely paralyse the array even when the individual drives are working correctly.
In virtualisation environments (VMware, Hyper-V, Proxmox) an array failure often means losing access to many virtual machines simultaneously. If the environment is still running — do not delay backing up critical data.
What not to do after a RAID failure
After a RAID array problem occurs, it is very easy to make the situation worse and permanently lose data.
- Starting another array rebuild — every reconstruction attempt in a damaged state can overwrite existing data
- Swapping the controller or device "to test" — firmware and configuration differences often overwrite RAID metadata
- Hot-swapping drives — inserting or swapping drives in hot-swap mode can introduce additional errors and break consistency
- Using chkdsk, fsck, or file system repair tools — automatic "repairs" frequently delete or overwrite data
- Reinstalling the NAS system or server OS — overwrites the RAID configuration and metadata, making recovery harder or impossible
- Ignoring system warnings — if the array is still running, copy critical data immediately before taking any corrective action
FAQ
Frequently asked questions
Can I keep using an array in "degraded" state?
No. Power off the device, do not start a rebuild, and do not hot-swap drives. A degraded array operates without parity protection — another failure means total data loss.
Can I rebuild the array myself after a failure?
We strongly advise against it. It is easy to overwrite metadata and permanently destroy any chance of recovery. The correct process always starts with sector-by-sector imaging of every drive — all reconstruction work is then performed on those images, never on the original drives.
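For readers with a Linux background: sector-by-sector imaging of a single member drive is typically done with a tool such as GNU ddrescue. A minimal sketch, assuming the failing drive appears as /dev/sdb and a destination with enough free space is mounted at /mnt/images (both names are illustrative, not from any specific case):

```shell
# Image the whole drive, not the assembled array: read /dev/sdb sector
# by sector, skip unreadable areas on the first pass, then retry bad
# sectors up to 3 times. Direct disc access (-d) bypasses the kernel cache.
# The map file records progress, so an interrupted run can be resumed.
ddrescue -d -r3 /dev/sdb /mnt/images/sdb.img /mnt/images/sdb.map
```

All further reconstruction is then performed on sdb.img (for example via loop devices); the original drive is never written to.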
How many failed drives can RAID 5 and RAID 6 tolerate?
RAID 5 tolerates the failure of one drive; RAID 6 tolerates two. Keep in mind, however, that additional logical errors, bad sectors, or drives from the same production batch (which often fail simultaneously) can prevent an array from starting even within those theoretical limits.
Do I need to send the whole server or just the drives?
In most cases the drives alone are sufficient, along with information about their bay order. For complex SAN configurations, non-standard hardware controllers, or active encryption, sending the entire device may be necessary or helpful.
Does the drive order matter?
Yes — label each drive with its bay number before removing it and do not mix up the order. Incorrect drive order can prevent RAID configuration reconstruction or lead to data being overwritten during a recovery attempt.
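On mdadm-based NAS devices (Synology, QNAP, and most Linux NAS firmware), each member drive also stores its slot number in the RAID superblock, which is how the original order can often be reconstructed even when labels were lost. A read-only check, assuming the drive's data partition shows up as /dev/sdb3 (an illustrative name — partition layout varies by vendor):

```shell
# Read the md superblock from one member partition; this does not write
# to the drive. The "Device Role" line reports the drive's slot in the array.
mdadm --examine /dev/sdb3
```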
Do you support Synology SHR, mdadm, ZFS, VMFS, XFS, NTFS, etc.?
Yes. We support SHR/SHR2, mdadm/LVM, hardware RAID, NTFS, exFAT, ext4, XFS, Btrfs, HFS+, APFS, VMFS (VMware), Hyper-V, Proxmox, as well as ZFS and QZFS (mirror/RAIDZ).
The array is encrypted — what do you need for recovery?
You will need the encryption password or key, or the key manager from the device. For NAS devices (Synology, QNAP) a key export or administrator password is often required. Without the key, recovering encrypted content is impossible.
Are Hot Spare drives counted in the price?
No. Hot Spare drives are not included in the service quote.
Do you guarantee 100% data recovery?
No. Our goal is maximum safe recovery — the outcome depends on drive condition, the history of the failure, and any actions taken beforehand. We always provide a detailed results report before delivering the data.
Do you need the RAID configuration details?
It is not required, but it helps. If possible, note and include: RAID type, block/stripe size, drive order, volume number, and event logs from the time of the failure.
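On a Linux server or a NAS with shell access, most of those details can be captured read-only before shipping the drives. A sketch assuming an mdadm software array named /dev/md0 and a member drive at /dev/sda (both names are illustrative):

```shell
# RAID level, chunk (stripe) size, member order, and array state:
mdadm --detail /dev/md0

# Quick overview of all md arrays known to the kernel:
cat /proc/mdstat

# SMART health and error log for a member drive (repeat per drive):
smartctl -a /dev/sda
```

Saving this output to a text file and sending it along with the drives can noticeably shorten the analysis stage.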
Get in touch
Have a question about your array?
You don't need to know the configuration or technical parameters of your array. Just describe the situation — our engineer will analyse the possible scenarios and prepare a plan of action.
During business hours we reply within 20 minutes. Outside business hours we respond as quickly as possible.