- Fixed pricing on recovery (you know what you are paying: no nasty surprises).
- Quick recovery turnaround at no extra cost (our average recovery time is 2 days).
- Memory card chip reading services (1st in the UK to offer this service).
- RAID recovery service (a specialist service for business customers who have suffered a failed server rebuild).
- Our offices are 100% UK based and we never outsource any recovery work.
- Strict non-disclosure: privacy and security are 100% guaranteed.
Case Studies
Norwich Data Recovery has been providing data recovery solutions since 2001. Backed by more than 25 years of experience, we have many case studies to showcase. Because we give every client a detailed report explaining why and how their data was lost, the whole process stays transparent. This analytical approach is one of the reasons we have earned so much trust.
Case Study 1 — QNAP TS-233 (Accidental LUN Deletion + New LUN Created)
Client: Financial services (bank)
Appliance: QNAP TS-233 (QTS, ext4 volume, 2-bay)
Reported issue: iSCSI LUN deleted in error; a new LUN created afterwards on the same volume. Previous vendor(s) and OEM unable to assist.
Triage & Risk
- File-based iSCSI LUNs on QTS sit as sparse files on an mdadm → LVM → ext4 stack.
- Creating a new LUN can overwrite unallocated extents previously belonging to the deleted LUN. The priority is to stop all writes and acquire block-level clones of every member disk.
Lab Workflow (Forensic)
- Member imaging (read-only):
  - Hardware imagers with timeout budgeting, per-head zoning on HDDs (if present), ECC-aware passes.
  - Verified with per-image SHA-256.
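The per-image verification step amounts to streaming each clone and comparing its digest against the one recorded at imaging time. A minimal sketch (file paths and function names are illustrative, not our production tooling):

```python
import hashlib

def sha256_of_image(path, chunk_size=4 * 1024 * 1024):
    """Stream a disk image in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_clone(image_path, recorded_digest):
    """Compare a clone's current digest against the digest recorded at imaging time."""
    return sha256_of_image(image_path) == recorded_digest.lower()
```

Chunked reading matters here: multi-terabyte images cannot be read into memory in one call.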
- Virtual rebuild of the QNAP storage stack:
  - Parse mdadm superblocks and assemble a virtual RAID from the clones (no writes to originals).
  - Reconstruct the LVM PV/VG/LV geometry.
  - Mount the ext4 volume read-only (disable journal replay).
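Before attempting the read-only mount, we confirm the assembled volume actually presents an ext4 superblock. A simplified check (the primary superblock sits 1 KiB into the volume, with the little-endian magic 0xEF53 at byte 0x38 of the superblock):

```python
import struct

EXT4_SUPERBLOCK_OFFSET = 1024   # primary superblock starts 1 KiB into the volume
EXT4_MAGIC_OFFSET = 0x38        # s_magic field within the superblock
EXT4_MAGIC = 0xEF53

def looks_like_ext4(volume_path):
    """Check for the ext4 superblock magic before attempting a read-only mount."""
    with open(volume_path, "rb") as f:
        f.seek(EXT4_SUPERBLOCK_OFFSET + EXT4_MAGIC_OFFSET)
        raw = f.read(2)
    if len(raw) < 2:
        return False
    (magic,) = struct.unpack("<H", raw)
    return magic == EXT4_MAGIC
```

If the magic is absent, the virtual RAID or LVM geometry is wrong and must be corrected before going any further.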
- Recover the deleted LUN container:
  - Extent-level scan of ext4 inode/extent trees and journal to locate the orphaned LUN’s sparse-file metadata remnants.
  - Where inode records were purged, derive a synthetic file by mapping the LUN’s physical extents from allocation bitmaps and historical journal entries.
  - Validate by searching for embedded filesystem signatures inside candidate ranges (NTFS boot sectors / VMFS headers / partition tables) to confirm correct contiguity.
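The signature-validation step can be sketched as a sector-aligned scan of a candidate extent. Shown here for NTFS boot sectors only (the OEM ID `NTFS    ` at byte 3 and the 0x55AA end marker at bytes 510-511); the same pattern extends to VMFS headers and partition tables:

```python
SECTOR = 512

def scan_for_ntfs_boot_sectors(buf):
    """Scan a candidate extent (bytes) for NTFS boot-sector signatures.

    An NTFS boot sector carries the OEM ID 'NTFS    ' at byte 3 and the
    0x55AA end marker at bytes 510-511; both are checked at every sector
    boundary and the matching offsets returned.
    """
    hits = []
    for off in range(0, len(buf) - SECTOR + 1, SECTOR):
        sector = buf[off:off + SECTOR]
        if sector[3:11] == b"NTFS    " and sector[510:512] == b"\x55\xaa":
            hits.append(off)
    return hits
```

Finding the expected signature at the expected relative offset is what confirms that the mapped extents are contiguous and in the right order.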
- LUN content extraction:
  - Present the reconstructed LUN as a virtual block device; repair the guest filesystem (NTFS/VMFS/ext) on the clone only, using journal-aware methods (e.g. NTFS $LogFile replay, MFT mirror).
  - Fill small gaps (where the new LUN overwrote blocks) using structure-aware carving and application-level validation (e.g. PST/DB headers, video container indices).
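Application-level validation of carved data boils down to checking format-specific magic bytes. A small sketch covering three of the formats mentioned (the magic values are real; the table is illustrative, not exhaustive):

```python
# Magic numbers for a few application formats commonly validated after carving.
MAGICS = {
    "pst":    (0, b"!BDN"),                  # Outlook PST header
    "sqlite": (0, b"SQLite format 3\x00"),   # SQLite database header
    "mp4":    (4, b"ftyp"),                  # MP4/QuickTime 'ftyp' brand box
}

def classify_carved(data):
    """Return the format names whose magic bytes match the carved buffer."""
    matches = []
    for name, (offset, magic) in MAGICS.items():
        if data[offset:offset + len(magic)] == magic:
            matches.append(name)
    return matches
```

A carved candidate that matches no known header is flagged for manual review rather than delivered as a recovered file.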
Verification & hand-off:
-
File-level hashing and sample validation across business-critical datasets.
-
Delivery via encrypted transfer drive.
-
Outcome: 100% of the bank’s in-scope data set restored (no material gaps detected; the new-LUN overlap did not intersect live data ranges).
Case Study 2 — Synology DS1522+ (Three Failed Disks After User Swap)
Client: SME design studio
Appliance: Synology DS1522+ (Ryzen R1600), SHR-2 (RAID-6) with Btrfs
Reported issue: Three disks reported failed; user swapped drives; an automatic rebuild likely started; NAS no longer boots cleanly.
Triage & Risk
- SHR-2 tolerates two concurrent failures; a third fault plus a partial rebuild can leave divergent metadata epochs and parity contamination.
- Some “failed” disks are often intermittently readable; the correct approach is donor-assisted imaging, then virtual reconstruction of the pre-incident epoch.
Lab Workflow (Forensic + Mechanical)
- HDD stabilisation & imaging:
  - Two drives imaged with adaptive head-map strategies; the third required a head-stack replacement (donor matched by model/firmware/micro-jog; ROM adaptives verified).
  - Multi-pass imaging with short-retry windows and thermal rest cycles; per-clone SHA-256.
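The multi-pass strategy can be sketched in outline: fail fast on bad sectors in the first pass, then revisit only the missing sectors on later passes. The `read_sector` callable is a stand-in for the hardware imager's read path:

```python
def multipass_image(read_sector, total_sectors, passes=3):
    """Image an unstable drive in multiple passes, skipping failures quickly
    and retrying only the missing sectors on later passes.

    read_sector(lba) returns 512 bytes or raises IOError; in the lab this is
    backed by a hardware imager with short retry windows and rest cycles.
    """
    recovered = {}
    pending = set(range(total_sectors))
    for _ in range(passes):
        still_bad = set()
        for lba in sorted(pending):
            try:
                recovered[lba] = read_sector(lba)
            except IOError:
                still_bad.add(lba)   # move on fast; revisit next pass
        pending = still_bad
        if not pending:
            break
    return recovered, pending        # pending = sectors never read
```

Deferring retries keeps head wear and heat down on the first pass, which is often when a marginal drive reads best.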
- Rebuild the Synology stack (virtual):
  - Recover the mdadm layer(s) from superblocks; select a consistent event count (pre-rebuild) across members.
  - Assemble the Btrfs chunk/stripe map; choose the best superblock pair; reconstruct the root tree and subvolume metadata.
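Selecting a consistent event count means grouping members whose mdadm event counters agree and assembling from the largest such group. A simplified sketch that works on already-extracted event counts (member names and values are hypothetical):

```python
def pick_consistent_members(member_events, tolerance=0):
    """Given {member_name: mdadm event count}, select the largest group of
    members whose event counts agree (within `tolerance`), i.e. a consistent
    pre-rebuild epoch to assemble virtually."""
    best = []
    for ev in member_events.values():
        group = [m for m, e in member_events.items() if abs(e - ev) <= tolerance]
        if len(group) > len(best):
            best = group
    return sorted(best)
```

A member whose counter jumped ahead during the aborted rebuild is excluded from the virtual assembly rather than allowed to contaminate it.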
- Parity & epoch conflict resolution:
  - Identify stripes affected during the short rebuild window; majority-vote blocks or exclude contaminated extents using Btrfs checksum guidance.
  - Validate by walking file trees and confirming checksum-good data blocks.
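The majority-vote idea, stripped to its core: where multiple reconstructions of the same stripe disagree, the block value seen by most copies wins. A minimal sketch (real resolution also weighs Btrfs per-block checksums, which this omits):

```python
from collections import Counter

def majority_vote(copies, block_size=4096):
    """Resolve divergent stripe copies block-by-block: the value seen by the
    majority of copies wins; on a tie, the earliest copy's block is kept."""
    length = min(len(c) for c in copies)
    out = bytearray()
    for off in range(0, length, block_size):
        blocks = [bytes(c[off:off + block_size]) for c in copies]
        winner, _ = Counter(blocks).most_common(1)[0]
        out += winner
    return bytes(out)
```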
- Export & verification:
  - Snapshot-consistent export of the primary share sets; per-file hashing; targeted open-in-app tests for project files (Adobe suites, large media).
Outcome: Full recovery of active project shares and archives. NAS returned to client with guidance to rebuild onto new media; original failed disks retained as evidence.
Case Study 3 — MacBook Air 13″ (M1, 2020) Booting to Recovery
Client: Individual professional
Platform: Apple silicon M1; soldered NVMe NAND with SEP-bound (Secure Enclave) keys; APFS with FileVault commonly enabled
Reported issue: Device boots to macOS Recovery/Utilities; user cannot access data.
Note (accuracy): On Apple silicon there is no removable SSD. Traditional “chip-off” is ineffective because the encryption keys are hardware-bound to the Secure Enclave. Correct methodology is board-level stabilisation and credentialed, SEP-assisted imaging/decryption.
Lab Workflow (Board-Level + Logical)
- Stabilise logic board power & boot path:
  - Rail checks (PPBUS, PP3V3_S5, etc.), PMIC inspection; resolve an intermittent TB/USB-C path affecting Share Disk.
  - Enter One True Recovery (1TR) / DFU; perform a Revive (not a Restore) to preserve user data.
- Credentialed acquisition:
  - With the client’s passcode/FileVault credentials, unlock the APFS keybag.
  - Acquire a read-only block image over Thunderbolt (Share Disk) or via asr to a lab target, ensuring no APFS snapshot deletion.
- APFS repair on the image (not the Mac):
  - Walk checkpoint superblocks; rebuild the object map (omap) and volume structures; mount the Data volume read-only.
  - Validate and export user data; repair per-app stores (Photos library, Mail, Notes) as needed.
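Walking checkpoint superblocks means finding every container-superblock copy in the image and taking the one with the highest transaction ID (xid). A deliberately simplified sketch, assuming 4 KiB blocks, the `NXSB` magic at byte 32 of the object, and the xid at byte 16 of the object header; real recovery also walks the checkpoint descriptor area and verifies the Fletcher-64 object checksum:

```python
import struct

BLOCK = 4096
NX_MAGIC = b"NXSB"   # APFS container-superblock magic, 32 bytes into the object

def latest_nx_superblock(image):
    """Scan an APFS container image (bytes) for container-superblock copies
    and return (block_number, xid) of the copy with the highest transaction
    ID, or None if no copy is found."""
    best = None
    for blk in range(len(image) // BLOCK):
        obj = image[blk * BLOCK:(blk + 1) * BLOCK]
        if obj[32:36] != NX_MAGIC:
            continue
        (xid,) = struct.unpack_from("<Q", obj, 16)   # o_xid in the object header
        if best is None or xid > best[1]:
            best = (blk, xid)
    return best
```

Choosing the newest checkpoint that still validates is what allows the omap and volume structures to be rebuilt from a consistent point in time.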
- Verification & delivery:
  - Sample-open critical documents; per-file hashes; deliver on encrypted media.
Outcome: 100% data recovery. Device boot issue left unmodified (client elected for Apple service after data hand-off). No Activation Lock bypass; all work conducted with client’s credentials.
Shared Engineering Principles (All Cases)
- Forensic-first: originals are never modified; we work from verified clones.
- Minimal necessary intervention: mechanical or micro-soldering work only to the extent required to enable safe imaging.
- Stack-accurate reconstruction: we rebuild the exact storage stack (controller/NAS → md/LVM → FS → LUN/VMFS) virtually, then repair on the image.
- Integrity checks: SHA-256 at image and file level; sample validation in native applications.
- Clear limits: truly overwritten blocks, or cryptography without valid keys, are not recoverable; we maximise outcomes via journals, snapshots, replicas and structure-aware methods.