Date:	Wed, 17 Apr 2013 20:40:08 +0530
From:	Subranshu Patel <spatel.ml@...il.com>
To:	linux-ext4@...r.kernel.org
Subject: fsck memory usage

I performed some recovery (fsck) tests with a large EXT4 filesystem.
The filesystem size was 500 GB (3 million files, 5000 directories).
I performed a forced check on the clean filesystem and measured the
memory usage, which was around 2 GB.
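
For reference, the forced check and the memory measurement can be
reproduced with something like the following (the device name is a
placeholder):

    # Force a full check of the clean, unmounted filesystem (read-only)
    # and report peak memory via GNU time's maximum resident set size.
    /usr/bin/time -v e2fsck -f -n /dev/sdX1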

Then I corrupted metadata using debugfs - 10% of the files, 10% of
the directories, and some superblock attributes. Running fsck on the
corrupted filesystem showed memory usage of around 8 GB, a much
larger value.
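
In case it helps to reproduce: one way to clobber inode metadata with
debugfs looks roughly like this (the inode number and field are
illustrative):

    # Open the filesystem read-write and overwrite a single inode field
    # (set_inode_field, abbreviated sif); inode 12345 is a placeholder.
    debugfs -w -R "sif <12345> links_count 0" /dev/sdX1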

1. Is there a way to reduce the memory usage, apart from the
scratch_files option (e2fsck.conf sketch below), which increases the
recovery time?

2. This question is not strictly about EXT4, but in a real-world
scenario, how is this kind of situation (large memory usage) handled
in large-scale filesystem deployments when actual filesystem
corruption occurs (perhaps due to a fault in the hardware or
controller)?
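
For reference, the scratch_files option I mean is the one enabled via
/etc/e2fsck.conf, roughly:

    [scratch_files]
            directory = /var/cache/e2fsck

This trades in-memory data structures for on-disk scratch files, but
in my tests it made the check much slower.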
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
