Date:	Mon, 7 May 2012 18:25:38 +0200
From:	Martin Steigerwald <Martin@...htvoll.de>
To:	Daniel Pocock <daniel@...ock.com.au>
Cc:	linux-ext4@...r.kernel.org
Subject: Re: ext4, barrier, md/RAID1 and write cache

On Monday, 7 May 2012, Daniel Pocock wrote:
> I've been having some NFS performance issues, and have been
> experimenting with the server filesystem (ext4) to see if that is a
> factor.

Which NFS version is this?
 
> The setup is like this:
> 
> (Debian 6, kernel 2.6.39)
> 2x SATA drive (NCQ, 32MB cache, no hardware RAID)
> md RAID1
> LVM
> ext4
> 
> a) If I use data=ordered,barrier=1 and `hdparm -W 1' on the drive, I
> observe write performance over NFS of 1MB/sec (unpacking a big source
> tarball)

Is this a realistic workload scenario for production use?

> b) If I use data=writeback,barrier=0 and `hdparm -W 1' on the drive, I
> observe write performance over NFS of 10MB/sec
> 
> c) If I just use the async option on NFS, I observe up to 30MB/sec
> 
> I believe (b) and (c) are not considered safe against filesystem
> corruption, so I can't use them in practice.

Partly.

b) can harm filesystem consistency unless you disable the write cache on 
the disks.

c) won't harm local filesystem consistency, but if the NFS server breaks 
down, any data that the NFS clients sent to the server for writing and 
that has not yet been written out is gone.
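As a rough sketch of the three setups being compared (the device, mount 
point, and drive names are assumptions, not taken from your mail):

```shell
# a) safe defaults: ordered data mode with barriers, drive write cache on
mount -o data=ordered,barrier=1 /dev/md0 /srv/nfs    # hypothetical names
hdparm -W 1 /dev/sda /dev/sdb

# b) faster, but unsafe while the drive write caches stay enabled
mount -o data=writeback,barrier=0 /dev/md0 /srv/nfs
# ...unless you turn the caches off:
hdparm -W 0 /dev/sda /dev/sdb

# c) async NFS export (line in /etc/exports): the local filesystem stays
# consistent, but client data not yet written is lost on a server crash
#   /srv/nfs  *(rw,async,no_subtree_check)
```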

> - or must I just use option (b) but make it safer with battery-backed
> write cache?

If you want both performance and safety, that is the best of the options 
you mentioned, provided the workload really is I/O bound on the local 
filesystem.

Of course you can try the usual tricks: mount with noatime, remove the 
rsize and wsize options on the NFS clients if they have a new enough 
kernel (they autotune to much higher values than the often recommended 
8192 or 32768 bytes; look at /proc/mounts), put the ext4 journal onto a 
separate disk to reduce head seeks, check whether enough NFS server 
threads are running, try a different filesystem, and so on.
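A sketch of checking and adjusting a few of those knobs (the journal 
device /dev/sdc1 and the md device are hypothetical; the journal 
migration destroys the existing journal and needs the filesystem 
unmounted):

```shell
# See which rsize/wsize the client actually negotiated (autotuned on
# recent kernels when the options are left off the mount):
grep nfs /proc/mounts

# Check and raise the number of NFS server threads (on Debian, set
# RPCNFSDCOUNT in /etc/default/nfs-kernel-server to make it permanent):
cat /proc/fs/nfsd/threads
rpc.nfsd 32

# Move the ext4 journal to a separate disk -- example only:
mke2fs -O journal_dev /dev/sdc1          # format the external journal
tune2fs -O ^has_journal /dev/md0         # drop the internal journal
tune2fs -j -J device=/dev/sdc1 /dev/md0  # attach the external one
```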

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
