Date:	Tue, 19 Mar 2013 13:22:46 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Theodore Ts'o <tytso@....edu>
Cc:	Ben Myers <bpm@....com>, Eric Sandeen <sandeen@...hat.com>,
	xfs-oss <xfs@....sgi.com>, linux-ext4@...r.kernel.org,
	Eric Whitney <enwlinux@...il.com>
Subject: Re: possible dev branch regression - xfstest 285/1k

On Mon, Mar 18, 2013 at 10:00:56PM -0400, Theodore Ts'o wrote:
> On Tue, Mar 19, 2013 at 12:47:18PM +1100, Dave Chinner wrote:
> > Sorry about this - I've mixed up my threads about ext4 having
> > problems with zero-out being re-enabled. I thought this was a
> > cross-post of the 218 issue....
> > 
> > However, the same reasoning can be applied to 285 - the file sizes,
> > the size of the holes and the size of the data is all completely
> > arbitrary. If we make the holes in the files larger, then the
> > zero-out problem simply goes away.
> 
> Right.  That was my observation.  We can either make the holes larger,
> by changing:
> 
>    pwrite(fd, buf, bufsize, bufsize*10);
> 
> to
> 
>    pwrite(fd, buf, bufsize, bufsize*42);
>    
> ... and then changing the expected values returned by
> SEEK_HOLE/SEEK_DATA.  (By the way; this only matters when we are
> testing 1k blocks; if we are using a 4k block size in ext4, the test
> currently passes.)
> 
> Or we could set some ext4-specific tuning parameters into the #218
> shell script, if the file system in question was ext4.

Heh, you just mixed up 218 and 285 yourself. I crossed the streams,
and now the universe is going to end. ;)

Seriously, though, I'd prefer that we not tweak generic tests for
specific filesystems when changing the file layout solves the
problem....
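To make the layout point concrete, here is a minimal sketch (not the actual 285 test; the 1k buffer size and the offsets are illustrative) of how moving the single write further out grows the leading hole:

```shell
#!/bin/sh
# Illustrative sketch only -- not the real xfstest 285. It shows how
# moving the written buffer from bufsize*10 to bufsize*42 makes the
# leading hole four times larger, so a filesystem that zeroes out
# small holes no longer converts the hole to written blocks.
set -e
bufsize=1024

for mult in 10 42; do
	f=$(mktemp)
	# Write one buffer at offset bufsize*mult; everything before
	# it is a hole.
	dd if=/dev/zero of="$f" bs=$bufsize count=1 seek=$mult 2>/dev/null
	echo "data at $((mult * bufsize)): hole is $((mult * bufsize)) bytes," \
	     "file size is $(wc -c < "$f") bytes"
	rm -f "$f"
done
```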

> I had assumed that folks would prefer making the holes larger, but
> Eric seemed to prefer the second choice as a better one.
> 
> 
> Hmm....  Another possibility is to define a directory structure where
> each test would look for the existence of some file such as
> fscust/<fs>/<test>, and so if fscust/ext4/218 exists, it would get
> sourced, and this would define potential hook functions that would get
> called after the file system is mounted.  This way, the file system
> specific stuff is kept out of the way of the test script.  Would that
> make adding fs-specific tuning/setup for tests more palatable?
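A minimal shell sketch of the hook mechanism described above (the fscust/<fs>/<test> layout comes from the proposal; the hook function name _post_mount_hook is hypothetical):

```shell
#!/bin/sh
# Sketch of the proposed per-fs customisation lookup. The fscust/
# layout is from the proposal above; the hook function name is
# hypothetical. A throwaway directory stands in for the test tree.
set -e
FSTYP=ext4
seq=218
top=$(mktemp -d)

# An fs-specific file defines optional hook functions for one test.
mkdir -p "$top/fscust/$FSTYP"
cat > "$top/fscust/$FSTYP/$seq" <<'EOF'
_post_mount_hook()
{
	echo "applying ext4-specific tuning for test $seq"
}
EOF

# The generic test sources the customisation file if it exists, then
# calls the hook after the filesystem is mounted.
if [ -f "$top/fscust/$FSTYP/$seq" ]; then
	. "$top/fscust/$FSTYP/$seq"
fi
if type _post_mount_hook >/dev/null 2>&1; then
	_post_mount_hook
fi

rm -rf "$top"
```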

From an architectural POV, I think that if we need filesystem-specific
tuning, it's not a generic test.

If we have a common test that needs different setup and tunings for
each filesystem, then I'd prefer to think of a test "template" that
can be used by the filesystem-specific tests. We already have this
sort of structure for some tests (e.g. _test_generic_punch()), where
we have factored out the common parts of several tests so they can
be shared.

Hence, if we end up needing to do this, I'd prefer to see
something like:

tests/template/foo

and the individual fs tests do:

tests/fs/foo-test

<setup test>
_clean_up()
{
	....
	<undo fs specific tuning>
}

<do fs specific tuning>

. tests/template/foo

<run test>

That way we can create shared test templates without needing to add
functions to the common/ directory, and so the common/
directory can slowly be cleaned up to contain only shared
infrastructure code....
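Fleshed out as a runnable sketch (all paths, test names and function names here are illustrative, not existing xfstests code; a throwaway directory stands in for the test tree), the template layout might look like:

```shell
#!/bin/sh
# Illustrative sketch of the template layout proposed above; the
# paths and function names are hypothetical, not existing xfstests
# code.
set -e
top=$(mktemp -d)
mkdir -p "$top/tests/template" "$top/tests/ext4"

# The shared template carries the common body of the test.
cat > "$top/tests/template/foo" <<'EOF'
_run_foo_test()
{
	echo "running common foo test body"
}
EOF

# The per-fs test applies its tuning, registers cleanup to undo it,
# sources the template and runs it.
cat > "$top/tests/ext4/foo-test" <<'EOF'
_clean_up()
{
	echo "undoing ext4-specific tuning"
}
trap _clean_up EXIT

echo "applying ext4-specific tuning"
. "$TOP/tests/template/foo"
_run_foo_test
EOF

TOP=$top sh "$top/tests/ext4/foo-test"
rm -rf "$top"
```

Because the tuning and its undo live entirely in the per-fs wrapper, a tuning-specific failure shows up in that test's results rather than somewhere inside shared code.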

Indeed, this makes it easy to run the same test with different
tunings and be able to see which tuning broke just by looking at the
test results...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
