Date:   Fri, 1 Oct 2021 10:09:11 -0400
From:   "Theodore Ts'o" <tytso@....edu>
To:     Jan Kara <jack@...e.cz>
Cc:     yangerkun <yangerkun@...wei.com>, linux-ext4@...r.kernel.org,
        yukuai3@...wei.com
Subject: Re: [PATCH 2/2] ext4: check magic even the extent block bh is
 verified

On Fri, Oct 01, 2021 at 11:18:33AM +0200, Jan Kara wrote:
> > 
> > Digging deeper, we found it is actually an xattr block, which can happen
> > through the following steps:
> > 
> > 1. an extent update for file1 removes a leaf extent block (block A)
> > 2. the index (idx) extent block needs to be updated too
> > 3. block A is reallocated as an xattr block and its buffer is marked
> >    verified
> > 4. an I/O error occurs on the idx block, and its buffer is released
> >    later
> > 5. an extent lookup for file1 reads the idx block and sees block A again
> > 6. since the buffer of block A is already marked verified, it is used
> >    directly, which leads to the out-of-bounds access described above
> > 
> > As __ext4_xattr_check_block() does, we can check the magic even when the
> > buffer is already verified, to fix the problem.
> > 
> > Signed-off-by: yangerkun <yangerkun@...wei.com>
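
For illustration only, a rough sketch of the check proposed above; the
placement inside __read_extent_tree_block() and the error path are
assumptions, not the actual patch, and the idea mirrors how
__ext4_xattr_check_block() tests h_magic before trusting the verified bit:

	/* Sketch only: reject the block if the extent header magic is
	 * wrong, even when buffer_verified() is already set, before the
	 * early return that trusts the verified bit. */
	if (ext_block_hdr(bh)->eh_magic != EXT4_EXT_MAGIC) {
		err = -EFSCORRUPTED;
		goto errout;
	}
	if (buffer_verified(bh) && !(flags & EXT4_EX_FORCE_CACHE))
		return bh;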
> 
> Honestly, I'm not sure this is worth it. What you suggest will work if
> the magic is overwritten, but if we reallocate the block for something
> else and the magic happens to stay intact, we still have a problem. The
> filesystem is corrupted at that point, with metadata blocks being
> multiply claimed, and that's very difficult to deal with. Maybe we
> should start ignoring the buffer_verified() bit once the fs is known to
> have errors and recheck the buffer contents on each access? Sure, it
> will be slow, but I have little sympathy for people running filesystems
> with errors... What do people think?
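
For what it's worth, a minimal sketch of what that could look like; the
helper name ext4_buffer_trusted() is hypothetical, while EXT4_ERROR_FS,
s_mount_state and buffer_verified() are the existing bits it builds on:

	/* Hypothetical helper (not existing ext4 code): once the
	 * filesystem has recorded errors, stop trusting the verified
	 * bit so metadata gets re-checked on every access. */
	static inline bool ext4_buffer_trusted(struct super_block *sb,
					       struct buffer_head *bh)
	{
		if (EXT4_SB(sb)->s_mount_state & EXT4_ERROR_FS)
			return false;
		return buffer_verified(bh);
	}

Callers that currently test buffer_verified() directly would go through
such a helper instead.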

At some point, if we transition away from using buffer_heads for the
jbd2 layer and use our own ext4_metadata_buf structure, which
incorporates the journal_head and buffer_head fields, that will allow
us to control our own writeback and to have our own error callbacks,
so we can do things like declare an inode to be bad and never
reference it again.  It would also let us carry a metadata type
field, so we could know that a buffer had been verified as an inode
table block, a bitmap block, or an xattr block.
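
Purely as an illustration of the idea, since no such structure exists
today, one possible shape might be (all names here are hypothetical):

	/* Hypothetical structure; nothing like this exists in ext4 yet. */
	enum ext4_meta_type {
		EXT4_META_UNKNOWN,
		EXT4_META_INODE_TABLE,
		EXT4_META_BITMAP,
		EXT4_META_XATTR,
		EXT4_META_EXTENT,
	};

	struct ext4_metadata_buf {
		struct buffer_head	bh;	/* buffer state under ext4's own control */
		struct journal_head	jh;	/* jbd2 bookkeeping */
		enum ext4_meta_type	type;	/* what this block was last verified as */
		/* per-buffer error callback, e.g. to mark the owning inode bad */
		void (*error_cb)(struct ext4_metadata_buf *mb, int error);
	};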

However, I think the bigger issue is that even if we had a metadata
type field in the buffer_head (or ext4_metadata_buf), we should be
using the metadata validation, and buffer_verified bit, as a backup.
It should not be the primary line of defense.

So what I would suggest doing is preventing the out of bounds
reference in ext4_find_extent() in the first place.  I note we're not
sanity checking the values of EXT4_{FIRST,LAST}_{EXTENT,INDEX} used in
ext4_ext_binsearch() and ext4_ext_binsearch_idx(), and that's probably
how we triggered the out of bounds read in the first place.  The cost
of making sure that pointers returned by
EXT4_{FIRST,LAST}_{EXTENT,INDEX} don't exceed the bounds of the extent
tree node would be minimal, and it would be an additional cross check
which would protect us against the buffer getting corrupted while in
memory (bit flips, or wild pointer dereferences).
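
As a rough sketch of such a cross check (the helper name, signature, and
call site are assumptions; EXT_LAST_INDEX() is the existing macro):

	/* Hypothetical cross check before ext4_ext_binsearch_idx() walks
	 * the node: make sure eh_entries doesn't push the last index
	 * entry past the end of the node itself. */
	static int ext4_ext_idx_in_bounds(struct ext4_extent_header *eh,
					  size_t node_size)
	{
		void *node_end = (void *)eh + node_size;

		if ((void *)(EXT_LAST_INDEX(eh) + 1) > node_end)
			return -EFSCORRUPTED;
		return 0;
	}

A matching check against EXT_LAST_EXTENT() would cover the leaf case in
ext4_ext_binsearch().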

Cheers,

						- Ted
