Date:	Tue, 10 Sep 2013 14:53:50 +0200
From:	Jan Kara <jack@...e.cz>
To:	"Yan, Zheng" <ukernel@...il.com>
Cc:	Jan Kara <jack@...e.cz>, "Yan, Zheng" <zheng.z.yan@...el.com>,
	linux-ext4@...r.kernel.org, Theodore Ts'o <tytso@....edu>,
	lkp@...ux.intel.com
Subject: Re: [PATCH] ext4: fix dirty pages writeback regression.

On Tue 10-09-13 13:15:19, Jan Kara wrote:
> On Tue 10-09-13 19:01:16, Yan, Zheng wrote:
> > On Tue, Sep 10, 2013 at 5:17 PM, Jan Kara <jack@...e.cz> wrote:
> > > On Tue 10-09-13 17:10:13, Yan, Zheng wrote:
> > >> On 09/10/2013 05:00 PM, Jan Kara wrote:
> > >> > On Tue 10-09-13 10:02:58, Yan, Zheng wrote:
> > >> >> From: "Yan, Zheng" <zheng.z.yan@...el.com>
> > >> >>
> > >> >> Our Linux Kernel Performance project found that commit 4e7ea81db5
> > >> >> (ext4: restructure writeback path) introduced a regression. After
> > >> >> the commit, ext4 does not merge adjacent mapped dirty pages during
> > >> >> writeback. The "!buffer_delay(bh) && !buffer_unwritten(bh)" check
> > >> >> in mpage_add_bh_to_extent() prevents the merging.
> > >> >>
> > >> >> Signed-off-by: Yan, Zheng <zheng.z.yan@...el.com>
> > >> >> ---
> > >> >>  fs/ext4/inode.c | 3 +--
> > >> >>  1 file changed, 1 insertion(+), 2 deletions(-)
> > >> >>
> > >> >> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> > >> >> index c79fd7d..bfeb8b2 100644
> > >> >> --- a/fs/ext4/inode.c
> > >> >> +++ b/fs/ext4/inode.c
> > >> >> @@ -1944,8 +1944,7 @@ static bool mpage_add_bh_to_extent(struct mpage_da_data *mpd, ext4_lblk_t lblk,
> > >> >>    struct ext4_map_blocks *map = &mpd->map;
> > >> >>
> > >> >>    /* Buffer that doesn't need mapping for writeback? */
> > >> >> -  if (!buffer_dirty(bh) || !buffer_mapped(bh) ||
> > >> >> -      (!buffer_delay(bh) && !buffer_unwritten(bh))) {
> > >> >> +  if (!buffer_dirty(bh) || !buffer_mapped(bh)) {
> > >> >   Sadly it isn't that easy. The condition is there for a reason... The
> > >> > reason is that we are looking for an extent to map. When we already have
> > >> > some buffer to map and then there is a buffer which doesn't need mapping,
> > >> > we cannot just add it to the extent because then we would allocate too many
> > >> > blocks.
> > >>
> > >> the "(b_state & BH_FLAGS) == map->m_flags" check in
> > >> mpage_add_bh_to_extent() should prevent delayed and non-delayed dirty
> > >> pages from merging. What am I missing here?
> > >   Yes, that is true. Sorry, I didn't realize this originally. But what
> > > difference would your patch make then?
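
A minimal user-space sketch of the state-match check being discussed (the
flag values and struct fields below are simplified stand-ins, not the real
fs/ext4/inode.c definitions):

#include <stdbool.h>
#include <stdio.h>

#define BH_DELAY      (1u << 0)   /* delayed-allocation buffer        */
#define BH_UNWRITTEN  (1u << 1)   /* buffer over an unwritten extent  */
#define BH_FLAGS      (BH_DELAY | BH_UNWRITTEN)

struct map { unsigned int m_len, m_flags; };

/* A buffer may join the extent being accumulated only while its
 * delay/unwritten state matches that of the buffers already in it. */
static bool state_matches(const struct map *map, unsigned int b_state)
{
        if (map->m_len == 0)
                return true;            /* nothing accumulated yet */
        return (b_state & BH_FLAGS) == map->m_flags;
}

int main(void)
{
        /* Extent already holds four delayed buffers... */
        struct map map = { .m_len = 4, .m_flags = BH_DELAY };

        /* ...so another delayed buffer merges, a plain mapped one does not. */
        printf("delayed buffer merges:      %d\n", state_matches(&map, BH_DELAY));
        printf("plain mapped buffer merges: %d\n", state_matches(&map, 0));
        return 0;
}
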
> > >
> > 
> > Contiguous dirty pages that are "!buffer_delay(bh) && !buffer_unwritten(bh)"
> > can be merged during writeback. I think the change will reduce the number
> > of bios for workloads that re-write existing data.
>   I see. The code is actually supposed to achieve that as well - when we
> have a sequence of mapped and dirty buffers (pages), we keep map->m_len ==
> 0 and just always return true from mpage_add_bh_to_extent(). This way
> the caller keeps adding pages to the current bio in ext4_bio_write_page()
> while they are contiguous.
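
The behaviour described above - keep map->m_len == 0 while buffers are
already mapped, keep returning true, and let the caller go on filling the
current bio - could be modelled roughly like this (a user-space sketch with
simplified names and return-value conventions, not the actual ext4 code):

#include <stdbool.h>
#include <stdio.h>

struct map { unsigned int m_len; };     /* blocks pending mapping */

/* true  -> caller may continue with the next buffer/page
 * false -> state changed; caller must map/submit what it has first */
static bool add_bh_to_extent(struct map *map, bool needs_mapping)
{
        if (!needs_mapping)             /* already mapped: nothing to accumulate */
                return map->m_len == 0; /* ...but only while no extent is pending */
        map->m_len++;                   /* delayed/unwritten: grow pending extent */
        return true;
}

int main(void)
{
        struct map map = { 0 };
        bool needs_mapping[] = { false, false, false, true, false };
        int pages_in_current_bio = 0;

        for (unsigned i = 0; i < 5; i++) {
                if (!add_bh_to_extent(&map, needs_mapping[i])) {
                        printf("stop: %d pages in current bio, %u blocks pending\n",
                               pages_in_current_bio, map.m_len);
                        break;
                }
                if (map.m_len == 0)     /* nothing pending: page can go out now */
                        pages_in_current_bio++;
        }
        return 0;
}

The first three already-mapped dirty pages end up in one bio; the mapped
page that follows the delayed one is refused, which is the "cannot just add
it to the extent" case from earlier in the thread.
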
> 
> However, I agree there is something broken somewhere in this logic, because
> I can reproduce the regression with that commit as well and the request
> sizes are somewhat smaller after the patch (not sure whether that alone can
> explain the rather big throughput drop). I'm investigating it now.
  I've tracked down the problem. It is that we were too aggressive in
writing back pages and thus we were finding fewer consecutive dirty pages
(with random IO workloads, the longer you wait with writeback, the better
the IO pattern you get). And the writeback was too aggressive because we
weren't properly terminating the writeback when nr_to_write dropped to
zero. After I fixed that condition I'm getting about 25% better throughput
than before the code reorg... I'll be sending the patch with numbers etc.
later this afternoon.
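
The termination condition in question could look roughly like the sketch
below (a stand-alone user-space model; only the nr_to_write idea comes from
the mail above, the struct layout and helper name are illustrative):

#include <stdbool.h>
#include <stdio.h>

struct writeback_control { long nr_to_write; }; /* only the relevant field */

/* Pretend to write one dirty page; return false once the budget is gone. */
static bool write_back_one_page(struct writeback_control *wbc, long index)
{
        printf("writing page %ld\n", index);
        wbc->nr_to_write--;
        return wbc->nr_to_write > 0;    /* stop as soon as the budget hits zero */
}

int main(void)
{
        struct writeback_control wbc = { .nr_to_write = 3 };

        for (long idx = 0; idx < 10; idx++) {
                if (!write_back_one_page(&wbc, idx)) {
                        printf("budget exhausted, leaving the rest dirty\n");
                        break;  /* waiting lets more contiguous dirty pages pile up */
                }
        }
        return 0;
}
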

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
