Date: Tue, 7 May 2024 11:15:02 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>, linux-ext4@...r.kernel.org
Cc: linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, tytso@....edu, adilger.kernel@...ger.ca,
 jack@...e.cz, hch@...radead.org, djwong@...nel.org, david@...morbit.com,
 willy@...radead.org, zokeefe@...gle.com, yi.zhang@...wei.com,
 chengzhihao1@...wei.com, yukuai3@...wei.com, wangkefeng.wang@...wei.com
Subject: Re: [PATCH v4 02/34] ext4: check the extent status again before
 inserting delalloc block

On 2024/4/29 22:59, Ritesh Harjani (IBM) wrote:
> Zhang Yi <yi.zhang@...weicloud.com> writes:
> 
>> On 2024/4/27 0:39, Ritesh Harjani (IBM) wrote:
>>> Zhang Yi <yi.zhang@...weicloud.com> writes:
>>>
>>>> On 2024/4/26 20:57, Ritesh Harjani (IBM) wrote:
>>>>> Ritesh Harjani (IBM) <ritesh.list@...il.com> writes:
>>>>>
>>>>>> Zhang Yi <yi.zhang@...weicloud.com> writes:
>>>>>>
>>>>>>> From: Zhang Yi <yi.zhang@...wei.com>
>>>>>>>
>>>>>>> Currently we look up the extent status entry without holding i_data_sem
>>>>>>> before inserting a delalloc block. This works fine in the buffered write
>>>>>>> path because it holds i_rwsem and the folio lock, and in the mmap path
>>>>>>> because it holds the folio lock, so the locklessly found extent can't be
>>>>>>> modified concurrently. But it can race with fallocate, since fallocate
>>>>>>> allocates blocks without holding i_rwsem or the folio lock.
>>>>>>>
>>>>>>> ext4_page_mkwrite()             ext4_fallocate()
>>>>>>>  block_page_mkwrite()
>>>>>>>   ext4_da_map_blocks()
>>>>>>>    //find hole in extent status tree
>>>>>>>                                  ext4_alloc_file_blocks()
>>>>>>>                                   ext4_map_blocks()
>>>>>>>                                    //allocate block and unwritten extent
>>>>>>>    ext4_insert_delayed_block()
>>>>>>>     ext4_da_reserve_space()
>>>>>>>      //reserve one more block
>>>>>>>     ext4_es_insert_delayed_block()
>>>>>>>      //drop unwritten extent and add delayed extent by mistake
>>>>>>>
>>>>>>> Then the delalloc extent is wrong until writeback, the extra reserved
>>>>>>> block can never be released, and it triggers the warning below:
>>>>>>>
>>>>>>>  EXT4-fs (pmem2): Inode 13 (00000000bbbd4d23): i_reserved_data_blocks(1) not cleared!
>>>>>>>
>>>>>>> Holding i_data_sem in write mode directly could fix the problem, but it
>>>>>>> is expensive; instead we should keep the lockless check and check the
>>>>>>> extent again once we need to add a new delalloc block.
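
[For illustration only, a minimal user-space sketch of the "lockless check,
then re-check under the write lock" idea described above. None of the names
below are real ext4 interfaces: extent_cache stands in for the extent status
tree, data_sem for i_data_sem, and the two functions for the page-mkwrite and
fallocate sides.]

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum { ES_HOLE, ES_DELAYED, ES_UNWRITTEN };

static _Atomic int extent_cache = ES_HOLE;                     /* "extent status tree" */
static pthread_rwlock_t data_sem = PTHREAD_RWLOCK_INITIALIZER; /* "i_data_sem" */

/* Page-mkwrite/buffered-write side: lockless fast path plus re-check. */
static void insert_delalloc(void)
{
	if (atomic_load(&extent_cache) != ES_HOLE)	/* lockless lookup */
		return;

	pthread_rwlock_wrlock(&data_sem);
	/*
	 * Re-check: a concurrent "fallocate" may have turned the hole into
	 * an unwritten extent after the lockless lookup above.
	 */
	if (atomic_load(&extent_cache) == ES_HOLE)
		atomic_store(&extent_cache, ES_DELAYED);	/* reserve + insert */
	pthread_rwlock_unlock(&data_sem);
}

/* Fallocate side: allocates without holding i_rwsem or the folio lock. */
static void *fallocate_like(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&data_sem);
	if (atomic_load(&extent_cache) == ES_HOLE)
		atomic_store(&extent_cache, ES_UNWRITTEN);
	pthread_rwlock_unlock(&data_sem);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, fallocate_like, NULL);
	insert_delalloc();
	pthread_join(t, NULL);

	/* With the re-check, a delayed extent never overwrites an unwritten
	 * one, so no spurious reservation is left behind. */
	printf("final extent state: %d\n", atomic_load(&extent_cache));
	return 0;
}
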
>>>>>>
>>>>>> Hi Zhang, 
>>>>>>
>>>>>> It's a nice finding. I was wondering if this was caught in any of the
>>>>>> xfstests?
>>>>>>
>>>>
>>>> Hi, Ritesh
>>>>
>>>> I caught this issue when I tested my iomap series with generic/344 and
>>>> generic/346. It's easy to reproduce because iomap's buffered write path
>>>> doesn't hold the folio lock while inserting delalloc blocks, so it can
>>>> be raced by the mmap page fault path. But buffer_head's buffered write
>>>> path can't trigger this problem.
>>>
>>> Ya, right! That's the difference in how ->map_blocks() is called
>>> between the buffer_head and iomap paths. In iomap, the ->map_blocks()
>>> call happens first to map a large extent, and then it iterates over all
>>> the locked folios covering the mapped extent to do the writes.
>>> Whereas in buffer_head, while iterating we first instantiate/lock the
>>> folio and then call ->map_blocks() to map an extent for the given folio.
>>>
>>> ... So this opens up a window for a race between the iomap buffered
>>> write path and the page mkwrite path when inserting delalloc block
>>> entries.
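
[Again purely illustrative, not the real iomap/buffer_head code: a tiny
user-space sketch of the ordering difference described above, with made-up
stub functions.]

#include <stdio.h>

static void map_blocks(int from, int to) { printf("map_blocks(%d-%d)\n", from, to); }
static void lock_folio(int i)            { printf("lock folio %d\n", i); }
static void write_folio(int i)           { printf("write folio %d\n", i); }

static void iomap_style_write(int nr_folios)
{
	/* Map one large extent first (delalloc entries are inserted here,
	 * before any folio lock is held)... */
	map_blocks(0, nr_folios - 1);
	/* ...then lock and write each folio covered by that mapping. */
	for (int i = 0; i < nr_folios; i++) {
		lock_folio(i);
		write_folio(i);
	}
}

static void buffer_head_style_write(int nr_folios)
{
	/* Lock each folio first, then map blocks for that folio only, so
	 * the mapping happens under the folio lock. */
	for (int i = 0; i < nr_folios; i++) {
		lock_folio(i);
		map_blocks(i, i);
		write_folio(i);
	}
}

int main(void)
{
	puts("iomap-style ordering:");
	iomap_style_write(2);
	puts("buffer_head-style ordering:");
	buffer_head_style_write(2);
	return 0;
}
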
>>>
>>>> The race between the buffered write path and the fallocate path was
>>>> discovered while I was analyzing the code, so I'm not sure whether it
>>>> can be caught by xfstests now; at least I haven't noticed this problem
>>>> so far.
>>>>
>>>
>>> Did you mean the race between the page fault path and the fallocate
>>> path here? The buffered write path and the fallocate path should not
>>> race, since both take the inode_lock. I guess you meant the page fault
>>> path and the fallocate path, for which you wrote this patch too :)
>>
>> Yep.
>>
>>>
>>> I am surprised that we cannot see this race between page mkwrite and
>>> fallocate in fstests when inserting da entries into the extent status
>>> cache, because the race you identified looks legitimate and mostly
>>> happens because ext4_da_map_blocks() was not doing the right thing.
>>> ... Looking at src/holetest, it doesn't really exercise this path,
>>> so maybe we could write such an fstest to trigger this race.
>>>
>>
>> I guess the stress tests and smoke tests in fstests may have caught it,
>> e.g. generic/476. But since the only symptom of the race is a single
>> error message in ext4_destroy_inode(), we can't detect it unless we go
>> and check the logs manually.
> 
> Hi Zhang,
> 
> I wasn't able to reproduce any error messages with generic/476.
> 
>>
>> I suppose we need to add more warnings, something like this; how does it sound?
>>
>> diff --git a/fs/ext4/super.c b/fs/ext4/super.c
>> index c8b691e605f1..4b6fd9b63b12 100644
>> --- a/fs/ext4/super.c
>> +++ b/fs/ext4/super.c
>> @@ -1255,6 +1255,8 @@ static void ext4_percpu_param_destroy(struct ext4_sb_info *sbi)
>>  	percpu_counter_destroy(&sbi->s_freeclusters_counter);
>>  	percpu_counter_destroy(&sbi->s_freeinodes_counter);
>>  	percpu_counter_destroy(&sbi->s_dirs_counter);
>> +	WARN_ON_ONCE(!ext4_forced_shutdown(sbi->s_sb) &&
>> +		     percpu_counter_sum(&sbi->s_dirtyclusters_counter));
>>  	percpu_counter_destroy(&sbi->s_dirtyclusters_counter);
>>  	percpu_counter_destroy(&sbi->s_sra_exceeded_retry_limit);
>>  	percpu_free_rwsem(&sbi->s_writepages_rwsem);
>> @@ -1476,7 +1478,8 @@ static void ext4_destroy_inode(struct inode *inode)
>>  		dump_stack();
>>  	}
>>
>> -	if (EXT4_I(inode)->i_reserved_data_blocks)
>> +	if (!ext4_forced_shutdown(inode->i_sb) &&
>> +	    WARN_ON_ONCE(EXT4_I(inode)->i_reserved_data_blocks))
>>  		ext4_msg(inode->i_sb, KERN_ERR,
>>  			 "Inode %lu (%p): i_reserved_data_blocks (%u) not cleared!",
>>  			 inode->i_ino, EXT4_I(inode),
>>
> 
> I also ran ext4 -g auto and couldn't reproduce anything with the above
> patch. Please note that I didn't use this patch series for testing; I was
> running xfstests on an upstream kernel with the above diff (since the idea
> was that the problem already exists in the upstream kernel, and we wanted
> to see whether we can observe the race between the page mkwrite and
> fallocate paths).
> 

I also ran fstests -g smoke for about 2 days and couldn't reproduce this issue
either, even after modifying generic/476's fsstress to only run mmap write and
fallocate. It's pretty hard to reproduce this issue through stress tests. For
now, it can only be reproduced on my machine if I add a strategic delay in
ext4_da_map_blocks() before taking i_data_sem in write mode, but ext4's error
injection infrastructure doesn't support injecting delays the way XFS's does.
So I guess there is still a lot of work to do if we want to reproduce it
reliably in fstests.
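
For what it's worth, here is a rough user-space analogue (not ext4 or fstests
code; all of the names are invented) of why such a strategic delay makes the
race easy to hit: without the re-check under the lock, sleeping between the
lockless lookup and taking the write lock loses the update almost every time.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

enum { ES_HOLE, ES_DELAYED, ES_UNWRITTEN };

static _Atomic int extent_cache = ES_HOLE;
static pthread_rwlock_t data_sem = PTHREAD_RWLOCK_INITIALIZER;

/* Buggy page-mkwrite side: no re-check after taking the write lock. */
static void *mkwrite_like(void *arg)
{
	(void)arg;
	if (atomic_load(&extent_cache) == ES_HOLE) {	/* lockless lookup finds a hole */
		usleep(100 * 1000);			/* "strategic delay" before the lock */
		pthread_rwlock_wrlock(&data_sem);
		atomic_store(&extent_cache, ES_DELAYED);	/* blindly insert delayed extent */
		pthread_rwlock_unlock(&data_sem);
	}
	return NULL;
}

static void *fallocate_like(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&data_sem);
	if (atomic_load(&extent_cache) == ES_HOLE)
		atomic_store(&extent_cache, ES_UNWRITTEN);	/* allocate unwritten extent */
	pthread_rwlock_unlock(&data_sem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, mkwrite_like, NULL);
	usleep(10 * 1000);	/* let mkwrite_like pass its lockless lookup first */
	pthread_create(&b, NULL, fallocate_like, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	if (atomic_load(&extent_cache) == ES_DELAYED)
		puts("race hit: unwritten extent replaced by a delayed extent");
	return 0;
}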

Thanks,
Yi.

