Date: Mon, 22 Jan 2024 20:00:26 -0800
From: Yonghong Song <yonghong.song@...ux.dev>
To: Kees Cook <keescook@...omium.org>, linux-hardening@...r.kernel.org
Cc: Alexei Starovoitov <ast@...nel.org>,
 Daniel Borkmann <daniel@...earbox.net>,
 John Fastabend <john.fastabend@...il.com>,
 Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau
 <martin.lau@...ux.dev>, Song Liu <song@...nel.org>,
 KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>,
 Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
 bpf@...r.kernel.org, "Gustavo A. R. Silva" <gustavoars@...nel.org>,
 Bill Wendling <morbo@...gle.com>, Justin Stitt <justinstitt@...gle.com>,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 43/82] bpf: Refactor intentional wrap-around test


On 1/22/24 4:27 PM, Kees Cook wrote:
> In an effort to separate intentional arithmetic wrap-around from
> unexpected wrap-around, we need to refactor places that depend on this
> kind of math. One of the most common code patterns of this is:
>
> 	VAR + value < VAR
>
> Notably, this is considered "undefined behavior" for signed and pointer
> types, which the kernel works around by using the -fno-strict-overflow
> option in the build[1] (which used to just be -fwrapv). Regardless, we
> want to get the kernel source to the position where we can meaningfully
> instrument arithmetic wrap-around conditions and catch them when they
> are unexpected, regardless of whether they are signed[2], unsigned[3],
> or pointer[4] types.
>
> Refactor open-coded wrap-around addition test to use add_would_overflow().
> This paves the way to enabling the wrap-around sanitizers in the future.
>
> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> Link: https://github.com/KSPP/linux/issues/26 [2]
> Link: https://github.com/KSPP/linux/issues/27 [3]
> Link: https://github.com/KSPP/linux/issues/344 [4]
> Cc: Alexei Starovoitov <ast@...nel.org>
> Cc: Daniel Borkmann <daniel@...earbox.net>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Andrii Nakryiko <andrii@...nel.org>
> Cc: Martin KaFai Lau <martin.lau@...ux.dev>
> Cc: Song Liu <song@...nel.org>
> Cc: Yonghong Song <yonghong.song@...ux.dev>
> Cc: KP Singh <kpsingh@...nel.org>
> Cc: Stanislav Fomichev <sdf@...gle.com>
> Cc: Hao Luo <haoluo@...gle.com>
> Cc: Jiri Olsa <jolsa@...nel.org>
> Cc: bpf@...r.kernel.org
> Signed-off-by: Kees Cook <keescook@...omium.org>
> ---
>   kernel/bpf/verifier.c | 12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 65f598694d55..21e3f30c8757 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12901,8 +12901,8 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
>   			dst_reg->smin_value = smin_ptr + smin_val;
>   			dst_reg->smax_value = smax_ptr + smax_val;
>   		}
> -		if (umin_ptr + umin_val < umin_ptr ||
> -		    umax_ptr + umax_val < umax_ptr) {
> +		if (add_would_overflow(umin_ptr, umin_val) ||
> +		    add_would_overflow(umax_ptr, umax_val)) {

Maybe you could give a reference to the definition of add_would_overflow()?
Either a link, or cc the patch that defines add_would_overflow() to the bpf list.
The patch itself looks good to me.
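
For anyone following along, here is a minimal sketch of what such a
helper might look like, assuming it is built on top of the existing
check_add_overflow() from <linux/overflow.h>. The name and exact shape
below are guesses for illustration only; the real definition is
introduced in a separate patch of this series:

	/* Hypothetical sketch, not the actual definition from the series.
	 * check_add_overflow() already exists in <linux/overflow.h> and
	 * evaluates to true when a + b wraps; the sum itself is discarded
	 * into a local because only the overflow status matters here.
	 */
	#include <linux/overflow.h>

	#define add_would_overflow(a, b)			\
		({						\
			typeof(a) __sum;			\
			check_add_overflow(a, b, &__sum);	\
		})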

>   			dst_reg->umin_value = 0;
>   			dst_reg->umax_value = U64_MAX;
>   		} else {
> @@ -13023,8 +13023,8 @@ static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
>   		dst_reg->s32_min_value += smin_val;
>   		dst_reg->s32_max_value += smax_val;
>   	}
> -	if (dst_reg->u32_min_value + umin_val < umin_val ||
> -	    dst_reg->u32_max_value + umax_val < umax_val) {
> +	if (add_would_overflow(umin_val, dst_reg->u32_min_value) ||
> +	    add_would_overflow(umax_val, dst_reg->u32_max_value)) {
>   		dst_reg->u32_min_value = 0;
>   		dst_reg->u32_max_value = U32_MAX;
>   	} else {
> @@ -13049,8 +13049,8 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
>   		dst_reg->smin_value += smin_val;
>   		dst_reg->smax_value += smax_val;
>   	}
> -	if (dst_reg->umin_value + umin_val < umin_val ||
> -	    dst_reg->umax_value + umax_val < umax_val) {
> +	if (add_would_overflow(umin_val, dst_reg->umin_value) ||
> +	    add_would_overflow(umax_val, dst_reg->umax_value)) {
>   		dst_reg->umin_value = 0;
>   		dst_reg->umax_value = U64_MAX;
>   	} else {
