Date:   Wed, 21 Jun 2023 21:47:26 -0700
From:   John Stultz <jstultz@...gle.com>
To:     "Isaac J. Manjarres" <isaacmanjarres@...gle.com>
Cc:     Kees Cook <keescook@...omium.org>, Tony Luck <tony.luck@...el.com>,
        "Guilherme G. Piccoli" <gpiccoli@...lia.com>,
        "Isaac J. Manjarres" <isaacm@...eaurora.org>,
        Mukesh Ojha <mojha@...eaurora.org>,
        Prasad Sodagudi <psodagud@...eaurora.org>,
        kernel-team@...roid.com, linux-hardening@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] pstore/ram: Add support for dynamically allocated ramoops
 memory regions

On Wed, Jun 21, 2023 at 5:52 PM 'Isaac J. Manjarres' via kernel-team
<kernel-team@...roid.com> wrote:
>
> From: "Isaac J. Manjarres" <isaacm@...eaurora.org>
>
> The reserved memory region for ramoops is assumed to be at a fixed
> and known location when read from the devicetree. This is not desirable
> in environments where it is preferred for the region to be dynamically
> allocated early during boot (i.e. the memory region is defined with
> the "alloc-ranges" property instead of the "reg" property).
>

Thanks for sending this out, Isaac!

Apologies, I've forgotten many of the details around dt bindings here,
so forgive my questions:
If the memory is dynamically allocated from a specific range, is it
guaranteed to end up at the same address from boot to boot?

> Since ramoops regions are part of the reserved-memory devicetree
> node, they exist in the reserved_mem array. This means that the
> of_reserved_mem_lookup() function can be used to retrieve the
> reserved_mem structure for the ramoops region, and that structure
> contains the base and size of the region, even if it has been
> dynamically allocated.

I think this answers my question above, but it's a little opaque, so
I'm not sure.
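My reading is that even when the region comes from "alloc-ranges", the
lookup still reports where the region actually landed, i.e. roughly
something like this (untested sketch; field names taken from struct
reserved_mem in include/linux/of_reserved_mem.h):

	struct reserved_mem *rmem = of_reserved_mem_lookup(of_node);

	if (rmem) {
		/* placement chosen by the early-boot allocator */
		phys_addr_t base = rmem->base;
		phys_addr_t size = rmem->size;
	}

i.e. the struct carries the base/size regardless of whether "reg" or
"alloc-ranges" was used.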

> Thus, invoke of_reserved_mem_lookup() when the call to
> platform_get_resource() fails, in order to support dynamically
> allocated ramoops memory regions.
>
> Signed-off-by: Isaac J. Manjarres <isaacm@...eaurora.org>
> Signed-off-by: Mukesh Ojha <mojha@...eaurora.org>
> Signed-off-by: Prasad Sodagudi <psodagud@...eaurora.org>
> Signed-off-by: Isaac J. Manjarres <isaacmanjarres@...gle.com>
> ---
>  fs/pstore/ram.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
> index ade66dbe5f39..e4bbba187011 100644
> --- a/fs/pstore/ram.c
> +++ b/fs/pstore/ram.c
> @@ -20,6 +20,7 @@
>  #include <linux/compiler.h>
>  #include <linux/of.h>
>  #include <linux/of_address.h>
> +#include <linux/of_reserved_mem.h>
>
>  #include "internal.h"
>  #include "ram_internal.h"
> @@ -643,6 +644,7 @@ static int ramoops_parse_dt(struct platform_device *pdev,
>  {
>         struct device_node *of_node = pdev->dev.of_node;
>         struct device_node *parent_node;
> +       struct reserved_mem *rmem;
>         struct resource *res;
>         u32 value;
>         int ret;
> @@ -651,13 +653,20 @@ static int ramoops_parse_dt(struct platform_device *pdev,
>
>         res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>         if (!res) {
> -               dev_err(&pdev->dev,
> -                       "failed to locate DT /reserved-memory resource\n");
> -               return -EINVAL;
> +               rmem = of_reserved_mem_lookup(of_node);

Nit: you could keep rmem scoped locally here.
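i.e. something like this (untested, just to illustrate the scoping;
I'm guessing at the error path since the hunk is trimmed here):

	if (!res) {
		struct reserved_mem *rmem = of_reserved_mem_lookup(of_node);

		if (!rmem) {
			dev_err(&pdev->dev,
				"failed to locate DT /reserved-memory resource\n");
			return -EINVAL;
		}
		/* use rmem->base / rmem->size for the region from here on */
	}

That also lets you drop the declaration at the top of the function.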

Otherwise the code looks sane; I just suspect the commit message could
be clearer in explaining the need/utility of the dts entry using
alloc-ranges.

thanks
-john
