ACK/cmnt: [SRU][D][PATCH 1/1] bpf: verifier: hard wire branches to dead code
Kleber Souza
kleber.souza at canonical.com
Wed May 13 07:20:34 UTC 2020
On 13.05.20 01:11, Khalid Elmously wrote:
> From: Jakub Kicinski <jakub.kicinski at netronome.com>
>
> BugLink: https://bugs.launchpad.net/bugs/1878303
>
> Loading programs with dead code becomes more and more
> common, as people begin to patch constants at load time.
> Turn conditional jumps to unconditional ones, to avoid
> potential branch misprediction penalty.
>
> This optimization is enabled for privileged users only.
>
> For branches which just fall through we could just mark
> them as not seen and have dead code removal take care of
> them, but that seems less clean.
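
For reviewers who haven't followed the upstream series, here is a minimal,
purely hypothetical illustration (not from the patch) of the "patched
constant" case, written with raw struct bpf_insn initializers from the
UAPI <linux/bpf.h>. The loader rewrites prog[0].imm before calling
bpf(BPF_PROG_LOAD), so the verifier only ever explores one side of the
branch at insn 1 and the other side is left dead:

#include <linux/bpf.h>  /* struct bpf_insn, BPF_JMP, BPF_JEQ, BPF_EXIT, ... */

/* Hypothetical 4-insn program: prog[0].imm is patched to 0 or 1 by the
 * loader, so exactly one of the two exits below is never marked seen.
 */
struct bpf_insn prog[] = {
        /* 0: r0 = <patched at load time> */
        { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = 1 },
        /* 1: if r0 == 0 goto +1 */
        { .code = BPF_JMP | BPF_JEQ | BPF_K, .dst_reg = BPF_REG_0, .imm = 0, .off = 1 },
        /* 2: exit -- dead when imm is patched to 0 */
        { .code = BPF_JMP | BPF_EXIT },
        /* 3: exit -- dead when imm is left at 1 */
        { .code = BPF_JMP | BPF_EXIT },
};

With this patch, a privileged load gets the conditional jump at insn 1
rewritten into an unconditional one instead of being left to the branch
predictor; see my note under opt_hard_wire_dead_code_branches() below.
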
>
> v0.2:
> - don't call capable(CAP_SYS_ADMIN) twice (Jiong).
> v3:
> - fix GCC warning;
>
> Signed-off-by: Jakub Kicinski <jakub.kicinski at netronome.com>
> Acked-by: Yonghong Song <yhs at fb.com>
> Signed-off-by: Alexei Starovoitov <ast at kernel.org>
> (cherry picked from commit e2ae4ca266a1c9a0163738129506dbc63d5cca80)
> Signed-off-by: Khalid Elmously <khalid.elmously at canonical.com>
Clean cherry-pick, good test results.

I agree this should benefit all the 5.0 kernels and should be applied to
disco/linux.

Acked-by: Kleber Sacilotto de Souza <kleber.souza at canonical.com>

Khalid,

Can you please add the SRU template to the bug report and fix the
nomination?

Thanks,
Kleber
> ---
> kernel/bpf/verifier.c | 45 +++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index be42b81ba153..b3aa715a6c5b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -6477,6 +6477,40 @@ static void sanitize_dead_code(struct bpf_verifier_env *env)
> }
> }
>
> +static bool insn_is_cond_jump(u8 code)
> +{
> + u8 op;
> +
> + if (BPF_CLASS(code) != BPF_JMP)
> + return false;
> +
> + op = BPF_OP(code);
> + return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
> +}
> +
> +static void opt_hard_wire_dead_code_branches(struct bpf_verifier_env *env)
> +{
> + struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
> + struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
> + struct bpf_insn *insn = env->prog->insnsi;
> + const int insn_cnt = env->prog->len;
> + int i;
> +
> + for (i = 0; i < insn_cnt; i++, insn++) {
> + if (!insn_is_cond_jump(insn->code))
> + continue;
> +
> + if (!aux_data[i + 1].seen)
> + ja.off = insn->off;
> + else if (!aux_data[i + 1 + insn->off].seen)
> + ja.off = 0;
> + else
> + continue;
> +
> + memcpy(insn, &ja, sizeof(ja));
> + }
> +}
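
Not part of the patch, just a reference sketch of the two cases this loop
handles, tied to the hypothetical example I sketched above:

/* imm left at 1: insn 2 (i + 1) is seen, insn 3 (i + 1 + off) is not,
 * so the second case applies and the jump is never taken:
 *
 *     1: if r0 == 0 goto +1      becomes      1: goto +0
 *
 * imm patched to 0: insn 2 (i + 1) is never seen, so the first case
 * applies and the jump is always taken:
 *
 *     1: if r0 == 0 goto +1      becomes      1: goto +1
 */

Either way the result is a plain BPF_JA, so the JIT never has to emit a
conditional branch for it.
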
> +
> /* convert load instructions that access fields of a context type into a
> * sequence of instructions that access fields of the underlying structure:
> * struct __sk_buff -> struct sk_buff
> @@ -7169,6 +7203,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
> struct bpf_verifier_env *env;
> struct bpf_verifier_log *log;
> int ret = -EINVAL;
> + bool is_priv;
>
> /* no program is valid */
> if (ARRAY_SIZE(bpf_verifier_ops) == 0)
> @@ -7215,6 +7250,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
> if (attr->prog_flags & BPF_F_ANY_ALIGNMENT)
> env->strict_alignment = false;
>
> + is_priv = capable(CAP_SYS_ADMIN);
> + env->allow_ptr_leaks = is_priv;
> +
> ret = replace_map_fd_with_map_ptr(env);
> if (ret < 0)
> goto skip_full_check;
> @@ -7232,8 +7270,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
> if (!env->explored_states)
> goto skip_full_check;
>
> - env->allow_ptr_leaks = capable(CAP_SYS_ADMIN);
> -
> ret = check_subprogs(env);
> if (ret < 0)
> goto skip_full_check;
> @@ -7263,6 +7299,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
> ret = check_max_stack_depth(env);
>
> /* instruction rewrites happen after this point */
> + if (is_priv) {
> + if (ret == 0)
> + opt_hard_wire_dead_code_branches(env);
> + }
> +
> if (ret == 0)
> sanitize_dead_code(env);
>
>
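
One last note for context, based only on reading the diff: the new pass
just flips the conditional jump itself; the instructions on the dead side
are still left to the pre-existing sanitize_dead_code() pass, which keeps
running unconditionally right after the new is_priv block. In other words
(a rough summary, not authoritative):

/* privileged loader, ret == 0:
 *   1) opt_hard_wire_dead_code_branches()  - rewrite cond jumps over dead code
 *   2) sanitize_dead_code()                - handle insns never marked seen
 * unprivileged loader: only step 2, exactly as before this patch
 */
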