[apparmor] Questions about compat encoding in accept1 and accept2 tables

Zygmunt Krynicki me at zygoon.pl
Thu Dec 4 15:03:31 UTC 2025


Hello!

I've been trying to document some of the bit patterns and magic values present in the code. I've been looking at the macros dfa_{user,other}_{allow,xbits,audit,quiet,xindex} in security/apparmor/policy_compat.c, which are defined as follows:

#define dfa_user_allow(dfa, state) (((ACCEPT_TABLE(dfa)[state]) & 0x7f) | \
                                    ((ACCEPT_TABLE(dfa)[state]) & 0x80000000))
#define dfa_user_xbits(dfa, state) (((ACCEPT_TABLE(dfa)[state]) >> 7) & 0x7f)
#define dfa_user_audit(dfa, state) ((ACCEPT_TABLE2(dfa)[state]) & 0x7f)
#define dfa_user_quiet(dfa, state) (((ACCEPT_TABLE2(dfa)[state]) >> 7) & 0x7f)
#define dfa_user_xindex(dfa, state) \
        (dfa_map_xindex(ACCEPT_TABLE(dfa)[state] & 0x3fff))

#define dfa_other_allow(dfa, state) ((((ACCEPT_TABLE(dfa)[state]) >> 14) & \
                                      0x7f) |                           \
                                     ((ACCEPT_TABLE(dfa)[state]) & 0x80000000))
#define dfa_other_xbits(dfa, state) \
        ((((ACCEPT_TABLE(dfa)[state]) >> 7) >> 14) & 0x7f)
#define dfa_other_audit(dfa, state) (((ACCEPT_TABLE2(dfa)[state]) >> 14) & 0x7f)
#define dfa_other_quiet(dfa, state) \
        ((((ACCEPT_TABLE2(dfa)[state]) >> 7) >> 14) & 0x7f)
#define dfa_other_xindex(dfa, state) \
        dfa_map_xindex((ACCEPT_TABLE(dfa)[state] >> 14) & 0x3fff)

I came up with a conceptual C type definition for a structure with the two accept fields, ignoring the implementation-defined ordering of bit-fields:

struct packed_perms {
  union {
    uint32_t accept1;
    struct {
      union {
        struct {
          uint32_t user_allow : 7;
          uint32_t user_xbits : 7;
        };
        uint32_t user_xindex : 14;
      };
      union {
        struct {
          uint32_t other_allow : 7;
          uint32_t other_xbits : 7;
        };
        uint32_t other_xindex : 14;
      };
      uint32_t : 3; // unused
      uint32_t change_profile : 1; // allow bit shared between user and other.
    };
  };
  union {
    uint32_t accept2;
    struct {
      uint32_t user_audit : 7;
      uint32_t user_quiet : 7;
      uint32_t other_audit : 7;
      uint32_t other_quiet : 7;
      uint32_t : 4; // unused
    };
  };
};
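
To make the overlap concrete, here is a minimal standalone sketch. The value is hypothetical, plain shifts and masks stand in for the ACCEPT_TABLE plumbing, and the dfa_map_xindex decoding step is omitted; it just reads the same accept1 word through both views:

#include <stdint.h>
#include <stdio.h>

/* Standalone copies of the user-half extraction logic; a plain
 * uint32_t stands in for ACCEPT_TABLE(dfa)[state]. */
static uint32_t user_allow(uint32_t a1)  { return (a1 & 0x7f) | (a1 & 0x80000000); }
static uint32_t user_xbits(uint32_t a1)  { return (a1 >> 7) & 0x7f; }
static uint32_t user_xindex(uint32_t a1) { return a1 & 0x3fff; } /* before dfa_map_xindex */

int main(void)
{
	uint32_t a1 = 0x1a85; /* hypothetical accept1 entry */

	printf("user_allow  = 0x%02x\n", user_allow(a1));   /* 0x05: exec|read */
	printf("user_xbits  = 0x%02x\n", user_xbits(a1));   /* 0x35 */
	printf("user_xindex = 0x%04x\n", user_xindex(a1));  /* 0x1a85: the same 14 bits */
	return 0;
}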

The following is true for kernel ABI v9 and later.

What strikes me is the overlap of user_{allow,xbits} with the user_xindex field. All seven bits of user_allow are meaningful, as they encode "exec", "write", "read", "append", "link", "lock" and "exec-map". The user_xbits field is only used by map_xbits, which moves its lowest bit and its upper six bits into the positions covered by the mask 0xfc80, or in binary 0b1111_1100_1000_0000. The result is interpreted as the full 32-bit permission bitmap, so the bits correspond to "rename" (0x80), "set-creds" (0x400), "get-creds" (0x800), "chmod" (0x1000), "chown" (0x2000), "chgrp" (0x4000) and "lock" (0x8000) - all granted to aa_perms.allow.
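
For reference, here is map_xbits as I read it in policy_compat.c, transcribed into a standalone program (u32 swapped for uint32_t, so double-check against the tree), with a check that all seven xbits land on exactly the 0xfc80 mask:

#include <stdint.h>
#include <stdio.h>

/* map_xbits as I read it in security/apparmor/policy_compat.c:
 * the lowest xbit lands on 0x80 (rename), the upper six xbits
 * land on 0x400..0x8000 (set-creds .. lock). */
static uint32_t map_xbits(uint32_t x)
{
	return ((x & 0x1) << 7) |
	       ((x & 0x7e) << 9);
}

int main(void)
{
	/* With all seven xbits set the result is exactly the mask. */
	printf("map_xbits(0x7f) = 0x%04x\n", map_xbits(0x7f)); /* 0xfc80 */
	return 0;
}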

How can this coexist with user_xindex, which uses the very same bits to derive, among other things, an index into the transition table?

Best regards
ZK
