Right, but if each of your in-memory structs is localized to the
kernel (i.e. not present in the partition table, and not exported
to userspace via /proc or syscalls), then you can just increase
field sizes and recompile the kernel, without the need to support
both structure layouts in the kernel in perpetuity.
For efficiency reasons the kernel will always need to export some
structs to userspace, but partition information probably shouldn't
get a performance exemption.
> Fine, no operator overloading:
>
> err = ascii_math_make_number(baz, 512); // baz = 512
> if(err){
> // handle error here
> }
> err = ascii_math_add(foo, bar, baz); // foo = bar + baz
> if(err){
> // handle error here
> }
Yes, this is more or less what a bignum package would implement,
albeit with a much more efficient representation than ASCII
strings. But actually manipulating values in bignum format
should be left to utilities like fdisk that want to be generic.
The kernel and boot loaders would just load values into 'u32
a' and 'u32 b' (or whatever type) and add them with 'a + b'.
I'm not in any way advocating bignum arithmetic in the kernel.
> With a 32-bit binary field, programs will use 32-bit types.
> With a 64-bit binary field, programs will use 64-bit types.
> With an ASCII format, every program will use a different type.
Right, which would allow intelligence on the part of the programs,
like using 32-bit types on 32-bit architectures where values are
known to max out at 32 bits (I'm thinking of, say, /proc here)
and 64-bit types on 64-bit architectures.
> It does, a bit, but it sure beats hidden per-program
> limits caused by every program converting the ASCII
> to a different in-memory structure.
So create a library and flame people who don't link against it
and screw up their parsing. At least this way only some programs
would have hidden limits, not all of them.
> >> Yeah, just what we need. The /proc mess expanding
> >> into partition tables. That sounds like a great way
> >> to increase filesystem destruction performance.
> >
> > The /proc mess exists because people chose N ad hoc output
> > formats for /proc files. If they had a consistent format like
> > s-expressions or one-value-per-file most problems with /proc
> > would not exist.
>
> That only solves a superficial problem. It doesn't let
> you reliably handle changing data types and keywords.
s-expressions do--the first value in each parenthesized expression
is the keyword. For instance, if you have the tree:
/proc
  /sys
    /net
      /ipsec
        /inbound_policy_check (== 1)
      /ipv4
        /icmp_echo_ignore_all (== 0)
        /icmp_echo_ignore_broadcasts (== 0)
Via, say, a magic 'cat /proc/serialize', you could view this as:
(sys (net (ipsec (inbound_policy_check 1))
          (ipv4 (icmp_echo_ignore_all 0)
                (icmp_echo_ignore_broadcasts 0))))
without the pretty-printing, of course. Likewise you could
cat /proc/sys/serialize to see just that subtree.
All this information is available already in the kernel and
'parsing' the resulting s-expressions is mostly a matter of
counting parenthesis nesting depth, matching keywords, and doing
ASCII->numeric conversions.
I'm not religious about s-expressions but they do solve this
problem fairly well. People who are religiously anti-LISP should
pretend I used { } in the above.
Of course this all relies on a one-value-per-file /proc, which is
regrettably not the case now; that's why I chose /proc/sys for
the above example.
miket
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/