Yes, as the length of an email approaches infinity the chance of saying
something stupid approaches certainty ;)
> Which means that we have only _one_ set of routines for handling block
> allocation etc, instead of duplicating them all.
I don't know how bad it was in the old code, but now the little-endian
data is confined to get_block/truncate, as it should be. However, reading
ahead...
> Having in-core data in CPU-native byte order is _stupid_. We used to do
> that, and I winced every time I had to look at all the duplication of
> functions. I think it was early 2.3.x when Ingo did the page cache write
> stuff where I fixed that - the people who had done the original ext2
> endianness patches were just fairly lacking in brains (Hi, Davem ;), and
> didn't realize that keeping inode data in host order was the fundamental
> problem that caused them to have to duplicate all the functions.
I'm not proposing to change this, but I would have chosen Davem's approach
in this case, with the aid of:

	typedef struct { u32 value; } le_u32;

The struct wrapper makes it a compile error to use a disk-order value
without an explicit conversion; the conversions are no-ops on Intel, and
they would make things nicer for non-Intel arches, for what that's worth.
But it seems I've stepped into an old flamewar, so I'm getting out while
my skin is still non-crispy ;)
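For the record, here's a minimal sketch of how the wrapper could work.
The helper names (get_le_u32, make_le_u32) are my own invention, not
anything in the tree:

	/* Sketch only: le_u32 and these helpers are illustrative. */
	typedef unsigned int u32;               /* as in <linux/types.h> */
	typedef struct { u32 value; } le_u32;   /* on-disk, little-endian */

	/* Compiles to nothing on little-endian boxes, byte-swaps on
	 * big-endian ones.  Kernel-style endian test, written out
	 * longhand for clarity. */
	static inline u32 get_le_u32(le_u32 x)
	{
	#ifdef __LITTLE_ENDIAN
		return x.value;
	#else
		u32 v = x.value;
		return (v >> 24) | ((v >> 8) & 0xff00) |
		       ((v << 8) & 0xff0000) | (v << 24);
	#endif
	}

	static inline le_u32 make_le_u32(u32 v)
	{
		le_u32 x;
	#ifdef __LITTLE_ENDIAN
		x.value = v;
	#else
		x.value = (v >> 24) | ((v >> 8) & 0xff00) |
		          ((v << 8) & 0xff0000) | (v << 24);
	#endif
		return x;
	}

The point is that assigning a le_u32 to a u32 (or passing one where the
other is expected) is a type error, so a forgotten conversion shows up at
compile time instead of working on Intel and silently breaking on
big-endian machines.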
> So the _wart_ is in 2.2.x, which is just stupid and ugly, and keeps block
> numbers in host data format - which causes no end of trouble. 2.4.x
> doesn't have this problem, and could easily have a pointer to the on-disk
> representation.
Probably what I should have said is that *I* plan to do some conversion
between the on-disk and in-memory formats for a patch I'm working on, that
it's a natural thing to do, and that tying the in-memory inode straight to
the disk buffer interferes with that. On the other hand, if the direct
pointer really is a cache win, it would trump the layering issue.
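To make the layering point concrete (names made up, and reusing the
le_u32 helpers sketched above): in 2.4 the in-core inode can just point
at the buffer, whereas my patch wants a private host-order copy, roughly:

	/* Illustrative only; the real ext2 structures differ in detail.
	 * EXT2_N_BLOCKS is 15: 12 direct + 1 indirect + 1 double +
	 * 1 triple indirect block pointer. */
	#define N_BLOCKS 15

	/* Direct approach: aim straight into the disk buffer */
	struct inode_direct {
		le_u32 *i_data;         /* aliases the buffer_head's data */
	};

	/* What my patch wants: a private, host-order copy */
	struct inode_copied {
		u32 i_data[N_BLOCKS];
	};

	static void inode_copy_in(struct inode_copied *mem, const le_u32 *disk)
	{
		int i;
		for (i = 0; i < N_BLOCKS; i++)  /* convert on the way in */
			mem->i_data[i] = get_le_u32(disk[i]);
	}

	static void inode_copy_out(le_u32 *disk, const struct inode_copied *mem)
	{
		int i;
		for (i = 0; i < N_BLOCKS; i++)  /* convert on the way out */
			disk[i] = make_le_u32(mem->i_data[i]);
	}

The copy costs a couple of cache lines per inode read and write; whether
the direct pointer's savings outweigh that is exactly the open question.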
--
Daniel