On Mon, 23 Jul 2001, Andrea Arcangeli wrote:
> You don't read 1k. You only read 512bytes and you encrypt them using
> the IV of such 512byte block.
>
> Then the next 512byte block will use again the same IV again. What's the
> problem?
ok, so I didn't understand you correctly in the first place :-)
you actually want only to change the IV calculation to be based on a
hardcoded 1024-byte IV-blocksize, but don't change anything else in the
loop.c module, right? (if not, ignore the following text... :-)
if so, I really don't like it;
1.) I really don't like encrypting two 512-byte blocks with the same IV
(I actually thought I could just switch to 1024-byte blocks and thus have
it sensible again, but since you only change the IV calculation without
changing the data-transfer granularity, that won't work)
2.) It will break filesystems using set_blocksize(lo_dev, 512)... why?
say we had a transfer request from the buffer cache starting at byte 0
(initial IV=0) of the block device, for a length of 2048 bytes; this
would lead to
transfer_filter (buf, bufsize=2048, IV=0);
since the transfer_filter contains the logic to split the buffer into
chunks of the blocksize (e.g. 512-byte blocks) required*) for
encryption, the loop may look like:
int IV2 = IV << 1;
while (bufsize > 0) {
        encdecfunc (buf, 512, IV2 >> 1);
        IV2++;
        buf += 512;
        bufsize -= 512;    /* was missing -- loop would never terminate */
}
which leads to the following calls for the mentioned example:
encdecfunc (buf, 512, 0);
encdecfunc (buf+512, 512, 0);
encdecfunc (buf+1024, 512, 1);
encdecfunc (buf+1536, 512, 1);
so far so good; but now imagine that the original request gets changed
a bit, to start at offset 512 (initial IV still 0!!) instead of offset 0:
transfer_filter (buf, bufsize=2048, IV=0); // the same values are
// passed for bufsize and IV!
encdecfunc (buf, 512, IV=0); // OK
encdecfunc (buf+512, 512, IV=0); // wrong, IV should be 1
encdecfunc (buf+1024, 512, IV=1); // OK
encdecfunc (buf+1536, 512, IV=1); // wrong, IV should be 2
...so you see?
that's why it's better IMHO to stick to the same IV granularity that is
used for transfers; otherwise you'd lose performance by passing only
single-IV-blocksize-length data blocks, which need to be aligned, to the
filter functions...
*) actually only filters making use of the IV parameter would need to
split it up; other functions may process bigger blocks at once, thus
minimizing possible overhead...
>> and btw, at least one other popular crypto loop filter uses 512 byte
>> based IVs, namely loop-AES...
> I guess it emulates it internally, so it should keep working if we do
> the fixed 1k granularity for the IV (while it should break the on-disk
> format if we do the 512byte granularity).
no, you can't emulate it internally if the passed IV is calculated with a
bigger IV-blocksize than the one needed... that's the problem, and why
it's IMHO better to calculate based on the smallest IV-blocksize in use :-(
(that way you can always convert it to one based on a bigger IV-blocksize)
regards,
--
Herbert Valerio Riedel / Phone: (EUROPE) +43-1-58801-18840
Email: hvr@hvrlab.org / Finger hvr@gnu.org for GnuPG Public Key
GnuPG Key Fingerprint: 7BB9 2D6C D485 CE64 4748 5F65 4981 E064 883F 4142