Of course, if your virtual LAN has several machines that all think
that congestion control only makes sense in a WAN setting, loss may
get quite common :-) I still have fond memories of shared Ethernet
melting down under the load of dozens of diskless clients rebooting
after some problem hit the cluster ...
Loss recovery doesn't have to be painful, and it can still happen in
real time. When sending, include the packet sequence number and the
sequence number of the earliest packet buffered. Buffer up to N
packets after sending. Packets older than current_sequence-N drop
off the buffer and cannot be recovered. The receiver can request
retransmission of specific buffered packets. Retransmitted packets
carry an up-to-date earliest-buffered sequence number.
This way, it's up to the receiver to decide what to do with lost
packets: e.g. request retransmission and wait up to a certain
deadline, just flag the loss and proceed, or maybe flag the loss,
request retransmission, and redraw the output once the missing
packet arrives.
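Just to make the sender side concrete, here's a rough C sketch of the
buffering scheme I mean. All the names (dbg_pkt, tx_ring, net_send,
TX_RING) are made up for illustration, and a real implementation would
hook into the kernel's polling network path instead of calling a
free-standing net_send():

    #include <stdint.h>
    #include <string.h>

    #define TX_RING 64                 /* N: packets kept for retransmission */

    struct dbg_pkt {
            uint32_t seq;              /* sequence number of this packet */
            uint32_t earliest;         /* oldest sequence still buffered */
            uint16_t len;
            uint8_t  data[1024];
    };

    static struct dbg_pkt tx_ring[TX_RING];
    static uint32_t next_seq;          /* next sequence number to assign */

    /* hypothetical transport hook, e.g. a raw UDP send */
    extern void net_send(const struct dbg_pkt *p);

    static uint32_t earliest_buffered(void)
    {
            return next_seq >= TX_RING ? next_seq - TX_RING : 0;
    }

    /* send a new packet and keep a copy for possible retransmission */
    void dbg_send(const void *buf, uint16_t len)
    {
            struct dbg_pkt *p = &tx_ring[next_seq % TX_RING];

            if (len > sizeof(p->data))
                    len = sizeof(p->data);  /* truncate; real code would fragment */
            p->seq = next_seq++;
            p->earliest = earliest_buffered();
            p->len = len;
            memcpy(p->data, buf, len);
            net_send(p);
    }

    /* receiver asked for 'seq' again; resend it if it's still buffered */
    void dbg_retransmit(uint32_t seq)
    {
            struct dbg_pkt *p = &tx_ring[seq % TX_RING];

            if (seq >= next_seq || seq < earliest_buffered())
                    return;            /* dropped off the buffer, unrecoverable */

            p->earliest = earliest_buffered();  /* refresh before resending */
            net_send(p);
    }

Packets older than current_sequence-N simply get overwritten in the
ring, which is exactly the "drop off the buffer" behaviour above.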
Congestion control would be very good to have, though. As soon as
you put something on IP, it's almost guaranteed to eventually end
up crossing a WAN.
Crazy idea: maybe one could combine the ideas from MCORE and RT
Linux: use a dispatcher for basic system resources (interrupts,
etc.), and run two kernels. One of them would just run the
debugger and serve its communication needs. That would still leave
things relatively close to the hardware (unlike just using UML).
- Werner