While following the discussion, a thought occurred to me about how
to make fd allocation fairly efficient (and simple) even if it retains
the O(n) worst case. I don't know how it is currently implemented,
so this may be how it is already done, or I may be way off base.
First, keep a table of FDs in sorted order (marking deleted entries)
that you can access quickly: O(1) lookup.
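Something like this is what I have in mind for the table itself (names
invented, just to illustrate; a plain array indexed by fd gives you the
sorted order and the O(1) lookup for free):

struct fd_entry
{
	struct file *file;	/* whatever the entry points at */
	int deleted;		/* set on close, cleared when the slot is re-used */
};

struct fd_entry fd_table[MAX_FDS];	/* indexed directly by fd */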
Then, maintain a struct like this:
struct fd_range
{
	int lowest_fd;		/* lowest free fd below highest_fd */
	int highest_fd;		/* one past the highest fd handed out so far */
};

open:

	if( lowest_fd == highest_fd )
	{
		/* everything below highest_fd is in use, so the lowest
		   free fd is highest_fd itself */
		fd = lowest_fd;
		lowest_fd = ++highest_fd;
	}
	else if( flags == IGNORE_UNIX98 )
	{
		/* caller doesn't need the lowest free fd: O(1) */
		fd = highest_fd++;
	}
	else
	{
		/* hand out the lowest free fd, then scan forward
		   for the next free slot */
		fd = lowest_fd;
		lowest_fd = linear_search( lowest_fd + 1, highest_fd );
	}

close:

	if( fd == highest_fd - 1 )
	{
		/* top slot freed: shrink the range */
		--highest_fd;
		if( lowest_fd > highest_fd )
			lowest_fd = highest_fd;
	}
	else if( fd < lowest_fd )
	{
		/* freed slot becomes the new lowest free fd */
		lowest_fd = fd;
	}
	/* otherwise just mark the slot deleted in the table;
	   lowest_fd is still the lowest free fd */
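For completeness, linear_search() above could be as dumb as this (again
just a sketch, scanning the hypothetical fd_table from earlier for the
next free slot):

/* return the first free fd in [from, to), or `to' if the range is full */
int linear_search( int from, int to )
{
	int i;

	for( i = from; i < to; i++ )
		if( fd_table[i].deleted || fd_table[i].file == NULL )
			return i;
	return to;
}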
For the common cases this would be fairly quick. It would also be very
easy to implement O(1) allocation if you want it to be fast (at the
expense of a growing file handle table).
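By the O(1) version I mean roughly this (a sketch only): skip the
lowest-fd bookkeeping entirely and always allocate at the top, which is
exactly why the table can only grow:

open:
	fd = highest_fd++;		/* may require growing fd_table */

close:
	fd_table[fd].deleted = 1;	/* slot is wasted until it is reclaimed somehow */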
Just thinking about it.
Laramie.