> Matt Nelson <mnelson@dynatec.com> writes:
>
> > I am about to embark on a data processing software project that
> > will require a LOT of memory (about, ohhh, 6GB or so), and I was
> > wondering if there are any limitations to how one can use very
> > large chunks of memory under Linux. Specifically, is there
> > anything to prevent me from malloc()ing 6GB of memory, then
> > accessing that memory as I would any other buffer? FYI, the
> > application will be buffering a stream of data, then performing a
> > variety of calculations on large blocks of data at a time, before
> > writing it out to a socket.
>
> Pointers on IA32 are still 32 bits, so (as I understand it) each
> process can address a maximum of 4GB. You would have to allocate
> multiple chunks (in shared memory or mmap()ed files, so they're
> persistent) and map them in and out as needed (which could get hairy).
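To make the "map them in and out" part concrete, here's a rough,
untested sketch that walks a big backing file through a fixed-size
mmap() window (the function name, window size, and the crunching step
are made up for illustration; on IA32 you need the large-file calls
to reach offsets past 2GB):

/* Rough sketch: walk a ~6GB data file through a 512MB mmap() window,
 * so the whole file never has to fit into one address space. */
#define _LARGEFILE64_SOURCE
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define WINDOW  (512UL << 20)           /* 512MB at a time; tune to taste */

int walk_file(const char *path, off64_t total)
{
        off64_t off;
        int fd = open64(path, O_RDWR);

        if (fd < 0)
                return -1;
        for (off = 0; off < total; off += WINDOW) {
                size_t len = (total - off < (off64_t)WINDOW)
                                ? (size_t)(total - off) : WINDOW;
                char *win = mmap64(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, off);

                if (win == MAP_FAILED) {
                        close(fd);
                        return -1;
                }
                /* ... do the calculations on win[0..len-1] here ... */
                munmap(win, len);       /* unmap before the next window */
        }
        close(fd);
        return 0;
}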
Sorry to follow up to myself, but I just had another thought--if you
can split your task up into multiple processes such that no single
process needs to address more than 4GB (in practice more like 3GB,
since the kernel keeps the top 1GB of each process's address space for
itself), an IA32 machine will work fine. It will probably still take
more programming work than the naive approach (malloc() one huge
chunk and use it) that you could use on a 64-bit machine...
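For the multi-process route, a minimal sketch (crunch_slice() is
hypothetical--a real version would open and mmap() just its own slice
of the shared data file, along the lines of the window sketch above):

/* Minimal sketch: four workers, each responsible for one 1.5GB slice
 * of a 6GB job, so no single process needs more than 1.5GB of
 * address space at once. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

static int crunch_slice(int which)
{
        /* Hypothetical: open64()+mmap64() slice `which' of the data
         * file and do the calculations on it.  Stubbed out here. */
        (void)which;
        return 0;
}

int main(void)
{
        int i;

        for (i = 0; i < NWORKERS; i++) {
                pid_t pid = fork();
                if (pid < 0)
                        return 1;       /* fork failed, give up */
                if (pid == 0)           /* child: do one slice, then quit */
                        _exit(crunch_slice(i));
        }
        for (i = 0; i < NWORKERS; i++)
                wait(NULL);             /* parent: reap the workers */
        return 0;
}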
-Doug
--
The rain man gave me two cures; he said jump right in,
The first was Texas medicine--the second was just railroad gin,
And like a fool I mixed them, and it strangled up my mind,
Now people just get uglier, and I got no sense of time...
                --Dylan