Any processor's cheap once it's got enough volume. That's an effect, not a
cause.
> And if you talk about
> supercomputing, hmm, what about some PowerPC CPU variant - they're very
> competitive in terms of cost and FPU performance! Transmeta isn't the
> adequate choice here.
You honestly think you can fit 142 PowerPC processors in a single 1U, air
cooled?
Liquid air cooled, maybe...
> Well, both of those concepts fail in terms of optimization for
> the same reason: much less information is present about
> the structure of the code than during source code compilation.
LESS info?
Anybody wanna explain to me how it's possible to do branch prediction and
speculative execution at compile time? (A la Itanium?) I've heard a few
attempts at an explanation, but nothing from anybody who was sober at the
time...
You have less time to work with, but you have MORE info about how the code is
actually running...
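To make it concrete, here's a throwaway example (mine, nothing to do with
Transmeta's actual translator): which way this branch goes depends entirely
on the data fed in at run time, so no amount of compile-time cleverness can
tell you, while a run-time layer just watches and counts.

/* Toy example: the branch direction depends entirely on run-time input.
 * A static compiler can only guess (or demand a separate profiling run);
 * a run-time translator sees it happen and lays out the hot path. */
#include <stdio.h>
#include <stdlib.h>

static int count_negatives(const int *v, size_t n)
{
        size_t i;
        int neg = 0;

        for (i = 0; i < n; i++) {
                /* Taken or not depending on the input distribution --
                 * unknowable at compile time. */
                if (v[i] < 0)
                        neg++;
        }
        return neg;
}

int main(int argc, char **argv)
{
        int v[1000];
        size_t i;
        /* argv[1] picks the mix, so even whole-program analysis of the
         * binary can't tell you how the branch above behaves. */
        int bias = (argc > 1) ? atoi(argv[1]) : 50;

        for (i = 0; i < 1000; i++)
                v[i] = (rand() % 100 < bias) ? -1 : 1;

        printf("%d negatives\n", count_negatives(v, 1000));
        return 0;
}

Compile that with whatever -O level you like; until it runs you simply don't
know which way that branch leans.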
> Additionally, there may be some performance wins due to the
> ability to do runtime profiling (of any kind); however, it still remains
> to be shown that this performs better than statically analyzed code.
Okay, I'll bite. Why couldn't a recompiler (like the MetroWerks stuff) do the
same static analysis on large code runs that GCC or some such could do if you
gave it -Oinsane and let it think for five minutes about each file?
Obviously the run-time version isn't going to spend the TIME to do that. But
claiming the information to perform these optimizations isn't available just
because your variables no longer have English names...
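And gcc will happily take exactly that kind of measured information back as
input; the run-time version just cuts out the middleman. Rough sketch of the
hand-fed equivalent (likely/unlikely are the usual wrappers around gcc's
__builtin_expect; this is illustration, not anybody's kernel code):

/* Feeding branch hints to a *static* compiler by hand via gcc's
 * __builtin_expect.  A run-time recompiler collects the same information
 * for free by counting how the branch actually went -- and it doesn't
 * care whether your variables still have English names. */
#define likely(x)       __builtin_expect(!!(x), 1)
#define unlikely(x)     __builtin_expect(!!(x), 0)

int sum_valid(const int *v, unsigned long n)
{
        unsigned long i;
        int sum = 0;

        for (i = 0; i < n; i++) {
                /* Programmer's guess: bad entries are rare.  A run-time
                 * profiler would measure this instead of trusting me. */
                if (unlikely(v[i] < 0))
                        continue;
                sum += v[i];
        }
        return sum;
}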
> > /Mike - who doesn't work for Transmeta, in case anyone was wondering...
> > :-)
>
> /Marcin - who doesn't bet a penny on Transmeta
/Rob, who still owns stock in Intel, of all things.
Rob