[CALUG] misc tech Q's
Bryan J Smith
b.j.smith at ieee.org
Tue Feb 22 11:30:59 EST 2011
1. Software Threads
Software threading has _nothing_ to do with Intel's HyperThreading(R) [1]. The
latter is a processor marketing^H^H^H^H^H performance technology specific to
Intel CPUs. Software threading is all you should care about. AMD or Intel, you
can thread across multiple CPU cores, and the processors themselves are
superscalar, capable of multiple instructions per clock across multiple threads.
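To make that concrete, here's a minimal sketch (my own illustration, nothing
AMD- or Intel-specific; it assumes glibc's Linux-only sched_getcpu()) that
spawns a few POSIX threads and has each report which CPU it ran on:

  /* spread.c -- spawn a few POSIX threads; each reports the CPU it
   * was running on.  The kernel scheduler spreads runnable threads
   * across whatever cores (or HT logical processors) exist.
   * Build: gcc -o spread spread.c -lpthread
   */
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>              /* sched_getcpu(), glibc/Linux-only */
  #include <stdio.h>

  static void *worker(void *arg)
  {
      /* Spin a bit so the scheduler actually has work to distribute. */
      volatile unsigned long spin = 0;
      unsigned long i;
      for (i = 0; i < 100000000UL; i++)
          spin += i;
      printf("thread %ld ran on CPU %d\n", (long)arg, sched_getcpu());
      return NULL;
  }

  int main(void)
  {
      pthread_t t[4];
      long i;
      for (i = 0; i < 4; i++)
          pthread_create(&t[i], NULL, worker, (void *)i);
      for (i = 0; i < 4; i++)
          pthread_join(t[i], NULL);
      return 0;
  }

Run it on any multi-core box, AMD or Intel, and you'll typically see the
threads land on different CPUs. No special coding required.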
2. POSIX Threads (been around since the '90s)
POSIX Threading has been around since the '90s. Remember, PCs aren't the only
platforms that POSIX (the UNIX/Linux standards) targets, and HP, IBM, SGI, Sun,
etc. were doing POSIX Threads across 16-128 processors in their standard
libraries in the '90s. ;) Linux just brings that to commodity PCs as of the
21st century. Linux's standard NPTL (Native POSIX Thread Library) was pioneered
by IBM and Red Hat during kernel 2.4 and glibc 2.1-2.2, and is standard with any
kernel 2.6 and glibc 2.3+ (2003+).
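If you're curious which thread library your glibc actually provides, here's a
quick sanity check (a sketch using glibc's _CS_GNU_LIBPTHREAD_VERSION
extension; the shell one-liner equivalent is `getconf GNU_LIBPTHREAD_VERSION`):

  /* whichthreads.c -- ask glibc whether it's NPTL or the old
   * LinuxThreads.  _CS_GNU_LIBPTHREAD_VERSION is a glibc extension.
   * Build: gcc -o whichthreads whichthreads.c
   */
  #define _GNU_SOURCE             /* for the glibc extension */
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[64];
      if (confstr(_CS_GNU_LIBPTHREAD_VERSION, buf, sizeof(buf)) > 0)
          printf("thread library: %s\n", buf);   /* e.g., "NPTL 2.12" */
      return 0;
  }

Any distro from the last several years will report NPTL.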
E.g., trading systems run GNU/Linux with POSIX applications today (2001+),
because it was a natural transition from prior, proprietary HP, IBM, SGI, Sun,
etc. solutions. And if you want even more proof of what "out-of-the-box,"
non-real-time NPTL can do with a Linux distro, just watch Watson on Jeopardy!
It was running a standard SUSE Linux Enterprise Server release, taking full
advantage of NPTL. ;)
Microsoft? Please!
Microsoft is still trying to address its legacy alongside newer .NET
developments. Even their own application teams have a nasty habit of being 10+
years behind their own architects and tools, and Visual Studio can still ship
with a lot of legacy garbage that has no actual implementation. So I wouldn't
worry about Linux "being behind."
The only people who talk about Microsoft having this and that while Linux does
not are the ones who not only don't develop for Linux and don't know the first
thing about it, but base their views on Mono (.NET for non-Windows platforms).
I run into this regularly: I hear about "limitations" in Linux that are 100%
based on Mono's implementations and limitations.
3. Sysfs, /proc and Fedora
The /proc file system still exists, but most operations should now happen via
sysfs (/sys) instead of /proc/sys and the other legacy /proc interfaces.
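E.g., here's a minimal sketch of the sysfs way of doing things -- one value per
file, read like any other file (using the stable /sys/devices/system/cpu/online
attribute as the example):

  /* online.c -- read a kernel attribute from sysfs instead of the
   * legacy /proc interfaces.  sysfs attributes are one value per
   * file, newline-terminated.
   * Build: gcc -o online online.c
   */
  #include <stdio.h>

  int main(void)
  {
      char buf[64];
      FILE *f = fopen("/sys/devices/system/cpu/online", "r");
      if (!f) { perror("fopen"); return 1; }
      if (fgets(buf, sizeof(buf), f))
          printf("online CPUs: %s", buf);   /* e.g., "0-3" */
      fclose(f);
      return 0;
  }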
Fedora 12 is no longer supported; one should move to Fedora 14 or, soon,
Fedora 15. There is talk of extending the life cycle to three releases, so a
Fedora release would be supported beyond 18 months instead of the current 13 or
so, but it's still just talk. The official cycle is still 2 releases + 1 month
-- so Fedora 12 stopped receiving updates (at least officially) one (1) month
after the release of Fedora 14.
-- Bryan
[1] HyperThreading(R) is a re-invented technology by Intel, _specific_ to
Intel. Its value varies; it allows a core to present more than one set of
registers and pipes to the OS, even though they don't physically exist (long
story [2]). It's better now than on the old Pentium 4 [2], but I wouldn't worry
about whether a processor has it or not. What's most important is the actual
number of cores.
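If you want to count actual cores -- versus HyperThreading logical processors
-- sysfs exposes the topology. A sketch (mine, not any official tool; it
assumes <= 64 logical CPUs and the standard /sys/devices/system/cpu/cpuN/topology
layout): count distinct (physical_package_id, core_id) pairs.

  /* cores.c -- distinguish physical cores from HT logical CPUs.
   * Build: gcc -o cores cores.c
   */
  #include <stdio.h>

  static int read_id(int cpu, const char *attr)
  {
      char path[128];
      int id = -1;
      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, attr);
      FILE *f = fopen(path, "r");
      if (!f) return -1;            /* CPU doesn't exist: stop scanning */
      fscanf(f, "%d", &id);
      fclose(f);
      return id;
  }

  int main(void)
  {
      int pkg[64], core[64], ncores = 0, nlogical = 0;
      int cpu;
      for (cpu = 0; cpu < 64; cpu++) {
          int p = read_id(cpu, "physical_package_id");
          int c = read_id(cpu, "core_id");
          int seen = 0, i;
          if (p < 0 || c < 0) break;
          nlogical++;
          /* Two logical CPUs sharing a (package, core) pair are HT siblings. */
          for (i = 0; i < ncores; i++)
              if (pkg[i] == p && core[i] == c) { seen = 1; break; }
          if (!seen) { pkg[ncores] = p; core[ncores] = c; ncores++; }
      }
      printf("%d logical processors, %d physical cores%s\n",
             nlogical, ncores,
             nlogical > ncores ? " (HyperThreading present)" : "");
      return 0;
  }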
[2] NetBurst (the Pentium 4 and P4-based Xeons) was a limited design lineage.
It was a quick, 18-month x86 design, achieved by making the pipeline very long
and throwing out a lot of traditional design details. As I understand it (as an
insider), Intel thought IA-64 would have taken over by the time the Pentium III
was dead, and a full redesign -- like the one from Yonah (the
low-voltage/ultra-low-voltage P3/Pentium M continuation) to the x86-64 Core and
the eventual Core iX series -- took the standard 36-40+ months. I.e., most
pipeline stages in the P4 were empty during execution -- 60% on average (around
40 stages, very, very long; over 20 were typically doing nothing). Long pipes
allow easier timing closure in design (long, long story -- I'm not CS, my
background is EE semiconductor design) and higher clocks, at the cost of
efficiency (again, really long story; I'm oversimplifying). Enter
HyperThreading: it was designed so the OS could attempt to fill unused stages
with another thread, presented as if it were a second CPU -- with the related
register renaming, context switching, and other overhead. It wasn't very
successful except for select applications (up to 30% gains), and could
nominally perform worse (5-10% losses).
My view ...
Intel also has a huge fabrication technology lead over AMD, which now relies on
external foundries (AMD is largely "fabless" now); IBM and TSMC only do so
much. So Intel typically beats AMD on any balanced workload at the same
clock/core count.
Understand that AMD processors are more than capable of threading across
multiple cores, and each core is still a 10-issue, superscalar design. It's
long been argued that until the more recent Intel Core and Core iX
architectures, AMD's NexGen-derived 3-issue ALU design was superior to Intel's,
and AMD's 3-issue FPU was far better and more precise at 64-bit doubles (and
fairly competitive at microcoded SIMD on top of it). This was evident in RC4
(SSL) acceleration performance, AMD v. Intel. Intel uses dedicated SIMD pipes
in addition to its FPU, which can be an advantage over AMD when it comes to
SIMD instructions, especially lossy (i.e., reduced-precision) 32-bit matrix
math operations. That can have a major impact in visualization (where 32-bit
singles are very common, often interpolated as integer instead of floating
point).
The only advantage AMD has over Intel is its GPU (thanks to the ATI
acquisition), if you're looking at an integrated CPU+GPU combination and not an
add-on, external GPU. The integrated AMD GPU-peripheral ICs are fabbed at 55nm
or 40nm by TSMC -- the 690G through the newer 800G series (HD 2100, 3200, 4200)
-- and work out-of-the-box with 3D on Linux. Not a bad option if you don't need
serious 3D support, as the ATI Radeon DRI driver in X.Org has limited GLX
feature support (don't expect serious OpenGL 2 or ARB extensions to work).
Intel also has a nasty habit of selling older 65nm (and, before that, even
90-130nm) platform-peripheral logic in various cheap PCs, in my experience,
although they are better about it on notebooks, and it is less common today.
E.g., early Atom boards often shipped with 90nm 965 chipsets that sucked up 25W
(far more than the Atom processor itself). You had to buy select Atom netbooks
to get the low-power designs. Newer designs are 32nm, with at least 45nm in the
platform logic.
--
Bryan J Smith Professional, Technical Annoyance
Linked Profile: http://www.linkedin.com/in/bjsmith
----------------------------------------------------
UCF Football: AP #21, BCS #25, ESPN/USAToday #20
----- Original Message ----
From: Walt Smith <waltechmail at yahoo.com>
hi,
got a couple of curiosity Q's.
I see there is a standard lot of laptops for sale:
buzzwords such as dual core, i3, i5, etc., and AMD has a couple of
words of its own.
Do all Intel and AMD CPUs today come with hyperthreading?
iow,
If one purchased a supposed dual core, or a 4-core, or a single
core, is hyperthreading "standard" for EACH core? Is there a
comparison between same-silicon cores and separate chips?
If a CPU has hyperthreading, is the gain in speed still in
the ballpark of the 15-20% that I remember from more than a couple
years ago (regardless of clock speed? I accept that some small
efficiencies in hyperthreading itself may have occurred).
I understand that the Windows OS distributes the threads within a
process across multiple cores automatically. No muss, no fuss,
no particular need for a coder to pay attention. Of course,
I could be delusional or have been passed the wrong info under the
table (<g>). It just knows how
to spread the code around to the cores and the hyperthread sections.
Does the Linux kernel work the same way
for a process? Or does one have to code differently?
I read on lwn.net that /proc is eliminated.
Other than the addition and removal of device drivers and
processor-related code (ARM vs. Intel), what other *nix-like features
have changed recently? I'm running F12, well over a whole year old now!!
I'm just curious, as these came to mind recently: I haven't any project
or "need to know".
thx,
Walt.....
_______________________________________________
CALUG mailing list
CALUG at unknownlamer.org
http://lists.unknownlamer.org/listinfo/calug