Talking tech: The core(s) of the matter – Up close and personal with your processor
Written by Deb Shinder, WinNews Editor
Last August, I began what was intended to be a series of “under the hood” articles in Win7News, beginning with a discussion of excess heat and temperature monitoring. Then we decided to combine WXPnews and Win7News into one big newsletter and during the transition to a new website and a new format, the series fell by the wayside. I did pick up the theme again in recent newsletters featuring storage technologies and physical and virtual memory, and then last week with the discussion of video cards. This week I want to take an “up close and personal” look at the component that functions in some ways like the brain of your computer: the Central Processing Unit (CPU) or more simply, the main processor.
I say “main” because the powerful Graphics Processing Units (GPUs) that I talked about last week are capable of doing some of the general processing that was previously done by the CPU, in addition to handling the actual graphics processing. This is referred to as General Purpose Computing on GPUs (GPGPU or GPGP) or GPU Computing. For certain types of operations, the GPU can calculate much faster than a traditional CPU.
That doesn’t mean there haven’t been great advances in CPU design too. Intel and AMD are the two top manufacturers of the processors that power servers, desktops and laptops. Most of today’s smartphones and tablets run on ARM processors, which are based on RISC architecture (some will recognize RISC as the basis of the PowerPC processors that were first designed by IBM and Motorola and were used in Apple’s Mac computers until the company switched over to Intel processors in 2006).
Intel started making processors for “microcomputers” (the early terminology for desktop PCs) back in 1971, with a 4-bit processor called the 4004. For many of us, our first “real” computer ran on the venerable 8086 or 8088. The former was used in the IBM PS/2 and the latter was used in the original IBM PC (including the PC/XT). The more powerful IBM PC/AT that was released in 1984 ran on the 80286, and subsequent “PC clones” (similar machines made by vendors other than IBM) were based on its successors, the 80386 and 80486 (which were 32-bit processors) – thus the term “x86” to refer to that Intel processor architecture.
Then along came a new Intel brand in 1993: the Pentiums. This line included the Pentium Pro, Pentium II and III, Celeron and several variants of the Pentium 4. Somewhere along the way, Intel also came out with the Xeon (capable of working in pairs or larger groups with a multiprocessing operating system). These were still part of the x86 family. Intel’s first 64-bit processors were released in 2001 and named Itanium; they used a totally different instruction set from the x86 processors. In 2005, we got another line of 64-bit processors that was more like its x86 predecessors. These were the x64 processors, beginning with the Pentium 4F and Pentium D.
In 2006, Intel introduced the Core microarchitecture with the dual-core Woodcrest and quad-core Clovertown technologies for the Xeon line and that same year we got the Core 2/Core 2 Duo dual-core desktop processors. Next were the Core 2 Quad processors in 2007. The Nehalem architecture was introduced in 2008, and included the Core i7, which is still at the heart of some of today’s most powerful desktop and laptop systems. There are Nehalem designs that support six (Gulftown), eight (Beckton) or ten (Westmere-EX) cores. Core i3 and Core i5 were released as lower-powered, lower-cost Nehalem designs that came out in 2009 and 2010. They’re popular for laptops because they’re more power-efficient than the i7. The latest Sandy Bridge architecture replaces Nehalem, but its processors retain the Core i3/i5/i7 names. The first Sandy Bridge processors were released in 2011.
And that’s just an abbreviated and very simplified history of Intel processors. Advanced Micro Devices (AMD), which licensed from Intel the right to make x86-compatible processors and which merged with graphics card maker ATI, has its own story. The competition between the two has benefited computer users, as each company tries to outdo the other.
A look at today’s multiplicity of processor technologies can be overwhelming. What’s a core and how many of them do you really need? How do you balance power usage against performance? What’s System on a Chip (SoC) and where does it fit in? Are traditional processors on the way out, as we move into the so-called “post PC era?” Over the next couple of weeks, I’ll try to answer some of those questions.
First let’s look at the relatively new phenomenon of multi-core processing. Not so long ago, if you needed superior processing power, you installed more than one processor in your computer. Of course, the computer had to have a motherboard with more than one CPU socket. Most such motherboards were built for servers, although there were also dual-processor (or better) high-performance workstations. Multi-core processors consist of two or more independent processors, but they’re located on the same die or package. The die is the chip that contains an integrated circuit.
Multi-core processors take up much less room inside the computer case than multiple single-core CPUs, don’t generate as much heat and cost less. But you only benefit from multiple cores (or multiple single processors) if your software is written to take advantage of them. Windows NT was designed to support Symmetric Multiprocessing (SMP). Windows XP Professional supports dual processors; the Home version does not. Windows 7 Home Premium supports two processor cores, but not dual CPUs in two different sockets. Pro, Enterprise and Ultimate support two physical CPUs. Windows Server 2008 R2 (Datacenter Edition) supports up to 256 processor cores or 64 physical processors. Wow.
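If you’re curious how many logical processors your own system exposes to the operating system, you don’t have to open the case. Here’s a minimal sketch (I’m using Python purely for illustration; any language with an OS API will do). Keep in mind the number reported is logical processors, so on chips with Hyper-Threading it can be double the physical core count:

```python
import os
import platform

# os.cpu_count() reports logical processors -- on CPUs with
# Hyper-Threading this is typically twice the physical core count.
logical = os.cpu_count()
print(f"{platform.system()} sees {logical} logical processor(s)")
```

Run it on a dual-core machine with Hyper-Threading and you’ll likely see 4, which is exactly the kind of detail that trips people up when comparing spec sheets.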
Not only does the OS need to support those processors, but the applications do too, if you’re going to see appreciable performance benefits. Today’s software developers design their programs to run in multiple threads so they can take advantage of multi-core processing. As with the high-end graphics cards that we discussed last week, those who get the most “bang for the buck” from a multiplicity of processor cores tend to be the hard-core (no pun intended) gamers. If all you do with your computer most of the time is surf the web, organize your photos, post to your blog and work on the occasional Word doc or Excel spreadsheet, a couple of cores will probably serve your purposes.
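To make the “software has to be written for it” point concrete, here’s a small sketch of how a program spreads CPU-bound work across cores. The example (again in Python, with a hypothetical prime-counting task standing in for any parallelizable workload) starts one worker process per core and splits the range among them; the same code with a single worker would leave the other cores idle no matter how many you paid for:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(bounds):
    """CPU-bound work: count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    # One chunk of the range per logical processor, so every core gets work.
    cores = os.cpu_count() or 1
    step = 20000 // cores + 1
    chunks = [(i, min(i + step, 20000)) for i in range(0, 20000, step)]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"Primes below 20,000: {total}")
```

The design choice here mirrors what multithreaded applications do internally: divide the problem into independent pieces, hand each piece to a separate thread or process, and combine the results. Software that wasn’t written this way runs on one core while the rest sit idle.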
Next week, we’ll continue the processor discussion. Meanwhile, tell us what you think. How many processors do you need? At what point does more become overkill? Are you confused by the array of processors that are available today? Are you an Intel lover or an AMD fan? Do you find that the processor really matters, or are other factors (such as RAM) more important for the types of tasks you perform?
- INTEL Vs AMD (mgitecetech.wordpress.com)
- Intel to introduce first mobile ‘Nehalem’ chip (news.cnet.com)
- Which is best to Buy and what is the difference between Core i3, Corei5 and Core i7 Processors? (lakkireddymadhu.wordpress.com)