Who's ready for processors in the terahertz range operating at room temperature?

Forum dedicated to computer hardware and software, mobile phones and electronic gadgets.
User avatar
RCA
Posts: 8226
Joined: Mon Jan 22, 2007 8:09 am

Post

http://www.technologyreview.co.../?a=f&

Pretty epic article, thanks to IBM and their nanotech. Graphene is the new silicon!


User avatar
MinisterofDOOM
Moderator
Posts: 34350
Joined: Wed May 19, 2004 5:51 pm
Car: 1962 Corvair Monza
1961 Corvair Lakewood
1974 Unimog 404
1997 Pathfinder XE
2005 Lincoln LS8
Former:
1995 Q45t
1993 Maxima GXE
1995 Ranger XL 2.3
1984 Coupe DeVille
Location: The middle of nowhere.

Post

1 THz isn't 100 GHz, it's 1000 GHz. Need another zero before they hit the THz mark.

Interesting contrast to today's consumer CPU market, in which core clock speed increases have become much less important than processing optimization and core/thread counts. Of course, a lot could change in the time between now and whenever (if ever) that hits the consumer market.

User avatar
RCA
Posts: 8226
Joined: Mon Jan 22, 2007 8:09 am

Post

Katherine Bourzac wrote:Other researchers have made very fast transistors using expensive semiconductor materials such as indium phosphide, but these devices only operate at low temperatures. In theory, graphene has the material properties needed to let transistors run at terahertz speeds at room temperature.
It's right before the part that reads "Story continues below." I have an ad blocker, so I'm sure there's a huge advert there.

Also, clock speed has become less important as manufacturers get closer and closer to the physical limitations of silicon. But optimization can always occur; physical limits will always be there. Optimization is a result of reaching those limits. Look at the Pentium 4 chips: they reached 3.8 GHz and Intel moved on to dual-core.

But now we are elbow-deep in innovation, so I could imagine first-gen graphene chips coming out of the gate with single- and dual-core models. That would be exciting. One thing I can't figure out is what power would be needed to run them at such high frequencies. They discuss heat but not power.

User avatar
MinisterofDOOM
Moderator
Posts: 34350
Joined: Wed May 19, 2004 5:51 pm
Car: 1962 Corvair Monza
1961 Corvair Lakewood
1974 Unimog 404
1997 Pathfinder XE
2005 Lincoln LS8
Former:
1995 Q45t
1993 Maxima GXE
1995 Ranger XL 2.3
1984 Coupe DeVille
Location: The middle of nowhere.

Post

RCA wrote:Also, clock speed has become less important as manufacturers get closer and closer to the physical limitations of silicon. But optimization can always occur; physical limits will always be there. Optimization is a result of reaching those limits. Look at the Pentium 4 chips: they reached 3.8 GHz and Intel moved on to dual-core.
That may have been true 2-5 years ago. It isn't anymore. Chips are NOT pushing their limits. The Core i3 sells with a 3 GHz clock but can be pushed past 4 GHz. If tech limitations were the only thing holding back clock speed, the i7 would be a 5 GHz chip, not 2.6 GHz. It's really the opposite: rather than hardware holding back clock performance, it's software not needing clock increases. 2.6 GHz is great for even hardcore stuff, provided it's done efficiently. 3 GHz will be better, but 2.6 is fine. 4 and 5 just aren't necessary. We've hit the sweet spot for performance, efficiency, and cost.

Optimization is also less an answer to clock speed limitations than an answer to software needs. If clock speed increases are no longer netting significant performance gains, we have to turn elsewhere. That's where things like HyperTransport (which was beneficial even in the days when clock speeds were still climbing) and QuickPath come into play. They attempt to make the most effective use of the available clock cycles.

Pentium 4 was a LONG time ago. And, depending on who you ask, they weren't particularly good chips anyway. In fact, they're a great example of exactly why optimization is better than clock speed. Look at the P4 vs Athlon 64 and you'll see Intel climbing the MHz ladder while AMD was content to increase effectiveness from lower MHz. HyperTransport made all the difference. That was the era that clock speeds disappeared from system requirements because performance-per-clock was NOT equal from AMD to Intel. The gap was actually quite large because of the very different approaches taken by the two manufacturers.

User avatar
RCA
Posts: 8226
Joined: Mon Jan 22, 2007 8:09 am

Post

Celeron: max CPU clock rate 266 MHz to 3.6 GHz

Pentium D: max CPU clock rate 2.66 GHz to 3.73 GHz

Xeon: max CPU clock rate 400 MHz to 3.8 GHz

Core 2: max CPU clock rate 1.06 GHz to 3.33 GHz

Core i7: max CPU clock rate 1.6 GHz to 3.47 GHz

Core i5: max CPU clock rate 3.46 GHz

You can definitely take these higher, but manufacturers know what they are doing. So although 3.8 GHz might be the highest any manufacturer will go, people can still take them higher, but at what cost? See how long someone can keep an i7 at 5 GHz. Sure, they can do it, but it won't last. Manufacturers know that, and they have warranties to keep. If silicon had no limits, then using multiple cores wouldn't be necessary: a single core at 15 GHz would do fine, no need for multi-core with hyperthreading. They had to get inventive in order to keep up with demand while working within boundaries.

I didn't mention the Pentium 4 because of how good or bad it was; I mentioned it because of the speed limit it reached. But it doesn't matter which chip you mention: they all mysteriously land at or near the same speeds in the latest stages of development. And it isn't because of an economic sweet spot, or because users find themselves not needing any extra speed. It's because chip designers ran out of room and needed to move on. If multi-core designs hadn't been thought up, we would to this day be using single-core chips at under 4 GHz.

User avatar
C-Kwik
Moderator
Posts: 9086
Joined: Thu Aug 01, 2002 9:28 pm
Car: 2013 Chevy Volt, 1991 Honda CRX DX

Post

I think the question that needs to be asked and answered is how much cost is involved in building a computer with a much higher clock speed. I'd imagine heat levels could get very high, and the engineering needed to cool such a system could get costly. I skimmed the article the other day so I'm not sure exactly what was said, but it sounds to me like the material handled higher clock speeds without the levels of heat silicon would see at those clock speeds.

It's not implausible that silicon can handle more, but Intel and AMD are not catering much to those who seek to OC their processors. Their primary customers are actually companies like HP, Dell, etc., and in turn the consumers who purchase from them. I'd say a very small portion of people OC their PCs, let alone buy such PCs from those companies. And these companies are going to seek chips that help minimize the cost of their products. Having to support a large cooling system and perhaps a larger power supply tends to conflict with such goals. The newer designs do benefit people who OC their PCs, but it's unlikely that there is going to be much direct support for OCing, especially at extreme levels. And the engineering challenges are likely to keep them from pursuing much higher clock speeds on silicon. Basically, while the absolute limitations might be much higher, it's the practical limitations that are most relevant.


User avatar
MinisterofDOOM
Moderator
Posts: 34350
Joined: Wed May 19, 2004 5:51 pm
Car: 1962 Corvair Monza
1961 Corvair Lakewood
1974 Unimog 404
1997 Pathfinder XE
2005 Lincoln LS8
Former:
1995 Q45t
1993 Maxima GXE
1995 Ranger XL 2.3
1984 Coupe DeVille
Location: The middle of nowhere.

Post

C-Kwik wrote:Basically, while the absolute limitations might be much higher, it's the practical limitations that are most relevant.
That's what I'm talking about. And if you throw the i3 onto that graph, you'll see yet more improvement on a performance-per-watt scale.

As for RCA's question, how long would a 5 GHz i7 last? I didn't just pick that number out of the air. Intel has demonstrated the i7 975 OC'd to 5.07 GHz ON AIR. And i3s and i5s have been pushed to 4 GHz reliably on air. Which tells us that A) current tech is already within the realm of reliable 5 GHz chips, and B) a chip could most certainly be designed with that high a clock speed in mind and operate reliably.

If basement overclockers can do it, Intel can definitely do it. But it all comes down to what I said in my post, and what C-Kwik said: the hardware ceiling is not the issue. The practical ceiling is the issue. Mid-2-GHz CPUs are more than adequate for most people's needs. And "most people" is exactly who Intel and AMD make chips for. If the market were dominated by hobbyists and gamers who regularly make use of crazy clock speeds, higher factory clock speeds would be more common.

Plus, as that graph shows, and as I said before, real-world performance simply IS NOT solely (or even mostly) dependent on clock speed. And that's with last-gen and last-last-gen chips. The Nehalem chips are even more efficient: more performance per watt, more performance per clock. That's what the end user benefits from.

As for multi-core not being necessary aside from compensating for lack of clock speed: that's just bunk. Multi-core might not have been a part of the CONSUMER marketplace until clock speeds started peaking, but it has most certainly existed for longer than that, and the benefits are undeniable. Unless you're only ever running a SINGLE PROCESS, you ALWAYS stand to benefit from distributing computational load across as wide an area as possible. 5 GHz might be fast, but processes still have to queue up to be processed on a single core. If you have a dozen processes (low for a consumer PC), you're queuing up a lot. And that queuing doesn't just affect the CPU; it affects memory, video, everything. Providing more places to process the data is far more important than providing faster ways to process it. As I said before, AMD proved this quite well with HyperTransport back in the A64/P4 days. AMD chips did more with less clock speed. It's a fact. And they did it because they handled the processing much more efficiently.
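To put toy numbers on the queuing point, here's a quick sketch. It assumes perfectly divisible, CPU-bound work and ideal scheduling (which real schedulers only approximate), so treat it as an illustration, not a benchmark:

```python
# Toy model: time to finish a batch of independent processes, assuming
# perfectly divisible CPU-bound work and ideal scheduling.

def batch_time(num_processes, gigacycles_each, num_cores, clock_ghz):
    """Seconds to complete every process in the batch."""
    total_work = num_processes * gigacycles_each   # gigacycles
    throughput = num_cores * clock_ghz             # gigacycles per second
    return total_work / throughput

# A dozen processes needing 5 gigacycles apiece:
print(batch_time(12, 5.0, 1, 5.0))   # one 5 GHz core    -> 12.0 s
print(batch_time(12, 5.0, 4, 2.5))   # four 2.5 GHz cores -> 6.0 s
```

Four slower cores finish the batch in half the time of one fast core, because aggregate throughput (cores times clock) is what moves a queue.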

User avatar
audtatious
Moderator
Posts: 37008
Joined: Sun Oct 27, 2002 5:31 pm
Car: 2017 Q60 Red Sport. Gone: 2014 Q50s, 2008 G37s coupe, 2007 G35s Sedan, 2002 Maxima SE, 2000 Villager Estate (Quest), 1998 Quest, 1996 Sentra GXE
Location: Stalking You
Contact:

Post


User avatar
RCA
Posts: 8226
Joined: Mon Jan 22, 2007 8:09 am

Post

So MoD, you really think that more efficient CPUs (multi-core, HyperTransport, etc.) were a result of only software requirements and not a necessity born of the limits of silicon? I think that increased demand for power along with practical limitations created a need for innovation.

Also, I apologize for not using the correct terms; when I mentioned silicon's "physical limits" I meant its practical limits (thanks, C-Kwik, for pointing that out). I just assumed one would think they are the same. After some googling I see what you mean about the 5 GHz OC'd i7, but how ideal is a setup like that? It's a novelty; they do it because they can, and the effort required isn't worth the trouble. The voltages required are too high and the temperatures are ridiculous. Can someone tell me why they are both too high?

Awesome article by Robert W. Keyes of the IBM Research Division: http://www.fisica.unipg.it/~ga...n.pdf

Didn't read all of it, but skimming through it I can tell there is a lot of great information in it.

EDIT: Cool article, audtatious. I see where you are going with that.

AZ...
Modified by RCA at 4:34 PM 2/10/2010

User avatar
AZhitman
Administrator
Posts: 71063
Joined: Mon Apr 29, 2002 2:04 am
Car: 58 L210, 63 Bluebird RHD, 64 NL320, 65 SPL310, 66 411 RHD, 67 WRL411, 68 510 SR20, 75 280Z RB25, 77 620 SR20, 79 B310, 90 S13, 92 SE-R, 92 Silvia Qs, 98 S14.
Location: Surprise, Arizona
Contact:

Post



Nerds.


User avatar
C-Kwik
Moderator
Posts: 9086
Joined: Thu Aug 01, 2002 9:28 pm
Car: 2013 Chevy Volt, 1991 Honda CRX DX

Post

RCA wrote:So MoD, you really think that more efficient CPUs (multi-core, HyperTransport, etc.) were a result of only software requirements and not a necessity born of the limits of silicon? I think that increased demand for power along with practical limitations created a need for innovation.
I think they go hand in hand. The constraints on silicon mean that getting more useful processing power requires a lot of compromise, a big one being overall cost. The original article alludes to the possibilities of the new material, but I doubt we are going to see such high levels of processing power implemented at the consumer level anytime soon. We will probably just get to enjoy the energy-saving benefits, perhaps even a modest increase in processing power without having to resort to major cooling systems and power supplies (not sure how big an impact it would have, though). Not sure where this stands in terms of cost, either.
RCA wrote:The voltages required are too high and the temperatures are ridiculous. Can someone tell me why they are both too high?
While I'm not sure this answers your question, heat and voltage/current go hand in hand. If you treat a processor as a resistor, then increasing the voltage while holding current fixed increases the power dissipated. Resistors convert electrical energy into heat, and a processor does the same thing. The calculation may not be that straightforward for a processor, but ultimately there are resistive properties in any chip, so the concept should be valid. I hope this was what you were asking.
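To put rough numbers on it, here's a quick sketch using the standard CMOS dynamic-power approximation, P ≈ C·V²·f. The capacitance and voltage figures are made-up placeholders, not specs for any real chip, but the scaling is the point: bumping the clock usually means bumping the voltage too, so power grows much faster than frequency.

```python
# Dynamic switching power of a CMOS chip, roughly P = C * V^2 * f.
# All figures below are illustrative placeholders, not real chip specs.

def dynamic_power(cap_farads, volts, freq_hz):
    """Approximate switching power in watts."""
    return cap_farads * volts**2 * freq_hz

stock = dynamic_power(1e-9, 1.2, 3.0e9)         # 1.2 V at 3 GHz
overclocked = dynamic_power(1e-9, 1.45, 5.0e9)  # 1.45 V at 5 GHz

# A ~67% clock increase costs ~2.4x the power once the voltage bump
# is included, and all of that power leaves the chip as heat.
print(f"stock: {stock:.1f} W, OC: {overclocked:.1f} W, "
      f"ratio: {overclocked/stock:.1f}x")
```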


User avatar
MinisterofDOOM
Moderator
Posts: 34350
Joined: Wed May 19, 2004 5:51 pm
Car: 1962 Corvair Monza
1961 Corvair Lakewood
1974 Unimog 404
1997 Pathfinder XE
2005 Lincoln LS8
Former:
1995 Q45t
1993 Maxima GXE
1995 Ranger XL 2.3
1984 Coupe DeVille
Location: The middle of nowhere.

Post

RCA wrote:So MoD, you really think that more efficient CPUs (multi-core, HyperTransport, etc.) were a result of only software requirements and not a necessity born of the limits of silicon?
Yes. Absolutely. The reason I say this is that consumer tech trickles down from the professional market. Multi-core CPUs are an example of that. But they're also the logical progression of another bit of tech that never made it to the mainstream consumer market because of cost reasons: multi-CPU (not multi-core) setups. Servers with multiple Xeons were common long before multi-core CPUs entered the consumer market. They were designed for the server environment where clock speed was much less important than moving huge amounts of data quickly and efficiently. It was a software necessity, not a hardware workaround.

Fast forward a few years, and the multi-CPU motherboard logically progresses to the multi-core CPU. With improved tech (like QPI and HyperTransport [I avoid abbreviating HyperTransport as "HT," since it could be mistaken for hyperthreading]) a single die could be made to manage data as effectively as two. So servers began adopting dual-core Xeons and Opterons. The goal was still the same: move lots of data quickly and efficiently. A software need.

Of course, it's always more cost-efficient to produce multiple products from a single design, so shortly after the professional-market multi-core chips arrived, AMD and Intel began offering consumer-market chips based on the same architecture. The Core/Core 2 and Athlon 64 X2 were here. But they were really descended from professional tech. They were multi-core because the Xeon and Opteron were multi-core. The consumer market at that time really couldn't make proper use of multi-core CPUs. Software wasn't ready for it (Windows certainly wasn't optimized for it). There were a lot of articles that talked about the negligible benefits of multi-core CPUs over faster single-core CPUs, especially in the gaming world, where games were not thread-optimized.

Fast forward again, about 4 years, and we've got today. Multi-core-shy Windows XP has been succeeded twice (once by another multi-core reject and then mercifully by Win7 and its superior process management). A large portion of the software available in the consumer marketplace is now written to take advantage of multiple threads or multiple cores. Many games have options to customize the software to utilize a specified number of threads. QPI and HyperTransport are critical as they allow the CPU to keep up with the data rates of 2, 3, 4, 6, 8, or 12 threads. The FSB was a weak point even in the days of single core CPUs (which was when AMD actually began using HyperTransport as a solution to that weak point).

Compare a program like 7-Zip or a well-threaded video encoder: a slow but multi-threaded processor will easily outperform a fast, single-threaded one, moving more data more quickly. Individual processes might run slower, but more of them are happening at once.
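If you want to see that on your own machine, here's a minimal sketch (nothing to do with 7-Zip's actual code, just a stand-in CPU-bound task) comparing one worker against a pool of four:

```python
# Compare the same CPU-bound workload run serially vs. on a worker pool.
import time
from multiprocessing import Pool

def crunch(n):
    """Stand-in for one CPU-bound chunk of work (e.g. compressing a block)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 8

    start = time.perf_counter()
    for c in chunks:                  # single worker: chunks queue up
        crunch(c)
    print("1 worker :", time.perf_counter() - start)

    start = time.perf_counter()
    with Pool(4) as pool:             # four workers run chunks in parallel
        pool.map(crunch, chunks)
    print("4 workers:", time.perf_counter() - start)
```

On a quad-core box the pooled run finishes several times faster, even though no individual chunk runs any quicker.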

It's worth noting that hyperthreading originated in server tech as well: the Xeon that shared the P4's architecture was the first hyperthreaded chip. Then you'll note that hyperthreading disappeared from consumer chips because it wasn't much help, for the same reasons multiple cores weren't at first. Now that software is "ready" for it, hyperthreading is back to complement multiple cores.

So on the professional end, multi-core was a solution to a software demand. But on the consumer end, we had to wait for software to catch up with the hardware advances. Now we're caught up, and threading is a very powerful tool.

One of the coolest benefits of multi-threading is that you can run multiple demanding programs at once. I can play Crysis and watch a movie, because Crysis doesn't need 8 threads, so the DVD player can do its thing without getting in the way (memory and 3D acceleration are a different issue, but as Crysis is GPU-limited by my poor little GTX260, this makes a good example of the CPU-side benefits).

User avatar
RCA
Posts: 8226
Joined: Mon Jan 22, 2007 8:09 am

Post

C-Kwik wrote:While I'm not sure this answers your question, heat and voltage/current go hand in hand. If you treat a processor as a resistor, then increasing the voltage while holding current fixed increases the power dissipated. Resistors convert electrical energy into heat, and a processor does the same thing. The calculation may not be that straightforward for a processor, but ultimately there are resistive properties in any chip, so the concept should be valid. I hope this was what you were asking.
That question was meant to be a rhetorical one that would support my claim that there are limits to silicon at high frequencies.

@MoD: As far as multitasking goes, you're right, nothing beats a multi-threaded / multi-core CPU, and your points are very compelling... But I can't get over the fact that these CPUs all stop at the same frequencies. A Core Duo @ 7 GHz would have been more than enough if you were multitasking, so why was a Core 2 Duo necessary? A Core 2 Quad? Just bump the frequencies and you'd be fine. Why would manufacturers constantly go against this? I am starting to lean toward the "they go hand in hand" argument. At this point I am very interested in this, and I plan on trying to contact a professional in the industry to get his point of view.

User avatar
MinisterofDOOM
Moderator
Posts: 34350
Joined: Wed May 19, 2004 5:51 pm
Car: 1962 Corvair Monza
1961 Corvair Lakewood
1974 Unimog 404
1997 Pathfinder XE
2005 Lincoln LS8
Former:
1995 Q45t
1993 Maxima GXE
1995 Ranger XL 2.3
1984 Coupe DeVille
Location: The middle of nowhere.

Post

Remember that just because the USER isn't multitasking doesn't mean the CPU isn't handling multiple processes. Even Windows idling runs numerous processes. Being able to offload some of those to get them out of the way of the intensive stuff is a huge boon.

Plus, Intel's multi-core CPUs have Turbo Boost, which will overclock cores depending on how many are in use. The i5, for instance, will turn up the clock by 533 MHz if only one or two cores are active. The i7 920 cranks up 533 MHz for two cores and 665 MHz for one. So there's an attempt to balance the benefits between multiple threads and single threads.
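To make the behavior concrete, here's a toy lookup using the figures quoted above (which are from memory, so treat the exact MHz numbers as unverified):

```python
# Toy model of Turbo Boost-style behavior: extra clock as a function of
# how many cores are currently active. Bin values are illustrative.

def boosted_clock(base_ghz, active_cores, boost_table):
    """Effective clock in GHz given a {max_active_cores: boost_mhz} table."""
    boost_mhz = 0
    for max_active in sorted(boost_table):
        if active_cores <= max_active:
            boost_mhz = boost_table[max_active]
            break
    return base_ghz + boost_mhz / 1000.0

i7_920 = {1: 665, 2: 533}   # +665 MHz one core active, +533 MHz for two
print(boosted_clock(2.66, 1, i7_920))   # ~3.33 GHz
print(boosted_clock(2.66, 2, i7_920))   # ~3.19 GHz
print(boosted_clock(2.66, 4, i7_920))   # 2.66 GHz, no boost with all cores busy
```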

chemao
Posts: 369
Joined: Wed Sep 27, 2006 7:32 pm
Car: 2019 Tesla Model 3 - preordered
1997V2 AM General H1 with 6.6L Duramax (LML) running biodiesel swap
2015 Lexus CT200H
2003 Hummer H2 with 6.5L Detroit Turbodiesel swap
Location: Boston, MA

Post

One of my best friends had an Intel P4 running around 4.5 GHz. It was fast...

User avatar
stebo0728
Posts: 2810
Joined: Wed Feb 11, 2009 4:43 pm
Car: 1993 300ZX, White, T-Top
Contact:

Post

That will create a vast improvement in the virtual machine world. 100 GHz CPUs mean A) more virtual machines per server, or B) more CPU for each virtual machine. Either way, woo hoo!

Does anyone know if they have figured out how to create virtual machines that can handle larger graphics tasks efficiently?

Danski
Posts: 27
Joined: Fri Dec 11, 2009 5:58 pm
Car: Bluebird

Post

Hey guys,
I know this is an old thread, but I think you haven't touched on something, so I'll put in my two cents without trying to make this an epic post where I tie myself in a knot.

One of the things that you have perhaps missed is that chip manufacturers (let's just say Intel) are businesses.

Well duh, but why am I bringing this up? It's because when making chips they have to cover their own arses, specifically when it comes to reliability. Sure, Intel could put out their chips at 5 GHz; they could even put them out and just say, hey, you choose the speed.

Problem is, then they are liable. They have to provide a chip that's going to do its job day in, day out, in a wide range of operating environments without kicking the bucket (think >30°C ambient plus 1.5 years of dust buildup).

Another thing that they need to consider is manufacturing. They have to throw away (or do something with) chips that won't reach 5 GHz if the box says they definitely will do 5 GHz. This amounts to a lot of wastage, as silicon manufacturing is anything but perfect.

And then there's the fact that in a duopoly (Intel vs. AMD), when one's whipping the other there really is no need to pump the speeds up and risk making your chips less profitable.

As a bit of an example, the Compaq (read: HP) laptop I'm using has an Nvidia chipset. Nvidia stuffed up, and their chips get too hot and desolder themselves from the motherboard. This has meant that a lot of these chipsets need to be replaced, and HP has made Nvidia pay $200 for each laptop they've had to fix. This is a situation no chip manufacturer wants to be in.

Well, hope that sheds some more light on why clock speeds are the way they are.

User avatar
MinisterofDOOM
Moderator
Posts: 34350
Joined: Wed May 19, 2004 5:51 pm
Car: 1962 Corvair Monza
1961 Corvair Lakewood
1974 Unimog 404
1997 Pathfinder XE
2005 Lincoln LS8
Former:
1995 Q45t
1993 Maxima GXE
1995 Ranger XL 2.3
1984 Coupe DeVille
Location: The middle of nowhere.

Post

Danski wrote:Problem is, then they are liable. They have to provide a chip that's going to do its job day in, day out, in a wide range of operating environments without kicking the bucket (think >30°C ambient plus 1.5 years of dust buildup).
Absolutely. Excellent point. Retail products need to be on the safe side of the performance envelope. You can push it yourself later, but with cars and computers it's the same thing: push the envelope too hard looking for more performance and your warranty goes away.

And mentioning dust is right on as well. How many "average" computer users ever open their machines? Most retail machines have warranty stickers on the side panel discouraging that anyway (foolish). People leave their machines on carpet and over the years they ingest a LOT of dust, reducing fan and heatsink effectiveness. But they still expect the manufacturer to back the product.

User avatar
RCA
Posts: 8226
Joined: Mon Jan 22, 2007 8:09 am

Post

Hey MoD, apparently a few engineers agree with me.

http://www.reddit.com/r/askscience/comm ... to_around/

/zombiethread

chemao
Posts: 369
Joined: Wed Sep 27, 2006 7:32 pm
Car: 2019 Tesla Model 3 - preordered
1997V2 AM General H1 with 6.6L Duramax (LML) running biodiesel swap
2015 Lexus CT200H
2003 Hummer H2 with 6.5L Detroit Turbodiesel swap
Location: Boston, MA

Post

stebo0728 wrote:That will create a vast improvement in the virtual machine world. 100 GHz CPUs mean A) more virtual machines per server, or B) more CPU for each virtual machine. Either way, woo hoo!

Does anyone know if they have figured out how to create virtual machines that can handle larger graphics tasks efficiently?
Virtual machines by design can't make use of graphics accelerators, as they would need direct access. Theoretically in the future you could dedicate GPUs to a VM, but the question is WHY?

Incidentally, VM hosts are among the few situations where multiple cores will benefit more than higher clock rates. Higher clock rates will benefit end users more.

chemao
Posts: 369
Joined: Wed Sep 27, 2006 7:32 pm
Car: 2019 Tesla Model 3 - preordered
1997V2 AM General H1 with 6.6L Duramax (LML) running biodiesel swap
2015 Lexus CT200H
2003 Hummer H2 with 6.5L Detroit Turbodiesel swap
Location: Boston, MA

Post

RCA wrote:Hey MoD, apparently a few engineers agree with me.

http://www.reddit.com/r/askscience/comm ... to_around/

/zombiethread

You were right from the beginning. A 5 GHz dual-core processor will stomp a 2.5 GHz quad core. The reason they went to multiple cores instead of raising clock rates was thermal barriers that OEMs couldn't surmount without some wild cooling rigs [such as the refrigerators used in liquid cooling]. Raising clock speeds requires higher voltage, and higher voltage leads to higher temperatures. Contrary to marketing, you will almost never see a performance difference between a dual-core and a quad-core processor, given the same transistor budget and per-core clock speed.
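For a back-of-the-envelope check on that (my framing, using Amdahl's law, not anything from the reddit thread): score each chip as clock speed times parallel speedup for a workload with parallel fraction p. The dual core wins at every p below 1.0:

```python
# Amdahl's law: speedup on n cores for parallel fraction p is
# 1 / ((1 - p) + p / n). Scale by clock speed to compare chips.

def relative_perf(clock_ghz, cores, p):
    """Clock speed times Amdahl speedup; higher is faster."""
    return clock_ghz / ((1.0 - p) + p / cores)

for p in (0.0, 0.5, 0.9):
    dual = relative_perf(5.0, 2, p)   # 5 GHz dual core
    quad = relative_perf(2.5, 4, p)   # 2.5 GHz quad core
    print(f"p={p}: dual={dual:.2f}, quad={quad:.2f}")
# p=0.0: dual=5.00, quad=2.50
# p=0.5: dual=6.67, quad=4.00
# p=0.9: dual=9.09, quad=7.69
```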

chemao
Posts: 369
Joined: Wed Sep 27, 2006 7:32 pm
Car: 2019 Tesla Model 3 - preordered
1997V2 AM General H1 with 6.6L Duramax (LML) running biodiesel swap
2015 Lexus CT200H
2003 Hummer H2 with 6.5L Detroit Turbodiesel swap
Location: Boston, MA

Post

RCA wrote:
C-Kwik wrote:While I'm not sure this answers your question, heat and voltage/current go hand in hand. If you treat a processor as a resistor, then increasing the voltage while holding current fixed increases the power dissipated. Resistors convert electrical energy into heat, and a processor does the same thing. The calculation may not be that straightforward for a processor, but ultimately there are resistive properties in any chip, so the concept should be valid. I hope this was what you were asking.
That question was meant to be a rhetorical one that would support my claim that there are limits to silicon at high frequencies.

@MoD: As far as multitasking goes, you're right, nothing beats a multi-threaded / multi-core CPU, and your points are very compelling... But I can't get over the fact that these CPUs all stop at the same frequencies. A Core Duo @ 7 GHz would have been more than enough if you were multitasking, so why was a Core 2 Duo necessary? A Core 2 Quad? Just bump the frequencies and you'd be fine. Why would manufacturers constantly go against this? I am starting to lean toward the "they go hand in hand" argument. At this point I am very interested in this, and I plan on trying to contact a professional in the industry to get his point of view.
My friend Joey and I used to run the 12th-fastest rig in the world, and together we overclocked hundreds of CPUs. Though I can't say I'm a "professional in the industry," I consider myself fairly seasoned. You are correct in your belief that multiple cores arose out of necessity due to thermal barriers, rather than because they are faster. A 5 GHz single core will roast a 2.5 GHz dual core all day long. I'm getting itchy fingers now... I'm on a Sager notebook, but all this talk about overclocking is putting me in the mood to build a desktop beast at 6+ GHz.

User avatar
h66kEM
Posts: 32
Joined: Fri Jan 06, 2012 7:17 pm
Car: '11 Altima Coupe 3.5 MT

other ride: '06 Yamaha R1
Location: Gilbert, AZ

Post

Intel doesn't make billions of dollars a quarter (quarter after quarter, btw) for nothing. They are in the business to make money, and if they break a few records in the process, it's a bonus. Clock speed records don't bring Intel the massive profit margins. Dell, HP, Lenovo, etc.: now those are the real customers.

Most people want battery life and warranties, not melting PCBs.

User avatar
h66kEM
Posts: 32
Joined: Fri Jan 06, 2012 7:17 pm
Car: '11 Altima Coupe 3.5 MT

other ride: '06 Yamaha R1
Location: Gilbert, AZ

Post

For all the overclockers out there, you can buy insurance for your Intel chip here:

http://click.intel.com/tuningplan/

So, that answers that question :chuckle:

rangerRavi
Posts: 1
Joined: Sun Jan 22, 2012 9:59 am
Car: 2007 M35 Sport Blue, 2010 G37 Coupe Black

Post

h66kEM wrote:For all the overclockers out there, you can buy insurance for your Intel chip here:

http://click.intel.com/tuningplan/

So, that answers that question :chuckle:

...very nice, I did not know they offered this. I'm gonna buy two for the i5 2500Ks I just got. :)

thanks

User avatar
h66kEM
Posts: 32
Joined: Fri Jan 06, 2012 7:17 pm
Car: '11 Altima Coupe 3.5 MT

other ride: '06 Yamaha R1
Location: Gilbert, AZ

Post

Intel just started offering it. I'm pretty sure no other chip maker offers a warranty for people to "abuse/test/max out" their processors. Enjoy!! :woot:

