AMD Bulldozer performance - ex-AMD engineer

  • Please log in to reply
3 replies to this topic

#1 Ci7


    Neowinian Senior

  • 8,306 posts
  • Joined: 21-June 08
  • Location: Bahrain
  • OS: Windows 10 TP
  • Phone: iphone 5S

Posted 17 October 2011 - 08:07

AMD's recently launched FX-Series processors, based on the Bulldozer architecture, haven't managed to deliver the performance everybody expected, and an ex-AMD engineer has recently come forward to share his view of Bulldozer's performance issues.

Cliff A. Maier worked as a member of AMD's technical staff until a few years ago, when he left the company at about the same time AMD started using automated design tools for its chips.

According to the engineer, the fact that Bulldozer arrived later than everybody expected has little to do with its performance problems; the main issue affecting the architecture was the chip maker's adoption of automated design techniques.

Compared to traditional design techniques, which rely on hand-crafting the performance-critical parts of a processor, automated tools speed up the design process but cannot ensure maximum performance and efficiency.

"The management decided there should be such cross-engineering [between AMD and ATI teams within the company], which meant we had to stop hand-crafting our CPU designs and switch to an SoC design style," said Maier in a forum post on Insideris.com.

“This results in giving up a lot of performance, chip area, and efficiency. The reason DEC Alphas were always much faster than anything else is they designed each transistor by hand. Intel and AMD had always done so at least for the critical parts of the chip.

“That changed before I left - they started to rely on synthesis tools, automatic place and route tools, etc.," continued the engineer.

According to Maier, automatically generated designs can be 20% bigger and slower than hand-crafted silicon, leading to an increased transistor count, increased die area and lower energy efficiency.

"I had been in charge of our design flow in the years before I left, and I had tested these tools by asking the companies who sold them to design blocks (adders, multipliers, etc.) using their tools. I let them take as long as they wanted."

Read more at source


#2 Vice



  • 15,877 posts
  • Joined: 03-September 04

Posted 17 October 2011 - 08:11

They should have kept the GPU and CPU dies separate and placed them next to each other on the CPU package. To lower costs they wanted to produce it as one giant die, and now instead of having the best of both we have this mediocre best of nothing.

#3 Raa


    Resident president

  • 13,491 posts
  • Joined: 03-April 02
  • Location: NSW, Australia

Posted 17 October 2011 - 08:12

Oh c'mon, what are we going to hear next?
I've heard 3 excuses about the lack of performance... Which one is it...

#4 Mark


    (: ollǝɥ

  • 3,845 posts
  • Joined: 22-October 04
  • Location: Derbyshire, UK

Posted 17 October 2011 - 08:13

Well, if all three are true, and all three are fixed in the next batch, they'll be OSOM!