Use of L3 cache when compiling?



I compile applications often, and they usually take 1-2 hours. I'm planning to upgrade my dual core rig to a quad core one, but I'd like to pull back on cost.

The latest Athlon X4 620 interests me, because I could use all 4 cores to compile the app, and it comes in cheaply at US$100. Does the lack of an L3 cache (as on the Athlon X4s) affect my compile time significantly? i.e. is it worth paying 2-3 times more in total for a Core i5 750 or a Phenom II X4 945 instead of the Athlon X4 620?

I'm largely budget-conscious but don't mind paying more if it's worth it (Core i7s and their associated platform costs are largely out of range, though). Also, I tend to leave this computer on 24/7, so power draw may be a factor too. I plan to have 8-16 GB of DDR3 RAM to maximize the rig's potential during compilation... Nope, I don't play games often.


I just have to ask: what exactly are you compiling that takes 1-2 hours? Or is it multiple applications that take that long overall?

I guess it depends on what you're compiling. I know the hard drive can play an important role in compile speed (I tried compiling on a flash memory stick with ~10 MB/s read/write, and it took a surprisingly long time considering the content).

If it's processor-bound, then I guess a Core 2 Duo/Quad would be a nice upgrade path.

Could you post the rest of your system specs?

Cheers

This seems better for an extra $70: http://www.newegg.com/Product/Product.aspx...N82E16819103471

L3 cache, 3 GHz vs. 2.6 GHz, and a slightly more efficient architecture; plus it's a Black Edition, so you can probably get some extra MHz if you're into overclocking.

I don't overclock, and I'm on a SATA II 1 TB Seagate 7200.11 drive (which I'll swap out for an SSD when prices drop further).

Rest of my specs? The mobo depends on the processor (non-gaming), probably 8-16 GB of RAM, the hard drive described above, a DVD burner, and probably onboard graphics.

Oh, and I compile Mozilla applications, e.g. Firefox / Thunderbird. Each takes ~1 hour on my dual-core rig: Athlon X2 5600+, 4 GB DDR2 RAM, the Seagate drive above, WinXP. (I compile in about half that time on Ubuntu, though I'm not often on Linux.)
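Assuming the standard Mozilla build system of that era, the per-build parallelism lives in your `.mozconfig`; `MOZ_MAKE_FLAGS` is forwarded to make, so on a quad core a sketch would be:

```shell
# .mozconfig fragment (sketch; assumes the usual Mozilla build of the time).
# MOZ_MAKE_FLAGS is passed through to make; -j4 runs 4 compile jobs at once.
mk_add_options MOZ_MAKE_FLAGS="-j4"
```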

I could be wrong, but wouldn't a larger L2 cache serve you better than an L3 cache? L2 is closer to the core and faster to access.
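One way to see which cache levels a given CPU actually reports (Linux/glibc assumed; on Windows, a tool like CPU-Z shows the same information):

```shell
# Sketch (Linux/glibc): list the cache sizes the OS sees. An Athlon X4 620
# would show no L3 (LEVEL3_CACHE_SIZE absent or 0); a Phenom II or Core i5
# reports its shared L3 here.
getconf -a | grep -i 'CACHE_SIZE'
```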

  Quote
L2 and L3 don't make much difference. L2 caches data per core, and L3 basically pools data shared between the L2 caches. I'm sure their latency would matter little when compiling.

So having an L3 cache would help? (Yes, I'd think a higher clock speed would help, though increasing the number of cores would help more.)

What about a Core i7? Would having Hyper-Threading on top of the 4 cores prove even better?
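On the Hyper-Threading question: the OS sees each i7 core as two logical CPUs, and `make -j` is usually fed the logical count. A quick check (Linux assumed):

```shell
# Sketch (Linux): count logical CPUs. On a Core i7 this prints 8 for
# 4 physical cores, so `make -j8` can keep every hardware thread busy.
grep -c '^processor' /proc/cpuinfo
```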

That link was a bonus for me; I wanted to find out the price of the i5 processors.

I have just read a 4-page review of the i5 series from APC magazine (check their web review), and it suggests that the required chipset can also run the new i7 series that has come out, so after, say, 2 years you can just replace the CPU and carry on...

However, the price of new technology is inflated, and yes, to achieve your goal with the i5 series you need a new motherboard (P55 chipset), DDR3 RAM, and an expensive graphics card.

Think about this sum:

Intel Core i5 750 quad-core processor: $200
Intel DP55KG Extreme motherboard: $250
Low-voltage 2200+ 6 GB DDR3 RAM: say $300-400
9800 GT or GTX graphics card: say $250-300

A 9800 GT or GTX for $300? Hmm, what currency is this in?

As for memory, no need to get 2200 ^_^

I've had nothing but bad luck with 2000+ memory, and in the end, if you get 6 DIMMs it's pointless to have that kind of bandwidth, as it can't all be utilized (4 DIMMs in the case of the i5... actually, does anyone know the maximum memory bandwidth supported by the i5?)
