
C# vs C++ performance: sorting an array


Question

I'm trying to compare the performance of the standard sorting facilities of .NET and C++, mainly just for fun. I devised the following test in C++/CLI (to run it, create a "CLR Console Application"): I fill a large C++ array and an equally large .NET array with the same random numbers, then sort each with its respective standard sorting function: std::sort and Array::Sort. I repeat the test a few times and compute the average for each.

#include "stdafx.h"
#include <array>
#include <algorithm>
using namespace System;
using namespace System::Diagnostics;

const int ARRAY_SIZE = 1000000;
const int NUM_LOOPS = 20;

// Testing .NET's Array::Sort() vs C++'s std::sort on both languages' standard arrays.
int main(array<System::String ^> ^args)
{
    auto cppArray = new int[ARRAY_SIZE];
    auto netArray = gcnew array<int>(ARRAY_SIZE);
    double totalTimeCpp = 0.0;
    double totalTimeNet = 0.0;

    auto randGen = gcnew Random();

    for (int i = 0; i < NUM_LOOPS; ++i) {
        // Fill both arrays with the same random numbers.
        for (int i = 0; i < ARRAY_SIZE; ++i) {
            int randNum = randGen->Next();
            cppArray[i] = randNum;
            netArray[i] = randNum;
        }

        auto stopWatch = Stopwatch::StartNew();
        std::sort(cppArray, cppArray + ARRAY_SIZE);
        stopWatch->Stop();
        totalTimeCpp += (double)stopWatch->ElapsedMilliseconds;

        stopWatch = Stopwatch::StartNew();
        Array::Sort(netArray);
        stopWatch->Stop();
        totalTimeNet += (double)stopWatch->ElapsedMilliseconds;
    }

    Console::WriteLine(L"Average time C++: {0} milliseconds.", totalTimeCpp / (double)NUM_LOOPS);
    Console::WriteLine(L"Average time .NET: {0} milliseconds.", totalTimeNet / (double)NUM_LOOPS);
    Console::WriteLine(L"Array.Sort time / std::sort time: {0}.", totalTimeNet / totalTimeCpp);
    Console::ReadKey(false);
    return 0;
}

I'm posting this because I can't believe the results. In release mode, optimizing for speed, Array.Sort() takes 89.6% of the time std::sort() takes on my machine. std::sort is supposed to be faster, if only because C++ doesn't perform a bounds check on each random array access and .NET does. At this point I suspect there's something wrong with my testing methodology, perhaps due to compiling with the /clr switch on. I don't know, so if you have a better idea, let me know.
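[Editor's note: to rule out the /clr switch as a factor, the native half can be timed on its own in a plain, non-CLR project. Below is a minimal sketch using std::chrono instead of Stopwatch; the helper name averageSortMs and the fixed seed are my own choices, not from the post.]

```cpp
#include <algorithm>
#include <chrono>
#include <random>
#include <vector>

// Average wall-clock time of std::sort over numLoops runs, in milliseconds.
double averageSortMs(int arraySize, int numLoops) {
    std::mt19937 gen(42);  // fixed seed so runs are repeatable
    std::uniform_int_distribution<int> dist;
    std::vector<int> v(arraySize);
    double totalMs = 0.0;
    for (int loop = 0; loop < numLoops; ++loop) {
        for (int& x : v) x = dist(gen);  // refill with fresh random data
        auto start = std::chrono::steady_clock::now();
        std::sort(v.begin(), v.end());
        auto stop = std::chrono::steady_clock::now();
        totalMs += std::chrono::duration<double, std::milli>(stop - start).count();
    }
    return totalMs / numLoops;
}
```

Calling averageSortMs(1000000, 20) from a small main in a native console project mirrors the C++ half of the benchmark above with no managed code in the process.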

https://www.neowin.net/forum/topic/1057418-c-vs-c-performance-sorting-an-array/

17 answers to this question


  • 0

I don't know for a fact, but I would guess it's not doing bounds checking in release mode. I would also guess that Microsoft would add a faster sort to .NET compared to what's in the standard C++ library.

  • 0
  On 11/02/2012 at 01:03, Xinok said:

I don't know for a fact, but I would guess it's not doing bounds checking in release mode.

It is, apparently:


cppArray[i] = randNum;
000000dd mov eax,dword ptr [ebp-8]
000000e0 mov edx,dword ptr [ebp-1Ch]
000000e3 mov ecx,dword ptr [ebp-24h]
000000e6 mov dword ptr [edx+eax*4],ecx
netArray[i] = randNum;
000000e9 mov eax,dword ptr [ebp-8]
000000ec mov edx,dword ptr [ebp-5Ch]
000000ef cmp eax,dword ptr [edx+4]
000000f2 jb 000000F9
000000f4 call 711471E8
000000f9 mov ecx,dword ptr [ebp-24h]
000000fc mov dword ptr [edx+eax*4+8],ecx

Pretty sure the cmp/jb/call there is a bounds check and a call to raise an exception if it fails.

  • 0
  On 11/02/2012 at 00:52, Dr_Asik said:

I'm posting this because I can't believe the results. In release mode, optimizing for speed, Array.Sort() takes 89.6% of the time std::sort() takes on my machine. std::sort is supposed to be faster, if only because C++ doesn't perform a bounds check on each random array access and .NET does.

The .NET code might be optimised for better CPU register usage, or it might use an optimised algorithm. It might also make more efficient use of memory, for example through consecutive (cache-friendly) memory accesses or alignment.

What about checking that the total size of the arrays in bytes is the same? On some compilers and CPUs an int is 32 bits, on others 64. Checking the element size in .NET and in C++ will tell you whether they use the same size of int.

.NET programs might use Windows hooks to allow more efficient processing. The C++ program might have to do everything itself.

  • 0

Uh... std::sort is supposed to do a merge sort (of some type), I think, according to the standards, while Array.Sort uses quicksort. std::sort is supposed to handle the worst case much faster, while quicksort is faster on average. The worst-case guarantee is to prevent attacks where an attacker hangs a system by sending a job that will take forever to sort. Use a real C++ quicksort if you want a fairer comparison... I don't think qsort is always an actual quicksort the way Array.Sort is.

sauce: http://msdn.microsof...y/6tf1f0bc.aspx

C++/CLI is really nice for interop, isn't it? It's amazing. But for the highest performance you need a vectorizing compiler like ICC.

  • 0
  Quote
Or it might have an optimised algorithm.
They both use QuickSort.
  On 11/02/2012 at 02:11, Ntrstd said:
What about checking that the total array bytes is the same? In some compilers and CPUs, an int is 32 bits, on others 64 bits. Checking the array size in .NET and C++ will tell if they are using the same size int.
They are both using System.Int32, which is guaranteed to be 4 bytes.
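[Editor's note: on the native side this check can even be enforced at compile time with a C++11 static_assert rather than inspected at runtime; a small sketch, not from the thread.]

```cpp
#include <cstdint>

// The build fails here if the native int doesn't match the 4-byte
// System::Int32 element type used for the managed array.
static_assert(sizeof(int) == sizeof(std::int32_t),
              "int is not 32 bits; the two arrays would differ in element size");
```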

I did some tests with various integral and floating-point types, and here are the results:

Byte:   C++ 29 ms   .NET 43 ms
Int32:  C++ 87 ms   .NET 78 ms
Int64:  C++ 132 ms  .NET 99 ms
Single: C++ 99 ms   .NET 193 ms
Double: C++ 101 ms  .NET 203 ms

Very interesting! C++ is strangely slow on Int32 and Int64, but roughly twice as fast as .NET on the floating-point types, and clearly faster on Byte. I took a quick look at VS2010's algorithm header and it apparently uses the same sort for all types, with no specialization for float or anything of the sort.

  Quote
.NET programs might use Windows hooks to allow more efficient processing.
Such as? I don't see what you could be referring to.
  • 0
  On 11/02/2012 at 02:25, a1ien said:

Uh... std::sort is supposed to do a merge sort (of some type) I think according to the standards while Array.Sort uses quicksort.

VS2010's algorithm.h:

template<class _RanIt,
    class _Diff> inline
void _Sort(_RanIt _First, _RanIt _Last, _Diff _Ideal)
{   // order [_First, _Last), using operator<
    _Diff _Count;
    for (; _ISORT_MAX < (_Count = _Last - _First) && 0 < _Ideal; )
    {   // divide and conquer by quicksort
        _STD pair<_RanIt, _RanIt> _Mid =
            _Unguarded_partition(_First, _Last);
        _Ideal /= 2, _Ideal += _Ideal / 2;  // allow 1.5 log2(N) divisions
        if (_Mid.first - _First < _Last - _Mid.second)
        {   // loop on second half
            _Sort(_First, _Mid.first, _Ideal);
            _First = _Mid.second;
        }
        else
        {   // loop on first half
            _Sort(_Mid.second, _Last, _Ideal);
            _Last = _Mid.first;
        }
    }
    if (_ISORT_MAX < _Count)
    {   // heap sort if too many divisions
        _STD make_heap(_First, _Last);
        _STD sort_heap(_First, _Last);
    }
    else if (1 < _Count)
        _Insertion_sort(_First, _Last); // small
}

Lovely, isn't it? :laugh: Anyway, it's basically quicksort, but it falls back to heap sort or insertion sort when it detects that quicksort would be suboptimal (quicksort's worst case is O(n^2)).

The C++ standard doesn't specify what algorithm std::sort should use, only that it should be O(n log n) on average.
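[Editor's note: to make the dispatch logic above concrete, here is a stripped-down introsort sketch — my paraphrase, not the MSVC code: quicksort while a depth budget of roughly 1.5·log2(N) lasts, heap sort once it runs out, insertion sort for small ranges. The cutoff constant and function names are invented for illustration.]

```cpp
#include <algorithm>
#include <vector>

using Iter = std::vector<int>::iterator;

// Insertion sort: the fastest choice once a sublist is small.
void insertionSort(Iter first, Iter last) {
    if (last - first < 2) return;
    for (Iter i = first + 1; i != last; ++i) {
        int key = *i;
        Iter j = i;
        while (j != first && *(j - 1) > key) { *j = *(j - 1); --j; }
        *j = key;
    }
}

// Quicksort with a depth budget; heap sort once the budget is spent.
void introSort(Iter first, Iter last, int depthBudget) {
    const int kSmall = 32;  // small-range cutoff, in the spirit of _ISORT_MAX
    while (last - first > kSmall && depthBudget > 0) {
        int pivot = *(first + (last - first) / 2);
        Iter mid = std::partition(first, last,
                                  [pivot](int x) { return x < pivot; });
        --depthBudget;
        introSort(mid, last, depthBudget);  // recurse on one half...
        last = mid;                         // ...and loop on the other
    }
    if (last - first > kSmall) {
        // Too many divisions: quicksort is degenerating, so fall back
        // to heap sort for a guaranteed O(n log n) finish.
        std::make_heap(first, last);
        std::sort_heap(first, last);
    } else {
        insertionSort(first, last);
    }
}
```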

  • 0

So .NET uses intro sort. If C++ does use merge sort, then it would do fewer comparisons on average. Comparing floats is more expensive, but that alone shouldn't make std::sort twice as fast. However, despite heap sort having an average runtime of O(n log n), it's much slower on average compared to merge sort or quick sort because it doesn't make efficient use of the CPU cache. So if Array.Sort() is falling back on heap sort, that could explain the slower benchmark.
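[Editor's note: the cache argument above is easy to probe in native code: sort the same data once with std::sort and once with an explicit heap sort (std::make_heap followed by std::sort_heap) and compare the times. A sketch; the helper names are my own.]

```cpp
#include <algorithm>
#include <chrono>
#include <vector>

// Milliseconds taken by `sorter` to sort a private copy of `data`.
template <typename Sorter>
double timeSortMs(std::vector<int> data, Sorter sorter) {
    auto start = std::chrono::steady_clock::now();
    sorter(data.begin(), data.end());
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

// std::sort with its default comparison.
double introSortMs(const std::vector<int>& data) {
    return timeSortMs(data, [](std::vector<int>::iterator f,
                               std::vector<int>::iterator l) { std::sort(f, l); });
}

// Explicit heap sort: heapify, then repeatedly move the max to the back.
double heapSortMs(const std::vector<int>& data) {
    return timeSortMs(data, [](std::vector<int>::iterator f,
                               std::vector<int>::iterator l) {
        std::make_heap(f, l);
        std::sort_heap(f, l);
    });
}
```

On a million random ints, heapSortMs typically comes out noticeably larger than introSortMs, consistent with the heap's cache-unfriendly access pattern.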

  • 0
  On 11/02/2012 at 02:25, a1ien said:

Uh... std::sort is supposed to do a merge sort (of some type) I think according to the standards while Array.Sort uses quicksort.

C++98 doesn't specify an algorithm. All it specifies is complexity: "Approximately N log N (where N == last - first) comparisons on the average" (section 25.3.1.1, ISO/IEC 14882:1998). Most C++ implementations of std::sort are quicksort variants, partly because quicksort can be done in place.

  Quote

C++\CLI is really nice for interop isn't it? It's amazing. But for the highest performance you need a vectorizing compiler like ICC.

Auto-vectorising won't help with sort. In general, it won't help with highly branchy, non-linear code. On the other hand, a compiler like ICC will do IPO and profile-guided optimisations, which are generally helpful.

  • 0
  On 11/02/2012 at 03:02, Xinok said:

So .NET uses intro sort. If C++ does use merge sort, then it would do fewer comparisons on average.

The code I posted was std::sort, so it's std::sort that uses introsort; all Microsoft says about Array.Sort is that it uses quicksort. It's not clear to me, though, in which cases std::sort falls back to heap sort:

if (_ISORT_MAX < _Count)
    { // heap sort if too many divisions

That's the relevant code, but I don't really understand the condition.

By the way, I've revised the code so it benchmarks a few different numerical types one after the other:

// Testing .NET's Array::Sort() vs C++'s std::sort on both languages' standard arrays.
#include "stdafx.h"
#include <array>
#include <algorithm>
using namespace System;
using namespace System::Diagnostics;

const int ARRAY_SIZE = 1000000;
const int NUM_LOOPS = 20;

void generateArrays(Byte* cppArray, array<Byte>^ netArray) {
    auto randGen = gcnew Random();
    randGen->NextBytes(netArray);
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        cppArray[i] = netArray[i];
    }
}

void generateArrays(Single* cppArray, array<Single>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = (Single)randGen->NextDouble();
        cppArray[i] = netArray[i] = randNum;
    }
}

void generateArrays(Double* cppArray, array<Double>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = randGen->NextDouble();
        cppArray[i] = netArray[i] = randNum;
    }
}

void generateArrays(Int32* cppArray, array<Int32>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = randGen->Next();
        cppArray[i] = netArray[i] = randNum;
    }
}

void generateArrays(Int64* cppArray, array<Int64>^ netArray) {
    auto randGen = gcnew Random();
    for (int i = 0; i < ARRAY_SIZE; ++i) {
        auto randNum = (Int64)randGen->Next();
        cppArray[i] = netArray[i] = randNum;
    }
}

template<typename T>
void benchmark() {
    auto cppArray = new T[ARRAY_SIZE];
    auto netArray = gcnew array<T>(ARRAY_SIZE);
    Double totalTimeCpp = 0.0;
    Double totalTimeNet = 0.0;

    Console::Write("Testing {0}", T::typeid);

    for (int i = 0; i < NUM_LOOPS; ++i) {
        generateArrays(cppArray, netArray);

        auto stopWatch = Stopwatch::StartNew();
        std::sort(cppArray, cppArray + ARRAY_SIZE);
        stopWatch->Stop();
        totalTimeCpp += (Double)stopWatch->ElapsedMilliseconds;

        stopWatch = Stopwatch::StartNew();
        Array::Sort(netArray);
        stopWatch->Stop();
        totalTimeNet += (Double)stopWatch->ElapsedMilliseconds;

        // progress indicator and sanity check
        Console::Write(cppArray[0] < netArray[ARRAY_SIZE - 1] ?
            "." :
            "wuuuuuuut?!");
    }

    Console::WriteLine(L"\nAverage time C++: {0} milliseconds.", totalTimeCpp / (double)NUM_LOOPS);
    Console::WriteLine(L"Average time .NET: {0} milliseconds.\n", totalTimeNet / (double)NUM_LOOPS);
}

int main() {
    benchmark<Byte>();
    benchmark<Int32>();
    benchmark<Int64>();
    benchmark<Single>();
    benchmark<Double>();

    Console::WriteLine("All done.");
    Console::ReadKey(false);
    return 0;
}

  • 0
  On 11/02/2012 at 03:34, Dr_Asik said:

The code I posted was std::sort, so it's std::sort that uses introsort; all Microsoft says about Array.Sort is that it uses quicksort. It's not clear to me, though, in which cases std::sort falls back to heap sort:

if (_ISORT_MAX < _Count)
    { // heap sort if too many divisions

That's the relevant code, but I don't really understand the condition.

That condition chooses between heap sort and insertion sort; _ISORT_MAX is a constant that ensures insertion sort is only used on small sublists.

The relevant condition is in the loop, specifically the _Ideal variable:

for (; _ISORT_MAX < (_Count = _Last - _First) && 0 < _Ideal; )
...
_Ideal /= 2, _Ideal += _Ideal / 2; // allow 1.5 log2(N) divisions

But this is all irrelevant; I didn't realize what you posted was the C++ code.

  • 0
  On 11/02/2012 at 02:48, Dr_Asik said:

> .NET programs might use Windows hooks to allow more efficient processing.

Such as? I don't see what you could be referring to.

It can depend on whether the compiler uses heap, stack, or virtual (disk) memory, and on whether the algorithm uses a loop or recursive function calls.

Linux has mmap() but Windows doesn't. Some programs might use HeapAlloc() (Windows), while others might use plain malloc(). I don't have any good examples, but here's a page which talks about different types of memory calls in Windows.

http://www.mofeel.net/1147-comp-os-ms-windows-programmer-win32/1326.aspx

  • 0

That shouldn't have any significant influence, because the algorithm makes at most one big memory allocation, and only if it's not in-place. It's not as if it were constantly allocating and freeing memory; if it did, it would be very slow anyway.

Thanks for the info though.

  • 0
  Quote

11 February 2012 - 01:52

No offense, Dr_Asik, but come on: it is Saturday (when you posted this)... I think this would be better suited to workdays, and done for some actual purpose, not just for fun.

Example: I'm currently watching videos that are supposed to prepare me for the CCENT exam to get my CCNA certification. I don't do that "just for fun".

I mean NO OFFENSE at all by this post, Dr_Asik. Everyone is free to do whatever they want with their time and life. Just my opinion :)

On topic: it is surprising (if I read it correctly) that the C# implementation is faster than the C++ one. If incorrect, ignore me.

  • 0

Don't you think you might be overdoing it with the C++11 type inference?

On the sort itself. A few notes:

1. Have you tried providing your own comparison function (e.g. one that just does return x < y;)?

2. It would be interesting to compare the results against C's qsort().

3. Have you tried using another compiler/STL?
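[Editor's note: points 1 and 2 can be combined in one small native test. Worth noting: qsort's comparator is called through a function pointer, which typically can't be inlined, while the functor passed to std::sort can be. A sketch; the wrapper names are my own.]

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// C-style comparator for qsort: invoked through a function pointer,
// which generally blocks inlining of the comparison.
int compareInts(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);  // avoids the overflow that `x - y` could cause
}

// Sorts a copy with std::sort and an explicit (inlinable) comparison.
std::vector<int> sortWithStdSort(std::vector<int> v) {
    std::sort(v.begin(), v.end(), [](int x, int y) { return x < y; });
    return v;
}

// Sorts a copy with C's qsort, for comparison.
std::vector<int> sortWithQsort(std::vector<int> v) {
    std::qsort(v.data(), v.size(), sizeof(int), compareInts);
    return v;
}
```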

  • 0
  On 04/03/2012 at 12:34, htcz said:

No offense, Dr_Asik, but come on: it is Saturday (when you posted this)... I think this would be better suited to workdays, and done for some actual purpose, not just for fun.

Most code I'm proud of having written was written for fun.
  Quote
Don't you think you might be overdoing it with the C++11 type inference?
No... it's a compile-time feature and has no bearing on execution. It makes the code cleaner by eliminating redundancy. C# (a similar language) has had type inference since VS2008, and I believe it should be used whenever possible, with a few exceptions. I don't want to turn this into a debate on type inference.
  Quote
1. Have you tried providing your own comparison function? return x < y;

2. It would be interesting to compare the results against C's qsort()

All the code's there; just create a C++/CLI project, copy-paste, and modify at will.
  Quote
3. Have you tried using another compiler/STL?
I'd have to change the approach because only MSVC supports C++/CLI.
  Quote
Also try a different compiler than Visual Studio (MingW using Code::Blocks for example).
Same remark as above.
  Quote
Test with 100 million at least. 1 million is nothing for computers these days. (No, not kidding.)
Results are consistent with what I presented here even with much larger array sizes. Besides, 100-200 ms is not an insignificant amount of time for a CPU-bound operation.
This topic is now closed to further replies.