• 0

[C, C++] Subtle, obscure differences


Question

Sometimes when I am doing low-level development I run into interesting incompatibilities between C and C++. The type of thing you may have known at one time, but that you eventually forget unless you know the specification of both languages inside-out.

 

One such case happened to me today with code similar to the following:

// foo.h:
int RAM[10000];

// foo.c:
#include "foo.h"
int main() {
    return (unsigned long long)RAM;
}

// bar.c
#include "foo.h"
int bar() {
    return (unsigned long long)RAM;
}

This particular code is accepted in C but not in C++ if you link both foo.c and bar.c into the same binary. Why? Because there is a subtle difference in how C and C++ treat uninitialized global symbols (RAM[] in this case). In C, the tentative definitions are merged into one and become a single symbol (one instance of the variable), traditionally emitted using common linkage (see here). In C++, however, there is no such thing as common linkage: if you define the same variable in two translation units, regardless of the circumstances, you get two separate definitions that conflict. (Note that newer GCC and Clang releases default to -fno-common, so they reject this even in C unless you compile with -fcommon.) So in the C++ case you will see the following error:

/tmp/ccK4Aa4B.o:(.bss+0x0): multiple definition of `RAM'
/tmp/cc8u2dAO.o:(.bss+0x0): first defined here
collect2: error: ld returned 1 exit status

Of course, you can get around the limitation in C++ by doing the following instead:

// foo.h:
extern int RAM[10000]; //modified

// foo.c:
#include "foo.h"
int RAM[10000]; //added
int main() {
    return (unsigned long long)RAM;
}

// bar.c
#include "foo.h"
int bar() {
    return (unsigned long long)RAM;
}

The interesting part is some of the implications for C programmers. Suppose, for example, you borrowed a database implementation that relied on common linkage, and you happened to reuse one of its global names in your own code. You would have a subtle, silent bug on your hands: the variable would be shared between your code and the database, and you might never know! Here's another example that shows some of the interesting things that happen in these cases:

// baz.c:
#include <stdio.h>
int test; //4 byte declaration.
unsigned long long qux();

int main()
{
    printf("wrong size returned in baz: %zu\n", sizeof(test)); //Oops!
    qux();
    printf("set value in baz: %d\n", test);
    return test;
}


// qux.c:
#include <stdio.h>
unsigned long long test; //8 byte declaration.
unsigned long long test; //you can redeclare without error.


unsigned long long qux() {
    test = 0xFF;
    printf("correct size in qux: %zu\n", sizeof(test));
    printf("set value in qux: %llu\n", test);
    return test;
}

Output:

wrong size returned in baz: 4
correct size in qux: 8
set value in qux: 255
set value in baz: 255

There are a few interesting things to note: (1) the variable is declared twice in qux.c without error, (2) it is declared once with a different-sized type in baz.c, and (3) the two have actually been merged into a single eight-byte object. Yet the wrong size is printed in baz.c and the correct size only in qux.c, so the merge is silent even with incompatible types. The final thing to note is that even with verbose compiler warnings enabled you still won't see this (the GNU linker's --warn-common option can flag it, but it is off by default).

  • Like 3

Recommended Posts

  • 0

I wouldn't use them for a shared cache either. I'd create a module to manage it and create accessor functions.

 

Err, that doesn't make sense. You still need a global way to reference the cache object (which I assume is what you mean by "module" in this context?).

 

 

I'm aware how inclusion guards work. I'm talking about macro inclusion guards, not Microsoft compiler-specific directives. It wouldn't solve his C++ problem, but it should be standard behaviour when including header files. I also stated earlier that the commonly included header file should declare the variable (extern), not define it. I would do this in C regardless of whether the compiler complains because it's simply good practice.

I still haven't heard a good reason for his global variable though.

 

Umm, "pragma once" is not a Microsoft compiler specific directive. It's non-standard, but everyone supports it. It works exactly the same way as the macro method. There is no reason you would be talking about one and not the other here.

 

Yes, declaring it as extern then defining and initializing it in one place is a standard practice, but it doesn't account for every situation. Particularly things like template libraries, or other cases where you don't control whether two instances are going to end up being linked together. Hence the existence of selectany.

 

His global seems hypothetical. There are several cases where such a thing is useful.


  • 0

Conditionals, Loops, Operators, Functions, standard integral datatypes, complex datatypes, and almost identical syntax. I'd say C and other high level languages are very similar.

Most of these actually exist in common assembly languages, so if those are your examples of "high-level" syntactic constructs then I think I've made my point. :)
 
I think the complexity of modern C compilers and how much optimisation they do attests to the fact that they are very abstract.
But compilers for high-level languages have also evolved, and they can do mind-bogglingly sophisticated optimizations that dwarf anything a C compiler could do, simply because a program written in a high-level language contains a lot more information about what you actually want to do and less about how you want to implement it. See for instance Stream Fusion for Haskell. So C remains relatively low-level, and becomes increasingly more so as common languages like C++, Java and C# evolve to include higher-level constructs such as asynchronous workflows and closures.
 
Just to illustrate how far C is from modern high-level languages, try to implement this in C:
 
async void ThisIsModernCSharpCode() {
     Console.WriteLine("Hello...");
     await Task.Delay(1000);
     Console.WriteLine("World!");
}

I don't even want to think about how much code you'd have to write to properly emulate this. This gets translated into a state machine where individual parts of the workflow are compiled as objects representing continuations that are queued on a sophisticated thread pool (or not, depending on the current synchronisation context). Good luck!
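For contrast, about the closest cheap approximation in plain C is something like this (a sketch using POSIX threads; the names are made up). It ties up an entire OS thread instead of yielding a continuation, which is rather the point:

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

// A crude stand-in for the C# example: run the "workflow" on a worker
// thread. There is no notion of resuming on the calling (UI) context,
// no cancellation, and no composition with other async work.
static void *this_is_not_modern_c_code(void *arg) {
    (void)arg;
    printf("Hello...\n");
    sleep(1);                       // stands in for Task.Delay(1000)
    printf("World!\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, this_is_not_modern_c_code, NULL);
    // the caller could do other work here while the thread runs
    pthread_join(t, NULL);
    return 0;
}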

 

OO isn't a prerequisite for high level programming, but C is quite capable via libraries like GLib. Functional and parallel programming are domain specific.

Functional programming is specific to which domain??? How come there are general-purpose functional languages then? How come most general-purpose languages today, including C++, Java and C#, support functional programming to a certain extent?

 

Parallel and concurrent programming is everywhere now that every computing device is a parallel computer (if not a massively parallel one, e.g. GPUs). C is just designed around an antiquated architecture.


  • 0

So basically embedded development :D You're still calling into a microkernel or whatever it's running. The C language itself isn't low level, that was my point. Assembly most definitely is, because each mnemonic corresponds to a CPU instruction (usually).

 

That's not to disparage what you're doing. I'm a fan of C myself. I just think a clear line must be drawn between assembly and C in terms of low-levelness. C has the same constructs as other high level languages like Java, C#, Vala, Go, Perl, etc. I just don't see why it should be labelled as low level because it uses memory addresses (pointers) and has no garbage collection.

 

 

The runtime IS the kernel running bare-metal on the hardware. There's nothing else lower besides boot-strapping... i.e. the runtime has direct access to and control of the hardware (there's absolutely no notion of Ring 0, 1, ...). The distinction between the terms "runtime" and "OS" here is that we don't use OS because it implies a ton of baggage and high-level features that simply don't exist here. That is to say it isn't an RT in the sense of something like c++ runtime, .net runtime, etc.

 

This isn't embedded development; it's HPC-related, so it doesn't fall under the embedded sphere or many of the implications that go with it. E.g. we aren't talking about something with limited resources or computational power. Quite the opposite, in fact.

 

In any case, I said I was doing "low-level development" in my initial post. It was never meant as a statement about C or C++ itself. It was meant in the context of the work I'm actually doing. Arguing whether C in itself is low-level is just an argument that falls into semantic discord. So really, what's the point of going back and forth about that? I certainly don't care about it, and it's not related to what I meant in the least.

 

Then why use one at all? Globals lead to messy and unmaintainable code.

Instead, why not pass a value to a function in database2.c. I do that 99% of the time. Have each module manage its own internal data. It provides the same encapsulation as a class in C++ and prevents ownership issues.

The database was just used as a convenient example; don't read too much into why you would do this in a database. At that point, you are just arguing about the usefulness of the feature in C at all. There are good uses (some have already been highlighted in this thread) for globals. I'll offer one myself: they provide a great way to allocate static areas of memory at the level I'm working at. For example, if we assume that in addition to the RT (note that the RT has many instances), we have application code that can be distributed and run across computation units, we have some nice guarantees about the locations of memory we can read and write to remotely. Similarly, the runtime code itself has the same guarantees. To be a bit more specific, I could use those guarantees to create global mailboxes to implement communication constructs between the RT and application code running on different sections of the system. How would you do this without globals? You'd have to section off the memory segments by hand and do the entire thing manually and tediously.
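To make that concrete, here is a minimal sketch of the kind of global mailbox I have in mind (the names and layout are hypothetical, not the runtime's actual structures):

// mailbox.h (hypothetical)
#include <stdint.h>

#define MAILBOX_SLOTS 64

typedef struct {
    volatile uint64_t head;                 // written by the remote producer
    volatile uint64_t tail;                 // written by the local consumer
    volatile uint64_t slots[MAILBOX_SLOTS]; // message words / handles
} mailbox_t;

// One global instance per node image. Because it is a global, it sits at
// a link-time-known offset in every node's memory segment, so a remote
// node can address it directly with memory-mapped writes.
mailbox_t rt_mailbox;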

 

Another reason is when you don't want to over-engineer your system. Why, for example, would you create localized state for something that is supposed to be globally accessible in a simulation framework or in the architecture you are simulating? All that does is abstract the implementation away from how the actual architecture will behave and arguably obfuscates implementation behavior. By the same token, it needlessly introduces overhead that you don't want or need. It makes you suddenly have to manually pass around the baggage of state everywhere instead of using centralized state with direct access. And, yes, performance is important -- we are running a simulation and the performance in terms of MIPS is extremely important.

 

The MS linker supports COMDAT folding via the selectany directive. I thought GCC had picked that up too but I'm not sure.

That's for code folding and not variable folding, correct? In any case, wouldn't it be in the linker domain and not the compiler (binutils, gold)? The toolchain uses LLVM-->binutils.

 

For a start, you should be using inclusion guards in the header file so it won't get defined twice. Secondly, it makes logical sense that only one symbol exists. The compiler's associative map should see that the symbol is already defined. I wouldn't do it that way anyway. I'd create a common header file with a declaration (extern) in it.

Firstly, Brandon said why this doesn't work: separate compilation units. Secondly, even if it were in the compiler domain, what you are saying doesn't make sense in C++ (in general) or for variables in C that are assigned initial values (int a = ...;). These cases should not merge symbols.
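For instance, a pair like the following fails to link in both languages, because initialized definitions are never merged (a made-up two-file example in the style of the ones above):

// a.c
int counter = 1;

// b.c
int counter = 2; // linker error: multiple definition of `counter'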

 

In general, that is a better model. But there are perfectly good uses for globals (particularly for something like a globally shared cache, for example). They also help save space for large constants like CLSIDs/IIDs and other GUIDs.

In my case, I'm dropping in a simulated variable that serves as a global pool of memory that should be accessible anywhere (it's simulated physical RAM). Why? Because if I move the work I'm currently doing back into the simulation framework (which I mentioned above), the code would be directly addressing, reserving, and chunking the address space. It certainly doesn't make sense to encapsulate and abstract such things when that doesn't fit with how it will be handled or accessed at the architecture level.

 

That's not how inclusion guards work. They (i.e. "pragma once") prevent a header from being included multiple times in a given compilation unit. But database1.c and database2.c are clearly separate compilation units. That's why this is a linker issue, not a compiler issue. You cannot solve it via compilation directives and inclusion guards.

Thank you, this is exactly it! In any case, I didn't use inclusion guards in the code above because they weren't relevant to my examples.

 

I'm aware how inclusion guards work. I'm talking about macro inclusion guards, not Microsoft compiler-specific directives. It wouldn't solve his C++ problem, but it should be standard behaviour when including header files. I also stated earlier that the commonly included header file should declare the variable (extern), not define it. I would do this in C regardless of whether the compiler complains because it's simply good practice.

Sure, they are standard -- but they are definitely not relevant to this discussion and would just needlessly convolute the example code I wrote above. They are of course used in my actual code, but that's certainly no reason to drop them into hypotheticals where being concise and to the point is important. I'm not sure why you brought them up given that.


  • 0

That's like saying assembly and C++ are very similar because they both support comparisons, jumps/gotos, loops, and so on.

Reductio ad absurdum. You know as well as I do that syntactically those constructs are almost identical between C, C++, Java, C#, Go, Vala, etc. Assembly syntax doesn't even resemble high level languages, and that's what we're talking about here.

 

C has syntactical similarities to the plethora of C-derived high level languages. But that's a tautology.

Even non-C-derived languages have similar constructs, both syntactically and functionally. It's clear for anyone to see, even a layman, that C and C++ share a lot of commonality with other high level languages, even non-C-derived ones, so no, it's not a tautology.

 

That's not correct. Structs in C are not objects and are nothing like (non-POD) C++ classes and structs. They aren't templates for any definition of that word. They're type definitions, yes, but they map directly to structs in ASM. C structs *ARE* just a map of a memory layout. C99 explicitly requires that fields are ordered as declared (and that's always been the case as far as I know). Yes, fields are padded to maintain alignment as necessary, though you can control this. In general, C (and C++) programmers will pay attention to the layouts of their structs to ensure efficient packing (often as simple as grouping fields of same size type together).

A structure definition is a template from which instances or objects of said structure are derived. It's not necessary for a C programmer to know the memory layout of a structure to make use of one. In fact, unless the compiler is given specific instructions, no guarantees can be made about the layout of it. I only pack my structures in very specific cases.

 

I think you've inverted this. In simple cases, yes, but in general a C developer needs to be conscious of the size of struct fields and the struct's layout.

The size should be governed by what it's designed to hold. Unless the programmer is packing a structure for direct memcpy's or very specific environments, it's unimportant how the compiler arranges it in memory. Personally, I follow the standard and don't make assumptions about specific compilers.

 

The fact that libraries can make C scale and be treated as a somewhat higher level language, does not change the fact that C itself is a relatively low-level one.

Most of the functionality of so called higher level languages is contained in libraries. For example, Java's class library, or C#'s class library. Even C's standard library is just another separate API. It's no different than linking to a third party library.

  • 0

A structure definition is a template from which instances or objects of said structure are derived. It's not necessary for a C programmer to know the memory layout of a structure to make use of one. In fact, unless the compiler is given specific instructions, no guarantees can be made about the layout of it. I only pack my structures in very specific cases.

This is incorrect. The specifications are very specific about the layout of structures. If they weren't, you would not be able to use generic library structs defined in headers. Why? Because the emitted accessor code would be incompatible between compilers or compiler revisions!

 

Sure, the layouts aren't compatible between different architectures, but on a given architecture you can assume a given alignment and order for the elements unless you force something non-standard (GCC's packing, for example, will pack the struct and use non-optimal accesses). These things are all you need to determine which elements are where.

 

EDIT: What's more, even in the case of packing you still have guarantees on the ordering of elements, and if you know the architecture's alignment rules you can determine the location of each piece of data even then.
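A quick sketch of that point (the exact numbers depend on the target ABI; the packed attribute is the GCC/Clang extension mentioned above):

#include <stdio.h>
#include <stddef.h>

// Same fields, with and without GCC-style packing. On a typical 64-bit
// ABI the int is padded to offset 4 in the first struct and sits at
// offset 1 in the second (at the cost of unaligned accesses).
struct with_padding { char c; int i; };
struct __attribute__((packed)) packed_layout { char c; int i; };

int main(void) {
    printf("with_padding: size=%zu, offset of i=%zu\n",
           sizeof(struct with_padding), offsetof(struct with_padding, i));
    printf("packed_layout: size=%zu, offset of i=%zu\n",
           sizeof(struct packed_layout), offsetof(struct packed_layout, i));
    return 0;
}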


  • 0

the work I'm doing is related to runtime development in the context of a future exascale architecture (hence the lack of C++ support in said architecture). Runtime here means something that is running bare-bones on the system (it replaces OS functionality). You can think of it as an OS that doesn't do time sharing, without preemption, without protection, and that lacks standard library support (glibc, newlib, etc.).

That sounds awesomely interesting btw :P


  • 0

That's for code folding and not variable folding, correct? In any case, wouldn't it be in the linker domain and not the compiler (binutils, gold)? The toolchain uses LLVM-->binutils.

 

selectany tells the linker to pick the first definition it comes across and ignore the rest. I said linker. When I mentioned GCC I meant the linker in the associated toolchain (I never use gcc so couldn't remember it offhand). It does require that you're also initializing the variable, though.

 

http://msdn.microsoft.com/en-us/library/5tkz6s71.aspx
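For reference, usage looks roughly like this (MSVC-specific, and it does require an initializer; the variable name is just an example). GCC and Clang can get a similar one-definition-wins effect on their targets with weak symbols:

// shared.h (MSVC)
// Every translation unit that includes this gets a definition;
// the linker keeps the first one and discards the rest.
__declspec(selectany) int shared_table[256] = { 0 };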


  • 0

In any case, I said I was doing "low-level development" in my initial post. It was never meant as a statement about C or C++ itself. It was meant in the context of the work I'm actually doing. Arguing whether C in itself is low-level is just an argument that falls into semantic discord.

Fair enough :)

 

There are good uses (some have already been highlighted in this thread) for globals. I'll offer one myself: they provide a great way to allocate static areas of memory at the level I'm working at. For example, if we assume that in addition to the RT (note that the RT has many instances), we have application code that can be distributed and run across computation units, we have some nice guarantees about the locations of memory we can read and write to remotely. Similarly, the runtime code itself has the same guarantees. To be a bit more specific, I could use those guarantees to create global mailboxes to implement communication constructs between the RT and application code running on different sections of the system. How would you do this without globals? You'd have to section off the memory segments by hand and do the entire thing manually and tediously.

Without knowing the details and only what you've told me, I'd create a memory manager which could allocate and reclaim blocks for clients on demand. A centralised management of that hardware memory is essential if the code is distributed and many instances of it are running. Unless of course your runtime already does that for you?

  • 0

Reductio ad absurdum. You know as well as I do that syntactically those constructs are almost identical between C, C++, Java, C#, Go, Vala, etc. Assembly syntax doesn't even resemble high level languages, and that's what we're talking about here.

There are many assembly languages. They don't always resemble each other, either.

Functionally, C has direct mappings to most common ASM functionality. Higher level languages like C# and Java absolutely do NOT.

 

Even non-C-derived languages have similar constructs, both syntactically and functionally. It's clear for anyone to see, even a layman, that C and C++ share a lot of commonality with other high level languages, even non-C-derived ones, so no, it's not a tautology.

What? This is completely false. Lisp looks nothing like C (syntactically or functionally). It is a tautology that C-derived languages resemble C to some degree.

 

A structure definition is a template from which instances or objects of said structure are derived. It's not necessary for a C programmer to know the memory layout of a structure to make use of one. In fact, unless the compiler is given specific instructions, no guarantees can be made about the layout of it. I only pack my structures in very specific cases.

A structure definition is a map that can be applied to a chunk of memory. You can create an object using structA and then define structB as a subset (removing fields from the end) and still use it to access the remaining fields of the memory you initialized using structA. The compiler *does* make certain guarantees about the layouts of structs. It makes more guarantees if you use bitfields, or if you use non-standard or newer C++11 mechanisms to control it.

Creating an "instance" of a struct just tells the compiler to allocate enough memory to fit the fields defined in the struct. It doesn't populate any values, or do anything else that would warrant the use of the word "template."

 

The size should be governed by what it's designed to hold. Unless the programmer is packing a structure for direct memcpy's or very specific environments, it's unimportant how the compiler arranges it in memory. Personally, I follow the standard and don't make assumptions about specific compilers.

It has nothing to do with specific environments. Inefficient packing is inefficient. If you create many instances, you waste memory. If you copy many instances, you also waste CPU time and bandwidth.
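To put a rough number on it, a sketch (sizes assume a typical 64-bit ABI and will vary by target):

#include <stdio.h>

// The same three fields in two different orders; the first layout
// usually needs 24 bytes, the second 16, purely because of padding.
struct wasteful { char a; double b; char c; };
struct compact  { double b; char a; char c; };

int main(void) {
    printf("wasteful: %zu bytes, compact: %zu bytes\n",
           sizeof(struct wasteful), sizeof(struct compact));
    return 0;
}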

 

Most of the functionality of so called higher level languages is contained in libraries. For example, Java's class library, or C#'s class library. Even C's standard library is just another separate API. It's no different than linking to a third party library.

We're talking about whether the language is high-level or low-level. Libraries are irrelevant. Languages like C# and Java do not allow access "to the metal" in the way that C and even C++ do. A library can perhaps make something emulate a higher level language, but it's unlikely to give it lower-level capabilities.


  • 0

Just to illustrate how far C has got from modern high-level languages, try to implement this in C:

async void ThisIsModernCSharpCode() {
     Console.WriteLine("Hello...");
     await Task.Delay(1000);
     Console.WriteLine("World!");
}
I don't even want to think about how much code you'd have to write to properly emulate this. This gets translated into a state machine where individual parts of the workflow are compiled as objects representing continuations that are queued on a sophisticated thread pool (or not, depending on the current synchronisation context). Good luck!
Sounds fancy, but I don't see the point of it. Show me a real world application of it and what it does, then perhaps I'll applaud :D

  • 0

Sounds fancy, but I don't see the point of it. Show me a real world application of it and what it does, then perhaps I'll applaud :D

async void OnDownloadButtonClicked(string url) {
    try {
        Display("Downloading...");
        var data = await GetALargeAmountOfDataFromASlowWebServer(url);
        if (m_userCancelled) {
            Display("Cancelled");
            return;
        }
        Display("Parsing...");
        var stats = await ExpensiveStatsComputation(data);
        PresentStats(stats);
    }
    catch (Exception e) {
        Display("An error occured:" + e.Message);
    }
}

This gets called on the UI thread. It does not block the UI thread while the long-running operations execute, keeping the UI responsive. It does not block any thread waiting for a response from the server, relying on I/O completion ports instead. It reports progress at different steps of the operation, jumping back and forth between the UI thread and the thread pool. It supports cancellation midway through the process. It handles any errors that may happen in any of these asynchronous operations gracefully. It could be called several times in a row and run the whole workflow several times concurrently, automatically scaling to make the best use of the available CPU cores. While the operation is in progress, the user can begin other work that will run concurrently.

 

Most importantly, it's obviously correct and simple to understand. It reads just like sequential code, except it isn't.

 

Notice the lack of any calls to any particular threading library. This is all supported at the language level. Coding this in C would be a nightmare.


  • 0

That sounds awesomely interesting btw :p

Yeah, and horrible to code for  :rofl:. Believe you me, this would be really bad if it was exposed to users.

 

selectany tells the linker to pick the first definition it comes across and ignore the rest. I said linker. When I mentioned GCC I meant the linker in the associated toolchain (I never use gcc so couldn't remember it offhand). It does require that you're also initializing the variable, though.

 

http://msdn.microsoft.com/en-us/library/5tkz6s71.aspx

I see, it's binutils for GCC under normal circumstances. LLVM uses either binutils or gold. I was wondering why you were saying GCC when you had made the distinction about the .comm sections being in the linker domain. But, you know, it turns out it may only require compiler support. Based on your link, it looks like it's just a language directive and possibly implementable without linker modifications.

 

 

Also, it appears that selectany works for merging initialized data as well as code between compilation units (the latter is what I thought it was strictly limited to before). I do see where you were going with this now: it is in effect common linkage for initialized variables. What's interesting is that it can't be applied to the only case where C actually allowed common linkage (declarations that don't do initialization).

 

Without knowing the details and only what you've told me, I'd create a memory manager which could allocate and reclaim blocks for clients on demand. A centralised management of that hardware memory is essential if the code is distributed and many instances of it are running. Unless of course your runtime already does that for you?

Centralized management of memory is a terrible idea in terms of performance (remember, this is HPC). Also, in this particular case we have local segments of memory where you drop the RT binaries (segments that are remotely addressable using memory-mapped I/O). In any case, the issue is how you establish a communication line to begin with. I'm not talking about something that already has existing communication between cores, or even within a single core. You can think of it like this: everyone has just booted up; they have an address space they can read/write to; they know their relative position in the system. Now go and communicate, under the assumption that there are thousands of them. The first step is establishing a communication channel. After that, you could do what you are saying if you really wanted to, but not before.


  • 0

Yeah, and horrible to code for  :rofl:. Believe you me, this would be really bad if it was exposed to users.

I prefer to code in the highest-level language I can (I favor F# these days), but I love the low-level stuff as well. I learned SSE intrinsics and wrote some optimized color-space conversion routines last year. It's wrapped in a C++/CLI layer and gets conveniently used from .NET, and it's still faster than ffmpeg's :p If I wasn't so enamored with .NET, I'd probably be doing HPC stuff like you.


  • 0

We're talking about whether the language is high-level or low-level. Libraries are irrelevant.

They are relevant. Many people are claiming that C is low level because it doesn't support certain features out of the box. I'm saying that those features are available via third party libraries.

Languages like C# and Java do not allow access "to the metal" in the way that C and even C++ do.

So you're suggesting that because C and C++ have a different memory management model than C# and Java, that they are low level programming languages?

  • 0

Many people are claiming that C is low level because it doesn't support certain features out of the box. I'm saying that those features are available via third party libraries.

you can also call third party libraries from asm.


  • 0

I prefer to code in the highest-level language I can (I favor F# these days), but I love the low-level stuff as well. I learned SSE intrinsics and wrote some optimized color-space conversion routines last year. It's wrapped in a C++/CLI layer and gets conveniently used from .NET, and it's still faster than ffmpeg's :p If I wasn't so enamored with .NET, I'd probably be doing HPC stuff like you.

Well, as they say do the parts of the code where speed matters at the low level and everything else at the high level. I've never really focused on .net myself. The majority of the work I've done in the past decade has been in Linux so besides Mono there is little opportunity. I've been saying, but at times the best you get is C. No-one likes doing anything more because higher level languages like C++/C#/Anything with Objects require support in the ABI.

 

I've been pretty consistently at the C/C++ level for the past 10 years. I've briefly used Java, and I'll prototype or write post-processing routines in Python (who does that in C..). I do tend to like Python just because it takes so little effort, compared to everything else, to get something working. I'll take any opportunity to use the least-effort option when I can.

 

HPC is really an interesting field to be in just because where you are working is so vastly different, language-wise, from what most programmers are doing. It also lends itself to moving into embedded development.

 

 

They are relevant. Many people are claiming that C is low level because it doesn't support certain features out of the box. I'm saying that those features are available via third party libraries.

Lack of standard library functionality is a valid point for C... but it'd be a valid argument for ASM also. This was brought up in a prior thread where you made much the same argument as you are making here, but interfacing with and using libraries is not as simple or easy as you are making it out to be. You act as if you can just willy-nilly find and drop in such things without upkeep or effort. I dunno about you, but in almost all the cases I've dealt with, it is a process in itself. It's for that reason that I'm temporarily switching to C++ for my prototyping. I'm not going to spend five years creating or trying to find replacements for built-ins that I'll have to bundle and manage myself.

 

That said, there are plenty of other things that make C lower level than the other languages you are comparing it to. 

 

So you're suggesting that because C and C++ have a different memory management model than C# and Java, that they are low level programming languages?

Pointer arithmetic, direct access to memory, direct management of resources via memory-mapped I/O, and the ability to do inline assembly definitely put C/C++ on a lower level than C# and Java. Some nice examples of the differences are things like accessing PMU counters, accessing devices directly (HDDs, cameras, USB, etc.), accessing MSRs, and CPU scheduling. You can't do these things in C#/Java without the help of native code. Can you write a driver in .NET or Java, for example? Conceptually, those languages don't even present you with a view of your actual hardware: what you get is an abstract machine they are "running on" from their point of view.
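As a concrete sketch of that last point (GCC/Clang inline-assembly syntax, x86-64 only; the routine is hypothetical), reading the CPU's timestamp counter is a few lines of C, while C# or Java would need native interop:

#include <stdint.h>
#include <stdio.h>

// Read the x86 time-stamp counter directly via inline assembly.
static inline uint64_t read_tsc(void) {
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    uint64_t start = read_tsc();
    /* ... the work being measured ... */
    uint64_t end = read_tsc();
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}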

 

 

you can also call third party libraries from asm.

Funny, I recall making the very same point to him in another thread where he presented a similar argument about C to the one he is making here. The difference is that there the argument began as "C is a good beginner language". It got into the case of libraries exactly like it has here, and so I made the point that if third-party libraries are your argument for why C is no different from other languages in terms of libraries, you can make a similar argument for ASM.

