kliu0x52, on Jun 9 2009, 21:36, said:
It teaches the underpinnings of OO. He's already done OO in Java. C teaches you how OO is implemented. Back when I tutored people and had to explain to them things like static and virtual methods, I often found that the best explanations, the ones that elicited a "oh, I get it now, that makes so much more sense than how I was taught" was when I broke things down and explained how OO was implemented and how that affects why some things in OO are the way that they are.
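(As an aside, to make the kind of breakdown described above concrete: here is a minimal sketch of virtual dispatch hand-rolled in C, with a struct of function pointers standing in for the vtable. The shape/circle example is invented for illustration, not taken from the discussion.)

    #include <stdio.h>

    struct shape;  /* forward declaration so the vtable can refer to it */

    /* The vtable: one function pointer per virtual method. */
    struct shape_vtable {
        double (*area)(const struct shape *self);
    };

    /* The "base class" carries only a pointer to its vtable. */
    struct shape {
        const struct shape_vtable *vtable;
    };

    /* A "derived class" embeds the base as its first member. */
    struct circle {
        struct shape base;
        double radius;
    };

    static double circle_area(const struct shape *self)
    {
        /* Valid because a circle begins with its shape base. */
        const struct circle *c = (const struct circle *)self;
        return 3.141592653589793 * c->radius * c->radius;
    }

    static const struct shape_vtable circle_vtable = { circle_area };

    int main(void)
    {
        struct circle c = { { &circle_vtable }, 2.0 };
        struct shape *s = (struct shape *)&c;

        /* A "virtual call" is just an indirect call through the table. */
        printf("area = %f\n", s->vtable->area(s));
        return 0;
    }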
First, you overestimate his skills in object-oriented programming. He took a high-school class in Java six years ago. It takes much more than a single introductory programming course to learn anything about OOP, even in a pure OO language like Java; moreover, it was a high-school-level course, and he's likely forgotten most of it.
As a result, learning C won't help him understand the underpinnings of OOP, because he barely has any notion of OOP to begin with. Rather than deepening his understanding of high-level languages he already knows, it will teach him to do things in a simple procedural way, making the transition to OOP less natural. I think it's better to get a very good feel for OOP as soon as possible, because good habits are essential and bad habits are hard to get rid of. Hence my recommendation of C#. Or even C++, for that matter (even though it sucks as an OO language).
It's the difference between telling someone that they can use a fire extinguisher to put out a fire and explaining what goes on chemically when something burns and how, as a result, you can stop a fire by depriving it of one of the reactants. The latter makes the former more understandable and also opens up a wider perspective (e.g., "so that means I can also stop a fire by dumping sand on it, since that too will choke out the oxygen, right?").
Sure. But it's better to teach them how to use a fire extinguisher first, so that if a fire starts before the course is over, at least they'll be able to do something about it. Knowing how to use an extinguisher, they can also start practicing other fire-fighting skills earlier: safely evacuating the building, matching the right extinguisher to the type of fire, and so on. With a higher-level language, you not only become productive faster, you also start learning other important programming skills earlier: reusing well-tested code instead of rolling your own, decoupling components with interfaces and events, writing exception-safe code, etc. It's not wasted time.
You don't have to use the preprocessor that much, and C does not encourage the use of global variables. The so-called "abuse" of casting is just a way of revealing How Things Really Work. And I don't see how memcpy is abusive.
How things really work is that there are no types, no variables, no decimal values, no characters, no internet connections. There are just bits and buses. A C variable is an abstraction over a reserved area of memory; a C for loop is syntactic sugar for a jump and a label in assembly. It's nice to know, but you rarely need to know what's happening at that level, and the same applies to C# versus C. Sure, it's nice to know what a virtual method really is, but you rarely need to care. It's fairly advanced stuff, and a beginner doesn't need it right off the bat; it can be overwhelming. I think it's much more encouraging if a beginner can quickly see results with a few lines of code and build lots of cool applications; then, if he wants to take it to the next level, he can look under the hood and learn to write better code.
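To make the loop point concrete, a C for loop compiles down to roughly the conditional jump and label you could write yourself with goto (a toy example of my own):

    #include <stdio.h>

    int main(void)
    {
        /* The high-level form... */
        for (int i = 0; i < 3; i++)
            printf("%d\n", i);

        /* ...and roughly what the compiler lowers it to: a label and
           a conditional jump, which is about all the hardware offers. */
        int j = 0;
    loop:
        if (j < 3) {
            printf("%d\n", j);
            j++;
            goto loop;
        }
        return 0;
    }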
Also, you shouldn't be thinking about your design at such a low level. You should design at the highest level possible (interfaces, objects, packages, etc.), and the highest level possible in C is depressingly low. There should be a separation between design and implementation, and C offers little support for it.
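For what it's worth, the closest C comes to separating interface from implementation is an opaque type hidden behind a set of functions; everything beyond that is convention. A minimal sketch, using a stack module I invented for the purpose:

    #include <stdio.h>
    #include <stdlib.h>

    /* The "interface": in a real project this part would live in
       stack.h. Callers see an opaque type and four functions; the
       layout stays hidden. */
    struct stack;
    struct stack *stack_new(void);
    void stack_push(struct stack *s, int value);
    int  stack_pop(struct stack *s);
    void stack_free(struct stack *s);

    /* The "implementation": stack.c in a real project. This is as
       much separation of design from implementation as C offers,
       and it holds only by convention. */
    struct stack { int data[64]; int top; };

    struct stack *stack_new(void)           { return calloc(1, sizeof(struct stack)); }
    void stack_push(struct stack *s, int v) { if (s->top < 64) s->data[s->top++] = v; }
    int  stack_pop(struct stack *s)         { return s->top > 0 ? s->data[--s->top] : 0; }
    void stack_free(struct stack *s)        { free(s); }

    int main(void)
    {
        struct stack *s = stack_new();
        stack_push(s, 42);
        printf("%d\n", stack_pop(s));
        stack_free(s);
        return 0;
    }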
As for memcpy(): http://www.theregister.co.uk/2009/05/15/mi...anishes_memcpy/
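In short, the article is about Microsoft banning plain memcpy() from its code in favour of bounds-checked variants like memcpy_s(), after a long line of buffer-overflow bugs. A sketch of the failure mode (the names and sizes here are invented):

    #include <stdio.h>
    #include <string.h>

    /* memcpy has no idea how big the destination is, so a length
       taken from untrusted input can silently overrun the buffer. */
    static void copy_packet(char *dst, size_t dst_size,
                            const char *src, size_t src_len)
    {
        /* The banned pattern:
           memcpy(dst, src, src_len);   overflows dst if src_len > dst_size */

        /* The checked pattern, in the spirit of memcpy_s: */
        if (src_len <= dst_size)
            memcpy(dst, src, src_len);
        else
            fprintf(stderr, "rejected oversized copy\n");
    }

    int main(void)
    {
        char buf[8];
        const char evil[32] = "far more than eight bytes";
        copy_packet(buf, sizeof buf, evil, sizeof evil);  /* rejected */
        return 0;
    }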
It's very simple. And insanely powerful. And absolutely essential for anyone who wants to be a real programmer (instead of just a hobbyist). And an understanding of C will make so many things in other languages make more sense.
"Real programming" isn't synonymous with "system programming". Writing a windows application in Visual Basic is real programming. Writing an ASP.NET application for a web browser is real programming. Some programmers make a living out of just that, and they probably know a lot of things about application development a system programmer doesn't know. It's just a different field of specialization. A lot of C programmers today are unwilling to learn anything else than C and they can't do OOP and they can't do functional programming. Is that better? I think it's worse.
As for the rest, I share your views on the importance of learning the fundamentals of computing (yes, including C) and of knowing the implications of what you write in higher-level languages. Any good software engineering or computer science program at university should teach you that; if yours doesn't, choose another university. But I think new programs should not be written in C unless it's absolutely necessary. Usually you can use at least C++, and usually only part of the program truly requires a systems language; the rest can be written in Python or Java or whatever. In games, for instance, typically only the engine is written in C++; the actual game logic is written in Lua or other scripting languages (a sketch of that split follows). I think using these newer languages goes a long way toward a smaller codebase, one that's faster to develop and easier to read, maintain and extend; and you can easily get good performance with them if you know what you're doing.
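To make the engine/scripting split concrete, embedding a script interpreter takes only a few lines with the Lua C API (a sketch, assuming the Lua headers and library are installed):

    /* Build with something like: cc host.c -llua -lm */
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();  /* the embedded interpreter */
        luaL_openlibs(L);                /* expose Lua's standard libraries */

        /* The "engine" stays in C; the game logic is plain Lua text
           that designers can change without recompiling anything. */
        luaL_dostring(L, "print('hello from the scripted game logic')");

        lua_close(L);
        return 0;
    }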