AltecXP Posted June 20, 2009

I haven't programmed anything since VB6 back in 2004/5. I'd like to get back in the game, but where should I start? Pick up with VB.NET, or start clean with C# or C++? I've played with C++ before but never really got into it like I did with VB. Any opinions?
code.kliu.org Posted June 28, 2009

References exist because they enable operator overloading. My complaint against references was there only as a supporting example of how anything that hides from the programmer what used to be explicit is harmful, and how, despite eliminating some of the obfuscations of C (e.g., the easily-abused preprocessor), modern languages often add their own dangerous obfuscations: references, exceptions, and, in some garbage-collected systems, the need for the programmer to be wary of cyclical references (not that this is tougher than manual memory management, but some GCs lull programmers into a false sense of security, and they stop being aware of these things).

With respect to references as they are implemented in C++, yes, they are needed for operator overloading. But that doesn't change the fact that references are still obfuscations (and what about languages in which operator overloading--something that is very much specific to C++--is not an issue?). And that they are needed for operator overloading in C++ is really an issue of the design of C++ (and we can both agree that there are plenty of issues there). In general, I avoid C++ operator overloading in favor of functions, because the latter are, well, more explicit, and the overloading of operators can often be abused or confusing: I'm probably not the only one who thinks that the use of the bitshift operators in streams is both ugly and bizarre.
Andre S. Posted June 29, 2009 (edited)

"My complaint against references was there only as a supporting example of how anything that obfuscates from the programmer what used to be explicit is harmful [...] some GCs lull programmers into a false sense of security and they stop being aware of these things."

The features you mention are not obfuscations; they're layers of abstraction that allow you to program at a higher level, which represents a huge gain in overall productivity. It's like using STL containers instead of rolling your own: you know they work reliably and efficiently, so why reinvent the wheel? One certainly needs to be aware of what's under the hood of a very high-level language, so the low-level details don't magically cease to exist, but at least they're made more automated and reliable. Lots of perfectly fine and resource-efficient programs are written all the time in C#, Python and so forth, and lots of memory hogs are written all the time in C++ (Firefox, anyone?), so I think it boils down to competence; but I'd also say that at equal competence, a C# programmer can get features out the door a lot quicker than a C++ programmer, simply because the language is more expressive.

"With respect to references as they are implemented in C++, yes, they are needed for operator overloading. But that doesn't change that references are still obfuscations (and what about languages in which operator overloading--something that is very much specific to C++--is not an issue?)."

You can't have a pointer in a managed language precisely because it's managed; the object can be moved around in memory by the GC when compaction occurs, so it wouldn't make sense for you to store a fixed memory address and think that it represents your object, as is the case in C++. Anyway, I think that in general, object orientation only really makes sense in a managed environment: when I create an object, I shouldn't have to care about where it is or when I should free its memory. Also, when I change the implementation of my class but not its interface, clients shouldn't have to recompile (compile-time encapsulation); being a native language, C++ can't offer that. There's almost no encapsulation at all in C++; everything is visible to everyone, because everyone has direct access to memory. If you look at how Java was designed, all the features were put in for the best support of object orientation, and managed memory was deemed necessary to achieve that.

"in general, I avoid C++ operator overloading in favor of functions because the latter are, well, more explicit and that the overloading of operators can often be abused or confusing: I'm probably not the only one who thinks that the use of the bitshift operators in streams is both ugly and bizarre"

Gosh, me too. It's so confusing, especially for beginners, when you explain how to overload << (it shouldn't be a class member -- wut?) in order to provide pretty-printing for your class, while in every single other language on earth you implement ToString() or a similar helper method and you're done, which is much more intuitive. Whoever thought that hello world should be written using left-shifts (was it really Stroustrup?) should rethink his career in language design, seriously.
Edited June 29, 2009 by Dr_Asik
code.kliu.org Posted June 29, 2009 (edited)

"The features you mention are not obfuscations, they're layers of abstraction"

Obfuscations and abstractions are woven from the same thread. When the hiding produces uniformity and better code structures, they're abstractions (e.g., going from x86 to C, or from context pointers to classes). When the hiding hinders good programming (references and exceptions), they are obfuscations.

"memory hogs are written all the time in C++ (Firefox anyone?)"

It's interesting that you bring up this specific example. :) Firefox is special because it is a platform, a lot like .NET, in a way. The core consists of a rendering engine and a JavaScript engine. The Firefox UI is written in XUL (probably where Microsoft got the idea for XAML, which bears some similarities) and JavaScript, and styled using CSS. So while the core is written in C++ (and the JS engine in C) (and .NET itself is obviously written natively), the Firefox browser is written almost entirely in a high-level, managed-memory language. And Firefox's memory problems actually stem from quirks of a managed memory model.

One component of the Firefox memory problem comes from cyclic references. Although this isn't really a problem in the main Firefox source, thanks to careful code reviews and testing, it often can be a problem with third-party JavaScript. The other big problem is fragmentation. One common problem with managed memory models (not specific to Mozilla) is that the programmer often has less control over allocation patterns, and this often leads to reduced locality and greater fragmentation. If, after various allocations and frees, you are left with a mere 4096 bytes in use, then in theory you should be using only one page (4K) of memory. But if those bytes are scattered across 100 different memory pages due to fragmentation, then you end up holding open all 100 pages, turning an ideal 4K footprint into a 400K footprint (it's usually not this extreme, but this is an illustration).

The overall effect of the managed memory model in Firefox is most pronounced if you install NoScript and disable most content JavaScript (which all uses managed memory). Under such a scenario, I can run Firefox with several dozen tabs for well over a month (until I have to reboot for Patch Tuesday), with total memory usage hovering at a steady 200-400 MB. If I disable NoScript and thus enable all JavaScript, memory usage often soars past 1 GB within a week, and I eventually have to restart the browser.

Not that I'm saying a managed memory model is inherently bad; on the whole, it's a good thing for certain situations, and there are situations where you must have it, because manual memory management makes no sense for things like scripting. And in the case of Firefox, having everything but the core be XUL+JS+CSS means faster development and unmatched extensibility (it is for this reason that Chrome extensions will never come close to being able to do the same kinds of things that Firefox extensions can, because Gecko is a bona fide platform while WebKit is not). My point in bringing up GC is that a lot is lost through the information hiding, such as when programmers think GC is a license to do whatever they want (which is how many people tout GC: never worry about managing memory again! riiiight...) and then go off creating cycles.

"Gosh, me too. It's so confusing especially for beginners when you explain how to overload << (it shouldn't be a class member - wut?) in order to provide pretty-printing for your class, while in every single other language on earth you implement ToString() or a similar helper method and you're done, and it's much more intuitive. Whoever thought that hello world should be written using left-shifts (was it really Stroustrup?) should rethink his career in language design, seriously."

Finally, we agree on something! ;)

Edited June 29, 2009 by code.kliu.org
Andre S. Posted June 29, 2009 (edited)

"Obfuscations and abstractions are woven from the same thread. When the hiding produces uniformity and better code structures, they're abstractions (e.g., going from x86 to C or from context pointers to classes). When the hiding hinders good programming (references and exceptions), they are obfuscations."

While C++ references are a questionable design choice, references in a managed language like Java or C# are not the same thing at all; as I've said, they're a direct implication of automatic memory management. And I don't think they hinder good programming; on the contrary, it's easier to produce a correct program in Java than in C++, largely due to automatic memory management. We can agree that a bad programmer will write broken apps in any language, and he can definitely introduce memory leaks in a Java program; but my point is that there's nothing inherently "obfuscated" about having the GC take care of deallocation and compaction. It's a technique that has proved very useful and effective. In my view it's not fundamentally different from using std::vector instead of implementing your own list: it's a smart choice most of the time, even though it's possible to do pretty stupid things with STL containers.

Edit: By the way, .NET and Java know how to deal with cyclic references, and it has no overhead. Re-edit: not really sure about Java, though.

Edited June 29, 2009 by Dr_Asik
Laurë Posted June 29, 2009

:heart: OCaml and C, but for a beginner, I'd say pick something you like the look of that has a good IDE. I guess I would say Java, but C# sounds good.
code.kliu.org Posted June 29, 2009

"While C++ references are a questionable design choice, references in a managed language like Java or C# are not the same thing at all; as I've said, they're a direct implication of automatic memory management."

I know that we're veering way off topic with this, but I do want to clarify that the problem with references isn't a question of manual vs. automatic memory management (that was a separate issue I had inserted in my earlier post). The reason references are harmful, in my view, is that they hide from the reader the extent of any changes one makes to an object: changing an object whose existence is confined locally to a function looks exactly like changing an object that is actually just a reference to another. It's that lack of explicitness that makes them bad. In contrast, Perl's handling of references is perfect: you have to explicitly dereference, which means more of Perl's (in)famous symbol soup, but it also means that what's going on is perfectly clear to someone reading the code from that line alone, without having to remember or hunt down how things were declared. So the semantics surrounding references in Perl are just like the semantics surrounding pointers, but Perl still manages everything for you, and you can't do things like raw pointer math.
Andre S. Posted June 29, 2009

In a language where everything is a reference, there's no point in having special syntax for dereferencing, and this is the case with Java. C# adds value types, but even then the general rule in C# is the same as in Java. You would always have to dereference to do anything with objects, so it would be purely useless noise. I think you're confusing an object's scope with where it is allocated. In Java, if I declare and initialize a MyClass locally, I write MyClass m = new MyClass(); and even though I declared it locally, m is still a disguised pointer to managed memory, and I'm still implicitly dereferencing it every time I use it within that function. So having special dereferencing syntax wouldn't help me distinguish between objects of different scopes, since in Java everything is a reference. What could have been introduced is special syntax for distinguishing between primitive types and objects (or value/reference types in C#), but as primitive/value types play a secondary role in these languages, it was deemed not an issue.

And the beauty here is that if the Java runtime one day gets smart enough, maybe it'll detect that m never escapes the method where it's declared, decide to allocate it on the stack instead of the heap, and performance improves. (This is a purely fictional example.) That's what automatic memory management is about: letting the runtime decide how, where and when to put your objects as it sees fit, so that from the code's perspective there are no implications about those details. And if you really need to manage those details, just use C; it's callable from pretty much any language.

Perl would be the only managed language I've heard of with a specific dereference syntax (Ruby and Python don't have one). I know very little about Perl, except that it has specific syntax for a few basic types like arrays and hashes, so it seems it was designed to provide first-class support for a few primitive types and added object orientation as an afterthought; it doesn't look very clean or consistent. Again, these are just my guesses, as I haven't learned the language.
code.kliu.org Posted June 30, 2009

This is going way, way off-topic, but to heck with that. Perhaps I'm not communicating this clearly. The issue isn't scoping. Yes, in virtually all managed memory environments everything is allocated on the heap, everything is basically an "object", and there's nothing wrong with that. You seem to be talking about the mechanism by which things are accessed in such environments, and "references" used in that sense is perfectly fine and necessary; I never had any intent to dispute that. But when most people talk about references, and when I talk about references, we're talking about cases where an object has more than one name. Consider the sort of typical question that you would see on an AP exam:

void foo(int &bar) { bar = 2; }

int main() { int x = 1; foo(x); return 0; }

What's x? Explicit referencing and dereferencing of the sort used by Perl or C pointers would immediately alert you in two places: when you assign 2 to bar, you would see an explicit dereferencing operator, and when you pass x to foo, you would see an explicit referencing operator. Without those explicit operators, you must refer to the declaration of foo, which in non-trivial programs can be far removed from the actual assignment of bar and the passing of x; more importantly, it's information that is no longer coupled with the actual use (yes, there is the IDE, but that's just papering over a poor language design decision--and what about cases where an IDE can't help, like when you are reviewing a code patch?). This is a very similar principle to the advocacy of Apps Hungarian notation--the more relevant information contained in each line of code, the easier it is to verify its correctness just by looking at it (it's also similar to the argument against exceptions). And it's not just about passed variables; you can also have referencing happening within a function.

Of course, various languages treat this differently, so the exact degree and nature of the problem varies depending on which one you are talking about. Take Perl, for example: everything is passed by value (everything is copied) unless you explicitly reference it, which may be an inefficient policy, but it is consistent and makes it easier to write correct code (of course, Perl does a lot of other things that make it harder to write correct code, like its use of the $_ special variable, which, surprise, surprise, trips people up because it robs them of syntactical explicitness; but this is one of those instances where Perl gets it right). Whereas many other languages pass some things by value and some things by reference, with no explicit referencing/dereferencing operators (and this is a problem that has bitten me before, because it's easy to get the different conventions mixed up once you've used a variety of languages).

Of course, lots of languages use references. Life goes on. Nothing terribly bad comes of all this, because you can just look back and make a note of declarations or whatever, and plenty of good code gets written in this world of references. But the loss of explicitness in the syntax that took place in many languages does add a bit of extra mental bookkeeping and is an extra way for someone to shoot themselves in the foot (as I have observed many times when tutoring). Unlike the extra mental bookkeeping added by something like manual memory management, though, it's not clear what the death of syntactical explicitness with respect to references bought us: for the cost of extra mental bookkeeping, manual memory management gives us better performance. For the cost of extra mental bookkeeping, non-explicit referencing gives us... uh... hmm... fewer keystrokes?
Andre S. Posted June 30, 2009

OK, thanks for making it clear that you're talking about references in C++, not references in a managed language. In C++ they are merely syntactic sugar for const pointers; in a managed language they're the only logical way of handling objects. C++ is confusing in that regard because it's based on C, and C has only value semantics: everything is always passed by value. If you want references in C, you have to explicitly take the address of something and pass that by value, and explicitly dereference the address on the other side of the call. So in C it's simple: there are only value semantics, and if you need to pass by reference, you have to make it explicit on both sides of the call. In a managed language like Java, there are only references (save for primitive types), so the default is reference semantics. If you want to pass an object by value, you have to explicitly clone it and pass a reference to the clone. That's also simple; it's just the opposite default behavior. C++ has both value and reference semantics and doesn't make the distinction explicit from the caller's point of view, so it does introduce an element of complexity--one with, I agree, little benefit for the confusion it creates. If I managed to miss your point again, please correct me.
zeroskyx Posted July 7, 2009

I'm coming from C++ but moved over to C#. It's an awesome language, and if you don't need the ultimate performance of C/C++ applications, I'd really suggest C#, since it's clean, type-safe, straightforward and slowly becoming very popular.