• 0

Rules we learn at school


Question

Hello everyone!

I've recently started my studies (Bachelor of Applied Computer Science), and in our OOP classes we've been using Java to get familiar with object-oriented programming concepts in general (so not focusing on the Java API, but mainly on object relations, calling methods on other objects, call by reference/value, etc.).

We do learn a few rules for our code though. Things we should do:

  • never use more than one return statement
  • never use break or continue
  • avoid using switch

I am, of course, anything but an experienced programmer, but I quite like these principles. Whenever we see a code example that has multiple returns or uses break or continue, it takes a while to understand, while code with a single return and no breaks/continues always looks quite clean and easy to understand. I've personally never felt limited by these rules either.

What do you guys think?
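To make the contrast concrete, here's a minimal sketch (in C, with invented names, not anything from the course) of the single-return style being taught: the result accumulates in one variable and the function exits in exactly one place.

```c
#include <stdbool.h>

/* Single-return style: one result variable, one exit point.
   The loop condition carries the early-exit logic instead of break. */
bool contains(const int *values, int count, int target)
{
    bool found = false;
    for (int i = 0; i < count && !found; i++) {
        if (values[i] == target) {
            found = true;
        }
    }
    return found;
}
```

Note that the `!found` term in the loop condition is doing the job a `break` would otherwise do.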


Recommended Posts

  • 0

Switches and if/else-if/else chains sometimes lead to messy code.

The point is to keep the code small and easy to understand. That may mean refactoring to use continue and break statements, or vice versa: refactoring breaks and continues into loop conditions, if statements, and switches.
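As one hedged illustration of refactoring a messy switch (the enum and names here are invented for the example), a lookup table can shrink the dispatch code to almost nothing:

```c
#include <stddef.h>

/* Hypothetical message types; a switch over these can be replaced
   by a table indexed by the enum value. */
typedef enum { MSG_PING, MSG_DATA, MSG_CLOSE, MSG_COUNT } msg_type;

static const char *msg_names[MSG_COUNT] = {
    [MSG_PING]  = "ping",
    [MSG_DATA]  = "data",
    [MSG_CLOSE] = "close",
};

/* One bounds check replaces the whole switch body. */
const char *msg_name(msg_type t)
{
    if ((int)t < 0 || t >= MSG_COUNT)
        return "unknown";
    return msg_names[t];
}
```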


  • 0

Wouldn't initializing your variables (another good rule of thumb) solve that problem?


[code]
while (!eof(file))
    dataOffset = -1
    dataLength = -1
    header = file.readbytes(22)
    if (isValidHeader(header))
        dataOffset = int(file.readbytes(4))
    if (dataOffset >= 0)
        file.seek(dataOffset)
        dataLength = int(file.readbytes(4))
    if (dataLength >= 0)
        dataChunk = file.readbytes(dataLength)
        store(dataChunk)
        // process dataChunk...
[/CODE]

That mitigates the issue, but leaves most of the code indented. It's also not strictly equivalent, because you have to go through all the ifs every time, whereas my version continues to the next loop iteration as soon as the first error is detected. And now you initialise your variables twice (once to the error value, and again to the real value) instead of once. (By the way, my code didn't have any uninitialised variables: they were declared at the point of initialisation.)

Your version is also more error-prone: what happens if someone adds some logic before the end of the while loop? They have to remember to put it inside the last if (condition), because otherwise it'll execute for an invalid record. When you bail out early with continue, no code can be executed for invalid records by mistake.
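A minimal sketch of the early-continue style being described, using an invented in-memory record array instead of file I/O (this is not the poster's original code): invalid records are skipped at the top of the loop, so nothing below can run for them by mistake.

```c
#include <stdbool.h>

typedef struct {
    bool valid;
    int  value;
} record;

/* Early-continue style: bail out of the iteration as soon as the
   first error is detected; the processing code stays unindented. */
int sum_valid(const record *recs, int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++) {
        if (!recs[i].valid)
            continue;  /* skip invalid records immediately */
        sum += recs[i].value;
    }
    return sum;
}
```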


  • 0
If you need to know the type of a variable, get a good IDE and hover your mouse over it. Including the type in the variable name leads to garbage in front of the useful name. It also creates a maintenance nightmare when variable types change. A good example is in Win32: LPCSTR is defined to be a char const *. It stands for "Long Pointer to Constant STRing"; the "long pointer" part is completely irrelevant nowadays, but Windows is stuck with it for backwards compatibility.

I prefer not to use the mouse too much while programming. I also don't see the big deal about changing variable types: good code is portable, so you don't need to change the notation. I am not suggesting that all type information should be in the name, just a p for pointers, an a for arrays, a t for typecasts: that sort of thing.

I implement network protocols and there is a lot of packet encoding and decoding involved. Seeing a variable and knowing what it is definitely helps.

I am not claiming that this is useful everywhere, just that it is a rule I like to follow.
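For illustration only, a sketch of what such a prefix convention might look like in C. The UINT4 typedef and all names here are assumptions for the example, not the poster's actual code: u4 marks a 4-byte unsigned value, p a pointer, g a global.

```c
#include <stdint.h>

/* Hypothetical fixed-width type of the kind the convention encodes. */
typedef uint32_t UINT4;

UINT4 gu4PacketCount = 0;   /* g: global, u4: 4-byte unsigned */

/* pu4Lengths: p = pointer, u4 = element type is 4-byte unsigned. */
UINT4 sum_lengths(const UINT4 *pu4Lengths, UINT4 u4Count)
{
    UINT4 u4Total = 0;
    for (UINT4 i = 0; i < u4Count; i++)
        u4Total += pu4Lengths[i];
    return u4Total;
}
```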


  • 0

I prefer not to use the mouse too much while programming. I also don't see the big deal about changing variable types: good code is portable, so you don't need to change the notation. I am not suggesting that all type information should be in the name, just a p for pointers, an a for arrays, a t for typecasts: that sort of thing.

I implement network protocols and there is a lot of packet encoding and decoding involved. Seeing a variable and knowing what it is definitely helps.

I am not claiming that this is useful everywhere, just that it is a rule I like to follow.

If you can't remember the types that you're using, or anything of that nature, then you shouldn't be programming. You should be using smart variable naming conventions, not crappy polluted names that are not helpful at all. If you want to change the type, then you have to change the variable name too, and that breaks source compatibility and even binary compatibility. I write a network protocol, and I don't use any of that type-in-the-variable garbage. By the way, it's not a rule, it's just your preference, and a bad one at that.


  • 0

If you can't remember the types that you're using, or anything of that nature, then you shouldn't be programming. You should be using smart variable naming conventions, not crappy polluted names that are not helpful at all. If you want to change the type, then you have to change the variable name too, and that breaks source compatibility and even binary compatibility. I write a network protocol, and I don't use any of that type-in-the-variable garbage. By the way, it's not a rule, it's just your preference, and a bad one at that.

Do you honestly remember the type of every variable you declare? I wonder if you have ever written more than 1000 lines in a program. UINT4 u4Variable is 4 bytes of memory, plain and simple. If you are porting it to a different platform, you modify what UINT4 refers to so that it still means 4 bytes of memory. If there is a possibility that you may need more than 4 bytes of memory for that particular purpose in the future, you should be using a more flexible type in the first place.

Naturally, there are tools that make all this redundant, but there is nothing wrong in relying less on them. I honestly feel that all these tools take the fun out of programming: you feel like part of an assembly line rather than a human being capable of making intelligent decisions.


  • 0

Do you honestly remember the type of every variable you declare? I wonder if you have ever written more than 1000 lines in a program. UINT4 u4Variable is 4 bytes of memory, plain and simple. If you are porting it to a different platform, you modify what UINT4 refers to so that it still means 4 bytes of memory. If there is a possibility that you may need more than 4 bytes of memory for that particular purpose in the future, you should be using a more flexible type in the first place.

Naturally, there are tools that make all this redundant, but there is nothing wrong in relying less on them. I honestly feel that all these tools take the fun out of programming: you feel like part of an assembly line rather than a human being capable of making intelligent decisions.

If you are changing types to match a new system, you are doing it wrong. You should be using uint32_t and friends and having it done for you.

The tools are there to help you. The way I see it, I'm the creative one coming up with the way it should be done; my lovely IDE just colours it nicely and puts a squiggly red line under my mistakes as I type.
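A small example of the stdint.h approach. read_be32 is an invented helper, but it shows the point: with fixed-width types provided by the toolchain, decoding code ports without redefining any types.

```c
#include <stdint.h>

/* Decode a 32-bit big-endian field byte by byte.  Assembling the
   value this way is correct regardless of the host's endianness. */
uint32_t read_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |
            (uint32_t)p[3];
}
```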


  • 0

If you are changing types to match a new system, you are doing it wrong. You should be using uint32_t and friends and having it done for you.

The tools are there to help you. The way I see it, I'm the creative one coming up with the way it should be done; my lovely IDE just colours it nicely and puts a squiggly red line under my mistakes as I type.

I am not sure I follow. Each chipset manufacturer provides a set of APIs and basic data types that their OS variant (usually some variant of Linux butchered to suit their requirements) supports. Our code is completely portable: we just have to match our data types with what the underlying architecture supports. As we cannot compromise on performance, we have to port for each environment.

Edit: Just to be clear, I am talking about C.
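A hedged sketch of the kind of hand-rolled portability layer being described (PLATFORM_A and UINT4 are invented for the example, not the poster's actual header): only the typedef changes per target; the code written against it never does.

```c
/* Per-platform selection happens once, in one header. */
#if defined(PLATFORM_A)
typedef unsigned long UINT4;   /* hypothetical target where long is 32 bits */
#else
typedef unsigned int  UINT4;   /* int is 32 bits on most common targets */
#endif

/* Code written against UINT4 is untouched when porting. */
UINT4 checksum(const unsigned char *data, UINT4 len)
{
    UINT4 sum = 0;
    for (UINT4 i = 0; i < len; i++)
        sum += data[i];
    return sum;
}
```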


  • 0

I am not sure I follow. Each chipset manufacturer provides a set of APIs and basic data types that their OS variant (usually some variant of Linux butchered to suit their requirements) supports. Our code is completely portable: we just have to match our data types with what the underlying architecture supports. As we cannot compromise on performance, we have to port for each environment.

Edit: Just to be clear, I am talking about C.

Ah, I was thinking that you had stdint.h available and configured for each system.


  • 0

Do you honestly remember the type of every variable you declare? I wonder if you have ever written more than 1000 lines in a program. UINT4 u4Variable is 4 bytes of memory, plain and simple. If you are porting it to a different platform, you modify what UINT4 refers to so that it still means 4 bytes of memory. If there is a possibility that you may need more than 4 bytes of memory for that particular purpose in the future, you should be using a more flexible type in the first place.

Naturally, there are tools that make all this redundant, but there is nothing wrong in relying less on them. I honestly feel that all these tools take the fun out of programming: you feel like part of an assembly line rather than a human being capable of making intelligent decisions.

I've written a project with over 25,000 lines of code. I remember everything that I write. I also use "auto" a lot in C++; my code completion gives me all the type information. Obviously you aren't using a decent IDE that does that, and you can blame that on yourself. Developing in Notepad or Vim or something doesn't make you better. Wasting your time prefixing every variable with a type makes no sense: you are just making more work for yourself, and more to maintain. If you prefer that much work, then continue. I'd rather just get the stuff done, keep my source and binary compatibility, and keep my source code neat without "Hungarian notation" garbage in it.

If you really want to see my code, just PM me and I'll give you the link (even though my repo is outdated, and the most recent code isn't on there).


  • 0

I've written a project with over 25,000 lines of code. I remember everything that I write. I also use "auto" a lot in C++; my code completion gives me all the type information. Obviously you aren't using a decent IDE that does that, and you can blame that on yourself. Developing in Notepad or Vim or something doesn't make you better. Wasting your time prefixing every variable with a type makes no sense: you are just making more work for yourself, and more to maintain. If you prefer that much work, then continue. I'd rather just get the stuff done, keep my source and binary compatibility, and keep my source code neat without "Hungarian notation" garbage in it.

If you really want to see my code, just PM me and I'll give you the link (even though my repo is outdated, and the most recent code isn't on there).

Tell me this. When you are browsing a lot of code, wouldn't you like to see at a glance what the type of each variable is, especially when much of the code is written by someone else? Would you rather hover over each variable to see what type it is? If you are not convinced by my argument, let us agree to differ.


  • 0

Tell me this. When you are browsing a lot of code, wouldn't you like to see at a glance what the type of each variable is, especially when much of the code is written by someone else? Would you rather hover over each variable to see what type it is? If you are not convinced by my argument, let us agree to differ.

Why would I care about that? You're trying too hard to convince me to support MS Hungarian notation.


  • 0

Tell me this. When you are browsing a lot of code, wouldn't you like to see at a glance what the type of each variable is, especially when much of the code is written by someone else? Would you rather hover over each variable to see what type it is? If you are not convinced by my argument, let us agree to differ.

I work on a project that has 1.3 million lines of code, and I've never had a problem with that. Visual Studio just makes it too easy to figure stuff out on the fly as you type.


  • 0

Tell me this. When you are browsing a lot of code, wouldn't you like to see at a glance what the type of each variable is, especially when much of the code is written by someone else? Would you rather hover over each variable to see what type it is? If you are not convinced by my argument, let us agree to differ.

So do you create an abbreviation for every type you create? Or is it just for built-in types? There are about a million built-in types in .NET (or Java or whatever framework you use): do you have abbreviations for all those? Or is it just for basic numeric types and strings? I don't see how that convention can be enforced in a way that is both coherent and practical.

  • 0

I use multiple returns, especially in validation functions. Default the return to invalid and put the validated return inside an if; it makes for a very short function:


[code]
bool CheckBounds(int number, int upper, int lower)
{
    if (number >= lower && number <= upper)
    {
        return true;
    }
    return false;
}
[/CODE]


  • 0

I use multiple returns, especially in validation functions. Default the return to invalid and put the validated return inside an if; it makes for a very short function:

[code]
bool CheckBounds(int number, int upper, int lower)
{
    if (number >= lower && number <= upper)
    {
        return true;
    }
    return false;
}
[/CODE]


Even Shorter :)
[code]
bool CheckBounds(int number, int upper, int lower)
{
    return (number >= lower && number <= upper);
}
[/CODE]


  • 0

Even Shorter :)

[code]
bool CheckBounds(int number, int upper, int lower)
{
    return (number >= lower && number <= upper);
}
[/CODE]

Yeah, for my simple example. There are times when you can't do that; I just wanted my example to be really clear.
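One case where the single-expression form doesn't suffice, sketched with an invented variant of CheckBounds that reports which bound failed. Distinct results force distinct early returns, which can't be collapsed into one boolean expression.

```c
/* Each failed check needs its own error code, so a single
   boolean expression is no longer enough. */
typedef enum { OK, ERR_TOO_LOW, ERR_TOO_HIGH } check_result;

check_result CheckBoundsDetailed(int number, int upper, int lower)
{
    if (number < lower)
        return ERR_TOO_LOW;
    if (number > upper)
        return ERR_TOO_HIGH;
    return OK;
}
```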


  • 0
Do you honestly remember the type of every variable you declare? I wonder if you have ever written more than 1000 lines in a program. UINT4 u4Variable is 4 bytes of memory, plain and simple. If you are porting it to a different platform, you modify what UINT4 refers to so that it still means 4 bytes of memory. If there is a possibility that you may need more than 4 bytes of memory for that particular purpose in the future, you should be using a more flexible type in the first place.

Naturally, there are tools that make all this redundant, but there is nothing wrong in relying less on them. I honestly feel that all these tools take the fun out of programming: you feel like part of an assembly line rather than a human being capable of making intelligent decisions.

What are you actually writing? Because really, you shouldn't have to change your code every time you port it to a different system (in that case the code isn't that portable), unless you're dealing with some strange embedded systems.

But even then, in your example you're using a plain 32-bit integer; I don't know what type of computer wouldn't handle those well.


  • 0

So do you create an abbreviation for every type you create? Or is it just for built-in types? There are about a million built-in types in .NET (or Java or whatever framework you use): do you have abbreviations for all those? Or is it just for basic numeric types and strings? I don't see how that convention can be enforced in a way that is both coherent and practical.

I work in C. We follow this convention for basic data types (numerical, p for pointer, a for array, g for global), not for structures.

What are you actually writing? Because really, you shouldn't have to change your code every time you port it to a different system (in that case the code isn't that portable), unless you're dealing with some strange embedded systems.

But even then, in your example you're using a plain 32-bit integer; I don't know what type of computer wouldn't handle those well.

The code isn't changed; the portability layer is. The point I was trying to make is that there is no need for me to change the prefixes every time I port to a different architecture. That is why I chose that particular example.

Let me reiterate: this is a rule my department follows, and I am a fan. There may be many reasons why Hungarian notation is not used elsewhere, but the disadvantages do not affect us, and it provides valuable information about data types at a glance. I think many of you are referring to higher-level languages without direct memory manipulation, where the editor will provide type checking. That is not possible in the code we write, and we need to know at a glance how many bytes to memcpy.

To be honest, when I put up the post about rules, it was a set of personal rules I like to follow. I am not saying that these rules are applicable or convenient for everyone.
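As an illustration of the "know how many bytes to memcpy" point, here is a hypothetical packet header decode in C. The layout, field names, and prefixes are invented for the example; the fixed-width types make the byte counts explicit.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical wire layout: two 4-byte fields back to back.
   The u4 prefix tells the reader each field is exactly 4 bytes. */
typedef struct {
    uint32_t u4SeqNum;
    uint32_t u4Length;
} packet_header;

void decode_header(const uint8_t *pu1Buf, packet_header *pHdr)
{
    /* memcpy sidesteps the unaligned-access problems that casting
       the buffer pointer directly could cause. */
    memcpy(&pHdr->u4SeqNum, pu1Buf, 4);
    memcpy(&pHdr->u4Length, pu1Buf + 4, 4);
}
```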


  • 0

If any API or other pre-programmed structure is supposedly forcing you to use goto to handle errors, then there is something fundamentally wrong with it, and it is a regular target for hackers. Can you probably make more efficient code with goto? Yeah, used correctly. But that's not really the point. If you wanted the world's most efficient code, you would be programming in assembly anyway.

Finally, something we can agree upon. :)
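For context, this is the goto error-handling pattern usually being debated in C: every failure point jumps to a single cleanup label at the end of the function. The names in this sketch are invented; it shows how, used correctly, the pattern removes duplicated cleanup code rather than producing spaghetti.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Allocate two buffers; on any failure, jump to one shared cleanup
   path instead of repeating the free() calls at each exit. */
bool setup_buffers(size_t n, char **out_a, char **out_b)
{
    char *a = malloc(n);
    char *b = NULL;

    if (a == NULL)
        goto fail;
    b = malloc(n);
    if (b == NULL)
        goto fail;

    *out_a = a;
    *out_b = b;
    return true;

fail:
    free(a);   /* free(NULL) is a safe no-op */
    free(b);
    return false;
}
```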


This topic is now closed to further replies.