
Big O Notation


Question

Hey guys,

I have been attending a tech school for Game and Simulation Programming. I am around 20 years old

I am about 90% complete with my bachelor's degree at ECPI College of Technology and I have been doing OK so far. But as my final classes approach I feel myself getting very nervous that my knowledge of programming is not sufficient to ace my most demanding classes. Well, my fears were realized when I started my Algorithm Analysis and Design class. I am aware that things such as "Big O notation" exist (or as I call it, the Big Oh Sh*t Notation), that they measure the efficiency of algorithm processes in milliseconds (10^-6), and that somehow logarithms come into play. Well, that didn't sit too well with my brain, and I had to get switched out of the class because I could not complete even the first assignment, which consisted of measuring algorithms. The problems looked like this:

        constant   logarithmic   linear   linearithmic   quadratic   cubic
n       O(1)       O(log N)      O(N)     O(N log N)     O(N²)       O(N³)

1        ?            ?            ?           ?             ?          ?
2        ?            ?            ?           ?             ?          ?
4        ?            ?            ?           ?             ?          ?
8        ?            ?            ?           ?             ?          ?
16       ?            ?            ?           ?             ?          ?
1024     ?            ?            ?           ?             ?          ?

That may not be an exact representation; I am just going off what I remember being on the screen.

I have to relearn C++ Data Abstraction and Problem Solving, which covers stacks, queues, Big O, etc., and was a class I took about two years ago as the prerequisite for my Algorithm class. I also have to review most of the later Programming II topics such as linked lists, multi-dimensional arrays, templates, etc.

My question to you guys is whether you can point me in the right direction.

1. Should I be using DevC++ or Visual Studio?

2. Any guides to Big O, algorithms, or programming in general?

3. Any tips or suggestions on where to get started?

Thank you guys so much.



  1. Whichever you feel most comfortable using.
  2. Big O notation has more to do with complexity than time. It relates how many times a specific operation will be carried out as the size of the input changes. For example, to bubble sort an array of n elements, the Big O notation is O(n²) in the worst case, because it has to swap up to n-1 elements n-1 times: (n-1)² = n² - 2n + 1. As n grows, the n² term grows fastest, so the algorithm is said to be of order n² - hence O(n²). To work out the Big O notation for an algorithm, identify the expensive operation(s) and how many times they are carried out as the size of the input changes. Forget about time (it's a by-product of the number of operations).
  3. Ask your teachers! If you're stuck, take the initiative and ask about what you don't understand. Try to be specific, though: don't just go to them and say "teach me Big O", but rather ask them to run through some examples where you're unsure (i.e. where logarithms come in), or set yourself some tasks (like working out the Big O for a particular algorithm) and ask if they would check it and tell you where you're going wrong.
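To make the bubble sort example above concrete, here is a minimal C++ sketch (the function name and the use of std::vector are my own choices, not from any course material):

```cpp
#include <utility> // std::swap
#include <vector>

// Bubble sort: repeatedly sweep the array, swapping adjacent
// out-of-order elements. In the worst case (reverse-sorted input)
// the inner comparison runs roughly n * n times - hence O(n²).
void bubbleSort(std::vector<int>& a) {
    for (std::size_t pass = 0; pass + 1 < a.size(); ++pass) {
        for (std::size_t i = 0; i + 1 < a.size() - pass; ++i) {
            if (a[i] > a[i + 1])
                std::swap(a[i], a[i + 1]); // the "expensive operation"
        }
    }
}
```

Counting how many times the swap/comparison line can run as n changes is exactly the exercise the notation describes.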


Regarding number 2...

M2Ys4U's* advice is pretty good - don't get bogged down in the mathematics of "big O notation"; it can look like garbage and scare you. Quite simply, it is a measure of time complexity, although not actual time itself (i.e. it doesn't describe a runtime of x seconds, milliseconds, whatever).

As the previous poster said, something like a bubble sort is very inefficient and falls in the order of n², because each of the n items in the array may have to be compared against the other n items to sort the array.

The other key concept is "divide and conquer" algorithms; these are a little harder to grasp. Essentially, each iteration of the algorithm halves the amount of input remaining, so the number of steps grows only with the logarithm of the input size, which makes them very efficient. This is referred to as "log n". Think of them as a kind of opposite to the horrible n² methods that are very slow and inefficient.
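The classic example of that halving behaviour is binary search over a sorted array. A minimal sketch (function name mine, assuming a sorted std::vector):

```cpp
#include <vector>

// Binary search: each comparison halves the remaining search range,
// so an n-element sorted vector takes about log2(n) steps - O(log n).
// Returns the index of 'target', or -1 if it is not present.
int binarySearch(const std::vector<int>& a, int target) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // midpoint of what's left
        if (a[mid] == target) return mid;
        if (a[mid] < target)
            lo = mid + 1; // discard the lower half
        else
            hi = mid - 1; // discard the upper half
    }
    return -1;
}
```

For 1024 elements that is at most about 10 comparisons, versus up to 1024 for a straight linear scan.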

Please correct me if I'm wrong; it's been three years or more since I studied algorithm development and I'm writing from memory!

*God that's a hard username to type.


As M2... said, big O notation doesn't measure efficiency in milliseconds. It is an upper bound on the complexity of the algorithm, i.e., the total number of operations as a function of the number of elements (n). Bubble sort is O(n²) because it may pass through the array once for each of its n elements.


As others have said, Big O notation is used for complexity analysis. The order of growth of an algorithm can significantly affect the time it takes to execute, particularly when working with big inputs.

What is meant by O(1), O(log N), O(N), O(N log N), O(N²), O(N³), etc. (there are many other orders; these are just common ones) is that, for an input of size n, the number of operations required will be on the order of the formula applied to n.

In your example exercise, the number of operations for those specific values of n and those specific orders would be the following:

n       O(1)   O(log N)    O(N)   O(N log N)     O(N²)        O(N³)
1        1        0           1          0            1            1
2        1        1           2          2            4            8
4        1        2           4          8           16           64
8        1        3           8         24           64          512
16       1        4          16         64          256         4096
1024     1       10        1024      10240      1048576   1073741824

As you can see, the number of operations required for the algorithms with worse orders skyrockets as the input size grows.
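The counts in the table can be reproduced with a few one-line helpers (a sketch; the function names are my own, and base-2 logarithms are assumed, as in the table, so log2(1024) = 10 and log2(1) = 0):

```cpp
#include <cmath>

// Approximate operation counts for the common growth orders
// at input size n, using base-2 logarithms.
long long opsLogN(long long n)  { return (long long)std::log2((double)n); }
long long opsNLogN(long long n) { return n * opsLogN(n); }
long long opsN2(long long n)    { return n * n; }
long long opsN3(long long n)    { return n * n * n; }
// e.g. opsNLogN(1024) -> 10240, opsN3(1024) -> 1073741824
```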

Thus, for an input size of n = 1024, a computer running an O(N log N) algorithm would finish before a computer 100,000 (one hundred thousand) times faster that uses an O(N³) algorithm (roughly 10240 vs 1073741824 operations needed).

If you have doubts, I'm sure you can find an introductory course on Google.

If you're further interested you might want to know this; if it confuses you, just ignore it for now:

There are some problems for which no polynomial-time algorithm is known - the famous NP-complete problems - and the way they scale up is staggering: with the known algorithms, they would simply take too long to process for any significant input size.

Edited by Argote
