Friday, February 11, 2011

Shell Sort

  • requires O(N^(3/2)) operations in the worst case (for suitable increment sequences), which means that it can be quite effectively used even for moderately large files (say N < 5000).
  • Invented by Donald Shell in 1959, the shell sort is the most efficient of the O(n²) class of sorting algorithms.
  • It is also the most complex of the O(n²) algorithms.
  • The shell sort is a "diminishing increment sort": it compares and sorts elements that are a fixed gap apart, shrinking the gap with each pass. (It is sometimes confused with the comb sort, which is a different gap-based variant of the bubble sort.)

Pros: Efficient for medium-size lists.

Cons: Somewhat complex algorithm, not nearly as efficient as the merge, heap, and quick sorts.


The algorithm makes multiple passes through the list, and each time sorts a number of equally sized sets using the insertion sort. The size of the set to be sorted gets larger with each pass through the list, until the set consists of the entire list. (Note that as the size of the set increases, the number of sets to be sorted decreases.) This sets the insertion sort up for an almost-best case run each iteration with a complexity that approaches O(n).


Example and Steps in Shell Sort:

The idea is to divide the sequence into groups of elements that are far apart, which brings a random sequence close to sorted order relatively quickly, and then to apply an insertion sort to obtain the final order.

This lets us compare elements that are a relatively great distance from each other.

Steps (illustrated for an 8-element list):

1. Start with a gap of half the length (gap = 4), and compare and sort corresponding elements (four groups, two elements/group).

2. Halve the gap (gap = 2), and sort the two groups of four elements whose indices are 2 apart.

3. Halve the gap again (gap = 1): a final insertion sort finishes the nearly ordered list.

For example, on the list 8 3 7 1 9 2 6 5, the gap-4 pass yields 8 2 6 1 9 3 7 5, the gap-2 pass yields 6 1 7 2 8 3 9 5, and the gap-1 pass produces the sorted list 1 2 3 5 6 7 8 9.



Empirical Analysis

Shell Sort Efficiency

The shell sort is by far the fastest of the n² class of sorting algorithms. It's more than 5 times faster than the bubble sort and a little over twice as fast as the insertion sort, its closest competitor.


The shell sort is still significantly slower than the merge, heap, and quick sorts, but its relatively simple algorithm makes it a good choice for sorting lists of less than 5000 items unless speed is hyper-critical. It's also an excellent choice for repetitive sorting of smaller lists.


Source Code

Below is the basic shell sort algorithm.


void shellSort(int numbers[], int array_size)
{
  int i, j, increment, temp;

  increment = 3;
  while (increment > 0)
  {
    /* One insertion-sort pass over the elements that are 'increment' apart. */
    for (i = 0; i < array_size; i++)
    {
      j = i;
      temp = numbers[i];
      /* Shift larger elements in this group up by one gap position. */
      while ((j >= increment) && (numbers[j - increment] > temp))
      {
        numbers[j] = numbers[j - increment];
        j = j - increment;
      }
      numbers[j] = temp;
    }
    /* Shrink the increment (3, then 1), ending at 0 to terminate. */
    if (increment / 2 != 0)
      increment = increment / 2;
    else if (increment == 1)
      increment = 0;
    else
      increment = 1;
  }
}
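
As a quick usage sketch (the sample array and the main driver are my own illustration, assuming the shellSort function above is in the same file), the function sorts the array in place:

#include <stdio.h>

int main(void)
{
    int data[] = { 8, 3, 7, 1, 9, 2, 6, 5 };  /* the list from the example above */
    int n = sizeof data / sizeof data[0];
    int i;

    shellSort(data, n);                        /* sorts in place */

    for (i = 0; i < n; i++)
        printf("%d ", data[i]);                /* prints: 1 2 3 5 6 7 8 9 */
    printf("\n");
    return 0;
}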

Thursday, February 10, 2011

Selection Sort

  • a simple O(n²) sorting algorithm
  • the method requires O(N²) comparisons and so it should only be used on small files.
  • There is an important exception to this rule. When sorting files with large records and small keys, the cost of exchanging records controls the running time.
  • In such cases, selection sort requires O(N) time, since the number of exchanges is at most N (see the sketch after this list).
  • The selection sort works by selecting the smallest unsorted item remaining in the list, and then swapping it with the item in the next position to be filled. The selection sort has a complexity of O(n²).
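
The large-records exception is easiest to see in code. Below is a minimal sketch (the record layout, field names, and the selectionSortRecords name are my own illustration): comparisons look only at the small key, and each pass performs at most one exchange of a whole record, so no more than N-1 large copies happen in total.

/* Hypothetical record: a small key plus a large payload. */
struct record {
    int key;
    char payload[1024];
};

void selectionSortRecords(struct record recs[], int n)
{
    int i, j, min;
    struct record temp;

    for (i = 0; i < n - 1; i++)
    {
        min = i;
        for (j = i + 1; j < n; j++)        /* O(N²) cheap key comparisons */
            if (recs[j].key < recs[min].key)
                min = j;
        if (min != i)                      /* at most one costly record exchange per pass */
        {
            temp = recs[i];
            recs[i] = recs[min];
            recs[min] = temp;
        }
    }
}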

Pros: Simple and easy to implement.

Cons: Inefficient for large lists, so similar to the more efficient insertion sort that the insertion sort should be used in its place.


Steps in Selection Sort:

1. Select the largest element of the array.

2. Swap it with the last element.

3. Now do the same thing again for the subarray that excludes the last (now sorted) element.

4. Keep doing this until you are down to a subarray of two elements. (The code below uses the symmetric variant: it repeatedly selects the smallest element and swaps it toward the front.)


Example:

Repeatedly find the next largest (or smallest) element in the array and move it to its final position in the sorted array. For instance, selecting the smallest element first, the list 4 8 1 9 3 becomes 1 8 4 9 3, then 1 3 4 9 8, then 1 3 4 9 8 again (the 4 is already in place), and finally 1 3 4 8 9.


Empirical Analysis

Selection Sort Efficiency

The selection sort is the unwanted stepchild of the n2 sorts. It yields a 60% performance improvement over the bubble sort, but the insertion sort is over twice as fast as the bubble sort and is just as easy to implement as the selection sort. In short, there really isn't any reason to use the selection sort - use the insertion sort instead.

If you really want to use the selection sort for some reason, try to avoid sorting lists of more than 1,000 items with it, or repetitively sorting lists of more than a couple hundred items.


Source Code

Below is the basic selection sort algorithm.


void selectionSort(int numbers[], int array_size)
{
  int i, j;
  int min, temp;

  for (i = 0; i < array_size - 1; i++)
  {
    /* Find the smallest element remaining in the unsorted part. */
    min = i;
    for (j = i + 1; j < array_size; j++)
    {
      if (numbers[j] < numbers[min])
        min = j;
    }
    /* Swap it into the next position to be filled. */
    temp = numbers[i];
    numbers[i] = numbers[min];
    numbers[min] = temp;
  }
}

Bubble Sort

· is the oldest and simplest sort in use.

· Unfortunately, it's also the slowest.

· The bubble sort is generally considered to be the most inefficient sorting algorithm in common usage. Under best-case conditions (the list is already sorted), the bubble sort can approach linear O(n) complexity, provided it checks whether a pass made any swaps (see the early-exit variant after the source code). The general case is an abysmal O(n²).

· While the insertion, selection, and shell sorts also have O(n²) complexities, they are significantly more efficient than the bubble sort.

Pros: Simplicity and ease of implementation.

Cons: Horribly inefficient.

Steps in Bubble Sort:

1. Compare each item in the list with the item next to it.

2. Swap them if required (if not in correct order).

3. The algorithm repeats this process until it makes a pass all the way through the list without swapping any items (in other words, all items are in the correct order).

4. This causes larger values to "bubble" to the end of the list while smaller values "sink" towards the beginning of the list.

In summary, each complete pass through the list has the effect of pushing each value slightly closer to its sorted position. After a finite number of passes (at most N-1) the list must eventually become completely sorted.


Example:

Consider this example list, below. Let’s use bubble sort to order it in ascending order.

4 8 1 9 3


First we would compare 4 and 8. They are in the correct order. So we move to the next pair, 8 and 1. They are not in the correct order, so they are swapped. The list now looks like:

4 1 8 9 3


Now we compare 8 and 9. They are in the correct order, so we move on to compare 9 and 3, which are not correctly ordered, and so are swapped. The list now looks like:

4 1 8 3 9


That concludes one single pass. We now start again from the beginning, for the second pass. 4 and 1 are out of order, so they are swapped, and the list becomes:

1 4 8 3 9


4 and 8 are already correct, so they are left as is. 8 and 3 however need to be swapped, so the list becomes:

1 4 3 8 9


Note that we don’t bother comparing 8 and 9 on the second pass, because we know they are already in the right order. Thus we now move straight to the next pass. By comparing 1 and 4, we find that they are correctly ordered.

But 4 and 3, the next pair, are not and so are swapped. This results in the list:

1 3 4 8 9


It can be seen quite obviously that this small example is now sorted. However, the basic bubble sort algorithm presented below lacks the no-swap check from step 3, so it would still perform the next pass, obviously to no effect.


Empirical Analysis

Bubble Sort Efficiency

The graph clearly shows the n² nature of the bubble sort.


A fair number of algorithm purists (which means they've probably never written software for a living) claim that the bubble sort should never be used for any reason. Realistically, there isn't a noticeable performance difference between the various sorts for 100 items or less, and the simplicity of the bubble sort makes it attractive. The bubble sort shouldn't be used for repetitive sorts or sorts of more than a couple hundred items.

Source Code

Below is the basic bubble sort algorithm.


void bubbleSort(int numbers[], int array_size)
{
  int i, j, temp;

  for (i = (array_size - 1); i >= 0; i--)
  {
    /* Each pass bubbles the largest remaining value up to position i. */
    for (j = 1; j <= i; j++)
    {
      if (numbers[j - 1] > numbers[j])
      {
        temp = numbers[j - 1];
        numbers[j - 1] = numbers[j];
        numbers[j] = temp;
      }
    }
  }
}
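
Step 3 of the algorithm stops as soon as a pass makes no swaps, but the basic code above always completes every pass. Below is a sketch of that early-exit variant (the swapped flag and the bubbleSortEarlyExit name are my own additions):

#include <stdbool.h>

void bubbleSortEarlyExit(int numbers[], int array_size)
{
    int i, j, temp;
    bool swapped;

    for (i = array_size - 1; i >= 0; i--)
    {
        swapped = false;
        for (j = 1; j <= i; j++)
        {
            if (numbers[j - 1] > numbers[j])
            {
                temp = numbers[j - 1];
                numbers[j - 1] = numbers[j];
                numbers[j] = temp;
                swapped = true;
            }
        }
        if (!swapped)   /* no swaps: already sorted, giving the O(n) best case */
            break;
    }
}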

Insertion Sort

• is one of the most natural and simple sorting algorithms
• Insertion Sort orders a list in the same way we would order a hand of playing cards.
• its worst-case and average-case behavior are easy to analyze.
• Using Insertion Sort, the number of comparisons on a list of size n varies, because, if the numbers are already in order, further comparisons are avoided. So we look at the average number of comparisons.
• Using limits and probability, we find that the average number of comparisons is approximately (n-1)(n/4), i.e. on the order of n²/4, plus a lower-order term that grows more slowly than n² (see the counting sketch after the source code).

Pros: Relatively simple and easy to implement.
Cons: Inefficient for large lists.

The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures - the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place.

Like the bubble sort, the insertion sort has a complexity of O(n²). Although it has the same complexity, the insertion sort is a little over twice as efficient as the bubble sort.

Steps in Insertion Sort:
1. Compare the first two numbers, placing the smallest one in the first position.
2. Compare the third number to the second number. If the third number is larger, then the first three numbers are in order. If not, then swap them.
3. Now compare the numbers in positions one and two and swap them if necessary.
4. Proceed in this manner until reaching the end of the list.

Example:

Sorting the list 4 8 1 9 3: the 4 and 8 are already in order; 1 is inserted before both, giving 1 4 8 9 3; 9 stays where it is; finally 3 is moved back past 9, 8, and 4, giving the sorted list 1 3 4 8 9.

Empirical Analysis

Insertion Sort Efficiency

The graph demonstrates the n² complexity of the insertion sort.

The insertion sort is a good middle-of-the-road choice for sorting lists of a few thousand items or less. The algorithm is significantly simpler than the shell sort, with only a small trade-off in efficiency. At the same time, the insertion sort is over twice as fast as the bubble sort and almost 40% faster than the selection sort. The insertion sort shouldn't be used for sorting lists larger than a couple thousand items or repetitive sorting of lists larger than a couple hundred items.

Source Code

Below is the basic insertion sort algorithm.

void insertionSort(int numbers[], int array_size)
{
  int i, j, index;

  for (i = 1; i < array_size; i++)
  {
    /* Insert numbers[i] into the already-sorted prefix numbers[0..i-1]. */
    index = numbers[i];
    j = i;
    while ((j > 0) && (numbers[j - 1] > index))
    {
      numbers[j] = numbers[j - 1];
      j = j - 1;
    }
    numbers[j] = index;
  }
}
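
As a rough empirical check of the average-comparisons claim above, the sketch below counts key comparisons on a random array (the countingInsertionSort name, the counter, and the test driver are my own illustration); for random input the printed count should land near n²/4:

#include <stdio.h>
#include <stdlib.h>

/* Insertion sort instrumented to count key comparisons. */
long countingInsertionSort(int numbers[], int n)
{
    long comparisons = 0;
    int i, j, index;

    for (i = 1; i < n; i++)
    {
        index = numbers[i];
        j = i;
        while (j > 0)
        {
            comparisons++;                 /* one comparison of index vs. numbers[j-1] */
            if (numbers[j - 1] <= index)
                break;
            numbers[j] = numbers[j - 1];
            j = j - 1;
        }
        numbers[j] = index;
    }
    return comparisons;
}

int main(void)
{
    int n = 1000, i;
    int *a = malloc(n * sizeof *a);

    if (a == NULL)
        return 1;
    srand(42);
    for (i = 0; i < n; i++)
        a[i] = rand();

    printf("comparisons: %ld (n*n/4 = %d)\n",
           countingInsertionSort(a, n), n * n / 4);
    free(a);
    return 0;
}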

Monday, January 31, 2011

EMPIRICAL ANALYSIS, ANALYSIS OF ALGORITHMS AND BIG-OH NOTATION

1. EMPIRICAL ANALYSIS

Actually, I found only a little about empirical analysis in my research. According to what I found, empirical analysis is one of the two ways to approach the analysis of algorithms: you repeatedly run the algorithm with different inputs to get some idea of its behavior on different sizes of input.

Empirical analysis is important because it helps us to understand and compare algorithms in specific situations. The mathematical analysis of some (even very simple) algorithms is very difficult, and empirical analysis is the principal alternative.

So, there are six steps in doing an empirical analysis (a small timing sketch follows the list):
1.) Decide on the basic operation.
2.) Decide on the input sample(range, size...).
3.) Convert the algorithm to a program.
4.) Generate a sample of inputs.
5.) Run the program and record the observed data.
6.) Analyze the data.
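
As a minimal sketch of steps 3 to 5 (the clock() timing, the input sizes, and the use of the bubbleSort function from the sorting posts above are my own choices for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void bubbleSort(int numbers[], int array_size);   /* defined in the sorting posts above */

int main(void)
{
    int sizes[] = { 1000, 2000, 4000, 8000 };     /* step 2: the input sample sizes */
    int s, i;

    srand((unsigned) time(NULL));

    for (s = 0; s < 4; s++)
    {
        int n = sizes[s];
        int *a = malloc(n * sizeof *a);
        clock_t start, end;

        if (a == NULL)
            return 1;

        for (i = 0; i < n; i++)                   /* step 4: generate a sample input */
            a[i] = rand();

        start = clock();                          /* step 5: run and record the data */
        bubbleSort(a, n);
        end = clock();

        printf("n = %5d: %.3f s\n", n,
               (double)(end - start) / CLOCKS_PER_SEC);
        free(a);
    }
    return 0;                                     /* step 6: analyze the printed data */
}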

2. ANALYSIS OF ALGORITHMS

Algorithm analysis is a field of computer science that is dedicated to understanding the complexity of algorithms. An algorithm is a set of simple instructions to be followed to solve a problem. To analyze an algorithm is to determine the amount of resources such as time and storage necessary to execute it.

When developing solutions to problems we are often faced with multiple approaches that produce the same results. We need to be able to decide which of the options is best suited to our needs. And we can make better decisions if we can characterize the algorithms according to their efficiency.

To characterize an algorithm, there are two main considerations:

  1. the amount of storage (memory) that they require while executing.
  2. the speed with which they complete their task.

But we also need to consider that each algorithm may have different characteristics in different situations. For example, sorting algorithm A may be better than B for small amounts of data, but the situation may be reversed for large amounts of data. Or A may be better when the data is partially sorted, and B otherwise.

Algorithms can be expressed in many ways: in flowcharts, in a natural language, and in computer programming languages. Algorithms are used in mathematics, computing and linguistics, but the most common use is in computers to do calculations or process data.

Algorithm analysis deals with algorithms written in computer programming languages, which are based on mathematical formalism. An algorithm is essentially a set of instructions for a computer to perform a calculation in a certain way.

One way to test an algorithm is to run a computer program and see how well it works. The problem with this approach is that it only tells us how well the algorithm works with a particular computer and set of inputs. The purpose of algorithm analysis is to test and then draw conclusions about how well a particular algorithm works in general. This would be very difficult and time consuming to do on individual computers, so researchers devise models of computer functioning to test algorithms.

In general, algorithm analysis is most concerned with finding out how much time a program takes to run, and how much memory storage space it needs to execute the program. In particular, computer scientists use algorithm analysis to determine how the data input into a program affects its total running time, how much memory space the computer needs for program data, how much space the program's code takes in the computer, whether an algorithm produces correct calculations, how complex a program is, and how well it deals with unexpected results.

3. BIG-OH NOTATION

Big-O notation is a mathematical description of how an algorithm's resource requirements grow with input size. It is a tool for comparing the growth rate of functions, and it allows us to describe an algorithm with respect to its time and space requirements.

The time efficiency of the algorithms can be characterized by only a few growth rate functions:

I. O(1) - Constant Time

- all instructions are executed a fixed number of times, regardless of the size of the input.

Example: taking the head of a list.

II. O(n) - Linear Time

- a constant amount of processing is done on each input element.

Example: searching unordered list.

III. O(n²) - Quadratic Time

- the number of operations is proportional to the size of the task squared.

Example: comparing two two-dimensional arrays of size n by n.

IV. O(log n) - Logarithmic Time

- program gets only slightly slower as n grows (typically by using some transformation that progressively cuts down the size of the problem).

Example: binary search.
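
A small illustration of logarithmic time (a standard binary search; the binarySearch name and signature are my own): each comparison halves the portion of a sorted array that still has to be searched, so about log₂ n comparisons suffice.

int binarySearch(int numbers[], int array_size, int target)
{
    int low = 0, high = array_size - 1;

    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* midpoint of the remaining range */
        if (numbers[mid] == target)
            return mid;                     /* found: return its index */
        else if (numbers[mid] < target)
            low = mid + 1;                  /* discard the lower half */
        else
            high = mid - 1;                 /* discard the upper half */
    }
    return -1;                              /* not present in the array */
}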

V. O(n log n) - "n log n " Time

- Typical of ‘divide and conquer’ algorithms, where the problem is solved by breaking it up into smaller subproblems, solving them independently, then combining the solutions.

Example: mergesort.
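
And a sketch of the divide-and-conquer pattern itself (a standard merge sort; the helper names and the temporary-buffer handling are my own): the list is split in half, each half is sorted independently, and the sorted halves are combined, giving the O(n log n) behavior described above.

#include <stdlib.h>
#include <string.h>

/* Merge the two sorted halves that meet at index mid, using temp[]. */
static void merge(int numbers[], int temp[], int left, int mid, int right)
{
    int i = left, j = mid, k = left;

    while (i < mid && j <= right)
        temp[k++] = (numbers[i] <= numbers[j]) ? numbers[i++] : numbers[j++];
    while (i < mid)
        temp[k++] = numbers[i++];
    while (j <= right)
        temp[k++] = numbers[j++];

    memcpy(numbers + left, temp + left, (right - left + 1) * sizeof(int));
}

static void mergeSortRange(int numbers[], int temp[], int left, int right)
{
    if (left < right)
    {
        int mid = left + (right - left) / 2;
        mergeSortRange(numbers, temp, left, mid);       /* divide: sort each half */
        mergeSortRange(numbers, temp, mid + 1, right);
        merge(numbers, temp, left, mid + 1, right);     /* combine the halves */
    }
}

void mergeSort(int numbers[], int array_size)
{
    int *temp = malloc(array_size * sizeof(int));

    if (temp == NULL)
        return;
    mergeSortRange(numbers, temp, 0, array_size - 1);
    free(temp);
}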

VI. O(aⁿ) (a > 1) - Exponential Time

- very fast-growing (assuming a>1), essentially unusable for all but very small instances.

Example: Towers of Hanoi (a=2).

VII. O(nᵏ) – Polynomial Time

- most often arises from the presence of k nested loops (examples: insertion sort (k=2); the Gaussian elimination method for calculating a determinant (k=3)); also typical of ‘greedy algorithms’.

Examples: Prim’s and Kruskal’s algorithms.

VIII. O(n!) – Factorial Time

- even worse!

Example: recursive evaluation of a determinant.

The best time in the above list is obviously constant time, and the worst are the exponential and factorial times which, as we have seen, quickly overwhelm even the fastest computers for relatively small n. Polynomial growth (linear, quadratic, cubic, etc.) is considered manageable compared to exponential growth.