# Sorting algorithms in Erlang, Part 1

Ok, so I am continuing my series of posts on sorting algorithms. In the previous posts, I implemented some classic sorting algorithms in Ruby, an OO language. Now I will implement similar algorithms in a purely functional language: Erlang.

I will not explain the basics of the algorithms in these posts, as I have already done so in the Ruby posts.

## Differences in implementations

Implementing Bubble sort in Erlang was really hard for me. While the Ruby version was dead simple, in Erlang this algorithm was one of the hardest, if not the hardest, for me to figure out. Going in, I was already expecting the iterative algorithms to be harder to implement in a functional style, but not by this much. There is more code and it is harder to follow than a procedural or OO implementation.

If you were to give the task of implementing a sorting algorithm to a class of green CS students, in a procedural or OO language, without asking them to focus on performance, you would probably get a few bubble sorts. Bubble sort, Insertion sort and Selection sort are all very intuitive in a procedural paradigm.

I don’t see someone coming up with Bubble sort intuitively in a functional language. Clearly this is a sign of how different paradigms shape the way we come up with solutions. The languages, and more importantly the paradigms, we use shape our very way of thinking.

For the purely iterative algorithms to work in Erlang, I had to recast them as recursive algorithms. The concepts of in-place and out of place sorting were also thrown out the window. "while" loops in other languages often depend on mutable state; in Erlang they must be replaced with recursion, as there is no mutable state.
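To make the contrast concrete, here is a toy example in Ruby (my own sketch, not code from these posts): the same summation written first with a mutable while loop, then in the recursion-only style Erlang forces on you, where the loop state travels as function arguments.

```ruby
# Iterative version: depends on mutating `total` and `i`
def sum_iterative(list)
  total = 0
  i = 0
  while i < list.length
    total += list[i]
    i += 1
  end
  total
end

# Recursive version: no mutation; an accumulator argument replaces the loop variable
def sum_recursive(list, total = 0)
  return total if list.empty?
  sum_recursive(list.drop(1), total + list.first)
end
```

The recursive version is a direct transliteration of what an Erlang function with an accumulator would look like.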

## Bubble sort

Here is the code I came up with. Disclaimer: I am still a novice with Erlang, and there are better/shorter implementations on the web.

```
% empty list and single-element list
bubble_sort([]) -> [];
bubble_sort([X | []]) -> [X];

bubble_sort(ToSort) ->
    {RList, RSwapped} = bubble_sort(ToSort, false),
    case RSwapped of
        true -> bubble_sort(RList);
        false -> RList
    end.

% We have come to the last element of the list
bubble_sort([X, Y | []], Swapped) ->
    case X > Y of
        true -> {[Y, X], true};
        false -> {[X, Y], Swapped}
    end;

% We are working through the list
bubble_sort([X, Y | Tail], Swapped) ->
    case X > Y of
        true ->
            {NewTail, RSwapped} = bubble_sort([X | Tail], true),
            {[Y] ++ NewTail, RSwapped};
        false ->
            {NewTail, RSwapped} = bubble_sort([Y | Tail], Swapped),
            {[X] ++ NewTail, RSwapped}
    end.
```

The algorithm works by defining that an empty list and a single-element list are already sorted. In any other case, we traverse the list recursively with bubble_sort/2, swapping elements as the result is rebuilt from the chain of recursive calls.

Swapped is passed along to track whether any swaps have taken place. This lets bubble_sort/1 decide whether to start another chain of recursive calls to bubble_sort/2 and go through the whole list again.
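For readers more comfortable with Ruby, here is my own transliteration of the same structure (not code from this post): a recursive pass that returns both the partially bubbled list and a swapped flag, and a driver that repeats the pass until no swap occurs.

```ruby
# One recursive bubble pass; returns [list, swapped], like bubble_sort/2
def bubble_pass(list, swapped = false)
  x, y, *tail = list
  return [list, swapped] if y.nil? # zero or one element left

  if x > y
    # swap: y goes to the front, x keeps bubbling with the tail
    rest, rest_swapped = bubble_pass([x] + tail, true)
    [[y] + rest, rest_swapped]
  else
    rest, rest_swapped = bubble_pass([y] + tail, swapped)
    [[x] + rest, rest_swapped]
  end
end

# Repeat passes until a pass makes no swaps, like bubble_sort/1
def bubble_sort(list)
  sorted, swapped = bubble_pass(list)
  swapped ? bubble_sort(sorted) : sorted
end
```

Even in Ruby, the functional formulation is noticeably more convoluted than the usual nested-loop version.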

## Insertion sort

Insertion sort was much easier to write in Erlang than Bubble sort. I also find it a lot easier on the eyes, too. Contrary to Bubble sort, I did not find implementing Insertion sort in Erlang harder than in Ruby. This solution feels more natural in Erlang than the previous one.

Note that the code is simplified because it relies on the lists:max/1 function. Without this built-in function, the code would have been denser.

Of course, this isn’t a “pure” textbook insertion sort, but you could call it a functional Insertion sort.

```
insertion_sort([]) -> [];
insertion_sort([X | []]) -> [X];

insertion_sort(ToSort) -> insertion_sort(ToSort, []).

insertion_sort([], Sorted) -> Sorted;

insertion_sort(Unsorted, Sorted) ->
    Max = lists:max(Unsorted),
    insertion_sort(lists:delete(Max, Unsorted), [Max] ++ Sorted).
```

The function insertion_sort/1 calls insertion_sort/2, whose first parameter is the list left to sort and whose second parameter is the sorted list built so far.
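The same take-the-max-and-prepend idea, transliterated into Ruby for comparison (my own sketch): Erlang's lists:max/1 and lists:delete/2 become Array#max and a single delete_at.

```ruby
# Mirrors insertion_sort/2: the unsorted list shrinks while the
# sorted list grows from the front
def insertion_sort(unsorted, sorted = [])
  return sorted if unsorted.empty?

  max = unsorted.max
  rest = unsorted.dup
  rest.delete_at(rest.index(max)) # like lists:delete/2, removes one occurrence
  insertion_sort(rest, [max] + sorted)
end
```

Removing only one occurrence of the maximum is what keeps duplicate values safe.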

See you next time as I implement more sorting algorithms.


# Sorting algorithms in Ruby, Part 3

For the last installment of Sorting algorithms in Ruby, I will be covering quicksort. This one is pretty famous, and a staple of CS classes.

Quicksort is a recursive sort algorithm. An element called the pivot is chosen, and the array is then divided in two around it; the pivot determines how the array is subdivided.

Different implementations of quicksort can use different methods for choosing the pivot. These will influence the performance of the algorithm itself. Choosing the first element as the pivot will result in bad performance on nearly sorted or already sorted arrays. Choosing the last element can also result in a similar case of poor performance.

A better implementation could be to choose the middle element or a random element. With the pivot choice, what you want to avoid is having the algorithm divide the sub-lists in such a way as to create many sub-lists with only one element, as this is wasteful.

Even so, my implementations will use the first (or last) element as the pivot, for ease of implementation.

Here is how the algorithm works for the in-place version:

1. Divide the array in two.
2. Choose a pivot (in our case the first element of our divided array).
3. Using two indices, iterate on the sub array. One index will go from left to right, and the other right to left.
4. As the two indices iterate, swap the elements greater than the pivot to the right side of the sub array and the elements smaller than the pivot to the left side.
5. Move the pivot to where the two indices have crossed paths.

Steps 3, 4 and 5 are what is generally called the partition operation.

After step 5, the sub array will be divided in such a way that all elements smaller than the pivot are to the pivot's left and all elements greater than the pivot are to its right.

After that, you simply call quicksort on both sides of the pivot to sort each of those two sub arrays.

This algorithm can be done in-place or out of place. As with the algorithms covered in previous posts, the out of place version is easier to implement but consumes more memory.

If performance is not an issue, or if you have trouble grasping the concept, I recommend starting with the out of place version. Once you understand it clearly, you can then implement the in-place version more easily. This technique applies to all sorting algorithms, and I recommend it as a stepping stone for the more complex ones.

The algorithm for the out of place version is simpler. There is no need to muck around with indices. You just create two additional arrays at each recursive call. As you iterate, you place the smaller elements in the first array and the greater elements in the other. You then call quicksort on both new arrays.
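As an aside, Ruby's built-in Enumerable#partition captures that split in one call; the whole out of place idea then compresses to a few lines. This is my own sketch, separate from the class-based implementation in this post:

```ruby
def quick_sort(to_sort)
  # empty and single-element arrays are already sorted
  return to_sort if to_sort.length <= 1

  # take the first element as the pivot, partition the rest around it
  pivot, *rest = to_sort
  smaller, larger = rest.partition { |element| element <= pivot }

  quick_sort(smaller) + [pivot] + quick_sort(larger)
end
```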

## Tests

As in my last two posts, I will be using my generic sort test class to create the tests. I have created tests for both my in-place and out of place version of the algorithm. For brevity, I will only include one suite of tests.

```
require 'test/unit'
require './quick_sort.rb'
require './sort_test.rb'

class QuickSortTest < Test::Unit::TestCase
  def setup
    quick_sort = QuickSort.new
    @sort_test = SortTest.new(quick_sort)
  end

  def test_empty_array
    assert @sort_test.test_empty_array, "Empty array not correctly sorted"
  end

  def test_array_with_one_element
    assert @sort_test.test_array_with_one_element, "Array with one element not correctly sorted"
  end

  def test_ordered_array
    assert @sort_test.test_ordered_array, "Ordered array not correctly sorted"
  end

  def test_array_with_even_number_of_elements
    assert @sort_test.test_array_with_even_number_of_elements, "Array with even number of elements not correctly sorted"
  end

  def test_array_with_odd_number_of_elements
    assert @sort_test.test_array_with_odd_number_of_elements, "Array with odd number of elements not correctly sorted"
  end

  def test_medium_sized_array
    assert @sort_test.test_medium_sized_array, "Medium sized array not correctly sorted"
  end

  def test_reversed_sorted_array
    assert @sort_test.test_reversed_sorted_array, "Reversed sorted array not correctly sorted"
  end

  def test_large_sized_array
    assert @sort_test.test_large_sized_array, "Large sized array not correctly sorted"
  end

  def test_array_with_repeating_elements
    assert @sort_test.test_array_with_repeating_elements, "Array with repeating elements not correctly sorted"
  end
end
```

## Out of place

Here is the code for the out of place version:

```
class QuickSort
  def sort(to_sort)
    # stop the recursion if nothing to sort
    return to_sort if to_sort.empty?

    # pick the pivot (I pick the last element)
    pivot = to_sort.pop

    # partition operation
    smaller_array = []
    larger_array = []

    to_sort.each do |element|
      if element <= pivot
        smaller_array.push(element)
      else
        larger_array.push(element)
      end
    end

    # recursively sort the sub arrays and concatenate the results
    return sort(smaller_array).push(pivot) + sort(larger_array)
  end
end
```

While my in-place version picks the first element as the pivot, this one picks the last element for the convenience of using pop. Since there is no need to work with indices and swap elements around a pivot position, most of the complexity is done away with.

## In place

Here is the code for the in-place version:

```
class QuickSort
  def sort(to_sort, index_of_pivot = 0, right_index = to_sort.length - 1)
    old_right_index = right_index
    left_index = index_of_pivot

    # stop the recursion if nothing to sort
    return to_sort if left_index >= right_index

    # partition operation
    # move both indexes towards the center until they cross over
    # when the left index finds an element greater than the pivot and
    # the right index finds an element smaller than the pivot, swap them
    while left_index < right_index
      while to_sort[left_index] <= to_sort[index_of_pivot] and left_index < to_sort.length - 1
        left_index = left_index + 1
      end

      right_index = right_index - 1 until to_sort[right_index] <= to_sort[index_of_pivot]

      # swap both elements
      if left_index < right_index
        to_sort[left_index], to_sort[right_index] = to_sort[right_index], to_sort[left_index]
      end
    end

    # swap the pivot into place
    to_sort[index_of_pivot], to_sort[right_index] = to_sort[right_index], to_sort[index_of_pivot]

    # recursively sort the sub arrays
    sort(to_sort, index_of_pivot, right_index - 1)
    sort(to_sort, left_index, old_right_index)

    to_sort
  end
end
```

As you can see the in-place version is more complicated. The algorithm works on the array at hand by swapping elements. Since no new arrays are created, it works on subdivisions of the array represented by two indices: index_of_pivot (which also happens to be the left index) and right_index.

The partition operation will usually make several passes until the left and right indices cross. Since both indices find and swap elements that should be on the other side of the array, by the time the two indices meet no more swaps are necessary.

At that point all elements smaller than the pivot are to the left of where the pivot will end up. We then swap the pivot into its correct place and recursively sort both sides of the array.
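For comparison, the partition step can also be written as a single left-to-right pass, commonly known as Lomuto's scheme. This is my own sketch, not the two-index scheme used in this post:

```ruby
# Partition to_sort[lo..hi] around the pivot to_sort[hi];
# returns the pivot's final index.
def partition(to_sort, lo, hi)
  pivot = to_sort[hi]
  boundary = lo # everything left of boundary is <= pivot

  (lo...hi).each do |i|
    if to_sort[i] <= pivot
      to_sort[i], to_sort[boundary] = to_sort[boundary], to_sort[i]
      boundary += 1
    end
  end

  # drop the pivot between the two regions
  to_sort[boundary], to_sort[hi] = to_sort[hi], to_sort[boundary]
  boundary
end
```

It does more swaps than the two-index version, but the invariant ("everything left of the boundary is at most the pivot") is easier to reason about.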

## Conclusion

As I mentioned in my first post of the series, I intended to cover some sorting algorithms in an OO language, which I just did, and then in a functional language.

Hopefully, I will be able to get some insights by solving the same problem (sorting numbers) in two different programming paradigms.

Until next time.

# Sorting algorithms in Ruby, Part 2

Welcome back. This is part two of three in my “Sorting algorithms in Ruby” series. If you haven’t checked out the first one, by all means go take a quick peek: Sorting algorithms in Ruby, Part 1.

By the way, constructive comments are always welcome!

## Heapsort

Heapsort is a great recursive sort algorithm. It starts by creating a heap, which is a data structure (more on that shortly).

The heap is created recursively and then used by the sort algorithm to create the new sorted array.

The algorithm has a time complexity of O(n log n). It can be implemented out of place or in-place, the latter being a little trickier but requiring less memory to perform the sort. I went with the out of place approach in this post.

While quicksort is generally faster than heapsort, heapsort has a better worst case and requires relatively little memory.

To use heapsort you must first build a heap. I had to go back and read about heaps since I had forgotten most of the details from my university computer science classes. Once I did though, everything started coming back pretty quickly. Heaps are a data structure, more precisely a tree structure, usually binary. Not all trees are heaps, but all heaps are trees (binary trees, in our case).

A tree is a data structure where each node has a parent (except for the root node) and can have 0 to n children. In a binary tree, nodes have between 0 and 2 children.

To qualify as a heap, a binary tree must follow these two rules:

• For a node B with parent node A, the key of B is smaller than or equal to the key of A.
• All levels of the tree except the last one must be fully filled. If the last level isn't fully filled, it must be filled from left to right.


Heaps are typically represented in a linear array. A simple function using a formula is used to find the child and parent nodes of a node in array position i.

If the array has a zero-based index, the children of the node at index i are at:

index = 2 * i + 1
index = 2 * i + 2

To find the parent of a given node, we can use the formula:

index = floor((i - 1) / 2)

Which can be implemented in Ruby as: heap[((i - 1) / 2).floor]
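These index formulas can be wrapped in small helpers. A quick sketch (the helper names are my own, not part of the Heap class in this post):

```ruby
# Index arithmetic for a binary heap stored in a zero-based array
def left_child_index(i)
  2 * i + 1
end

def right_child_index(i)
  2 * i + 2
end

def parent_index(i)
  ((i - 1) / 2).floor
end
```

For example, the children of the root (index 0) live at indices 1 and 2, and parent_index maps both back to 0.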

Using an array seems counter intuitive at first. When heapsort was first designed, object oriented programming was still in embryonic stages. Also, using clever tricks to keep data storage space at a minimum was the norm. Even today, using an array to store the heap is a good solution on embedded systems with low memory.

To someone used to thinking in object-oriented principles, like myself, the heap begs to be implemented as a class. For this blog entry I decided to try out both approaches. I will present the more classical array-based method; the article initially contained an object-oriented approach, but as an astute reader pointed out, that implementation was flawed.

Looking into it, I noticed that implementing heapsort was quicker and made more sense with the traditional approach.

I started by writing the heap code before the sort algorithm. I began by writing a first test, making it pass and then writing a second test case. Here is my test class:

```
require 'test/unit'
require './heap.rb'

class HeapTest < Test::Unit::TestCase
  @@unsorted = [6, 3, 5, 7, 1, 2, 8, 39, 23, 12, 34, 11, 32, 4, 13]
  # This is an already generated heap, verified by hand.
  @@generated_heap = [39, 34, 32, 8, 23, 11, 13, 3, 6, 1, 12, 2, 7, 4, 5]

  def test_heapify_will_put_greatest_number_at_root_node
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)
    greatest_number = @@unsorted.max

    assert_equal greatest_number, test_heap.heap.first
  end

  def test_heapify_heap_will_follow_heap_property
    # Heap property: child nodes have keys which are smaller or equal to their parent node's keys
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)

    (test_heap.heap.length - 1).downto(1) do |i|
      node_value = test_heap.heap[i]
      parent_node_value = test_heap.heap[((i - 1) / 2).floor]

      assert_operator node_value, :<=, parent_node_value
    end
  end

  def test_heapify_heap_is_equal_to_manually_calculated_heap
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)

    assert_equal @@generated_heap, test_heap.heap
  end

  def test_remove_removes_and_replaces_root_node
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)

    test_heap.remove()

    assert_not_equal @@generated_heap.first, test_heap.heap.first
    assert_equal @@generated_heap.length - 1, test_heap.heap.length
  end

  def test_remove_resulting_heap_will_have_34_as_root_node
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)

    test_heap.remove()

    assert_equal 34, test_heap.heap.first
  end

  def test_remove_resulting_heap_satisfies_heap_property
    # Heap property: child nodes have keys which are smaller or equal to their parent node's keys
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)

    test_heap.remove()

    (test_heap.heap.length - 1).downto(1) do |i|
      node_value = test_heap.heap[i]
      parent_node_value = test_heap.heap[((i - 1) / 2).floor]

      assert_operator node_value, :<=, parent_node_value
    end
  end

  def test_empty_will_return_false_if_heap_contains_elements
    test_heap = Heap.new
    test_heap.heapify(@@unsorted)

    result = test_heap.empty?()

    assert !result
  end

  def test_empty_will_return_true_if_heap_contains_no_elements
    test_heap = Heap.new
    test_heap.heapify([1])
    test_heap.heap.delete_at(0)

    result = test_heap.empty?

    assert result
  end
end
```

Here is the source code for my heap implementation. This implementation builds a heap contained in an array.

```
class Heap
  attr_accessor :heap

  def initialize()
    @heap = []
  end

  # Creates a max-heap
  def heapify(data)
    data.each do |element|
      @heap.push element
      up_heap(@heap.length - 1)
    end
  end

  def remove()
    @heap.delete_at(0)

    if not empty?
      last_element = @heap.pop
      @heap.insert(0, last_element)
      down_heap(0)
    end
  end

  def empty?()
    return false if @heap.length > 0
    return true
  end

  private

  # method to restore the heap property after adding an element to the heap
  def up_heap(index)
    if parent_exists?(index)
      parent_index = ((index - 1) / 2).floor

      if @heap[parent_index] < @heap[index]
        swap(index, parent_index)
        up_heap(parent_index)
      end
    end
  end

  # method to restore the heap property after removing an element from the heap
  def down_heap(index)
    if smaller_than_children?(index)
      index_of_greatest_child = index_of_greatest_child?(index)
      swap(index, index_of_greatest_child)
      down_heap(index_of_greatest_child)
    end
  end

  def swap(first_index, second_index)
    @heap[first_index], @heap[second_index] = @heap[second_index], @heap[first_index]
  end

  def parent_exists?(index)
    return index != 0
  end

  def smaller_than_children?(index)
    if @heap.length < 2 * index + 2
      # element doesn't have any children
      return false
    elsif @heap.length == 2 * index + 2
      # element only has one child
      return @heap[index] < @heap[2 * index + 1]
    else
      # element has two children
      return @heap[index] < @heap[2 * index + 1] || @heap[index] < @heap[2 * index + 2]
    end
  end

  # This method assumes there is at least one child for the specified index.
  def index_of_greatest_child?(index)
    # We only have one child, it is thus the greatest
    return 2 * index + 1 if @heap.length == 2 * index + 2

    # We have two children
    if @heap[2 * index + 1] > @heap[2 * index + 2]
      return 2 * index + 1
    else
      return 2 * index + 2
    end
  end
end
```

The heap class contains the array which is accessible through an accessor.

The heapify method takes an unsorted array and stores it as a heap in the array. The implementation creates a max-heap, meaning a heap where the root node is the greatest element. The algorithm could also be built on a min-heap.

Remove, on the other hand, will remove the first element of the heap (to be used by the sorting algorithm) and then restore the heap using the recursive down_heap method.

Now for the sort algorithm and its tests…

Heapsort was more complex to implement than my earlier sorts. Because of this, I decided to have a much more rigorous test suite. Since I'm feeling confident I will be able to use the same test cases for quicksort, I decided to put them away in a reusable SortTest class.

```
require 'test/unit'
require './heap_sort.rb'
require './sort_test.rb'

class HeapSortTest < Test::Unit::TestCase
  def setup
    heap_sort = HeapSort.new
    @sort_test = SortTest.new(heap_sort)
  end

  def test_empty_array
    assert @sort_test.test_empty_array, "Empty array not correctly sorted"
  end

  def test_array_with_one_element
    assert @sort_test.test_array_with_one_element, "Array with one element not correctly sorted"
  end

  def test_ordered_array
    assert @sort_test.test_ordered_array, "Ordered array not correctly sorted"
  end

  def test_array_with_even_number_of_elements
    assert @sort_test.test_array_with_even_number_of_elements, "Array with even number of elements not correctly sorted"
  end

  def test_array_with_odd_number_of_elements
    assert @sort_test.test_array_with_odd_number_of_elements, "Array with odd number of elements not correctly sorted"
  end

  def test_medium_sized_array
    assert @sort_test.test_medium_sized_array, "Medium sized array not correctly sorted"
  end

  def test_reversed_sorted_array
    assert @sort_test.test_reversed_sorted_array, "Reversed sorted array not correctly sorted"
  end

  def test_large_sized_array
    assert @sort_test.test_large_sized_array, "Large sized array not correctly sorted"
  end

  def test_array_with_repeating_elements
    assert @sort_test.test_array_with_repeating_elements, "Array with repeating elements not correctly sorted"
  end
end
```
```
require './heap.rb'

class HeapSort
  def sort(to_sort)
    heap_to_sort = Heap.new
    heap_to_sort.heapify(to_sort)

    sorted_array = []

    until heap_to_sort.empty?
      sorted_array.insert(0, heap_to_sort.heap.first)
      heap_to_sort.remove()
    end

    return sorted_array
  end
end
```

As you can see, most of the logic is in the heap class. The out of place sort algorithm is dead simple. Since the root node of a max-heap is always its greatest element, we iterate over the heap, taking the largest element and placing it at the front of our new array.

Then we just have to remove the first element and make sure we reconstruct the heap correctly. After which we start over again.

That is all for now. See you next time for quicksort.

# Sorting algorithms in Ruby, Part 1

For fun, I decided to implement some classic sorting algorithms in an object oriented language. In this case Ruby.

Later I plan to implement some sorting algorithms in a functional language to contrast the differences.

For now, here are the algorithms I chose to implement:

• Bubble sort
• Selection sort
• Insertion sort
• Merge sort

I implemented those by reading the textual descriptions of the algorithms on Wikipedia and trying to refrain from looking at actual source code or pseudo code.

To test the algorithms I wrote some simple test cases.

```
require 'test/unit'
require './bubble_sort.rb'
require './selection_sort.rb'
require './insertion_sort.rb'
require './merge_sort.rb'

class SearchTest < Test::Unit::TestCase
  @@unsorted = [5, 2, 3, 8, 1, 3, 12, 10, 11, 6]
  @@sorted = @@unsorted.sort

  def test_bubble_sort
    bubble_sort = BubbleSort.new
    # pass a copy so the in-place sorts don't affect the other tests
    result = bubble_sort.sort(@@unsorted.dup)

    assert_equal @@sorted, result
  end

  def test_selection_sort
    selection_sort = SelectionSort.new
    result = selection_sort.sort(@@unsorted.dup)

    assert_equal @@sorted, result
  end

  def test_insertion_sort_out_of_place
    insertion_sort = InsertionSort.new
    result = insertion_sort.sort_out_of_place(@@unsorted.dup)

    assert_equal @@sorted, result
  end

  def test_insertion_sort_in_place
    insertion_sort = InsertionSort.new
    result = insertion_sort.sort_in_place(@@unsorted.dup)

    assert_equal @@sorted, result
  end

  def test_merge_sort
    merge_sort = MergeSort.new
    result = merge_sort.sort(@@unsorted.dup)

    assert_equal @@sorted, result
  end

  # Additional tests on merge sort since it's more complex than the others
  def test_merge_sort_random_arrays
    merge_sort = MergeSort.new
    a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

    assert_equal a, merge_sort.sort(a.shuffle)
    assert_equal a, merge_sort.sort(a.shuffle)
    assert_equal a, merge_sort.sort(a.shuffle)
  end
end
```

In these tests I create an unsorted array, which I then sort using Ruby's built-in sort method. I then run the unsorted array through the various objects implementing the algorithms and compare the results with the one sorted by Ruby.

## Bubble sort

The first one is bubble sort. It's fairly inefficient; its complexity is O(n²). If you have done some CS classes in the past, you probably went over this one. It iterates over the array, comparing two values at a time, and bubbles the highest values up to the end of the array. It continues doing this until it can determine that no more swaps are needed, in other words, that the array is now sorted.

While it can beat some algorithms on nearly sorted arrays, e.g. [1, 2, 4, 3, 5, 6], in the real world its uses are mostly academic.

```
class BubbleSort
  def sort(to_sort)
    sorted = false

    until sorted
      sorted = true

      for index in 0..(to_sort.length - 2)
        if to_sort[index] > to_sort[index + 1]
          sorted = false
          to_sort[index], to_sort[index + 1] = to_sort[index + 1], to_sort[index]
        end
      end
    end

    to_sort
  end
end
```

There are ways to optimize Bubble sort, but rather than look into them, I will take this time to look into more algorithms.

## Selection sort

Selection sort is another inefficient but simple sort. Its complexity is again O(n²), because of the nested loops.

It’s an in-place sort, meaning that it uses only a small, constant amount of extra memory to perform its sort. In-place is also frequently taken to mean that it works on the data set at hand, by swapping the existing values.

Selection sort works by dividing the array in two; the sorted array at the beginning and the unsorted array at the end. Initially the whole array is unsorted.

The algorithm iterates over the array, finding the smallest element. It then swaps this element into the loop's current index, which appends it to the end of the sorted elements.

I will illustrate this with a sample array: [3, 6, 1, 2].
The sorted and unsorted values will be divided with a | symbol.

Initial value:
[ | 3, 6, 1, 2]

First pass:
[1 | 6, 3, 2]

Second pass:
[1, 2 | 3, 6]

Third pass:
[1, 2, 3, 6]

As you can see in this example, there is no need to do a fourth pass. When we are left with a single element, it is by definition already sorted.

```
class SelectionSort
  def sort(to_sort)
    for index in 0..(to_sort.length - 2)
      # select the first element as the temporary minimum
      index_of_minimum = index

      # iterate over all other elements to find the minimum
      for inner_index in index..(to_sort.length - 1)
        if to_sort[inner_index] < to_sort[index_of_minimum]
          index_of_minimum = inner_index
        end
      end

      if index_of_minimum != index
        to_sort[index], to_sort[index_of_minimum] = to_sort[index_of_minimum], to_sort[index]
      end
    end

    to_sort
  end
end
```

## Insertion sort

For Insertion sort, I read a description stating that the algorithm works by going through the elements to be sorted and inserting them, in their correct order, into the already-sorted portion. Classically, this is done in-place. An even easier way to write it would be an out of place version that builds a new array, like this:

```
class InsertionSort
  def sort_out_of_place(to_sort)
    sorted = []

    to_sort.each do |element|
      # find the first position in the sorted array holding a larger element;
      # inserting at sorted.length appends when no such element exists
      index = 0
      index += 1 while index < sorted.length && sorted[index] <= element

      sorted.insert(index, element)
    end

    sorted
  end
end
```

Wow, that one was easy. But now let’s try the slightly harder in-place version. This version works on the array to be sorted rather than create a new array.

```
def sort_in_place(to_sort)
  # index starts at one; we can skip the first element, since we would
  # otherwise take it and place it in the first position, where it already is
  for index in 1..(to_sort.length - 1)
    for inner_index in 0..(index - 1)
      if to_sort[inner_index] >= to_sort[index]
        to_sort.insert(inner_index, to_sort[index])
        to_sort.delete_at(index + 1)
        break
      end
    end
  end

  to_sort
end
```

Ok, so it’s not that much more complicated. That’s because I used Ruby’s insert and delete_at. Normally, in my inner loop, I would have had to manually shift (move) all the elements in place to put the element being sorted in the right position, repositioning many elements.

What this means is that my implementation is only truly in-place if Ruby's insert and delete_at are themselves implemented in-place. The logic of the algorithm can be considered in-place if we treat the two methods as abstractions.
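The manual shifting alluded to above would look something like this (a sketch of the textbook in-place version, not the code from this post):

```ruby
def insertion_sort_with_shifts(to_sort)
  (1...to_sort.length).each do |index|
    element = to_sort[index]
    position = index - 1

    # shift larger elements one slot to the right to open a gap
    while position >= 0 && to_sort[position] > element
      to_sort[position + 1] = to_sort[position]
      position -= 1
    end

    # drop the element into the gap
    to_sort[position + 1] = element
  end

  to_sort
end
```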

Either way, this is a more faithful implementation of the original algorithm than the first out of place version. Doing it in-place uses less memory than the out of place version but is slightly harder to code and understand.

As for the complexity, it's an O(n²) algorithm, but it is considered more efficient in practice than Selection sort and Bubble sort.

Let’s move on to a more efficient algorithm.

## Merge sort

Because recursion kicks ass!

Now we are getting to more interesting and more efficient algorithms. Merge sort is an O(n log n) algorithm, a lot better than the previous ones. It's also the first to use recursion, taking a divide and conquer approach.

I have commented the code, and will let these comments and the source code do most of the talking.

```
class MergeSort
  def sort(to_sort)
    # if the array is of length 0 or 1, consider it already sorted
    return to_sort if to_sort.length <= 1

    # otherwise split the remaining elements in two
    # I had to look this line up on the web (source: refactormycode.com)
    second_array = to_sort.slice!((to_sort.length / 2.0).round..to_sort.length)

    # recursive method call on both arrays
    first_sorted_array = sort(to_sort)
    second_sorted_array = sort(second_array)

    # merge the two sorted arrays together
    return merge(first_sorted_array, second_sorted_array)
  end

  private

  def merge(first_array, second_array)
    # if either array is empty, consider the other already sorted
    return second_array if first_array.empty?
    return first_array if second_array.empty?

    # remove the smallest element out of the two arrays
    if first_array.first <= second_array.first
      element = first_array.shift
    else
      element = second_array.shift
    end

    # recursive call to construct the merged array
    return [element] + merge(first_array, second_array)
  end
end
```

To implement Merge sort, you also need to implement a merge function. This function will pick values from two arrays and construct a merged sorted array. The two arrays used for input are considered already sorted. The job of the merge function is then to look at the first element of each of these arrays and find the smallest.

e.g.:

[1, 4, 7, 9] and [2, 5, 6]
[]

In this case the merge function would look at 1 and 2, select 1 as the smallest value, and remove it from the first array. The value would then be used as the first value of the new merged array.

[4, 7, 9] and [2, 5, 6]
[1]

The process then repeats itself until there are no more elements in either array.

For your merge function you can use an iterative approach. In such an approach you loop, taking elements from the two arrays to be merged and placing them in a merged array, until you have no more elements left to take from the original arrays.
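Such an iterative merge might look like this (my own sketch; like the recursive version in this post, it consumes its input arrays with shift):

```ruby
def merge_iterative(first_array, second_array)
  merged = []

  # repeatedly move the smaller head element into the merged array
  until first_array.empty? || second_array.empty?
    if first_array.first <= second_array.first
      merged.push(first_array.shift)
    else
      merged.push(second_array.shift)
    end
  end

  # one of the arrays is now empty; append whatever remains of the other
  merged + first_array + second_array
end
```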

Since Merge sort is a recursive algorithm, I also went with a recursive merge function.

When I write a recursive function, I always start by defining the condition(s) that will end the recursion and then consider what I need to do to get to this condition.

In this case, we will end the recursion when one of our arrays to be merged is empty. When that happens, we can return the other array, as the values still in it are already sorted.

If we are not facing an empty array, we simply look at the first value of each array, remove the smaller one, and return it along with a recursive call to merge.

This merge function required more thinking than the actual sort function which just keeps splitting the array in two, calling sort recursively on both of the new arrays.

## Until next time…

This is it for part 1. Stay tuned for part 2 where I will look into more sorting algorithms in Ruby, more specifically heapsort and quicksort.