# Sorting Algorithms: Bubble, Selection, Insertion, Quick, Merge, Radix

Throughout my research in changing careers to Software Engineering, I have been bombarded with the harsh reality that without some knowledge of data structures and algorithms, I am not going to find myself a job. I’ve been blessed to have been given 2 technical interviews already before even graduating from my bootcamp, and I can already see that learning algorithms is imperative to the process. On both interviews, just knowing 2 simple searching algorithmic patterns helped me pass on to the next step in the hiring process.

Because I’ve known from the beginning that this part of the career change is overwhelmingly under-taught at bootcamps, I have, on my own time, been studying and learning as much as I can. I cannot call myself a master (traversing a linked list really is a b****), but I found a cool way to help me continuously work on and remember a lot of these concepts…

**Visualizers.**

As much as you can watch tutorials online and take classes, much like every other thing in coding, you learn algorithms by implementing them. I found that watching visualizers was very helpful for really understanding each algorithm's Big O, but I went one step further: I created my own.

(In this blog, I’ll talk about my sorting visualizer, but check out my GitHub for my pathfinder algorithms visualizer as well! (Coming Soon…))

Understanding the code for sorting algorithms is actually quite simple: Bubble, Insertion, Selection, Merge, you name it. They are all slightly different, but the code itself is straightforward.

Getting a React Component to actually show it in real time, though? That’s where the fun comes in. That’s where you really have to understand what the algorithm is doing at each line to make sure you are showing it in the way that it works. It’s easy to get an array on a page and sort it. What isn’t easy is to show all the different ways you can sort it.

# “Elementary” Sorting Algorithms


I started with what they call the *elementary* algorithms. Bubble Sort, Insertion Sort and Selection Sort are all great, but they have one thing in common: they’re slow.

**Bubble Sort**

Bubble Sort starts at the beginning of an array and swaps the first two elements if the first is greater than the second. Then, it goes to the next pair and does the same, so the larger elements slowly “bubble” to the end of the list. Average and worst case, Bubble Sort performs at O(n²).

With this implementation, I start by looping from the end of the array to the beginning. I do this so I can use this loop to bound the next one. The inner loop goes through the array from the beginning up to the element before the outer loop's index. If the element in the inner loop is greater than the element to its right, the two elements swap. The `noSwaps` variable is there for optimization: when a full pass makes no swaps, the function can exit the loop early, which keeps it from comparing items that are already sorted.
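Since the original gist isn't shown here, a minimal sketch of what that implementation might look like (the `noSwaps` name comes from the post; the rest of the naming is my own):

```javascript
function bubbleSort(arr) {
  // Outer loop runs from the end toward the beginning,
  // shrinking the unsorted region by one each pass.
  for (let i = arr.length - 1; i > 0; i--) {
    let noSwaps = true; // optimization: bail out if a pass makes no swaps
    for (let j = 0; j < i; j++) {
      // If the current element is greater than its right neighbor, swap them.
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
        noSwaps = false;
      }
    }
    if (noSwaps) break; // the array is already sorted
  }
  return arr;
}
```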

**Selection Sort**

Selection Sort is similar to Bubble Sort but *almost* the opposite. It finds the smallest element and moves it to the front by swapping it with the first element. Then it looks for the next smallest and swaps it with the second element, and so on until the array is sorted. Average and worst case, Selection Sort performs at O(n²).

For Selection Sort, I loop through the array from beginning to end. While looping, I track the index of the smallest number. To start, that index is simply where we are in the outer loop. The inner loop then goes through the array starting at the index after the outer loop's. If the element in the inner loop is *less* than the element at the smallest index, the inner loop's index becomes the new smallest index. Once the inner loop is done, if the outer loop's index does not equal the smallest index, the elements at those two indices swap.
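The walkthrough above could be sketched like this (names are my own, not from the original gist):

```javascript
function selectionSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    let lowest = i; // assume the current position holds the smallest so far
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[lowest]) lowest = j; // found a new smallest
    }
    // Only swap if the smallest element isn't already in place.
    if (i !== lowest) {
      [arr[i], arr[lowest]] = [arr[lowest], arr[i]];
    }
  }
  return arr;
}
```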

**Insertion Sort**

Insertion Sort is the last of the *elementary* sorting algorithms. It gradually builds up a sorted left half of the array until all elements are sorted. Worst case, Insertion Sort is quadratic (O(n²)). But Insertion Sort is a great option if you have to sort data live, or in real time.

With the Insertion Sort implementation, I start the loop at the 2nd element of the array and run to the end. This loop's element is my pointer. Then I loop backward through the elements *before* the pointer. While the element before the pointer (the inner loop's element) is greater than the pointer, I shift that element one place to the right. Once I reach an element that isn't greater, I insert the pointer just after it.
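A sketch of that shift-and-insert approach (the `current` pointer variable is my own naming):

```javascript
function insertionSort(arr) {
  // Start at the 2nd element; everything left of i is already sorted.
  for (let i = 1; i < arr.length; i++) {
    const current = arr[i]; // the "pointer" value being inserted
    let j = i - 1;
    // Shift larger elements one slot to the right.
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = current; // drop the pointer into its sorted position
  }
  return arr;
}
```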

# “Intermediate” Sorting Algorithms

These are the ones you are more likely to see in an interview setting, and they're comparatively MUCH faster than the algorithms above. I will say, animating these was significantly more difficult than the “elementary” ones, especially Merge Sort.

**Quick Sort**

This one was my favorite to learn and animate. It’s just so mesmerizing HOW much faster it is than all the other ones. If there’s any reason I can persuade you to create your own visualizers, it’s just to see the quickness of Quick Sort. Quick Sort uses the fact that an array of 0 or 1 element is already sorted. It finds one element (pivot) and looks for the index that pivot should end up in. Quick Sort uses recursion as well as a helper function to sort through the array. Average case, Quick Sort runs at O(n log(n)). Worst case, Quick Sort performs at O(n²).

To give some context, the gifs for Bubble and Selection Sort are over 10 seconds long, and Insertion Sort is 6.5 seconds long. Quick Sort is barely 3 seconds.

In my Quick Sort implementation, I start with a `pivot()` helper function. In this function, I assign the element at the starting index as the pivot. The starting index is also my starting pivot index. Then, the function loops through the array from the starting index plus one through to the end. If the pivot is greater than the current element in the loop, the pivot index increases by one and the elements at the pivot index and the loop index swap. Once the loop is done, the function swaps the starting element with the pivot element. Finally, the function returns the pivot index, which tells `quickSort()` where to split the array.

Now, in the actual Quick Sort implementation, the function takes in a left and a right argument. These arguments determine what section of the array we're looking at. If left is less than right, the function finds the pivot's index using the helper function. Then, it recursively calls `quickSort()` on the section from the left index up to that index, and on the section from that index plus one to the right index. There is no need to include the pivot's element in these recursive calls because it is already sorted into its correct place in the array.
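Putting the two pieces together, the `pivot()` helper and the recursive `quickSort()` might look like this sketch (default parameters and variable names are my own):

```javascript
// Partition: place arr[start] (the pivot) at its final index and return it.
function pivot(arr, start = 0, end = arr.length - 1) {
  const pivotValue = arr[start];
  let pivotIdx = start; // the pivot's eventual resting index
  for (let i = start + 1; i <= end; i++) {
    // Elements smaller than the pivot get swapped into the left region.
    if (pivotValue > arr[i]) {
      pivotIdx++;
      [arr[pivotIdx], arr[i]] = [arr[i], arr[pivotIdx]];
    }
  }
  // Move the pivot between the smaller and larger regions.
  [arr[start], arr[pivotIdx]] = [arr[pivotIdx], arr[start]];
  return pivotIdx;
}

function quickSort(arr, left = 0, right = arr.length - 1) {
  if (left < right) {
    const idx = pivot(arr, left, right);
    quickSort(arr, left, idx - 1);  // sort the left section
    quickSort(arr, idx + 1, right); // sort the right section (pivot excluded)
  }
  return arr;
}
```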

**Merge Sort**

Merge Sort, like Quick Sort, uses the fact that arrays of 0 and 1 element are always sorted. It creates 2 smaller arrays and loops through both, pushing the smallest element into a new, sorted array. Average and worst case, Merge Sort performs at O(n log(n)).

In `merge()`, the function loops through both arrays. While the index for each is less than the length of its array, if the element in the first array is less than the element in the second array, the new array gets the first array's element and the first index increases by one. If the second array's element is smaller, the new array gets that element and the second index increases by one. Because the arrays can be different lengths, after the loop, whichever index is still less than its array's length has its remaining elements pushed into the new array.
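That two-pointer walk could be sketched as (assuming both inputs are already sorted):

```javascript
// Merge two already-sorted arrays into one new sorted array.
function merge(arr1, arr2) {
  const result = [];
  let i = 0; // index into arr1
  let j = 0; // index into arr2
  // Take the smaller front element from either array until one runs out.
  while (i < arr1.length && j < arr2.length) {
    if (arr1[i] < arr2[j]) {
      result.push(arr1[i]);
      i++;
    } else {
      result.push(arr2[j]);
      j++;
    }
  }
  // One array may still have leftovers; copy them over in order.
  while (i < arr1.length) result.push(arr1[i++]);
  while (j < arr2.length) result.push(arr2[j++]);
  return result;
}
```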

In the `mergeSort()` function, recursion is used again. Until the length of the array being passed in is less than or equal to 1, the midpoint is the floor of half the array's length. The two halves are then each passed to a recursive call of `mergeSort()`, splitting the array at the midpoint. Once the call stack unwinds, the function returns the `merge()` of the two halves, and `merge()` returns the new sorted array.
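The recursive half might look like this sketch, with a compact `merge()` repeated so the snippet runs on its own (names are my own):

```javascript
// merge() as described earlier, repeated so this sketch is self-contained.
function merge(arr1, arr2) {
  const result = [];
  let i = 0, j = 0;
  while (i < arr1.length && j < arr2.length) {
    if (arr1[i] < arr2[j]) result.push(arr1[i++]);
    else result.push(arr2[j++]);
  }
  while (i < arr1.length) result.push(arr1[i++]);
  while (j < arr2.length) result.push(arr2[j++]);
  return result;
}

function mergeSort(arr) {
  // Base case: arrays of 0 or 1 element are already sorted.
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  // Recursively sort each half, then merge the sorted halves.
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
}
```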

**Radix Sort**

This is the only algorithm I didn’t make a visualizer for…yet 😉. This one is the curveball of them all. All of the sorting algorithms above did one thing the same: they compared 2 elements and sorted based on which was smaller or larger. Radix doesn’t work by comparing numbers. Radix uses the fact that a number with more digits is a bigger number. Radix Sort groups elements by their rightmost digit, then the next digit to the left, and so on. It loops through until it has sorted as many times as there are digits in the largest number in the array.

Radix Sort is usually implemented with binary, but my implementation below is using base 10.

Okay, there’s a LOT going on here (and a lot of math…), but let’s break it down. In `getDigit()`, we get the digit at the place we’re looking for. For example, if we have the number 1234 and we want the rightmost digit (4), `getDigit()` returns the number 4. The math is not too complicated either! If you think about the digits in a number, they are all powers of 10. In 1234, 4 is in the ones place, 3 is in the tens place, 2 is in the hundreds place and 1 is in the thousands place. So, what this function does is grab the absolute value of 1234 (an edge case for negative numbers) and divide it by 10 to the power of the place we’re looking for (10⁰ = 1, 10¹ = 10, 10² = 100, etc.), flooring the result. Then we take the remainder after dividing by 10, which is the digit in that place. This tells us which bucket to put the number in when we sort.

`getDigit(1234, 2) //=> 2`

Math.floor(1234 / 10²) => Math.floor(1234 / 100) = 12

12 / 10 = 1 remainder 2

The answer is 2.
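In code, that worked example condenses to a one-liner (a sketch, since the original gist isn't shown):

```javascript
// Return the digit of num at the given place (0 = ones, 1 = tens, ...).
// Math.abs handles the negative-number edge case.
function getDigit(num, place) {
  return Math.floor(Math.abs(num) / Math.pow(10, place)) % 10;
}
```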

`digitCount()` returns the number of digits in a number. Just like `getDigit()`, it is possible to do this without math: you can convert the number to a string and check its length. But the math doesn’t involve changing datatypes. Here, we use the same idea as in `getDigit()`; each digit in a number sits in a place of 10. This time, we use a base-10 logarithm to compute what power of 10 gives us the absolute value (again, for negative numbers) of the number we’re passing in. We use `Math.floor` to get a whole number and then add 1 to get the answer! This is going to be useful for finding the number with the most digits.

`digitCount(1234) // => 4`

Math.log10(1234) = 3.0913...

Math.floor(3.0913...) = 3

3 + 1 = 4

The answer is 4.
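As code, that might be (the zero guard is my own addition, since `Math.log10(0)` is `-Infinity`):

```javascript
// Return how many digits num has, using a base-10 logarithm.
function digitCount(num) {
  if (num === 0) return 1; // log10(0) is -Infinity, so special-case zero
  return Math.floor(Math.log10(Math.abs(num))) + 1;
}
```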

`mostDigits()` takes in an array and returns the digit count of the number with the most digits. No more math here! The function loops through the array, finds the element with the highest return from `digitCount()`, and returns that count.
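A sketch of that loop, with `digitCount()` repeated so it runs standalone:

```javascript
// digitCount() from earlier, repeated so this sketch is self-contained.
function digitCount(num) {
  if (num === 0) return 1;
  return Math.floor(Math.log10(Math.abs(num))) + 1;
}

// Return the highest digit count found in the array.
function mostDigits(nums) {
  let maxDigits = 0;
  for (const num of nums) {
    maxDigits = Math.max(maxDigits, digitCount(num));
  }
  return maxDigits;
}
```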

Finally, in our `radixSort()` function, we find the maximum digit count. This is the number of times we will have to sort the array into the buckets. The buckets are an array of arrays; each nested array represents one value a digit can be. In this case, we are working in base 10, so we have 10 buckets, 0–9. Then, on each pass, for every element in the array, we push the element into the right bucket based on the digit in the place we’re looking at, and flatten the buckets back into the array. After the last pass, we return the sorted array.
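Pulling the helpers together, the whole thing might look like this sketch for non-negative integers (names and structure are my own reconstruction):

```javascript
// Helpers as described above, repeated so this sketch runs on its own.
function getDigit(num, place) {
  return Math.floor(Math.abs(num) / Math.pow(10, place)) % 10;
}
function digitCount(num) {
  if (num === 0) return 1;
  return Math.floor(Math.log10(Math.abs(num))) + 1;
}
function mostDigits(nums) {
  let maxDigits = 0;
  for (const num of nums) maxDigits = Math.max(maxDigits, digitCount(num));
  return maxDigits;
}

function radixSort(nums) {
  const maxDigitCount = mostDigits(nums); // number of bucket passes needed
  for (let k = 0; k < maxDigitCount; k++) {
    // Ten buckets, one per possible digit 0-9.
    const buckets = Array.from({ length: 10 }, () => []);
    for (const num of nums) {
      buckets[getDigit(num, k)].push(num); // drop each number in its bucket
    }
    // Flatten the buckets back into the working array, preserving order.
    nums = [].concat(...buckets);
  }
  return nums;
}
```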

I hope this was helpful in encouraging you to try out creating a visualizer for yourself! Doing so made my understanding of these algorithms so much clearer. Also, it’s probably my favorite project I’ve done so far (and that says a lot coming from someone who really enjoys backend development).

Check out my GitHub repo here: Sorting Visualizer