
Functional Building Blocks

  • 3 min read
  • 02 Mar 2017


In this article, Atul Khot, author of the book Learning Functional Data Structures and Algorithms, provides a refresher on some fundamental concepts.

How fast could an algorithm run? How does it fare when you have ten input elements versus a million? To answer such questions, we need the notion of algorithmic complexity, which is expressed using the Big O notation. An O(1) algorithm is faster than an O(log n) one, for example.

What is this notation? It describes the efficiency of an algorithm as a function of the number of data items, N, being processed.


The Big O notation

In simple words, this notation is used to describe how fast an algorithm will run. It describes the growth of the algorithm's running time versus the size of input data.

Here is a simple example. Consider the following Scala snippet, reversing a linked list:

scala> def revList(list: List[Int]): List[Int] = list match {
     |   case x :: xs => revList(xs) ++ List(x)
     |   case Nil => Nil
     | }
revList: (list: List[Int])List[Int]
scala> revList(List(1,2,3,4,5,6))
res0: List[Int] = List(6, 5, 4, 3, 2, 1)

A quick question for you: how many times does the first case clause, namely case x :: xs => revList(xs) ++ List(x), match for a list of six elements? Note that the clause matches whenever the list is non-empty. Each time it matches, we reduce the list by one element and recursively call the method.

It is easy to see that the clause matches six times. As a result, the list method ++ also gets invoked six times. The ++ method takes time directly proportional to the length of the list on its left-hand side.
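The repeated ++ calls are what make this reversal expensive. As a point of comparison, here is a tail-recursive variant using an accumulator (a sketch, not from the article): it prepends each element with ::, which is constant time on List, so the whole reversal runs in O(n).

```scala
import scala.annotation.tailrec

// A tail-recursive reversal with an accumulator (illustrative sketch):
// each element is prepended in O(1), so the whole reversal is O(n),
// unlike the ++ version, whose appends add up to quadratic work.
def revListAcc(list: List[Int]): List[Int] = {
  @tailrec
  def go(remaining: List[Int], acc: List[Int]): List[Int] = remaining match {
    case x :: xs => go(xs, x :: acc) // prepend is constant time on List
    case Nil     => acc
  }
  go(list, Nil)
}

println(revListAcc(List(1, 2, 3, 4, 5, 6))) // List(6, 5, 4, 3, 2, 1)
```

This is essentially how the standard library's List.reverse works.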

Here is a plot of the number of iterations against time:

[Figure: plot of the number of iterations against time]

To reverse a list of nine elements, we iterate over the elements 45 times (9+8+7+6+5+4+3+2+1). For a list with 20 elements, the number of iterations is 210.

Here is a table showing some more example values:

[Table: example element counts and the corresponding iteration counts]


The number of iterations is proportional to n²/2. This gives us an idea of how the algorithm's runtime grows with the number of elements.

The moment we go from 100 to 1,000 elements, the algorithm needs to do about 100 times more work.
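We can check these figures with a one-line model of the iteration count (a sketch; the function name is ours, not from the article): the naive reversal of an n-element list performs n(n+1)/2 iterations, which matches the 45 and 210 values above.

```scala
// Iteration count for the naive ++-based reversal of an n-element list:
// n + (n-1) + ... + 1 = n(n+1)/2, which grows like n^2/2.
def iterations(n: Long): Long = n * (n + 1) / 2

println(iterations(9))   // 45
println(iterations(20))  // 210

// Going from 100 to 1,000 elements means roughly 100x more work:
println(iterations(1000) / iterations(100)) // 99
```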

Another example of a quadratic-runtime algorithm is selection sort. It repeatedly finds the minimum of the unsorted portion and grows the sorted sublist. Because the algorithm scans the remaining list for the next minimum at every step, it always performs O(n²) comparisons and hence has O(n²) complexity. Refer to http://www.studytonight.com/data-structures/selection-sorting for more information.
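A minimal functional rendering of selection sort might look like the following (an illustrative sketch, not the article's code): each step scans the whole remainder for its minimum, which is where the O(n²) comparisons come from.

```scala
// Selection sort, functionally (illustrative sketch): pick the minimum
// of the remaining elements, then sort the rest. The list.min scan is
// O(n) and runs once per element, giving O(n^2) comparisons overall.
def selectionSort(list: List[Int]): List[Int] = list match {
  case Nil => Nil
  case _ =>
    val min = list.min                          // O(n) scan for the minimum
    min :: selectionSort(list.diff(List(min)))  // drop one occurrence of min
}

println(selectionSort(List(5, 3, 8, 1, 3))) // List(1, 3, 3, 5, 8)
```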

Binary search is a very popular search algorithm with a complexity of O(log n). The following figure shows the growth table for O(log n).

When the number of input elements jumps from 256 to 1,048,576, the number of steps grows only from 8 to 20. Binary search is a blazingly fast algorithm, as the runtime grows marginally even when the number of input elements jumps from a few hundred to over a million.

[Figure: growth table for O(log n)]
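For completeness, here is a classic binary search over a sorted array (a sketch under the usual assumptions; the function name is ours): each comparison halves the search range, so at most about log₂(n) steps are needed.

```scala
import scala.annotation.tailrec

// Binary search on a sorted array (illustrative sketch): every step
// halves the [lo, hi] range, so the number of comparisons is O(log n).
def binarySearch(sorted: Array[Int], target: Int): Option[Int] = {
  @tailrec
  def go(lo: Int, hi: Int): Option[Int] =
    if (lo > hi) None
    else {
      val mid = lo + (hi - lo) / 2 // avoids overflow on large indices
      if (sorted(mid) == target) Some(mid)
      else if (sorted(mid) < target) go(mid + 1, hi)
      else go(lo, mid - 1)
    }
  go(0, sorted.length - 1)
}

val xs = Array(1, 3, 5, 7, 9, 11)
println(binarySearch(xs, 7)) // Some(3)
println(binarySearch(xs, 4)) // None
```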

Refer to https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/ for an excellent description of the O notation.

Refer to the following link that has a graphical representation of the various growth functions:

https://therecyclebin.files.wordpress.com/2008/05/time-complexity.png

Summary

This article was a whirlwind tour of the basics. We started with a look at the Big O notation, which is used to reason about how fast an algorithm can run.

FP programs use collections heavily. We looked at some common collection operations and their complexities.
