Tag: functional programming

Is C++ Ready for Functional Programming? – Standard Library

Time to take a closer look at the standard library from the functional programming point of view.

In the first installment of this series, I stated that using a library to implement functional programming constructs would not be an ideal solution, but, as the C language pioneered back in the 70s, part of a language can find its proper place in a library.

The C++ standard library has grown, rather disorderly, to an oversized bulk over the years, so let’s have a look at what kind of support is available for those who want to use C++ with the functional paradigm.

Continue reading “Is C++ Ready for Functional Programming? – Standard Library”

Is C++ Ready for Functional Programming? – Functions, Functions for Everyone!

In this second installment, I’ll take a closer look at the core of functional programming – functions. When I started programming, BASIC was the language of choice… the only available choice. And no, it wasn’t the rather convoluted Visual Basic, but plain old, primitive BASIC with line numbers and poor syntax.

In those days I just used GOTOs and looked with suspicion at GOSUB/RETURN, wondering why in the world I would need to return automatically when I could GOTO everywhere I needed… But I’m digressing. This is just to say that (even though yours truly may not be the best authority on the matter) programming languages have come a long way, and the poor BASIC programmer had to spin the evolution wheel for quite a while to reach the concept of function as presented in this post.

Continue reading “Is C++ Ready for Functional Programming? – Functions, Functions for Everyone!”

Is C++ Ready for Functional Programming? – Intro

Design by committee (DbC) was never meant to be a compliment. In DbC, every decision has to go through a debate and be subjected to a political process. Because of the compromises needed among the parties, the resulting artifact usually tends to be clunky and patchy, lacking elegance, homogeneity, and vision.

C++ is quite an old language – not as old as its parent C, or other veterans of the programming language arena, but the first standard is now 23 years old, and the language was quite well formed, and had aged enough to mature its dose of quirks, long before that.

Continue reading “Is C++ Ready for Functional Programming? – Intro”

Comprehensive For in C++

Switching back and forth between Scala and C++ tends to shuffle ideas in my brain. Some ideas from Scala and the functional world are so good that it is just a pity they can’t be applied directly and simply in C++.

If you look carefully into idiomatic C code for error handling, it is easy to find a lousy approach even in industrial code. I’ve seen a lot of code where not every return value was examined for an error, or not everything was thoroughly validated, or some code was executed even though an error had occurred and it was not needed, simply because providing the escape path was too cumbersome.

In C++ things are a bit better, thanks to the exception mechanism, but exceptions are such a pain in the neck to get right and to handle properly that error management could soon turn into a nightmare.

Error handling in C and C++ is so boring and problematic that the examples in most textbooks just skip it.

(featured image by Nadine Shaabana)

Continue reading “Comprehensive For in C++”

Our Fathers’ Faults – The Chain of Death

The core concept of functional programming is composing functions together to craft more complex (and convoluted) functions. In Scala (not unlike many other programming languages) there are two ways to combine functions: the first is to apply a function to the result of another function, and the second is to use a function to produce the argument of another function.

Written this way they may not look so different; and in fact, even if in practice they take on quite a different appearance, abusing either mechanism leads to the same problem.
The first way of combining functions is also called chaining, since you chain functions together – the result of function fₙ is the input of function fₙ₊₁. (Interestingly, this very same mechanism will get a boost in C++ starting from the C++20 ISO standard, thanks to ranges and pipes.)
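
To make the two flavors concrete, here is a tiny Scala sketch of mine (a toy example, not taken from any real codebase) showing the same computation written first as a chain and then as nested calls:

// Toy functions, just for illustration.
val double: Int => Int = _ * 2
val addOne: Int => Int = _ + 1

// Chaining: the result of one function feeds the next.
val chained = double.andThen(addOne)   // x => addOne(double(x))
chained(3)                             // 7

// Nesting: a function call produces the argument of another call.
addOne(double(3))                      // 7, same value written inside-out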

Take the following example:

List(1,2,3,4).filter( _ > 2 ).map( _.toString ).fold( "" )( _ + _ )

(If you already know Scala, you may safely skip this paragraph 🙂 ) If you are unfamiliar with Scala or functional programming this may look suspiciously like bad code (this could be some meat for another post), but trust me, it is not. The function List(…) constructs a List. Lists, containers, and iterators in general in Scala have a set of methods that can be chained together. You may think of each of them as taking an iterator and producing an iterator over a different sequence. Back to the example above: filter produces an iterator that scans only the elements of the input sequence that fulfill the given condition (being greater than 2 in the example); map produces an iterator over a sequence whose elements are computed by applying the given function to the source elements; eventually fold collapses the sequence by accumulating items iteratively using the provided expression.
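
If it helps, here is the chain above unpacked step by step (my own rewrite, with the intermediate values shown as comments):

val xs       = List(1, 2, 3, 4)
val filtered = xs.filter(_ > 2)          // List(3, 4)
val strings  = filtered.map(_.toString)  // List("3", "4")
val result   = strings.fold("")(_ + _)   // "34"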

Each small block performs an operation. Note that I haven’t written “simple” operation – that’s on purpose. In fact, the operator (filter, map, fold) is simple in itself, but it is programmable via a lambda that can be as convoluted as you want. Therefore the operation performed by the block may become really complex.

While you are focused on your goal, it may be easy and convenient to dump your mind into an endless chain. This has two drawbacks: first, it is unreadable; second, it is uninspectable.

It is unreadable because you need to start from the beginning and analyze the sequence up to the point where you want to focus, in order to know what the inputs are, and from there onward to understand how the output is processed into the final result. Pair this with complex lambdas and you may find yourself in quite a nightmare trying to figure out what the code is supposed to do, or where the bug is.

It is uninspectable because you cannot use the debugger to stop execution and show the intermediate results of your operation (I’ve noticed over the years that Scala programmers are usually reluctant to use a debugger, preferring print/log debugging).

The other form of the problem – functions calling functions calling functions – despite the differences in syntax, is not that different in how it is produced or in its result. You may see code such as –

context.become(
    enqueuer(
        resourceManagement(
            Queues( queues.execution.filterNot( x =>
                ( x == realFree || context.child( childName( x.getUUID ) ).isEmpty ) ), queues.waiting ) ) ), true )

can be written in any language, but it seems that our fathers were quite fond of this approach.

It is important to recognize that when writing code you have (or, at least you should have) a crystal clear and detailed comprehension of the data flow, and even if you strive to write clear code, you’ll tend to pack these things together because a) it is faster (less typing, less naming of values, less effort to add structure) and b) you don’t feel the need to make it clearer since it is so straightforward in your head.

Lesson learned – resist the temptation to dump the mental flow into a coded data flow; use the old “divide et impera” to split the flow into manageable units, with names reflecting the content and the intent, as sketched below. A good indication is line length: if your statement, properly formatted to fit in 80 columns, happens to span more than 3 lines, then splitting it is a good idea.
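
As a hedged illustration of what I mean (the intermediate names below are hypothetical – this is my guess at a cleaner shape for the nested call shown earlier, not code from the original project):

// Hypothetical intermediate names – the point is the shape, not the domain.
val stillAlive = queues.execution.filterNot( x =>
    x == realFree || context.child( childName( x.getUUID ) ).isEmpty )
val updatedQueues = Queues( stillAlive, queues.waiting )
val nextBehaviour = enqueuer( resourceManagement( updatedQueues ) )

context.become( nextBehaviour, discardOld = true )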

Additional thought – attending some FP conferences, I got the clear impression that FP pundits encourage complex aggregations of functions; even the justification for the short indentation (2 spaces is the recommended indentation for Scala) is that it makes writing chains more convenient. For these reasons I suspect that this point could be a little controversial in FP circles. I stand by my idea that industrial-quality code must be, first, easy to read and understand, and then easy to write.

Lambda World 2019


The Spanish town of Cadiz lies out in the ocean, circled by ancient walls, protected by two castles. Grids of alleys, old houses facing each other at a short distance across narrow streets, narrow balconies enclosed in windowed structures. Warm, light-brown rock and stone. Old, ancient, historical, and yet so filled with sun and sparkling energy.
It is just a heck of a journey to get there to attend the best functional programming conference in the world. You start from your ordinary small northern-Italy town, drive to the nearest airport, fly first to Barcelona, hoping that the turmoil over the recent sentencing of the separatist leaders causes no problem with your connection. You wait for a while at the airport terminal, eating a sandwich, then you board another plane that carries you to Jerez de la Frontera. There, a train ride awaits. Eventually you reach Cadiz, but you still need to walk to your hotel in the historic downtown. “Walk”, because in Cadiz’s core everything is within walking distance and most of the streets are impractical for any car.

This is the third year in a row that I have managed to get my employer to sponsor my attendance at Lambda World Cadiz. I like this conference, I like the eclectic view of the functional-programming-verse that it provides and, last but not least, I like the setting.

Continue reading “Lambda World 2019”

If the free lunch is over, where can we eat for free?

In the first part of this blog post, I wrote about why FP is mandatory for programmers who work on modern software projects. Summing it up, it goes like this – modern hardware scales up by adding cores; therefore, if software wants to scale up, and scale well, it has to do it by going concurrent, and FP is a good choice for dealing with concurrent programming issues.

In this post, I’ll write about some differences between traditional and FP programming, and I’ll present one more reason why FP is a good match for parallel programming.

You can write bad code in any language. That’s a fact. So there’s no magic in functional programming to turn a lame programmer into a brilliant one. On the other hand, you can write pretty good code in the functional paradigm by sticking with the tools provided by functional languages.

So, let’s have a look at this FP thing… First, it is evident that this is not your parent’s FP – map, filter, fold… just a few of the most recurring spells you’d see in a Scala source file, and none of them were in the notes I jotted down during my university courses. Even looking at my LISP textbook I couldn’t find any reference to the map operation. (Just to help those with no FP background – map (transform in C++) is an operation that applies a transformation to each item in a collection. In other words, if you have a function that turns apples into pears, then – using map – you can turn your array of apples into an array of pears.)
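
For instance, in Scala (a toy sketch of mine, with made-up Apple and Pear types just for illustration):

// Toy types, just for the sake of the example.
case class Apple(weight: Int)
case class Pear(weight: Int)

// A function that turns an apple into a pear...
def appleToPear(a: Apple): Pear = Pear(a.weight)

// ...and map applies it to every item of the collection.
val apples = Array(Apple(100), Apple(120), Apple(95))
val pears  = apples.map(appleToPear)   // Array(Pear(100), Pear(120), Pear(95))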

Continue reading “If the free lunch is over, where can we eat for free?”

Lambda World 2018

Back in the 16th century, the Flota de Indias, based in Cadiz, set out for the Americas carrying tools, books, and settlers, and carried back goods from the New World.

In much the same way I went to Cadiz and set off for the brave new world of functional programming with the intent of carrying back news and techniques for my daily job.

Cadiz is awesome, and is a wonderful frame for one of the best conferences I attended in 2017 and 2018 – Lambda World.

As usual, I took a lot of notes and, sooner or later, they will possibly land here. I plan to prepare a short summary with information about the conference and briefs of the speeches I attended, much as I did for Scala Italy. In the meantime, I warmly recommend these awesome notes (by Yulia). Stay tuned.

Is Functional Programming functional to programming?

Some 25 years ago I took my advanced programming class, which included exotic paradigms such as logic and functional programming.

Like most of the content of that course, FP went into a seldom-if-ever opened drawer during my professional career.

I simply never saw the need to use Lisp, nor had the luxury of affording never-changing values.

A few years ago (well, not that few – I just checked, it was 2005) I read on DDJ that the free lunch was over. That had nothing to do with charitable food banks, but it marked a fundamental twist in the evolution of computer performance. Something that would have a strong impact on how software was going to be designed, written, and tested.

Until 2005, it didn’t matter how badly written and underperforming your code was. You were assured that, for free, just by waiting for the next hardware iteration, your code would run faster (bugs included)! This is like deflation in economics – why should I invest my money if, by keeping it in the mattress for a year, it increases in value?

Continue reading “Is Functional Programming functional to programming?”

Optimization Time – Scala (and C++)

The Case

It is quite a feeling when you learn that your commit, from a couple of months ago, broke the build in such a subtle way that it took this long to be detected. Possibly more thorough testing and validation of the software would have caught it earlier; nonetheless, it’s there, and you and your coworkers are working hard to delimit the offending code and better understand what caused the mess.

It turned out that some perfectly working code was taking too much time in one of the hot spots of our codebase. More precisely, that code performed a conversion from an incoming data packet into a format suitable for data processing… about 2500 times a second, on modest hardware.

The code responsible for such poor performance – poor enough to disrupt the device functionality – seemed like a good idea, written in a very functional idiom –

@inline final val SAMPLES_PER_BLOCK = 6
@inline final val BLOCK_LENGTH = 8
@inline final val BLOCK_HEADER = 2

private def decodeFunctional(packet: Array[Byte]): Array[Int] = {
    val lowSamples: Array[Int] = packet.drop(BLOCK_HEADER).map(x => x.toUint)

    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    Array.tabulate(SAMPLES_PER_BLOCK)(
        x => lowSamples(x) | (((msbs << 2 * x) & 0xC00) >> 2)
    )
}

Basically, the incoming packet holds six 10-bit samples in 8 bytes, and some bit-shifting’n’pasting is needed to rebuild the six-sample array for later processing.
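
As a side note, toUint is not a standard method on Byte; judging from the decompiled code shown later (which refers to an UnsignedByteOps class), it is presumably an extension that reinterprets the signed byte as an unsigned value. A minimal sketch of what it might look like – my assumption, not the original code:

// Hedged sketch of the presumed extension: reinterpret a signed Byte
// (-128..127) as its unsigned value (0..255).
implicit class UnsignedByteOps(val b: Byte) extends AnyVal {
    def toUint: Int = b & 0xFF
}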

Once I found that the problem was there, I decided to rewrite the code to get rid of the functional idiom and take an imperative/iterative approach:

private def decodeImperative(packet: Array[Byte]): Array[Int] = {
    // the code below is ugly, but it needs to be fast, don't touch,
    // unless you know what you are doing
    val samples: Array[Int] = new Array[Int](SAMPLES_PER_BLOCK)
    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    var index: Int = 0
    while (index < SAMPLES_PER_BLOCK) {
        samples(index) = packet(BLOCK_HEADER + index).toUint |
                         (((msbs << 2 * index) & 0xC00) >> 2)
        index += 1
    }
    samples
}

(the comment is a colorful note I decided to leave for posterity :-)).

The idea is to get rid of the read-only constraint imposed by functional programming and to spare all the temporaries.

Optimizing is a subtle art, and even if intuition may guide you, only measurement can tell whether an optimization succeeded. So I set up some speed tests (see “Methodology” below) to objectively assess my results.

The speed improvement was astounding: more than 7 times faster (7.6 to be precise).

Energized by this victory, I decided to switch to heavy optimization weaponry. Not being sure whether the Scala/Java compiler really does its optimization homework, I opted for manually unrolling the loop:

private def decodeUnrolled0( packet: Array[Byte] ) : Array[Int] = {
    val PACKET0 : Int = packet(0).toUint
    val PACKET1 : Int = packet(1).toUint
    Array[Int](
        packet(BLOCK_HEADER+0).toUint | ((PACKET0 << 6) & 0x300),
        packet(BLOCK_HEADER+1).toUint | ((PACKET0 << 8) & 0x300),
        packet(BLOCK_HEADER+2).toUint | ((PACKET1 << 2) & 0x300),
        packet(BLOCK_HEADER+3).toUint | ((PACKET1 << 4) & 0x300),
        packet(BLOCK_HEADER+4).toUint | ((PACKET1 << 6) & 0x300),
        packet(BLOCK_HEADER+5).toUint | ((PACKET1 << 8) & 0x300)
    )
}

I was somewhat disappointed to learn that this version is only marginally faster (1.6×) than the original functional version. This didn’t really make sense to me, since the loop is completely unrolled and only trivial computations need to be performed. So I decompiled the .class file:

public int[] CheckPerf$$decodeUnrolled0(byte[] packet)
{
  int PACKET0 = UnsignedByteOps(packet[0]).toUint();
  int PACKET1 = UnsignedByteOps(packet[1]).toUint();
  return (int[])Array..MODULE$.apply(Predef..MODULE$.wrapIntArray(new int[] {
    UnsignedByteOps(packet[2]).toUint() | PACKET0 << 6 & 0x300, 
    UnsignedByteOps(packet[3]).toUint() | PACKET0 << 8 & 0x300, 
    UnsignedByteOps(packet[4]).toUint() | PACKET1 << 2 & 0x300, 
    UnsignedByteOps(packet[5]).toUint() | PACKET1 << 4 & 0x300, 
    UnsignedByteOps(packet[6]).toUint() | PACKET1 << 6 & 0x300, 
    UnsignedByteOps(packet[7]).toUint() | PACKET1 << 8 & 0x300 }), ClassTag..MODULE$.Int());
}

Instead of the Java array I expected, some Scala internals are used to create the array and then convert it into the Java thing. Possibly this was the cause of the slowness. Looking around the decompiled code, I found another function that just used a Java int array, so I rewrote the unrolled version accordingly –

private def decodeUnrolled1( packet: Array[Byte] ) : Array[Int] = {
    val samples = new Array[Int](SAMPLES_PER_BLOCK)
    val PACKET0 = packet(0).toUint
    val PACKET1 = packet(1).toUint
    samples(0) = packet(BLOCK_HEADER + 0).toUint | ((PACKET0 << 6) & 0x300)
    samples(1) = packet(BLOCK_HEADER + 1).toUint | ((PACKET0 << 8) & 0x300)
    samples(2) = packet(BLOCK_HEADER + 2).toUint | ((PACKET1 << 2) & 0x300)
    samples(3) = packet(BLOCK_HEADER + 3).toUint | ((PACKET1 << 4) & 0x300)
    samples(4) = packet(BLOCK_HEADER + 4).toUint | ((PACKET1 << 6) & 0x300)
    samples(5) = packet(BLOCK_HEADER + 5).toUint | ((PACKET1 << 8) & 0x300)
    samples
}

The main difference is that here the array is first declared and allocated, then filled. In the above code the array was created, initialized and returned all in the same statement.

The speed improvement was good, but not much better than my imperative version: 7.77 times faster.

Then a colleague pointed out that I was using the “old” Scala 2.10 compiler and that I should try the latest 2.12, which benefits from better interoperability with the underlying JVM 1.8.

In the following table you get the performance comparison:

[Table 1 – performance comparison of the functional, imperative, unrolled0, and unrolled1 decoders under Scala 2.10 and 2.12; the table data is not available here.]

The functional and unrolled0 figures are quite unsurprising – just what you would expect from the next version of the compiler. The imperative approach yields quite a boost – 22 times faster than the functional version on the previous compiler. The shocking surprise is in the last column – the unrolled1 version of the code runs in excess of 1200 times faster than the original code!

Before jumping to conclusions, I performed the same tests on the code translated into C++. Here is the equivalent code:

typedef std::array<int,SAMPLES_PER_BLOCK> OutputArray;
typedef std::array<uint8_t,BLOCK_LENGTH> InputArray;

OutputArray decodeImperative( InputArray const& packet )
{
    OutputArray samples;
    uint32_t msbs = ((packet[0] << 8) | packet[1]) & 0xFFF;
    for( int index=0; index< SAMPLES_PER_BLOCK; ++index )
    {
        samples[index] = packet[BLOCK_HEADER + index] |
                         (((msbs << 2 * index) & 0xC00) >> 2);
    }
    return samples;
}

OutputArray decodeFunctional( InputArray const& packet )
{
    OutputArray lowSamples;
    std::copy( packet.begin()+2, packet.end(), lowSamples.begin() );
    uint32_t msbs = ((packet[0] << 8) | packet[1] ) & 0xFFF;
    OutputArray samples;

    int index=0;
    OutputArray::const_iterator scan = lowSamples.begin();
    std::generate(
        samples.begin(),
        samples.end(),
        [&index,&scan,msbs]()
        {
            return *scan++ | (((msbs << 2*index++) & 0xC00) >> 2);
        }
    );
    return samples;
}

OutputArray decodeUnrolled0( InputArray const& packet )
{
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    return OutputArray{
        packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300),
        packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300),
        packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300),
        packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300),
        packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300),
        packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300)
    };
}

OutputArray decodeUnrolled1( InputArray const& packet )
{
    OutputArray samples;
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    samples[0] = packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300);
    samples[1] = packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300);
    samples[2] = packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300);
    samples[3] = packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300);
    samples[4] = packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300);
    samples[5] = packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300);
    return samples;
}

I’m positive that functional purists will scream in horror at the sight of the C++ functional version of the code, but it is the closest thing I could come up with using the standard library. Should you have any idea of how to improve the functional idiom using the standard library, let me know and I’ll update my test set.

Here are the C++ results –

[Table 2 – performance comparison of the C++ decoder versions; the table data is not available here.]

This is mostly unsurprising, except for the lower right column, which seems to indicate that the Scala version is faster than C++. I guess the reason is that I am using timing values that are rebased by subtracting a standard price you pay for executing the code under test, which I don’t want to include in my measurements (see the methodology below). My interpretation is that the overhead in Scala hides the cost of a trivial expression, so that the base time is much closer to the execution time than it is in C++.

Conclusions

Drawing general conclusions from a specific case is anything but easy and trivial. Anyway, according to the data and to intuition, if you want to code using the functional idiom you pay a price that can sometimes be hefty. Paying an execution price, be it memory or CPU cycles, to raise the abstraction level is something that has been with computer programming since the dawn of this craft. I remember echoes of the rants against high-level languages in favor of assembly, to avoid paying a 30% penalty. Those times are now gone, but facing a 4×–7× overhead is not something you can always afford.

(That employing the functional idiom raises the level of abstraction is not undisputed, but I will spare this for another post.)

Writing efficient code is not always intuitive. Constructs that seem innocent enough turn out to be compiled into time-consuming lower-level code. Peeking at the compiled code is always useful. While languages such as C++ come with detailed information about the performance of various constructs and library components, Scala (like Java) lacks this information. Also, the huge amount of syntactic sugar employed by Scala does not simplify the life of the programmer who has to optimize code.

Finally, I cannot avoid remarking that the performance of interpreted languages (and even more so of functional interpreted languages), despite the progress in JVM technologies, is still way behind that of native languages. Even employing a functional approach, C++ is some 80 times faster than Scala. You can argue that the C++ code above is not really functional, but there is nothing preventing me from using a mostly functional library for C++, with roughly the same performance as the functional version of the C++ code. It could be more tedious to write because of the lesser grade of syntactic glucose in C++, yet it is still functional.

I’m curious to see how scala-native compares, but this again is a topic for another post.

The last thought I want to report here is that you can make Scala code much faster by avoiding the functional paradigm and peeking at the compiled code. It is something you are not usually expected to do, but an improvement of 1200 times is a worthy gain for the most executed code.

One very last thought for those who think that, after all, nowadays we have such powerful computers that analyzing performance and optimizing is a waste of time. I would like to remind them that data processing requires energy, and today energy production mostly relies on processes that produce CO2, something the world doesn’t need. Data processing that runs at twice the speed of another will produce half the carbon dioxide emissions. The environmental benefits of switching to C and C++ should be clear to everyone.

Methodology

A couple of words on methodology. The compilers were Scala 2.10.6, Scala 2.12.2, and GNU C++ 7.3.1. All the tests were performed on Linux (Fedora Core 27), using /bin/time --format="%U %S" to print the user and kernel times of the command and summing the two values together. These are the times the CPU spent executing the process, not wall-clock time. The decoding function is executed 0x7FFFFFFF (some 2 billion) times per run, and each program is tested for 12 runs. The fastest and the slowest runs are discarded, and the others are used to compute an average that is considered the measure of the test.

The base time is computed by employing a dummy decoder that does nothing but fill the result array with constants.

This setup should take into account the JIT effect and measure just the code listed above.
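
To make the setup a bit more tangible, here is a minimal sketch of the base-time measurement (hypothetical names, not the original harness): a dummy decoder that only fills the output with constants, called the same number of times as the real decoders, with the process timed externally by /bin/time.

// Hedged sketch of the measurement driver, not the original test harness.
object BaseTimeBench {
    final val SAMPLES_PER_BLOCK = 6
    final val ITERATIONS = 0x7FFFFFFFL            // some 2 billion calls per run

    // Dummy decoder: constant output, no real work – its time is the "base time".
    private def decodeDummy(packet: Array[Byte]): Array[Int] =
        Array.fill(SAMPLES_PER_BLOCK)(42)

    def main(args: Array[String]): Unit = {
        val packet = Array[Byte](0x12, 0x34, 1, 2, 3, 4, 5, 6)
        var sink = 0                              // keeps the JIT from discarding the calls
        var i = 0L
        while (i < ITERATIONS) {
            sink ^= decodeDummy(packet)(0)
            i += 1
        }
        println(sink)   // user + kernel time is read externally: /bin/time --format="%U %S"
    }
}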