Category: Blog

Despicable People

Every place I've worked at had a different culture. I found this interesting and, when the conditions are right, very entertaining. Workplace culture is something that can strongly motivate you, something that makes you feel part of something unique that's happening right here and now.

When I worked at Ubisoft I collected all the jargon we used in a dictionary.

My current job is no different. Over the years, recurring rants and complaints led my friend Francesco and me to create a list of despicable types of people. You know when you see something in the code that is not quite right, or some approach to a problem that you consider plainly wrong? That's the basic idea. Then take into account other colleagues who have strong ideas about what is Holy and what is Evil, and you get lots of jokes that only you and your team can understand.

So the list has a split nature, half a satire against Absolute Truth and half a complaint against those who write code for a living but are not Real Programmers. That's how this ever-growing list of wrongdoers came together.

Here is our current list. It is very nerdy, and most puns and jokes are likely difficult to catch even if you are a seasoned programmer, so I will offer a decoding table at the end of the post. Ready? Let the rant begin.

Despicable types of people (in no particular order)

  1. those who don’t do a memset at the beginning of each function
  2. those who define nonsense operators
  3. those who start counting from 1
  4. [C++] those who pass references without const modifier even when they do not change the argument
  5. [C++] those who don’t place const in the signature of methods that don’t change the status of the instance
  6. [C++] those who use delete without [] to free an array
  7. those who don’t program in C
  8. those who don’t use Windows XP
  9. those who use the very same logging message in several lines of their code
  10. [C++] those who do delete this with no reason
  11. those who don’t know Z80 assembly
  12. those who don’t remove every warning from their code
  13. those who check in an if condition if a boolean variable is == true in languages where booleans can only be true or false
  14. those who name variables and functions in any language but English
  15. those who use DBs
  16. those who create animations using .obj files
  17. those who punctuate randomly
  18. those who pronounce acronyms half in Italian and half in English
  19. those who write programs that exceed 48k RAM.

So here is a little analysis/explanation:

1. This means “to perform a memset of the variables on the stack, so that everything in a function is initialized before doing actual work”. It belongs to a sort of self-defensive programming approach – trust no one, yourself included (after all, who else is going to use the variables freshly created on the stack?).

2. This is aimed mostly at languages where you can redefine operators – C++ and Scala. In Scala you can define nearly any sequence of symbols as a custom operator. Usually this is utterly confusing for beginners and for everyone who is not fluent with the library that defines such operators. Personally I find DSLs a messy confusion that brings no real advantage, for at least two reasons – first, you already know the host language, but you still need to learn the DSL, which has a different syntax/semantics; second, the DSL stretches the host language but has to comply with its grammar, and this results in odd-looking syntax and misleading punctuation or phrasing. I much prefer a traditional library with a well-defined interface to Vulcan gibberish such as :::, |-->, ~>.
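
To make the point concrete, here is a sketch of how cheap it is to cook up such gibberish in Scala (the Pipe class and its operators are my own invention, not from any real library):

// Perfectly legal Scala, perfectly opaque to the reader.
case class Pipe(value: Int) {
  def |-->(other: Pipe): Pipe = Pipe(value + other.value) // "feed into"? who knows
  def ~>(n: Int): Pipe = Pipe(value * n)                  // "amplify"? your guess
}

// Scala's precedence rules for symbolic operators make even the evaluation
// order non-obvious: ~> binds tighter than |-->, so this is Pipe(2) |--> Pipe(12)
val result = Pipe(2) |--> Pipe(3) ~> 4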

3. How many times have you found code where the programmer started counting from 1, even though the language was C? This is annoying because there are very good reasons to start from 0 (see the sketch below)... but some people prefer to ignore them and stick with a medieval culture lacking the concept of zero.
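
One of those good reasons, in a tiny sketch of my own: with zero-based indices the offset arithmetic is uniform and half-open ranges compose without ±1 adjustments.

// element i of an array lives at offset i * elementSize - no "-1" correction needed
def offsetOf(i: Int, elementSize: Int): Int = i * elementSize

// half-open, zero-based ranges split cleanly: [0,4) and [4,10) cover [0,10)
val (left, right) = (0 until 10).splitAt(4)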

4, 5. In C++ (and to a lesser extent in C) you can enforce a type-checking strategy called const-correctness. Basically, this strategy marks data that is expected to stay constant throughout some operation. It helps in writing better code because the programmer using it can make stronger assumptions. Unfortunately, you can't use non-const-correct code from const-correct code (possibly this is why Java has no const-correctness concept), so the lazy programmer prefers to ignore the whole issue.

6. This is a plain bug, because delete new int[10] is undefined behavior as per the C++ standard. Nonetheless, poorly written (and poorly tested) code exhibits such patterns.

7, 8 and 15 are basically exaggerations of what some people with the Absolute Truth assert. Taken out of their context these sentences sound even more absurd, but the meaning is “simple and tested systems tend to be more reliable than complex and new ones”.

9. Logging is helpful to get insight into the sequence of operations inside running software. If you replicate the very same text in several places, it is quite hard to figure out which sequence was actually executed. This is plain dumb.
10. Deleting this in C++ can be colorfully depicted as cutting the branch you are sitting on. It is something you usually don't want to do. If you find yourself needing it, it is likely that you are approaching the problem from the wrong perspective. Seldom is this operation correct.

11. The Z80 CPU was a popular microprocessor back in the 80s. Programming in Z80 assembly was easy, surely easier than programming the 6502, another popular micro of the same years. This point is another overstatement that harks back to a Golden Age when everything was proper and sound.

12. Compiler warnings, most of the time, are just a picky annoyance, not much different from a crooked picture on the wall. Sometimes, though, they are actual errors – the lines the programmer wrote do not capture his intention, and the compiler detects and reports the subtlety. This is one reason why you want zero warnings when compiling your code. The other reason is that if your output is cluttered with harmless warnings, you will hardly notice the dangerous one.

13. When a programmer is unsure about the language she/he is programming in, the first thing she/he should do is study the language rather than write code with it. Adding redundant or useless constructs (such as this comparison with true) is a clear sign of doubt and uncertainty, and it usually appears in code of below-average quality.
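
A minimal illustration (isReady and start are hypothetical names):

def start(): Unit = println("started")
val isReady: Boolean = true

if (isReady == true) start() // redundant comparison, a telltale sign of doubt
if (isReady) start()         // equivalent, and what a confident programmer writes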

14. Like many programmers, I like tidy and well-sorted-out things (even if my desk could tell a different story to the casual observer). Programming languages (at least most of the commonly used ones) have English keywords. So yes, the official reason is that your code should be readable internationally; the real reason is that mixed-language code looks really ugly.

15. See points 7 and 8 above.

16. Once upon a time you made animations by playing different images (frames) one after another at a given speed. This worked fine, but the production was labor-intensive. Obj files contain static 3D models, so in order to play them you need to transfer a lot of data to the graphics adapter. Over the years, new techniques based on morphing and skeletal deformation arose, so that animations can be created with less data production and transfer and – more importantly – can be optimized by the 3D accelerator. Making an animation with objs means that you miss where the bottlenecks in the graphics pipeline are, together with a good part of the last decades of 3D technology.

17. Code punctuation is used to separate stuff. Keeping a consistent punctuation style (i.e. the relative position of spaces, parentheses, commas and the like) is a good habit, is good manners toward those who will read your code, and shows that you care for it. And if you don't care about your code, then you don't care about your work or about who is going to read it – and then you are despicable.

18. This is much like 14. Italian programmers (but I guess this is shared by all non-native English programmers), while speaking, mix Italian and English words in the same sentence. This is fine, because quite often technical documentation is written in English and many terms have no Italian equivalent. But pronouncing an acronym half in Italian and half in English says a lot about the speaker's knowledge of English and his/her desire to show off with unfamiliar buzzwords.

19. 48K RAM was the memory size of the ZX Spectrum, a popular home computer back in the 80s. This is much in line with points 7, 8 and 15 – a simple system tends to be more reliable than a complex one. There is also the implication that lazy programmers are no longer willing to spend time doing their homework (optimization) and waste memory and CPU for no good reason.

There are basically two interesting ideas here: love for the code, meaning care for your craft, and the notion that simpler is better. I am not sure I ever actually heard the sentence “this code hasn't received enough love” referring to an ugly or messy source file, but that's the care idea. What I wrote above holds – if you don't care for the details, why should you care for less evident characteristics?

Simpler is better, though put a little dramatically in the list, is not a new concept; it is also known as the KISS principle. No sane programmer is going to oppose it; on the other hand, it has to be balanced against development time. Programmers who use more than 48k are not lazy – they have deadlines to hit.

The list is over, but we're still looking for wrongdoers to add. Do you have any suggestions?

Ti sembra possibile? (Does it seem possible?)

“Voyager I was the first space probe sent from Earth (1977) toward the planets beyond the Solar System. Traveling at a speed of about 17 km/s [...] it is moving away from the Solar System, toward interstellar space.

Compute how long it would take to reach the star closest to us: Proxima Centauri, which is 4.2 light-years away. Does it seem possible that man, in the future, could visit planets outside the Solar System?”

This is the content of a sidebar, set apart from the main text, in chapter 17 of “Noi Scienziati 3”, a science textbook for the eighth grade.

The first answer to the problem, if we want to be scientific all the way, is that the probe would never reach Proxima Centauri, not even at twice its current speed, since it is headed in a different direction. So, given that this is a textbook whose goal is to teach scientific thinking to kids, it should at least try to be rigorous.
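
(For the record, the back-of-the-envelope arithmetic the sidebar asks for – my own, not the textbook's: 4.2 light-years ≈ 4.2 × 9.46×10^12 km ≈ 4.0×10^13 km; at 17 km/s that is about 2.3×10^12 seconds, i.e. roughly 74,000 years.)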

But this is not what drives me to write about this sad sidebar; rather, it is the gloom and resignation it can inspire in young minds. Where do new scientific discoveries and incredible technological advances come from, if not from where nobody thought they could?

At a theoretical level there are at least a couple of possibilities for quickly getting where no man (or woman) has gone before. The first is the Alcubierre drive – inspired by the Star Trek series, this theory, developed by a Mexican physicist, proposes to warp space so that a “bubble” of normal space slides faster than light. The other classic device is the Einstein-Rosen bridge – a deformation of spacetime that creates a sort of tunnel, connecting two remote points of space through a shortcut.

The technical difficulties of both these solutions are unimaginable at our level of technological development. But our current technological level was just as unimaginable a mere hundred years ago. Had we listened to Einstein – who could hardly be accused of narrow-mindedness – we would never have tried to detect gravitational waves: the great scientist considered it highly unlikely that technology would ever be capable of perceiving such tiny perturbations of the continuum.

So why include this false food for thought, which expects the gray, defeatist answer – “obviously it is not possible”? Why clip the wings of creative thinking, of enthusiasm, of the desire to push past the limits, in today's kids – tomorrow's scientists and engineers?

button down -> light on

Picture this project – you have a battery-operated gizmo with LEDs that has to light up at a given time and switch off at another. Reality is a bit more complex than this, but not that much.

So what could possibly go wrong? How much could the development of its firmware cost?

The difficulty may seem on par with “press button – light on”.

As usual, the devil is in the details.

Let's take just one component of the whole. If the gizmo is battery operated, you need to assess the battery charge quite precisely – after all, the device is expected to do one thing, and to do it reliably. The battery charger chip (fuel gauge) comes with a 100+ page datasheet. You may dig a pseudo-driver for this chip out of the internet, only to realize that it uses undocumented registers. Eventually you get some support from the manufacturer: a similar, but different, datasheet, and an initialization-procedure leaflet that is close to, but doesn't exactly match, either of the two documents.

Now, you may think that a battery is a straightforward component and that the voltage you measure at its connectors is proportional to the charge left in it. In fact it is not like that: the voltage changes according to temperature, humidity, and the speed at which you drain the energy. So the voltage may fall and then rise even if you don't recharge the battery.

To cope with this non-linear behavior, the fuel-gauge chip keeps track of energy in and energy out and performs some battery-profile learning in order to provide a reliable charge status and sound predictions about residual discharge time and charging time. The chip features an impressive number of battery-specific parameters.

Since the chip is powered by the battery itself (or by the charger) and has no persistent storage, you have to take care of reading the learned parameters, storing them into some persistent memory, and writing them back at the next reboot.
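
Just to fix the idea, here is a minimal sketch of that read/store/rewrite cycle. All the names (FuelGauge, ParamStore, and their methods) are hypothetical, not the actual chip API:

// Hypothetical interfaces: the real chip is driven through registers,
// and the persistent side could be an EEPROM or a file system.
trait FuelGauge {
  def readLearnedParams(): Array[Byte]             // dump the learned battery profile
  def loadLearnedParams(params: Array[Byte]): Unit // feed a saved profile back to the chip
}
trait ParamStore {
  def load(): Option[Array[Byte]]
  def save(params: Array[Byte]): Unit
}

def onBoot(gauge: FuelGauge, store: ParamStore): Unit =
  store.load().foreach(gauge.loadLearnedParams)    // restore the last known profile

def checkpoint(gauge: FuelGauge, store: ParamStore): Unit =
  store.save(gauge.readLearnedParams())            // persist what the chip learned so far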

Btw, this is just one part of a system that is described by some 2000 pages of programming manuals specific to the components used in this project.

In a past post I commented on an Embedded Muse article about comparing firmware/software project costs to the alternatives. The original article basically stated that for many projects there is no viable mechanical or electrical alternative.

This specific project could have an electromechanical alternative – possibly a clock with some gears, dials, and a mechanical dimmer could be feasible. I am not so sure the cost would be that much lower. But the point I'd like to make now is that you basically can't ship an electromechanical product today – it is not what the customer wants. Even if the main function is the same, the consumer expects the battery charge meter, the PC connection, the interaction. That's something we take for granted.

Even the cheapest gizmo from China has the battery indicator!

And this is the source of a distortion: we (and the internal customer who requested the project) have a hard time getting that, regardless of how inexpensive the consumer device is, that feature may have taken programmer-years to develop.

This distortion is dangerous in two ways – first, it leads you to underestimate the cost and allocate an insufficient budget; second, and worse, it leads you to think that this is a simple project that can be done by low-cost hobbyists. Eventually you get asked to fix it, but by the time you realize how bad a shape the code (and the electronics) is in, you are already over budget, with furious customers waving pitchforks, torches, and clubs knocking at your door.

Digital Apollo

It's no secret that I was a child during the space age: going to the Moon was just taking a different kind of plane, and there was no doubt we would all spend our y2k vacations on some asteroid or even on Mars. Rockets and space kept fascinating me even when I got to know y2k for another reason, and now that Mars still remains a distant shore where no man has gone before (and it will likely take some more time to get there).

Growing older and becoming an engineer only increased my respect and admiration for the people who achieved such an incredible goal using 60s technology. Space is hard and unforgiving; there are millions of things that can go wrong. Timing and counting are especially critical. Just the first example that comes to my mind – switch the rocket off too early and you plunge back to Earth; switch it off too late and you cruise on a no-return path toward the Sun or outer space.

This book starts from the beginning of the space era in the USA and follows the story up to the moon landings.

A bit surprisingly, the first half of the book focuses on the human side – the test pilots, the political propaganda, the strong will to have a human in the cockpit and in command of the mission. Von Braun, the German scientist behind many successes of American rocketry, was a strong supporter of keeping humans out of the control loop – the rocket is fired, and the autopilot and the physical laws of gravity take it to its destination, with no need for a pilot to do anything. This view was strongly opposed by the ideology of the space hero – the man who conquers space, driving rockets and starships.

From the first orbital maneuvers it was clear that pilot intuition was too misleading to properly control the spaceship – changing orbit to get closer to or farther from Earth is too different from what you do on a plane; you need to take orbital speeds into account when approaching your target, not just the relative positions.

Eventually the space program came to agree on avoiding human piloting of the rocket during the launch phase, keeping some control operations with a man in the loop and making others completely automatic. Humans were there as a reliability and recovery mechanism in the control loop. Flying a spaceship to the Moon is a very complex operation that requires many activities to keep the capsule functional, on course, and its passengers alive.

While the story narrated in the book unfolds, the digital computer appears on stage. It was back in the sixties, when the single-chip computer was yet to come. So MIT took on the task of designing and building the so-called Apollo Guidance Computer. It had 2k of working memory and 36k of persistent memory, and ran at 2MHz. The book tells the story of the development of the hardware and the software without ever going into technical detail. You get to know that the computer was built using only NOR gate chips, to ease repair and replacement of faulty components (an operation that was later ruled out of on-board activities). You also get to know that the user launched specific subprograms by inputting a number describing the “verb” followed by another one describing the “noun” of the required operation. Also, all the operations that involved firing rockets were programmed to require the pilot's authorization. This lock saved at least one mission when, during a lunar orbit, the pilot accidentally attempted to run a pre-launch sequence.

Maybe the digital computer played an even more crucial role in the lunar lander. This flying machine was so radically different from every machine built up to that time that designers and pilots could not tap into any previous experience.

The flight was not based on aerodynamics; the craft was powered only by rockets. By properly controlling the rockets' attitude and thrust, the lander could move around and hover at a fixed point. Engineers and programmers made an incredible achievement by having such a complex machine controlled via a joystick that proved quite intuitive to the pilot.

The computer on the lander had several programs to be selected according to the flight phase; a specific one had to be selected for landing. A graduated ruler was printed on the portholes. By aligning the pilot's view of the desired landing point with the ruler, you got a reading that had to be input into the computer. An automatic program would then take control of the lander and bring it to the desired position. This struck me for the ingenuity of the designers. They had to overcome the lack of an interactive computer, and they did it in a simple yet effective way, using low-technology equipment. Kudos!

About the first mission to the Moon – Apollo 11 – the story goes that the landing computer failed, requiring the two astronauts in the cockpit – Neil Armstrong and Buzz Aldrin – to take control manually. This is only partly true. First, because the lander couldn't fly at all without a computer translating manual inputs on the control joystick into rocket pulses. Second, because the computer was in fact working properly, reporting a real problem that, thanks to the skill of the programmers, had no consequences on the computer's reliability.

The problem was that the lander computer had to be used for many different things; like the computer in the Apollo spaceship, it was controlled by the user, who launched subprograms for every specific phase or maneuver of the mission. The landing sequence had many abort points, i.e. specific moments when the pilots had to decide whether to abort the descent to the lunar surface and return to the Apollo capsule (the Command Module), or to proceed. Each part of the sequence had a specific procedure to operate. Right after the lander detached from the Command Module, a first decision had to be taken – dock back to the Command Module or proceed to the next step? If the descent was aborted at that point, the pilot had to switch on the docking radar and use the computer to execute the docking sequence.

The docking radar also had to be engaged during the flight to the Moon, when the lander was extracted from the 3rd stage of the rocket and then docked to the Command Module. So, during the simulations, the astronauts asked whether they could keep the radar on, to have one thing less to think about in case they aborted the descent.

This switch caused data from the radar to be fed to the lander computer, causing some extra processing. The operating system and the programs were designed with a 30% margin on CPU load, and an error was reported if the load exceeded a safety threshold. Being a real-time operating system with sound priorities, the computer kept working properly, offering reliable pilot controls and descent trajectory. Nonetheless, during the mission, mission control preferred to avoid any risk and asked the pilots to switch off the trajectory control, leaving Neil Armstrong and Buzz Aldrin to fly the lander manually to the landing spot.

This also served the propaganda claim of the reliability of the human and the fallibility of the machine. In the following missions most pilots decided to skip automatic landing and go for a manual one, even though there were no problems in the software and they landed very close to the programmed spot.

Also very interesting is the role that software played in the design and development of the missions. Initially, no one considered software – I mean literally: there was no accounting for software in the initial plan. Likely it was considered part of systems engineering, not an activity in its own right. Software production also started without an industrial process and without an effective leader. Around mid-'66 things changed: software turned from a pet project for a few tens of programmers into a project with hundreds of people, project managers, documentation, reviews...

Overall I enjoyed this book a lot. The first part felt a bit slow and apparently beside the point of computers in space, but in the later chapters that long introduction helps to better understand the human factor behind the story.

Optimization Time – Scala (and C++)

The Case

It is quite a feeling when you learn that a commit of yours from a couple of months ago broke the build in such a subtle way that it took this long to be detected. Possibly more thorough testing and validation of the software would have caught it earlier; nonetheless, it's there, and you and your coworkers are working hard to isolate the offending code and better understand what caused the mess.

It turned out that some perfectly working code was taking too much time in one of the hot spots of our codebase. More precisely, that code converted an incoming data packet into a format suitable for data processing... about 2500 times per second on modest hardware.

The code that caused such poor performance, disrupting the device's functionality, seemed like a good idea in a very functional idiom –

@inline final val SAMPLES_PER_BLOCK = 6
@inline final val BLOCK_LENGTH = 8
@inline final val BLOCK_HEADER = 2

private def decodeFunctional(packet: Array[Byte]): Array[Int] = {
    // bytes 2..7 carry the 8 low bits of each of the six samples
    // (toUint is our implicit helper that widens a signed Byte to an unsigned Int)
    val lowSamples: Array[Int] = packet.drop(BLOCK_HEADER).map(x => x.toUint)

    // the two header bytes pack the 12 most significant bits, 2 per sample:
    // sample 0 in bits 11..10, down to sample 5 in bits 1..0
    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    Array.tabulate(SAMPLES_PER_BLOCK)(
        x => lowSamples(x) | (((msbs << 2 * x) & 0xC00) >> 2)
    )
}

Basically, the incoming packet holds six 10-bit samples in 8 bytes, and some bit shifting'n'pasting is needed to rebuild the six-sample array for later processing.

Once I found that the problem was here, I decided to rewrite the code to get rid of the functional idiom and take an imperative/iterative approach:

private def decodeImperative(packet: Array[Byte]): Array[Int] = {
    // the code below is ugly, but it needs to be fast, don't touch,
    // unless you know what you are doing
    val samples: Array[Int] = new Array[Int](SAMPLES_PER_BLOCK)
    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    var index: Int = 0
    while (index < SAMPLES_PER_BLOCK) {
        samples(index) = packet(BLOCK_HEADER + index).toUint |
                         (((msbs << 2 * index) & 0xC00) >> 2)
        index += 1
    }
    samples
}

(the comment is a colorful note I decided to leave for posterity :-)).

The idea is to get rid of the read-only constraint imposed by the functional idiom and to avoid all the temporaries.

Optimizing is a subtle art, and even if intuition may guide you, only measurement can tell whether an optimization succeeded. So I set up some speed tests (see “Methodology” below) to objectively assess my results.
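
The harness was, in spirit, little more than a loop hammering the decoder and keeping the result alive so that it cannot be optimized away. A sketch of the idea (not the verbatim test code; the packet content is made up, and the decode functions are assumed to live in the same object):

object CheckPerf {
  def main(args: Array[String]): Unit = {
    val packet = Array[Byte](0x0F, 0x3C, 1, 2, 3, 4, 5, 6)
    var sink = 0
    var i = 0
    while (i != 0x7FFFFFFF) {              // same iteration count as in "Methodology"
      sink ^= decodeImperative(packet)(0)  // use the result to defeat dead-code elimination
      i += 1
    }
    println(sink)
  }
}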

The speed improvement was astounding: more than 7 times faster (7.6, to be precise).

Energized by this victory, I decided to switch to the heavy optimization weapons. Not being sure whether the Scala/Java compiler really does its optimization homework, I opted for manually unrolling the loop:

private def decodeUnrolled0( packet: Array[Byte] ) : Array[Int] = {
    val PACKET0 : Int = packet(0).toUint
    val PACKET1 : Int = packet(1).toUint
    Array[Int](
        packet(BLOCK_HEADER+0).toUint | ((PACKET0 << 6) & 0x300),
        packet(BLOCK_HEADER+1).toUint | ((PACKET0 << 8) & 0x300),
        packet(BLOCK_HEADER+2).toUint | ((PACKET1 << 2) & 0x300),
        packet(BLOCK_HEADER+3).toUint | ((PACKET1 << 4) & 0x300),
        packet(BLOCK_HEADER+4).toUint | ((PACKET1 << 6) & 0x300),
        packet(BLOCK_HEADER+5).toUint | ((PACKET1 << 8) & 0x300)
    )
}

I was somewhat disappointed to learn that this version is only marginally faster (x1.6) than the original functional version. This didn't really make sense to me, since the loop is completely unrolled and only trivial computations need to be performed. So I decompiled the .class file:

public int[] CheckPerf$$decodeUnrolled0(byte[] packet)
{
  int PACKET0 = UnsignedByteOps(packet[0]).toUint();
  int PACKET1 = UnsignedByteOps(packet[1]).toUint();
  return (int[])Array..MODULE$.apply(Predef..MODULE$.wrapIntArray(new int[] {
    UnsignedByteOps(packet[2]).toUint() | PACKET0 << 6 & 0x300, 
    UnsignedByteOps(packet[3]).toUint() | PACKET0 << 8 & 0x300, 
    UnsignedByteOps(packet[4]).toUint() | PACKET1 << 2 & 0x300, 
    UnsignedByteOps(packet[5]).toUint() | PACKET1 << 4 & 0x300, 
    UnsignedByteOps(packet[6]).toUint() | PACKET1 << 6 & 0x300, 
    UnsignedByteOps(packet[7]).toUint() | PACKET1 << 8 & 0x300 }), ClassTag..MODULE$.Int());
}

Instead of the Java array I expected, some Scala internals are used to create the array, which is then converted into the Java thing. Possibly this was the cause of the slowness. Looking around the decompiled code, I found another function that just used a Java int array, so I rewrote the unrolled version accordingly –

private def decodeUnrolled1( packet: Array[Byte] ) : Array[Int] = {
    val samples = new Array[Int](SAMPLES_PER_BLOCK)
    val PACKET0 = packet(0).toUint
    val PACKET1 = packet(1).toUint
    samples(0) = packet(BLOCK_HEADER + 0).toUint | ((PACKET0 << 6) & 0x300)
    samples(1) = packet(BLOCK_HEADER + 1).toUint | ((PACKET0 << 8) & 0x300)
    samples(2) = packet(BLOCK_HEADER + 2).toUint | ((PACKET1 << 2) & 0x300)
    samples(3) = packet(BLOCK_HEADER + 3).toUint | ((PACKET1 << 4) & 0x300)
    samples(4) = packet(BLOCK_HEADER + 4).toUint | ((PACKET1 << 6) & 0x300)
    samples(5) = packet(BLOCK_HEADER + 5).toUint | ((PACKET1 << 8) & 0x300)
    samples
}

The main difference is that here the array is first declared and allocated, then filled; in the previous version the array was created, initialized, and returned all in a single statement.

The speed improvement was good, but not much better than my imperative version: x7.77 faster.

Then a colleague pointed out that I was using the “old” Scala 2.10 compiler and that I should try the latest 2.12, which benefits from better interoperability with the underlying JVM 1.8.

In the following table you get the performance comparison:

[Table 1: performance comparison of the four versions under Scala 2.10 and 2.12 – the table data was lost and is no longer available.]

Functional and unrolled0 are quite unsurprising – just what you'd expect from the next version of the compiler. The imperative approach yields quite a boost – 22 times faster than the functional version on the previous compiler. The shocking surprise is in the last column – the unrolled1 version of the code runs more than 1200 times faster than the original!

Before jumping to conclusions, I performed the same tests on the code translated into C++. Here is the equivalent code:

typedef std::array<int,SAMPLES_PER_BLOCK> OutputArray;
typedef std::array<uint8_t,BLOCK_LENGTH> InputArray;

OutputArray decodeImperative( InputArray const& packet )
{
    OutputArray samples;
    uint32_t msbs = ((packet[0] << 8) | packet[1]) & 0xFFF;
    for( int index=0; index< SAMPLES_PER_BLOCK; ++index )
    {
        samples[index] = packet[BLOCK_HEADER + index] |
                         (((msbs << 2 * index) & 0xC00) >> 2);
    }
    return samples;
}

OutputArray decodeFunctional( InputArray const& packet )
{
    OutputArray lowSamples;
    std::copy( packet.begin()+2, packet.end(), lowSamples.begin() );
    uint32_t msbs = ((packet[0] << 8) | packet[1] ) & 0xFFF;
    OutputArray samples;

    int index=0;
    OutputArray::const_iterator scan = lowSamples.begin();
    std::generate(
        samples.begin(),
        samples.end(),
        [&index,&scan,msbs]()
        {
            return *scan++ | (((msbs << 2*index++) & 0xC00) >> 2);
        }
    );
    return samples;
}

OutputArray decodeUnrolled0( InputArray const& packet )
{
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    return OutputArray{
        packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300),
        packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300),
        packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300),
        packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300),
        packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300),
        packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300)
    };
}

OutputArray decodeUnrolled1( InputArray const& packet )
{
    OutputArray samples;
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    samples[0] = packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300);
    samples[1] = packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300);
    samples[2] = packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300);
    samples[3] = packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300);
    samples[4] = packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300);
    samples[5] = packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300);
    return samples;
}

I'm positive that functional purists will scream in horror at the sight of the C++ functional version of the code, but it is the closest thing I could do using the standard library. Should you have any ideas on how to improve the functional idiom using the standard library, let me know and I'll update my test set.

Here are the C++ results –

[Table 2: performance comparison of the C++ versions – the table data was lost and is no longer available.]

This is mostly unsurprising except for the lower-right cell, which seems to indicate that the Scala version is faster than C++. I guess the reason is that I am using timing values rebased against a fixed cost you pay for executing the code under test, a cost I don't want to include in my measurements (see “Methodology” below). My interpretation is that in Scala the overhead hides the cost of a trivial expression, so the base time is much closer to the execution time than it is in C++.

Conclusions

Drawing general conclusions from a specific case is anything but easy. Anyway, according to the data and to intuition, if you want to code in the functional idiom you pay a price that can sometimes be hefty. Paying an execution price, be it memory or CPU cycles, to raise the level of abstraction is something that has been with computer programming since the dawn of the craft. I remember the echoes of the rants against high-level languages, in favor of assembly, to avoid paying a 30% penalty. Those times are gone, but facing a x4-x7 overhead is not something you can always afford.

(Whether raising the level of abstraction by employing the functional idiom is worth it is itself disputed, but I will spare this for another post.)

Writing efficient code is not always intuitive. Constructs that seem innocent enough turn out to be compiled into time-consuming lower-level code. Peeking at the compiled code is always useful. While languages such as C++ come with detailed information about the performance of the various constructs and library components, Scala (like Java) lacks this information. Also, the huge amount of syntactic sugar employed by Scala does not simplify the life of the programmer who has to optimize code.

Eventually, I cannot avoid remarking that the performance of JVM languages (and even more so of functional ones), despite all the progress in JVM technology, is still way behind that of native languages. Even employing a functional approach, C++ is some 80 times faster than Scala. You can argue that the C++ code above is not really functional, but nothing prevents me from using a mostly functional library for C++, with roughly the same performance as the functional version of the C++ code above. It could be more tedious to write because of the lower grade of syntactic glucose in C++, yet it would still be functional.

I'm curious to see how Scala Native compares, but this again is a topic for another post.

The last thought I want to report here is that you can make Scala code much faster by avoiding the functional paradigm and peeking at the compiled code. It is something you are not usually expected to do, but an improvement of 1200 times is a worthy gain for the most executed code.

A very last thought for those who think that nowadays we have such powerful computers that analyzing performance and optimizing is a waste of time. I would like to remind them that data processing requires energy, and today energy production still mostly relies on processes that produce CO2, something the world doesn't need. Data processing that runs at twice the speed of another will produce half the carbon dioxide emissions. The environmental benefits of switching to C and C++ should be clear to everyone.

Methodology

A couple of words on methodology. The compilers were Scala 2.10.6, Scala 2.12.2, and GNU C++ 7.3.1. All the tests were performed on Linux (Fedora Core 27) using /bin/time --format="%U %S" to print the user and kernel times of the command, which were then summed together. These are the times the CPU spent executing the process, not wall-clock times. The decoding function is executed 0x7FFFFFFF times (some 2 billion) per run, and each program is tested over 12 runs. The fastest and the slowest runs are discarded, and the others are used to compute an average, which is taken as the measure of the test.

The base time is computed by employing a dummy decoder that does nothing but fill the result array with constants.
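
Something along these lines (my reconstruction, not the verbatim dummy):

// Pays the same call and allocation toll as the real decoders, but decodes nothing.
private def decodeDummy(packet: Array[Byte]): Array[Int] = {
    val samples = new Array[Int](SAMPLES_PER_BLOCK)
    var i = 0
    while (i < SAMPLES_PER_BLOCK) {
        samples(i) = 1
        i += 1
    }
    samples
}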

This setup should take the JIT effect into account and measure just the code listed above.

Zen and the Art of Motorcycle Maintenance

For a long time I'd wanted to read Zen and the Art of Motorcycle Maintenance. It came to my attention several years ago while browsing hacker culture stuff, like the Jargon File. Regrettably, even with the power of modern web search engines, I can't find now which reference I stumbled upon.

I knew that this book doesn't have much to do with real motorbike maintenance (nor much with Zen, for that matter); rather, motorbike maintenance is used by the author to make his points about a more general picture. So, when I found this book among the Amazon Kindle daily deals, purchasing it was a no-brainer.

The book is about a journey through the USA on an old motorcycle. The main character is a dad traveling with his son. The book is written in the first person and is split between the present, where the road unfolds before the duo, and the past, when the character worked as a university professor, so intelligent and focused that he went mad while restructuring the whole of philosophy in his mind. Eventually his journey through philosophical systems ended with electroshock therapy that turned him into another self – a less introverted person, possibly somewhat less sharp (even if his reconstruction of his previous self's thought is really deep).

I liked the book, but I cannot call it an eye-opener. Here and there you can find interesting notes and thoughts on issues like technology and people, technology and maintenance, and quality. On the other hand, if you are interested in philosophy, you will find this book quite interesting, since it tackles just about all the major philosophers and their systems, applying their thought to concrete aspects of life.

[Btw, all these Amazon ads you find in this post (and the next ones) are just a desperate attempt at recovering some cents toward my website hosting. I don't want you to buy anything you wouldn't have bought anyway, but if you are interested you can click on the links.]

Lambda World 2017 – Lambda Core, hardcore

Despite its somewhat obscure title, this talk was very clear and entertaining. The speaker, Jarek Ratajski (@jarek000000), who defines himself as a functional programmer, wizard, and anarchitect, definitely knows his art and how to give an engaging speech.

Btw, an anarchitect is a person who breaks the rules so that something can eventually get done.

As usual, mistakes, errors, and inaccuracies are all my fault. One last note: you need Scala 2.12 to compile the code presented here; older versions are likely to fail.

I really recommend watching the presentation video on YouTube.


Let’s start with a question for the attendees – you are on a desert island and you can take just one thing from the civilized world. What would you like to have to survive?

[someone from the audience shouts “wifi”]

Well, you need Math. Math is the basis for everything, but it is huge! So maybe we can find a small subset, a core, that allows us to recreate the whole of Math.

Lambda is a building block of math: using lambda calculus we can build – one step at a time – the whole of Math. Historically, Math has been built on set theory; unfortunately, sets have some problems in terms of paradoxes, so mathematicians resorted to this simpler mechanism.

So lambda (λ) is a function that takes some lambda as a parameter and produces another lambda. A lambda is written as:

λ x. some expression

A λ x is applied to λ y with the following notation –

x(y)

To a Scala developer:

trait Lambda {
    def apply(x: Lambda) : Lambda
}

Now suppose we are on a deserted island with just our Scala compiler. Just the compiler – no jars, no collections, no Scala utils, no Boolean, Int, Float, Double, String... nothing at all.

The only exception to this nothingness is the badlam package, there to simplify the presentation of the concepts.

Let’s define the identity, which is a function that returns the function accepted as an argument:

λ x. x

In Scala it turns out to be:

val identity : Lambda = (x) => x

val id2 = identity(identity)

Now we can define a function that accepts multiple arguments (via the curry mechanism) and always returns the first one:

val funny: Lambda = (x) => (y) => x

Note that names are not important, but the structure is. The “funny” function structure can be used to implement true and false concepts:

val aTrue = funny
val aFalse : Lambda = (x) => (y) => y

Note that aFalse is complementary with respect to aTrue. Now that we have true and false symbols we can define boolean operators:

val and : Lambda = (p) => (q) => p(q)(p)

You can read it like this: and is a function that, via currying, accepts two arguments, p and q. Consider p: it can be either true or false. If it is true, its lambda evaluates to its first argument, otherwise to the second. So, when p is true, the and expression evaluates to q – and is true when both arguments are true. If p is false, then p itself is returned – no need to consider q.

Let’s check with Scala:

val result1 : Lambda = and (aFalse)(aTrue)

This evaluates to aFalse, while

val result2:Lambda = and (aTrue)(aTrue)

This evaluates to aTrue.

The or function is based on the same scheme used by and:

val or:Lambda = (p) => (q) => p(p)(q)

Defining not is a matter of swapping the true/false selection:

val not: Lambda = (p) => (a) => (b) => p(b)(a)
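
A quick check, following the same pattern as before (my addition):

val result3: Lambda = not(aTrue) // structurally identical to aFalse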

So far, so good – we have defined all the tools we need for boolean calculus. Let's see if we can express numbers. One approach could be:

val zero: Lambda = (f) => (x) => x
val one: Lambda = (f) => (x) => f(x)
val two: Lambda = (f) => (x) => f(f(x))
val three: Lambda = (f) => (x) => f(f(f(x)))

So numbers are functions (of course) that accept a function and an argument and return zero or more applications of the function to the argument.

Zero applies f no times – it returns its argument untouched. One is a single application of f to the argument; two applies f to the result of one, that is, f applied to f applied to x; and so on...

This may work, but it is boring to encode all numbers in this way. A function that computes the successor of a given number could be a good tool to lessen the burden:

val succ:Lambda = (n) => (f) => (x) => f(n(f)(x))

[NdM – well, this is mind-boggling and took me a while to decode. Conceptually it is simple – just remember that an integer n is represented by a lambda that accepts a function f and an argument x and returns the composition of n nested applications of f over x. So replace x with f(x) and you are done.]
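
[NdM – a worked check of my own: succ(two) = (f) => (x) => f(two(f)(x)) = (f) => (x) => f(f(f(x))), which is exactly three.]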

Now we can define addition and multiplication

val plus: Lambda = (m) => (n) => (f) => (x) => m(f)(n(f)(x))
val mult:Lambda = (m) => (n) => (f) => m(n(f))

[NdM – These two take a while as well. Sum is intuitively easy: take 3+2=5; in lambda terms, x => f(f(f(x))) plus x => f(f(x)) gives x => f(f(f(f(f(x))))). You may read plus as (m, n, f) => (x) => m(f)(n(f)(x)), that is, a function that takes two operands and one function. Remember that m represents an integer by applying a function m times to an operand. So pass f in as the operand and you get m applications of f; now apply this to an argument that is n applications of f to x.]

[NdM – Multiplication is similar; x is omitted in the definition above to keep the code terse, but you can read it as:

val mult:Lambda = (m) => (n) => (f) => (x) => m(n(f))(x)

Keeping the sum approach in mind, this is similar – it applies, m times, a function that is composed of n applications of f, to x. It took me quite some neuron effort to figure this out.]

Predecessor can be defined as:

val pred: Lambda = (n) => (f) => (x) => n((g) => (h) => h(g(f)))((u) => x)((u) => u)

[NdM – I tried hard to understand this, but I simply couldn't wrap my mind around it: at first I didn't have even the slightest idea of how it is expected to work. If you figure it out, let me know, please... well, indeed, I think that the last part, (u) => u, being an identity, is used to skip one application in the application list, therefore reducing the count by 1...]

[NdM – I found this thorough explanation of the predecessor on Stack Overflow.]

Now we can do something even more interesting – conditionals:

val ifLambda: Lambda = (c) => (t) => (f) => c(t)(f)((x) => x)
val isZero: Lambda = (n) => n((x) => aFalse)(aTrue)
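
A quick usage check (my addition, not from the talk) – the two branches are wrapped in dummy-argument lambdas, and the selected one is finally applied to the identity:

val sel: Lambda = ifLambda(isZero(zero))((x) => one)((x) => two)
// isZero(zero) behaves like aTrue, so the "then" thunk is picked: sel evaluates like one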

Recursion would be a nice addition to the computing arsenal of lambda calculus, since it allows the expression of iterative algorithms. But how are we supposed to have a function call itself when lambda functions are anonymous?

First, let’s define a function that makes its argument call itself:

val autocall:Lambda = (x) => x(x)

Then we need a helper function that makes something call itself

val Y: Lambda = (f) => autocall((y) => f((v) => y(y)(v)))

And finally, we define a function containing the body of the recursive function:

val G:Lambda = (r) => (n) =>
  (ifLambda(isZero(n))
    ((x) => one )
    ((x) => mult(n)(r(pred(n))))
  )

Now we have everything to recursively compute a factorial:

val fact = Y(G) // this makes G recursive

[NdM: Once more I had a hard time trying to understand this part. It makes sense intuitively, but I got lost in details such as the Y function. The overall idea is pretty well laid out.]
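
[NdM: as a sanity check of my own, dec(fact(three)) – using the dec converter defined at the end of this post – evaluates to 6, as expected.]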

Turing Equivalent?
Lambda calculus has been proved to be Turing-equivalent, meaning that every algorithm that can be implemented on a Turing machine can also be implemented using lambda calculus. Therefore, you have no excuse – everything can be done purely functionally!

Equivalence

In mathematics there are a lot of problems, an interesting one is about theorems.

For a mathematician, a theorem is something like

if ( condition(args)) then statement(args)

That is, if something holds, then something else is true. It would be nice to build a machine that automatically checks this. This is what mathematicians were looking for about a century ago – a machine that renders us useless by proving theorems on its own.

In the same way we used lambda calculus to represent numbers, booleans, and statements, we could rewrite a theorem as a lambda calculus expression and then execute it to let it show true or false, determining whether the theorem holds.

This could be such a machine:

def main( args: Array[String]) : Unit = {
    val autocall: Lambda = x => x(x)
    println( SmartDisplay.web.display(autocall) )
    val OMEGA = autocall(autocall)   // Ω: applying autocall to itself never terminates
}

[NdM: even after watching the YouTube video of the talk, I can't tell where this function comes from. I believe Jarek, but I have no explanation of why this function should prove theorems.]

That would be awesome; regrettably, it doesn't work. In fact, the autocall function is invoked on autocall itself, causing a stack overflow. This is generally what happens when you try to analyze a lambda function for equivalence.

This fact was proved in 1936 by Alonzo Church: “No computable function can decide the equivalence of two different lambda functions”.

Despite this, there are two guys on Stack Overflow who are trying to do exactly this in C#.

Lambda calculus is convenient, but it is subject to Kurt Gödel's incompleteness theorem – for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.

In other words, it doesn't matter how cool your formalism is: you still can't fully automate an algorithm that proves the properties of another algorithm.

So this is what, nearly a century ago, a group of mathematicians led by Alonzo Church devised in search of more elegant mathematics.

What I presented today is called untyped lambda calculus. Take plus and try to add true and factorial together: it doesn't make sense. So, with this notation you can also write well-formed expressions that make no sense.

Since not every expression you can write makes sense, it is a programmer’s duty to write and use only those that make sense.

Typed lambda calculus checks correctness for you. Untyped lambda calculus is very much like JavaScript, where you can write anything and nothing is checked for type correctness.

I haven't talked about lazy/eager evaluation in lambda calculus. It is a very complicated issue, even if a fun one. If you do lambda calculus on paper, you will notice that sometimes you need to evaluate lazily and other times eagerly.

Usually, when you read about lambda calculus in Scala, you don't find what I have shown you; you find something else – lambda calculus done on Scala's type system:

sealed trait True extends Bool {
    type If[ T <: Up, F <: Up, Up ] = T
}

sealed trait False extends Bool {
    type If[T <: Up, F <: Up, Up] = F
}

This is performed not at run time but at compile time. This is awesome, because the compiler does all the calculations and produces such fine errors that you can't even figure out where they end.

Sources

  • Wikipedia
  • Blog (C# version): Dixin's blog – this is the post where I took inspiration for this talk.
  • Book: Roger Penrose, The Emperor's New Mind – this is not only about lambda calculus, but about computation in general; reading it will help you better grasp the subject.
  • Lambda visualization: Badlam – nothing special, just a package to help the presentation; it is known to work only in the presentation and not elsewhere.
  • The same presentation in Java – the original work. This presentation was first done in Java (for fun, and drinking beer, of course).

NdM: that's the end of the presentation. Here I add the fragment of code I used to check the examples. If you prefer, you can use Badlam; the code below is in Scala.

Function to convert from lambda numbers to decimal –

def dec( y : Lambda ) : Int = {
    case class Counter( counter: Int) extends Lambda {
        def apply( x: Lambda ) : Lambda = {
            this
        }
    }
    case class Fun() extends Lambda {
        def apply( x: Lambda ) : Lambda = {
            Counter( x.asInstanceOf[Counter].counter+1 )
        }
    }

    y( Fun() )(Counter(0)).asInstanceOf[Counter].counter
}

The trick is to give the function f of the lambda the semantics of incrementing its argument by 1. If the argument has type Int and starts counting from zero, the conversion is done.

The following code is just a pretty printer for lambdas that are not integers:

def del( x : Lambda ) : Unit = {
    x match {
        case `aTrue` => println( "true" )
        case `aFalse` => println( "false" )
        case `and` => println("<and>")
        case `or` => println("<or>")
        case `not` => println("<not>")
        case _ => println("?")
    }
}

(where aTrue, aFalse, and all the other symbols are defined as in Jarek's presentation).

Lambda World 2017 – Workshop – Don’t fear the Optics

This talk, by Jesús Lopez Gonzales, was quite clear (at least to my challenged functional understanding). As strange as it may sound, the whole idea of optics (in functional programming) is to solve a problem that exists only because of the functional paradigm itself. Cheap humor aside, this makes sense – in structured programming you do the same when you forbid the goto statement and then need other tools (e.g. break, continue, structured statements) to do the same job in a safe, sound, and controlled way.

You can find sources for the running example here: https://github.com/hablapps/dontfeartheoptics.git

But I don’t want to steal the stage. As usual, all mistakes and false predicates are mine (my only defense is that the talk was performed without a mike and loudspeaker system).

Just one last note before starting – the original slides of the presentation used a compact syntax, relying on Scala's high-sucrose diet. I opted for a more verbose syntax that makes clear to those less fluent in Scala what's happening behind the sugar curtain.

Functional Programming is a programming paradigm […] that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data.

(from Wikipedia)

In functional programming, we decide not to change the state of variables once assigned. That means that when we want to change something we have to create a new instance from the existing one.

Consider the position class:

case class Pos(i: Int, j: Int)
val pos1 = Pos(1, 1)

In order to change the position to a new one, the “changing” method just takes the existing pos1 and creates a new instance carrying the new state:

val pos2 = pos1.move(1, 2)

The running example for this talk is a CandyCrush clone. Here are the main classes:

case class Game( ups: Int, level: Option[Level] )
case class Level( targetScore: Long,
                  targetMoves: Long,
                  board: Board,
                  currentScore: Long,
                  currentMoves: Long )
case class Board( height: Int, width: Int, matrix: Map[Pos,Option[Candy]])

Modules are defined as follows:

  • Candy REPL – IO
  • Candy Business logic – state program
  • Candy data layer – data structures & optics

To face the problems posed by immutable state, we resort to the Half-Life narrative – who better than Gordon Freeman, the man with a big lambda on his chest, could help us in the process?

The talk uses an explorative approach – you may want to explore the area to locate the problem (the Alien), then try to solve it using some techniques (equipping new weapons), and then refine the solution until you find an elegant way to fix the problem.

The first enemy to defeat is how to keep the state unchanged.

CandyState.scala is the source file where the “getters and setters” are located. There are several points where you need to update the state as the game progresses:

def modifyScore( f: Long => Long ) : Level => Level
def modifyMatrix( f: CandyMatrix => CandyMatrix ) : Level => Level

[NdM: note that, as is typical of this paradigm, these methods accept a function that transforms the score (or the playfield) into a new score (or playfield), and return a function that changes the Level accordingly. I found this revealing and somewhat mind-boggling.]

How would you do it in a traditional way?
In order to modify you need to copy:

def modifyScore( f: Long => Long ) : Level => Level =
  lv => lv.copy(currentScore = f(lv.currentScore))

def modifyMatrix( f: CandyMatrix => CandyMatrix ) : Level => Level =
  lv => lv.copy( board = lv.board.copy( matrix = f(lv.board.matrix)))

This is not straightforward, at least not in general, because you need to copy through several indirection levels. Functional programming is about elegance and modularity, not this.

(Alien identified!)

Lens comes to the rescue (or, as Jesús put it, the Lens – the crowbar in the Half-Life analogy – is the weapon to equip). A Lens is a parametric class defined over two types: S – the whole – and A – the part:

abstract class Lens[S,A] {
    def get( s: S): A
    def set(a: A): S => S
}

The get method accepts a whole and returns a part. The set method accepts a part and tells you how to change the whole to incorporate that part.

val _currentScore: Lens[Level,Long] =
    Lens[Level,Long]( 
        lv => lv.currentScore
    )(
        cs => lv => lv.copy(currentScore=cs)
    )
val _board : Lens[Level,Board] =
    Lens[Level,Board](
        lv => lv.board
    )(
        br=>lv=>lv.copy(board=br)
    )
val _matrix : Lens[Board,CandyMatrix] =
    Lens[Board,CandyMatrix](
        bd => bd.matrix
    )(
        mx => bd => bd.copy(matrix=mx)
    )

def modifyScore2( f: Long => Long ) : Level => Level = {
    level : Level =>
        _currentScore.set( f( _currentScore.get(level) ))(level)
}

The code works, but it is lengthy and boring to write. So we can take advantage of the lenses that the @Lenses annotation defines for us.

[NdM: I’m going to expand the talk a little bit here because I lost some passages and I reconstructed them thanks to my Scala-speaking friends]

This annotation instructs the compiler to create one lens method in the companion object for each case class field. [NdM: Oddly enough, for us coming from traditional programming languages, the lens has the same name as the case class field].

[NdM: original example exploited import to inject in the current scope the companion object’s fields, creating a bit of confusion in my mind. In the following examples I will avoid this shortcut in favor of readability].
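For example, here is a minimal sketch of what the annotation buys us (my own illustration, assuming Monocle's @Lenses macro and the talk's Board class):

import monocle.macros.Lenses

// Sketch: @Lenses generates one lens per field in the companion object,
// each named after its field.
@Lenses case class Board(height: Int, width: Int, matrix: Map[Pos, Option[Candy]])

val board = Board(3, 3, Map.empty)
// Board.matrix is a generated Lens[Board, Map[Pos, Option[Candy]]]:
val cleared = Board.matrix.set(Map.empty)(board)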

What if we want to extract the matrix from a level? Operationally we have to navigate through the board (level->board->matrix). This can be done via composition, using the verb composeLens :

def getMatrix2 : Level => CandyMatrix = (Level.board composeLens Board.matrix).get

[NdM: my Scala-speaking friend also told me that def has been used without a real advantage over val. Having used val would have avoided an unneeded function call.]

The same can be applied to modify:

def modifyMatrix2( f: CandyMatrix => CandyMatrix ) : Level => Level = 
    (Level.board composeLens Board.matrix).modify(f)

This syntax is slick, but still more verbose than, say, Haskell, where you write just a dot instead of the composeLens verb.

2nd Enemy – Threading State Zombie (State Monads)

Consider the function

crushPos( pos: Pos): Level => (Level, Long)

Its purpose is to crush a given position. This is accomplished by updating the map and updating the score. Additionally, we want the function to return a pair composed of the level and the new score. The first implementation you may want to try is to navigate through the level to change the matrix, then navigate through the updated version of the level to update the score, and then prepare the pair with the updated level and the score.

def crushPos( pos: Pos ) : Level => (Level, Long) = {
    lv0 =>
        val lv1 = (Level.board composeLens Board.matrix).modify(mx => mx.updated(pos, None))(lv0)
        val lv2 = Level.currentScore.modify( cs => cs + 1 )(lv1)
        (lv2, Level.currentScore.get(lv2))
}

This works, but it is error-prone because the programmer must ensure to properly pipe all the changes through the transformations.

The new weapon is the State:

abstract class State[S,A](run : S => (S,A))

This class defines a mechanism to execute a given action on an object and produce the updated object and a value. And it can be used like:

def crushPos2(pos: Pos): Level => (Level, Long) = {
    lv0 =>
        val (lv1, _) = State[Level, Unit](
            lv => ((Level.board composeLens Board.matrix).modify(mx => mx.updated(pos, None))(lv), ())
        ).run(lv0)
        State[Level, Long](
            lv => {
                val nlv = Level.currentScore.modify(cs => cs + 1)(lv)
                (nlv, Level.currentScore.get(nlv))
            }
        ).run(lv1)
}

This may be more elegant but can hardly be called better, and it still requires the programmer to properly set up the execution pipe. [NdM: also note that the first part (from val to run(lv0)) may be replaced by the more compact val lv1 = (Level.board composeLens Board.matrix).modify(mx => mx.updated(pos, None))(lv0)]

This can be improved by using an implicit MonadState – an instance implicitly built from the State class – so that states can be bound together using the >> operator. In code:

case class State[S,A](run : S => (S,A))

implicit def monadState[S]: Monad[State[S, ?]] = ...

Our code becomes:

def crushPos3(pos: Pos): State[Level, Long] = {
    State[Level, Unit] {
        lv =>
            val lv1 = (Level.board composeLens Board.matrix).modify(mx => mx.updated(pos, None))(lv)
            (lv1, ())
    } >>
    State[Level, Long] {
        lv =>
            val nlv = Level.currentScore.modify( cs => cs + 1 )(lv)
            (nlv, Level.currentScore.get(nlv))
    }
}

[NdM: be careful in placing the >> operator! First, IntelliJ is not able to recognize it and marks it as an unknown operator; second, thanks to Scala’s forgiving punctuation, you need to place >> on the same line as the closing bracket. I couldn’t figure out the right way to write this until I mailed the author of the talk for help. He responded quickly and kindly and set me in the right direction. Thanks, Jesus]

[NdM: a quick word on binding. Bind is the same as flatMap, that is, the way monads transform their content. In this case, the binding composes the two run actions into one. Since the computation accepts one value and produces two, you may wonder what happens to the side value (in our case the score) of the first run. Answer – it gets discarded and only the last one appears in the final result]
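[NdM: to make the binding concrete, here is my own minimal sketch – not the talk’s code – of how >> can be derived from flatMap for this State class:]

case class State[S, A](run: S => (S, A)) {
    def flatMap[B](f: A => State[S, B]): State[S, B] =
        State { s =>
            val (s1, a) = run(s)   // run the first action
            f(a).run(s1)           // feed its state into the second
        }
    def >>[B](next: State[S, B]): State[S, B] =
        flatMap(_ => next)         // discard the first result value
}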

MonadStates can be composed so that they pipe the state one through the other.

def crushPos4( pos: Pos ) : State[Level, Long] =
    (Level.board composeLens Board.matrix).mod(mx => mx.updated( pos, None )) >> Level.currentScore.mod(_ + 1)

[NdM: now, this is a bit harder to digest – where does .mod come from? And, more importantly, how does .mod know what to return? .mod is a method of the StateLens object (well, nearly true, but true enough for the sake of this analysis). Always remember that you are not dealing with actual values: you are forging functions that will be applied to actual values later. Note that the lenses in the expression both operate on the Level class, so the state generated by mod operates on Level. The result type is derived from the right operand of the >> operator.]
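[NdM: my own rough reconstruction – not Monocle’s actual code – of what a state-aware mod for a Lens[S, A] could look like:]

def mod[S, A](lens: Lens[S, A])(f: A => A): State[S, A] =
    State { s =>
        val s2 = lens.modify(f)(s)   // update the whole through the lens
        (s2, lens.get(s2))           // return the new focus as output value
    }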

Enemy 3: Optional Antlion

So far so good, but there are still other entities that don’t fit properly in the picture. What about getting and modifying the current score from the Game? The problem we face is that level is an Option inside Game, so lenses can’t be used in the plain-vanilla way.

Let’s try a first attempt at the solution:

def getScore: State[Game, Option[Long]] =
    Game.level.extract.map( olv => olv.map( lv => Level.currentScore.get(lv)))

def modifyScore(f : Long => Long) : State[Game, Unit] =
    Game.level.mod_(olv => olv.map(lv => Level.currentScore.modify(f)(lv)))

extract is a method of Monocle’s State support for optics: it reads the value focused by the lens into a State, without changing it.
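In the same spirit as the mod sketch above (again mine, not Monocle’s code), extract for a Lens[S, A] could look like:

def extract[S, A](lens: Lens[S, A]): State[S, A] =
    State(s => (s, lens.get(s)))   // read the focus, leave the state untouched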

Nice, but cumbersome. The abstraction we can use here is the Prism (which is defined in Monocle, roughly in the following way):

abstract class Prism[S,A] {
    def getOption : S => Option[A]
    def reverseGet : A => S
}

The first method takes an object and produces an option, the second method rebuilds the object given a part. So, let’s define our prism:

import monocle.Prism

def mySome[A] : Prism[Option[A], A] = Prism[Option[A], A](s => s)(a => Some(a))

This prism deals with an Option[A], but Monocle already provides this tool, and it’s called some:

import monocle.std.option.some

def getScore2: State[Game, Option[Long]] =
    (Game.level composePrism some composeLens Level.currentScore).extract

def modifyScore2( f : Long => Long ) : State[Game, Unit] =
    (Game.level composePrism some composeLens Level.currentScore).mod_(f)

Final Enemy: Multiple Fast Zombies

Now we want to crush an entire column of the board.

We can combine lenses and prisms into something else:

abstract class Optional[S,A] {
    def getOption(s: S): Option[A]
    def set(a: A): S => S
}

As for the Prism, we have a getOption method that exposes the Option but, instead of reverseGet, there is a set that transforms S into another S given an A.
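For instance, a hand-written Optional focusing on the Level possibly contained in a Game might look like this (my sketch, using Monocle’s Optional constructor and the Game class above):

import monocle.Optional

// getOption returns None when the game has no current level;
// set wraps the new level back into the game.
val _level: Optional[Game, Level] =
    Optional[Game, Level](_.level)(lv => game => game.copy(level = Some(lv)))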

Optionals  can be created by composing prisms and lenses as follows:

val op: Optional[Game, CandyMatrix] =
    Game.level composePrism some composeLens Level.board composeLens Board.matrix

So we can write the first iteration of the crushColumn method as –

def crushColumn(j: Int): State[Game, Unit] =
    op.mod_(
        mx => mx.map {
            case (p,_) if p.j == j => (p,None)
            case x => x
        }
    )

This function operates on the game matrix and removes the candy when the column of the position is the same as j.

It is not bad per se, but we are doing this in a manual way. The solution could be improved by using a filterIndex:

def crushColumn2( j : Int ) : State[Game, Unit] =
    op.mod_(
        mx => filterIndex[CandyMatrix, Pos, Option[Candy]]( p => p.j == j ).set(None)(mx)
    )

Let’s see how to automate this. Let’s introduce the abstract class Traversal:

abstract class Traversal[S,A] {
    def modifyF[F[_]: Applicative](f:A => F[A])(s:S): F[S]
}

Now it is possible to compose the Traversal with other lenses such as:

def crushColumn3( j: Int ) : State[Game,Unit] =
    (op composeTraversal filterIndex((p: Pos)=> p.j == j )).assign_(None)

filterIndex is a Monocle function that, along with the implicit mapFilterIndex, allows the traversal to apply over the map collection.

Since compose syntax may tend to be a bit verbose, you can also use the following operators:

  • ^<-?  compose with a Prism
  • ^|->  compose with a Lens
  • ^|->> compose with a Traversal

Conclusions: optics are abstractions for changing parts of wholes. These abstractions are composable to access complex data. The Monocle library provides a hybrid of concrete and van Laarhoven optics [NdM: sorry, I missed the explanation entirely]. State monads encapsulate state threading and produce output values.


Max’s comment – The talk has been very helpful in improving my understanding of this aspect of functional programming. I still find Scala syntax to be on the verge of cryptic, and dealing with new concepts doesn’t help either. Composing stuff the way functional programming does is a really powerful mechanism that lets the programmer recycle code effectively.

I find the use of symbols to further reduce the character count really dangerous. But this is a topic for another post. Let’s just say that Scala is in danger of becoming write-only 🙂 Looking forward to attending the next Lambda World!

Lambda World 2017 – Unconference – Category Theory Crash Course

Here I am in Cadiz to attend the 2017 edition of Lambda World. Cadiz is really a nice frame for this conference, and the weather is so good that it feels like being back on vacation instead of in mid-Autumn.

Anyway, this morning I attended the “unconference”, a sort of informal prelude to the real conference. After the first, planned talk, people could register to present their own talks and the other attendees could vote for their preferred ones. This yielded two interesting talks – one on declarative testing and the other on category theory. Here is what I learned from the latter, held by Rúnar Bjarnason.

First, this is algebra stuff, really. Not the kind of ‘a+b’ you do in middle school, but the higher-order stuff you do later when you study groups, monoids and the like.

How these concepts apply to practical programming still eludes me, but I sense that something is there, waiting for my brain to grasp it.

Basically, an abstract algebra of functions wants to answer the following questions – what do functions do? And what can we say about them? Here we go; all mistakes, errors and inconsistencies are just mine – blame me, not Rúnar.

A Category is defined by objects and arrows and how we can compose those arrows.

If you consider language types as objects, then functions can be thought of as arrows from one object to another:

Some types: A, B, C.
Some functions f, g.

f: A => B
g: B => C

A composite function is formed by invoking a function on the result of another one:

g compose f = (x => g(f(x)))

As long as types line up we can concatenate function composition.
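In Scala, for instance:

val f: Int => String = _.toString     // f: A => B
val g: String => Int = _.length       // g: B => C
val h: Int => Int = g compose f       // h = x => g(f(x))
// h(42) == 2, i.e. "42".length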

If a function links an object to itself (in the example a type to itself), this is the identity function:

identity function: identity[A] : A=>A
identity compose f = f
f compose identity = f

Sometimes composition is expressed with a dot. An interesting property is that composition is associative:

h.g.f = (h.g).f = h.(g.f)

(so there is no need for parentheses).

In Scala you have a category for functions and types, but also for types and inheritance (subtyping). This category is defined as follows:

Objects: Scala Types: A, B, C…
Arrows: Subtype relations: A <: B, B <: C…
Composition: [ A <: B, B <: C ] => A <: C
Associativity: A <: B <: C (no need for parenthesis)
Identity: A <: A

Another example of a category is the pre-order (<=). Objects are instances of a type on which the <= binary relation is defined:

Objects a, b, c…
Arrows: a <= b, b <= c
Transitivity: <=
Identity: a <= a

E.g. the natural numbers form a preorder category.

Monoids are categories with one object. This object, by convention, is written as ‘*’. All arrows link * to itself. In a turn of notation, I’ll use the symbol |+| for composition and ‘mzero’ for identity.

Objects: *
Arrows: f, g, h: * -> *
associativity
identity: * -> *
mzero |+| a = a
a |+| mzero = a

The object is just an anchor to hold the arrows.

Category of preorders:
A preorder is a set with <=
Objects: all the preorders (e.g. Scala integers)
Arrows: homomorphisms on preorders

[NdM: at this point things start to be complicated and I have likely lost the speaker. What follows are likely to be glimpses of his talk… the parts I got]

Monotonic functions,
e.g. string length: _.length is a monotonic function.
It preserves the relationship:
if x <= y then h(x) <= h(y)

Category of monoids:
Objects: all the monoids
Arrows: monoid homomorphism
h( x |+| y) == h(x) |+| h(y)
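A concrete instance: string length is a monoid homomorphism from strings under concatenation to integers under addition:

val x = "lambda"
val y = "world"
// h(x |+| y) == h(x) |+| h(y), with h = _.length
assert((x + y).length == x.length + y.length)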

Category of categories
Objects: all the categories
Arrows: category homomorphisms F
F( f . g ) = F(f) . F(g), e.g. _.map( f compose g ) = _.map(f) compose _.map(g)
These are functors.

Given a category C, there is a category of endofunctors on C
endofunctor: F[_]
Objects: functors
Arrows: functor homomorphisms (natural transformations)
h: F[A] => G[A]
h compose _.map(f) = _.map(f) compose h

trait ~>[F[_], G[_]] { def apply[A](a: F[A]): G[A] }
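For example, headOption can be seen as a natural transformation from List to Option:

val headOption: List ~> Option = new (List ~> Option) {
    def apply[A](as: List[A]): Option[A] = as.headOption
}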

A monad on C is an endofunctor M[_] together with two natural transformations:

pure[A] : A=>M[A]
join[A] : M[M[A]] => M[A]
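For a concrete endofunctor like List, these look like (my sketch):

def pure[A](a: A): List[A] = List(a)                    // wrap a value
def join[A](mma: List[List[A]]): List[A] = mma.flatten  // flatten one layer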

A monad is a kind of category:
M[M[M[A]]] can be associated any way we want, preserving the result.

A monad is a monoid in an endofunctor category.

[And with this truth the brief talk is over]

Danger Level of the Step Choreography

(original article)

Choreographed step is one of my favorite sports activities. As I already wrote, I like step because it requires both coordination and memory, and it puts your muscle memory to the test to the rhythm of dance music. Over the past year and a half I have been lucky enough to have a stellar instructor, capable of proposing choreographies that are always new, exciting and sometimes crazy at every lesson. Her choreographies are great fun, but sometimes they include moves that carry some level of danger. If you get distracted, or you are not fast enough in updating your muscle memory, you can miss the step or collide with someone else.
I prepared a table (similar to my “Weekend Effectiveness Degree”) that assigns a number of skulls to each dangerous feature of a choreography. You simply add one skull for each feature present in the choreography to get its danger level. Today we reached 3 skulls.

  • Step across the step – you have to extend your leg over the step, with the risk of hitting the step with your heel and losing your balance.
  • Backward turn on the floor ending with one foot on the step – you cannot see the step while turning, so you need a fairly precise idea of where it is, otherwise you may hit it with your foot, or land your foot only partially and lose your balance.
  • Two steps each – usually the steps are quite close to each other, and the choreography typically includes moves with one foot between the two steps. You may trip over a step or land your foot badly. If the steps are very close, you may trip over your own legs.
  • Close steps – when your step is next to someone else’s, good synchronization becomes necessary, otherwise you may collide. Telling right from left usually helps.
  • Shared steps – at certain points of the choreography two people share the same step. Usually the choreography leaves enough room for both, but here, too, good timing and good synchronization are needed to move in the right direction.
  • Step swap – everyone has their own step, but at some point everybody moves to another step at the same time. Usually the choreography opens a timed corridor to let you reach your destination. “Timed” is the key.
  • Precise synchronization – two people are quite close, likely on the same step, but perform different moves. The moves combine with the right timing so that there is no collision. This is one of the most dangerous situations because you cannot rely on symmetry or take cues from your partner’s moves; any error, even a small one, causes a collision.