Tag: Scala

Our Fathers’ Faults – Failure is not an Option

Our Fathers’ faults.

Intelligent people learn from their mistakes, wise people learn from others’ mistakes. Unlucky people learn from the mess they have to fix in someone else’s code.

Working as a programmer is not always an easy task. On lucky days you feel like a god, planning and designing architectures, wisely crafting virtual nuts, cogs, and bolts that happily tick together in an elegant and coordinated choreography, while on bad days you are bug fixing in your own code. On really sh**ty days you are bug hunting in code written (by a long-gone someone else) to explore, demo, and test the coolness factor.

Continue reading “Our Fathers’ Faults – Failure is not an Option”

Scala Italy 2019

Scala Italy is the Italian conference on the Scala language, now at its 7th edition – a time to meet old friends, make new friends, talk about programming, and attend high-quality speeches about the Scala language.
This year, following the tradition started in 2018 🙂, the conference lasted two full days.
Conference videos are published on this Vimeo channel.

Demographics

By a very rough approximation, I counted about one hundred people, organizers included. While the headcount went down (last year we were about 150), the average age went up. Still not at the level of C++ conferences, but the number of baby programmers was quite low. Also, the percentage of women was very low (on par with C++ conferences).

Maybe Scala is leaving the phase of enthusiast early adopters (usually young programmers, still in their rebellious years, with a lot of time, curiosity, and energy to invest in something new) and moving toward being something too solid and usable to serve for disputing the establishment.
This isn’t too bad and is possibly a sign of maturity. Even the content of some talks at the conference seems to confirm this idea.

Continue reading “Scala Italy 2019”

Scala Italy 2018

At the end of September, my employer sent me to the awesome city of Florence to attend the Scala Italy conference. Thanks to this patronage, this is the third year I’m able to be at the conference of the Italian Scala Community.

For the first time this year, the conference lasted two days, with two distinct tracks – talks and workshops. The choice of one working day and one non-working day can be a bit odd and is likely a symptom of some ambivalence about the target audience. Whether to keep the conference for Scala enthusiasts attending in their spare time, or to move it to the professional realm – that is the question. Given that the ticket price wasn’t very light on pockets, maybe half of the decision has already been made.

I dutifully took notes during all the talks, so I’d like to prepare some posts from them. I noticed that after I did so for ++it 2018 (Channels are Useful, not only for Water; Zero Allocation and No Type Erasure Futures; Time Travel Debug; ++it 2018 – Coroutines – coming soon… ?; ++it 2018 – Key Notes) the conference organization put all the conference videos online. I’m pretty sure the same is going to happen for Scala Italy 2018 as well, so I’ll wait for your feedback before investing the time needed to write all the posts.

Continue reading “Scala Italy 2018”

Protecting you from danger

One of the interesting features of Scala is the trait construct. In Java you have interfaces, a stripped-down class that lets you express the requirement for classes that implement it to have a given … interface. Just to keep this in an OO perspective – in a certain system all MovableObjects must expose an interface to retrieve, say, position, speed, and acceleration:

interface MovableObject {
    Point getPosition();
    Vector getSpeed();
    Vector getAcceleration();
}

class Rocket implements MovableObject {
    public Point getPosition() {
        return m_position;
    }

    public Vector getSpeed() {
        return m_speed;
    }

    public Vector getAcceleration() {
        return m_acceleration;
    }
    private Point m_position;
    private Vector m_speed;
    private Vector m_acceleration;
}

In Java the interface has no implementation part, just a list of methods that implementing classes (heirs) must define. This is a way to mitigate the lack of multiple inheritance in the Java language – a class can inherit from at most a single base class, but can implement as many interfaces as needed.

Scala traits are different in the sense that they can provide an implementation part and fields. Now that I read the Wikipedia definition, it looks like traits are a bit more mathematically sound than interfaces (which is what you can expect in a functional language). Anyway, in plain English, traits are very close to C++ base classes (with some additions that are stuff for another blog post) in the sense that you can both require an interface and provide common functionality.
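
To make the comparison concrete, here is a minimal sketch (my example, not from the original post) of a trait that both requires an interface and provides common functionality; Point and Vector reuse the names from the Java snippet above:

case class Point(x: Double, y: Double)
case class Vector(dx: Double, dy: Double)

trait MovableObject {
    def position: Point          // abstract: the required interface, as in Java
    def speed: Vector
    def acceleration: Vector

    // concrete shared behavior: what a plain Java interface cannot provide
    def isStill: Boolean = speed == Vector(0.0, 0.0)
}

class Rocket(val position: Point,
             val speed: Vector,
             val acceleration: Vector) extends MovableObject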

Continue reading “Protecting you from danger”

Optimization Time – Scala (and C++)

The Case

It is quite a feeling when you learn that your commit, from a couple of months ago, broke the build in such a subtle way that it took this long to be detected. Possibly more thorough testing and validation of the software would have caught it earlier; nonetheless, it’s there, and you and your coworkers are working hard to delimit the offending code and better understand what caused the mess.

It turned out that some perfectly working code was taking too much time in one of the hot spots of our codebase. More precisely, that code performed a conversion from an incoming data packet into a format suitable for data processing… about 2500 times a second on modest hardware.

The code causing performance so poor that it disrupted the device functionality seemed like a good idea, written in a very functional idiom –

@inline final val SAMPLES_PER_BLOCK = 6
@inline final val BLOCK_LENGTH = 8
@inline final val BLOCK_HEADER = 2

private def decodeFunctional(packet: Array[Byte]): Array[Int] = {
    val lowSamples: Array[Int] = packet.drop(BLOCK_HEADER).map(x => x.toUint)

    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    Array.tabulate(SAMPLES_PER_BLOCK)(
        x => lowSamples(x) | (((msbs << 2 * x) & 0xC00) >> 2)
    )
}

Basically the incoming packet holds six 10-bit samples in 8 bytes, and some bit-shifting’n’pasting is needed to rebuild the six-sample array for later processing.
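
The toUint calls come from an implicit extension over Byte – it shows up as UnsignedByteOps in the decompiled output further below. A minimal sketch of such an extension could be:

// assumed sketch of the extension: reinterpret a signed Byte as an unsigned Int
implicit class UnsignedByteOps(val b: Byte) extends AnyVal {
    def toUint: Int = b & 0xFF   // mask off the sign extension
}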

Once I found that the problem was here, I decided to rewrite the code to get rid of the functional idiom and take an imperative/iterative approach:

private def decodeImperative(packet: Array[Byte]): Array[Int] = {
    // the code below is ugly, but it needs to be fast, don't touch,
    // unless you know what you are doing
    val samples: Array[Int] = new Array[Int](SAMPLES_PER_BLOCK)
    val msbs: Int = ((packet(0).toUint << 8) | packet(1).toUint) & 0xFFF
    var index: Int = 0
    while (index < SAMPLES_PER_BLOCK) {
        samples(index) = packet(BLOCK_HEADER + index).toUint |
                         (((msbs << 2 * index) & 0xC00) >> 2)
        index += 1
    }
    samples
}

(the comment is a colorful note I decided to leave for posterity :-)).

The idea is to get rid of the read-only constraint imposed by functional programming and to avoid all the temporaries.

Optimizing is a subtle art, and even if intuition may guide you, only measurement can tell whether the optimization succeeded. So I set up some speed tests (see “Methodology” below) to objectively assess my results.

The speed improvement was astounding: more than 7 times faster (7.6 to be precise).

Energized by this victory, I decided to switch to heavy optimization weapons. Not being sure whether the Scala/Java compiler really does its optimization homework, I opted for manually unrolling the loop in the code:

private def decodeUnrolled0( packet: Array[Byte] ) : Array[Int] = {
    val PACKET0 : Int = packet(0).toUint
    val PACKET1 : Int = packet(1).toUint
    Array[Int](
        packet(BLOCK_HEADER+0).toUint | ((PACKET0 << 6) & 0x300),
        packet(BLOCK_HEADER+1).toUint | ((PACKET0 << 8) & 0x300),
        packet(BLOCK_HEADER+2).toUint | ((PACKET1 << 2) & 0x300),
        packet(BLOCK_HEADER+3).toUint | ((PACKET1 << 4) & 0x300),
        packet(BLOCK_HEADER+4).toUint | ((PACKET1 << 6) & 0x300),
        packet(BLOCK_HEADER+5).toUint | ((PACKET1 << 8) & 0x300)
    )
}

I was somewhat disappointed to learn that this version is only marginally faster (x1.6) than the original functional version. This didn’t really make sense to me, since the loop is completely unrolled and only trivial computations need to be performed. So I decompiled the .class file:

public int[] CheckPerf$$decodeUnrolled0(byte[] packet)
{
  int PACKET0 = UnsignedByteOps(packet[0]).toUint();
  int PACKET1 = UnsignedByteOps(packet[1]).toUint();
  return (int[])Array..MODULE$.apply(Predef..MODULE$.wrapIntArray(new int[] {
    UnsignedByteOps(packet[2]).toUint() | PACKET0 << 6 & 0x300, 
    UnsignedByteOps(packet[3]).toUint() | PACKET0 << 8 & 0x300, 
    UnsignedByteOps(packet[4]).toUint() | PACKET1 << 2 & 0x300, 
    UnsignedByteOps(packet[5]).toUint() | PACKET1 << 4 & 0x300, 
    UnsignedByteOps(packet[6]).toUint() | PACKET1 << 6 & 0x300, 
    UnsignedByteOps(packet[7]).toUint() | PACKET1 << 8 & 0x300 }), ClassTag..MODULE$.Int());
}

Instead of the Java array I expected, some Scala internals are used to create the array and then convert it into the Java thing. Possibly this was the cause of the slowness. By looking around in the decompiled code I found another function that just used a Java int array. So I rewrote the unrolled version accordingly –

private def decodeUnrolled1( packet: Array[Byte] ) : Array[Int] = {
    val samples = new Array[Int](SAMPLES_PER_BLOCK)
    val PACKET0 = packet(0).toUint
    val PACKET1 = packet(1).toUint
    samples(0) = packet(BLOCK_HEADER + 0).toUint | ((PACKET0 << 6) & 0x300)
    samples(1) = packet(BLOCK_HEADER + 1).toUint | ((PACKET0 << 8) & 0x300)
    samples(2) = packet(BLOCK_HEADER + 2).toUint | ((PACKET1 << 2) & 0x300)
    samples(3) = packet(BLOCK_HEADER + 3).toUint | ((PACKET1 << 4) & 0x300)
    samples(4) = packet(BLOCK_HEADER + 4).toUint | ((PACKET1 << 6) & 0x300)
    samples(5) = packet(BLOCK_HEADER + 5).toUint | ((PACKET1 << 8) & 0x300)
    samples
}

The main difference is that here the array is first declared and allocated, then filled. In the previous version the array was created, initialized, and returned all in the same statement.

The speed improvement was good, but not much better than my imperative version: x7.77 times faster.

Then a colleague pointed out that I was using the “old” Scala 2.10 compiler and that I should try the latest 2.12, which benefits from better interoperability with the underlying JVM 1.8.

In the following table you get the performance comparison:

[Table 1 – performance comparison of the four versions under Scala 2.10 and 2.12 – data no longer available]

Functional and unrolled0 are quite unsurprising, just what you expect from the next version of the compiler. The imperative approach yields quite a boost – 22 times faster than the functional version on the previous compiler. The shocking surprise is in the last column – the unrolled1 version of the code runs in excess of 1200 times faster than the original code!

Before jumping to conclusions, I performed the same tests on the code translated into C++. Here is the equivalent code:

#include <algorithm>
#include <array>
#include <cstdint>

constexpr int SAMPLES_PER_BLOCK = 6;
constexpr int BLOCK_LENGTH = 8;
constexpr int BLOCK_HEADER = 2;

typedef std::array<int,SAMPLES_PER_BLOCK> OutputArray;
typedef std::array<uint8_t,BLOCK_LENGTH> InputArray;

OutputArray decodeImperative( InputArray const& packet )
{
    OutputArray samples;
    uint32_t msbs = ((packet[0] << 8) | packet[1]) & 0xFFF;
    for( int index=0; index< SAMPLES_PER_BLOCK; ++index )
    {
        samples[index] = packet[BLOCK_HEADER + index] |
                         (((msbs << 2 * index) & 0xC00) >> 2);
    }
    return samples;
}

OutputArray decodeFunctional( InputArray const& packet )
{
    OutputArray lowSamples;
    std::copy( packet.begin()+2, packet.end(), lowSamples.begin() );
    uint32_t msbs = ((packet[0] << 8) | packet[1] ) & 0xFFF;
    OutputArray samples;

    int index=0;
    OutputArray::const_iterator scan = lowSamples.begin();
    std::generate(
        samples.begin(),
        samples.end(),
        [&index,&scan,msbs]()
        {
            return *scan++ | (((msbs << 2*index++) & 0xC00) >> 2);
        }
    );
    return samples;
}

OutputArray decodeUnrolled0( InputArray const& packet )
{
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    return OutputArray{
        packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300),
        packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300),
        packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300),
        packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300),
        packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300),
        packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300)
    };
}

OutputArray decodeUnrolled1( InputArray const& packet )
{
    OutputArray samples;
    uint8_t PACKET0 = packet[0];
    uint8_t PACKET1 = packet[1];

    samples[0] = packet[BLOCK_HEADER+0] | ((PACKET0 << 6) & 0x300);
    samples[1] = packet[BLOCK_HEADER+1] | ((PACKET0 << 8) & 0x300);
    samples[2] = packet[BLOCK_HEADER+2] | ((PACKET1 << 2) & 0x300);
    samples[3] = packet[BLOCK_HEADER+3] | ((PACKET1 << 4) & 0x300);
    samples[4] = packet[BLOCK_HEADER+4] | ((PACKET1 << 6) & 0x300);
    samples[5] = packet[BLOCK_HEADER+5] | ((PACKET1 << 8) & 0x300);
    return samples;
}

I’m positive that functional purists will scream in horror at the sight of the C++ functional version of the code, but it is the closest thing I could do using the standard library. Should you have any idea of how to improve the functional idiom using the standard library, let me know and I’ll update my test set.

Here are the C++ results –

[Table 2 – C++ timing results for the four versions – data no longer available]

This is mostly unsurprising except for the lower right column, which seems to indicate that the Scala version is faster than C++. I guess the reason is that I am using timing values rebased by subtracting a standard price you pay for executing the code under test, which I don’t want to include in my measurements (see below about the methodology I used). My interpretation is that the overhead in Scala hides the cost of a trivial expression, so that the base time is much closer to the execution time than it is in C++.

Conclusions

Drawing general conclusions from a specific case is anything but easy and trivial. Anyway, according to the data and to intuition, if you want to code using the functional idiom you pay a price that sometimes can be hefty. Paying an execution price, be it memory or CPU cycles, to raise the abstraction level is something that has been with computer programming since the dawn of this craft. I remember echoes of the rants against high-level languages in favor of assembly, meant to avoid paying a 30% penalty. Now those times are gone, but facing a x4–x7 overhead can be something you cannot always afford.

(Raising the level of abstraction by employing the functional idiom is something that is not undisputed, but I will spare this for another post).

Writing efficient code is not always intuitive. Constructs that seem innocent enough turn out to be compiled into time-consuming lower-level code. Peeking at compiled code is always useful. While languages such as C++ are accompanied by detailed information about the performance of the various constructs and library components, Scala (like Java) lacks this information. Also, the huge amount of syntactic sugar employed by Scala does not simplify the life of the programmer who has to optimize code.

Eventually, I cannot avoid remarking that the performance of JVM languages (and even more so of functional JVM languages), despite the progress in JVM technology, is still way behind that of native languages. Even employing a functional approach, C++ is some 80 times faster than Scala. You can argue that the C++ code above is not really functional, but there is nothing preventing me from using a mostly functional library for C++, with roughly the same performance as the functional version of the C++ code. It could be more tedious to write because of the lesser grade of syntactic glucose in C++, yet it would still be functional.

I’m curious to see how scala-native compares, but this again is a topic for another post.

The last thought I want to report here is that you can make Scala code much faster by avoiding the functional paradigm and peeking at compiled code. It is something you are not usually expected to do, but an improvement of 1200 times is a worthy gain for the most frequently executed code.

A very last thought for those who think that, after all, nowadays we have such powerful computers that analyzing performance and optimizing is a waste of time. I would like to remind them that data processing requires energy, and today energy production mostly relies on processes that produce CO2, something the world doesn’t need. Data processing that runs at twice the speed of another produces half the carbon dioxide emissions. The environmental benefits of switching to C and C++ should be clear to everyone.

Methodology

A couple of words on methodology. The compilers were Scala 2.10.6, Scala 2.12.2, and GNU C++ 7.3.1. All the tests were performed on Linux (Fedora Core 27) using /bin/time --format="%U %S" to print the user and kernel times of the command, then summing the two values together. These are the times the CPU spent executing the process, not wall-clock time. The decoding function is executed 0x7FFFFFFF (some 2 billion) times per run, and each program is tested over 12 runs. The fastest and the slowest runs are discarded and the others are used to compute an average that is taken as the measure of the test.

The base time is computed by employing a dummy decoder that does nothing but filling the result array with constants.

This setup should take into account the JIT effect and measure just the code I listed above.
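
For reference, a minimal sketch of such a test driver (the packet content here is made up, and I assume the decode function under test is in scope):

object CheckPerf {
    // hypothetical packet: 2 header bytes holding the MSBs, then 6 low bytes
    val packet: Array[Byte] = Array(0x12, 0x34, 1, 2, 3, 4, 5, 6).map(_.toByte)

    def main(args: Array[String]): Unit = {
        var sink = 0                   // accumulate results so the JIT cannot elide the calls
        var i = 0
        while (i < Int.MaxValue) {     // 0x7FFFFFFF iterations, as described above
            sink ^= decodeImperative(packet)(0)
            i += 1
        }
        println(sink)                  // observable side effect
    }
}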

Lambda World 2017 – Lambda Core, hardcore

Despite the title, which is a bit obscure, this talk was very clear and entertaining. The speaker Jarek Ratajski (@jarek000000), who defines himself as a functional programmer, wizard, and anarchitect, definitely knows his art and how to make an engaging speech.

Btw, an anarchitect is a person who breaks the rules so that something can eventually get done.

As usual, mistakes, errors, and inaccuracies are all my fault. One last note: you need Scala 2.12 to compile the code presented here; older versions are likely to fail.

I really recommend watching the presentation video on YouTube.


Let’s start with a question for the attendees – you are on a desert island and you can take just one thing from the civilized world. What would you like to have to survive?

[someone from the audience shouts “wifi”]

Well, you need Math. Math is the base for everything, but it is huge! So, maybe we can find a small subset, a core, that allows us to recreate the whole Math.

Lambda is a building block of math. Using lambda calculus we can build – one step at a time – the whole of Math. Historically Math has been built using sets; unfortunately, sets had some problems in terms of paradoxes, so mathematicians resorted to this simpler mechanism.

So lambda (λ) is a function that takes some lambda as a parameter and produces another lambda. A lambda is written as:

λ x. some expression

A λ x is applied to λ y with the following notation –

x(y)

To a Scala developer:

trait Lambda {
    def apply(x: Lambda) : Lambda
}

Now suppose we are on a deserted island with just our Scala compiler. Just the compiler – no jars, no collections, no Scala utils, no Boolean, Int, Float, Double, String… nothing at all.

The only exception to this nothingness is the Badlam package, used to simplify and help the presentation of concepts.

Let’s define the identity, which is a function that returns the function accepted as an argument:

λ x. x

In Scala it turns out to be:

val identity : Lambda = (x) => x

val id2 = identity(identity)

Now we can define a function that accepts multiple arguments (via the currying mechanism) and always returns the first one:

val funny: Lambda = (x) => (y) => x

Note that names are not important, but the structure is. The “funny” function structure can be used to implement true and false concepts:

val aTrue = funny
val aFalse : Lambda = (x) => (y) => y

Note that aFalse is complementary to aTrue. Now that we have true and false symbols we can define boolean operators:

val and : Lambda = (p) => (q) => p(q)(p)

You can read this like: and is a function that, via currying, accepts two arguments p and q. Consider p: it can be either true or false. If it is true, its lambda evaluates to its first argument, otherwise to the second. So, when p is true, the and expression evaluates to q, so and is true when both arguments are true. If p is false, then p itself is returned – no need to consider q.

Let’s check with Scala:

val result1 : Lambda = and (aFalse)(aTrue)

This evaluates to aFalse, while

val result2:Lambda = and (aTrue)(aTrue)

This evaluates to aTrue.

The or function is based on the same scheme used by and:

val or:Lambda = (p) => (q) => p(p)(q)

Defining not is a matter of swapping the true/false behavior:

val not: Lambda = (p) => (a) => (b) => p(b)(a)

So far, so good – we have defined all the tools we need for boolean calculus. Let’s see if we can express numbers. One approach could be:

val zero: Lambda = (f) => (x) => x
val one: Lambda = (f) => (x) => f(x)
val two: Lambda = (f) => (x) => f(f(x))
val three: Lambda = (f) => (x) => f(f(f(x)))

So numbers are functions (of course) that accept a function and an argument and return zero or more applications of the function to the argument.

Zero applies f no times (given f, it returns the identity on x). One is a single application of f; two applies f to the result of one – that is, f applied to f applied to x – and so on…

This may work, but it is boring to encode all numbers this way. A function that computes the successor of a given number could be a good tool to lessen the burden:

val succ:Lambda = (n) => (f) => (x) => f(n(f)(x))

[NdM – well, this is mind-boggling and I had a hard time decoding it. Conceptually it is simple – just remember that an integer n is represented by a lambda that accepts a function f and an argument x and returns a function composing n nested applications of f over x. So replace x with f(x) and you are done.]
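
[NdM: a worked expansion may help. Applying succ to two:

succ(two) = (f) => (x) => f(two(f)(x)) = (f) => (x) => f(f(f(x))) = three

that is, one more application of f is wrapped around the two already encoded.]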

Now we can define addition and multiplication:

val plus: Lambda = (m) => (n) => (f) => (x) => m(f)(n(f)(x))
val mult:Lambda = (m) => (n) => (f) => m(n(f))

[NdM – It takes a while for these two as well. Sum is intuitively easy; take 3+2=5, which in lambdas is: x => f(f(f(x))) + x => f(f(x)) = x => f(f(f(f(f(x))))). You may read plus as: (m,n,f) => (x) => m(f)(n(f)(x)), that is, a function that takes two operands and one function. Remember that m represents an integer by applying a function m times to an operand. So swap in f as the operand and you have m applications of f; now apply this function to an argument that is n applications of f to x.]

[NdM: Multiplication is similar; x is omitted to keep the code terse, but you can read it as:

val mult:Lambda = (m) => (n) => (f) => (x) => m(n(f))(x)

Keeping the sum approach in mind, this is similar – it applies m times a function that is composed of n applications of f to x. I needed quite a neuron effort to figure this out.]

Predecessor can be defined as:

val pred: Lambda = (n) => (f) => (x) => n((g) => (h) => h(g(f)))((u) => x)((u) => u)

[NdM. I tried hard to understand this, but I simply couldn’t wrap my mind around it. I don’t have even the slightest idea of how this is expected to work. If you figure it out, let me know, please… well, indeed, I think that the last part, (u) => u, being an identity, is used to skip an application in the application list, therefore reducing the application count by 1…]

[NdM. I found this thorough explanation of the predecessor on Stack Overflow.]

Now we can do something even more interesting – conditionals:

val ifLambda: Lambda = (c) => (t) => (f) => c(t)(f)((x) => x)
val isZero: Lambda = (n) => n((x) => aFalse)(aTrue)

Recursion would be a nice addition to the computing arsenal of lambda calculus since it allows the expression of iterative algorithms. But how are we supposed to call a function when lambda functions are anonymous?

First, let’s define a function that makes its argument call itself:

val autocall:Lambda = (x) => x(x)

Then we need a helper function that makes something call itself:

val Y: Lambda = (f) => autocall((y) => f((v) => y(y)(v)))

And finally, we define a function containing the body of the recursive function:

val G:Lambda = (r) => (n) =>
  (ifLambda(isZero(n))
    ((x) => one )
    ((x) => mult(n)(r(pred(n))))
  )

Now we have everything to recursively compute a factorial:

val fact = Y(G) // this makes G recursive
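
[NdM: using the dec converter I define at the end of this post, a quick sanity check could be:]

println(dec(fact(three)))  // expected to print 6, i.e. 3!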

[NdM: Once more I had a hard time trying to understand this part. It makes sense intuitively, but I’m lost in details such as the function Y. The overall idea is pretty well laid out.]

Turing Equivalent?
Lambda calculus has been proved to be Turing-equivalent, meaning that every algorithm that can be implemented on a Turing Machine can also be implemented using lambda calculus. Therefore, you have no excuse – everything can be done purely functionally!

Equivalence

In mathematics there are a lot of problems; an interesting one is about theorems.

For a mathematician, a theorem is something like

if ( condition(args)) then statement(args)

That is, if something holds, then something else is true. It would be nice to build a machine that automatically checks this. This is what mathematicians were looking for about a century ago – a machine that renders them useless by proving theorems by itself.

In the same way we used lambda calculus to represent numbers, booleans, and statements, we could rewrite a theorem as a lambda calculus expression and then execute it to let it show true or false, determining whether the theorem holds or not.

This could be such a machine:

def main( args: Array[String]) : Unit = {
    val autocall: Lambda = x => x(x)
    println( SmartDisplay.web.display(autocall))
    val OMEGA = autocall(autocall)
}

[NdM: even after having watched the YouTube video of the conference, I can’t tell where this function comes from. I believe Jarek, but I have no explanation of why this function should prove theorems.]

That would be awesome; regrettably, it doesn’t work. In fact, the autocall function is invoked over autocall itself, causing a stack overflow. This is generally what happens when you try to analyze a lambda function for equivalence.

This fact was proved in 1936 by Alonzo Church: no computable function can decide the equivalence of two different lambda functions.

Despite this, there are two guys on Stack Overflow who are trying to do exactly this in C#.

Lambda calculus is convenient, but it is subject to Kurt Gödel’s incompleteness theorem – for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.

In other words, it doesn’t matter how cool your formalism is, you still can’t fully automate an algorithm that proves the properties of another algorithm.

So this is what a group of mathematicians, led by Alonzo Church, devised nearly a century ago in search of more elegant mathematics.

What I presented today is called untyped lambda calculus. Take plus and try to add true and factorial together. It doesn’t make sense. So, using this notation you can also write syntactically correct expressions that don’t make sense.

Since not every expression you can write makes sense, it is a programmer’s duty to write and use only those that make sense.

Typed lambda calculus checks correctness for you. Untyped lambda calculus is very much like JavaScript, in which you can write everything and nothing is checked for type correctness.

I haven’t talked about lazy/eager evaluation in lambda calculus. This is a very complicated issue, even if a fun one. If you do lambda calculus on paper you will notice that sometimes you need to evaluate lazily and other times eagerly.

Usually, when you read about lambda calculus in Scala, you don’t find what I have shown you; you find something else – lambda calculus done on the type system of Scala:

sealed trait True extends Bool {
    type If[ T <: Up, F <: Up, Up ] = T
}

sealed trait False extends Bool {
    type If[T <: Up, F <: Up, Up] = F
}

This is not performed at run time but at compile time. This is awesome because the compiler does all the calculations and produces such fine errors you can’t even figure out where they end.

Sources

  • Wikipedia
  • Blog (C# version): Dixin’s blog – the post from which I took the inspiration for this talk.
  • Book: Roger Penrose, The Emperor’s New Mind – not only about lambda calculus, but about computation in general. Reading this book will help you better grasp the subject.
  • Lambda visualization – Badlam: nothing special, just a package to help the presentation. It is known to work only in the presentation and not elsewhere.
  • The same presentation in Java – the original work. This presentation was first done in Java (for fun and drinking beer, of course).

NdM: that’s the end of the presentation. Here I add the fragment of code I used to check the examples. If you prefer you can use Badlam; the code below is in Scala.

A function to convert from lambda numbers to decimal –

def dec( y : Lambda ) : Int = {
    case class Counter( counter: Int) extends Lambda {
        def apply( x: Lambda ) : Lambda = {
            this
        }
    }
    case class Fun() extends Lambda {
        def apply( x: Lambda ) : Lambda = {
            Counter( x.asInstanceOf[Counter].counter+1 )
        }
    }

    y( Fun() )(Counter(0)).asInstanceOf[Counter].counter
}

The trick is to give the function f of the lambda the semantics of incrementing its argument by 1. If the argument carries an integer counter and starts counting from zero, the conversion is done.
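
For example, with the numerals defined earlier:

println(dec(three))        // 3 – Fun is applied three times to Counter(0)
println(dec(succ(three)))  // 4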

The following code is just a pretty printer for lambdas that are not integers:

def del( x : Lambda ) : Unit = {
    x match {
        case `aTrue` => println( "true" )
        case `aFalse` => println( "false" )
        case `and` => println("<and>")
        case `or` => println("<or>")
        case `not` => println("<not>")
        case _ => println("?")
    }
}

(where aTrue, aFalse, and all the other symbols are defined as in Jarek’s presentation.)

Lambda World 2017 – Workshop – Don’t fear the Optics

This talk by Jesús Lopez Gonzales was quite clear (at least to my challenged functional understanding). As strange as it may sound, the whole idea of optics (in functional programming) is to solve a problem that exists only because of the functional paradigm itself. Cheap humor aside, it makes sense – in structured programming you do the same by forbidding the use of the goto statement, and you need other tools (e.g. break, continue, structured statements) to do the same job in a safe, sound, and controlled way.

You can find sources for the running example here: https://github.com/hablapps/dontfeartheoptics.git

But I don’t want to steal the stage. As usual, all mistakes and false predicates are mine (my only defense is that the talk was performed without a mike and loudspeaker system).

Just one last note before starting – the original slides of the presentation used a compact syntax, relying on Scala’s high-sucrose diet. I opted for a more verbose syntax that makes clear to those less fluent in Scala what’s happening behind the sugar curtain.

Functional Programming is a programming paradigm […] that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data.

(from Wikipedia)

In functional programming, we decide not to change the state of variables once assigned. That means that when we want to change something we have to create a new instance from the existing one.

Consider the position class:

case class Pos(i: Int, j: Int)
val pos1 = Pos(1, 1)

In order to change the position to a new one, the “changing” method just takes the existing pos1 and creates a new instance with the new state:

val pos2 = pos1.move(1, 2)

The running example for this talk is a CandyCrush clone. Here are the main classes:

case class Game( ups: Int, level: Option[Level] )
case class Level( targetScore: Long,
                  targetMoves: Long,
                  board: Board,
                  currentScore: Long,
                  currentMoves: Long )
case class Board( height: Int, width: Int, matrix: Map[Pos,Option[Candy]])

Modules are defined as follows:

  • Candy REPL – IO
  • Candy Business logic – state program
  • Candy data layer – data structures & optics

To face the problems posed by immutable state we resort to the Half-Life narrative – who better than Gordon Freeman – the man with a big lambda on his chest – could help us in the process?

The talk uses an explorative approach – you may want to explore the area to locate the problem (the Alien), then try to solve it using some techniques (equipping new weapons), and then refine the solution until you find an elegant way to fix the problem.

The first enemy to defeat is how to keep the state unchanged.

CandyState.scala is the source file where the “getters and setters” are located. There are several points where you need to update the state as the game progresses.

def modifyScore( f: Long => Long) : Level => Level
def modifyMatrix( f: CandyMatrix => CandyMatrix ) : Level => Level

[NdM: note that, true to this paradigm, these methods accept a function that transforms the score (or the playfield) into a new score (or playfield) and return a function that changes the Level accordingly. I found this revealing and somewhat mind-boggling.]

How would you do it in a traditional way?
In order to modify you need to copy:

def modifyScore( f: Long => Long ) : Level => Level =
  lv => lv.copy(currentScore = f(lv.currentScore))

def modifyMatrix( f: CandyMatrix => CandyMatrix ) : Level => Level =
  lv => lv.copy( board = lv.board.copy( matrix = f(lv.board.matrix)))

This is not straightforward, at least not in general, because you need to copy through several indirection levels. Functional programming is about elegance and modularity, not this.

(Alien identified!)

Lens comes to the rescue (or, as Jesús put it, Lens – the crowbar in the Half-Life analogy – is the weapon to equip). The lens is a parametric class defined over two types: S – the whole – and A – the part:

abstract class Lens[S,A] {
    def get( s: S): A
    def set(a: A): S => S
}

The get method accepts a whole and returns a part; set accepts a part and tells you how to change the whole to incorporate that part.

val _currentScore: Lens[Level,Long] =
    Lens[Level,Long]( 
        lv => lv.currentScore
    )(
        cs => lv => lv.copy(currentScore=cs)
    )
val _board : Lens[Level,Board] =
    Lens[Level,Board](
        lv => lv.board
    )(
        br=>lv=>lv.copy(board=br)
    )
val _matrix : Lens[Board,CandyMatrix] =
    Lens[Board,CandyMatrix](
        bd => bd.matrix
    )(
        mx => bd => bd.copy(matrix=mx)
    )

def modifyScore2( f: Long => Long) : Level => Level = {
    level : Level =>
        _currentScore.set( f( level.currentScore ))(level)
}

The code works, but it is lengthy and boring to write. So we can take advantage of the lenses defined for us by the @Lenses annotation.

[NdM: I’m going to expand the talk a little bit here because I lost some passages and I reconstructed them thanks to my Scala-speaking friends]

This annotation instructs the compiler to create one lens method in the companion object for each case class field. [NdM: Oddly enough, for those of us coming from traditional programming languages, the lens has the same name as the case class field.]

[NdM: the original example exploited import to inject the companion object’s fields into the current scope, creating a bit of confusion in my mind. In the following examples I will avoid this shortcut in favor of readability.]

What if we want to extract the matrix from a level? Operationally, we have to navigate through the board (level → board → matrix). This can be done via composition, using the verb composeLens:

def getMatrix2 : Level => CandyMatrix = (Level.board composeLens Board.matrix).get

[NdM: my Scala-speaking friend also told me that def has been used without any real advantage over val. Using val would have avoided an unneeded function call.]

The same can be applied to modify:

def modifyMatrix2( f: CandyMatrix => CandyMatrix ) : Level => Level = 
    (Level.board composeLens Board.matrix).modify(f)

This syntax is slick, but still more verbose than, say, Haskell, where you write just a dot instead of the composeLens verb.

2nd Enemy – Threading State Zombie (State Monads)

Consider the function

crushPos( pos: Pos): Level => (Level, Long)

Its purpose is to crush a given position. This is accomplished by updating the map and updating the score. Additionally, we want the function to return a pair composed of the level and the new score. The first implementation you may want to try is to navigate through the level to change the matrix, then navigate through the updated version of the level to update the score, and then prepare the pair with the updated level and the score.

def crushPos( pos: Pos) : Level => (Level, Long) = {
    lv0 =>
        val lv1 = (board composeLens matrix).modify(mx => mx.updated(pos, None))(lv0)
        val lv2 = currentScore.modify( cs => cs +1)(lv1)
        (lv2, currentScore.get(lv2))
}

This works, but it is error-prone because the programmer must ensure to properly pipe all the changes through the transformations.

The new weapon is the State:

abstract class State[S,A](run : S => (S,A))

This class defines a mechanism to execute a given action on an object and produce the updated object and a value. And it can be used like:

def crushPos2(pos: Pos): Level => (Level , Long) = {
    lv0 =>
        val (lv1, _ ) = State[Level,Unit] (
            lv => ((board composeLens matrix).modify(mx => mx.updated(pos, None))(lv), ())
        ).run( lv0 )
        State[Level,Long] (
            lv => {
                val nlv = currentScore.modify(cs => cs + 1)(lv)
                (nlv, currentScore.get(nlv))
            }
        ).run(lv1)
}

This may be more elegant but can hardly be called better, and it nonetheless still requires the programmer to properly set up the execution pipe. [NdM: also note that the first part (from val to run( lv0 )) may be replaced by the more compact val lv1 = (board composeLens matrix).modify(mx => mx.updated(pos, None))( lv0 )]

This can be improved by using an implicit MonadState, which is a class implicitly built from a State class, whose instances can be bound together using the >> operator. In code:

case class State[S,A](run : S => (S,A))

implicit def monadState[S]: Monad[State[S,?]] = ...

Our code becomes:

def crushPos3(pos: Pos): State[Level, Long] = {
    State[Level, Unit] {
        lv =>
            val lv1 = (board composeLens matrix).modify(mx => mx.updated(pos, None))(lv)
            (lv1, ())
    } >>
    State[Level,Long] {
        lv =>
            val nlv = currentScore.modify( cs => cs +1)(lv)
            (nlv,currentScore.get(nlv))
    }
}

[NdM: be careful in placing the >> operator! First, IntelliJ is not able to recognize it and marks it as an unknown operator; second, thanks to Scala’s forgiving syntax punctuation, you need to place >> on the same line as the closing bracket. I couldn’t figure out the right way to write this until I mailed the author of the talk for help. He responded quickly and kindly and set me in the right direction. Thanks, Jesús!]

[NdM: a quick word on binding. Bind is the same as flatMap, that is, the way monads transform their content. In this case, the binding allows you to compose the two run actions into one. Since the computation accepts one value and produces two, you may wonder what happens to the side value (in our case the score) of the first run. Answer – it gets discarded and only the last one appears in the final result.]
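
[NdM: a minimal sketch of how such a State with >> could be wired – my simplified version, not the talk’s actual library:]

case class State[S, A](run: S => (S, A)) {
    def flatMap[B](f: A => State[S, B]): State[S, B] =
        State { s0 =>
            val (s1, a) = run(s0)   // thread the state through the first action
            f(a).run(s1)            // then feed it to the second
        }

    // bind that discards the first value: only the state survives
    def >>[B](next: State[S, B]): State[S, B] = flatMap(_ => next)
}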

MonadState instances can be composed so that they pipe the result of one through the other.

def crushPos4( pos: Pos ) : State[Level, Long] =
    (board composeLens matrix).mod(mx => mx.updated( pos, None)) >> currentScore.mod(_ + 5)

[NdM: now, this is a bit more complex to digest – where does .mod come from? And, more importantly, how does .mod know what to return? .mod is a method of the StateLens object (well, nearly true, but true enough for the sake of this analysis). Always remember that you are not dealing with actual values; you are forging functions that will need to be called/applied to actual values. Note that the lenses in the expression are both on the Level class, so the state generated by mod operates on the Level class. The additional type is derived from the right operand of the >> operator.]

Enemy 3: optional antlion

So far so good, but there are still other entities that cannot fit properly into the picture. What about getting and modifying the current score from the Game? The problem we face is that level is an Option in Game, so lenses can’t be used in the plain-vanilla way.

Let’s try a first attempt at the solution:

def getScore: State[Game, Option[Long]] =
    level.extract.map( olv => olv.map( lv => currentScore.get(lv)))

def modifyScore(f : Long => Long) : State[Game,Unit] =
    level.mod_(olv => olv.map(lv => currentScore.modify(f)(lv)))

extract is a method of the State.

Nice, but cumbersome. The abstraction we can use now is the Prism (which is defined in Monocle, roughly in the following way):

abstract class Prism[S,A] {
    def getOption : S => Option[A]
    def reverseGet : A => S
}

The first method takes an object and produces an Option; the second method rebuilds the object given a part. So, let’s define our prism:

import monocle.Prism

def mySome[A] : Prism[Option[A], A] = Prism[Option[A],A]( s => s)(a=>Some(a))

This prism deals with an Option[A], but Monocle already provides you with this tool, and it’s called some:

import monocle.std.option.some

def getScore2: State[Game, Option[Long]] =
    (level composePrism some composeLens currentScore).extract

def modifyScore2( f : Long => Long) : State[Game,Unit] = 
    (level composePrism some composeLens currentScore).mod_(f)

Final Enemy: Multiple Fast Zombies

Now we want to crush an entire column of the board.

We can combine lenses and prisms into something else:

abstract class Optional[S,A] {
    def getOption(s: S): Option[A]
    def set(a: A): S => S
}

As for the Prism, we have a getOption method that exposes the Option but, instead of reverseGet, there is a set that transforms S into another S, provided an A.

Optionals can be created by composing prisms and lenses as follows:

val op: Optional[Game,CandyMatrix] =
    level composePrism some composeLens board composeLens matrix

So we can write our first iteration of the crushColumn method as –

def crushColumn(j: Int): State[Game, Unit] =
    op.mod_(
        mx => mx.map {
            case (p,_) if p.j == j => (p,None)
            case x => x
        }
    )

This function operates on the game matrix and removes the candy when the column of the position is the same as j.

It is not bad per se, but we are doing this in a manual way. The solution could be improved by using a filterIndex:

def crushColumn2( j : Int ) : State[Game,Unit] =
    op.mod_(
        mx => filterIndex[CandyMatrix,Pos,Option[Candy]]( p => p.j == j ).set(None)(mx)
    )

Let’s see how to automate it. Let’s introduce the abstract class Traversal:

abstract class Traversal[S,A] {
    def modifyF[F[_]: Applicative](f:A => F[A])(s:S): F[S]
}

Now it is possible to compose the Traversal with other lenses such as:

def crushColumn3( j: Int ) : State[Game,Unit] =
    (op composeTraversal filterIndex((p: Pos)=> p.j == j )).assign_(None)

filterIndex is a Monocle function that, along with the implicit mapFilterIndex, allows the lens to apply over the map collection.

Since compose syntax may tend to be a bit verbose, you can also use the following operators:

  • ^<-?  compose with prism
  • ^|->  compose with a Lens
  • ^|->> compose traversal

Conclusions: optics are abstractions for changing parts of wholes. These abstractions are composable to access complex data. The Monocle library provides a hybrid of concrete and van Laarhoven optics [NdM: sorry, I missed the explanation entirely]. State monads encapsulate state threading and produce output values.


Max’s comment – the talk has been very helpful in improving my understanding of this aspect of functional programming. I still find Scala syntax to be a bit on the verge of cryptic, and dealing with new concepts doesn’t help either. Composing stuff the way functional programming does is a really powerful mechanism that enables the programmer to reuse code in an effective manner.

I find that the use of symbols to further reduce the character count is really dangerous. But this is a topic for another post. Let’s just say that Scala is in danger of becoming write-only code 🙂 Looking forward to attending the next Lambda World!

Comparisons

Recently I had to spend some time trying to adapt my imperative/OO background to a piece of code I needed to write in the functional paradigm. The problem is quite simple and can be described briefly. You have a collection of pairs containing an id and a quantity. When a new pair arrives, you have to add the quantity to the pair with the same id in the collection, if it exists; otherwise you have to add the pair to the collection.

Functional programming is based on three principles (as seen by an OO programmer) – variables are never modified once assigned, no side effects (at least, avoid them as much as possible), no loops – work on collections with collective functions. Well, maybe I missed something like monad composition, but that’s enough for this post.

Thanks to a coworker I wrote my Scala code following all the aforementioned principles, and it is quite elegant as well. It relies on the partition function, which transforms (in a functional fashion) a collection into two collections containing the elements of the first one partitioned according to a given criterion. The criterion is equality of the id, so that I find the element with the same id if it exists, or just an empty collection if it doesn’t.

Here’s the code:

    type Collection = List[Data]

    def merge( collection : Collection, data : Data ) : Collection = {
        val result = collection.partition( _.id == data.id )
        result match {
            case (List(),other) =>
                data :: other
            case (List(old),other) =>
                Data( data.id, data.quantity+old.quantity ) :: other
        }
    }

Yes, I could have written more concisely, but that would have been too much write-only for me to be comfortable with.

Once the pleasant feeling of elegance wore off a bit, I wondered about the cost of this approach. Each time you invoke merge the collection is rebuilt and, unless the compiler optimizer is very clever, each list item is cloned and the old one goes to garbage collection.

Partitioning scans and rebuilds, and since I’m using an immutable collection, adding an item to an existing list also causes a new list to be generated.

Performance matters in some 20% of your code, so it could be acceptable to sacrifice performance in order to get a higher abstraction level and thus a higher coding speed. But then I wonder – what about premature pessimization? Premature pessimization, at least in the context where I read the term, means the widespread adoption of idioms that lead to worse performance (the case in point was the C++ use of the pre- or post-increment operator). Premature pessimization may cause the application to run generally slower and make it more difficult to spot and optimize the cause.

This triggered the question – how does a language’s idiomatic approach impact performance?

To answer the question I started coding the same problem in different languages.

I started with my language of choice – C++. In this language you would likely approach a similar problem by using std::vector. This is the preferred collection and the recommended one. The source is like this:

#include <algorithm>
#include <vector>

struct Data
{
    int id;
    int quantity;
};

typedef std::vector<Data> Collection;

void
merge( Collection& collection, Data data )
{
    auto iter = std::find_if( collection.begin(),
                              collection.end(),
                              [&]( Data const& probe ){ return probe.id == data.id; } );
    if( iter == collection.end() )
    {
        collection.push_back( data );
    }
    else
    {
        iter->quantity += data.quantity;
    }
}

The code is slightly longer (consider that in C++ I prefer the opening brace on a line of its own, while in Scala “they” forced me to have opening braces at the end of the statement line). Having mutable collections doesn’t require you to warp your mind around the data to find which aggregate function could transform your input into the desired output – just find what you are looking for and change it. It seems simpler to explain to a child.

Then I turned to Java. I’m not so fond of this language, but it is quite popular and has a comprehensive set of standard libraries that really allows you to tackle every problem confidently. Not sure what a Java programmer would consider idiomatic, so I stayed traditional and went for a generic List. The code follows:

    static class Data
    {
        public int id;
        public int quantity;
    }

    private static void merge( List<Data> collection, Data data )
    {
        ListIterator<Data> iterator = collection.listIterator();
        while( iterator.hasNext() )
        {
            Data probe = iterator.next();
            if( probe.id == data.id )
            {
                probe.quantity += data.quantity;
                return;
            }
        }
        collection.add( 0, data );
    }

I’m not sure why the inner class Data needs to be declared static, but it seems that otherwise each instance keeps a reference to the outer class instance. Anyway – the code is decidedly more complex. There is no function similar to C++’s find_if nor to Scala’s partition. The loop is simple, but it offers some chances to add bugs to your code. Anyway, explaining the code is straightforward once the iterator concept is clear.

Eventually I wrote a version in C. This language is hampered by the lack of a basic standard library – besides some functions on strings and files you have nothing. This could have been fine in the 70s, but today it is a serious problem. Yes, there are non-standard libraries providing all the needed functionality – you have plenty of them, gazillions of them, all incompatible. Once you choose one you are locked in… Well, clearly C shows the signs of age. So I wrote my own singly linked list implementation:

#include <stdlib.h>

struct Data
{
    int id;
    int quantity;
};

struct ListNode
{
    struct ListNode* next;
    struct Data data;
};

typedef struct ListNode* List;

/* prepend a new node; helper referenced by merge below */
void pushFront( List* list, struct Data data )
{
    struct ListNode* node = malloc( sizeof *node );
    node->data = data;
    node->next = *list;
    *list = node;
}

void merge( List* list, struct Data data )
{
    for( struct ListNode* scan = *list; scan != NULL; scan = scan->next )
    {
        if( scan->data.id == data.id )
        {
            scan->data.quantity += data.quantity;
            return;
        }
    }
    pushFront( list, data );
}

Note that, once cleaned of braces, the merge function is shorter in C than in Java! This is a hint that Java is possibly verbose.

I wrote here just the merge function. The rest of the source is not relevant for this post; it basically consists of parsing the command line (string-to-int conversion), getting some random numbers, and getting the time. The simplest frameworks for these operations are those based on the JVM. The most complex is C++ – it allows a wide range of configuration (random and time), but I had to look up on the internet how to do it and… I am afraid I wouldn’t know how to exploit the greater range of options. Well, in my career as a programmer (some 30+ years counting since I started coding) I think I never needed a specific random number generator, or some clock different from a “SystemTimeMillis” or wall-clock time. I don’t mean that because of this no one should ask for greater flexibility, but I find it daunting that every programmer should pay this price because there is a case for using a non-default RNG.

Anyway back to my test. In the following table I reported the results.

             C++ (vector)    Scala       Java       C         C++ (list)
time (ms)    170.75          11562.45    2230.05    718.75    710.90
lines        81              35          69         112       81

Times were taken performing 100,000 insertions with a max id of 10,000. The test was repeated 20 times and the results averaged in the table. The difference in timing between C++ and Scala is dramatic – the former is about 70 times faster than the latter. Wildly extrapolating, you could say that if you code in C++ you need 1/70th of the hardware you need to run Scala… no surprise (still guessing wildly) that IBM backs this one.

Java is about 5 times faster than Scala. I’m told this is more or less expected, and possibly it is something you may be willing to pay for the higher level.

In the last column I reported the results for a version of the C++ code employing std::list, for a fairer comparison (all the other implementations use a list, after all). What I didn’t expect was that C++ would be faster (even if slightly) than C despite using the same data structure. It is likely because of some template magic.

The other interesting value I put in the table is the number of lines (total, not just the merge function) of each implementation. From my studies (which are now quite dated) I remember research reporting that the speed of software development (writing, testing, and debugging), stated as lines of code per unit of time, is the same regardless of the language. I’m starting to have some doubts, because my productivity in Scala is quite low compared with other languages, but… ipse dixit.

Let’s say that you spend 1 for the Scala program, then you would pay 2.31 for C++, 1.97 for Java and 3.20 for C.

Wildly extrapolating again, you could draw a formula to decide whether it is better to code in C++ or in Scala. Let H be the cost of the CPU and hardware needed to comfortably run the C++ program. Let C be the cost of writing the program in Scala. Then the total cost of the project is:

(C++) H+C×2.31

(Scala) 68×H+C

(C++) > (Scala) ⇒ H + C×2.31 > 68×H + C ⇒ C×1.31 > 67×H ⇒ C > 51.14×H

That is, you’d better use Scala when the cost of the hardware you want to use does not exceed the cost of Scala development divided by a factor of about 50. If the hardware is going to cost more, then you’d better use C++.

Besides being a wild guess, this also assumes that there is no hardware constraint and that you can easily scale the hardware of the platform.

(Thanks to Andrea for pointing out my mistake in inequality)

Scala Days 2016 my thoughts

Scala Days 2016 is over. I’m sorry I didn’t manage to take notes of all the talks I attended, but some speakers spoke very fluently and my phone is not the best tool for writing down notes. English doesn’t help either, and sometimes my understanding of the matter was lagging behind.
So what are my impressions? That’s a good question, and I’m not sure about the answer. I think Scala is a very fitting solution for a specific niche. Then again, like every other programming language, it can be used for everything (I just heard that Dropbox employs 2.7 million lines of Python code).
The niche I have in mind is made of large, distributed applications, handling zettabytes of streaming data and performing math functions over them. Part of the niche could also be high-traffic, highly reliable web services (where service is just a general term referring to any kind of service, including serving web pages).
In this niche using Scala, with actors and possibly Spark makes a lot of sense.
In other contexts you risk paying extra for something you don’t really need – not every server software needs to scale up, not every process needs to be modeled using events. Although the functional paradigm eases writing code that copes with these contexts, you still pay an extra cost.
It is hard to quantify how much. The naïve experiment of having two programmers of comparable proficiency in two different languages working on the same task is very hard to set up.
According to an old research result, the productivity of a programmer, intended as lines per unit of time, is more or less the same regardless of the language. That means, for example, that it is cheaper to write programs in C than in assembly, because C is more expressive and higher level than assembly. I’d like to know whether the same holds for Scala, where lines tend to be long concatenations of function applications. For sure this is at a more abstract level, but it requires quite an effort to write, a lot of effort to understand, and it is close to impossible to debug, at least with today’s tools.
Well, back to my impressions on the state of Scala. Scala is comparatively young and suffers from its youth. There are pitfalls and shortcomings in its design (as is natural for every language) that are starting to be acknowledged by the language owners. The solution they seem to prefer is to rewrite and go for a next, incompatible version. This is a dangerous move, as Python would teach. It is also something that does not acknowledge the industry. It makes sense for a teaching tool, for a research language, but it impacts badly on industry investments.
In several talks the tenet was that although Scala allows the programmer to choose any of the supported paradigms, only functional programming is the proper way to code. Surprisingly, at least to me, Martin Odersky, the language father, doesn’t agree: when attending a talk, he claimed that multi-paradigm was a blessing when programming the Scala compiler.
Industry needs pragmatism, but I see it only partially. Enthusiasts may cross the border and become zealots. And crusades are not something that brings stability and reciprocal understanding. When I hear about a functional programming revolution I am somewhat scared; I prefer evolution, acknowledging the good in existing stuff and building on it. In revolutions many heads roll, including those of innocent people.
The most widespread background among programmers here was Java. I understand that Java is not the most exciting language. Coming from C++, I find Java quite boring. And, in fact, some, if not many, of the advantages of Scala over Java can be found in C++ as well. Unfortunately, C++ falls short on open-source, industry-standard libraries. You won’t find anything in C++ that comes close to what Spark or Akka are for Scala. Also, Play – even if it doesn’t enjoy unanimous consensus – is the de facto companion library for web services and web development.
Back to Scala Days (again) – it has been a positive experience; some talks needed some preliminary study even if they were marked as beginner (everything pretending to explain implicits). Other talks were marketing advertisements in disguise. And some were genuinely fun.
I think I got closer to this language and had great opportunities to change my point of view. My wish is for an ecosystem more attentive to the industry, one that values backward compatibility rather than seeing anything that breaks with the past as a way to make easy money by selling technical support.

Implicits inspected and explained

Talk by Tim Soethout. The average age among the attendees seems to be higher than at other talks; maybe we elders are looking to finally understand what the hell implicits are (then we would need something similar for monads…). Here we go.
Implicits enable you to use values without an explicit reference. Implicits enable the relationship “is viewable as”.
As a parameter, the value is taken from the calling context (e.g. the Akka sender).
Incomprehensible examples follow.
Implicits are used for DSLs, type evidence (whatever that is), reducing verbosity… Other stuff I didn’t manage to copy.
Caveats – resolution can be difficult to understand. Automatic conversions. For these reasons it is better not to overuse them.
An implicit parameter just takes the only in-scope value with a matching type as its argument.
If two or more values with a matching type are available, an error is reported by the compiler.
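
A minimal sketch of the mechanism (my own example, not from the talk):

case class Timeout(seconds: Int)

def fetch(url: String)(implicit t: Timeout): String =
    s"GET $url (timeout ${t.seconds}s)"   // dummy body, just to show resolution

implicit val defaultTimeout: Timeout = Timeout(30)

fetch("http://example.com")   // the compiler fills in defaultTimeout
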
Implicit conversions (please avoid them) allow the compiler to automatically convert from a type A to a type B every time B is required. This comes in handy to convert from and to Java collection types, but it is deprecated. The new approach is to add the asScala method to the Java types.
Implicit view is not understood by yours truly, sorry.
Implicit class. This is used when a class is missing a method. You can write a wrapper to add what you need. If this is declared implicit, then when the compiler finds an invocation of the new method, the wrapper class is constructed and the method is invoked.
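
Something along these lines (again my own sketch):

implicit class RichString(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"   // the method String is "missing"
}

"hello".shout   // the compiler rewrites this as new RichString("hello").shout
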
Scoping. Some explanation I didn’t understand. Interactive moment turned quite to an epic failure with most of the attendees having no idea on what was going to happen in the example shown.
Type classes. This concept is used for ad-hoc polymorphism and to extend libraries. A naive approach to serializing to JSON uses subclasses. All the classes need to inherit from a JsonSerializable and to define the abstract serialization method.
With type classes, the classes to serialize do not need to derive from the same base. A generic trait is defined for the serialization method. By defining implicit values implementing the serialization for each specific type, the right serializer will be picked up when needed.
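
My reconstruction of the idea in code (names are mine, not the talk’s):

// the generic trait defining the serialization method
trait JsonWriter[A] {
    def toJson(value: A): String
}

case class User(name: String)

// the serializer lives outside User: no common base class required
implicit val userWriter: JsonWriter[User] = new JsonWriter[User] {
    def toJson(value: User): String = s"""{"name":"${value.name}"}"""
}

def write[A](value: A)(implicit w: JsonWriter[A]): String = w.toJson(value)

write(User("Ada"))   // the compiler selects userWriter by type
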
Other examples just made things worse for me; I thought I had got it, but now I’m no longer sure (also, rereading what I wrote, I’m unsure I understood anything at all).
Wrap-up time. I still don’t get this stuff, at least not completely, and only slightly more than “danger – do not use”. Type classes seem to be really powerful in the sense that they lift Java from its pond to the static polymorphism of C++. But the mechanics that power this specific kind of implicit are still obscure to me. Likely nothing that a good book and some months of hard study couldn’t solve. It just seems not to be knowledge you can transfer in a conference.