Blog

Our Fathers’ Faults – Actors and Concurrency

When I started this job and faced the joyful world of Scala and Akka, I remember being told that, thanks to the Actor model, you don’t have to worry about concurrency, since every issue is handled by the acting magic.

Some months later we discovered, to our dismay, that this wasn’t true. Or better: it was true most of the time, provided you behaved properly, but there are notable exceptions.

Continue reading “Our Fathers’ Faults – Actors and Concurrency”

Our Fathers’ Faults – Actors – Explicit State

This post is not really specific to Scala/Akka, since I’ve seen Finite-State Machine (AKA FSM – not this FSM) abuse in every code base regardless of the language. I’ll try to stick to the specifics of my code base, but the considerations and thoughts are quite general.

An FSM is an elegant and concise formal construct that helps in designing, encoding, and understanding simple computational agents.

Continue reading “Our Fathers’ Faults – Actors – Explicit State”

Our Fathers’ Faults – Mixing Actors and OOP 2 – Acting is not an inherited trait

This is the second part of the post on why Actors and OOP are really a bad match. In the last post we saw how adding types and methods to actors can turn into a bad idea; now we look at another aspect of OOP – actors and inheritance.

Once you have wrapped an actor inside an object as our fathers did, you can hardly resist the temptation of composing by inheritance. On paper this also looks like a good idea: think, for example, of some sort of service that has some housekeeping to do (registering/unregistering clients, notifying clients). What’s wrong with having a base class Service from which LedService can be inherited?
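To make the temptation concrete, here is a minimal sketch of that design in classic Akka; the class names come from the paragraph above, while the message protocol and the “blink” behavior are my own invention for illustration.

import akka.actor.{Actor, ActorRef}

case object Register
case object Unregister

// The housekeeping every concrete service is meant to inherit
abstract class Service extends Actor {
  protected var clients = Set.empty[ActorRef]

  protected def housekeeping: Receive = {
    case Register   => clients += sender()
    case Unregister => clients -= sender()
  }

  protected def notifyClients(msg: Any): Unit = clients.foreach(_ ! msg)
}

// A concrete service that reuses the inherited housekeeping behavior
class LedService extends Service {
  private def ledReceive: Receive = {
    case "blink" => notifyClients("blinking")
  }

  override def receive: Receive = housekeeping.orElse(ledReceive)
}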

Continue reading “Our Fathers’ Faults – Mixing Actors and OOP 2 – Acting is not an inherited trait”

Our Fathers’ Faults – Mixing Actors and OOP 1, Actors with methods

This was intended to be a single comprehensive post about what’s wrong with mixing the Actor model and OOP. After writing for a while I discovered that there is a lot of stuff to be told, so I split the post in two. This is the first, and it talks about why you would like to add typing to Actors and then why you would like to go back. The next one (which I will likely publish in 2020) is about why you would like to add inheritance among actors and why, guess what… you should refrain from doing it. Let’s start.

Once the concept of Actor as implemented by the Akka framework is clear, we can proceed to the first issue in mixing Actors and OOP, which is brilliantly depicted by the sentence “No good deed goes unpunished”.

Continue reading “Our Fathers’ Faults – Mixing Actors and OOP 1, Actors with methods”

Our Fathers’ Faults – Akktors and Ekkstras

After the first four posts of “Our Fathers’ Faults” it’s time to turn to a specific aspect of the application – the Akka framework. The code I’m managing is strongly based on this framework, offering endless inspiration for misuse and abuse. Before going straight to the parade of sins, I think a brief introduction to Akka actors and their usage is in order. Half of my two readers are so ludicrously proficient in Akka and Scala that they might think of skipping this post, were it not for my witty, ranting prose style; the rest of you two may actually be interested in the content as well.

BTW, actors, like most innovations in programming, are no longer that innovative. The actor model dates back to 1973 (geez, I was 5! I couldn’t even spell “actor”!), but it has been largely popularized by the Reactive Manifesto as a viable model for reliable concurrent programming.

Continue reading “Our Fathers’ Faults – Akktors and Ekkstras”

Our Fathers’ Faults – Scantly Typed Language

Scala tries to help the programmer in many ways. Verbose and repetitive code can often be syntactic-sugarized into concise statements. This is very convenient but encourages the programmer to produce write-only code. Let’s talk about types. In many contexts, the compiler is very good at inferring types. Consider

val n = 1

This is the most obvious case: you don’t need to specify Int because the language manages to infer the type of the variable from the type of the expression to the right of the equals sign.
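A few more inferred declarations, just to show the same mechanism at work (these lines are mine, not from the code base under discussion):

val xs = List(1, 2, 3)      // inferred as List[Int]
val p  = (1, "one")         // inferred as (Int, String)
val f  = (x: Int) => x * 2  // inferred as Int => Int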

Continue reading “Our Fathers’ Faults – Scantly Typed Language”

Our Fathers’ Faults – The Chain of Death

The core concept of functional programming is composing functions together to craft more complex and convoluted functions. In Scala (not unlike many other programming languages) there are two ways to combine functions: the first is to apply a function to the result value of another function, and the second is to use a function to produce the argument value of another function.

Written this way the two may not look so different; in fact, even though in practice they show up looking quite different, abuse of either mechanism leads to the same problem.
The first way of combining functions is also called chaining, since you chain functions together – the result of function fn is the input of function fn+1. (Interestingly, this very same mechanism will get a boost in C++ starting from the C++20 ISO standard thanks to ranges and pipes.)
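Before the real example, here is a tiny sketch of the two styles; the helper functions double and describe are made up purely for illustration.

def double(n: Int): Int = n * 2
def describe(n: Int): String = s"value: ${n}"

// Chaining: each result feeds the next call in the chain
val chained = List(1, 2, 3).map(double).mkString(", ")   // "2, 4, 6"

// Nesting: a function produces the argument of another function
val nested = describe(double(21))                        // "value: 42"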

Take the following example:

List(1,2,3,4).filter( _ > 2 ).map( _.toString ).fold( "" )( _ + _ )

(If you already know Scala, you may safely skip this paragraph 🙂 ) If you are unfamiliar with Scala or functional programming this may look suspiciously like bad code (that could be some meat for another post), but trust me, it is not. List(…) constructs a List. Lists, containers, and iterators in general in Scala have a set of methods that can be chained together. You may think of each of them as taking an iterator and producing an iterator over a different sequence. Back to the example above: filter produces an iterator that scans only the elements of the input sequence that fulfill the given condition (being greater than 2 in the example); map produces an iterator over a sequence whose elements are computed by applying the given function to the source elements; eventually fold collapses the sequence by accumulating its items iteratively using the provided expression.

Each small block performs an operation. Note that I haven’t written “simple” operation, on purpose. In fact, the operator (filter, map, fold) is simple in itself, but it is programmable via a lambda that can be as convoluted as you want. Therefore the operation performed by the block may become really complex.

While you are focused on your goal, it may be easy and convenient to dump your mind into an endless chain. This has two drawbacks: first, it is unreadable; second, it is uninspectable.

It is unreadable because you need to start from the beginning and analyze the sequence up to the point you want to focus on, to know what its inputs are, and then from there onward to understand how its output is processed into the final result. Pair this with complex lambdas and you may find yourself in quite a nightmare when trying to figure out what the code is supposed to do or where the bug is.

It is uninspectable because you cannot use the debugger to stop execution and show the intermediate results of your operation (I’ve noticed over the years that Scala programmers are usually reluctant to use a debugger, preferring print/log debugging).

The other form of the problem – functions calling functions calling functions – despite the difference in syntax, is not that different in how it is produced or in its result. The code you may see, such as –

context.become(
    enqueuer(
        resourceManagement(
            Queues( queues.execution.filterNot( x =>
                ( x == realFree || context.child( childName( x.getUUID ) ).isEmpty ) ), queues.waiting ) ) ), true )

can be written in any language, but it seems that our fathers were quite fond of this approach.

It is important to recognize that when writing code you have (or, at least, you should have) a crystal clear and detailed comprehension of the data flow, and even if you strive to write clear code, you’ll tend to pack these things together because a) it is faster (less typing, less naming of values, less effort to add structure) and b) you don’t feel the need to make it clearer, since it is so straightforward in your head.

Lesson learned – resist the temptation of dumping the mental flow into a coded data flow; use the old “divide et impera” to split the flow into manageable units with names reflecting the content and the intent. A good indication is line length: if your statement, properly formatted to fit in 80 columns, happens to span more than 3 lines, then splitting it is a good idea.
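For instance, the chain from the example above could be split into named, inspectable steps like this (the names are mine, chosen only for illustration):

val bigEnough    = List(1, 2, 3, 4).filter(_ > 2)   // List(3, 4)
val asStrings    = bigEnough.map(_.toString)        // List("3", "4")
val concatenated = asStrings.fold("")(_ + _)        // "34"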

Additional thought – attending some FP conferences, I had the clear impression that FP pundits encourage complex aggregations of functions; even the justification for the short indentation step (2 characters is the recommended indentation for Scala) is to make writing chains more convenient. For these reasons I suspect that this point could be a little controversial in FP circles. I stand by my idea that industrial-quality code must first be easy to read and understand, and only then easy to write.

Our Fathers’ Faults – Operator @!#

With Great Power comes Great Responsibility. I’m referring to the incredible power of defining custom operators as function names. I was convinced that this feature was introduced by C++, but a quick look at Wikipedia was enough to dispel this myth. Starting from Algol 68, programmers have been able to redefine operators. Not all languages have this feature, and even those that do vary in what the programmer is allowed to do.

Continue reading “Our Fathers’ Faults – Operator @!#”

Our Fathers’ Faults – Failure is not an Option

Our Fathers’ faults.

Intelligent people learn from their mistakes, wise people learn from others’ mistakes. Unlucky people learn from the mess they have to fix in someone else’s code.

Working as a programmer is not always an easy task. On lucky days you feel like a god, planning and designing architectures, wisely crafting virtual nuts, cogs, and bolts that happily tick together in an elegant and coordinated choreography; on bad days it is bug fixing in your own code. On really sh**ty days it is bug hunting in code written (by a long-gone someone else) for exploration, demos, and testing of the coolness factor.

Continue reading “Our Fathers’ Faults – Failure is not an Option”

A Type with Character

So, you know everything about C++ integral types, don’t you?

I thought I did, until I enabled clang-tidy on my project. It all started with a rather innocent-looking warning:

warning: use of a signed integer operand with a binary bitwise operator

It looked somewhat less innocent when, examining the line, I saw no evidence of signs.
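For the record, here is a minimal reconstruction (not the original line from my code base) of the kind of statement that can produce such a warning even though nothing in it is spelled out as signed:

#include <cstdint>

uint8_t low_nibble_of_complement(uint8_t flags)
{
    // flags is promoted to a plain (signed) int, so ~flags is a signed operand of &
    return static_cast<uint8_t>(~flags & 0x0Fu);
}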

But, let’s start from the comfort zone (at least, my comfort zone):

unsigned x = 32;

The type of ~x (bitwise negation) is still unsigned. No surprise here, really obvious. The diligent programmer finds that the data fits in a smaller integer and writes:

uint8_t s = 42;

Can you guess the type of ~s? Give it a try, really. Ready? Well, the type of ~s is… int. What? A quick check of other expressions involving uint8_t yields the same … fascinating result. Apparently these expressions are all converted into int.

In other words (and with a bit of syntax bending): uint8_t + uint8_t -> int, uint8_t << uint8_t -> int, uint8_t + 1 -> int. Let me rephrase that: in every expression, a uint8_t value is converted to int.
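A quick compile-time double check of those promotions (C++17, assuming a typical platform where int is wider than 8 bits):

#include <cstdint>
#include <type_traits>

int main()
{
    uint8_t s = 42;
    // In each expression below, s is promoted to int first
    static_assert(std::is_same_v<decltype(~s), int>);
    static_assert(std::is_same_v<decltype(s + s), int>);
    static_assert(std::is_same_v<decltype(s << 1), int>);
    static_assert(std::is_same_v<decltype(s + 1), int>);
    (void)s;  // silence unused-variable warnings
    return 0;
}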

Time for some Internet Duckduckgoing :-).

Back to our uint8_t (which is nothing but an unsigned char in disguise). When a char value, be it signed, unsigned or plain, is encountered in an expression (in C++ standard jargon, as a prvalue) it is promoted to int on pretty much every common CPU. On exotic architectures char and int could have the same size, so an int could not hold every possible value of an unsigned char, and therefore the promotion is to unsigned int instead. From a strictly rigorous point of view, the signedness of the promoted type of a uint8_t in an expression is machine-dependent. Keep this in mind if you aim to write really portable code 😉 (*)

You can find a very good (and somewhat frightening) explanation here.

But I’d like to add something beyond the standard technicalities and look at the reasons why things are this way and what we can take home.

First, it is important to note that in C (and C++ by extension) the int/unsigned type is mapped to the most efficient word size of the target processor. The language is designed so that, as long as you use int/unsigned (without mixing them) in an expression, you get the best-performing code.

Also, the language mandates that an int be at least 16 bits wide and at most the same size as a long.

What if you need to do math on 8-bit data on a 32-bit architecture? Well, the compiler has to emit extra masking instructions to cut away the excess bits in order to get the right result.

So, the C language opts for the best performance, turning everything into int, avoiding the extra masking instructions, and letting the programmer arrange the expressions with casts here and there if anything different is desired.

Unexpected type promotions, sign changes, and performance penalties should be three good reasons to avoid using anything other than int and unsigned in expressions (or long long when needed) and to keep intXX_t and uintXX_t for storage.
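In practice the advice looks something like this (a made-up sketch, not code from my project):

#include <cstdint>
#include <cstddef>

struct Sample {
    uint8_t raw[64];   // narrow types are fine for storage
};

unsigned checksum(const Sample& s)
{
    unsigned sum = 0;  // plain unsigned for the arithmetic
    for (std::size_t i = 0; i < sizeof s.raw; ++i)
        sum += s.raw[i];
    return sum;
}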

Note that this also applies to function arguments. Quite frequently you read APIs where integer argument types are defined as the smallest integer capable of holding the largest value for a given parameter. That may seem like a good idea at first, since the API embeds in the interface a suggestion to the user about the proper range.

In fact, this has to be balanced against the aforementioned problems, and it doesn’t really enforce the constraints, for two reasons – first, you can actually pass any integral type without getting even a warning, and second, your accepted range is possibly a subset of all the possible values of the type, so the user is still required to read the documentation.
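A hypothetical API sketch to show the point (set_brightness and its 0..255 range are invented for the example):

#include <cstdint>

void set_brightness(uint8_t level) { (void)level; /* imagine hardware access here */ }

void caller()
{
    int requested = 1000;       // outside the intended 0..255 range
    set_brightness(requested);  // compiles without a warning by default; the value wraps to 232
}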

Finally, when in doubt, ask the compiler 🙂 Finding the types of expressions via the compiler may not be the most intuitive task, though. Below you find the code I used for my tests. Happy type-checking!

/* Linux only, sorry */
#include <iostream>
#include <cstdint>
#include <typeinfo>
#include <cxxabi.h>

int main()
{
    uint8_t a = 42;
    // abi::__cxa_demangle turns the mangled typeid name into a readable type name
    std::cout << abi::__cxa_demangle(typeid( a ).name(), nullptr, nullptr, nullptr) << "\n";
    std::cout << abi::__cxa_demangle(typeid( ~a ).name(), nullptr, nullptr, nullptr) << "\n";
    std::cout << abi::__cxa_demangle(typeid( 42u ).name(), nullptr, nullptr, nullptr) << "\n";
    std::cout << abi::__cxa_demangle(typeid( ~42u ).name(), nullptr, nullptr, nullptr) << "\n";
    return 0;
}

(*) Also keep in mind, if you strive to write really portable code, that the uintXX_t types may not exist on exotic architectures. In fact, the standard mandates that if the target CPU has no 8-bit type then uint8_t is not defined for that target. You should then use uint_least8_t (a data type that has at least 8 bits) or uint_fast8_t (the data type that is most efficient for operations on at least 8 bits).