Trousers of Time… Again

It happened again. The last time was in 2008, when I received an interesting job offer and had a really hard time deciding what to do. Since the last days before the summer vacation I have been in touch with two friends who offered me a position leading a game programming team. It was a once-in-a-lifetime occasion, considering that I live in Italy, the kind of workplace they were going to build, and the name sponsoring the development. In September I wholeheartedly decided to join them in this adventure, and I even started working for them in my spare time.
Sadly, this morning I had to change my mind, because we were unable to reach an agreement on the notice period my current employer required.
I feel sad and rather dumb, since programming video games is my dream job and I keep being nit-picky about it. The last time (and even the time before – not recorded here) hindsight proved me right, but I don’t think that is the case now.
Somehow I also feel guilty, because I know I could make a difference, even though my video game programming knowledge is a bit dated.
What more is there to add? Well, all the best to you, my friends – go and show the world that Italians can make great games, too!

The Nature of Quality

In the last days before the vacations, discussion at the workplace raged over the subject of software quality. The topic is interesting and we easily got carried away, throwing flames at each other about programming techniques and programming mastery (we stopped just short of personal attacks). Eventually we didn’t reach any agreement, but the discussion was interesting and insightful. More precisely, some of us claimed that in software development quality always pays off, while others claimed that the same project goals may be achieved earlier (thus cheaper) by employing quality-less programming (I am overemphasizing a bit).
In other words, the question the office tried to answer was – is the cost of quality enhancement worth paying?
It is hard to get to a final answer, and that’s possibly why there isn’t a clear consensus even among programmers. Take for example the Lua designers. They were adamant in not limiting the language’s expressiveness in some questionable directions, because they targeted the language at programs of no more than a few hundred lines, written by a single programmer. In other words, they sacrificed some forms of quality because, in their opinion, those forms were just overhead without benefit.
Detractors of software quality have their reasons – programmers are human, and as such they may “fall in love” with their own creation or with the process. This causes either endless polishing and improvement, or the creation of overwhelming designs well beyond the scope of the project. If you question these programmers about their choices, the answer usually involves design for change, portability or re-usability. Seldom has an assessment been made to check whether these are really needed for the project, its future evolution or the company.
It is clear that the point is finding the right compromise. “Anybody can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”
Unfortunately bridge engineering (and other older and more stable engineering fields) is not much help beyond witty sayings, for bridge engineering does not have to take into account staged deliveries, demos, versioning, evolutionary maintenance (“yeah, nice bridge, now grow it three stories, add a panoramic wheel and a skiing slope”), customers changing their minds (“yeah, that’s exactly my bridge, now move it to the other river”), or specification problems (“I said ‘bridge’, but I meant ferry-boat”).
When talking about quality there is an important distinction to make – quality perceived by the user, and internal quality (perceived only by the programmers working on the software).
User-perceived quality is the opinion of the software’s customers. As such, it is the most important quality of your software. As an engineer you should write the program so as to spend the minimum needed to reach the required user-perceived quality, or slightly exceed it. No more, no less. (Just keep in mind that these aspects strongly depend on your customers.) This claim holds as long as you consider the whole project without any maintenance. Since maintenance is usually the most expensive part of a project (even up to 75% of the whole project cost), you may want to lower its impact by employing maintenance-oriented development techniques; this means you must spend more during development than what is strictly needed to match the customer-expected quality.
Internal quality is what we programmers usually refer to when we say “quality” – good code, well insulated, possibly OO, with patterns clearly identified, documented, no spaghetti, readable, and so on.
Unfortunately for software engineering supporters, programs do exist with fair, adequate or even good perceived quality that have bad or no internal quality at all.
And this is the main argument against striving for internal quality.
I think this reasoning has a fault. It relies on the hypothesis that, since you can have fair-to-good perceived quality without any internal quality, internal quality is an extra cost that can be saved. But usually this is not the case. Even if there were no direct relation between the two, would it be simpler to write a program where you keep just a bunch of variables under your attention to reach your goal, or one where you need to practice Zen to get a holistic comprehension of the whole, so as not to mess anything up while treading your way?
And anyway, it is unlikely that no relation whatsoever exists between the two natures of quality.
On this point both the literature and common sense agree: from medium to large program sizes, internal quality affects several aspects of user-perceived quality. E.g. if the program crashes too often, perceived quality can’t be any good.
Moreover – in the same class of projects – the overall cost of the software across its entire life cycle is lower if internal quality is high. That is, desirable software properties really help you understand and modify the software more confidently and with reliable results.
Quality-less supporters – at this point – fall back on picking at single aspects of internal quality. For example, they may target reuse and state that the practice is not worthwhile, since you pay a greater cost in this project with the risk that you won’t reuse the code in another project, or that the context will be so different that rewriting the code from scratch will be cheaper.
In this case it is hard to provide good figures for the savings of designing and coding reusable components. This is unfortunate both for internal quality supporters in their advocacy and for software architects deciding whether a component has to be reusable or not.
Unfortunate also because there is a large band of components between the ones that obviously should be made reusable and the ones that obviously should not.
It also has to be considered that the techniques adopted to make a component reusable are – generally speaking – techniques that improve internal quality. Pushing this thought to the extreme, you can’t rule reusability out of your tool chest, because all the other techniques that improve internal quality drive you in the reusability direction, to the point that reusability comes either for free or at little cost.
Despite what I wrote, I think a problem does exist, and it occurs when the designer starts over-designing too many components and the programmer starts coding functions that won’t be used in this project, just because.
A little over-design can be a good idea most of the time (after all, design takes a small percentage of the development time); nonetheless you should always keep in mind a) the scope of the project and b) the complexity you are introducing in the code with respect to the complexity of the problem to solve.
Also, since you are over-designing, you are not required to implement everything. Just implement what is needed for the next iteration of the project.
At this point you shouldn’t rush into moving your reusable component to the shared bin; you should keep it in an “aging” area, granting it time to consolidate. Before being reused it should prove it can survive at least one project (or most of it). Then, once the component has been reused in at least two projects and properly documented, it can be moved to the shared folder.
What I can’t understand, and leaves me wondering, is that in many workplaces firmware is considered an accidental cost, something everyone would happily do without if only it were possible. The logical consequence would be to scientifically attempt to lower the cost of development and maintenance by finding the proper conditions. Instead, in many places there is blind shooting at firmware development, attempting to shrink every cost, with the effect that the cost actually increases. E.g. you can save on tools and training, only to discover that your cost per LoC has gone up. I am afraid the reason is either mistrust of, or ignorance about, the whole body of research done on software development since the ’60s.

xc8: see xc8

The project with my “beloved” PIC18 is nearly over; the one I’m starting is based on a Freescale ARM with enough horsepower to run a modest desktop. Nonetheless, one interface (to the proprietary field bus) is still… you guessed it, a glorious PIC18. This is one of the reasons I keep an eye on Microchip’s tools strategy for these chips. The other is that the nearly-over project is likely to be maintained in the coming years, implementing small changes and requests from the customer. So, what has happened in their yard since the last time I wrote?
First, MPLAB has been driven into retirement and replaced by a customization of NetBeans, named MPLAB X. This I consider a good move, since I much prefer this IDE over the sadly ubiquitous Eclipse. The reason is long to tell and may be partly invalidated by the latest progress, but that’s enough matter for another post.
MPLAB X is still young, and that may be the reason for some clumsiness in the migration from MPLAB 8.
The other strategic move Microchip is making is the unification of its two families of compilers. Microchip had the low-cost, under-performing mcc18, later re-branded as mplabc18, and the acquired, high-cost, high-performance HTC.
Microchip decided to dump mplabc18 and to add some compatibility to HTC, so that mplabc18 code could be smoothly recompiled. This new version has been called xc.
Given what I witnessed using mplabc18 for over 3 years, I thought this had to be a good idea.
Then I took a look at the xc8 manual and discovered that the compiler is not capable of generating re-entrant code. Despite this, Microchip claims that xc is ANSI C (89) compliant. I could rant on the fact that C89 has not been the C standard for at least 12 years, but I will concentrate on re-entrant code (or the lack thereof).
A function is re-entrant when it computes correctly even if it is invoked again (by an interrupt, another task, or recursion) before a previous invocation has returned.
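The simplest way to see what this means is to compare statically allocated activation records (the strategy used when there is no data stack, the so-called “compiled stack”) with stack-based ones. A minimal sketch with invented names – not actual compiler output:

#include <cstdint>

// Non-re-entrant: the parameter and the local live at fixed
// addresses in static RAM, shared by every activation.
static uint8_t scale_arg;
static uint8_t scale_tmp;

uint8_t scale(uint8_t x)
{
    scale_arg = x;              // a second activation starting here
    scale_tmp = scale_arg * 2;  // would overwrite both variables
    return scale_tmp;
}

// Re-entrant equivalent: every activation gets its own copies.
uint8_t scaleReentrant(uint8_t x)
{
    uint8_t tmp = x * 2;
    return tmp;
}
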
The first case is the recursive algorithm. Many algorithms on trees are formulated quite intuitively in a recursive form. Also expression parsing is inherently recursive.
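For example, a depth-first sum of a binary tree (my sketch) needs a private copy of its argument for every pending call – impossible if the activation record lives at a fixed address:

struct Node
{
    int   value;
    Node* left;
    Node* right;
};

// Each nested invocation needs its own 'node'; the depth of the
// nesting depends on the data, not on the code.
int sumTree(const Node* node)
{
    if (node == nullptr)
        return 0;
    return node->value + sumTree(node->left) + sumTree(node->right);
}
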
Do you need recursion on a PIC18? Maybe not, if only because – as you may remember – the little devil has a hardware call stack of only 31 entries.
Is there any other case where you need re-entrancy? Yes: pre-emptive multitasking. A task executing a function may be pre-empted and another task switched in; if you are unlucky enough, the new task may enter the very same function (and in multiprogramming, unluckiness is the norm).
You may rightly object that multitasking on an 8-bit micro is overkill. I agree, even if I don’t like to give up options before starting to analyze the project requirements.
Anyway, there is a form of multitasking you can hardly escape even on 8 bits – interrupts.
Interrupts occur whenever the hardware feels like it or – more precisely – asynchronously. And it is perfectly legal, and even desirable, to have some library code shared between the interrupt and non-interrupt contexts.
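Here is how that goes wrong with statically allocated activation records (a made-up example, names are mine):

#include <cstdint>

// Hypothetical helper shared by both contexts; 'work' plays the
// role of a statically allocated activation record.
static uint16_t work;

uint16_t checksum(uint16_t seed, uint8_t data)
{
    work = seed ^ data;                 // fixed location, shared by all callers
    work = (work << 1) | (work >> 15);  // more steps through the same location
    return work;
}

void mainLine()
{
    // If the interrupt fires between the two statements inside
    // checksum(), 'work' is silently corrupted.
    uint16_t crc = checksum(0xFFFFu, 0x42);
    (void)crc;
}

void uartIsr()   // pretend this runs at interrupt level
{
    (void)checksum(0xFFFFu, 0x10);
}
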
When I pointed this out, Microchip technical support answered that this case is solved through a technique they call “function duplication” (see section 5.9.5 in the compiler manual). In fact, if you don’t use a stack for passing parameters to a function, you can place the activation record (i.e. the parameters and the space for the return value) at a fixed position in static RAM. Of course this makes your code non-re-entrant (that’s why you usually want a stack). But you can provide two levels of re-entrancy by duplicating the activation record and selecting the right copy according to the context you run in.
Note that you don’t duplicate the code, just the activation records.
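If I read the manual correctly, the effect is equivalent to this hand-rolled sketch (the real compiler selects the copy transparently; the code below is only my illustration):

#include <cstdint>

// One copy of the activation record per context:
// index 0 for main-line code, index 1 for the interrupt.
static uint8_t avg_a[2];
static uint8_t avg_b[2];

uint8_t average(uint8_t ctx, uint8_t a, uint8_t b)
{
    avg_a[ctx] = a;     // each context writes its own copy...
    avg_b[ctx] = b;
    return (uint8_t)((avg_a[ctx] + avg_b[ctx]) / 2);    // ...and reads it back
}
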
A neat trick, at least until you realize that a) the PIC18 has two interrupt levels, so you would need three copies of the activation records, and b) this is going to take a lot of space.
In this sense the call stack is quite efficient, because it holds only what is needed at the current execution point. You will not find in the stack anything that does not belong to the dynamic chain of function invocations. Instead, if you pre-allocate activation records and local variables, you need that space all the time.
Well, this may not be completely true, since the compiler can actually perform convoluted static analysis and discover which functions are mutually exclusive. Say that function a() first calls b() and then c(), that b() and c() are never called elsewhere, and that their addresses are never taken. In this case the same memory can be reused for activation records and locals, hosting first b()’s stuff and then c()’s stuff.
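The effect is the same as overlaying the two sets of locals by hand, union-style (illustrative only):

#include <cstdint>

// If b() and c() can never be active at the same time, their
// locals may legally share the same bytes – an overlay.
static union
{
    struct { uint8_t i; uint8_t tmp; } b_locals;
    struct { uint16_t acc; }           c_locals;
} overlay;

static void b() { overlay.b_locals.i = 0;   /* ... */ }
static void c() { overlay.c_locals.acc = 0; /* ... */ }

void a()
{
    b();    // 'overlay' hosts b()'s locals here...
    c();    // ...and c()'s locals here, in the very same bytes
}
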
Note that this kind of analysis is very delicate and sometimes not even possible. You may have function pointers, and you may also have assembly code calling at fixed addresses. So I don’t think this kind of optimization can be very effective, but I would need to run some tests to support my claim.
Let’s get back to my firmware. The most complex project counts about 1000 functions. Nearly 700 of them require one or more parameters.
Almost every one of them uses local variables.
Even counting a single byte each for parameters and for locals (and I am underestimating), we are talking about 2k on a micro that has 3.5k of RAM.
Yes, I have to admit that the static approach is very fast, because addressing is fixed. At any point you know the address of every variable at compile time: you don’t have to walk the stack, and direct access is fast.
Anyway, what I wrote is enough to show that it is not possible to trivially port complex code from mplabc18 to xc8.
One last case where you need re-entrancy is when you have function wrappers that accept function pointers. If the wrapped code calls other code that uses the same wrapper, you are again in need of re-entrancy.
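A contrived sketch of the situation (all names invented):

#include <cstdint>

typedef void (*Visitor)(uint8_t item);

// Generic wrapper: applies 'visit' to every element.
void forEach(const uint8_t* data, uint8_t count, Visitor visit)
{
    for (uint8_t i = 0; i < count; ++i)
        visit(data[i]);
}

static const uint8_t inner[] = { 1, 2, 3 };

static void leafVisitor(uint8_t) {}

// This callback re-enters forEach() before the outer call has
// returned: with fixed activation records, the outer 'data',
// 'count' and 'i' would be trampled.
static void nestedVisitor(uint8_t)
{
    forEach(inner, 3, leafVisitor);
}

void scanAll(const uint8_t* outer, uint8_t n)
{
    forEach(outer, n, nestedVisitor);
}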

All Inclusive

“And with this promotion it’s 9€ per month. Are you already a client?” “Well, yes indeed; I’ve been a client of the same mobile phone provider for maybe 15 years.”
“So, it’s 19€ to activate.”
Gosh, no one mentioned that in the commercial aired on TV… And what if I had not been a client?
“Then it would have been free.”

Apparently, it is worth changing provider periodically.

Ugly Code

Unfortunately, I’m a sort of purist when it comes to coding. Code that is not properly indented, global and local scopes garbled up, obscure naming, counter-intuitive interfaces… all conspire against my ability to read a source, and cause headaches, an acid stomach, and buboes. “Unfortunately,” I wrote, meaning it is most unfortunate for the poor souls who have to work with me, to whom I must appear as a sort of source-code Taliban.

Recently my unholy-code alarm triggered when a colleague – trying unsuccessfully to compile an application produced by a contractor – asked me for advice.

The more I delved into the code, the more my programmer’s survival instinct screamed. The code was supposedly C++, and the problem was related to a class that I will call – to save the privacy and dignity (?) of the unknown author – SepticTank. This class interface was defined inside a .cpp and then again in a .h. Many methods were inlined by implementing them in the class interface (and this was possibly part of the problem).

After resolving some differences, the code refused to link, because there was a third implementation of the SepticTank destructor in a linked library. I claimed that such code couldn’t possibly work (even after disabling the dreaded precompiled headers – I have never seen a Visual Studio project working fine with precompiled headers): even if we managed to get it compiled and linked, the mess was so widespread that nothing good could come of it.

My colleague tried to save the day by removing the implementation of the SepticTank destructor, so as to leave only the one found in the linked library.

In the end he had to give up, because the code was broken beyond repair: even when it compiled and linked, it crashed on launch (not really surprising).

What struck me most, basically because it caused a slight fit of dizziness, was the sight of the mysterious operator below –

class SepticTank
{
    public:
        // code removed for your comfort
        operator SepticTank* ()
        {
            return this;
        }
        // other code removed, again for your comfort
};

My brain had some hard moments trying to decode the signals coming from my eyes. Then it figured out that the coder was redefining the implicit conversion to the class pointer, so as to use instances and references where pointers were expected… why on Earth would one want something like that?!?

Implicit conversions, if not handled correctly, are a sure way to shoot yourself in the foot, and this alone is a good enough reason to stay away from them. But… trying to enter the (criminal) mind that wrote that code, what’s the purpose? Just to avoid the need for the extra ‘&’? Or is it a Javism? Maybe it is better to stay out of such minds…
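To see what this conversion silently allows, consider a made-up snippet in the same spirit:

#include <cstdio>

class SepticTank
{
    public:
        operator SepticTank* ()     // the culprit
        {
            return this;
        }
};

void inspect(SepticTank*) { std::puts("got a pointer"); }

int main()
{
    SepticTank tank;

    inspect(tank);  // compiles: the object silently decays to a pointer

    if (tank)       // compiles too: object -> pointer -> bool,
    { }             // always true – almost certainly not intended

    // delete tank; // would compile as well, deleting a stack
                    // object – instant undefined behavior

    return 0;
}

Everything above goes through without a single diagnostic, which is exactly why user-defined implicit conversions deserve suspicion.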

I’d like to introduce…

Say you have slightly more than one hour to talk about C++ to an audience of programmers ranging from the self-taught C-only programmer to the yes-I-once-programmed-in-C++-then-switched-to-a-modern-language type. What would you focus your presentation on? I started composing slides along the lines of “C++ for C programmers”, but it is a huge task and for sure it won’t fit into a single week.
Also, I must resist the temptation to teach, since a single hour is not enough to learn anything.
So I am planning to re-shape my presentation in the form of wow-slides. I mean that every slide (no more than 10-15 of them) should show a C++ idiom / technique / mechanism that would cause a wow reaction in a C programmer or in a C++98 programmer.
Advice is highly appreciated.

Having a Good Time

Once upon a time there was this Unix thing powering departmental mainframes. When it came to keeping track of system time, it sounded like a perfect solution to have a virtual clock ticking once every second: counting the seconds since the Epoch was the way to keep track of the current date and time. Such a clock, stored in 32 bits, had several advantages – it was simple, easy to store and manipulate, and it was guaranteed to be monotonically increasing for such a long time that the day the guarantee broke would be someone else’s problem. Now switch context: you are working on a micro within an embedded system. The micro has the standard nowadays System-on-Chip parade of peripherals, including a hardware timer that can be programmed to interrupt your code at whatever frequency is convenient for you.
Using 1ms as a clock base may seem a little aggressive, but that timespan makes a lot of sense – it is unnoticeable for us sluggish humans, but it is a sort of eon for a speedy electron. In other words, it is a good compromise on which to base your time engine. On top of this you can build some timer virtualization, so that you may employ multiple timers, either periodic or one-shot, in your application.
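A minimal sketch of such virtualization (hypothetical names; on real hardware timerIsr() would be hooked to the 1ms timer interrupt):

#include <cstdint>

static volatile uint32_t g_ticks;   // advanced by the 1ms interrupt

void timerIsr()
{
    ++g_ticks;
}

struct SoftTimer
{
    uint32_t start;     // tick count when armed
    uint32_t span;      // duration in milliseconds
    bool     periodic;
};

void arm(SoftTimer& t, uint32_t span, bool periodic)
{
    t.start = g_ticks;
    t.span = span;
    t.periodic = periodic;
}

// Polled from the main loop; re-arms periodic timers on expiry.
// Note the subtraction in the test – the reason is explained below.
bool expired(SoftTimer& t)
{
    if (g_ticks - t.start < t.span)
        return false;
    if (t.periodic)
        t.start += t.span;          // drift-free re-arming
    return true;
}
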
So far so good, but there’s a catch. Consider using a 32-bit variable to keep the time (like the glorious Unix time_t). Starting from 0 at every power-up (after all, it is not mandatory to keep full calendar information), it will take one-thousandth of the time it takes Unix to run out of seconds. Is that long enough to be someone else’s problem? No, that’s much closer to your paycheck: it takes some 49 days to roll over, if the variable is unsigned.
At the rollover you lose one important property of your clock – monotonic increase. One millisecond you are at 0xFFFFFFFF, and the next one you are at 0x00000000. Too bad.
Every comparison on time in your code is then at risk of failing.
Is there anything we can do about it? Let’s say we have code like this –

if( now > startTime + workingSpan )
{
    // timeout
}

It seems perfectly legit, but what happens when startTime approaches the rollover boundary? Let’s say that startTime is 0xFFFFFFF0, workingSpan is 0x10, and now is 0xFFFFFFF8. now is within the working span, so we expect the test to fail (no timeout yet). Instead the test succeeds, because 0xFFFFFFF0 + 0x10 rolls over to 0x00000000, which IS less than now (0xFFFFFFF8) – the code declares a timeout while we are still inside the working span.
Is there any way around this, short of going to 64 bits?
You may try to check whether the sum is going to overflow, and act accordingly, e.g.

uint32_t LIMIT = UINT32_MAX - workingSpan;
if( startTime > LIMIT )
{
    // handle the overflow...
}

Or you may check if the operation overflowed –

uint32_t timeLimit = startTime + workingSpan;
if( now < startTime || now > timeLimit )
{
    // handle the overflow
}

You may have to trim the code a bit for corner cases, but I wouldn’t bother, because there is a simpler way. An interesting property of unsigned n-bit (i.e. modulo 2^n) arithmetic is that

(a - b) mod 2^n = a - b

as long as 0 <= a - b < 2^n. In other words, the subtraction now - startTime yields the true elapsed time even if the counter rolled over between the two readings, provided the elapsed time fits in n bits. That means that if we transform the comparison from absolute to relative, we can forget about the overflow and rollover problem. So the following code:

if( workingSpan < now - startTime )
{
    // timeout
}

is correct from an overflow/rollover point of view, requires no extra computing time or space, and is as readable as the original code.
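As a quick sanity check, here are the numbers from the failing example above, straddling the rollover (a standalone snippet):

#include <cassert>
#include <cstdint>

int main()
{
    const uint32_t startTime   = 0xFFFFFFF0u;
    const uint32_t workingSpan = 0x10u;

    // Still inside the working span, just before the rollover.
    uint32_t now = 0xFFFFFFF8u;
    assert( !(workingSpan < now - startTime) );     // no timeout: correct

    // Past the rollover, the span has elapsed.
    now = 0x00000008u;
    assert( workingSpan < now - startTime );        // timeout: correct

    return 0;
}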

The ideal job

For quite some time now, video game studio recruiters have been sending me the occasional mail proposing jobs (for which I am mostly unqualified, but it is always nice to be considered). This morning I received an opening in the south of France with the following benefit list:

  • Christmas Bonus for all employees in November = about 1500€ for one full working year, pro rata for a shorter period
  • Salary increase each year for good performance (for example, in 2009 salaries increased by 7%, in 2010 by 12%, in 2011 by 8%)
  • Profit sharing system: Above a certain threshold, we give back to employees 50% of bonus gained on video game sales
  • Gift vouchers at the end of the year = 140 €
  • 19 luncheon vouchers per month of 8,80 € each (we cover 60 % = 5,28€ ), you pay only 40%.
  • 50 % of your public transport card will be reimbursed.
  • Retirement and health care – we take 100% in charge (for you and your family). And as you perhaps know, the French state guarantees a high level of health care insurance too.
  • We contribute to an agency that can help you financially in buying or renting a flat
  • We will help you to find a flat to rent – we have a partnership with a relocation company based locally (they help the candidate find a flat, welcome him on his arrival, and help him with things like opening a bank account, discovering the city, finding schools for children, finding a good place to practice sport…).
  • If you’re not speaking French, we would offer you French lessons to facilitate your integration
  • Media library: you can borrow consoles, video games, DVDs… for free (ask the IT guys)
  • Free cakes and soft drinks dispenser.
  • As we are more than 50 employees, we have a works council – its goal is to offer you discounted products such as theater tickets, trips, cinema… and another of its roles is to mediate between employees and employer if needed.
  • From our studio, you will be on the beach in 1 hour and you will be on ski slopes in 2,5 hours !

How could I not be tempted? Especially by the one-hour distance from the seaside?!

Week-end Effectiveness Degree

Degree – Name – Description
0 – Null – You spent the weekend thinking about your job problems. The PC is already on, and as soon as you sit in front of it, the weekend may as well never have existed.
1 – Weak – As soon as you sit in front of your PC you remember what you were doing last Friday. The weekend went fine, even if you couldn’t tell how you spent it.
2 – Acceptable – You feel lost for a while, then you manage to understand the notes you left for yourself. A coffee and you’re back.
3 – Good – You guess your password on the third attempt, after the coffee. You throw your notes away because they are meaningless gibberish.
4 – Excellent – You enter the wrong office and fail to recognize your coworkers. You need more than half a day to find your desk.
5 – Total – The mental autopilot that allows you to drive to your workplace in a comfortable half-sleep fails, and you find yourself in front of your old school.