Maybe one of the main differences between embedded software/PC programmers and server/backend programmers is their attitude toward system resets. A server programmer will try as hard as possible to avoid any sort of system reboot, since this could make a bad situation even worse. They will always strive for graceful service degradation (i.e. the system no longer provides its full or top-level service) whenever forced to take action against unexpected or failing conditions.
button down -> light on
Picture this project – you have a battery-operated gizmo with LEDs that has to light up at a given time and turn off at another. Reality is a bit more complex than this, but not by much.
So what could possibly go wrong? How much could the development of its firmware cost?
The difficulty may seem on par with “press button – light on”.
As usual, the devil is in the details.
Let’s take just one component out of the whole. If the device is battery operated you need to assess the charge of the battery quite precisely; after all, it is expected to do one thing and it is expected to do it reliably. The battery charger chip (fuel gauge) comes with a 100+ page datasheet. You may dig a pseudo-driver for this chip out of the internet, only to realize that it uses undocumented registers. Eventually you get some support from the producer and receive a similar, but different, datasheet, plus an initialization procedure leaflet that is close but doesn’t exactly match either of the two documents.
Now you may think that a battery is a straightforward component and that the voltage you measure at its connectors is proportional to the charge left in the battery. In fact it is not like that, because the voltage changes with temperature, humidity and the speed at which you drain the energy. So the voltage may fall and then rise even if you don’t recharge the battery.
To cope with this non-linear behavior the fuel-gauge chip keeps track of energy in and energy out and performs some battery-profile learning in order to provide a reliable charge status and sound predictions about residual discharge time and charging time. The chip features an impressive number of battery-specific parameters.
Since the chip is powered by the battery itself (or by the charger) and doesn’t feature persistent storage, you have to take care of reading the learned parameters, storing them in some persistent memory, and writing them back at the next reboot.
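To make the idea concrete, here is a minimal sketch of that save-and-restore dance in C. The device address, the register block and the I2C/flash helpers are placeholders of my own, not the real API of any specific fuel gauge – the actual register map comes from the chip’s datasheet (or, as noted above, from several slightly disagreeing ones).

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical platform hooks – names are placeholders, not a real API. */
extern bool i2c_read_regs(uint8_t dev, uint8_t reg, uint8_t *buf, uint8_t len);
extern bool i2c_write_regs(uint8_t dev, uint8_t reg, const uint8_t *buf, uint8_t len);
extern bool flash_save(const void *data, uint16_t len);
extern bool flash_load(void *data, uint16_t len);

#define GAUGE_I2C_ADDR    0x36u  /* example address, check your datasheet  */
#define GAUGE_LEARNED_REG 0x40u  /* example register block, chip specific  */
#define GAUGE_LEARNED_LEN 16u    /* example size of the learned parameters */

/* Periodically snapshot the learned battery profile so it survives power loss. */
void gauge_backup_learned(void)
{
    uint8_t params[GAUGE_LEARNED_LEN];

    if (i2c_read_regs(GAUGE_I2C_ADDR, GAUGE_LEARNED_REG, params, sizeof params))
        (void)flash_save(params, sizeof params);
}

/* At boot, push the last known profile back so the gauge doesn't start learning from scratch. */
void gauge_restore_learned(void)
{
    uint8_t params[GAUGE_LEARNED_LEN];

    if (flash_load(params, sizeof params))
        (void)i2c_write_regs(GAUGE_I2C_ADDR, GAUGE_LEARNED_REG, params, sizeof params);
}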
By the way, this is just one part of a system that is described by some 2000 pages of programming manuals specific to the components used in this project.
In a past post, I commented on an Embedded Muse article about comparing firmware/software project cost to the alternatives. The original article basically stated that for many projects there is no viable mechanical or electrical alternative.
This specific project could have an electromechanical alternative – possibly designing a clock with some gears, dials and a mechanical dimmer would be feasible. I am not so sure that it would be that much cheaper. But the point I’d like to make now is that you basically can’t sell an electromechanical product today – it is not what the customer wants. Even if the main function is the same, the consumer expects the battery charge meter, the PC connection, the interaction. That’s something we take for granted.
Even the cheapest gizmo from China has the battery indicator!
And this is the source of distortion: we (and the internal customer who requested the project) have a hard time accepting that, regardless of how inexpensive the consumer device is, that feature may have taken programmer-years to develop.
This distortion is dangerous in two ways – first, it leads you to underestimate the cost and allocate an insufficient budget; second, and worse, it leads you to think that this is a simple project that can be done by low-cost hobbyists. Eventually you get asked to fix it, but by the time you realize how bad a shape the code (and the electronics) is in, you are already over budget, with furious customers waving pitchforks, torches and clubs knocking at your door.
I’ve got the PIC
It was 1971 when the first single-chip CPU hit the shelves. 4004 was the name. Rough by today’s standards, it nonetheless had several impressive features – among them 16 registers and 5 instructions that were 16 bits wide. One year later it was the turn of the 8008, with 6 registers of 8 bits each. This chip was the base for the 8080, Z80 and 8086. I am quite familiar with the 8080 (basically the CPU powering the Game Boy Color) and with the Z80 (the heart of many 80s home computers – the ZX Spectrum and the Amstrad CPC). Later it was time for extremely elegant and rational architectures – the MC68000 and the ARM.
So I supposed that the evolution of CPUs had led to better chips with rational architectures, with legacy kludges slowly fading into oblivion. I was happy.
Then I met the Microchip PIC. To give you an idea, I would say that the PIC is to CPUs what Cobol is to programming languages.
The PIC has basically a single register, plus a set of memory locations with hardwired operations. For example, if you want to access a memory location indirectly, you write that location’s address to a specific address, then you read another specific address and you get the indirect access.
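A toy C model of the mechanism may help. On a real PIC, FSR and INDF are special function registers and this happens in assembly (or inside the compiler); here they are plain variables, used only to illustrate the dance: park the address in one location, then read or write through the other.

#include <stdint.h>

/* Toy model of PIC indirect addressing: FSR holds an address, and every
 * access to INDF is redirected to the cell FSR points at.  These are
 * ordinary variables here, not the real special function registers. */
static uint8_t data_mem[256];   /* pretend data memory bank             */
static uint8_t FSR;             /* file select register: the "pointer"  */

#define INDF (data_mem[FSR])    /* reads/writes go where FSR points     */

/* Clearing a buffer the PIC way: load FSR with the start address, then
 * hammer INDF while incrementing FSR. */
static void clear_buffer(uint8_t start, uint8_t len)
{
    FSR = start;
    while (len--) {
        INDF = 0;               /* actually writes data_mem[FSR]        */
        FSR++;
    }
}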
The PIC features a Harvard architecture, that is, program memory is separate from data memory. Program memory addresses can be up to 24 bits wide, while data memory holds no more than 64 kbytes, though usually you get just a few.
The CPU has a 31-level hardware stack for calls. That means that only return addresses can be stored on this stack. If you want to use a stack to pass parameters and/or to store local variables, you have to implement your own software stack. In the latest PICs you get some specialized memory addresses that help you with this task.
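Here is a rough sketch, in plain C, of the kind of software stack the compiler (or you, in assembly) has to maintain for arguments and locals; the size and the names are made up for illustration, while return addresses keep using the 31-entry hardware stack underneath.

#include <stdint.h>

#define SW_STACK_SIZE 64u

static uint8_t sw_stack[SW_STACK_SIZE];
static uint8_t sw_sp;           /* software stack pointer, grows upward */

static void sw_push(uint8_t value)
{
    if (sw_sp < SW_STACK_SIZE)
        sw_stack[sw_sp++] = value;   /* real code would trap overflow */
}

static uint8_t sw_pop(void)
{
    return (sw_sp > 0u) ? sw_stack[--sw_sp] : 0u;
}

/* "Passing" a parameter through the software stack. */
static uint8_t add_one(void)
{
    uint8_t arg = sw_pop();     /* fetch the argument the caller pushed   */
    return (uint8_t)(arg + 1u);
}

static uint8_t caller(void)
{
    sw_push(41u);               /* argument goes on the software stack    */
    return add_one();           /* return address uses the hardware stack */
}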
But obviously this architecture was not designed for modern languages (if you can call C, with its 40 years of history, modern). So at Microchip they decided that some extended instruction set was needed. I think they had the best intentions and that, being engineers, they took the best decisions. But the result leaves me head-scratching… Basically they added a static configuration bit to the CPU. This bit is stored in program memory, so you can’t change it without rebooting. When this bit is set, the meaning of nearly half of the instruction set is altered so that, rather than accessing a fixed memory address, that address is used as a displacement from a pointer held in a given memory location.
Kiss backward compatibility goodbye.
I would add that the Harvard architecture doesn’t mate well with C (at least with the compiler you can buy from Microchip). In standard C, pointers may have different sizes according to the pointed-to type, but a void pointer is large enough to accommodate any data pointer. With the PIC C compiler this is not true – the size of a pointer depends on a non-standard modifier, “rom” or “ram” (ram being the default). So if you point into ram the pointer is 16 bits wide, but if you point into rom the pointer is 24 bits. If you move a rom pointer into a void pointer you lose the 8 most significant bits. The drawback is that you cannot write code agnostic of where the pointed-to data lives.
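A short sketch of how the truncation bites, written with the C18-style “rom” qualifier mentioned above; the exact pointer sizes and diagnostics depend on the compiler version and memory model, so take this as an illustration rather than a reference.

/* "rom" is the non-standard qualifier; this only compiles with the PIC
 * compiler that defines it. */
const rom char message[] = "boot";      /* stored in program memory         */

void pointer_pitfall(void)
{
    const rom char *pgm_ptr = message;  /* 24-bit program-memory pointer    */
    void *generic = (void *)pgm_ptr;    /* void* is a 16-bit data pointer:  */
                                        /* the top 8 address bits are lost  */
    const char *ram_ptr = (const char *)generic;  /* now points into RAM,   */
    (void)ram_ptr;                                /* not at "boot"          */
}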
Considering all this, the fact that the compiler requires “main” to be declared as “void main(void)” can be safely ignored.