Notes from “The Scala Center” by Heather Miller. The Scala Center is a non-profit organization established at EPFL. It is not Lightbend. Same growth chart as yesterday, sources are not cited (indeed?). The Stack Overflow survey reports Scala among the top 5 most loved languages.
The organization will take on the burden of evolving and keeping organized the libraries and the language environment, and of educating and managing the community, rather than the language itself.
The Coursera Scala class is very popular (400k enrollments) with a high completion rate. There will be 2 new courses on the new Coursera platform. Unverified courses are free; verified and certified courses are paid.
Functional programming in Scala – 6 weeks.
Functional program design in Scala – 4 weeks.
Parallel programming – 4 weeks.
Big data analysis in Scala and spark – 3 weeks.
My (somewhat cynical) impression – a lot of work and a desperate need for workforce, which they are looking to get for free by grooming the community.
EPFL funds 2 people for the MOOCs; the rest comes from industry donations and MOOC revenues.
Lightbend? It will continue to maintain the stable Scala.
A package index is not yet available for Scala – that is, people should be able to publish their projects and get them used without the need to be salespeople.
Scala Library Index: index.scala-lang.org. It is an indexing engine.
Just wondering – is this a language for academia or for industry? Keep changing things and the investments made by industry will be lost: the language is going to change, and the base libraries are going to change as well… What guarantees do I have that my code will still compile 5 years from now?
Changing things is good for academia, since it allows research and makes new concepts easier to teach. It doesn't harm a community where the workforce is free and there is no lack of people willing to redo the same things with new tech.
The first sprint is over and it has been tough. As expected, we are encountering some difficulties – mainly key roles without enough time to invest in Scrum.
Scrum, IMO, tends to be well balanced – the team has a lot of power, having the last word on what can't be done, what can be done and how long it will take (well, it's not really the time, because you estimate effort and derive duration, but you can basically apply the inverse function and play the estimation rules to get what you want).
This great power is balanced by another great power – the Product Owner (PO), who defines what has to be done and in what order.
Talking, negotiating and bargaining over the tasks is the way the product progresses from stage to stage toward its shippable form.
In this scenario it is up to the PO to write User Stories (brief use cases that can be implemented within a sprint) and to define their priority. In the first sprint planning meeting, the team estimates one story at a time to set its Story Point value.
This is a straightforward process: just draw a Fibonacci series along a wall and then set a reference value. We decided that a single person working for one week is worth 8 story points. This is just arbitrary; you can set whatever value you consider sound. Having set two-week sprints and being more or less 5 people, we estimated an initial velocity somewhere between 80 points per sprint and, halving that value to be safe, 40 points. The Scrum Master (SM) reads each User Story aloud and the team decides where to put it on the wall. Since the relationships between story points are very evident, it is quite easy and fast to find where to place new stories.
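Just to make the arithmetic explicit, here is the back-of-the-envelope calculation as a minimal sketch (the 8-points-per-person-week calibration is our arbitrary choice, not a Scrum rule):

```cpp
#include <iostream>

int main() {
    const int pointsPerPersonWeek = 8; // arbitrary calibration we chose
    const int teamSize = 5;            // more or less 5 people
    const int sprintWeeks = 2;         // two-week sprints

    // Optimistic ceiling, then halved as a prudent first estimate.
    const int optimistic = pointsPerPersonWeek * teamSize * sprintWeeks;
    const int prudent = optimistic / 2;

    std::cout << "Initial velocity: between " << prudent << " and "
              << optimistic << " story points per sprint\n";
}
```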
That meeting, which went well beyond the time-boxed 2 hours, was attended by the whole team, SM included – everyone but the PO, who was ill (for the very first day in years). We could have delayed the meeting, but the team would have been idling for some days… not exactly doing nothing – you know, there's always something to refactor, some shiny new technology to try – but for sure we wouldn't have gone the way our PO would have wanted us to go.
So we started, and within a few moments it became clear that we were too many. On a project that ranges from firmware to the cloud, when a specific story was discussed many people were uninterested, grew bored and started doing their own stuff. In the end, about three of us were really involved in the estimation game.
The absence of the PO at that specific moment was critical – we missed the chance to set proper priorities on tasks, since we had to rely on an email and couldn't ask our questions and get proper feedback. In the end, we discovered that the highest-priority User Story had been left out of the sprint.
The other error we made was not decomposing User Stories into Tasks. This may seem redundant at first, but it is really needed, because a User Story may involve different skills and thus different people to implement.
The sprint started and we managed to hold the daily scrum. This is a short meeting scheduled early each morning where every member of the team says what she/he did the day before, what she/he is going to do that day, and whether she/he sees any obstacle to reaching the goal. This meeting is partly a daily assessment and partly a statement of intent where, at least ideally, everyone sets the goal for the day. Anyone may attend, but only the team may speak (in fact, has to speak).
Daily Scrums went fine; we managed to include the one or two programmers who were not co-located most of the time via a Skype video call. The SM updated the planned/doing/done walls. The PO attended most of the time.
On the other hand, the team interacted only sparingly with the PO during the sprint, and only in part because his presence was often needed elsewhere. Also, I have to say that our PO is usually available by phone even when he's not in the office.
This was compounded by the team underestimating integration testing and the Definition of Done. Many times user stories were considered done even though they had been tested only with mockups, not with real physical systems. The deploy process failed silently just before the sprint review, leaving many implemented features undemonstrable. Also, our Definition of Done requires the PO to approve each user story implementation; this wasn't done for any task before the sprint end, as we relied on the sprint review to get the approval.
The starting plan included 70 Story Points, and we collected nearly another 70 points of new User Stories during the sprint. These new stories appeared either because we found work that could be done together with the current activities, or because it needed to be done to make sense of the current activities.
Without considering the Definition of Done, we managed to crunch about 70 points; that figure was nearly halved by applying the Definition of Done (at the worst possible moment, i.e. during the Sprint Review).
Thinking about improving the process: working remotely is probably not that efficient – I noticed that a great deal of communication (mainly via Slack, but also via email and Skype) happened on the days those two programmers were off-site.
The sprint end was adjusted to allow for an even split of sprints before delivery, so we had somewhat less time than we had planned for.
The end/start sprint meetings (i.e. Sprint Review, Sprint Planning and Sprint Retrospective) didn't fit well into the schedule, mostly because the PO was too busy with other meetings and activities.
Am I satisfied with the achievements? Quite. I find that even just starting to implement the process exposes the workload and facilitates communication. The team's pace is clear to everybody.
Is the company satisfied with the achievements? This is harder to say, and it should be the PO who says it. I fear that other factors affecting the team's speed in implementing the PO's requests may be lumped together with Scrum and dumped along with it. Any process imposes some overhead, and the illusion that a process is not really needed to get things done is always lurking nearby.
Today we planned for another sprint, but this is for another post.
I'm pretty sure that, in the common perception, programmers are considered rational folks, their minds solidly rooted in facts, backed by engineering, grounded in logic, algebra and math. Brains like knives that part truth from lies, dispelling doubts and myths.
Well, maybe. What is true is that those who write programs out of passion before writing them for a living pride themselves on being artists (or at least craftsmen). Artists have inspirations, base their work on inner emotions, use rationality just as a tool when they need it, and irrationality the rest of the time.
We programmers can write code capable of inserting a spacecraft into Pluto orbit with astounding precision while, at the same time, deciding to quit a job if forced to use some tool or process we don't like.
I have tried to be as rational as possible in choosing my tools, programming habits and processes, always trying to justify my choices in terms of engineering practice, sometimes overriding my gut feeling. Recently I have been confronted with – or, maybe better, challenged by – colleagues and friends on these matters, and long and heated discussions arose.
So I decided to prepare a short poll via SurveyMonkey (even shorter because of the limits on free surveys), on the issues that seem to be the strongest matters of religion and faith among my friends. Here I'm presenting the results.
As of today I have received 24 poll submissions. The poll is still open, so feel free to take it; if the results change significantly in the future, I'll update my analysis. I won't claim any statistical validity – it is just a poll among friends, likely a very biased set of programmers.
Preferred Text Editor
This is one of the most ancient religious wars among programmers, dating back well before the advent of the PC.
I find it interesting that, setting aside the Windows editor Notepad, vim/gvim comes second, winning even over nano, the other Linux/Unix standard. Emacs seems quite dead, which is somewhat surprising when compared to vim. Among the other votes I count one and a half for Sublime Text, half a vote for Atom (which I don't know), a third of a vote for Notepad++ and two misvotes (I had asked for no IDEs).
My editor of choice is usually vim/gvim, but when I'm on Windows I more often than not go for Notepad++. The vim choice was not a straightforward one, because vim is hard to learn. At the beginning, when I used VT100 terminals at university, I hated it: it looked like a cumbersome relic from a long-gone era. At home I could interactively use CygnusEd on the Amiga, but at school we were prevented from using Emacs because the poor HP-UX box we used had just 16M of RAM and Emacs made it crash on launch.
Then I came to terms with vi, though even after reaching a fair proficiency I never suggest that anyone learn it. There are two main reasons that make vim my preferred editor. First, it is available on every Unix machine. You may find nano or pico or even emacs, or none of them – it depends on the distribution and on the system – but vi, if not vim, is there for sure. Also consider that Linux is now used on embedded systems that are still resource-constrained, so chances are you can't install the editor you want. The second reason is that when you have a slow, intermittent (or blind) connection, nothing beats vi. You count how many columns you want to advance, how many characters you want to delete, where to insert, and with a single command you instruct the editor to do what you want (e.g. 10l moves the cursor 10 columns to the right, 10x deletes 10 characters). Try to move the cursor 10 columns forward on an intermittent connection using a conventional editor by pressing the right-arrow key repeatedly. Are you sure you pressed it ten times? And that the editor on the other side of the Moon received 10 keypresses? How confident are you? With vim you are sure of what you have done. On the other hand, if you happen to have the wrong keyboard layout…
Indent Technique
Indentation is needed for readability in many programming languages; in some it is even required for proper compilation. There are two basic techniques, whose origins are lost in the dawn of electric typewriters – spaces and tabs.
The advantage of tabs is that you can use editor/IDE preferences to set their width, so the indent can always please your taste. But tabs mix badly with spaces, so a file containing both may become messy when viewed with a tab size different from the one used to write it. Also, sometimes you need an indent not aligned with the standard indentation levels (e.g. when you need to split a long line); in this case you are forced to use spaces, causing the file to get messed up again if viewed with a different tab size.
Spaces may be a bit duller, but they are reliable – you always see what you have, and the file always looks the same regardless of user preferences. That's why I found the result of this poll interesting.
As properly pointed out by a friend of mine – you should use tabs for indentation and spaces for alignment. That makes a lot of sense, but it is pretty hard to enforce without delving quite deeply into the syntax of the language being edited.
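To make the convention concrete, here is a minimal C++ sketch (whitespace being invisible, the comments mark which runs would be tabs and which spaces; the function itself is just an illustration):

```cpp
#include <cstdio>

void report(int temperature, int pressure,
            int humidity)               // spaces: align with the '(' above
{
	if (temperature > 100) {            // one tab: one nesting level
		std::printf("t=%d p=%d h=%d\n", // two tabs: two nesting levels
		            temperature,        // two tabs, then spaces to align
		            pressure, humidity);
	}
}
```

Changing the tab width now re-indents the blocks but leaves the alignment intact, which is exactly the point.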
Indent Column
Still on indentation, this question asked for the preferred indent size. A wide indent nudges the programmer toward avoiding deeply nested code, since the code quickly runs off the right margin. That's why I prefer a fairly wide indent of 4 columns. I found that many colleagues and friends agree with me:
Interestingly enough, the sum of all the votes for indent sizes smaller than 4 is no greater than the votes for 4 alone. Surprisingly, two people love single-column indentation! That option was more of a joke than something I expected to be taken seriously.
Opening Brace Position (brace = open-block symbol)
Braces are used to define blocks of instructions in many languages whose ancestry can be traced, more or less easily, back to BCPL (even if I would have some difficulty seeing such a lineage in Scala). There are several styles for the placement of the opening brace. The C language of Kernighan & Ritchie puts the opening brace at the end of the statement that defines what kind of block it is. Allman style, initially used to write most of the BSD utilities, uses braces the same way Pascal-like languages use begin/end, i.e. on a line of their own, aligned with the statement. GNU is a third popular indent style and requires the opening (and closing) brace to be on a line of its own, half-indented between the statement and the inner block code.
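For reference, here are the three styles applied to the same trivial C++ snippet:

```cpp
// K&R: the opening brace ends the controlling statement's line.
int krStyle(int x) {
    while (x > 0) {
        --x;
    }
    return x;
}

// Allman (BSD): the brace sits on its own line, aligned with the statement.
int allmanStyle(int x)
{
    while (x > 0)
    {
        --x;
    }
    return x;
}

// GNU: braces on their own lines, half-indented between statement and body.
int gnuStyle(int x)
{
  while (x > 0)
    {
      --x;
    }
  return x;
}
```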
Once again my preferred style won. I like the symmetry of the matching braces, which helps in reading the code and nudges the programmer to keep the code short, since some lines have to be given over to braces alone. The denser the code, the harder it is to read.
I also think it is interesting that I changed my style – I started with K&R (pretty obvious, since I started coding C before the ANSI standard was out). Then, when switching to C++ back in the early nineties, I read the Ellemtel Rules and Recommendations. Those rules made a lot of sense and provided a rationale for each of them, so I was convinced to switch brace style. Lesson learned: if it makes sense, you can change your habit (or religion).
The GNU style scored quite low; maybe that half-indentation is not that appealing.
Language of Choice
At the beginning it was just machine code, and no one could disagree. Then came Fortran, Lisp and Cobol (though the real story is a bit more complex) and suddenly there were four religions (not three, because some claimed that machine code was still the best). For my poll I picked 5 popular languages plus an esoteric one, Antani (this is a mistake – the right name is Monicelli, not Antani, sorry).
Once more, somewhat surprisingly, the preferred language is C++, which matches my own preference (I swear I didn't rig the poll), with C# second – on which I agree – and C third. I'm afraid the poll group was very biased in this respect, especially when compared with official indexes such as TIOBE.
Thank god no one chose Antani, but no one had a preference outside the ones listed, either. Given the zillions of programming languages out there, I had expected at least one vote for "other".
Scripting Language of Choice
Scripting languages are the glue of software: with moderate effort they let you combine components and tools to produce advanced and sometimes surprising results. The line between a general-purpose language and a scripting one may be thin in some cases, and I think there is no clear answer. Python, Visual Basic, Ruby and Lua may have their roots in scripting, but they aim to be, or are used as, languages for creating general-purpose software.
Bash is my preferred scripting language. The latest version has a number of features that allow complex programs to be written. Over its evolution it has shed some of its cryptic aspects, letting the programmer use more sensible constructs, though some obscure ones remain. You can hardly beat bash in the Unix/Linux environment when you have to automate the command line; and since in Unix/Linux you can do everything from the command line, bash lets you automate the entire system.
The shortcomings of bash – notably survival-level math handling, the lack of user-defined structured types and no native support for binary files – do not prevent the guru programmer from using bash for everything. A more convenient way is to use Python, which is based on a more modern design and can rely on many disparate libraries.
So I expected Python to collect more votes than bash (even if bash is my preferred one).
"Other" collected one vote for JavaScript (which indeed is a scripting language), one for Lua (another nice scripting language, very simple yet flexible) and one for Perl (another Unix/Linux favorite).
Web Language of Choice
Many (most?) applications today are written as web applications. The languages used to code them have to be chosen carefully; long gone are the times when a CGI interface and some shell scripts could do the trick of making web pages dynamic.
While traditional applications should be developed in C++ according to the majority of my friends, not so web applications. Here Java is king, getting twice the votes of the runner-up, PHP. Having been developed specifically for this task, PHP is a natural second. Surprisingly, Scala and C++ are considered on the same level for this application.
In the "other" section I got one vote for NodeJS, one vote for C# and one vote for no preference (so, my friend who voted for no preference: next time you can program a web application in Monicelli… :-D).
IDE
The IDE is a relatively recent concept in programming; I would date it, at least in its modern form, to the early eighties. Before that you had several separate tools to do your programming job – an editor, then a compiler, a linker and possibly a debugger (interestingly enough, on home computers you had a single environment that could be considered a rudimentary form of IDE). IDEs started to appear on systems with no multiprocessing capabilities, such as CP/M and MS-DOS. The first I saw and used, which incidentally was also the first IDE, was Turbo Pascal on the CP/M operating system.
Nowadays complex projects are preferably managed from an IDE, even when they come with a build-system recipe (be it make, ant, maven or sbt).
Microsoft Visual Studio is the oldest among the choices and the one that got the most votes. I fully agree: Visual Studio is a powerful and comprehensive solution that has long since lost the Microsoft lock-in nature it had at the beginning. When I can't use Windows, my preferred IDE is NetBeans. On the other hand, I can't stand Eclipse. It is a bloated application with no rational design in its interface; its subsystems seem bolted on as an afterthought and don't share the same way of setting variables or doing things. Too bad Eclipse is the platform chosen by many vendors to build their specific development environments on. Consider NXP (formerly Freescale), which provides KDS for developing on their Kinetis processors: you could set up a different IDE, but you would have serious trouble finding the configuration parameters, especially for the debugger.
IntelliJ is a fair alternative to NetBeans; I've used it for a while with Scala, and I think the shortcomings lie more in attempting to understand a cumbersome language than in the IDE itself.
When writing the poll I forgot about Xcode, the Apple proprietary IDE. Apple doesn't trigger my enthusiasm and I have never used Xcode, but I hear it is jolly good – though it would be the only IDE in the list that runs only on proprietary hardware.
I expected some sympathy for KDevelop and Emacs (if not vim), but they didn't get any.
Build Systems
What use is a build system today, when we have such powerful IDEs? Well, you may want to build the application in batch mode (though some IDEs support batch mode), or you may not want to force a specific IDE on the users of your code, or your IDE saves the project in a location-dependent fashion (Eclipse?).
Unsurprisingly, more than half of my esteemed friends and colleagues noted that, using their preferred IDE, they don't need any stinking build tool. True, but I still prefer to have something simpler when a full IDE is not needed.
Make is both the first build tool and my preferred option. Before make, programs were built using shell scripts (an option that still has a supporter, according to my poll). I had a look at ant when it appeared, as a way to manage the build of Java applications; my impression was that ant was just a different way to write makefiles, so there was no gain in learning a different system. CMake is somewhat similar.
Sbt is the tool for building (and managing, I would say) Scala projects. It is an overweight tool that starts downloading the Internet onto your PC the first time you launch it. It then relies on a repository to store and retrieve different versions of libraries, and eventually manages to build Scala applications while hiding the warning messages coming from the underlying tools. As you may have guessed, I can't stand it.
Others pointed out that I left out maven and gradle (one vote each).
Version Control System
Tenth and last question: what is your preferred version control system? A version control system takes care of the history of your source code, so it is quite an important part of development.
At the workplace we had quite a flaming discussion over which version control system to use, split in half (me on one side, my colleagues on the other) between Subversion and git supporters.
Thankfully, no one in her/his right mind thinks that there is no need for such a thing, or that this management can be done without specialized tools (i.e. using basic tools like tar, zip and the like).
Also, CVS has gone the way of the Dodo, as it should have, though without taking Visual SourceSafe with it, as it ought to have. Visual SourceSafe is Microsoft's attempt at a VCS, and the version I used was quite crappy.
Subversion and git take most of the votes and are quite close to each other, with git – to my grief – leading.
I find that Subversion promotes better cooperation within the team and avoids the need for a Software Configuration Manager. You have a single, central, authoritative repository with a fixed history (as fixed as data can be). The team must proceed by small commits and frequent updates. If care is taken not to commit code that breaks the build, this is a very convenient way to proceed.
Git was written out of a very different need. Suppose you are the maintainer of a project: of course you want control over what the contributors are contributing. Maybe you want bugs to be fixed but not certain new features added, or you want to be very careful with the code written by a specific programmer. You need an easy way to add and remove single commits, to merge, and to go back if anything goes wrong.
But when you apply git to a development team, you run the very real risk of having every developer on a different version of the codebase, with a central codebase that is out of date or inconsistent, because there is no such thing as a maintainer.
Nonetheless, my team promised me a tenfold increase in productivity if we replaced Subversion with git, so I opted for the wrong tool. We are about to switch, and this could be material for another post.
Anyway, some other VCSs got votes – besides Mercurial, Perforce also got a single vote, and I got one vote for "none at the moment".
I hope you enjoyed taking the poll and reading my comments at least as much as I enjoyed writing the poll and reading your answers. Your comments are welcome – after all, this is about religion; I would be disappointed if no flaming comments appeared :-).
In his autobiography – I Am Spock – the famous actor Leonard Nimoy wrote that when he started shooting Star Trek he was given a pair of lousy pointed-ear props by the production. He found no way to make them look real, or to keep them in place on top of his ears, so he decided to shell out his own money and get some proper props.
It is always difficult, when you are passionate about your work, to draw a line constraining your involvement. In the same way, I don't like the precision-scale approach – i.e. stopping exactly at the edge of my duties.
So I decided to work around some of my employer's internal bureaucracy problems and buy a logic state analyzer to do my job. Waiting for official channels would have taken time not compatible with the project deadlines.
I opted for a Saleae (the cheapest one – I don't need anything more… and, well, that's still quite some money) and placed an online order with Batter Fly. No shipping fees, and I got the package in two days… great service (just beware – the site shows prices without VAT; the surprise is saved for checkout time).
Here is the box, very light… suspiciously light – hadn't they bothered to put at least a brick inside?
No brick, apparently nothing. But my cat Trilli looks very interested in the chips.
Here it is.
The logic state analyzer is very small: from the pictures I had seen, I thought it was at least twice its actual size.
In the bag, aside from the logic analyzer itself, there are connection wires, probes, a USB cable and a thank-you postcard that doubles as a quick guide.
Yesterday my old friend Jimmy Playfield found a motivational poster on the wall beside his desk. It is usually quite hard to find a motivational poster that is not lame or, worse, demotivational, and the motto on the wall definitely belonged to the latter kind. An ancient Chinese proverb was quoted: “The person who says that it cannot be done should not interrupt the person doing it”. The intent, possibly, was to celebrate those who, despite adverse circumstances and common sense, heroically work against all odds to achieve what is normally considered impossible.
Unfortunately, reality is quite different. As a famous singer optimistically sang, one in a thousand makes it. That sounds like a more accurate estimate for those trying to attain the impossible. And likely, I would add, that one is the person who benefited from the advice and help of friends and co-workers.
Humankind didn't reach the Moon just because someone was kept away from those who said it wasn't possible. To achieve the impossible you need a great team, fully informed about why and how the goal is considered impossible. Great management and plenty of resources usually help, too.
This reminds me of a lesson at university. The teacher gave this example: “Microsoft, searching for new features to add to their compiler line, might find that adding non-termination detection would be great. Imagine: you write the code and the compiler tells you that, under these conditions, your code will hang forever in some hidden loop. Wouldn't that be really helpful? But it is not possible, and this has been proved by Turing.” But… according to that motivational poster, no one should be expected to tell the marketing department, or the poor programmer who volunteered to implement the feature, that no matter how hard he tries, that feature is simply not possible (at least on Turing-machine equivalents).
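For the curious, the core of Turing's argument fits in a few lines. Here it is as a minimal C++ sketch, where halts() is hypothetical – the whole point is that no correct implementation can exist:

```cpp
#include <string>

// Hypothetical oracle: true iff program `p`, run on input `i`, terminates.
// The body is a placeholder; Turing proved that no implementation can be
// correct for every (p, i) pair.
bool halts(const std::string& p, const std::string& i) { return true; }

// Diagonal construction: feed a program its own source code.
void paradox(const std::string& p) {
    if (halts(p, p))  // if the oracle claims "p stops when run on p"...
        for (;;) {}   // ...loop forever, contradicting it
    // ...otherwise return immediately, again contradicting the oracle
}
```

Run paradox on its own source and halts() is wrong whichever answer it gives, so a compiler feature that detects non-termination in general cannot exist.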
So that motivational sentence basically works against teamwork, informed decisions, information sharing and risk management.
A witty saying doesn’t prove anything [Voltaire], but sometimes it is just detrimental.
Well, I was about to close here, but I just read another quote by Voltaire: “Anything too stupid to be said is sung.” Well… that makes for a nice face-off between Morandi and Voltaire.
Happy birthday! BASIC just turned 50. It must have been 1981 or '82 when I first saw a BASIC listing. The magazine was named Nuova Elettronica (New Electronics) and featured a series of columns about building a computer. I remember the writers were very proud of having managed to license an Italian version of BASIC. For sure it was weird (even more so in hindsight): something was understandable (LET a=3, in Italian “SIA a=3”), something else was pretty obscure (FOR i=1 TO 10, “PER i=1 A 10”). I had no computer and the Internet wasn't even in my wildest sci-fi dreams, so I wondered how those lines could produce 10 high-resolution (!) concentric rectangles. One rather puzzling statement was LET a=a+1. I understood equations (I was in my first year of high school), but that couldn't possibly be an equation as I knew them. So I asked an even more puzzled math teacher, who stared at the line for a while, then muttered something about simulations and universe-changing semantics.
Luckily, shortly afterwards another magazine, “Elettronica 2000” (“Electronics 2000”), started a BASIC tutorial. I read those pages until I wore them out, and learned BASIC. For some years, programming and BASIC were practically synonyms. The first thing you saw when you switched on a ZX or a Commodore was the BASIC prompt: the machine was ready to be programmed.
The BASIC era, for me, ended with the Amiga, years later (at that age, years are eons indeed). Microsoft BASIC for the Amiga was pretty unstable, and real performance could only be achieved with C. Maybe the tombstone came in my second year of university, in the first Computer Science lesson, when the professor wrote “BASIC” on the blackboard and then struck it out, saying: “This is the last time I want to hear about BASIC”.
Talking about anniversaries, I think it is noteworthy that the first message on my blog was posted 10 years ago. I would have liked to celebrate this with a new look for my website; I have been working on it for a while, but it is still not ready. I hope to have it online before my website comes of age.
The Embedded Muse is always filled with interesting links and articles. In the last issue I read “Compared to what?”, about the cost of firmware. I think the main problem we programmers face when trying to have our work recognized by non-programmers (usually those who sign the paychecks) is that it is very hard to describe to them what we do; going into the details of how hard it is can be beyond our reach. That is to say, programming (real programming, not integration or recompiling demo projects) has a non-intuitive degree of complexity.
This reminds me of a recent project whose complexity I routinely failed to get upper management to appreciate, since they perceived it as a replacement for an electrical system (a couple of lights, buttons and wires). Indeed, the firmware equivalent of such a simple system was completed in a short time. The rest of the time (actually quite a large amount of it) went into what a plain electrical system couldn't do (graphical display, history, complex priority and communication management).
“How can you say that 2+2 is not 5?! Never ever?! It is only because you believe this. For sure there will be someone out there, better than you at this, who will be able to make 5 out of 2+2.”
That's roughly the transcript of a heated discussion I overheard in a nearby office. They were not talking about mathematics, though, but physics: basically, my colleague was claiming that a given result could not be achieved because of something that is physically impossible.
Just a few days ago I wrote about seeing atoms, and about how something considered impossible should (basically) be regarded as an opportunity in disguise. So I felt the urge to share a couple more thoughts.
It is important – I believe – to make the distinction clear. Photons do not bounce off a single atom (I'm sure this is not a formally correct statement, but it serves to get the hang of it), so it is true that you can't see single atoms. And the human race lived with that for quite a long time. Then someone had the technology to map atoms onto a display screen and worked the magic, but it is still true that you cannot watch them with your bare eyes or with any optical microscope, however powerful.
It doesn't matter how much your boss yells at you: it is something that cannot be done. And it remains impossible if you don't have the budget to build complex machinery that does the job with special sensors. If your customer requirement is “I wanna see atoms with my bare eyes”, then it IS impossible.
If you want a significant advance, you have to mix lateral thinking with quite a lot of money. But, most of all, you have to be ready to accept that, though these conditions are required, they do not guarantee any result.
You can't see atoms. That's what I was taught. Well, everything you see is actually made of atoms, but you can't see single atoms – that was the meaning of the teaching. It is simply not possible – it has to do with atom size and light wavelength. I won't delve into the scientific reasons, but every serious physics textbook will confirm this.
You may imagine my surprise when I saw some pictures of actual molecules. In fact, it is true that you can't see atoms (or molecules), but you can use specialized sensors to produce a visual representation of what atoms and molecules look like. Granted, it is not seeing directly by light rays, but it is fully equivalent for all practical purposes.
That's the idea – something is considered impossible until someone comes up with an idea to work around the limitations and, voilà, what was once considered impossible is today within everyone's reach.
This is what my friend Jimmy Playfield (the name is fake to protect his privacy) told me.
Some days ago, Jimmy's boss called him and all the senior programmers and senior hardware designers together to assign them homework for the short vacation they were about to take.
The goal was to find a way to work within the company environment and successfully craft project after project. The constraint, though, was that they couldn't change anything in the chaotic way projects were handled.
Here is a summary of the constraints:
budgets are fixed, with no increase in the engineering workforce;
requirements are roughly defined at the beginning of a project, then continue to pour in during its entire lifetime;
resources are not statically assigned to a single project, but may be temporarily (and not so temporarily) shared among two or more projects;
contracts are naively written: no real fences or firewalls are set up to force the customer to behave constructively, nor to protect the company from customer whims;
the project manager role has been declared useless, so the company loads the responsibilities of project management onto project specialists;
there are more projects than programmers;
there is no real policy for resource management, no motivational incentives, no benefits, no training (well, here I guess Jimmy was exaggerating somewhat).
Easy, ain’t it?
Well, it sounds like a hard problem. Let's see what “project management 101” has to say about this situation.
First, the triangle of project variables. Basically, every project has three variables – Budget, Features and Time. Of these three you can fix two (any two), but not the third. E.g. you can fix Time and Features, but then Budget is a consequence; or you can fix Budget and Time, but then Features are a consequence (this is the agile methodology configuration).
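A deliberately naive sketch of the triangle, reusing the story-point arithmetic from the sprint post (my own toy model, with a linear capacity assumption that real projects violate all the time):

```cpp
// Toy model: deliverable Features (story points) = velocity * people * weeks.
const double velocity = 8.0; // story points per person-week (arbitrary)

// Fix Time and Features: the staffing (hence the Budget) follows.
double peopleNeeded(double featurePoints, double weeks) {
    return featurePoints / (velocity * weeks);
}

// Fix Budget (people) and Time: the affordable Features follow
// (the agile configuration mentioned above).
double featurePoints(double people, double weeks) {
    return velocity * people * weeks;
}
```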
Usually projects at Jimmy’s company have Budget and Time fixed. So the only chance would be to work on Features.
The Features variable is to be understood as a mix of features and their quality: a poorly implemented feature is likely to take less time/budget than a robust, top-quality implementation. So they could slash quality.
The problem with working in this direction – ethical aspects aside – is that quality is usually the topmost motivational driver. Taking pride in what one does fosters involvement in the project. When a programmer in despair states “tell me what I have to do and I'll do it”, that's the end of motivation, the end of involvement and the sunset of efficiency. The programmer will consider the project an accident, something not worth burning his/her neurons on.
The other problem with slashing quality is that the legal contracts have to be armored to protect the company against a customer who could complain about the lack of quality.
I can't see any solution for them – at least, not within the system – much like you can't see the atom without moving to another level.
By moving to a meta-level, though, you can break out of the system, e.g. by hiring someone else to write the code for the project. This programmer won't do any better than Jimmy and his coworkers – sometimes he will complete the project, sometimes he will fail. But for the company it is a win-win solution: if the contractor succeeds, the company wins; if the contractor fails, the company can blame him, and that's a win again.
The major problem I see with this is that it is a bit suicidal, i.e. Jimmy and his coworkers become redundant and disposable as soon as the contractor is hired. Good luck, Jimmy, and let me know if you find a better solution.
In the last days before vacation, discussions at the workplace raged over the subject of software quality. The topic is interesting, and we easily got carried away throwing flames at each other about programming techniques and programming mastery (we stopped just short of personal arguments). In the end we didn't reach any agreement, but the discussion was interesting and insightful. More precisely, some of us claimed that in software development quality always pays off, while others claimed that the same project goals can be achieved earlier (and thus more cheaply) by employing quality-less programming (I am overemphasizing a bit).
In other words, the question the office tried to answer was: is the cost of quality enhancement worth paying?
It is hard to reach a final answer, and that's possibly why there isn't a clear consensus even among programmers. Take, for example, Lua's designers: they were adamant about not limiting the language's expressiveness in some questionable directions, because they targeted the language at programs of no more than a few hundred lines, written by a single programmer. In other words, they sacrificed some forms of quality because, in their opinion, those forms were just overhead without benefit.
Detractors of software quality have their reasons – programmers are human, and as such they may “fall in love” with their own creation or with the process. This causes either endless polishing and improvement, or the creation of overwhelming designs well beyond the scope of the project. If you question these programmers about their choices, the answer usually involves design for change, portability or reusability. Seldom has any assessment been made to check whether these are really needed for the project, its future evolution or the company.
It is clear that the point is finding the right compromise. “Anybody can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”
Unfortunately, bridge engineering (and other older, more stable engineering fields) is not much help beyond witty sayings, for bridge engineering does not have to take into account staged deliveries, demos, versioning, evolutionary maintenance (“yeah, nice bridge, now grow it three stories, add a panoramic wheel and a ski slope”), customers changing their minds (“yeah, that's exactly my bridge, now move it to the other river”), or specification problems (“I said ‘bridge’, but I meant ferry-boat”).
When talking about quality there is an important distinction to make – quality perceived by the user versus internal quality (perceived only by the programmers working on the software).
User-perceived quality is the opinion of the software's customers; as such, it is the most important quality of your software. As an engineer you should write the program so as to spend the minimum needed to reach the required user-perceived quality, or slightly exceed it. No more, no less. (Just keep in mind that these aspects depend strongly on your customers.) This claim holds as long as you consider the whole project without any maintenance. Since maintenance is usually the most expensive part of a project (even up to 75% of the whole project cost), you may want to lower its impact by employing maintenance-oriented development techniques; this means spending more during development than is strictly needed to match the customer-expected quality.
Internal quality is what we programmers usually refer to when we say “quality”: good code, well encapsulated, possibly OO, with patterns clearly identified, documented, no spaghetti, readable and so on.
Unfortunately for software-engineering supporters, programs do exist with fair, adequate or even good perceived quality that have bad internal quality, or none at all.
And this is the main argument against striving for internal quality.
I think this reasoning is flawed. It relies on the assumption that, since you can have fair-to-good perceived quality without any internal quality, internal quality is an extra cost that can be saved. But usually this is not the case. Even if there were no direct relation: would it be simpler to write a program where you keep just a handful of variables in your head to reach your goal, or one where you need to practice Zen to achieve a holistic comprehension of the whole, so as not to mess anything up while treading your way?
And anyway, it is unlikely that no relation whatsoever exists between the two kinds of quality.
On this point both the literature and common sense agree: from medium program size upwards, internal quality affects several aspects of user-perceived quality. E.g. if the program crashes too often, perceived quality can't be any good.
Moreover – within the same class of projects – the overall cost of the software across its entire life cycle is lower if internal quality is high. That is, desirable software properties really do help you understand and modify the software more confidently and with reliable results.
Quality-less supporters, at this point, fall back on picking on single aspects of internal quality. For example, they may target reuse and state that the practice is not worthwhile, since you pay a greater cost in the current project, with the risk that you won't reuse the code in another project, or that the context will be so different that rewriting the code from scratch will be cheaper.
In this case it is hard to provide good figures on the savings from designing and coding reusable components. This is unfortunate both for internal-quality supporters in their advocacy and for software architects deciding whether a component has to be reusable or not.
Unfortunate also because there is a large band of components between the ones that obviously should be made reusable and the ones that obviously shouldn't.
It also has to be considered that the techniques adopted to make a component reusable are, generally speaking, techniques that improve internal quality. Pushing this thought to the extreme, you can't rule reusability out of your tool chest, because all the other techniques that improve internal quality drive you in the reusability direction, to the point that reusability comes either for free or at little cost.
Despite what I wrote, I think a problem does exist, and it occurs when the designer starts overdesigning too many components and the programmer starts coding functions that won't be used in the current project, just because.
A little overdesign can be a good idea most of the time (after all, design takes a small percentage of the development time); nonetheless, you should always keep in mind a) the scope of the project and b) the complexity you are introducing into the code with respect to the complexity of the problem being solved.
Since you are overdesigning, you are not required to implement everything: just implement what is needed for the next iteration of the project.
At this point you shouldn't rush to move your reusable component into the shared bin; keep it in an “aging” area, granting it time to consolidate. Before being reused it should prove it can survive at least one project (or most of one). Then, once the component has been reused in at least two projects and properly documented, it can be moved into the shared folder.
What I can't understand, and what leaves me wondering, is that in many workplaces firmware is considered an accidental cost, something everyone would happily do without if only it were possible. The logical consequence would be to scientifically attempt to lower the cost of development and maintenance by finding the proper conditions. Instead, in many places there is blind shooting at firmware development, attempting to shrink every cost, with the effect that the cost actually increases. E.g. you can save on tools and training, only to discover that your cost per LoC is higher. I am afraid the reason is either mistrust of, or ignorance about, the whole body of research done on software development since the '60s.