Blog

Connecting reactive applications with fast data using reactive streams.

Talk by Luc Bourlier. As in the previous posts, these are my quick notes.
Who doesn’t know what a reactive application is? Responsive, elastic, resilient and message driven – this is what reactive apps are.
Big data means that there is too much data to be handled by traditional means on a single machine.
Fast data is big data that arrives at high volume and for which you want up-to-the-second information from a continuous process.
Spark Streaming is the technology Lightbend proposes to do the trick.
Spark is an evolution of the MapReduce model. A driver program (the Spark context) talks to the cluster manager to get worker nodes to do the job.
Spark can be used on streams by means of mini-batching. A mini-batch is the work executed on the data received in a unit of time.
Spark Streaming deals with all kinds of failures (hardware, software and network). It also handles recovery for continuous processing and deals with excess data volume.
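To make the mini-batch idea above concrete, here is a minimal word-count sketch using the classic Spark Streaming API; the socket source and the 5-second batch interval are just illustrative choices of mine, not something from the talk:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MiniBatchWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MiniBatchWordCount")
    // every mini-batch covers 5 seconds of received data
    val ssc = new StreamingContext(conf, Seconds(5))

    // illustrative source: a plain text socket on localhost:9999
    val lines  = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" "))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```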
A demo is presented on a Raspberry Pi cluster. (On a Raspberry Pi you don’t need to push the system to the limit, because you are already at the limit.)
The demo ran fine at first, but then it broke, which makes me wonder how stable this technology really is. The demo model seemed quite simple.

Back pressure is the mechanism implemented by Akka Streams to slow down the data producer when the consumer is not able to consume data fast enough.
Congestion in Spark used to be handled with a static limit on the input rate. In Spark 1.5 this has been replaced by a dynamic rate limit: a rate estimator based on a PID controller sets the limit.
There are some limitations to this method, coming from the assumptions used in the design – all records take about the same time to process, and the process is linear (there was a third assumption, but it got lost in my note taking).
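If I read the documentation correctly, the two approaches map to Spark configuration keys roughly like this (the rate value is just an example):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("BackpressureDemo")
  // the old, static cap: records per second accepted by each receiver
  .set("spark.streaming.receiver.maxRate", "10000")
  // Spark 1.5: dynamic, PID-based rate limiting
  .set("spark.streaming.backpressure.enabled", "true")
```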

Scala Days – keynote

It may not be a great surprise, but the opening keynote is held by Martin Odersky. I don’t feel much anticipation; more or less everyone expects he is going to repeat the opening speech of the last Scala conference in New York.
In this post I’ll just summarize the content; my considerations will come in a later one.
Scala Days is going to be attended by some 1000 people. The conference app is, of course, written in Scala and Swift (and it’s open source), courtesy of 47 Degrees.

Odersky enters to a son et lumière effect, halfway between a disco and an alien abduction… maybe both.
First he shows the steady growth of the language. Scala jobs get a little over the line. There is no comparison with Java, of course, but there is no drop in popularity either.
He will talk about the future: the near, mid-term and distant future.
What’s next? The Scala Center, Scala 2.12, the Scala libraries.
The Scala Center is a vendor-neutral initiative, supported by several partners, that promotes Scala and undertakes projects that benefit the whole Scala community.
Scala 2.12 is the next release of the language, about to be completed. It is optimized for Java 8: shorter code and faster execution.
This release will arrive mid-year. Older Java versions will be supported by Scala 2.11, which, in turn, will be supported for quite a while.
Not many new features – 33 in total. The main contributors are from the community.
Martin’s book “Programming in Scala” will be updated to a 3rd edition covering release 2.12 of the language.
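One visible effect of the Java 8 orientation mentioned a few lines above, as far as I understand it, is that 2.12 compiles lambdas with the same lightweight machinery Java 8 uses and accepts a Scala function literal wherever a Java single-abstract-method (SAM) type is expected. A small sketch of my own, not from the talk:

```scala
// Scala 2.12: a function literal can be used where a Java SAM type is expected,
// and it compiles to a Java 8 lambda instead of an anonymous class.
val task: Runnable = () => println("running on a plain Java thread")
new Thread(task).start()
```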

In the farther future, 2.13 will focus on the libraries: Scala collections made simpler, lazy and well integrated with Spark, while staying backward compatible.
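For a flavor of what “lazy” collections mean here, a sketch using LazyList, the fully lazy sequence the collections rework eventually delivered in 2.13 (my example, written with hindsight, not something shown on stage):

```scala
// LazyList computes its elements only on demand, so an "infinite" definition is fine.
val evens: LazyList[Int] = LazyList.from(0).filter(_ % 2 == 0)
println(evens.take(5).toList)   // List(0, 2, 4, 6, 8)
```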

The Scala stdlib will be split into a core and a platform. The stdlib was rather prototypal, written with the idea that it wouldn’t last long.

Scala.js and Scala Native.

DOT is the foundation of Scala: a mini language, small enough that programs written in it can be machine-proved. Much of Scala can be encoded in this mini language. It has been eight years in the making. Language work can now be done with much more confidence.


Type soundness: properties of the code can be demonstrated (the example given is that an expression of type T produces a value of type T).
Dotty is a compiler, close to the Scala compiler, whose core is based on DOT. Generics are processed through DOT. It is not ready for industry, but if you want to try something cool…
It is also faster.
Goal: the best language Martin knows how to make.

Dropped: procedure syntax (a rewrite tool will take care of translating existing code) and DelayedInit. (A small before/after sketch follows this list.)
Macros – the current implementation was just a long-running experiment. There will be an alternative.
Early initializers – used for stuff that needs to be initialized before the base trait.
Existential types (forSome).
General type projection (T#x).
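For the two removals that touch everyday code most directly, here is a small before/after sketch (my own example, not from the talk):

```scala
// Procedure syntax (dropped): the result type and '=' must be explicit.
// def run() { println("running") }             // old style, no longer accepted
def run(): Unit = { println("running") }        // the form the rewrite tool produces

// Existential types with forSome (dropped): wildcards cover the common cases.
// def firstOf(xs: List[T] forSome { type T }): Any = xs.head   // old style
def firstOf(xs: List[_]): Any = xs.head                         // wildcard instead
```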

Added: intersection and union types. A value of type T&U has the members of both types, while a value of type T|U is either a T or a U, so only what the two types have in common can be used on it directly.
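A small sketch in Dotty/Scala 3 syntax (my own example, not from the talk):

```scala
trait Resettable { def reset(): Unit }
trait Growable  { def grow(): Unit  }

// Intersection: the value is both things at once, so both members are available.
def restart(x: Resettable & Growable): Unit = { x.reset(); x.grow() }

// Union: the value is one or the other; pattern match to recover the precise type.
def describe(x: Int | String): String = x match {
  case i: Int    => s"number $i"
  case s: String => s"text $s"
}
```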

Function arity adaptation, e.g. pairs.map((a, b) => a + b).
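In other words, a two-parameter lambda can be applied directly to a collection of pairs, where Scala 2 required an explicit pattern-matching closure. A sketch of what this should look like:

```scala
val pairs = List((1, 2), (3, 4))

// Scala 2 needs an explicit pattern-matching closure:
//   pairs.map { case (a, b) => a + b }
// With arity adaptation the plain two-parameter lambda is accepted:
val sums = pairs.map((a, b) => a + b)   // List(3, 7)
```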

Static methods for objects.

Non-blocking lazy vals: locking time is much shorter now, avoiding deadlocks.
@volatile is used for lazy vals shared between threads.

Multiversal equality: type-safe equality and inequality operators. Named type parameters: partial type parametrization.
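For reference, this is roughly how the equality idea looks in the form that eventually shipped in Scala 3 (under the name CanEqual; the talk presented it as an Eq type class), so treat the snippet as an illustration rather than the syntax shown on stage:

```scala
import scala.language.strictEquality   // opt in to multiversal equality checking

case class UserId(value: Long)  derives CanEqual
case class OrderId(value: Long) derives CanEqual

val a = UserId(1)
// a == OrderId(1)        // rejected at compile time: no CanEqual[UserId, OrderId]
val ok = a == UserId(1)   // fine: CanEqual[UserId, UserId] is derived
```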

Motivations: better foundations, more safety, …

SBT integration. A REPL with syntax highlighting. IntelliJ support. Doc generation. A linker.

Future.
Scala Meta will replace the current macros and metaprogramming: inline and meta, executed by the compiler.
Implicit function types, used to compose… to me, just more mess.
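The “inline” half of the metaprogramming proposal is the part that is easiest to picture; a sketch in the form it later took in Scala 3 (my example, not Martin’s):

```scala
// The body of an inline def is expanded by the compiler at each call site;
// an inline parameter is substituted as the expression the caller wrote.
inline def logged[T](inline expr: T): T = {
  val result = expr
  println(s"evaluated to $result")
  result
}

val answer = logged(21 + 21)   // the addition and the println are inlined here
```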

Effect checking: a -> b is a pure function, a => b an impure one, checked by the compiler.

Nullable types: T? = T | Null. Types coming from Java will carry a ? because any Java reference may be null. It is an alternative to wrapping values in a monad such as Option.
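A sketch of how this looks in the explicit-nulls mode Scala 3 later gained (enabled with the -Yexplicit-nulls compiler option); the example is mine:

```scala
// With explicit nulls, a plain String cannot hold null;
// nullability is spelled out as a union with Null.
def lookup(key: String): String | Null =
  if (key == "known") "value" else null

lookup("known") match {
  case s: String => println(s.length)   // safe: s is a non-null String here
  case null      => println("missing")
}
```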

Generic programming.

Guard rails – how to prevent programmers from misusing or abusing the language. Strategic Scala style: the principle of least power.

Libraries can inject bad behavior (e.g. implicit conversions). Implicit conversions will raise a style error if public.

Syntax flexibility. Even Martin regrets the space (infix) syntax. Adding an @infix annotation states that the author intends a method to be used infix; using other methods infix will give a style error.

Symbolic operators are regretted as well; @infix will have the option to give names to such operators.
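A sketch of the idea, written with the infix modifier Scala 3 eventually adopted (the talk described it as an @infix annotation), so the exact spelling here is not the one shown on stage:

```scala
case class Vec(x: Double, y: Double) {
  infix def cross(that: Vec): Double = x * that.y - y * that.x
  def dot(that: Vec): Double         = x * that.x + y * that.y
}

val a = Vec(1, 0)
val b = Vec(0, 1)
a cross b     // fine: the author declared cross as infix
// a dot b    // flagged by the compiler (under -source:future); write a.dot(b)
```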

Scala center

Notes from “Scala Center” by Heather Miller. The Scala Center is a non-profit organization established at EPFL; it is not Lightbend. Same growth chart as yesterday, sources not cited (indeed?). The Stack Overflow survey reports Scala among the top 5 most loved languages.
The organization will take on the burden of evolving and keeping organized the libraries and the language environment, and of educating and managing the community, rather than the language itself.
The Coursera Scala class is very popular (400k enrollments) with a high completion rate. There will be 2 new courses on the new Coursera platform. Unverified courses are free; verified and certified courses are paid.
Functional programming in Scala – 6 weeks.
Functional program design in Scala – 4 weeks.
Parallel programming – 4 weeks.
Big data analysis in Scala and Spark – 3 weeks.

My (somewhat cynical) impression: a lot of work and a desperate need for workforce, which they are looking to get for free by grooming the community.
EPFL funds 2 people for the MOOCs; the rest comes from industry donations and MOOC revenues.
Lightbend? It will continue to maintain the stable Scala.
A package index is not yet available for Scala – meaning people should be able to publish their projects and get them used without having to act as salespeople.
Scala library index. Index.scala-lang.org
It is an indexing engine.

Just wondering – is this a language for academia or for industry? If things keep changing, the investments made by industry will be lost: the language is going to change, and the base libraries are going to change as well… What guarantees do I have that my code will still compile 5 years from now?
Changing things is good for academia, since it allows research and makes it easier to teach new concepts. It doesn’t harm a community where the workforce is free and there is no lack of people willing to redo the same things with new tech.

Scala Days – Berlin

Scala… where to start? Well, I’m going to Scala Days 2016 in Berlin. This is the first conference of this kind I will attend. Maybe I would have preferred a conference about C++, such as CppCon, but this is what I got. And, on second thought, Scala has nothing to envy C++, at least in terms of unfriendliness and unreadability.
What I like about Scala is that I am not forced to use Java when programming for the JVM, but I find that, by looking at the strengths and weaknesses of the existing languages, EPFL could have designed something a bit more… industrial. Instead it fell into several pitfalls, the first of which is the misconception that the speed of developing software is capped by the speed of typing the code.
Anyway, here I am, waiting for the conference to begin. The keynote speech will be delivered by Martin Odersky himself. Martin is, of course, the inventor of the Scala language and, if I understood correctly, a cofounder of Lightbend (formerly Typesafe), the for-profit company that backs Scala and its ecosystem.

First Sprint

The first sprint is over and it has been tough. As expected, we encountered some difficulties, mainly people in key roles without enough time to invest in Scrum.

Scrum, IMO, tends to be well balanced – the team has a lot of power, having the last word on what can’t be done, what can be done and how long it will take (well, it’s not really the time, because you estimate effort and derive duration, but basically you can invert that function and play with the estimation rules to get what you want).

This great power is balanced by another great power – the Product Owner (PO), who defines what has to be done and in what order.

Talking, negotiating and bargaining over the tasks is the way the product progresses from stage to stage towards its shippable form.

In this scenario it is up to the PO to write User Stories (brief use cases that can be implemented in a sprint) and define their priority. In the first sprint planning meeting, the team estimates one story at a time to set its Story Point value.

This is a straightforward process: just draw a Fibonacci series along a wall and then set a reference value. We decided that a single person working for a week is worth 8 story points. This is arbitrary; you can pick whatever value you consider sound. With two-week sprints and more or less 5 people, we estimated an initial velocity somewhere between 80 points per sprint (5 people × 8 points × 2 weeks) and, halving that value to be safe, 40 points. The Scrum Master (SM) reads each User Story aloud and the team decides where to put it on the wall. Since the relationships between story points are very evident, it is quite easy and fast to find where to place new stories.

That meeting, which went well beyond the time-boxed 2 hours, was attended by the whole team, SM included, but not by the PO, who was ill (for the very first day in years). We could have delayed the meeting, but the team would have been idling for some days… not exactly doing nothing, you know, there’s always something to refactor, some shiny new technology to try, but for sure we wouldn’t have gone the way our PO would have wanted us to go.

So we started, and in a few moments it became clear that we were too many. In a project that ranges from firmware to the cloud, when a specific story was discussed many people were uninterested, got bored and started doing their own stuff. In the end, about three of us were really involved in the estimation game.

The lack of the PO at that specific moment was critical – we missed the chance to set proper priorities on the tasks, since we had to rely on an email and couldn’t ask our questions and get proper feedback. In the end, we discovered that the highest-priority User Story had been left out of the sprint.

The other error we made was to skip decomposing User Stories into Tasks. This may seem redundant at first, but it is really needed, because a User Story may require different skills, and thus different people, to be implemented.

The sprint started and we managed to hold the daily scrum. This is a short meeting, scheduled early each morning, where each member of the team says what she/he did the day before, what she/he is going to do that day, and whether she/he sees any obstacle to reaching the goal. The meeting is partly a daily assessment and partly a statement of intent where, at least ideally, everyone sets the goal for the day. Everyone may attend, but only the team may speak (in fact, has to speak).

The Daily Scrums went fine; we managed to host, via a Skype video call, the one or two programmers who were not co-located most of the time. The SM updated the planned/doing/done walls. The PO attended most of the time.

On the other hand, the team interacted only sparingly with the PO during the sprint, and only in part because his presence was often needed elsewhere. Also, I have to say that our PO is usually reachable by phone even when he is not in the office.

This paired with the team underestimating the integration testing and the definition of done. Many times user stories were considered done even though they had been tested only with mockups, not with the real physical systems. The deploy process failed silently just before the sprint review, leaving many implemented features undemonstrable. Also, our definition of done requires that the PO approve the user story implementation. This wasn’t done for any story before the sprint end; we relied on the sprint review to get the approval.

The starting plan included 70 Story Points, and we collected nearly another 70 points of new User Stories during the sprint. These new stories appeared either because we found work that could be done together with the current activities, or because they were needed to make sense of the current activities.

Without considering the Definition of Done, we managed to crunch about 70 points, which were nearly halved by applying the Definition of Done (at the worst possible moment, i.e. during the Sprint Review).

Thinking about how to improve the process: working remotely is probably not that efficient – I noticed that a great deal of communication (mainly via Slack, but also via email and Skype) happened on the days those two programmers were off-site.

The sprint end was adjusted to allow an even split of sprints before delivery, so we had somewhat less time than we had planned for.

The end/start sprint meetings (i.e. Sprint Review, Sprint Planning and Sprint Retrospective) didn’t fit into the schedule very well, mostly because the PO was too busy with other meetings and activities.

Am I satisfied with the achievements? Quite. I find that even just starting to implement the process exposes the workload and facilitates communication. The team pace is clear to everybody.

Is the company satisfied with the achievements? This is harder to say, and it should be the PO to say it. I fear that other factors affecting the team’s speed in implementing the PO’s requests may be blamed on Scrum, and the whole process dumped. Any process has some overhead, and the illusion that a process is not really needed to get things done is always lurking close by.

Today we planned for another sprint, but this is for another post.

Ready? Set! Scrum

So, it has started. After taking a Construx online Scrum boot camp, I started the Scrum process at my workplace. This needs a bit of background, though. The company I work for is quite an unusual environment in which to develop software. It started nearly 70 years ago and has always manufactured plastic and metal goods. Maybe one of the reasons it has lasted so long is that it avoided electronics first, and then software, for a very long period.

Nowadays software-less goods are becoming increasingly difficult to sell and, besides, adding software to laboratory equipment may turn a great product into a fantastic one… and since “fantastic” has its root in fantasy, you can see the risk involved.

Developing software in a traditional mechanical manufacturing company has its own share of challenges and woes. If I manage to keep up with my resolution, I’ll report on my attempt to introduce a process that is as far as you can get from the classical waterfall approach.

 

Programmer’s Religion Survey

I’m pretty sure that, in the common perception, programmers are rational folks, their minds solidly rooted in facts, comforted by engineering, based on logic, algebra and math. Brains like knives that part truth from lies, dispel doubts and myths.

Well, maybe. What is true is that those who write programs for passion before writing them for a living pride themselves on being artists (or at least craftsmen). Artists have inspirations, base their work on inner emotions, and use rationality just as a tool when they need it – and irrationality as the tool for all the other times.

We programmers can write code capable of flying a spacecraft past Pluto with astounding precision while, at the same time, we may decide to quit a job if forced to use some tool or process we don’t like.

I have tried to be as rational as possible in choosing my tools, programming habits and processes, always trying to justify them in terms of engineering practice, sometimes changing my gut choice. Recently I confronted, or maybe better, have been challenged by, colleagues and friends on these matters, and long and heated discussions arose.

So I decided to prepare a short poll via SurveyMonkey (made even shorter by the limits of free surveys) on the issues that most seem to be a matter of religion and faith among my friends. Here I present the results.

As of today I have received 24 poll submissions. The poll is still open, so feel free to take it; if the results change significantly in the future, I’ll update my analysis. I won’t claim any statistical validity – it is just a poll among friends, likely a very biased set of programmers.

Preferred Text Editor

This is one of the most ancient religious wars among programmers, dating well before the advent of the PC.

Poll_ProgRel_01_PreferredTextEditor
Preferred Text Editor

I find it interesting that, setting aside the Windows editor Notepad, vim/gvim comes second, winning even over nano, which is the other Linux/Unix standard. Emacs seems rather dead, which is somewhat surprising when compared with vim. Among the other votes I count one and a half for Sublime Text, half a vote for Atom (which I don’t know), a third of a vote for Notepad++ and two mis-votes (I had asked for no IDEs).

My editor of choice is usually vim/gvim, but when I’m on Windows I more and more often go for Notepad++. The vim choice was not a straightforward one, because vim is hard to learn. At the beginning, when I used vt100 terminals at the university, I hated it. It looked like a cumbersome relic from a long-gone era. At home I could interactively use CygnusEd on the Amiga. But at school we were prevented from using Emacs because the poor HP-UX box we used had just 16 MB of RAM and Emacs made it crash on launch.

Then I came to terms with vi, but I never suggest that anyone learn it, even after reaching a fair proficiency myself. There are two main reasons that make vim my preferred editor. First, it is available on every Unix machine. Maybe you find nano or pico, or even emacs, or none of them – it depends on the distribution and on the system – but vi, if not vim, is there for sure. Also consider that Linux is now used on embedded systems that are still resource constrained, so chances are you can’t install the editor you want. The second reason is that when you have a slow, intermittent (or blind) connection, nothing beats vi. You count how many columns you want to advance, how many characters you want to delete, where to insert, and with a single command you instruct the editor to do what you want. Try to move the cursor 10 columns forward on an intermittent connection using a conventional editor by pressing the right arrow key repeatedly. Are you sure you pressed it ten times? And that the editor on the other side of the Moon received 10 keypresses? How confident are you? Well, with vim you are sure of what you have done. On the other hand, if you happen to have the wrong keyboard layout…

Indent Technique

Indentation is needed for readability in many programming languages, and in some it is even needed for proper compilation. There are two basic techniques, whose origin is lost in the dawn of electrical typewriters: spaces and tabs.

Poll_ProgRel_01_IndentTechnique

The advantage of tabs is that you can use editor/IDE preferences to set the preferred width, so the indentation can always please your taste. But tabs mix badly with spaces, so a file containing both may become messy when viewed with a tab size different from the one used to write it. Also, sometimes you need an indentation level not aligned with the standard ones (e.g. when you need to split a long line). In that case you are forced to use spaces, causing the file to get messed up again when viewed with a different tab size.

Spaces may be a bit duller, but they are reliable – you always see what you have, and the file always looks the same regardless of user preferences. That’s why I found the result of this poll interesting.

As properly pointed out by a friend of mine – you should use tabs for indentation and spaces for alignment. That makes a lot of sense, but it is pretty hard to enforce without digging quite deeply into the syntax of the language being edited.

Indent Column

Still about indentation, this question asked for the preferred indent size. A wide indent nudges the programmer towards avoiding deeply nested code, since the code quickly runs off the right margin. That’s why I prefer a fairly wide indent of 4 columns. I found that many colleagues and friends agree with me:

Poll_ProgRel_03_IndentColumn

Interestingly enough, the sum of all the votes for an indentation smaller than 4 is not greater than the votes for indentation 4. Surprisingly, two people love single-column indentation! That option was more of a joke than something I would take seriously.

Opening Brace Position (brace = open-block symbol)

Braces are used to define blocks of instructions in many languages whose ancestry can be traced, more or less easily, back to BCPL (even if I would have some difficulty seeing such lineage in Scala). There are several styles for the placement of the opening brace. The C language by Kernighan & Ritchie puts the opening brace at the end of the statement that defines what kind of block it is. The Allman style, initially used to write most of the BSD utilities, uses braces the same way Pascal-like languages use begin/end, i.e. on a line of its own, aligned with the statement. GNU is a third popular indentation style and requires the opening (and closing) brace to be on a line of its own, half indented between the statement and the inner block code.
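To make the first two styles concrete, here they are in Scala (the language this blog keeps coming back to, though the styles originate in the C family); the GNU half-indented style is rarely seen outside C:

```scala
// K&R: the opening brace ends the line that introduces the block.
def greetKnR(name: String): Unit = {
  println(s"hello, $name")
}

// Allman: the opening brace sits on a line of its own, aligned with the statement.
def greetAllman(name: String): Unit =
{
  println(s"hello, $name")
}
```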

Poll_ProgRel_04_OpenBrace
The second option is “alone on the next line, aligned with the statement”, while the third option is “alone on the next line, indented with respect to the statement”.

Once again my preferred style won. I like the symmetry of the matching braces: it helps in reading the code and nudges the programmer to keep the code short, because some lines have to be spent on braces alone. The denser the code, the harder it is to read.

I also find it interesting that I changed my style – I started with the K&R style (pretty obvious, since I started coding C before the ANSI standard was out). Then, when switching to C++ back in the early nineties, I read the Ellemtel Rules and Recommendations. Those rules made a lot of sense and provided a rationale for every rule, so I was convinced to switch brace style. Lesson learned: if it makes sense, you can change your habit (or religion).

The GNU style scored quite low; maybe that half indentation is not that appealing.

Language of Choice

In the beginning it was just machine code, and no one could disagree. Then came Fortran, Lisp and Cobol (but the real story is a bit more complex) and suddenly there were four religions (not three, because there were those claiming that machine code was still the best). For my poll I picked 5 popular languages plus an esoteric one (here I made a mistake – the right name is Monicelli, not Antani, sorry).

Poll_ProgRel_05_Language

Somewhat surprisingly, the preferred language is C++, which matches my preference (I swear I didn’t rig the poll), with C# second, on which I agree, and C third. I am afraid the poll group was very biased in this respect, especially when compared with official indexes such as TIOBE.

Thank god no one chose Monicelli, but no one had a preference different from the ones listed either. Given the zillions of programming languages out there, I had expected at least one vote for “other”.

Scripting Language of Choice

Scripting languages are the glue of software: they allow, with moderate effort, to combine components and tools to produce advanced and sometimes surprising results. The difference between a general-purpose language and a scripting one may be thin in some cases, and I think there is no clear answer. Python, Visual Basic, Ruby and Lua may have their roots in scripting, but they aim to be, or are used as, languages for creating general-purpose software.

Poll_ProgRel_06_Scripting

Bash is my preferred scripting language. The latest version has a number of features that allow complex programs to be written. In its evolution it has lost a bit of its cryptic side, letting the programmer use more sensible constructs, but it is still a language with some obscure corners. You can hardly beat bash in the Unix/Linux environment when you have to automate the command line, and since in Unix/Linux you can do everything from the command line, bash lets you automate the entire system.

The shortcomings of bash – notably survival-level math handling, the lack of user-defined structured types and no native support for binary files – do not prevent the guru programmer from using bash for everything. A more convenient way is to use Python, which is based on a more modern design and can rely on many disparate libraries.

So I expected Python to collect more votes than bash (even if bash is my preferred one).

“Other” collected 1 vote for JavaScript (which indeed is a scripting language), 1 vote for Lua (another nice scripting language, very simple yet flexible) and 1 vote for Perl (another Unix/Linux favorite).

Ruby got no sympathy, although at least one noteworthy web application is written in this language.

Web Server Language

Many (most?) applications today are written as web applications. The languages used to code these programs have to be chosen carefully; long gone are the times when a CGI interface and some shell scripts could do the trick of making web pages dynamic.

Poll_ProgRel_07_ServerLanguage

If traditional applications are to be developed in C++ according to the majority of my friends, not so for web applications. Here Java is king, getting twice the votes of the runner-up, PHP. PHP, having been developed specifically for this task, is a natural second. Surprisingly, Scala and C++ are considered to be at the same level for this application.

In the “other” section I got one vote for NodeJS, one vote for C# and one vote for no preference (so, my friend who voted for no preference, next time you can program a web application in Monicelli… :-D).

IDE

The IDE is a relatively recent concept in programming; I would date it to around the early eighties, at least in its modern form. Before that you had several different and sparse tools to do your programming job – an editor, then a compiler, a linker and possibly a debugger (interestingly enough, on home computers you had just one environment, which could be considered a rudimentary form of IDE). IDEs started to appear on systems with no multiprocessing capabilities, such as CP/M and MS-DOS. The first I have seen and used, which incidentally was also the first IDE, was Turbo Pascal on the CP/M operating system.

Nowadays complex projects are preferably managed with an IDE, even when they come with a build system recipe (be it make, ant, maven or sbt).

Poll_ProgRel_08_IDE

Microsoft Visual Studio is the oldest among the choices and the one that got the most votes. I fully agree: Visual Studio is a powerful and comprehensive solution that has long since lost the Microsoft lock-in nature it had at the beginning. When I can’t use Windows, my preferred IDE is NetBeans. On the other hand, I can’t stand Eclipse. It is a bloated application with no rational design in its interface. Eclipse subsystems seem bolted on as an afterthought, and they don’t share the same way of using variables or doing things. Too bad that Eclipse is the platform chosen by many vendors to implement their specific development environments. Consider NXP (formerly Freescale), which provides KDS to develop for their Kinetis processors: you could set up a different IDE, but you would have serious trouble finding the configuration parameters, especially for the debugger.

IntelliJ is a fair alternative to NetBeans; I have used it for a while with Scala and I think its shortcomings lie more in attempting to understand a cumbersome language than in the IDE itself.

When writing the poll I forgot about Xcode, the Apple proprietary IDE. Apple doesn’t trigger my enthusiasm and I have never used Xcode, but I have heard it is jolly good, although it is the only IDE in the list that works only on proprietary hardware.

I expected some sympathy for KDevelop and Emacs (if not vim), but they got none.

Build Systems

What use is a build system today, when we have such powerful IDEs? Well, first you may want to build the application in batch mode (though some IDEs support batch mode), or you don’t want to force a specific IDE on the users of your code, or your IDE saves the project in a location-dependent fashion (Eclipse?).

Poll_ProgRel_09_BuildSystem

Unsurprisingly, more than half of my esteemed friends and colleagues noted that, using their preferred IDE, they don’t need any stinking build tool. So true, but I still prefer to have something simpler when a full IDE is not needed.

Make is both the first build tool and my preferred option. Before make, programs were built using shell scripts (an option that still has a supporter, according to my poll). I had a look at ant when it appeared, to manage the build of Java applications. My impression was that ant was just a different way to write makefiles, so there was no gain in learning a different system. CMake is somewhat similar.

Sbt is the tool for building (and, I would say, managing) Scala projects. It is an overweight tool that starts downloading the internet onto your PC the first time you launch it. Then it relies on a repository to store and retrieve different versions of the libraries, and eventually it manages to build Scala applications, hiding the warning messages coming from the underlying tools. As you may have guessed, I can’t stand it.

Others made me notice that I left out maven and gradle (one vote each).

Version Control System

Tenth and last question: what is your preferred version control system? A version control system takes care of the history of your source code, so it is quite an important part of development.

At the workplace we had quite a flaming discussion over which version control system to use; split in half (me on one side, my colleagues on the other), we were divided between subversion and git supporters.

Poll_ProgRel_10_VersionControlSystem

Thankfully no one in her/his right mind thinks that there is no need for such a thing, nor that this management can be done without specialized tools (i.e. using basic tools like tar, zip and the like).

Also, CVS has gone the way of the Dodo, as it should, but without taking Visual SourceSafe with it, as it ought to. Visual SourceSafe is the Microsoft attempt at a VCS, and the version I used was quite crappy.

Subversion and Git take most of the votes and are quite close to each other with, to my grief, git leading.

I find that subversion promotes better cooperation in the team and avoids the need for a Software Configuration Manager. You have a single, central, authoritative repository, with a fixed history (as fixed as data can be). The team must proceed by small commits and frequent updates. If care is taken not to commit code that breaks the build, this is a very convenient way to proceed.

Git was written out of a very different need – suppose you are the maintainer of a project. Of course you want to have control over what the contributors are contributing. Maybe you want bugs to be fixed, but not some new features, or you want to be very careful about the code written by a specific programmer. You need an easy way to add and remove single commits, merge, and go back if anything is wrong.

But when you apply git to a development team, you run the very high risk of having every developer with a different version of the codebase, and a central codebase that is out of date or inconsistent, because there is no such thing as a maintainer.

Nonetheless, my team promised me a tenfold increase in productivity if we replaced subversion with git, so I opted for the wrong tool. We are about to switch, and this could be material for another post.

Anyway, there are some other VCSs – besides mercurial, perforce also got a single vote. I also got a vote for “none at the moment”.

I hope you enjoyed taking the poll and reading my comments at least as much as I enjoyed writing the poll and reading your answers. Your comments are welcome; after all, this is about religion, and I would be disappointed if no flaming comment appeared :-).

Godspeed Sir Terry

“Why do you go away? So that you can come back. So that you can see the place you came from with new eyes and extra colors. And the people there see you differently, too. Coming back to where you started is not the same as never leaving.”― Terry Pratchett, A Hat Full of Sky (Discworld, #32)

“I meant,” said Ipslore bitterly, “what is there in this world that truly makes living worthwhile?”
Death thought about it.
CATS, he said eventually. CATS ARE NICE.”
― Terry Pratchett, Sourcery

“DON’T THINK OF IT AS DYING, said Death. JUST THINK OF IT AS LEAVING EARLY TO AVOID THE RUSH.”
― Terry Pratchett, Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch

“The whole of life is just like watching a film. Only it’s as though you always get in ten minutes after the big picture has started, and no-one will tell you the plot, so you have to work it out all yourself from the clues.”
― Terry Pratchett, Moving Pictures

…and thank you for all your books – all masterpieces.