Tuesday, February 17, 2015

You Don't Like Google's Go Because You Are Small

When you look at Google's presentations about Go, they are not shy about it. Go is about very smart people at Google solving very BIG problems. They know best. If you don't like Go, then you are small and are solving small problems. If you were big (or smart), you would surely like it.

For example, you might naively think that printing the greater of two numbers should be as simple as:
  std::cout << std::max(b(), c());
That is because you think small. What you really should want is:
  t1 := b()
  if t2 := c(); t1 < t2 {
    t1 = t2
  }
  fmt.Print( t1 )
Isn't it much better? We didn't have to type all those extra semicolons that were killing productivity before. If you don't like it, you are small.

If you wanted to extract an attribute of an optional parameter, you may be used to typing something like:
a = p ? p->a : 0;
or even:
a = p && p->a
You just make me sad because obviously what you really want is:
a = 0
if p != nil {
  a = p.a
}
It is so much more readable. Trust me, you don't need the ternary operator "?:". It is too complex for you, with your small mind. The creators of Go know better. You also don't need macros or templates, or you might accidentally try to abstract something like "max()" and get confused.

Collections? Don't make me laugh. The only ones you will ever need (or can comprehend in your small mind) are arrays and hash tables. But don't worry about the actual hash function used - you are not allowed to know about it or modify it. Why would you ever care about it anyway, when everybody knows that practically any hash function always generates perfect results? Or at least, any hash function you could come up with.

If you really, really are deluded enough to believe you need a different collection, be prepared to remember how you coded in C in 1987, except you don't have macros, so all typecasts are explicit. Who cares about typecasts anyway, when you are solving really BIIIG problems.

Oh, forgot to mention, "range" doesn't work for your silly little custom data structures, so you better give them up.

Function overloading based on type, you ask? Stop with your complaining! You don't need it. People much smarter than you have decided that you will never ever need such a complicated feature in a programming language. Functions with the same name performing different operations based on the types of the arguments?? This is insanity. Way too confusing for you anyway. So this, naturally, does not compile:

func add ( a int, b int ) int {
    return a + b
}

func add ( a string, b string ) string { // ERROR!! (add redeclared in this block)
    return a + "+" + b
}
(Yes, it is a meaningless example, but that is just to show that function overloading is meaningless anyway!)

Oops, forgot to mention. You might end up needing something like that in a very obscure corner case which is referred to by computer scientists as "calling methods of objects". So, in fact, you do have function overloading, but the syntax is much simpler, more intuitive and self-consistent than that nonsense above. Here it is:

type Int int

func (a Int) add ( b Int ) Int {
    return a + b
}

type String string

func (a String) add ( b String ) String {
    return a + "+" + b
}

func test () {
    var a Int = 0
    b := a.add( 10 )
    var c String = "aa"
    d := c.add( "cc" )
    fmt.Println( b, d ) // prints: 10 aa+cc
}
It does work, try it. And it is so much better than what you may be used to in primitive languages like C++, Java, C#, etc. But you with your small mind cannot be expected to appreciate the beauty.

Interfaces you ask? I am glad you asked. Interfaces are indeed extremely important, so Go has lots of stuff about interfaces. But you don't want to declare them. No, no, no, that would be too hard. God forbid that you wanted to implement "interface Writer" by having to actually type "implements Writer". That is crazy talk. It is much better to just write scattered methods here and there and if they don't happen to match the interface signature because you missed a return type, you get the error in a completely different source file. That is big thinking.

Why should the compiler go to all the trouble of validating an interface implementation when you, the programmer, can do it manually? Compiler time is expensive if you are solving big problems.

Virtual functions? Don't be ridiculous. Who needs those... If you are that stuck on them, you small-minded freak, you can emulate them with interfaces, because you doing the typing is much better than the compiler doing it. Besides, virtual functions are overrated.

And finally, variable case. The big brains decided that typing "export" is too much when solving big problems, so just capitalize the identifier. That alone will probably cause at least a 30% increase in productivity. Of course, don't forget to rename things if you decide they are not public after all. It is much easier than deleting a keyword; besides, you cannot be trusted to follow a coding style, so it is better to enforce it at the language level. Same with braces and semicolons.

Good luck in the wonderful world of Go!

-- Update

My blog post got like 100 views, which is amazing, so I decided I should add a clarification. I don't "hate" Go; I think there is a lot there to like (static typing, precise garbage collection, true closures, fast interfaces, a sane module system, etc.), and that is precisely why its big failings bother me even more.

I view Go as a huge wasted opportunity. How often does a new semi-successful language backed by a successful company appear? Google squandered it through something which looks very much like arrogance.

As it is now, unfortunately Go is little more than an incredible runtime library: very efficient green threads and a precise garbage collector. Someone might port Java to that runtime and reap all of the benefits.

Wednesday, February 16, 2011

The fallacy that WP7 is immune from the race to the bottom

People left and right are claiming that if Nokia had chosen Android, they would be one vendor amongst many competing on thinner and thinner margins, while WP7 somehow insulates them from that. As far as I can tell, that is now commonly accepted as truth. It is amusing how a fallacy turns into "common sense" when repeated enough times.

In reality, of course, people don't buy operating systems. They buy phones, and they would not buy a more expensive phone if a cheaper one offered the same capabilities given comparable manufacture quality and design. This is such a simple and fundamental truth that it is almost unbelievable that it is ignored.

Apple is always used as a counter-example, but in reality an unlocked iPhone costs about the same as any other high-end smartphone with comparable specifications. Granted, people will pay more for good design or even just a fancy logo, but the name of the OS running on the phone hardly even enters the equation. In particular, the notion that consumers will pay a premium specifically for WP7 is frankly laughable.

Of course there are other factors: the brand name of the manufacturer, the quality of the app ecosystem and so on. Those are valid considerations, except that most of those currently point towards an advantage for Android, not the other way around.

So, specifically the fallacy is this: while it is commonly accepted that Android is "comparable", if not "better" than WP7, at the same time it is also claimed that WP7 will bring in higher premiums.

Nokia can manufacture some pretty good phone hardware, so they deserve high margins, but the sad reality is they are in the same race to the bottom as everybody else, including Apple, and one day the license cost of WP7 (compared to free Android and iOS) might end up being a huge problem.

That is my objective evaluation of the situation, but it doesn't mean I like it. We will end up getting crappier and crappier products as a result, just as today it is almost impossible to buy a high-quality PC regardless of the price.

Monday, November 30, 2009

Git in non-patch mode

The longer I use Git, the more things I find to love. Of course when I say "love", I don't mean the fanboy kind of love, but the feeling of satisfaction and happiness which good tools give the proud professional :-)

I just recently realized how easy it is to apply an arbitrary commit on top of another one, without treating it as a patch. Since all high-level Git commands work with patches and diffs, one sometimes forgets that internally Git doesn't use patches, but simply stores the state of the tree as is at any point.

So, to put commit-b on top of commit-a, I simply do:

  git-checkout commit-a
  git-read-tree -u --reset commit-b

Again, this doesn't apply commit-b as a patch on top of commit-a; it copies the exact state of the tree in commit-b, which can then be committed with commit-a as the parent.

Why is this necessary? Sometimes there can be a very messy path from commit-a to commit-b, while all we want to record in the official history is just the two commits.

After reading the Git user manual, the first idea would probably be to do this using "git-rebase -i" or even a sequence of "git-cherry-pick -n". However, that approach takes a lot (!) of effort and is risky because there can be conflicts which need to be resolved.

Command line issue tracking

Issue tracking is needed even for small or personal projects, but frequently the effort of setting up a complex issue tracking system for a small project is too much. A typical bug tracking system might require a database server and a web server. Administering those is too much of a PITA.

Another severe problem lies with the portability of the bug database. For my hobby or consulting projects I like to keep the bugs close to the source - I might happen to work on my desktop, on a laptop, etc. They should be easy to move around and archive. By comparison, moving or backing up a Bugzilla installation involves unspeakable complexities that my mind simply refuses to contemplate. (I do realize that it is not technically complex to move a Bugzilla installation - in fact it is relatively easy as these things go - but it is not something that one would undertake casually; it takes preparation, time and care.)

Yes, one could maintain a running, publicly accessible web server (even if only via SSH/OpenVPN) and always use it, but that is a whole lot of work, not to mention the security implications and the real added expense. Plus, one is not always online.

So, until recently my bug tracking for these kinds of projects has been restricted to keeping a couple of text files called BUGS.TXT and TODO.TXT. Not very high-tech, but hey, they do the job.

That is until I found Ditz. It is a command line bug tracking tool, and it keeps all its data, including all configuration files under one neat sub-directory, plus everything is in plain text format! I absolutely love it and highly recommend it.

With Ditz the entire bug database can even be part of the source tree, which automatically makes it distributed if one uses Git, for example. (I am not sure that is such a great idea, by the way - polluting the source history with bug reports - but I am still thinking about it.) Moving it between computers is just a copy, and so is backup. It literally requires no thinking or effort.

While I am on this subject, todotxt also deserves an honorable mention, although it is not suitable for issue tracking.

Bottom line: Git and Ditz provide a full-featured and yet very simple infrastructure for small projects. Now all I have to do is finally start using a good command line email client.

Friday, November 13, 2009

Android ... not so great after the honeymoon

Android is really starting to annoy me. Really, although I want to like it. I have about 30 applications, and every week I get a dozen or so app updates. For each and every one of them I have to click several times and wait for the update to complete. It is time-consuming and extremely annoying. And it never stops!

So, one, there isn't a way to auto-update all applications. Two, it feels like Android developers are constantly pushing out buggy, unfinished apps and then updating them all the time. I mean, I sometimes get two or three updates of the same app within a week. People, test your god-damned apps before publishing them, and Google, please provide a way to update all apps at once.

In short, my attitude towards Android has definitely changed for the worse since I actually got an Android phone. Initially it was awesome compared to my previous phone, a Motorola Razr. But we bought an iPhone for my wife at approximately the same time that I got my G1, and I have had ample chance to compare the two since then. Although I would never buy an iPhone for myself on ideological grounds, I grudgingly have to admit that the iPhone is a superior phone! Damn it.

Although I would never admit it (oops, I just did), I catch myself "casually" wanting to use my wife's iPhone for browsing the web, or looking at photos, or what not, because it is simply better than the Android.

Wednesday, November 4, 2009

Ubuntu's LTS Release Schedule

A few days ago I accidentally came across the Ubuntu 10.04 LTS Release Schedule. I am copying it here:

December 3rd, 2009 – Alpha 1 release

January 7th, 2010 – Alpha 2 release

February 4th, 2010 – Alpha 3 release

March 4th, 2010 – Beta1 release

April 1st, 2010 – Beta2 release

April 15th, 2010 – Release Candidate

April 29th, 2010 – Final release of Ubuntu 10.04 LTS

So, they have three weeks between Beta1 and Beta2 and then only two weeks more before release. What I don't understand is this: how the hell can they guarantee that they will fix all critical beta bugs within a total of five weeks? And since I know for a fact that they can't (nobody can), it follows that they will simply release no matter what.

So, let me repeat that. No matter what remaining bugs there are, the Ubuntu LTS version will be released on time. With the bugs. Why bother with the betas at all then? I have to say this is not what I would expect from a "Long Term Support" version or like to run on my servers. Or my desktops.

Sunday, October 25, 2009

Reading the replies to this comment (and the ones around it) on Slashdot, it is somewhat shocking to realize how short-sighted most posters are - considering these are Linux users, which we should assume means something. These people are content to reinstall their OS every six months and couldn't possibly imagine why one wouldn't want to, or why not all software in existence can be in their favorite distro's repository.

The problem is not that these people exist, but that apparently they are the target audience of popular Linux desktop distributions. This worries me. At least with Windows you know that there is a "design committee" somewhere in Redmond, trying to do the right thing; they don't end up doing it, but they at least try to approach it intelligently. What am I saying - apparently they do succeed, and the proof is in the pudding - 98% of the desktop market share.

Recently there were discussions on Slashdot about PulseAudio, the latest attempt to "fix Linux sound". Apparently OSS was not good enough, so it was replaced with ALSA, which wasn't good enough, so aRts, EsounD, etc., were created, and now finally PulseAudio to rule them all. In the meantime, Windows 2000 (and perhaps even Windows 95 and Win 3.1??) had better sound than Linux today. But I digress.

What really made an impression on me is the admission by PulseAudio's creator that its mixer controls can't possibly handle all different sound cards, but that it is at least an improvement over the existing ALSA mixer controls. To this I say: if you aim low, you land low. Let's think for a second. Why isn't this a problem in Windows? How are all possible sound cards handled there?

BTW, here is a hint: it is not only sound. Exactly the same limitation exists in Linux printing. While printers might have tons of special functionality (cleaning, adjustments, manual double-sided printing, etc.) available in Windows, this is never available from CUPS in Linux.

The answer is, in Windows the manufacturer can ship a custom applet, designed specifically for the capabilities of their hardware, together with the driver. Who knew that instead of trying to handle all possible cases in the OS, it is possible to push the burden to the manufacturers and let them take care of it.

Homework question: why doesn't this obvious solution work in Linux, and why does this mean that Linux is destined forever to fail on the desktop?