Friday, January 11, 2008

KDE 4.0: Unexciting.

I was going to start this post with a question. When did KDE become a bizarre mash-up of Windows and OSX? But now that I've admitted that's what I was going to do, I can no longer do it. So I won't. But the question will hang in the air like butterfly ghosts.

Instead, this is an attempt to reconcile a glass that's half full with one that's half empty. After looking through screenshots of the recent release of KDE 4.0, that may or may not be possible.

A hand-written scrawl in the half-empty glass says "Hmm, a pale imitator of OSX. Is this really going to re-endear Linux to me?"

An equally illegible note in the half-full glass says "Hey, a decent imitation of OSX - for free!"

Both are true. In reality, there is only one glass. (In the Matrix, of course, the glass is neither broken nor unbroken.) It is simply The Way of Things.

It is both impressive - that one can run such software legally, without paying a seashell - and depressing - that such software, set somewhat "free" of economics and corporations, fails to push any real boundaries. Anything in the modern, politico-capitalist world inherently contains both these facets: Consumerism is a social statement. Usage is a personal practicality. Both - neither - are ultimately more important than the other.

But if that's the case, why am I still disappointed? Because I still cling to the idea of a "hacker" as an experimenter, perhaps. Because the pioneering spirit of giving things away free often went hand-in-hand with new ideas; money means value, value often precludes taking risk, and risk... well, new ideas are always risky. But lately I've been converted to Apple. Not simply because they come up with "risky" (as in new) ideas, but because a) their "risky" ideas are well researched before release, and b) the implementation of these ideas is generally well-rounded. In other words, open, distributed development may engender a modular, relatively solid codebase. But perhaps usability innovation requires something more than this.

What's interesting is the way the dynamics - the balance between technical sturdiness and usability innovation - shift as the context shifts. Economics obviously affect what people can afford to pay for: let's face it, there's no point in advocating the idea that schoolchildren in developing countries should all have Macs. But the role(s) of - and possibilities for - Linux et al also change as more people have access to computing generally. An alternative to Windows for the masses is needed, but Linux isn't in a position to fulfil that.

Alternatively, many markets for embedded OSes are springing up, and all of the large players are obviously trying to get in on them. Here, Linux faces a tough task. Why? Because not only do Apple have a solid technical grounding (much more solid than MS), they also have the structure to think about both usability and design. There's a lot to be said for owning the code, the interface and the thing-in-the-box.

So ultimately, and to get back on topic (was there one?), I'm disappointed that KDE 4.0 seems to be more of the same, that most of the changes seem to be technical, under-the-hood improvements. There are some huge opportunities to make technology really applicable to everyday lives, opportunities which are still being pecked at around the edges. I guess the cyberpunk anarchist in me still just wishes these were being explored by people not in it for the cash.

7 comments:

phil jones said...

I think there's a lot to be said for the point that having an existing *model* to work from is very useful (ie. almost essential) when you're working in a loosely co-ordinated way.

How can a bunch of people with little formal connection or responsibility to each other, and no written specification, come up with a complex piece of software? Well, one way is if they can all point at the same thing and say "we want one of those". (E.g. Unix, the C library, browsers, editors, etc.)

Eric Raymond long ago tried to dispel the myth that open source couldn't do real innovation by pointing at Perl (which has never had an equivalent in the proprietary world). How convincing that is, I'm not sure. Programming languages look like good candidates for free-software innovation; OTOH most of the ideas of the big free languages are borrowed from research of 30 or 40 years ago. (Hence Ruby and Python are not substantially more innovative than Lisp and Smalltalk.)

I believe that there *is* innovation in the free software community, but almost by definition, the innovative projects are not those which are going to attract a lot of people to them. (If it's too innovative it probably isn't the solution to a lot of people's problems, nor is it likely to be very understandable. In fact true innovation probably isn't recognisable as innovation - just some weird shit.)

Group decision making (of which peer-production is an example) is always going to tend towards the mean.

Actually, I guess the most fertile soil for free-software innovation has been where there's a common, widely recognised problem which proprietary software companies nevertheless failed to address well: for example, lightweight, easy-to-use web frameworks. Perhaps PHP and Rails are the biggest examples of free *innovations* that have gone mainstream.

However, there are almost no equivalent widely recognised but under-served problems on the desktop (who *cares* about the desktop these days?) So anyone working in free software in that area has nothing to do *except* try to copy Windows and Mac. (Which are themselves not doing much innovation in this area.)

The next exciting place is the coming "device-swarm", but that's going to be composed of problems largely defined by the new hardware devices themselves. That creates a different dynamic: even though I confidently expect hackers to be hooking their Wiimotes and Nunchuks up with their Arduinos and Chumbies, Roombas and Livescribes etc., it won't have the same flavour as the competition between proprietary and free software. Most software will be free-as-in-beer, tethered to online services (either paid for by advertising or subscription).

There'll probably be lots of free-as-in-speech software too, but most of its value will still come from the network it connects to (often proprietary unless P2P), backed by utility data-storage from Amazon, Google, Sun and Microsoft.

Hmmm ... maybe that P2P angle is the real frontier to be pushing now ...

Scribe said...

Thanks for the thoughtful comment, Phil. Some very interesting points.

One thing I'd like to unpick - in my mind, at least - is where the split in innovation comes from. That is, I don't think open-source projects are inherently *bad* at user innovation. But there's a correlation somewhere that means it (and other things) tend to get ignored. 3 possible causes for this:

1. Open Source as a distributed, open development model carries over to a distributed, open use feature set. The "traditional" approach to GUI options, for instance, is to throw everything in so that the user has full control. Thus, the (potential) diversity of users matches the (potential) diversity of developers.

However, this assumes users vary only in terms of what they like to use, and that they are equal in terms of how they like to configure things. In reality, it's often actually quite nice to be able to just run with thoughtful defaults, and configuration be damned.

2. Open Source is a technical development model, but says nothing of other forms of innovation. Hence, functional decisions are generally not made with non-technical/non-coding users in mind. In other words, the audience people have in mind as they write code is the person writing the software.

I think this ties in with point 1 actually. Code is a form of control, and this idea of control carries over into usage. "Eating your own dogfood" is often considered a good approach. But developers' food can be very different to non-developers' food. The question then becomes, who develops the non-developers' food?

3. Coming out of 1 & 2, maybe it's just a matter of relevance. I'd argue there's a big difference between innovations in development tools - such as Perl, Web toolkits, etc. - and innovations in non-development tools, and I think this is probably the central dichotomy I'm trying to get at here.

Take, for instance, Firefox. There is, it seems to me, actually fairly little innovation in terms of usability and interface design. The big innovation, in these terms, is more the ability and infrastructure to support Extensions/Plug-ins. Once that's in place, you have (potentially) a much more "organic" approach to user-end innovation. (N.B. This amusingly mirrors the problems that a diverse GUI configuration tool throws up - too much choice, too much searching, etc.)
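To make that "platform" point a bit more concrete: the clever bit isn't any individual extension, it's that the host application defines stable hook points and then gets out of the way. A minimal sketch of the shape of the idea (toy Python, hypothetical names, nothing to do with Firefox's actual extension APIs):

    # Toy illustration (hypothetical names, not Firefox's real extension API):
    # the host only defines hook points and a registry; the behaviour comes
    # from whatever extensions get plugged in later.
    class ExtensionHost:
        def __init__(self):
            self.hooks = {}  # hook name -> list of callbacks

        def register(self, hook_name, callback):
            # Extensions call this to attach behaviour to a named hook point.
            self.hooks.setdefault(hook_name, []).append(callback)

        def fire(self, hook_name, value):
            # The host calls this at fixed points in its own flow and lets
            # each registered extension transform the value in turn.
            for callback in self.hooks.get(hook_name, []):
                value = callback(value)
            return value

    # An "extension" is then just code written against those hook points:
    def shout(text):
        return text.upper()

    def add_footer(text):
        return text + "\n-- rendered by a toy extension"

    host = ExtensionHost()
    host.register("render_page", shout)
    host.register("render_page", add_footer)
    print(host.fire("render_page", "hello, world"))

All the user-end "innovation" then lives in whatever gets registered against those hooks - which is exactly what makes it organic, and exactly what produces the too-much-choice problem above.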

In switching from "real" innovation (in a UX-research kind of way) to a more "liberalised" approach such as this, an interesting shift occurs: we see the slide towards a Platform-based approach to innovation.

Perhaps this explains where things like "web 2.0" come from - it's "easier" to develop things technically, and allow a genericisation that engenders "choice". But does choice ultimately trump sensible defaults for users?

Hmm, I should stop there, but 2 notes to conclude:

1. Is platformisation/functional genericisation some form of technical determinism? Or can "real" user innovation still win out when done properly? In fact, can "real" user innovation be done in a distributed/open source manner alongside code? And if so, how?

2. This is turning into an intriguing analogy for the Public Sector, especially in Britain. There's a similar move towards Authorities providing cash, and handing over responsibility for what actually happens to either contractors, or the users themselves. Think the "cash = platform" meme.

Of course, that raises the link between "innovation" and "responsibility", as (IMHO) much of that move is based on shifting accountability (disguised as "efficiency").

I can see overlapping hierarchies in my head now.

zby said...

Maybe it is harder to convince fellow programmers that you have a good idea than investors? In both cases, to have the resources for a bigger development you need to convince other people that you have a good idea - but with investors you only need to convince them once, and after you receive the money you decide what to do - while when leading a free software project you need to convince the peer developers constantly.

lionkimbro said...

I thought wiki was pretty innovative: Easy Page Creation & Linking + Recent Changes + Web. Wiki turned out to be easy to reproduce, and easy to evolve.
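That formula is small enough to sketch in a few lines of toy Python (purely illustrative - page creation, CamelCase linking and recent changes, no web server, hypothetical behaviour):

    # Back-of-the-envelope sketch of that formula (toy code, not any real
    # wiki engine): page creation + CamelCase linking + recent changes
    # really is about this much logic.
    import re
    import time

    pages = {}    # title -> text
    changes = []  # (timestamp, title), newest last

    def save(title, text):
        pages[title] = text
        changes.append((time.time(), title))

    def render(title):
        # Any CamelCase word becomes a link; unknown pages become "create me?" links.
        def link(match):
            name = match.group(0)
            return f"[{name}]" if name in pages else f"[{name}?]"
        return re.sub(r"\b(?:[A-Z][a-z]+){2,}\b", link, pages.get(title, ""))

    def recent_changes(n=10):
        return [title for _, title in changes[-n:]][::-1]

    save("FrontPage", "Welcome. See also SandBox and MissingPage.")
    save("SandBox", "Scratch space.")
    print(render("FrontPage"))   # Welcome. See also [SandBox] and [MissingPage?].
    print(recent_changes())      # ['SandBox', 'FrontPage']

Which is roughly why it was so easy to reproduce and evolve.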

User Interface is notoriously hard work. Everything gets connected to everything else -- your data, your visualization, your interactions, your cache, and so on. It's pretty wicked.

I would inspire a spiritual community for the development of software, from commodity hardware up, to further the development of the Noosphere.

Scribe said...

Zby and Lionkimbro: Many thanks for the comments. It feels like there's a chipping-away-at-the-edge of an interesting social-organisation "vs" idea-formation debate. That is, how the development of an idea is linked to the group developing it.

I can think of 2 ways (at the moment), therefore, that distributed development is *not* suited to interface development/design:

1. Personal insight to decide. This is important because simplicity - KISS - involves narrowing down available options, not opening them up. This requires decision-making, and a certain degree of top-down control. (This may or may not be achievable with a distributed development set-up...)

2. Testing. On users. This requires more organisation, and a lot of planning. It's quite difficult to "evolve" the testing process in a coherent manner, so again a certain amount of individual a) experience, and b) control is needed to get this right.

Feel free to argue against either/both of these though.

There are also some thoughts to be had about interfaces as a coherent, shared experience (e.g. tech support) but I'll leave them alone for now...

Scribe said...

Follow-up to check out: Mobile Firefox seeking UI feedback via a wiki.

Scribe said...

Follow-up thought to maybe chase up later: I just realised that I'm missing the point that, even if no innovation takes place in terms of GUI design, a decision still needs to be made in terms of what gets copied. Where does this decision get made currently, then, and why?