This is, of course, a common problem, and one that came up on the IRC channel at Notcon during the geolocation talk. Plausible facts:
- Coders initially design systems (a) for a small audience and (b) to experiment with new features. As a result there are often (a) technical security holes and (b) "socio"-security holes, i.e. exploitable features that can turn the network into a torrent of abuse. One could say that wikis have openness built in, but that doesn't make (b) any less true.
- Assuming that a system is genuinely open to all (i.e. globally public, as opposed to confined to a limited area/user group/etc), there are always going to be people ready to exploit it for whatever purpose.
Should we be thinking about these issues now, before things get out of hand? I don't see why not. But I think there should also be more thought put into how to make the answers (a) secure but (b) simple, perhaps even part of the system being designed (e.g. see how many social networking systems offer contact/white lists as a "feature" rather than a "necessary hindrance").
One option: require an automated, mailing-list-style sign-up confirmation before anyone can edit a wiki. Another: design some moderation- or reputation-based (FOAF?) approach to who can edit what, or that dictates how prominently their remarks appear.
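To make that concrete, the two ideas could combine into something as simple as this sketch. Everything here (the names `User` and `can_edit`, the threshold value) is hypothetical illustration, not the API of any real wiki engine:

```python
# Hypothetical sketch: gate wiki edits on a confirmed sign-up
# plus a minimum reputation for protected pages. All names and
# thresholds are illustrative, not from any real system.
from dataclasses import dataclass

CONFIRM_REQUIRED = True  # mailing-list-style sign-up confirmation
EDIT_THRESHOLD = 5       # minimum reputation to edit a protected page

@dataclass
class User:
    name: str
    confirmed: bool = False  # replied to the sign-up confirmation mail
    reputation: int = 0      # e.g. endorsements from trusted users (FOAF-ish)

def can_edit(user: User, page_protected: bool = False) -> bool:
    """Allow an edit only for confirmed users; protected pages
    additionally require a minimum reputation."""
    if CONFIRM_REQUIRED and not user.confirmed:
        return False
    if page_protected and user.reputation < EDIT_THRESHOLD:
        return False
    return True

# Usage:
newcomer = User("anon")
regular = User("alice", confirmed=True, reputation=8)
print(can_edit(newcomer))        # False: never confirmed sign-up
print(can_edit(regular, True))   # True: confirmed and reputable
```

The point isn't the code itself but that the gate is cheap to implement and cheap for legitimate users to pass, while still raising the cost of drive-by abuse.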
It may cut down on the sheer "openness" of many new technologies, but I think it would make them better, more usable places in the long term.