Wednesday, April 07, 2004

For some reason, whilst reading this Politech post on the ACLU suing the Feds over the "do not fly" list, I started thinking about the plausibility of supposedly hermetic technical schemes versus the scalability of their implementation. Probably something to do with the earlier ID Cards post, too.

The fact that it's obvious makes it even more depressing, though: the technical effort involved in ensuring a rigid flow of information increases with the size of the network it flows through.

OK, so it's nothing new, and it's kind of vague. What I mean is, it's technically easy to dictate the flow and use of data within a small network. As the network gets bigger, though, the technology has to start accounting for emergent social factors that manifest either through collusion between the network's actors (more links between them), or through pure probability (more chance of some single person doing something). This goes for people abusing the system as well as for honest mistakes (there's a rough sketch of both effects below).
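A minimal back-of-the-envelope sketch of both effects, assuming an entirely made-up per-person slip-up rate of one in ten thousand: the number of potential collusion channels grows quadratically with the network, and the chance of at least one incident creeps towards certainty.

```python
# Illustrative only: the per-person rate p is an assumption, not a
# measured figure for any real system.

def pairwise_links(n: int) -> int:
    """Potential collusion channels: every pair of actors is a link."""
    return n * (n - 1) // 2

def p_at_least_one_incident(n: int, p: float) -> float:
    """Chance that at least one of n actors slips up (or cheats),
    assuming each does so independently with probability p."""
    return 1 - (1 - p) ** n

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9,} actors: {pairwise_links(n):>15,} links, "
          f"P(incident) = {p_at_least_one_incident(n, 0.0001):.4f}")
```

Run it and the last line makes the point: at a million actors, even that tiny slip-up rate makes an incident a near-certainty, while the number of links to police has exploded to hundreds of billions.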

I'd love to know the design process that goes into large-scale (as in, national/international) technical systems - do they rely mainly on lessons learned at smaller scales, lessons which may not take all the relevant factors into account (and, indeed, which may not even be predictable if there's no previous experience at such a scale)?

For instance, taking the "evidence" that biological identification schemes, such as iris-scanning, are foolproof, and using it as proof that a nation-wide ID card will succeed, fails to take into account either a) the absence of any large-scale tests of the technology, or b) the emergent "workaround" factors that render the technology useless in a large-scale context.
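To put rough numbers on that first point - and these are invented figures for illustration, not real iris-scanner specs - here's a sketch of how a false-match rate that looks foolproof in a pilot trial produces a steady stream of false hits at national scale.

```python
# Illustrative only: the false-match rate and population sizes below
# are assumptions, not measured biometric figures.

def expected_false_matches(false_match_rate: float, enrolled: int) -> float:
    """Expected false hits when one probe is compared against every
    enrolled record (treating each comparison as independent)."""
    return false_match_rate * enrolled

for enrolled in (1_000, 60_000_000):  # a pilot trial vs. a national roll-out
    hits = expected_false_matches(1e-6, enrolled)
    print(f"{enrolled:>11,} records: ~{hits:g} false matches per lookup")
```

A one-in-a-million rate gives effectively zero false matches in a thousand-person trial, but dozens per lookup against a national database - the "evidence" of foolproofness simply doesn't survive the change of scale.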

