The distributed self is already here, from small automated bits of our greed attempting to snipe away coveted rare toys on eBay, to large automated bits of our greed attempting to grab billions in electronic financial markets. Even without artificial intelligence advanced enough to fully replicate, extend, or simulate consciousness, we’re already not wholly in any single place, and we will probably be even less so in the future. The advantages of doing multiple things at the same time, or doing things we couldn’t otherwise do, are obvious.
But our distributed selves aren’t quite flawless when following either our instructions or our intent — which, as anybody who has ever programmed knows, are seldom the same thing. Code is buggy, communication channels break, platforms do unexpected things, and suddenly remote parts of yourself are doing questionable things you didn’t want them to do. This has long been a concrete regulatory and practical issue in exchange markets that allow fully automated trading, and as automation of this sort becomes more widespread, it has the potential to impact most individuals and organizations.
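The gap between instruction and intent can be illustrated with a toy sketch (the scenario, names, and the unit-mismatch bug are all hypothetical, invented here for illustration): an auction-sniping bot that obeys its instruction to the letter while trampling what its owner actually meant.

```python
# Hypothetical sniping bot: the *instruction* is followed exactly,
# but the *intent* (never spend more than $50) is violated, because
# the ceiling was stored in cents and then used where dollars were expected.

def place_bid(current_price: float, ceiling: float) -> float:
    """Bid one increment above the current price, capped at the ceiling."""
    return min(current_price + 1.0, ceiling)

MAX_SPEND_DOLLARS = 50
ceiling_cents = MAX_SPEND_DOLLARS * 100  # 5000 -- stored in cents...

# ...but interpreted as dollars by place_bid. The code does exactly
# what it was told, and nothing like what was meant:
bid = place_bid(current_price=120.0, ceiling=ceiling_cents)
print(bid)  # 121.0 -- a $121 bid from a self that was told "$50 max"
```

Nothing here crashes or misbehaves in any way a monitor would flag; the remote bit of you simply carries out a faithful instruction that no longer encodes your intent.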
What happens if, or rather when, your automated energy-saving home network glitches and you end up buying energy at peak prices? Or when the email/IM bot you use to convince your customers or boss you’re always on call (trust us, given the state of the economy, you are going to need one of those before long) gets confused and sends something it, or rather you, shouldn’t? Should you be held responsible every time your computer is hijacked by a worm and used to stage an attack on a business or government website?
We are going to have to think carefully about the legal implications of these situations.
Most legal systems already have some limited liability shields for individuals, as well as the very strong form of liability shield that is the modern corporation. Because corporations are legal entities distinct from the people behind them, shielding those people from many of the legal (and even social) consequences of corporate acts, they offer a very attractive model for the legal structuring of the swarms of computer code we are increasingly empowering to act on our behalf. And wouldn’t it be terribly convenient — isn’t it terribly convenient — to have partial doppelgangers that share your agenda, but whose acts you can’t be fully held responsible for? There’s a technical difficulty in that setting up legal shields is still a cumbersome and expensive process, especially compared with the way in which information technology has become cheaper and more powerful. But this can be fixed with relative ease, if we want to.
Do we want to, though? Corporations have proven an extremely powerful tool for the creation of wealth, and also an extremely powerful tool for messing things up. Widespread legal protection from the acts of our distributed bits would mean, for example, that many forms of harassment would be very difficult to prosecute: just set up a separate loop of code that sends insults. Liability shields clearly carry important incentive problems with them, and because it can be impossible to distinguish a software error from clever coding, intent is not always easy, or even possible, to determine in complex, distributed systems.
But tying legal responsibility for a wide swath of remote actions to individuals is also problematic. Most people, and even most organizations, lack the knowledge, resources, or clout to exert much control over the whole of their technological platforms (can you fully ascertain that none of the Google ads on your website are illegal in your country?). We would be, and to some degree are, leaving ourselves open to the consequences of programming errors in pieces of code we might not even know exist.
Right now there’s no simple answer. If insanity can be loosely seen as lack of control over our own actions, our distributed selves are all a bit insane. Neither our legal system nor our societal norms (and perhaps not even our understanding of ethics) have yet fully adapted to this emerging development.