I listened last week to a podcast debate between Steven Pinker and Stuart Russell and was not especially surprised to find that I am not the first to suggest that corporations might fairly be described as 'superintelligences'. I have argued that we already use law to regulate the reward functions of such 'superintelligences': for example, by making corporations more accountable for the costs of pollution. Stuart Russell summarised an argument by the AI pioneer Danny Hillis, making essentially the same point, but with a different emphasis:
I kind of like Danny Hillis’ argument, which says that actually, no, [superintelligence] already does exist and it already has, and is having significant global consequences. And his example is to view, let’s say the fossil fuel industry as if it were an AI system. I think this is an interesting line of thought, because what he’s saying basically and — other people have said similar things — is that you should think of a corporation as if it’s an algorithm and it’s maximizing a poorly designed objective, which you might say is some discounted stream of quarterly profits or whatever. And it really is doing it in a way that’s oblivious to lots of other concerns of the human race. And it has outwitted the rest of the human race.
Oh dear! I was writing about how law can constrain such superintelligences; Hillis said we've already failed to do so.
Steven Pinker's retort to this example of the catastrophic risk of being outwitted by AI is fairly pithy:
A simpler explanation is that people like energy, fossil fuels are the most convenient source, and no one has had to pay for the external damage they do.
The implication is consistent with my arguments from earlier posts. If you implement rules that make corporations pay for environmental damage, you can reduce, if not eliminate, the damage. But how do we reconcile this argument with the current climate crisis? Doesn't our failure to prevent global warming prove that law and the rule of law are not effective in constraining superintelligence?
I wouldn't go quite so far. We have failed to reduce carbon emissions sufficiently to prevent climate change, but this does not mean we haven't reduced them at all, nor that we will be unable to achieve further reductions. It's a matter of degree (sorry for the terrible pun).
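Pinker's mechanism (price the externality and behaviour changes) is easy to make concrete. Here is a toy sketch in Python, with entirely invented numbers, of a 'corporation as algorithm' maximising profit. I ignore the discounting over quarters that Russell mentions, to keep the sketch to a single period; the point is only that the objective is oblivious to the damage it causes until a pollution price is written into it, which is exactly what I mean by law regulating the reward function.

```python
# Toy model: a "corporation as algorithm" that picks an output level
# to maximise profit. All numbers are invented for illustration.

def profit(output, pollution_price=0.0):
    """Profit: revenue grows with output, but so does pollution.
    The pollution term costs the firm nothing unless the law prices it."""
    revenue = 10 * output
    production_cost = output ** 2
    pollution = 2 * output              # damage borne by everyone else
    return revenue - production_cost - pollution_price * pollution

def best_output(pollution_price):
    """Brute-force search over output levels for the profit maximum."""
    candidates = [x / 10 for x in range(0, 101)]
    return max(candidates, key=lambda x: profit(x, pollution_price))

if __name__ == "__main__":
    # Unpriced externality: the objective is oblivious to the damage.
    print("output with free pollution:", best_output(pollution_price=0.0))
    # A Pigouvian price folds the damage into the reward function,
    # and the same maximiser now chooses to pollute less.
    print("output with priced pollution:", best_output(pollution_price=3.0))
```

Nothing about the maximiser changes between the two runs; only its objective does. That, in miniature, is what a well-designed law accomplishes.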
One of the reasons that the law has not gone far enough is the extensive, global, co-ordinated lobbying of the fossil fuel industry. Now, this does expose a weakness of the rule of law in a democratic order. Law can be hacked.
As I mentioned in the previous post, being compelled to obey the law doesn't rule out attempts to erode the effectiveness of laws in constraining power (including the power of AIs and of corporations). Democratic representatives are susceptible to lobbying, to legitimate political donations and perhaps also to less legitimate forms of influence.
I don't think this means that we have to despair - neither about the prospects of reducing emissions, nor about containing super-AIs. What it tells us is that the probity and accountability of democratic representatives and institutions have a significance that goes beyond the ebb and flow of today's politics.
To put it another way, we need to root out corruption and undue political influence not only because of the harm they do right now, but because they expose us to very serious risks in future. If lobbying is troubling today, imagine lobbying super-charged by AI. Imagine lobbyists who know exactly what words to use, what buttons to push, how much money to donate, and to whom.
A super-AI, or lobbyists armed with AI tools, could exert influence on their own behalf, with a superintelligent capacity for strategy and for individualised manipulation, and presumably with a lot of money. Even if AIs were constrained by the rule of law in the ways discussed in earlier posts, plenty of lawful (but troubling) avenues for lobbying and influence would remain. And, of course, a human lobbyist who is merely using AI as a tool can't simply be programmed to obey the law.
As usual, I don't pretend to have complete solutions to these problems, but I do have some ideas. Here are a few, in no particular order (they are, no doubt, explored in greater depth by anti-corruption experts).
Placing limits on political donations is an urgent priority. The Citizens United case in the US, where the Supreme Court held that the First Amendment prohibits limits on independent political spending by corporations, was disastrous. Let us hope the opportunity arises for the Court to reconsider and overturn this decision. If not, the alternative is to amend the First Amendment... not likely, but conceivable.
In a future of super-intelligent lobbyists, existing levels of transparency won't do. It is not enough to simply report donations. The public should have access to far, far more information about the interactions and schedules of their elected representatives. We should know whom they are meeting (except of course where there are reasons to keep this classified), and how much time they are spending with each person or group. I need to think more about whether there should also be minutes of what is discussed.
By the same token, corporations and industry bodies should keep a publicly accessible log of whom they are meeting in government, and how much time they have spent with these stakeholders.
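Purely as an illustration of what such a log might record, here is a minimal sketch. The field names are my own invention, not drawn from any existing disclosure regime.

```python
# A minimal, hypothetical schema for a public lobbying-log entry.
# Field names are invented for illustration only.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LobbyingLogEntry:
    organisation: str            # who is doing the lobbying
    official_met: str            # which representative or office
    started_at: datetime         # when the meeting began
    duration_minutes: int        # how much time was spent
    topics: list[str] = field(default_factory=list)  # subjects raised
    classified: bool = False     # flagged rather than hidden entirely

entry = LobbyingLogEntry(
    organisation="Example Industry Association",
    official_met="Office of the Minister for Energy",
    started_at=datetime(2025, 3, 14, 10, 30),
    duration_minutes=45,
    topics=["emissions reporting thresholds"],
)
```

Even meetings whose content must stay classified could still appear with the flag set, so that the volume of access remains visible even when the substance is not.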
There should be anti-corruption agencies with wide powers over all branches of government, and at every level of government: local, state, federal and (depending on where you are) international.
All of these measures would help to address existing problems associated with corruption and the excessive influence of wealth over politics. They would also narrow the avenues of attack for lobbyists super-charged by AI and determined to prevent laws from constraining them.
Granted, the persistent concern of AI safety experts still stands. A sufficiently intelligent AI would find loopholes and ways around even these additional proposed measures. Point taken, but I'm becoming more convinced that law has a role to play in complementing technical AI safety measures. Good AI safety reduces the surface area for unintended consequences, and good governance reduces that surface area still further.
I'm starting to see one theme come up, insistently, whenever I turn my mind to long-term AI risks. Good governance and good AI governance are not really separate things.