
Does regulating AI do more harm than good?

Updated: May 7, 2021

Those who are concerned about AI governance recognise that there is a role for regulation in promoting safe development. But even those in favour of regulation worry about poorly conceived laws doing more harm than good by stifling innovation. Big tech harps on that particular message - 'regulation stifles innovation' - endlessly.


The message is clearly self-serving, so we have to take it with a grain of salt. Still, let's take it seriously. Is clumsy, overreaching regulation a real threat to the positive development of AI technologies?


I can imagine a world in which it would be a real threat, but current trends are certainly not moving in that direction! If anything, governments tend to under-regulate technology. In this post I'll first respond to concerns about hasty, ill-conceived, overreaching regulation; then I'll touch on the risks of under-regulation.


How does tech regulation develop?

Elon Musk, a passionate and high-profile advocate of improving AI safety, suggested, at a conference of US state governors, a staged approach to regulation:

The right order of business would be to set up a regulatory agency – initial goal: gain insight into the status of AI activity, make sure the situation is understood, and once it is, put regulations in place to ensure public safety. That's it. … I'm talking about making sure there's awareness at the government level.

One gets the sense from this kind of statement that regulation is something that, without careful guidance from technologists, just happens suddenly, imposed by fiat from above, and wrecks things. In fact, what Musk is asking for is more or less how new technologies are usually regulated.


In liberal democracies, regulatory reform tends to proceed slowly and cautiously. It tends to involve many stages of consultation and refinement, which give ample opportunity for engagement and feedback from diverse stakeholders.


Any regulatory process pertaining to AI is likely to involve consultation not only with developers and suppliers of AI technologies and services, but also with consumers of those technologies, and even with third parties whose interests might be affected. So, for example, a government agency seeking to develop regulation of autonomous vehicles would not only seek input from car-makers and would-be car buyers. It would also consider the safety of the general public, and even the interests of specific groups, such as the need for disabled people to have fair and equitable access to transport.


Stakeholder consultation and the development of regulatory recommendations and measures is a painstakingly slow process. I'm most familiar with the regulatory landscape in Australia, so I'll speak to that. There are numerous regulatory processes relating to AI underway in Australia, including the ACCC's Digital Platforms Inquiry.

These processes have involved long periods of substantial research, reporting, stakeholder consultation, feedback and further reporting. Often this iterative process of reporting and consultation proceeds in multiple stages, over periods of two to four years or even longer. Here's the timetable, for example, of the Digital Platforms Inquiry mentioned above:


Stages                         |   Date
Terms of reference             |   4 December 2017
Issues paper                   |   26 February 2018
Submissions                    |   3 May 2018
Preliminary report             |   10 December 2018
Forums & key meetings          |   1 March 2019
Preliminary report submissions |   4 March 2019
ACCC commissioned research     |   26 July 2019
Final report                   |   26 July 2019
Government response & roadmap  |   12 December 2019

This is all before the implementation of actual, substantive regulation. We're only now (August 2020) at the point where the government is ready to implement certain of the ACCC's recommendations, such as a mandatory code of conduct governing the negotiation of copyright licences between news providers and platforms. And even with the government's apparently firm commitment to implementing the code, platforms such as Google have not given up on lobbying, indicating that they, at least, think it's not over until the fat lady sings.


Does tech regulation tend to overreach?


Ok, so we can agree that developing regulation is usually pretty slow and consultative. What about the strength of the regulation? Is overreach a common problem?


There are some instances where regulation has dramatically slowed innovation. Regulation of stem cell research is one oft-cited example.


But initial haste to regulate is not the defining characteristic of tech regulation, especially with regard to algorithmic and AI technologies. I've been diving into IoT regulation recently. In The Age of Surveillance Capitalism, Shoshana Zuboff eloquently describes the threat to individual sovereignty and freedom posed by the combination of ubiquitous, intimate data collection through domestic devices and the predictive and manipulative power of AI. The threat she describes seems a pretty plausible example of the kind of dystopian lock-in that Toby Ord would describe as an existential risk.


Given the severity of the risks - not just existential risks, but also day-to-day risks relating to the safety of smart locks, smart homes and smart cooking devices, and risks of hacking, malware and other kinds of disruption - IoT privacy and security seems an eminently suitable target for regulation.


Yet, as Prof David Lindsay and Dr Evana Wright point out in a forthcoming article on the regulation of IoT around the world, fear of unintended consequences has produced only 'hesitant', 'light touch' regulation.


Toothless regulation might seem better than stifling regulation, but I'm not sure it's quite so clear cut. In the context of IoT, Lindsay and Wright aptly point out the risks to innovation posed by inadequate regulation of security:

If the harms caused by insecure devices are sufficiently serious, this will undermine trust, which inevitably adversely affects innovation. In other words, any benefits from unconstrained innovation may be outweighed by detriments.

Under-regulation in the long run might be worse for innovation if it leads to a loss of trust, and consequently a severe reduction in demand.


It also has the potential to cause regulatory backlashes. Online platforms successfully opposed regulation for ages, but gradually public opinion began to shift, and we're now in the midst of a huge 'techlash'.


A series of dramatic incidents, such as the livestreaming of the Christchurch shootings in New Zealand, broke the dam. Regulatory reactions to that terrible incident were swift, and not particularly careful. The build-up of pressure from public opinion, outrage and concern was released in sudden, reactionary, poorly conceived laws like Australia's imposition of criminal liability for failure to take down violent extremist content - drafted in such a way as to encourage pretty troubling levels of automated filtering and censorship, including of legitimate content like news.


Conclusion


My sense is that it is best for responsible AI developers to get on the front foot; to push for appropriate regulation, and to guide the dialogue. Fortunately, this seems to be happening through the work of various industry and public interest organisations like the Partnership on AI.


The question that will need to be answered over the coming years is not whether to regulate, but what kind of regulatory measures best balance the many competing rights, interests and objectives at stake in the governance of AI.


