The recent Congressional hearings in the US greatly increase the prospect of some form of regulation of the AI industry, but any such regulation will have to be very carefully crafted to ensure that unintended consequences are minimised and that the US does not hamstring itself in its technological arms race with China.
- OpenAI CEO Sam Altman, NYU professor Gary Marcus and Christina Montgomery from IBM jointly testified before Congress on Tuesday in what was generally a constructive and non-combative session.
- The first topic under discussion was regulation, and here OpenAI and Microsoft are clearly open to working with the US government and even seem to welcome the prospect of some form of regulation.
- What seems to be on the cards is a regulatory agency that issues licenses to operators above a certain scale, but even this is fraught with problems.
- I continue to think that the real risk to humans is not from malevolent machines but from malevolent humans ordering the machines to do bad things. Therefore, some form of licensing may help.
- At the same time, Altman and Marcus called out the risk of a technocracy where AI is concentrated in the hands of a few large players, who would then have unimaginable power to control and shape society.
- This is one of the biggest dangers of regulation, because a regulatory environment increases the cost of doing business and creates an (often large) bias towards the larger companies, as the smaller players cannot afford to comply.
- This would force smaller players out of the market and drive consolidation towards a few larger players, which is exactly one of the outcomes regulation seems to be seeking to avoid.
- Limiting the development of AI is also a non-starter for two main reasons:
- First, the genie is already out of the bottle. Large language models, and the technology and know-how needed to create them, are already widely available in the open-source community.
- So large is this community that there is speculation that the performance of open-source models may soon rival that of the large companies' models.
- Placing restrictions on development will only serve to drive development underground (bad scenario) or drive it overseas (even worse scenario).
- Consequently, this technology is going to be developed regardless of the regulatory environment, and so a scheme that embraces it is far more likely to succeed than one that slows or holds it back.
- Second, there is the technology rivalry in which the US (and increasingly the West) is locked in an ideological struggle with China.
- This battle is currently being fought in the technology sector and semiconductors in particular, but it is now also starting to move into AI.
- Unlike in semiconductors, the US and the West have a much weaker ability to restrict China's development in this space, as limiting access to the semiconductors used for AI training will only slightly slow China's progress.
- Hence, if the US intentionally hobbles its own development, it will hand an advantage to China, which is the one thing that all parties in the US government agree is a bad idea.
- I therefore suspect that the best regulatory environment will be a low-touch system that is cheap and simple to comply with, and that targets restricting the access of bad actors rather than the technology itself.
- Other areas discussed included the management of copyrights for content owners whose content is used for training and then becomes the genesis of a novel creation.
- This is not a new issue, as a similar problem exists with DJs who sample music or extracts of other content to create new tracks, and so I suspect that it will be solved over time.
- Employment was also discussed. Both Altman and Marcus were of the opinion that the job market faces no immediate danger although there is likely to be some change. This is broadly in line with my view.
- This is the first time I have seen an industry asking to be regulated, which gives a much better chance of producing regulation that is productive rather than regulation riddled with the unintended consequences that so regularly occur when rules are unilaterally imposed.
- I continue to think that the machines are as dumb as ever, but their size and complexity have greatly enhanced their linguistic skills, even though they are simply calculating the probability of words occurring next to each other (see the toy sketch after this list).
- This creates a convincing illusion of sentience, which leads people to anthropomorphise these systems and, in turn, makes them far easier for bad actors to exploit.
- Hence, humans are in far more danger from other humans than they are from the machines, and it is this that any regulation should target.
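To make the point about word probabilities concrete, here is a purely illustrative toy sketch of a bigram model in Python. It is my own illustration rather than anything discussed at the hearing, and real large language models use neural networks over tokens rather than raw word counts, but the output is the same kind of object: a probability distribution over what comes next.

```python
# Toy bigram "language model": counts which word follows which in a tiny
# corpus and turns the counts into next-word probabilities. Purely
# illustrative; real LLMs learn these distributions with neural networks
# over tokens, at vastly greater scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word_probabilities(word):
    """Return P(next word | current word) estimated from raw counts."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# -> {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Scaled up by many orders of magnitude, this is all the machine is doing: picking likely continuations. There is no understanding behind the fluency, which is precisely why the convincing illusion described above is so easy to mistake for sentience.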
(This guest post was written by Richard Windsor, our Research Director at Large. This first appeared on Radio Free Mobile. All views expressed are Richard’s own.)