What Laws Will We Need to Regulate AI?

Sunday, January 14, 2024, 05:34 PM, from Slashdot
johnnyb (Slashdot reader #4,816) is a senior software R&D engineer who shares his proposed framework for 'what AI legislation should cover, what policy goals it should aim to achieve, and what we should be wary of along the way.' Some excerpts:

Protect Content Consumers from AI
The government should legislate technical and visual markers for AI-generated content, and the FTC should ensure that consumers always know whether or not there is a human taking responsibility for the content. This could be done by creating special content markings which communicate to users that content is AI-generated... This will enable Google to do things such as letting users exclude AI content when searching. It will enable users to detect which parts of the content they consume are AI-generated and apply the appropriate level of skepticism. And future AI language models can also use these tags to know not to consume AI-generated content...
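
The excerpt does not specify what such a marker would look like. As a purely hypothetical illustration (the "ai-generated" meta tag name and the detector below are assumptions, not any proposed standard), a page-level marking could be checked by a search crawler or a model's training pipeline like this:

```python
# Minimal sketch of the marking idea. No marker format is defined in the
# article; the <meta name="ai-generated"> tag used here is hypothetical.
from html.parser import HTMLParser

class AIMarkerDetector(HTMLParser):
    """Scan a page for a hypothetical <meta name="ai-generated"> marker."""
    def __init__(self):
        super().__init__()
        self.ai_generated = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "ai-generated":
            self.ai_generated = attrs.get("content") == "true"

def is_ai_generated(html: str) -> bool:
    detector = AIMarkerDetector()
    detector.feed(html)
    return detector.ai_generated

# A crawler could skip marked pages when building a "human-only" search
# index, and a training pipeline could filter them out of its corpus.
page = '<html><head><meta name="ai-generated" content="true"></head></html>'
assert is_ai_generated(page)
```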

Ensure Companies are Clear on Who's Taking Responsibility
It's fine for a software product to produce a result that the software company views as advisory only, but it has to be clearly marked as such. Additionally, if one company includes the software built by another company, all companies need to be clear as to which outputs are derived from identifiable algorithms and which outputs are the result of AI. If the company supplying the component is not willing to stand behind the AI results that are produced, then that needs to be made clear.
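
The article stops short of prescribing a mechanism for this. One hedged sketch is a provenance record attached to each output; the record type, field names, and vendor below are all hypothetical illustrations of distinguishing outputs derived from identifiable algorithms from those produced by AI:

```python
# Hypothetical sketch: the article names no schema; this provenance record
# and its fields are invented to illustrate "who stands behind what".
from dataclasses import dataclass
from enum import Enum

class OutputSource(Enum):
    DETERMINISTIC = "identifiable algorithm"    # vendor stands behind it
    AI_ADVISORY = "AI-generated, advisory only" # explicitly not warranted

@dataclass
class OutputProvenance:
    producer: str         # which company's component produced the output
    source: OutputSource  # algorithmic vs. AI-derived
    warranted: bool       # does the producer take responsibility for it?

# A host application embedding a third-party component could surface this
# record alongside each result, so users see who is accountable for what.
result_meta = OutputProvenance(
    producer="ExampleVendor",  # hypothetical component supplier
    source=OutputSource.AI_ADVISORY,
    warranted=False,
)
print(f"{result_meta.producer}: {result_meta.source.value}, "
      f"warranted={result_meta.warranted}")
```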

Clarify Copyright Rules on Content Used in Models

Note that nothing here limits the technological development of Artificial Intelligence... The goal of these proposals is to give all involved clarity as to what the expectations and responsibilities of each party are.

OpenAI's Sam Altman has also been pondering this, but on a much larger scale. In a (pre-ouster) interview with Bill Gates, Altman considered what happens at the next level.

That is, what happens 'If we are right, and this technology goes as far as we think it's going to go, it will impact society, geopolitical balance of power, so many things...'

[F]or these, still hypothetical, but future extraordinarily powerful systems — not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact. I think that could make sense...

I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, 'Do what you want, but any compute cluster above a certain extremely high-power threshold' — and given the cost here, we're talking maybe five in the world, something like that — any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. That feels possible to me. I wasn't that sure before, but I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it.

Read more of this story at Slashdot.
https://yro.slashdot.org/story/24/01/13/2310209/what-laws-will-we-need-to-regulate-ai?utm_source=rss...
