Trump’s move to lift Biden-era AI rules sparks debate over fast-tracked advances — and potential risks
Friday, January 24, 2025, 08:23 PM, from ComputerWorld

President Donald Trump's executive order removing Biden-administration rules governing AI development is being cast as an opening of the AI development floodgates — one that could fast-track advances in the still-new technology, but could also pose risks.
Signed on Thursday, the executive order (EO) overturns former President Joe Biden's 2023 policy, which mandated that AI developers conduct safety testing and share the results with the government before releasing systems that could pose risks to national security, public health, or the economy. The revocation of the 2023 EO shifts federal oversight from mandates to voluntary commitments, dropping requirements such as the submission of safety-test results and notification of large-scale compute cluster acquisitions, enabling less regulated innovation.

"This means some states may continue to follow the regulatory guidance in the 2023 EO, while others may not," said Lydia Clougherty Jones, a senior director analyst at Gartner Research.

Trump's policy states that its purpose is to "sustain and enhance America's dominance in AI" and to promote national security. The EO directs the creation of an "AI Action Plan" within 180 days, led by the Assistant to the President for Science and Technology, the White House AI and Crypto Czar, and the National Security Advisor. Michael Kratsios (US CTO during the first Trump administration), David Sacks (venture capitalist and former PayPal executive), and US Rep. Mike Waltz (R-FL) have been nominated or appointed, respectively, to those positions. The EO states that part of its purpose is to "enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."

Mark Brennan, who leads the global technology and telecommunications industry sector group at the Washington-based law firm Hogan Lovells, said the 180-day deadline for a new AI action plan means the group drafting it will need to quickly "gather input and start writing." The mention of "human flourishing" is also "sure to spark diverse interpretations," Brennan said.
A public-private partnership on AI

Along with the order, Trump also unveiled the Stargate initiative, a public-private venture that would create a new company to build out the nation's AI infrastructure, including new data centers and new power plants to feed them. Initially, Stargate will team the US government with OpenAI, Oracle, and SoftBank. The companies will invest $100 billion in the project at the outset, with plans to reach $500 billion; Trump said the move would create 100,000 US jobs.

Oracle CEO Larry Ellison said 10 new AI data centers are already under construction. He linked the project to the use of AI for digital health records, noting the technology could help develop customized cancer vaccines and improve disease treatment.

Not everyone, however, is upbeat about the loosening of government oversight of AI development and partnerships with the private sector. The Stargate announcement, along with the Trump administration's reversal of the earlier AI safety order, could displace many federal workers in key public service roles, according to Cliff Jurkiewicz, vice president of global strategy at Phenom, a company specializing in AI-enabled human resources.

"While it's impressive to see such a significant investment by the federal government and private businesses into the nation's AI infrastructure, the downside is that it has the potential to disenfranchise federal workers who are not properly trained and ready to use AI," Jurkiewicz said. "Federal employees need training to use AI effectively; it can't just be imposed on them."

Stargate will speed up what Jurkiewicz called "the Great Recalibration": a shift in how work is performed through a human-AI partnership. Over the next 12 to 18 months, businesses will realize they can't fully replace human knowledge and experience with technology, "since machines don't perceive the world as we do," he said.
The move could also put smaller AI companies at a competitive disadvantage, stifling innovation, Jurkiewicz said. "Stargate could also deepen inequities, as those who know how to use AI will have a significant advantage over those who don't."

Removing AI regulations, however, won't inherently lead to a completely unbridled technology that can mimic human intelligence in areas such as learning, reasoning, and problem-solving. Commercial risk will drive responsible AI, with investment and success shaped by the private market and state regulations, according to Gartner. Industry commitments and consortia will advance AI safety and development to meet societal needs, independent of federal or state policies.

AI unleashed to become Skynet?

Some predict AI will become as ubiquitous as electricity or the internet: eventually operating behind the scenes, woven into everyday life, and silently powering countless systems and services without drawing much attention.

"I'm sure the whole Terminator thing could happen. I don't consider it likely," said John Veitch, dean of the School of Business and Management at Notre Dame de Namur in Belmont, CA. "I see lots of positive things with AI and taking the guardrails off of it."

Regulating something as transformative as AI is challenging, much as the early internet was. "If we had foreseen social media's impact in 1999, would we have done things differently? I don't know," Veitch said. Given AI's complexity, less regulation might be better than more, at least for now, he said.

AI is valuable as the US faces an aging population and a shrinking labor force, Veitch said. With skilled workers harder to find and expensive to hire, AI can replace call centers or assist admissions teams, offering cost-effective solutions. For example, Notre Dame de Namur's admissions team uses generative AI to follow up on enrollment requests.
Trump's executive order prioritizes "sovereign AI," affecting the private market while shifting most regulatory oversight to state and local governments. For example, New York plans to restrict government use of AI for automated decisions without human monitoring, while Colorado's new AI law, effective in 2026, will require businesses to inform consumers when they're interacting with AI, Gartner's Jones said.

The revocation of Biden's 2023 order reduces federal oversight of model development, removing requirements such as submitting safety training results or sending notifications about large-scale compute cluster acquisitions, which could encourage faster innovation, according to Jones. "Thus, it was not a surprise to see the Stargate announcement and the related public-private commitments," she said. Strengthening sovereign AI, Jones said, will boost public-private partnerships like Stargate to maintain US competitiveness and tech leadership.

What enterprises should focus on

Now that the regulatory buck has been passed to the states, so to speak, organizations should monitor US state AI executive orders, laws, and pending legislation, focusing on mandates that differentiate genAI from other AI techniques and apply to government use, according to a Gartner report. "We have already seen diverse concerns unique to individual state goals across the nearly 700 pieces of state-level AI-proposed legislation in 2024 alone," Gartner said.

Gartner also predicts:

- By 2029, 10% of corporate boards globally will use AI to challenge key executive decisions.
- By 2027, Fortune 500 companies will redirect $500 billion from energy operating expenses to microgrids to address energy risks and AI demands.
- By 2027, 15% of new applications will be fully generated by AI, up from 0% today.

Executives should identify patterns in new laws, especially those addressing AI biases or errors, and align responsible AI with company goals.
Companies are also being urged to document AI decision-making and manage high-risk use cases to ensure compliance and reduce harm. Organizations should also assess the opportunities and risks arising from federal investments in AI and IT modernization. For global operations, companies will need to monitor AI initiatives in regions such as the EU, UK, China, and India, Gartner said. "Striking a balance between AI innovation and safety will be challenging, as it will be essential to apply the appropriate level of regulation," the research firm said. "Until the new administration determines this balance, state governments will continue to lead the way in issuing regulations focusing on AI innovation and safety-centric measures that impact US enterprises."
https://www.computerworld.com/article/3809779/trumps-move-to-lift-biden-era-ai-rules-sparks-debate-o...