
What AI regulations mean for software developers

Tuesday August 27, 2024. 10:30 AM , from InfoWorld
As organizations of all sizes and sectors race to develop, deploy or buy AI and LLM-based products and services, what are the things they should be thinking about from a regulatory perspective? And if you’re a software developer, what do you need to know?

The regulatory approaches of the EU and US have, between them, firmed up some of the more confusing areas. In the US, a new requirement obliges every federal agency to appoint a chief AI officer and to submit annual reports identifying all AI systems in use, the risks associated with them, and how it plans to mitigate those risks. This echoes the EU’s requirements for similar risk assessment, testing, and oversight before deployment in high-risk cases.

Both have adopted a risk-based approach, with the EU specifically identifying the importance of “Security by design and by default” for “High-risk AI systems.” In the US, the CISA states that “Software must be secure by design, and Artificial Intelligence is no exception.”

This is likely to be music to the ears of anyone familiar with proactive security. The more we do to reduce the friction between machine logic and human analysis, the more we can anticipate threats and mitigate them before they become a problem.

Code is at the core

At a fundamental level, “security by design, by default” begins with the software developer and the code used to build AI models and applications. As AI development and regulation expand, the developer’s role will evolve to include security as part of daily work. Code is the foundation of every AI system, and if you’re a developer using AI, you’ll be keeping a closer eye than ever on weaknesses and security.

Hallucinated or deliberately poisoned software packages and libraries are already emerging as a very real threat. Software supply chain attacks that start life as malware on developer workstations could have serious consequences for the security and integrity of data models, training sets, or even final products. It’s worth noting that the malicious submissions that have dogged code repositories for years are already appearing on AI development platforms. With reports of massive volumes of data being exchanged between AI and machine learning environments such as Hugging Face (aka “the GitHub of AI”) and enterprise applications, baking security in from the outset has never been more critical.
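One concrete habit that addresses this class of supply-chain risk is verifying a downloaded model, dataset, or library archive against a digest published by its provider before loading it. A minimal sketch in Python; the function name and usage are illustrative, not part of any regulation or platform API mentioned above:

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 digest against a pinned value.

    Returns True only if the file on disk matches the digest the provider
    published, so a tampered or poisoned re-upload fails the check.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

In practice the expected digest would be pinned in source control alongside the dependency reference, and the check run before the artifact is ever loaded into a training or inference pipeline.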

Article 15 of the EU’s AI Act seeks to preempt this scenario by mandating measures to test, mitigate, and control risks, including data and model poisoning. New US guidance requires that government-owned AI models, code, and data be made publicly available unless doing so poses an operational risk. With code simultaneously under scrutiny and under attack, organizations developing and deploying AI will need a firm handle on the weaknesses and risks in everything from AI libraries to devices.

Proactive security will be at the heart of driving security by design, as regulations increasingly require an ability to find weaknesses before they become vulnerabilities.

Innovation meets risk meets reality

For many organizations, the risk will vary depending on the data they use. For example, healthcare companies will need to ensure that data privacy, security, and integrity are maintained across all outputs. Financial services companies will be looking to balance benefits such as predictive monitoring against regulatory concerns for privacy and fairness.

Both the EU and US regulatory approaches place a heavy emphasis on privacy, protection of fundamental rights, and transparency. From a product development perspective, the requirements depend on the type of application:

Unacceptable risk: Systems considered a threat to humans will be banned, including government-run social scoring, biometric identification and categorization of people, and facial recognition. Some exceptions apply for law enforcement.

High risk: Systems with the capacity to negatively impact safety or fundamental rights. These fall into two categories:

AI systems to be used in products that fall under EU product safety legislation including aviation, automotive, medical devices, and elevators.

AI systems that will be used in areas including critical infrastructure, education, employment, essential services, law enforcement, border control, or application of the law.

Low risk: Most current AI applications and services fall into this category and will be largely unregulated, including AI-enabled games, spam filters, basic language models for grammar-checking apps, and the like.

Overall, under the EU AI Act, applications like ChatGPT aren’t considered high risk (yet), but they will have to meet transparency requirements: disclosing that content is AI-generated, preventing the generation of illegal content, and disclosing the use of copyrighted data in training models. Models that could pose systemic risk will be obliged to undergo testing prior to release and to report any incidents.

For products, services, and applications in the US, the overarching approach is also risk-based—with a heavy emphasis on self-regulation.

The bottom line is that restrictions increase with each level. To comply with the EU AI Act, before any high-risk deployment, developers will have to pass muster with a range of requirements including risk management, testing, data governance, human oversight, transparency, and cybersecurity. If you’re in the lower risk categories, it’s all about transparency and security.

Proactive security: Where machine learning meets human intelligence

Whether you’re looking at the EU AI Act, the US AI regulations, or NIST CSF 2.0, ultimately everything comes back to proactive security: finding weaknesses before they metastasize into large-scale problems. Much of that starts with code. If a developer misses something, or downloads a malicious or weak AI library, sooner or later that will manifest as a problem further up the supply chain. If anything, the new AI regulations have underlined the criticality of the issue and the urgency of the challenges we face. Now is a good time to break things down and get back to the core principles of security by design.

Ram Movva is the chairman and chief executive officer of Securin Inc. Aviral Verma leads the Research and Threat Intelligence team at Securin.



Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
https://www.infoworld.com/article/3481535/what-ai-regulations-mean-for-software-developers.html
