Kathy Gibson reports – When the European Union (EU) passed the Artificial Intelligence (AI) Act last month, it set a comprehensive legal framework for rules on data quality, transparency, human oversight and accountability.

Under the new legislation, AI applications must be classified and regulated according to the risk they pose and their ability to cause harm: unacceptable, high, limited or minimal risk. Depending on their classification, they can be banned, required to comply with security, transparency and quality obligations, or made subject to lighter transparency requirements.

The Act imposes obligations on organisations based in the EU as well as those doing business in the region, so it will have far-reaching global implications.

The EU AI Act is the first legislation specifically targeting AI, but it probably won’t be the last, says Christina Montgomery, IBM’s vice-president and chief privacy & trust officer.

“The passage of the EU AI Act is monumental,” she says. “It provides a much-needed framework for ensuring transparency, accountability, and human oversight in developing and deploying AI technologies.

“I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM’s commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems. IBM stands ready to lend our technology and expertise – including our watsonx.governance product – to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI.”

Montgomery believes the Act will prove beneficial to both vendors and users of AI technologies, as effective regulation will allow people to reap the benefits while addressing the risks.

“While important work must be done to ensure the Act is successfully implemented, IBM believes the regulation will foster trust and confidence in AI systems while promoting innovation and competitiveness.”

But responsible organisations should already be developing policies and processes to regulate their AI applications, she says. “The requirements of the Act should align with what companies are doing anyway.”

Ana Paula de Jesus Assis, IBM chair & GM of Europe, the Middle East and Africa, says passage of the Act means the EMEA region could play a leading role in ethical and responsible innovation.

“AI could help the region to boost productivity,” she says. “There are also risks, which most companies are well aware of. But the AI Act gives companies the opportunity to prioritise governance in their AI deployments.

“With the passing of the EU AI Act, governments and enterprises need to start getting ready for responsible, compliant AI adoption. At IBM we believe that there is a need for open and transparent innovation, building trust in AI, and looking at the horizon for the next evolutions in tech – like quantum computing.”

Assis thinks other regions and countries will soon follow the EU’s lead in proposing AI governance frameworks. “We see the potential for localisations and variations, but the key principles will cross multiple countries and cultures.”

She echoes Montgomery’s contention that organisations should have at least started to formulate responsible and ethical AI policies, and cites the example of how IBM has tackled the issue.

“At IBM, we put ourselves in the position of a client and test many implementations of things we believe customers will experience. What we did before the regulations came was create the principles of ethical and trustworthy AI.

“And we are seeing many customers doing similar projects to get ahead of the regulatory requirements, so they have AI that can be trusted, and is safe.”