The European Union’s groundbreaking AI Act sets a precedent in regulating artificial intelligence (AI), introducing stringent requirements for developers and deployers, says research group GlobalData.

With the adoption of the landmark legislation, establishing corporate ethical AI strategies becomes an urgent imperative for multinational corporations operating within the bloc.

“With the passing of the EU AI Act, staying on top of issues related to AI and ethics is going to become increasingly important to multinational organisations,” says Rena Bhattacharyya, chief analyst and practice lead: enterprise technology and services at GlobalData.

Adopted by the European Parliament on 13 March, the legislation strives to hold organisations more accountable for their use of AI. Among many other requirements, it categorises use cases by risk, stipulates greater oversight of riskier AI use cases, bans certain use cases outright, and mandates increased transparency over the use of the technology.

“While these new obligations provide much-needed consumer protections, they create increased complexity for enterprises already struggling to scale their use of AI,” says Bhattacharyya.

To meet the requirements outlined by the EU AI Act, organisations operating in Europe must start devising a strategy to enhance documentation and oversight of AI technology.

“Goalposts will likely change when looking across borders and with the passage of time,” Bhattacharyya says. “And, while the industry has long talked about the lack of AI experts in data science, there will now be a need for individuals who can help organisations adapt business processes to meet evolving ethical AI requirements.”