Surfshark analysis reveals that workers have been involved in more than 700 AI-related incidents and hazards so far this year, with the US accounting for nearly one-third of these cases. A significant 72% of the reported cases caused “Economic/Property” harm to workers, contributing to at least 200 000 job losses across various sectors.
According to Tomas Stamulis, chief security officer at Surfshark, while machine learning can enhance processes by automating repetitive tasks and quickly analysing large data sets, the rapid integration of AI poses new risks to security, privacy, and even employment.
“The AI boom is just beginning to reach its peak with AI tools now widely adopted across society and integrated into businesses to enhance productivity,” Stamulis says. “However, this rapid adoption can bring significant risk management challenges. AI systems rely on large-scale data collection and complex algorithms, which can create privacy, security, and compliance vulnerabilities. Without robust processes, companies risk exposing confidential information, infringing intellectual property rights, and operating outside legal and ethical boundaries.”
Stamulis adds that effective AI risk management requires identifying these threats early, establishing clear accountability, and implementing strong policies to ensure responsible and lawful AI use.
These concerns are not just theoretical.
From January to September 2025, the AI Incidents and Hazards Monitor (AIM) recorded over 3 200 publicly reported AI incidents, averaging 12 daily. Among the various affected stakeholders, workers were identified in more than 700 AI incidents and hazards, with nearly 16 000 articles covering these cases globally. The majority of these cases, 72%, are labelled as causing “Economic/Property” harm.
The US accounts for 225 of these incidents, nearly one-third of the total, followed by China (49) and by Ukraine and the UK (38 each). Notably, the US also records the lowest Labour Rights Index score, indicating greater vulnerability among its workers.
Some articles explicitly report at least 200 000 job losses attributed to AI in the first three quarters of 2025, affecting roles such as customer service, sales administration, IT, design, UX, and copywriting. However, the true numbers may be even higher, as not all job losses are clearly linked to AI, and the data does not account for job postings that were never created.
Moreover, several high-profile incidents highlight AI’s economic and ethical consequences.
For example, a lawsuit concerning the use of pirated books to train AI generated 471 news articles. Over 1 000 musicians staged a silent protest against proposed UK copyright law changes that could facilitate AI training on copyrighted material, drawing 431 articles. Additionally, Amazon’s plans to reduce corporate jobs due to generative AI and automation were reported in 343 articles, further underscoring the broad implications of this technology.
“Beyond the direct impact of job losses, the integration of AI into business also raises significant security and privacy concerns,” says Stamulis. “AI tools frequently process extensive datasets that often contain sensitive personal information, raising concerns about potential data breaches, unauthorised access, and misuse. Moreover, poorly protected AI models are vulnerable to cyberattacks, manipulation, or prompt injections, which can lead to system failures.
“AI’s ability to process vast amounts of data is both its greatest advantage and its greatest danger if left unchecked, as it can harm privacy and security in an instant,” he adds.
Without robust safeguards, transparency, and comprehensive risk management, AI can compromise data integrity and personal privacy. This underscores the need not to replace workers, but to ensure that employees properly monitor AI.