Nearly half of customer service teams, over two-fifths of salespeople, and a third of marketers say they’ve fully implemented artificial intelligence (AI) to augment their work – yet 77% of business leaders cite nagging issues around trusted data and ethics that could grind their deployments to a halt, according to new Salesforce research.

The Trends in AI for CRM report analysed statistics from several studies and found that companies worry they could miss out on the opportunities generative AI (GenAI) presents if the data underpinning large language models (LLMs) isn’t grounded in their own trusted customer records.

At the same time, respondents expressed ongoing concerns about a lack of clear company policies governing the ethical use of the technology, as well as a complex LLM vendor landscape, with 80% of enterprises reporting that they currently use multiple models.

AI is the most significant technology in generations, with one forecast projecting a net gain of more than $2-trillion in new business revenues by 2028 from Salesforce and its network of partners alone. As enterprises across industries develop their AI strategies, leaders in customer-facing departments such as sales, service, and marketing are eager to use AI to drive internal efficiencies and revolutionise customer experiences.

A lack of trusted data could hamper AI ambitions.

While the report found AI adoption rates are expected to climb dramatically, only 10% of people today fully trust AI to help them make informed decisions – and 59% of organisations said they lack the unified data strategies that boost AI’s reliability and accuracy.

Employees’ AI enthusiasm is getting ahead of organisational policies.

Eighty percent of those who have used AI at work said it makes them more productive, a key driver for the rapid acceleration of AI adoption among the workforce. Yet only 21% of surveyed workers said their company has clearly articulated policies around approved tools and use cases. Employees aren’t waiting for these policies to be put in place, with many using unapproved (55%) or explicitly banned (40%) tools. What’s more, 69% said their employers haven’t provided training on the use of AI in their workplace.

Trust, data security, and transparency are at the heart of successful AI.

Seventy-four percent of the general population is concerned about the unethical use of AI, according to the report.

Companies that focus on end-user control are in the strongest position to earn customer trust as they develop their AI strategies, with 56% of respondents to the same survey signalling they would be open to AI under such circumstances.

“This is a pivotal moment for the world as business leaders across industries are looking to AI to unlock growth, efficiency, and customer loyalty,” says Linda Saunders, Salesforce director of Solutions Engineering Africa. “But success requires much more than an LLM. Enterprise deployments need trusted data, user access control, vector search, audit trails and citations, data masking, low-code builders, and seamless UI integration in order to succeed.”