With the staggering pace of innovation in enterprise technology, it’s all too easy to overlook the role of employees whose job is to process, interpret and action the data emanating from these systems.

Dr Anne Hsu is a behavioural psychologist and computer science lecturer at Queen Mary University of London. In this article, she shares her perspectives on how people are interacting with, and being impacted by, the volume and velocity of data in the modern workplace.

There is a common misconception surrounding the use of AI that remains pervasive in many organisations: that AI exists purely in the domain of data scientists and technical teams, with the endgame of ‘dehumanising’ the workplace and making the future of work calculated and mechanical. This pessimistic vision needs to be put to rest and replaced with one more aligned to reality: that the future of AI will embrace, not displace, human skills and emotions in the quest for better business outcomes.

When decision makers within a business are able to envision and craft the role AI can have in their organisation, it opens up many possible opportunities for growth and innovation that would previously have been unimaginable.

Through combining the positive business impact of AI with a deep understanding of human behaviours, executives can enhance the way their organisation operates and help their employees to be happier in their roles and more productive in their work. And for the employees themselves, the effective analysis of data via AI promises to augment their innate human skills, freeing them to be more strategic and innovative in their daily work.

The (evolving) role of data in the workplace

With continuous advancements in AI’s various constituent fields, which include machine learning, deep learning, computer vision and digital assistants amongst others, our understanding of how to apply AI is also improving. Not only does this change the way we use the data at our disposal, but it also alters our perception of what it means to be a data-centred business.

As more and more processes are digitalised, a trend accelerated by the mass shift towards remote working, executives are becoming increasingly aware that the business they are running is in fact a data business, regardless of the industry or niche they operate in.

Becoming an effective data business means more than simply embedding data in business decision-making; it also means using data to shape and direct internal processes and to guide interactions with those in the wider network but outside the organisation, such as customers and suppliers.

This makes data literacy – the ability to apply models and analysis to extract patterns from data and interpret those patterns effectively – a fundamental requirement for any workplace that wants to ensure operations are running as planned and in line with data best practices. Yet many organisations are still exposed to gaps in basic data competencies that are easier to ignore than to address.

This is a commonly missed opportunity, as this data now has the capacity to inform better business decisions and to shape human behaviour, i.e. how employees work. The design of the data – how it’s collected, stored and classified, which in turn allows models and analysis to be applied to it to draw insights – is a necessary part of making it actionable for employees.

For many organisations, being at a stage where the design and interpretation of data is embedded in the fabric of how they work is still regarded as a desirable future state rather than today’s priority. The issue is exacerbated by the data literacy gap: without consensus on best-practice data decision-making, employees at all levels are exposed to the pitfalls of ad hoc processes.

For business leaders, this is a significant concern: well-reasoned assertions grounded in data influence how decisions are made, and employees will react differently depending on how that data is presented to them.

It’s for this reason that leaders are becoming more attuned to the risk of (unintentional) bias, both in the data itself and in those working with it. Take, for example, the use of AI in hiring processes – if modelling for new hires is based on data from past hires, then the design of the selection framework should account for that. If previous hires have typically reflected a limited social grouping in terms of factors such as age, race or educational attainment, then without checks and overrides in place the model will overvalue these factors in new candidates and sway judgements on their suitability for the role.

The advantage of using AI and data to make such decisions is that the bias can also be systematically removed. Because biases in the data can be measured, data modellers who are aware of them have the opportunity to remove them far more reliably than diversity training ever could. The bias must, however, be accurately recognised first.
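
To make this tangible, here is a minimal Python sketch of one way such bias might be measured, using the widely cited ‘four-fifths’ comparison of selection rates between groups in historical hiring data. The group labels, sample records and threshold are hypothetical illustrations, a sketch of the measurement step rather than a complete fairness audit.

```python
# A minimal sketch of measuring one form of hiring bias: comparing selection
# rates across groups in historical data. Group labels and records are
# hypothetical illustrations.
from collections import defaultdict

past_applicants = [
    {"university_group": "A", "hired": True},
    {"university_group": "A", "hired": True},
    {"university_group": "A", "hired": False},
    {"university_group": "B", "hired": True},
    {"university_group": "B", "hired": False},
    {"university_group": "B", "hired": False},
    {"university_group": "B", "hired": False},
]

hired = defaultdict(int)
total = defaultdict(int)
for applicant in past_applicants:
    group = applicant["university_group"]
    total[group] += 1
    hired[group] += applicant["hired"]  # True counts as 1

selection_rates = {group: hired[group] / total[group] for group in total}
ratio = min(selection_rates.values()) / max(selection_rates.values())

print("Selection rates by group:", selection_rates)
print(f"Disparate-impact ratio: {ratio:.2f}")

# A ratio well below ~0.8 suggests the historical data favours one group, so a
# model trained on it would need correction (for example, reweighting the
# training set or removing proxy features) before scoring new candidates.
if ratio < 0.8:
    print("Warning: historical hiring data shows a selection-rate imbalance.")
```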

The fair and ethical use of data will be a cornerstone of a future where intelligent technologies, such as smart assistants and automation tools for low-level tasks, play a more active role in workplace relationships. This in turn will help drive AI’s evolution into a working partner that enables better decision-making and directs humans towards their intended outcomes.

Embracing human qualities

The challenge in getting to this future is not just people understanding data and AI; it’s also data and AI understanding people. By operationalising AI internally, through the assigning of roles, responsibilities, information flows and prompts for action, organisations can better understand their employees’ psychological states and profiles in order to help reshape company processes to match real-life behaviours.

Consider image recognition and natural language processing algorithms, which consume massive data sets of visual or textual information and categorise it based on patterns and commonalities. In an HR context, these approaches can be applied to relevant employee data collected over time to help the business identify the stimuli that improve workplace motivation, performance and overall job satisfaction.

For example, the algorithms can be used to detect signs that an employee may be unhappy in their current role or thinking about leaving the company. Management can then use this information to trigger pre-emptive measures in order to resolve the situation.
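
As a hedged sketch of how such a signal might be produced, the Python example below fits a simple model to hypothetical historical records – engagement score, weekly overtime hours and months since last promotion – and flags high estimated attrition risk so that a manager can offer a supportive check-in. The feature names, values and threshold are illustrative assumptions, not a description of any particular HR system.

```python
# A minimal sketch of an attrition-risk flag built on hypothetical features;
# the output prompts a human conversation, not an automated decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [engagement_score, overtime_hours_per_week,
# months_since_last_promotion]; 1 means the employee later left the company.
X_history = np.array([
    [8.1,  2,  6],
    [7.4,  5, 10],
    [3.2, 12, 30],
    [4.0,  9, 24],
    [6.5,  4, 12],
    [2.8, 14, 36],
])
y_left = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression(max_iter=1000)
model.fit(X_history, y_left)

# Score a current employee against the same (hypothetical) features.
current_employee = np.array([[3.5, 11, 28]])
risk = model.predict_proba(current_employee)[0, 1]

if risk > 0.7:  # illustrative threshold
    print(f"Suggest a manager check-in (estimated attrition risk: {risk:.0%})")
```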

Another element of understanding the psychology of data is being aware of its limitations. In a world where data is growing exponentially, there are also the risks of data overload and poor data quality to consider. Organisations must be mindful that humans can only absorb so much before becoming overwhelmed. More data doesn’t necessarily lead to better insight; in many circumstances, the opposite is true.

Recent research from the US in the field of neuroscience has shown that the human brain has a capacity limit; when that limit is exceeded, a phenomenon known as ‘inattentional blindness’ can arise, in which we overlook available information even when it is useful to us.

There’s also the risk of bad data to consider – where employees are working with data sets that contain inaccurate, inappropriate or missing data points. In such cases, an employee can be procedurally correct in how they use the data yet still be drawn towards erroneous assumptions that result in poor or undesirable outcomes.

Think, for example, of an employee at a pharmaceutical company who draws spurious conclusions about the health needs of a particular social group or region based on incomplete or outdated data, resulting in wasted time for the employee and sunk costs for the business. Setting up data collection processes correctly – with clarity on what data is to be collected, why and how – can help employees play their part in maintaining its accuracy.
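
As an illustration of what such a safeguard could look like in practice, the short Python sketch below checks incoming records for missing fields and stale collection dates before they are allowed to feed an analysis. The field names, freshness threshold and sample records are hypothetical.

```python
# A minimal sketch of a data-quality gate; field names, the freshness
# threshold and the sample records are hypothetical illustrations.
from datetime import date

MAX_AGE_DAYS = 365  # assume records older than a year are treated as outdated
REQUIRED_FIELDS = ("region", "age_group", "condition", "collected_on")

records = [
    {"region": "North", "age_group": "40-49", "condition": "asthma",
     "collected_on": date(2024, 11, 3)},
    {"region": "South", "age_group": None, "condition": "asthma",
     "collected_on": date(2021, 2, 14)},
]

def quality_issues(record, today=None):
    """Return the reasons (if any) this record should not feed the analysis."""
    today = today or date.today()
    issues = [f"missing {field}" for field in REQUIRED_FIELDS
              if not record.get(field)]
    collected = record.get("collected_on")
    if collected and (today - collected).days > MAX_AGE_DAYS:
        issues.append("outdated record")
    return issues

for record in records:
    problems = quality_issues(record)
    print(record["region"], "->", "ok" if not problems else ", ".join(problems))
```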

Furthermore, in the quest for greater data maturity, it’s worth remembering the adage that not everything that can be counted counts, and not everything that counts can be counted. Leaders should be aware that if everything starts to be viewed through a data-centric lens, all aspects of their business can become abstracted into quantifiable measures and metrics, bringing the risk that emotional connections become devalued.

As data science aggregates individual data points, the nuances of individual emotions can quickly get lost, removing the unique elements of human interactions. For example, how can you put an accurate numerical value on qualities like loyalty, creativity, empathy and humour, all of which can contribute to a happy and productive workplace? Fundamentally, the goal of AI is not to reduce everything down to different sets of data; rather it should look to enhance the humanity of a workplace, and with it employee wellbeing.

Humanising AI

At its core, AI is a human technology, and it needs to be approached as such. The most desirable employers of the future will be those who are able to operate human-centric workplaces in an increasingly data-centric world, so it’s imperative for leadership to articulate how data will be used throughout the organisation to support employees to achieve more in a better, faster and smarter way.

Taking such an approach requires appreciation not only of the opportunities that exist in data, but also of the psychological and behavioural limitations inherent in those working with it. By having AI applications and the data they collect work in synchronisation with people, leaders can direct their AI initiatives to be implemented in a way that augments the uniqueness of their organisation. That uniqueness comes from its people.