AI in the workplace: Balancing productivity and psychosocial risks 

The implementation of AI or generative AI tools within workplaces can impact both the perception and existence of psychosocial risks at work, according to RMIT.

For example, AI can automate real-time data collection on worker performance, potentially reduce bias in evaluation, and automate different kinds of feedback. However, AI tools can also remove the social interaction component, which is essential for aligning workers' and employers' perceptions of productivity and working conditions, said Dr Emmanuelle Walkowiak, Vice Chancellor’s Senior Research Fellow at RMIT in the School of Economics, Finance and Marketing.

“It is likely there will also be psychosocial, health and wellbeing risks to workers with the integration of AI in the workplace,” said Walkowiak.

“It is important that businesses scrutinise psychosocial risks when deploying AI tools to ensure the deployment of technologies that serve both the wellbeing and productivity of workers.”

For example, Walkowiak, together with researchers in Europe, has examined the role of technological and organisational change in reducing physical strain, decreasing the human knowledge content of work, intensifying the rhythm of work, and in improving physical working conditions. 

“Workers feel that they complete less complex tasks and learn less at work, that their work intensifies and accelerates, and they benefit from lower physical risks,” said Walkowiak, who explained that technologies produce both positive and negative effects in terms of working conditions.

In another study, Walkowiak and other researchers used 58 variables to measure psychosocial risks, which are multidimensional and directly relate to wellbeing at work.

These risks encompass how demanding your job is psychologically and emotionally, the level of control and autonomy you have, conflicts and relational tensions at work, experiences of unfair treatment or dissonance with your core values, and job or income insecurity (economic insecurity), she said.

“We found that the interactions between managers and employees via performance appraisals are a way to reduce the perception of psychosocial risks related to poor workplace relationships, perception of unfairness at work, and job and economic insecurity,” said Walkowiak.

“This relational and human dimension at work is central to the wellbeing of workers. Automating these interactions through AI can have a negative mental health impact.”

More broadly, Walkowiak observed that generative AI can drive significant productivity gains in the workforce. However, sustaining those gains depends on protecting workers’ fundamental rights, such as fair wages, working conditions and job security, as AI becomes more widely deployed.

“Our research at RMIT shows that with GenAI, productivity and risks are inseparable. These risks include privacy breaches, cybersecurity, breaches of professional standards, bias, misinformation, accountability, and intellectual property risks. Our findings are clear: workers are highly exposed to AI risks.

“Importantly, AI will never be a ‘workforce’ and we will not have ‘AI workers’. The idea of reporting ‘digital workers’ as we report human employees is nonsensical. If you naively consider AI as an autonomous agent at work, you should audit your AI risks.”

Such language reinforces the well-known narrative of job displacement by technology and embeds that narrative in workplace organisational processes, according to Walkowiak, who said it “dangerously dehumanises” work and obscures the potential of digital transformation to complement workers.

“It is true that we need to design new AI-resource management practices to mitigate AI risks and ensure a safe and ethical deployment of AI,” she said.

“The role of HR management should be to upskill workers to use AI, improve job quality with AI, ensure that the distributional effects of AI are shared, and support collective bargaining for the deployment of AI.”

Walkowiak recommended that decision-makers upskill workers on AI risks and systematically integrate AI risk management into business operations and strategy.

“There is currently a lack of frameworks and guidelines specifically assessing the use of automated AI systems and how they can generate psychosocial hazards at work,” she said.

“These frameworks must be developed. On the other hand, there are also opportunities to use AI, with appropriate human oversight, to better identify and mitigate workplace stressors and to assist workers in improving their wellbeing and how their work is organised.”