Employees Need Voting Power, Mental Health-Related Absences Increase, and Part 3 of The Leaders Championing the AI Workplace Movement


News Spotlight

Giving employees a vote can help companies. Harvard Business School professor Rosabeth Moss Kanter explains how giving employees a say in decisions can help companies tackle problems and strengthen their business (Wall Street Journal).

Remote workers need breaks too. Employees who work remotely are more likely to skip their lunch break, contributing to burnout and mental health issues (Business Insider).

Job hopping slows down. The median pay for employees changing employers has been cut in half, a sign that workers are less likely to move right now (Fortune).


Stat of the Week

Mental health-related leaves of absence increased by 300% from 2017 to 2023, and by another 22% this year, according to a new study.

The increasing number of mental health-related leaves of absence can be attributed to several interconnected factors. First, growing awareness and reduced stigma around mental health issues are encouraging more people to seek help and take time off when they need it. Workplace stress has also intensified due to heavier workloads, job insecurity, and the blurring of work-life boundaries, especially with the rise of remote work. The COVID-19 pandemic exacerbated these issues, adding anxiety, isolation, and uncertainty to many people's lives. Societal pressures, economic instability, and the constant connectivity of the digital age further contribute to heightened stress and burnout. At the same time, improved mental health policies in workplaces and better coverage in health insurance plans have made it more feasible for employees to take mental health leave. Finally, younger generations entering the workforce tend to prioritize mental well-being and are more likely to address mental health concerns openly.


Deep Dive Article

The Leaders Championing the AI Workplace Movement - Part 3

For part three of The Leaders Championing the AI Workplace Movement series, we look at how HR is addressing ethical concerns and biases in recruitment, performance evaluation, and other areas. Addressing potential ethical concerns and biases in AI-driven HR tools is crucial for companies to ensure fairness, transparency, and legal compliance in their human resources processes. AI algorithms used in recruitment and performance evaluation can inadvertently perpetuate, or even amplify, biases present in historical data, potentially leading to discriminatory outcomes based on factors such as race, gender, age, or socioeconomic background. This not only undermines the principles of equal opportunity and diversity but can also expose companies to legal risk and reputational damage. Biased AI systems can overlook qualified candidates or misrepresent employee performance, ultimately hindering a company's ability to build a diverse, talented workforce and make informed decisions about career development and promotions.

By proactively addressing these ethical concerns, companies can build trust with employees and job candidates, enhance their employer brand, and improve the overall efficacy of their HR processes. Implementing measures such as regular audits of AI algorithms, diverse representation in AI development teams, and transparency in how AI-driven decisions are made can help mitigate the risks associated with biased systems. Additionally, maintaining human oversight and intervention in AI-driven processes ensures that ethical and context-specific factors are weighed, something AI alone might miss. By striking a balance between leveraging AI's efficiency and maintaining ethical standards, companies can harness the benefits of technology while upholding their responsibility to create fair and inclusive workplaces.

For this series, I spoke to Fortune 500 CHROs including Donna Morris (Chief People Officer, Walmart), Michael Fraccaro (Chief People Officer, Mastercard), Cornelius Boone (Chief People Officer, eBay), Kirsten Marriner (Chief People & Corporate Affairs Officer, The Clorox Company), Maria Zangardi (SVP, HR, and Corporate Officer, Universal Health Services), and Karen Dunning (Senior Vice President, Human Resources, Motorola Solutions).

How are you addressing potential ethical concerns and biases in AI-driven HR tools, particularly in areas like recruitment and performance evaluation?

Donna Morris (Walmart): The key is to ensure there’s always a person in the loop. While it’s true that technology can save time on processes, it lacks the context and critical thinking that’s so important in driving decision-making. We see opportunity to leverage AI to drive career mobility by providing personalized job and learning recommendations to associates’ profiles in our Me@Campus app over the next year. That will allow associates to chart the next steps in their careers based on their unique skills, motivations, interests, performance and potential. Associates will be able to showcase themselves and the skills they picked up in previous roles. The app will even recommend skills to add based on people’s prior experience.

Michael Fraccaro (Mastercard): Our conversation on AI always begins with ethical and responsible AI. We hold ourselves accountable to our seven Data and Tech Responsibilities, which include our commitments to safety, privacy, ethical use, and inclusion, among others. We also launched our AI Governance Program in 2019. It includes a practical guide for our employees to help them identify and address potential risks, such as the risk of bias, hallucinations and/or unfair outcomes. We are constantly enhancing our policies, tools, and processes to mitigate these risks and to provide for human oversight and accountability. This is an iterative journey. Ultimately, it comes down to putting people first in all we do, which is core to earning their trust.

Cornelius Boone (eBay): We are committed to the responsible use of AI. Last year, we hired Lauren Wilcox as our Head of Responsible AI, and since then we have adopted five principles that govern our use of AI: Reliability, Safety and Security; Privacy by Design; Transparency; Accountability and Lawfulness; and Inclusivity, Equity and Fairness. We have checks and balances to ensure fairness, and we maintain active human oversight in every phase, from development to monitoring the use of AI. And we’re taking steps to evaluate the HR vendors that offer AI, to support our Responsible AI principles and ensure consistency with our values.

Kirsten Marriner (The Clorox Company): Like any new and transformational technology, the use of AI in our business must be done by balancing the positive outcomes with the potential risks. We’ve created guidelines that have been broadly communicated across our teammates and will continue to refine our approach as we gain more experience. While teammates test various AI applications, all work retains a human touch.

Maria Zangardi (Universal Health Services): We are committed to being a highly ethical healthcare provider. It is one of our UHS principles, and it defines who we are and what we do. To accomplish our mission of serving patients with high quality care, we will always operate with integrity, doing the right thing for the right reason. If AI can be used in a responsible manner to support select HR operations, we will actively pursue it.

Karen Dunning (Motorola Solutions): Human judgment remains essential for responsible decision-making, especially when it comes to critical functions of an HR program. Although AI plays a role in HR process enhancement, we ensure that human oversight and intervention are ever-present, and we partner directly with our legal and IT teams to make sure our activities are aligned with the latest legislation and cybersecurity best practices. Our team has the final say in making critical HR decisions, and AI serves as a tool to streamline and enhance workflows in order to free up cognitive bandwidth for our team during these moments.

This is the third of five newsletters in The Leaders Championing the AI Workplace Movement series. Stay tuned for the next one, which will focus on how HR leaders foresee AI changing job roles and skill requirements.

Thanks for reading — be sure to join the conversation on LinkedIn and let me know your thoughts on this topic!


Quote of the Week

“Trust yourself. You probably know more than you think you do… Trust that you can learn anything.”
Melinda French Gates


Welcome to our newsletter!

Check out the previous issues of the Workplace Intelligence Insider newsletter and subscribe now to get new articles every Monday.
