News Spotlight

Entry-level jobs are being automated. AI is competing with college graduates for jobs as labor conditions have declined since the start of the year, even for M.B.A. grads (The Atlantic).

AI helps job seekers change careers. New tools give users insight into how skills in one field can transfer to a different one (The Wall Street Journal).

Pay-for-performance incentivizes productivity. Google, Amazon, and other companies are rewarding top performers with larger payouts and paying lower performers less (Inc.).

Stat of the Week

A new study finds that 64% of employers use skills-based hiring, 38% screen candidates based on GPA, and only 11% use AI. The strong adoption of skills-based hiring shows that employers value demonstrable abilities over traditional credentials; HR should refine job descriptions, implement effective skills assessments, and train hiring managers to evaluate competencies. The continued reliance on GPA suggests an opportunity for HR to educate hiring teams on the limitations of this metric in predicting job performance and to advocate for more skills-focused evaluations that broaden the talent pool. And the surprisingly low adoption of AI in hiring points to a significant area for HR to explore, since AI tools can potentially streamline sourcing, screening, and matching candidates based on skills, improving efficiency and reducing bias in the recruitment process.

Deep Dive Article

How to Combat Deepfake Candidates and Hiring Scams in the Age of AI

A New Hiring Threat

In a labor market already shaped by economic uncertainty, skill shortages, and remote work, employers now face a new and deeply unsettling challenge: deepfake job applicants. These aren’t just embellished résumés—they are fraudulent candidates using AI to fabricate entire professional identities, fool interviewers, and infiltrate organizations.
With remote roles increasing, these AI-assisted scams are on the rise, threatening not only hiring integrity but also corporate cybersecurity. Generative AI tools now allow bad actors to generate synthetic voices, forge professional headshots, fake employment histories, and even create deepfake video avatars. Some scammers have gone as far as using ChatGPT in real-time interviews to generate responses to recruiters’ questions. Others substitute a qualified proxy for the actual candidate in live interviews. What begins as a fake hire can end in data breaches, ransomware attacks, or the theft of proprietary company information.

And these aren’t isolated incidents. According to Gartner, by 2028 one in four job candidates could be fake, thanks to the rapid accessibility and sophistication of generative AI. The implications of this trend are far-reaching, affecting not only HR and recruiting teams but the entire digital trust infrastructure of organizations worldwide.

A Cybersecurity Crisis in Disguise

Hiring scams today go far beyond lying on a résumé. Fraudulent candidates are now part of coordinated efforts, sometimes even linked to foreign governments, to infiltrate businesses for financial or strategic gain. For example, the U.S. Department of Justice has uncovered multiple cases of North Korean operatives using fake identities to secure remote jobs with U.S. companies. The goal? Channeling funds back to the North Korean government and its nuclear weapons program.

These candidates present sophisticated digital profiles: fake LinkedIn pages, AI-generated headshots, cloned voices, and polished personal websites. Once onboarded, these impostors may install malware, demand ransom, or siphon sensitive data. Most concerning, they can slip past even experienced hiring teams, particularly when screening for remote roles where in-person meetings aren’t feasible.
Cybersecurity firms and tech startups are seeing a surge in fake applicants because of the valuable data and infrastructure they possess. One case involved a security firm that discovered its new remote hire was a deepfake, mimicking a real job seeker with eerily realistic credentials and behavior. The warning signs are often subtle—laggy video, inconsistent backstories, or an overdependence on scripted answers. But in a fast-moving hiring process, these are easy to overlook.

How AI Tools Enable Deception at Scale

What makes this threat especially dangerous is that AI allows job scammers to scale their deception. Generative AI can now be used at nearly every step of the hiring process, from writing the résumé to answering interview questions in real time.
The result is a pipeline of applications that look too good to be true—because they are. Legitimate candidates may get edged out by fakes using AI to meet or exceed every hiring benchmark. This “arms race” of digital deception puts hiring managers at a disadvantage and creates an uneven playing field for honest applicants.

The Real-World Impact on Businesses and Teams

Bringing on a fraudulent employee doesn’t just waste time and salary dollars. The consequences can be devastating. Once inside, bad actors can access sensitive client information, intellectual property, or internal systems. In some cases, they’ve installed backdoors into company infrastructure or initiated ransomware attacks shortly after being hired.

There’s also a long-term cost in terms of trust. Businesses risk reputational damage, client loss, and legal exposure. Moreover, the presence of fake employees can erode morale among real team members—especially if coworkers sense something is “off” but feel powerless to escalate concerns.

HR leaders now find themselves on the front lines of cybersecurity. It’s no longer just about screening for cultural fit or technical skill. It’s about ensuring the person on the other end of the Zoom call is who they claim to be.

Best Practices for Verifying Candidate Authenticity

So, how can businesses protect themselves in this new environment? The good news is that several effective strategies can help detect fraud before it’s too late:

1. Scrutinize LinkedIn Profiles
2. Ask Cultural or Location-Based Questions
3. Insist on Live or In-Person Interviews
4. Check ID Thoroughly
5. Implement Two-Factor Verification for Onboarding

The Numbers Game: How Fake Applicants Hurt Real Ones

Beyond security risks, there’s a broader labor market issue. The rise in fake applicants is flooding job pipelines, making it harder for legitimate candidates to break through.
Automated systems struggle to distinguish genuine resumes from AI-generated ones, and overwhelmed hiring teams may unintentionally dismiss real talent. This flood also creates unnecessary delays, burdens screening teams, and increases the risk of burnout among recruiters. Worse, the presence of fakes can distort hiring metrics, making it appear as if quality candidates are applying but failing at later stages. That illusion can mislead decision-makers about the effectiveness of their sourcing strategies.

Conclusion: Building Trust in a Deepfake World

As AI evolves, so do the threats facing today’s workforce. Fake job seekers powered by generative AI are a growing challenge—blurring the line between convenience and deception, efficiency and exploitation. But by staying vigilant, adopting new verification practices, and understanding the tools scammers use, businesses can fight back.

Hiring in the AI era requires more than good instincts—it requires a cybersecurity mindset. With the right balance of human insight and technological checks, companies can protect their teams, customers, and reputations from the growing tide of deepfake deception.

Thanks for reading — be sure to join the conversation on LinkedIn and let me know your thoughts on this topic!

Quote of the Week

“Always remember that you are absolutely unique. Just like everyone else.”