As AI sprints forward, people are asking tougher and tougher questions:
- Apple warns that large language models can look smart while just guessing.
- Walmart is testing an interview coach that trains candidates instead of judging them.
- Indeed shifts its leadership to keep “responsible AI” front and centre.
Apple Names the Problem: The Illusion of Thinking
Apple’s scientists tested large language models on reasoning tasks and found a pattern they call the illusion of thinking. The model presents a chain of thought that feels logical, but under the hood, it often strings together superficial cues rather than actual reasoning. Apple proposes a self-check layer to catch these shortcuts, yet the paper admits that no automated guardrail spots every slip.
Why it matters: If a model can convincingly fake logic, then a hiring decision that relies on its output can drift off course without anyone noticing. A human-led review loop closes the blind spot: people read the chain of thought, compare it with real evidence, and sign off only when the reasoning holds up. Evidence-based analysis plus human sense keep the illusion from turning into a hiring error.
Reflection point: Would your current interview flow reveal a model’s shallow guess, or would the answer slide through because it “sounds” right?
Walmart Tests an AI Interview Coach for Frontline Roles
Walmart has rolled out a virtual coach that lets applicants practise ten role-specific questions, get a one-to-ten score for each response, and see instant tips on clarity and confidence. Early feedback says candidates feel less anxious walking into a human interview because they have already rehearsed the format.
Why it matters: When AI is used to prepare candidates instead of judging them, the playing field widens. Applicants who never had mock-interview support now get structured feedback. Recruiters still make the final call, but the initial voice in the candidate’s ear is consistent and calm. That sets up a cleaner pipeline for the human reviewers who come next.
This is how a playing field gets levelled. The main aim of a job interview should be to identify people with genuine skills and behaviours, not the people who happen to have sat through the most interviews.
Reflection point: When every candidate can walk in perfectly rehearsed, the next stage must reach past the practice and surface real capability. Structured scoring paired with human judgment keeps the bar exactly where it belongs.
Indeed Re-Shuffles to Put Responsible AI Up Front
Chris Hyams is stepping out of the CEO chair and into an advisory role so he can dedicate his time to what he calls “responsible AI, disinformation, and human-rights work.” Returning to the top seat is Hisayuki “Deko” Idekoba, who plans to accelerate a generative platform that acts like a personal talent scout for employers and job seekers.
Why it matters: A talent giant rewiring itself around explainable AI tells the rest of the market that evidence will rule the next chapter. Recruiters may get faster shortlists, but leadership still wants a clear account of why the shortlist looks the way it does. Hiring credibility lives or dies on that paper trail.
Reflection point: Indeed shows how leadership can put “responsible AI” into focus. Does your organisation give bias checks and explainability the same weight in its hiring funnel?
If a client asked you to show the justification of every step between a job post and a hire, could you do it in minutes? Structured questions and transparent scoring make that request routine instead of stressful.
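To make the idea concrete, here is a minimal sketch of what a transparent scoring record could look like. Everything in it is a hypothetical illustration, not a description of any tool named above: the criteria, field names, and `audit_trail` helper are assumptions. The point is simply that each answer carries an explicit criterion, a score, and a human-readable justification, so the trail from question to decision can be reproduced on demand.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScoredAnswer:
    """One interview answer scored against an explicit, named criterion."""
    question: str
    criterion: str     # e.g. "clarity" or "role-specific knowledge"
    score: int         # 1-10, mirroring the one-to-ten scale mentioned above
    justification: str # the human-readable reason a reviewer signed off
    reviewer: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def audit_trail(answers: list[ScoredAnswer]) -> list[str]:
    """Render the evidence behind each score as one reviewable line."""
    return [
        f"{a.question} | {a.criterion}={a.score}/10 | "
        f"{a.justification} ({a.reviewer})"
        for a in answers
    ]

answers = [
    ScoredAnswer("Describe a time you resolved a customer complaint.",
                 "clarity", 8,
                 "Concrete situation, action, and outcome.", "reviewer_a"),
    ScoredAnswer("How do you prioritise tasks during a rush?",
                 "role-specific knowledge", 6,
                 "Good method, no store example.", "reviewer_a"),
]
for line in audit_trail(answers):
    print(line)
```

With records like these, answering “show me the justification for every step” becomes a query over stored evidence rather than a scramble through inboxes.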