What do IBM, Salesforce, and California's AI practices have in common?
Explainability. Explainability. Explainability.
- California moved generative AI into day‑to‑day public services
- Salesforce turned an AI coach into an internal mobility engine
- IBM re‑balanced work so people do the thinking while software handles the routine
Three headlines, three angles, plenty to unpack.
California’s GenAI Pilot Shows What “Proof First” Really Looks Like

On April 29, the Governor’s office approved three separate projects that use large language models for real public needs. Microsoft and Accenture will predict traffic jams by analysing live highway feeds. Deloitte, working with Google Gemini, will flag high‑risk crash zones before collisions spike. Anthropic’s Claude is already answering questions pulled from a sixteen‑thousand‑page tax manual so agents can focus on edge‑case calls. The contract language is almost more interesting than the tech: quarterly bias tests, full prompt logs, and named human reviewers are required before any result goes public. In other words, California will not buy black‑box claims.
For teams building hiring systems, this is a gentle but clear nudge. If a state that large demands line‑by‑line evidence for its traffic model, corporate buyers will expect the same detail when a platform scores a candidate. Do your interviews already capture every question, score, and reviewer note? If yes, you are ready for this procurement tide. If not, the fix is less about adding more AI and more about storing the story your data is already telling.
Salesforce Turns an AI Coach into a Quiet Hiring Revolution

Career Connect, Salesforce's internal AI advisor, scanned employee profiles, suggested stretch projects, and nudged managers to look inside before posting a job. Of the pilot participants, 74% engaged with the tool, 40% enrolled in recommended courses, and 28% applied for new roles. By the end of the first quarter, half of all open positions were filled by current employees, often across department lines. An HR program manager moved into cybersecurity. A sales rep stepped into product operations.
Why does that stick? The coach works because it pairs verifiable skill data with concrete next steps. Managers are not being told, “Trust the algorithm.” They see the evidence that an employee can handle a new challenge, and that proof lets them move faster without extra budget battles.
Question for hiring teams: Are our interviews collecting the same level of actionable evidence, or do we still screen for “years of experience” and hope? Structured questions, sentence‑level analysis, and sharable scorecards let downstream stakeholders move with similar confidence, whether the candidate is a stranger or a well‑known colleague.
IBM Hands the Tedious Bits to AI But Keeps People on the Puzzles

Speaking with the Wall Street Journal, IBM CEO Arvind Krishna explained that several hundred back‑office HR tasks now run on autonomous agents. Think email triage, calendar juggling, and first‑pass screening. Those savings did not vanish into a black hole. The company redirected budget to new programming roles, quantum projects, and consultative sales. At the same time, IBM secured six billion dollars in generative AI consulting deals. The pattern is clear: automate the repetitive, reinvest in creativity and complex problem solving.
There is a quiet catch. That relay from software to human only works if every AI decision point is recorded and traceable. Otherwise the first incorrect auto‑reply, or the first mis‑scored CV, turns into a reputational dent. Teams that pair automation with a clean evidence trail can pivot dollars toward high‑impact roles without tripping compliance wires. Those that do not are betting that no one will ask the obvious audit question. History suggests someone always does.