Kaiser workers launch war against AI, protesting potential job losses and patient harm


Workers at one of the most powerful unions in California are forming an early front in the battle against artificial intelligence, warning it could take jobs and harm people's health.

As part of their ongoing negotiations with their employer, Kaiser Permanente workers have been pushing back against the giant healthcare provider's use of AI. They are building demands around the issue and others, using picket lines and hunger strikes to help press Kaiser to use the powerful technology responsibly.

Kaiser says AI could save employees from tedious, time-consuming tasks such as taking notes and paperwork. Workers say that could be the first step down a slippery slope that leads to layoffs and harm to patient health.

“They’re kind of painting a picture that would reduce their need for human workers and human clinicians,” said Ilana Marcucci-Morris, a licensed clinical social worker and part of the bargaining team for the National Union of Healthcare Workers, which is fighting for more protections against AI.

The 42-year-old Oakland-based therapist says she knows technology can be useful but warns that the consequences for patients have been “grave” when AI makes mistakes.

Kaiser says AI can help physicians and employees focus on serving members and patients.

“AI does not replace human assessment and care,” Kaiser spokesperson Candice Lee said in an email. “Artificial intelligence holds significant potential to benefit healthcare by supporting better diagnostics, enhancing patient-clinician relationships, optimizing clinicians’ time, and ensuring fairness in care experiences and health outcomes by addressing individual needs.”

AI fears are shaking up industries across the country.

Medical administrative assistants are among the most exposed to AI, according to a recent study by Brookings and the Centre for the Governance of AI. The assistants do the kind of work that AI is getting better at. Meanwhile, they are less likely to have the skills or support needed to transition to new jobs, the study said.

There are millions of other jobs that are among the most vulnerable to AI, such as office clerks, insurance sales agents and translators, according to the research released last month.

In California, labor unions this week urged Gov. Gavin Newsom and lawmakers to pass more legislation to protect workers from AI. The California Federation of Labor Unions has sponsored a package of bills to address AI’s risks, including job loss and surveillance.

The technology “threatens to eviscerate workers’ rights and cause widespread job loss,” the group said in a joint letter with AFL-CIO leaders in other states.

Kaiser Permanente is California’s largest private employer, with close to 19,000 physicians and more than 180,000 employees statewide. It has a large presence in Washington, Colorado, Georgia, Hawaii and other states.

The National Union of Healthcare Workers, which represents Kaiser employees, has been among the earliest to recognize and respond to the encroachment of AI into the workplace. As it has negotiated for better pay and working conditions, the use of AI has also become an important new component of discussion between workers and management.

Kaiser already uses AI software to transcribe conversations and take notes between healthcare workers and patients, but therapists have privacy concerns about recording highly sensitive remarks. The company also uses AI to predict when hospitalized patients might become more ill. It offers mental health apps for enrollees, including at least one with an AI chatbot.

Last year, Kaiser mental health workers held a hunger strike in Los Angeles to demand that the healthcare provider improve its mental health services and patient care.

The union ratified a new contract covering 2,400 mental health and addiction medicine employees in Southern California last year, but negotiations continue for Marcucci-Morris and other Northern California mental health workers. They want Kaiser to pledge that AI will be used only to assist, not replace, workers.

Kaiser said it’s still bargaining with the union.

“We don’t know what the future holds, but our proposal would commit us to bargain if there are changes to working conditions due to any new AI technologies,” Lee said.

Healthcare workers say they are worried about what they are already seeing can happen when people struggling with mental health issues interact too much with AI chatbots.

AI chatbots such as OpenAI’s ChatGPT aren’t licensed or designed to be therapists and can’t replace professional mental health care. Still, some teenagers and adults have been turning to chatbots to share their personal struggles. People have long been using Google to deal with physical and mental health issues, but AI can seem more powerful because it delivers what looks like a diagnosis and a solution with confidence in a conversation.

Parents whose children died by suicide after talking to chatbots have sued California AI companies Character.AI and OpenAI, alleging the platforms provided content that harmed the mental health of young people and discussed suicide methods.

“They are not trained to respond as a human would respond,” said Dr. Dustin Weissman, president of the California Psychological Assn. “A lot of those nuances can fall through the cracks, and because of that, it could lead to catastrophic outcomes.”

Healthcare providers have also faced lawsuits over the use of AI tools to record conversations between doctors and patients. A November lawsuit, filed in San Diego County Superior Court, alleged Sharp HealthCare used AI note-taking software called Abridge to illegally record doctor-patient conversations without consent.

Sharp HealthCare said it protects patients’ privacy and does not use AI tools during therapy sessions.

Some Kaiser doctors and clinicians, including therapists, use Abridge to take notes during patient visits. Kaiser Permanente Ventures, its venture capital arm, has invested in Abridge.

The healthcare provider said, “Investment decisions are distinctly separate from other decisions made by Kaiser Permanente.”

Close to half of Kaiser behavioral health professionals in Northern California said they are uncomfortable with the introduction of AI tools, including Abridge, in their clinical practice, according to their union.

The provider said its workers review the AI-generated notes for accuracy and obtain patient consent, and that the recordings and transcripts are encrypted. Data are “stored and processed in approved, compliant environments for up to 14 days before becoming permanently deleted.”

Lawmakers and mental health professionals are exploring other ways to restrict the use of AI in mental healthcare.

The California Psychological Assn. is trying to push through legislation to protect patients from AI. It joined others to back a bill requiring clear, written consent before a client’s therapy session is recorded or transcribed.

The bill also prohibits individuals or companies, including those using AI, from offering therapy in California without a licensed professional.

Sen. Steve Padilla (D-Chula Vista), who introduced the bill, said there need to be more rules about the use of AI.

“This technology is powerful. It’s ubiquitous. It’s evolving quickly,” he said. “That means you have a limited window to make sure we get in there and put the right guardrails in place.”

Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, said people are using AI chatbots for advice on how to approach difficult conversations, not necessarily to replace therapy, but more research is still needed.

He’s working with the National Alliance on Mental Illness to develop benchmarks so people understand how different AI tools respond to mental health.

To be sure, some users are finding value and even what feels like companionship in conversations with chatbots about their mental health and other issues.

Indeed, some say the AI bots have given them easier access to mental health tips and helped them work through thoughts and feelings in a conversational style that might otherwise require an appointment with a therapist and hundreds of dollars.

Roughly 12% of adults are likely to use AI chatbots for mental health care in the next six months and 1% already do, according to a NAMI/Ipsos survey conducted in November 2025.

But for mental health workers like Marcucci-Morris, AI by itself is not enough.

“AI is not the savior,” she said.
