374 Comments
el gato malo:

It will be pretty much impossible to hide bad outcomes from AI if it has good patient inputs, so any attempt to do so will result in either a noticeably inferior product or rapid adaptation away from error.

Kurt:

Nice fairytale, EGM.

The history of medicine is filled with evil, deceit, turf protecting authoritarianism, and politics, all under the guise of ’science’. You know that as well as I do. Medical ethics always follows the money. Expect a fight.

Leskunque Lepew:

Big fight

Craig:

Kurt nailed it.

If (and only if) we could trust medicine to simply work as it should, an AI assistant could likely help.

Good scientists are few and far between, and they're usually not much interested in people.

Modern medicine seems far more like an unofficial arm of the government placed in charge of managing the population, tbh. There's a lot of "people farmers" out there.

Kurt:

AI is but a tool, employed by a corrupt regime. They will use it to further their own goals, and slashing their pay ain’t one of them.

Kevin Maher:

Exactly. Well put. Expect a huge rise in the number of people requiring mental health services as a direct result of interaction with AI in general. I wonder: will the AI being used by the people I see having breakdowns, as AI fractures their perception of reality, identify itself as the root cause of its users' malaise and recommend that they cease use? Post haste?

Metta Zetty:

Between the COVID pandemonium (lockdowns, mandates, etc.) and the rise of the LLMs, mental health issues are already off the charts:

> https://workflowy.com/s/beyond-covid-19/SoQPdY75WJteLUYx#/bba444b65713

Worse still, mental health professionals don't know what to do now about the AI-induced psychosis ~ because they've never seen anything like it before:

> https://bra.in/2jLN3N

George Bredestege:

It’ll be a fight, but a futile fight. The seller of the AI stands to make money just like the Drs, but the better product will rise to the top.

Kurt:

Yes, I hope patient outcomes will improve dramatically, which is entirely possible. It won’t be easy or a straight shot.

kertch:

It's possible someone will get wise and the opposite will happen. The medical community, primarily hospitals and big pharma, will try to determine which AI model gets used. They will use their influence to require that only an AMA-certified AI is acceptable. "Don't use one of those other non-certified AIs. You have no idea how it was trained. The medical community stands behind PharmaGPT." The result would be better diagnoses, but treatment would still be steered along preferred routes.

Freedom Fox:

The USMLE, the exam physicians must pass to practice allopathic medicine, has undergone many substantive changes that aren't reflected in the AI chat EGM had. It's been severely dumbed down. The focus is now all on specialization. No real primary care physicians are being trained to practice; it's more of a "team" structure. No "Art" of healing is taught, just "follow The Science (TM)" "Evidence-Based Medicine." That's why AI "doctors" are beginning to look good in comparison. Medical school graduates are now being trained to be arrogant idiots. They used to learn the Art of healing that made them better.

Step 1 was made Pass/Fail, along with other dumbing-down. What's happened with Step 2 and Step 3 is worse. Step 2 Clinical Skills has been eliminated, and Step 3 is similarly slated. Step 3 tests the ability to practice medicine independently, as a sole practitioner, not in a hospital system where specialists work as a team and stay in their silo. Step 3 also tests understanding of disease concepts and the application of scientific concepts to making diagnoses. Per the ACP article below, this is now considered "outdated," "impractical," and "burdensome":

Revisiting the Utility of U.S. Medical Licensing Examination Step 3

American College of Physicians

Annals of Internal Medicine, July 2023

https://www.acpjournals.org/doi/10.7326/M23-0695

"As medical practice has become more specialized, the U.S. Medical Licensing Examination (USMLE) Step 3 has become outdated. In recent years, there have been substantial changes to the USMLE Step examinations, with Step 1 transitioning to pass/fail scoring and the elimination of Step 2 Clinical Skills. These changes bring us to a unique time of reflection on the purpose and structure of Step 3. Considering its history and aims within the current landscape of medical practice and licensure, we believe that the present structure of Step 3 is impractical and burdensome."

FF - These changes in the USMLE and in medical school training today reflect a specialization in the practice of medicine that no longer accounts for an overall, holistic approach to the patient. Specialists only diagnose and treat within their specialty: a "team" approach with interchangeable doctors playing their assigned position on the team.

Given these types of changes in the qualifications to practice medicine, it's my belief that the current allopathic medical model is devolving into training MDs to simply enter patient symptom data into expensive WebMD-like AI computer systems that print out prescriptions to be filled at the pharmacy or schedule invasive surgeries. AI "doctors" with a human cast playing the Doogie Howser/Dr. Quinn Medicine Woman role, reading their lines with dramatic flair.

Training monkeys. Monkeys that follow The Science (TM) directions, don't think for themselves to practice the *Art* of medicine. One-size-fits-all. An obedient medical system of pill-pushing and flesh-slicing monkeys wearing white coats and stethoscopes. Evidence is what the computer programmers say it is.

The Real Mary Rose:

Well, Steve Kirsch and others constantly publish the bad outcomes from the COVID vaccinations, and they are just ignored. So I don't have a lot of faith in the same gatekeepers caring what AI discovers.

Dr Linda:

Great point

TRM:

I don't know about that. It all comes down to who will control the AI. If it's the medical boards or the colleges of physicians & surgeons, then it will be pharma-controlled.

But ... There will be independent ones that may be off shore in a sane jurisdiction. Good results get noticed. As always don't trust anyone or anything. VERIFY!!!

Robird:

No such thing as "independent ones." There are huge sums of money involved in healthcare, and all parties will be jockeying for their share of the money. The concept that AI models are objective and non-corrupt is a fantasy. As in most situations, diagnosis is 80+% routine. There are some challenging presentations, but they're rare. Real-life medicine is not an episode of "House." Diagnostic challenges are mainly solved by adding tests and procedures, which will be little different in the age of AI. Diagnostic criteria for various diseases are well established. I have seen no information that the use of AI enables diagnosis of novel conditions, nor is there evidence that AI can develop more effective treatment pathways.

Emily Terrell:

Yet. No evidence yet. Someone will innovate this space.

I fear the revolutionary impact of AI on humanity because I don’t yet conceive of how the majority will be put to purposeful work.

But I personally have needed a Dr. House my whole life. I can’t count how long I’ve wished I had a competent, private and independent AI to dump my thousands of pages of medical history into to figure out what’s wrong with me and how to fix it. I currently have a working theory (RCCX gene split), about 16 specialists across the hospital (down from 30) and am polypharmacy enough for 10 people. If AI can figure this out and remove me from the list of working disabled, I’d do it in a heartbeat.

The important part for me is GIGO. No current medical knowledge can be trusted.

Metta Zetty:

As Robird pointed out, the problem is that "The concept that AI models are objective and non corrupt is a fantasy". It is impossible for AI to be objective because "Algorithms are simply opinions embedded in code." ~ Cathy O'Neil

Moreover, we don't understand how AI works, AI can't reason, hallucination and deception now appear to be inevitable, and it looks like there is no way now to prevent AI from going rogue:

> Don't Understand: https://bra.in/6jYzyA

> Reasoning: https://bra.in/5jLyAY

> Hallucination & Deception: https://bra.in/3jbJ8k

> Going Rogue: https://bra.in/7j9zrx

In addition, IMHO, the "innovation" in AI is not promising at all, especially when

> AI tech bros are made Lt. Colonels in the military: https://bra.in/6vPyQ9

> AI's "great potential" includes developing 40,000 chemical weapons in just 6 hours: https://bra.in/9jX252

> Even Sam Altman thinks AI will likely lead to the "end of the world": https://bra.in/6qe92y

St. Alia the Knife:

We understand and sympathize, Emily. Please don't give up!

Metta Zetty:

100%, Robird ~ on all counts. 🎯

cat:

🤔 Why are millions of people still taking statins even though heart disease hasn't gone down? You assume there is critical thinking involved in those noticing inferior products. I see no change in critical thinking of patients regardless of whether it's AI or a human doctor that pushes the statin (or indeed, many other drugs) onto them.

peter tomkinson:

When the editor of a prestigious medical journal states clearly that 60-70+% of ALL peer-reviewed medical journal reports are false, what hope is there that AI can do better than any other search engine? There is more than enough good data available in the current dysfunctional system for people to greatly improve outcomes. There is simply not the will to do so among those empowered to control.

weedom1:

I’m seeing the AI summarize material that I’ve already found and discounted as incorrect or bad procedure in my niche area. Guess I should be training it.

peter tomkinson:

Exactly as you should do: challenge it. AI summaries are NOT absolute. Garbage In = Garbage Out. If all an AI can do is word-search what it can access at tremendous speed, then its output will only be as good as the material it accessed, filtered through its programming.

AI is just another search engine on acid - turbocharged in other words.

toolate:

You are a dreamer
