Musk has been claiming that AI and robotics will ensure no one needs a job, that we will need to hand everyone money in the form of UBI, and that everyone will live in a penthouse. Those claims range from unlikely to impossible, for reasons I would hope we don't need to elaborate.

I'm going to assume that Musk isn't an idiot and is aware of this as well. The only reasonable explanation is that he is lying. Musk is well known for pump-and-dump schemes.

AI is nowhere near ready to take over anything. AI isn't intelligent at all; it's simply a search engine designed to interact with the user in a way that sounds human. The program is still limited by the personality, biases, and thought processes of the people who wrote it.

I've seen the claims and studies: AI is correct more often than human doctors. The reason this isn't the big gotcha everyone assumes is that AI beats doctors on standardized patients. A standardized patient is one where all of the symptoms needed to make the diagnosis are present. It's essentially a test question, because it is one. Standardized patients don't present like real patients: they don't lie, they don't have distracting extra symptoms, they are perfect patients. That's not the real world. In the real world, patients lie about symptoms they have, add symptoms they don't have, and sometimes don't fit the mold of what their condition should look like. All of this complicates diagnosis.

One of the things we have in the hospital is cardiac and vital-sign monitoring. It seems like a straightforward thing: if the patient's vital signs are outside the norm, or their heart rhythm isn't correct, sound an alarm. The computer gets that wrong more often than it gets it right. A patient will scratch an itch, and an alarm will sound claiming he is in a lethal heart rhythm. Nail polish of the wrong color, and the machine claims hypoxia. The heart rate goes high and the alarm sounds, even though the cause isn't a medical condition but something benign. That's why it's only an alarm that sounds to get someone's attention: these things still require the judgment of a human to decide whether or not intervention is required.
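The monitor's logic boils down to fixed cutoffs. A toy sketch (hypothetical thresholds, not any real device's values) shows why that kind of rule can't tell a benign artifact from an emergency; both trip the same alarm:

```python
def check_vitals(heart_rate, spo2):
    """Naive rule-based monitor: flag anything outside fixed limits.

    heart_rate in beats per minute, spo2 in percent oxygen saturation.
    Cutoffs are made up for illustration.
    """
    alarms = []
    if heart_rate < 40 or heart_rate > 140:
        alarms.append("heart rate out of range")
    if spo2 < 90:
        alarms.append("possible hypoxia")
    return alarms

# A genuine crisis and a benign artifact look identical to the rule:
real_event  = check_vitals(heart_rate=35, spo2=85)   # actual emergency
scratching  = check_vitals(heart_rate=180, spo2=99)  # electrode noise from an itch
nail_polish = check_vitals(heart_rate=75, spo2=82)   # dark polish skewing the SpO2 reading

print(real_event)   # ['heart rate out of range', 'possible hypoxia']
print(scratching)   # ['heart rate out of range']
print(nail_polish)  # ['possible hypoxia']
```

The rule has no way to know that the second and third readings are sensor artifacts; only the human at the bedside can make that call.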

The same is true in other fields. Remember the Teslas that were slamming into trucks without ever hitting the brakes? Remember the 737 MAX aircraft that crashed because the MCAS software pushed the nose down on the basis of a single faulty sensor? How about the Airbus aircraft that were randomly dropping in altitude due to computer errors just three months ago?

Until they can fix that, AI isn’t ready to take over the world. It’s just not.

Anyone who thinks AI is ready to do more doesn’t understand the way this stuff works. Musk is smarter than that.

However, Musk is heavily invested in his new AI startup, xAI. (He does have an odd fetish for the letter X.) His statements accomplish nothing but encouraging people to invest in AI. I wonder if that isn't the reason for them.


5 Comments

Honk Honk · April 20, 2026 at 10:19 am

Artificial Imbecile is good for guarding controversial quotes but it won’t build the New Man workers utopia.
Comrade Elon wants the government to pay the UBI in the bug chitin pod.
O/T-Bolshevik Mockingbird enemedia is on about 8 children KIA in Louisiana mass shooting.

JimmyPx · April 20, 2026 at 10:35 am

AI is a LONG way from being sentient; rather, it currently is a tool that can be useful if used correctly.
It can be fabulous at assisting in a diagnosis and choosing the most effective treatment plan for a patient with condition X.
As we speak, they are stripping the metadata from hundreds of thousands of patient records and loading them into huge databases that AI can search.
So in the near future the doctor can load in the symptoms and test results, and using all of that patient data the AI can come back with "92% chance the patient has X, 8% chance the patient has Y."
It can then say "patient with condition X, African American, smoker in his 50s, obese, most effective treatment plan is Y."

    Divemedic · April 20, 2026 at 11:23 am

    It is assistive only. It isn't going to eliminate human judgment.

Steady Steve · April 20, 2026 at 12:08 pm

About that human monitoring: my wife was recently in the hospital, on the PCU floor to be exact. Two things occurred that were, shall we say, less than stellar. She had a chest tube in to drain fluid from around her lung. The hose to the collection box was supposed to be unclamped after visiting hours. The next morning there was no fluid collected, because NO ONE HAD OPENED THE SMALL VALVE ON THE CHEST TUBE. I'm not in health care, just an engineer, but I know a valve when I see one. Second incident: one of the sticky pads for her heart monitor detached, so no info was being sent to the main station at the central desk on the floor. Her monitor said "loss of telemetry." It was 1.5 hours before anyone noticed and came to check. Good thing I was there and could keep watch on her. So it seems that, with a patient-to-nurse ratio of 5:1, AI will be needed for patient monitoring, as the human element is deficient.

    Divemedic · April 20, 2026 at 12:30 pm

    In that case, you should push the call button and talk to the nurse. The AI isn’t going to fix that. How is an AI going to know a valve wasn’t opened?
