Musk has been claiming that AI and robotics will ensure no one needs a job, that we will have to hand everyone money in the form of UBI, and that everyone will nonetheless live in a penthouse. Those claims span a range from unlikely to impossible, for reasons I would hope we don’t need to elaborate.
I’m going to assume that Musk isn’t an idiot and is aware of this as well. The only reasonable explanation is that he is lying. Musk is well known for pump-and-dump schemes.
AI is nowhere near ready to take over anything. AI isn’t intelligent at all; it’s simply a search engine designed to interact with the user in a way that sounds human. The program is still limited by the personality, biases, and thought processes of the people who wrote it.
I’ve seen the claims and the studies: AI is correct more often than human doctors. The reason this isn’t the big gotcha everyone assumes it is: AI is correct more often than doctors on standardized patients. A standardized patient is one where all of the symptoms needed to make the diagnosis are present. It’s essentially a test question, because it is one. Standardized patients don’t present like real patients: they don’t lie, they don’t have too many distracting symptoms, they are perfect patients. Not the real world. In the real world, patients lie about symptoms they have, they report symptoms they don’t have, and sometimes they don’t fit the mold of what their condition should look like. All of this complicates diagnosis.
One of the things we have in the hospital is cardiac and vital-signs monitoring. It seems like a straightforward thing: if a patient’s vital signs are outside the norm, or his heart rhythm isn’t correct, sound an alarm. The computer gets it wrong more often than it gets it right. A patient will scratch an itch, and an alarm will sound saying he is in a lethal heart rhythm. Nail polish of the wrong color, and the machine claims hypoxia. The heart rate runs high, and the alarm sounds, even though the cause isn’t a medical condition but something benign. That’s why it’s only an alarm that sounds to get someone’s attention: these things still require the judgment of a human to decide whether or not intervention is required.
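To make the point concrete, here is a minimal sketch of how threshold-based alarming works. The limits and messages are made up for illustration; real monitors are far more sophisticated, but the core problem is the same: the machine sees numbers, not context.

```python
# Minimal sketch of threshold-based alarming, with made-up limits.
# The monitor compares each reading against fixed ranges with no
# context, so a motion artifact looks identical to a real emergency.

def check_vitals(heart_rate, spo2):
    """Return alarm messages for one set of readings."""
    alarms = []
    if heart_rate < 50 or heart_rate > 120:
        alarms.append("heart rate out of range")
    if spo2 < 90:
        alarms.append("possible hypoxia")
    return alarms

# A stable patient: no alarms.
print(check_vitals(heart_rate=72, spo2=98))    # []

# The same patient scratching an itch (motion artifact spikes the
# apparent rate) while wearing dark nail polish (drops the SpO2 reading):
print(check_vitals(heart_rate=180, spo2=82))
# ['heart rate out of range', 'possible hypoxia']
```

The second call is the itch-scratching patient: the numbers alone are indistinguishable from a crisis, which is exactly why a human still has to go look.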
The same is true in other fields. Remember the Teslas that slammed into trucks without ever hitting the brakes? Remember the 737 MAX aircraft that crashed because flawed MCAS flight-control software kept trusting a single bad sensor? How about the Airbus aircraft that were randomly losing altitude due to computer errors just three months ago?
Until they can fix that, AI isn’t ready to take over the world. It’s just not.
Anyone who thinks AI is ready to do more doesn’t understand how this stuff works. Musk is smarter than that.
However, Musk is heavily invested in his new AI startup, xAI. (He does have an odd fetish for the letter X.) His statements accomplish nothing except encouraging people to invest in AI. I wonder if that isn’t the reason for them.
16 Comments
Honk Honk · April 20, 2026 at 10:19 am
Artificial Imbecile is good for guarding controversial quotes but it won’t build the New Man workers utopia.
Comrade Elon wants the government to pay the UBI in the bug chitin pod.
O/T-Bolshevik Mockingbird enemedia is on about 8 children KIA in Louisiana mass shooting.
JimmyPx · April 20, 2026 at 10:35 am
AI is a LONG way from being sentient; rather, it currently is a tool that can be useful if used correctly.
It can be fabulous at assisting in a diagnosis and choosing the most effective treatment plan for a patient with condition X.
As we speak, they are stripping the metadata from hundreds of thousands of patient records and loading them into huge databases that AI can search.
So in the near future the doctor can load in the symptoms and test results, and using all of that patient data the AI can come back with: 92% chance the patient has X, 8% chance the patient has Y.
They can then say, “patient with condition X, African American, smoker in his 50s, obese; the most effective treatment plan is Y.”
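Sketched as code, this record-matching idea might look like the toy below. The records and the conditions “X” and “Y” are fabricated for illustration; a real system would use proper statistical models over millions of records, not a handful of dictionaries.

```python
# Toy sketch of diagnosis-by-record-matching. The records and the
# conditions "X" and "Y" are fabricated for illustration only.
from collections import Counter

records = [
    {"symptoms": {"cough", "fever"}, "diagnosis": "X"},
    {"symptoms": {"cough", "fever", "fatigue"}, "diagnosis": "X"},
    {"symptoms": {"cough"}, "diagnosis": "Y"},
    {"symptoms": {"fever", "fatigue"}, "diagnosis": "X"},
]

def diagnosis_odds(symptoms):
    """Percentage of matching past records per diagnosis."""
    matches = Counter(r["diagnosis"] for r in records
                      if symptoms <= r["symptoms"])  # subset match
    total = sum(matches.values())
    if total == 0:
        return {}
    return {dx: round(100 * n / total) for dx, n in matches.items()}

print(diagnosis_odds({"cough"}))   # {'X': 67, 'Y': 33}
```

Note that the output is only as good as the records: if real-world patients don’t report their symptoms accurately, the match is wrong from the start.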
Divemedic · April 20, 2026 at 11:23 am
It is assistive only. It isn’t going to eliminate human judgement.
ghostsniper · April 20, 2026 at 3:25 pm
Maybe it will (eliminate human judgement); humans tend to be lazy at every opportunity, especially employees “on the clock” or salaried.
Everybody’s workin’ for the weekend, doncha know?
Steady Steve · April 20, 2026 at 12:08 pm
About that human monitoring. My wife was recently in the hospital, on the PCU floor to be exact. Two things occurred that were, shall we say, less than stellar. She had a chest tube in to drain fluid from around her lung. The hose to the collection box was supposed to be unclamped after visiting hours. The next morning there was no fluid collected, because NO ONE HAD OPENED THE SMALL VALVE ON THE CHEST TUBE. I’m not in health care, I’m just an engineer, but I know a valve when I see one. Second incident: one of the sticky pads for her heart monitor detached, so no info was being sent to the main station at the central desk on the floor. Her monitor said “loss of telemetry”. It was 1.5 hours before anyone noticed and came to check. Good thing I was there and could keep watch on her. So it seems that with a patient-to-nurse ratio of 5:1, AI will be needed for patient monitoring, as the human element is deficient.
Divemedic · April 20, 2026 at 12:30 pm
In that case, you should push the call button and talk to the nurse. The AI isn’t going to fix that. How is an AI going to know a valve wasn’t opened?
ghostsniper · April 20, 2026 at 3:32 pm
“How is an AI going to know a valve wasn’t opened?”
===============
The new AI valve will detect back pressure and flow rate and coordinate them with visiting hours. Same with all the electrical components. Humans make errors all the time; AI will act sort of like a “body cam”, and new products will be created (the AI valve) that incorporate AI.
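A hypothetical sketch of that “AI valve” idea, for what it’s worth; every name, unit, and threshold here is invented for illustration, and real medical devices would need far more than this:

```python
# Hypothetical "AI valve" logic: cross-check the line's own sensors
# against the schedule. All names and thresholds are invented.

def valve_status(scheduled_open, flow_ml_per_hr, back_pressure_cm_h2o):
    """Flag the kind of mismatch a human might miss on rounds."""
    if scheduled_open and flow_ml_per_hr == 0 and back_pressure_cm_h2o > 5:
        return "ALERT: line scheduled open but no flow; check the clamp"
    if not scheduled_open and flow_ml_per_hr > 0:
        return "ALERT: flow detected while the line should be clamped"
    return "ok"

# The overnight scenario from the comment above: clamp left closed.
print(valve_status(scheduled_open=True, flow_ml_per_hr=0,
                   back_pressure_cm_h2o=12))
```

This is basically the tire-pressure-monitor pattern applied to a drain line: dumb sensors plus a simple cross-check, not intelligence.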
Divemedic · April 20, 2026 at 5:27 pm
I don’t think we are even close to that yet.
ghostsniper · April 20, 2026 at 7:51 pm
Time’s flyin’!
Vehicles have had these things for years: tire pressure monitors, for example.
When a tail light goes out, another light on the dash lights up, etc. The stuff’s already out there; it just hasn’t been expanded into all areas.
Divemedic · April 21, 2026 at 6:28 am
Until that system exists, it doesn’t exist.
I work with technology every day, and I just don’t think it’s ready for that yet. I don’t see it happening in my lifetime.
Steady Steve · April 20, 2026 at 6:04 pm
Perhaps AI should generate a checklist for people who are supposed to know their job. I deliberately waited to see how long it would take someone to notice there was a problem AT A STATION THAT IS SUPPOSED TO BE CONTINUOUSLY MONITORED. If it takes that long, a patient could die. Such is the state of our health care these days. I did speak to the charge nurse and the hospitalist; they were both very pissed off about those situations.
Divemedic · April 20, 2026 at 6:19 pm
Yeah, the whole point of the PCU is that you are supposed to be watched more closely than on the usual nursing floor.
Modern Day Jeremiah · April 20, 2026 at 11:29 pm
This is true. It is also true that short staffing is becoming routine, which, along with alarm fatigue, increases the risk of things being missed.
TCK · April 21, 2026 at 5:28 am
I’ll have you know that there’s nothing odd about having a fetish for the letter “x”.
Everything is cooler and/or more awesome if you can fit an X in there somewhere.
Divemedic · April 21, 2026 at 6:33 am
Let’s try.
Xplosive diarrhea
Xtreme opinion
Xwife
PK · April 22, 2026 at 5:27 am
Things may get spicy real soon, check this out: https://www.profstonge.com/p/ai-can-now-hack-everything?