Artificial Intelligence Discussion

When NASA put a human on the moon, we did it. There was a sense of connection, of ownership, the idea that progress benefits humanity. AI sits in the hands of a select few individuals and companies. Progress benefits the owners, not us; we just get what trickles down. Tough to get excited by just another job for the boss.

1 Like


I think many people were similarly pessimistic about the space program. It was an ultra-expensive game played by the military-industrial complex, Cold War powers, and out-of-touch elites, with no realistic benefit to humanity.

But it was still exciting to watch… right?

(Maybe I’m overestimating people’s skepticism back then. Maybe they were still more naively optimistic.)

In any case, I kind of disagree. AI is a lot closer to the average human being than space. It is owned by rich people, but then again everything is owned by rich people. That doesn’t mean everything is boring.

I don’t believe the hype or the doomsday talk around AI. I think it’s just going to hit its limits pretty quickly. The human brain is a majestic thing, a result of evolution. I am very skeptical that engineers can replicate that kind of hardware. All this neural net stuff is dumb as hell, essentially brute-force freshman math. Not sure how long all these Silicon Valley nutjobs are going to chase the scaling hypothesis. I am actually afraid that their pursuit of AGI will lead to massive social unrest. It’s not the technology that is going to endanger us but the pursuit of such technology.

1 Like

I bought into the hype around 10 years ago, and I’d say the last 5 years have been absolutely mind blowing.

No idea if we’ll magically reach “AGI” or whatever, but it feels to me like we’ve already gone to the moon, and folks are all “mehhhhhhh”.

I promise I’m better than the hype

3 Likes

Self-driving and robotics are basically a bust. That’s the kind of AI people are interested in, not lame word-salad generators and image classification.

1 Like

I mean, they’re not really. Robotics has stuttered in the US, but is on an exponential curve in general. Self-driving took a little while to get going, but it’s also on an exponential curve. The problem is that exponential curves are really slow at first.

Waymo drives like 1 million miles a week now, and they’ve been doubling every 6 months. That sounds great, but humans drive around 10 trillion miles a year, so it will take them something like 10 years before anyone cares.
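
That “10 years” is just a doubling calculation. Here’s a minimal sketch using the post’s own (unverified) figures:

```python
import math

# Figures from the post above, not verified: ~1M autonomous miles/week,
# doubling every 6 months, vs. ~10 trillion human-driven miles/year.
waymo_miles_per_year = 1e6 * 52        # ~52M miles/year at today's rate
human_miles_per_year = 10e12

doublings = math.log2(human_miles_per_year / waymo_miles_per_year)
years = doublings * 0.5                # one doubling per 6 months

print(f"{doublings:.1f} doublings -> ~{years:.1f} years")
# ~17.6 doublings -> ~8.8 years, i.e. roughly the "10 years" claimed above
```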

But that’s kind of the normal pace for technology. Lots of cool things fail for numerous reasons. And things that don’t fail, like solar power, feel like they failed because they take decades to develop, build, and manufacture.

The word salad/image/audio/video etc. generators are in a completely different universe though.

Much easier than driving. Any teenager knows that.

1 Like

Most of the stuff I’ve seen being done with AI seems to need a lot of babysitting, or is a minor modification of things that are already being done.

The Canadian government is using it to fix inconsistencies in date formatting on some forms, which are screwing up the government pay system. I’d think this is a problem that could have been easily addressed by putting a computer programmer on the task for a day or two, and/or just fixing the stupid forms in the first place to eliminate the problem.
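
For scale, the “day or two of programming” version might look something like this. A minimal sketch; the format list is hypothetical, and the ambiguity between day-first and month-first dates is exactly why fixing the forms upstream is the better answer:

```python
from datetime import datetime

# Hypothetical list of formats seen on the forms; first match wins, so
# ambiguous entries like 03/04/2024 are a real hazard here.
KNOWN_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y", "%B %d, %Y"]

def normalize_date(raw: str) -> str:
    """Re-emit a date string as ISO 8601 (YYYY-MM-DD)."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

print(normalize_date("March 7, 2024"))  # -> 2024-03-07
```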

I listened to an episode Ric Edelman did on the benefits of AI for the personal finance industry. The top benefit I saw was that it made it easy to transcribe all client calls. The industry people saw great value in being able to mine those calls for ways to personalize marketing materials. I saw it as a negative, as it potentially means an increase in spam or junk mail. They also talked about making customized client presentations, but that feels like a minor modification of the idea of using a template to generate client presentations.

I’m sticking with my view that AI will help knowledgeable people be more productive and let ignorant or destructive people be more harmful. I’m not convinced that AI will have a net positive effect on society.

4 Likes

I contemplated getting a robot vacuum, but I saw how often my mom’s gets stuck, and it seems expensive for all that hassle.

We will see. From a purely mathematical point of view, all of the deep learning stuff is just pure religion. I fear that these Silicon Valley titans are going to turn the planet into a giant data warehouse until they finally give up.

Also, if you have dogs that occasionally poop indoors, robot vacuum = Bad Idea Jeans :grimacing:

1 Like

No dogs

On a practical level, I think we are still figuring out what to do with AI. We don’t really know what it’s capable of (and to be fair, it’s hard to tell because it is changing rapidly). And given a degree of capability, we haven’t decided ethically what we want from AI.

For a funny recent example, JAMA just reported a small RCT, where 50 doctors were given tricky case studies and asked to make a differential diagnosis, along with follow-up steps.

Some used GPT-4. Some did not. The doctors who used GPT-4 did about the same as the ones who did not. Oh well!
However, GPT-4 by itself outperformed all of the doctors.

Logically speaking, this should be a more exciting result than the primary outcome of the study. But it isn’t, because we don’t want AI to make a diagnosis. We don’t trust AI because it hallucinates. And even if it didn’t hallucinate, we love doctors more than robots. We don’t want to replace people; we just want a tool.

So what is the take-away anyway?
That doctors are bad at using LLMs?
That GPT-4 was better than doctors at (at least) some kinds of diagnosis and triage?

That seems like a really big deal. Getting the correct diagnosis is really important to everybody. On a personal, legal, ethical, and financial level.

Of course we would need more and better studies to prove that. (What about hallucinations? Maybe it’s in the training data somehow? What about reasoning? What about less common cases? Simpler cases? How dangerous are the respective errors?)

But who wants to run those studies?

What would it even mean if AI was better at diagnosis and triage?

Would people just ask GPT-4 (or some other better LLM), instead of seeing a doctor?
What about people who can’t afford doctors?
Or if there aren’t enough doctors in your town?
Or the doctors in your town aren’t as qualified, because they don’t read English and are out of date on the literature.
Or you do live in a rich English-speaking country, but your country is also going bankrupt paying medical bills.
If this became a trend, would we have fewer doctors? Fewer medical schools? Less-trained doctors? Would humanity in general lose some clout?

Anyway… that’s just one example, but I think it’s a common thread.

I agree with you about personalized marketing. Really personalized anything is alarming. “Big Data”, the “Surveillance State”, and how well we can spy on each other are going to be really scary.

Lol. I don’t think they need to achieve “AGI”. Just making the parrots work better than they do now will have big practical ramifications. I think health care is the obvious application. Also millions of tedious jobs in things like tech support and customer service?

And setting these things aside, the benchmarks they’re interested in these days (SWE-Bench, MLE-Bench, Core Bench, etc.) have immediate real-world application… things like letting AI automatically fix all our software bugs… kind of a big deal, but not “AGI” or whatever.

I’m not sure this is the right way to frame it. Shouldn’t we want the robots to do the jobs they are better at, freeing up more time for doctors to do what they are better at? The same goes for other professions. The problem is how the benefits are distributed. If it just puts people out of work and enriches the corporations, it’s a net negative. If it allows people to live easier lives with more leisure time, it can be a positive development.

I could see that, once the symptoms for the case studies have been established, AI would do very well.

Establishing those symptoms might be a ways off, though. Physically touching the patient to feel for symptoms would be harder, requiring a highly calibrated robot. Observing the patient for unusual behavior or movement would be difficult with current AI, but certainly possible. Interpreting a patient’s responses to questions would also require some very good language skills.

1 Like

A good way of using AI is reading X-rays.

They can do it faster and with better accuracy.

This then frees up time for the doctor to do more productive work.

I’m working on a funding proposal to modify a convolutional neural network for use on one of the things my work group does right now, and thinking of this AI discussion. By Government of Canada standards, what we’re doing here counts as AI.

I’m thinking that perhaps a lot of the talking heads and AI promoters are missing the boat on AI. I’m not sure that GPT-4 etc. are where the big gains are going to be made. Most things don’t need a general AI. They need targeted AIs to solve specific tasks. This tool and some previous models I’ve built run on my desktop computer in R in seconds. NVIDIA’s GPUs or AI processors aren’t needed for this. We just need a run-of-the-mill Intel or AMD CPU. If these are successful, they’ll likely result in 25-50% labour savings for these particular tasks. On the other hand, we might also choose to process more samples or dedicate more labour to other work tasks that aren’t currently being fulfilled due to lack of labour.
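
To make the “targeted AI on a plain CPU” point concrete, here’s a minimal sketch (in Python rather than R, with a stand-in dataset, so everything in it is illustrative rather than my actual tool): a small task-specific model that trains in well under a second on an ordinary desktop.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small, task-specific model of the kind described above: it trains
# in a fraction of a second on a run-of-the-mill CPU, no GPU required.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```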

It wouldn’t surprise me if the medical study posted earlier could run on a standard CPU as well.

I think it comes back to my thesis that AI will just make skilled people more effective rather than being super disruptive to the workforce.

2 Likes