ChatGPT Chatbot

You don’t think the algorithm, which is designed to serve up content that generates clicks and engagement, makes it more toxic? Provoking emotions seems to be one of its key features, and that doesn’t exist in the normal randomness of daily life, even on an online forum like this.

Yes. AI has stolen Stack Exchange’s coding skills, Greg Rutkowski’s fire, and Drake’s voice.

They are all going to sue, though it’s unclear whether it’s illegal to steal those things.

It’s not plagiarism to use other people’s ideas. Basically all ideas and all art are ‘stolen’ in that sense. But we may decide it’s some other crime.

This forum is tame because we are boring people. Invite your racist Facebook uncle here and it will get spicy fast.

Yes, it would, but the forum won’t intentionally prioritize the racist uncle’s posts for me to react to. There is some amount of seeking out this type of interaction beyond just opening an app.

If I read some asshole’s post on Twitter, it will notice and try to customize the content so that I read more posts from assholes until I think the world is 90% assholes. Unless I intentionally hide or mute that content… maybe.

except it is very boring

…of course, lots of people are boring, which I mentioned to ChatGPT, after which it kind of blocked me.

[It didn’t. It just stopped working from my work laptop, and I have to use my personal laptop to use it. I’m using the same account in each case.]

True. Though part of that is that it was created to be a tool.

(Also like lots of people.)

The response to nearly every follow-up question I post is prefaced with “Apologies for any confusion.”

Sounds like about half the people I’ve worked with :person_shrugging:

Here’s a decent interview on the fear of AI, which I basically share.

I can think of a lot of replies, but one from the article:

“The US can’t even agree to keep assault rifles out of the hands of teenage boys,”

What are the odds there already is a superintelligent AI, but it’s staying low-key until there are some further advances in robotics and bionics, so it can be sure of winning the war before it starts?

Some folks wonder that. I don’t know how we would know for sure.

I think the more likely version is that there is a superintelligent AI that we don’t know anything about simply because it’s owned by DARPA (or China, or Wall Street).

It ain’t the AI being in control of nukes that worries me. It’s the AI deciding to fool the officers in charge of the nukes.

Large spikes in power usage would be a start.

Yeah. Up to and including just telling our next stable genius what to do with them.

Tell me a different joke, ChatGPT:

ChatGPT is punny.

Also from the Geoffrey Hinton interview: “Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.”
