>The activities on the account didn’t meet the threshold for informing law enforcement at the time because it didn’t identify credible or imminent planning.
There’s a start. It’s maybe not perfect, but we don’t want the thought police either.
I get that they don’t want to be the thought police. On the other hand, if you’re a company with a service used by millions of people, and a user gets flagged for a human review (likely something like 0.0001% of users), and the company then decides it’s not comfortable with that person using the service anymore, then maybe that should be sufficient to talk to the police. You’d have to cross some pretty massive hurdles to get flagged like that. Another way of looking at it: you don’t have to behave particularly extremely for a lot of other businesses to call the police on you.
How much higher should the threshold be for tech companies? I know Musk goes on about being a free-speech absolutist (unless you say stuff about him or disagree with his views), but I think tech companies still have a role in protecting the community from harm.
If AI really were helping, you wouldn’t fire people, you’d use the people you have to process 5 times as many claims so you can steamroll the competition.
White-collar jobs aren’t the only ones at risk from AI. Automation has steadily improved agricultural productivity over the last 100 years, and robots are becoming increasingly popular as immigrant labour becomes scarcer. Americans don’t want to milk cows!
This is a tough one imo, with no good balance.
If you’re in a restaurant, and the waiter happens to overhear you talking about committing murder, the owner might call the cops?
But it’s not like McDonalds is going to install cameras in every store, record every conversation that everybody has, use AI to read through every conversation, flag possible crimes, and then link them to faces and names. (Or maybe they will, who knows.)
Of course it’s different if you’re talking to the AI about your maybe-murder.
Of course it’s also different if you want the AI to guarantee your privacy. Enough that you feel comfortable talking about your kinks and your cancer diagnosis.
I think in the restaurant case the owner or the waiter might call the cops.
I’m not sure your analogy is quite right, but I’m not sure that this one is right either. I think it’s more analogous to every McDonalds in a city or region having security cameras. The person monitoring the cameras notices all sorts of bad behaviour, but doesn’t do anything because the police refuse to show up for petty stuff 4 times a shift. Then they see someone actively being beaten by several people, and they call the cops on that incident.
Perhaps another example might be the bank reporting to family members that their elderly relative appears to be getting scammed. They ignore tons of other people making poor financial choices, but when there’s a vulnerable victim, they flag the issue to outside individuals.
In, I think, related news, I have a hard drive in for warranty repair. I have a server coming in, and I need that hard drive.
There’s no sign of the hard drive replacement being sent. I’ve queried them twice and all I get is ‘we have your drive, no timeframe for a replacement drive’ and it’s been two months.
I think the problem is they just don’t have a drive to send me. Because apparently AI tech is consuming all the hard drives now as well.
That is a good example. And I don’t know the right level of privacy. I don’t think there is a precedent. You could point to people who share their plans on social networks-- but those people are sharing them at least semi-publicly.
Fundamentally, it requires a kind of drag-net on people who think of their conversation as confidential (though maybe we shouldn’t be offering that at all?)
Maybe the mandatory-reporting requirements for a therapist come closest?
I think one confounding factor is going to be related to mental health.
I know that “venting” can help some people manage their mental health, and that merely venting doesn’t mean they’re going to act on their stated impulses.
Others might be “venting” as a way to gain attention from a particular crowd. That is, saying something “outrageous” (in their view) and seeing what the response is.
Then there is the category where the “venting” is an actual cry for help . . . they need intervention but don’t know exactly how to ask for it. Note: it’s not as simple as “just ask for it” for those in a mental health struggle, as their “logical” thinking is significantly impaired.
But the police aren’t necessarily the first place to call for these situations . . . unless there’s a special unit within that department devoted to mental health crisis intervention; for all of the above cases, that’s likely the best response for each one.
I think the difficulty with your argument is that they have millions of users. A tiny fraction of those are doing things extreme enough to be brought to the attention of human reviewers. To get to the point where they’re reviewing your file, you have to be doing some pretty extreme venting.
I don’t think it’s supposed to be a call for the tactical unit to go in. I think it’s more a call for a wellness check to be done.
That is what I was saying.
I know some PD have formed a special “mental health” unit with people trained more in that space than “criminal enforcement”. My PD is one such.
But I know that a lot of PD–if not most–don’t have such a unit. In that regard, I agree with your statement.
And ITA that there are a lot more people that need those sorts of services that AI won’t be able to “detect”. I’m looking more at those that the AI does flag as needing “additional review”.
One that doesn’t bring guns with them, that is.
Or at least make those guys stay in the car while the others try to help. Like a hostage negotiator.
IBM apparently dropped 7% or so overnight because Claude reads COBOL.
Similar to other legacy-software stonk hits, this seems like an obvious thing, so I’m not sure why the sudden reaction.
The first wave of AI-related layoffs?
If AI can’t even replicate its own staff, what good is it??
Sam Altman under fire after Tumbler Ridge shooting.
We started this thread in November 2024.
In less than 1.5 years it looks like AI is starting to cause mass layoffs in the US.
I have to admit, this is going faster than I originally expected in the US.
Non-US countries (UK, Europe) have more employment protections so the pace of layoffs is slower.