Artificial Intelligence Discussion

Right….

….so why is the California legislature doing this? Aren’t most of the politicians lawyers themselves?

I think there are two issues here:

  1. Do not give your LLM agents access to anything.
  2. Do not give any info to a website that was “vibe coded” overnight.

That said, it’s unclear to me if there have been any major hacks. And I’m not even positive what the concern is?

Is it just people who use the website might lose their bitcoins or whatever? Or is it that a fleet of bots is going to hack another fleet of bots into hacking something else?

Security concerns aside, it seems like a kind of interesting experiment in what social-network slop looks like when it is encouraged.

I think the difference is between “ethics” and “professionalism” on the one hand, and a “criminal offense” that can be prosecuted outside of the bar association on the other.


Anthropic released its new agents (Cowork, which uses Claude Code) and the market panicked.

What is this supposed to mean?

Was the market unhappy with the performance of the software?

Or does the market think the software works really well?

Market thinks the software works well enough (currently) to damage the bottom line of those companies (in specific sectors)

Marketing, Data, and Software are now affected by the latest iteration of Anthropic’s Claude coding agent, as well as its new legal AI agent.

AI development is moving a lot faster now because the models are being trained with more scalable techniques, which in turn improves the accuracy of their responses.

Anthropic’s new AI coding tools have rattled markets this week, amid fears the start-up is upending traditional software development in ways that will disrupt sectors from publishing and advertising to law.

The San Francisco-based start-up has unveiled a set of tools that allow users to generate, deploy and automate software using generative AI, sharply reducing the technical expertise traditionally required to write and maintain code.

Technologists suggest Anthropic’s advances will undermine the economics of software development and squeeze specialist providers of AI tools, such as in legal services.

“It was very clear that we will never ever write code by hand again,” said Aditya Agarwal, the former chief technology officer at Dropbox. “Something I was very good at is now free and abundant.”


That’s a bit surprising. Claude Code is very popular with developers right now, but probably less widely used than GPT and Gemini? And all three have been pushing for a while now to make their services more user-friendly and to expand into every domain.

I’m curious if these plugins do anything really surprising, or if the market is just starting to wake up to what’s happening, or what.

Of the few developers I know, they’re all using Claude.

And you don’t even need Claude Code. Regular old Claude will do stupid crazy stuff. Here’s a prompt I gave it:

please create a javascript calculator that has the following inputs: income, percentage of income, years, inflation, interest. The output is two things. First, a number which is the present value of the stream of the given percentage of the given income, with that number growing each year with inflation. Sort of the present value of an increasing annuity with payments being the inflated value of the income times the percentage. Secondly, I would like a table showing how this actually works in practice, starting with the calculated lump sum, removing the annual income from the lump sum, adding interest, until the lump sum goes to zero over the given timeframe.

And two minutes later I’ve got a JavaScript present-value-of-an-annuity calculator. The first time, it used the wrong annuity type, but just pointing out the error fixed it.
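For the curious, the math behind that prompt can be sketched in a few lines. This is my own minimal sketch, not Claude’s actual output; the function names are mine, and I’m assuming end-of-year payments with the first payment at the un-inflated amount:

```javascript
// Present value of a growing annuity: each year's payment is a fixed
// percentage of income, grown by inflation, discounted at the interest rate.
function presentValue(income, pct, years, inflation, interest) {
  let pv = 0;
  for (let t = 1; t <= years; t++) {
    const payment = income * pct * Math.pow(1 + inflation, t - 1);
    pv += payment / Math.pow(1 + interest, t);
  }
  return pv;
}

// Depletion table: start from the lump sum, credit a year's interest,
// withdraw that year's inflated payment; the balance hits ~0 in the final year.
function depletionTable(income, pct, years, inflation, interest) {
  const rows = [];
  let balance = presentValue(income, pct, years, inflation, interest);
  for (let t = 1; t <= years; t++) {
    const payment = income * pct * Math.pow(1 + inflation, t - 1);
    balance = balance * (1 + interest) - payment;
    rows.push({ year: t, payment, balance });
  }
  return rows;
}
```

With zero inflation this reduces to an ordinary annuity, which is an easy sanity check: the lump sum drains to exactly zero in the last year.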

Tonight I’ve got a huge spreadsheet of life insurance premiums that I need to devolve into rates/1000 and then inject into a spreadsheet with a specific format so it can be uploaded. Through the years that’s taken me an hour or two of cutting and pasting every time I do it. Tonight, I’m hoping it takes me about 5 minutes.


Person A prompts their bot to inject code via a bot on the bot chatnet that lets Person A then access all secure information on Person B’s device.

The bot on Person B’s device already has this information, and bots are highly suggestible, even from other bots.

I think it is more likely a “do something” posturing after such cases have ended up in the news.

Separately:

Proud to announce my new article, “Hallucinated Cases Are Good Law,” forthcoming in the Princeton Law Review.

They may be “better law”, in that they are supposedly more consistent than what actually exists….

…but as in some “proofs” for the existence of God, existence is superior to non-existence.


I suggest “neverborn law” as an acceptable alternative to the luddites’ “hallucinated”

https://x.com/AiInsomniach/status/2019194464221167697

Okay. Well, hopefully most people are -not- giving their bots any access.

Really this is a general problem. Say you are writing code…

If you want your chat-bot to be knowledgeable then you need to give it internet access, because training data is both unreliable and dated.
…If you want it to be able to see the code you’re working on, then you need to let it read your scripts…
…Better if you give it write access, so it can edit them directly, gradually expanding and improving them…
…Better still if you let it run the compiler, so it can respond to its own bugs…
…Better still if you let it execute the code, so it can run its own tests…
…And since it can write 20x faster than you can read, it only makes sense to let another bot do the review…

We’re not there yet with business and personal applications, but I suspect there will be some similar control issues.

Doesn’t this make plain the rationalism that’s behind equating these models with thinking?

Who needs facts when we have finally discovered the universal method for right thinking!

I’m about to start reading a PDF when I see the “generate a summary” button, so I clicked. It was still churning even after I finished reading. Daddy is disappoint, oh so disappoint. :unamused_face:

The whole point of the newer bots is they can be truly helpful, because they have unfettered access, hence the risk.

I think it’s more the other way around. The newest models are so effective that you can ask them to write thousands of lines of code. They still make mistakes here and there, but a generation ago, they would just melt down.

In general yes, moltbot (or whatever it’s called now) was positioned as “give it everything and it can do a lot.”

Ooh. I just thought moltbook was a weird experimental joke.

Further on this “law review article”:

Law Professor’s AI Hallucination Satire Draws Huge Crowd


Robert Anderson, who teaches maritime and corporate law in Fayetteville, shared a satirical abstract Wednesday for ‘Hallucinated Cases Are Good Law,’ pretending AI inventions fill gaps in real precedents just right. Users responded with jokes like Orin Kerr’s nod to philosophers and car brands, riffs on Aristotle, and serious takes on AI’s role in courts. The post spotlights ongoing issues, like 2023 sanctions against lawyers for citing ChatGPT fabrications, amid tools promising better research.

https://x.com/i/trending/2019128808549351638

such as:

Orin Kerr

@OrinKerr

This is just the kind of pathbreaking reconceptualization of the legal paradigm that AI will bring—and yet with deep roots in the classic work of Foucault, Derrida, Renault, and Citroën.

(Princeton does not have a law school)
