Oh we have that on this site. If you need tough math questions answered just type in @meep.
Another totally not insane sounding AI paper.
Basically, you can tell GPT-4 that it might have made a mistake, without specifying the mistake. It will then reconsider its own response and often catch the error it made and fix it.
There are a couple of theories on why, one being that GPT-4 has no memory really, it just has to spitball shit. Having it review its own output works as a sort of scratchpad for ideas.
Anyway, it’s pretty interesting, and of course folks are playing with automating the process, like just telling GPT-4 to review every response. Another I like (from another paper) is assigning GPT-4 multiple personalities and having them criticize each other.
This seems to me like GPT-4’s self-critical responses could also be used in the future to generate useful training data.
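The automated version people are playing with is basically a loop: generate an answer, then feed it back with a generic “you may have made a mistake” prompt. A minimal runnable sketch, where `ask_model` is a hypothetical stand-in (the real thing would call the GPT-4 API instead of echoing):

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an actual GPT-4 API call.
    It just echoes here, so the loop below is runnable."""
    return f"(model response to: {prompt})"

def answer_with_self_review(question: str, rounds: int = 1) -> str:
    """Generate an answer, then ask the model to reconsider it
    without pointing at any specific mistake."""
    answer = ask_model(question)
    for _ in range(rounds):
        critique_prompt = (
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "You may have made a mistake. Review your answer and "
            "reply with a corrected version."
        )
        answer = ask_model(critique_prompt)
    return answer
```

The multiple-personalities variant from the other paper is the same loop, just with the critique prompt sent to differently-primed instances instead of back to the same one.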
I found this video, which uses a screenshot of their presentation from the news article to recreate the proof.
Um, what is the “impossible” part?
And why resort to trig?
Also, if it’s “just trig,” why “plus some other stuff”? That seems contradictory.
Lastly, I’m going to watch this to see if any step quietly relies on the Pythagorean Theorem (“and, of course, this is true because of…”). It might not happen, but that’s what I’ll be watching for.
OK, the proof works great when a<b. Perhaps a WLOG statement handles the symmetric case.
Not sure what happens when a=b.
“other stuff” is infinite sums, for one.
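If I’m remembering the construction right, those infinite sums are geometric series whose ratio involves (a/b)², e.g.

$$\sum_{k=0}^{\infty}\left(\frac{a}{b}\right)^{2k}=\frac{1}{1-a^{2}/b^{2}}=\frac{b^{2}}{b^{2}-a^{2}},\qquad a<b,$$

which only converges when a<b. That would also explain the a=b worry above: the ratio hits 1 and the series diverges, so the isosceles case needs its own (easy) argument.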
They made a longer proof than necessary, but, I’ll admit it’s a pretty impressive proof. It’s way outside the box.
I think this should be a testable hypothesis at least.
Can you think of a completely novel question that has never appeared on the internet?
Maybe not quite that threshold, but I asked it what my D&D party should do, given their somewhat unique situation-- falsely accused of treason by a jerk prince, arrested, and all but one captured and contained on an island prison.
It effortlessly came up with exhaustive lists of entirely practical advice, covering the legal problem, the overarching political intrigue, the likely prison escape, and describing how each specific character might contribute given their unique spells and skills.
Of course, I’m sure we’re not the only party ever to plan a prison escape, but at the same time, I doubt there’s any exhaustive, common-sense list of things my party specifically should plan for.
Surprisingly, there are lots. The term stacking strategy I described in another thread isn’t online anywhere, and I don’t believe that, given the inputs, an AI would come up with it. Yet the strategy is extremely simple.
I’ve seen others in years past. There are a lot of areas in life where there’s really only rudimentary information online.
Stacking strategy? Nevermind-- I see.
I tried asking it what it thought of RBC life policies, and it didn’t mention your trick.
Admittedly, I think I’d need to show it actual rates and limitations.
I do not think this is a good test. I’d use the analogy of a linear model. A linear model can extrapolate to new values of “x”. But it does so without any real understanding of what it is modeling (obviously!)
So for example, if we are modeling something that depends on the speed of a billiard ball, then there is no way for it to know that the billiard ball cannot go faster than the speed of light, and it will happily make predictions far above that value.
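To make the analogy concrete, here’s a toy sketch (the data is made up): fit a line to strike force vs. ball speed, then extrapolate to a huge force. The fit happily predicts a speed above c, because nothing in the model encodes that constraint.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Made-up training data: strike force (N) vs. measured ball speed (m/s).
force = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
speed = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Ordinary least-squares line: speed ≈ slope * force + intercept.
slope, intercept = np.polyfit(force, speed, deg=1)

# Extrapolate to an absurdly large force.
predicted = slope * 1e9 + intercept

# The fitted line sails right past c without complaint.
print(predicted > C)  # → True
```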
I think these “hallucinations” show consequences of the model not understanding the contents of the words. Any person who really understands the history of spaceflight will be very suspicious of the claim that Russia has sent many bears into space, but ChatGPT confidently says this happened.
I do not think that ChatGPT is useless. It can give us results that are novel in a sense. But it is “tuned” to an existing construct of reality, namely the internet writings through 2020, or whatever the date is. It cannot adjust itself for new realities, because it does not participate in reality itself. It is like the linear model, which only knows the model of reality we have given it.
I believe it will be able to make some extrapolations we never could. But it will also make really stupid mistakes that we never would. It’s like any model in that sense.
RBC can’t do it because you have to wait a year before you can access the exchange option. Most companies’ exchange option runs from year 1 to 5; this needs the exchange option available from policy issue.
Could you pm me the contract? I’ll feed it to gpt4. Admittedly it might not consider your plan that amazing either.
Sorry, I just realized I already asked that. Yeah, there are Chinese room / p-zombie issues that are fundamentally unresolvable.
Still I do want to test it on issues that have never been encountered to see how well it can adapt.
Unclear what the resistance is to paying 40% less on term premiums prior to your next birthday. Like, it’s legit. What’s the problem? I’m lost, ask the chat bot what the problem is.
Like, you’re shopping for T20, I suggest this, and you’re like, nah?
Reminds me of the guy with Korsakoff’s syndrome from The Man Who Mistook His Wife for a Hat.
Iirc, the nuns wonder if he is a “lost soul”
Actually it does seem to “know” that. I was asking about the Lorentz equation and how fast you’d need to go to fit in DP’s garage, and it warned me against trying to get close to c for a couple of good reasons. But then it went into a repetitive spiral of responses, like a stuck record player. And yeah, it didn’t say “totally impractical, but let’s think of it as a thought experiment…”
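For the record, the arithmetic it was doing is just length contraction, L = L0·√(1 − v²/c²), solved for v. A quick sketch with hypothetical lengths (the actual car and garage sizes are whatever you gave it):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def speed_to_fit(rest_length_m: float, garage_length_m: float) -> float:
    """Speed at which an object of rest length L0 length-contracts
    down to the garage length: L = L0 * sqrt(1 - v^2/c^2)."""
    ratio = garage_length_m / rest_length_m
    if ratio >= 1:
        return 0.0  # already fits at rest
    return C * math.sqrt(1 - ratio**2)

# Hypothetical numbers: a 5 m car, a 4 m garage.
v = speed_to_fit(5.0, 4.0)
print(f"{v:.3e} m/s, {100 * v / C:.1f}% of c")  # 60.0% of c
```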
I read an article once about a condition where patients believe they are dead. I found it disturbing in a way. If an unhealthy brain is able to believe something that seems so fundamentally untrue, then does this mean our healthy brains are making us believe similarly untrue things about our own sense of identity?
I meant the hypothetical linear model does not know that the billiard ball cannot go the speed of light, because it is not built into its assumptions.
What I was getting at is that ChatGPT is unable to get at the “real” things behind the word “billiard ball”. It knows how that word relates to other words, but is unable to penetrate this “shell” and see the “real” billiard ball that the sign “billiard ball” points to.
https://www.wsj.com/articles/beaming-solar-energy-from-space-gets-a-step-closer-fc903658?mod=mhp
Tl;dr: create solar energy in outer space, beam it down to earth, ???, profit
Are these the Jewish space lasers people keep talking about?
Another paper claiming to have found a room-temperature superconductor. Different group, different materials than the recent retraction, but some crackpot word salad in the article. Still would be cool if true. They wrote 2 preprints; the first had 6 authors, the 2nd has 3 (the limit on Nobel Prize recipients).
Sucks that it contains lead. But mega awesome if it works.
https://www.reuters.com/article/usa-nuclearpower-fusion-idAFL1N39N0FP?utm_source=reddit.com
Update on fusion research. Looks like actual progress.