ChatGPT Chatbot

What kind of SQL? I’d be curious to know examples of how chatgpt writes great SQL code.

Um, the one that’s on my computer at work? SQL studio by loreal or something. Maybe I’ll remember to look it up tomorrow.

I wouldn’t say “great” but “effective”. Mind you, I’m not painting the Sistine Chapel. I just need to manage some data. Here’s a sample prompt from me:

…and then its initial reply…

The astute reader will notice that it didn’t quite deliver what I was looking for, so I continued with…

…and then it replied…

:bulb:-moment from the actuary…

and on and on…tweak, nip, and tuck until I got it to be exactly what I wanted.

Mind you, sometimes it spits out code that doesn’t even compile, so you go back and tell it, then it tries again. A smarter person would learn how to code properly in SQL, but I am not that person.

2 Likes

For posterity.

Please ask ChatGPT for this.

ChatGPT is a text bot, but here’s openart.ai:


:person_shrugging:

1 Like

ChatGPT never compliments me, but Claude.ai does:

…You make an excellent analogy - …

1 Like

“Microsoft SQL Server Management Studio 18”

1 Like

I used to be anti-ChatGPT but now that I see that he uses COALESCE instead of ISNULL, I’m warming up.
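For anyone wondering why COALESCE earns the compliment: it’s the ANSI-standard function (any number of arguments), while ISNULL is T-SQL-only and takes exactly two. A minimal sketch using Python’s built-in sqlite3 just to show the standard form runs outside SQL Server (table and column names are made up for illustration):

```python
import sqlite3

# COALESCE is ANSI-standard SQL, so it works in SQLite, SQL Server,
# Postgres, etc.; ISNULL is T-SQL-specific and SQLite would reject it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER, nickname TEXT, legal_name TEXT)")
conn.executemany(
    "INSERT INTO policies VALUES (?, ?, ?)",
    [(1, None, "Acme Insurance"), (2, "Beta Co", "Beta Corporation")],
)

# Fall back to legal_name wherever nickname is NULL.
rows = conn.execute(
    "SELECT id, COALESCE(nickname, legal_name) FROM policies ORDER BY id"
).fetchall()
print(rows)  # [(1, 'Acme Insurance'), (2, 'Beta Co')]
```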

2 Likes

The CFTC takes charge:

Commodity Futures Trading Commission has promoted Ted Kaouk to the chief data and artificial intelligence officer role

https://www.wsj.com/articles/u-s-financial-markets-regulator-names-first-chief-artificial-intelligence-officer-6c9e832e?mod=latest_headlines

ChatGPT
The Commodity Futures Trading Commission has appointed its inaugural Chief Artificial Intelligence Officer, Ted Kaouk, in response to the Biden administration’s directive to regulate AI use in government. Kaouk, currently serving as the agency’s Chief Data Officer, will play a pivotal role in formulating strategies for overseeing and enforcing financial markets. This move aligns with the administration’s efforts to establish oversight mechanisms for AI systems, as outlined in executive orders and governmentwide guidance. The appointment reflects a broader initiative to enhance AI governance across federal agencies, with a focus on promoting safe and responsible AI adoption.

Follow up question:

So, is it mostly about use of AI within government, or will it reach into private areas?

ChatGPT
While the article primarily focuses on the use of AI within government agencies, it hints at broader implications for AI governance beyond the public sector. The Biden administration’s directives and initiatives outlined in the article, such as appointing chief AI officers and establishing oversight mechanisms, primarily target government agencies. However, the overarching goal of promoting safe and responsible AI adoption suggests a potential ripple effect into private sectors as well. As AI increasingly permeates various industries, including finance, healthcare, and technology, regulatory frameworks and best practices developed in government settings may influence private sector AI governance practices.

iow, :tfh:

Claude.ai has an iphone app now.

I went to read the comments on a reel on FB and their AI created a summary of the thousands of comments. Listed pretty much what I was expecting so I was able to skip reading any. Wish it would do that more often.

1 Like

https://www.wsj.com/tech/ai/chat-gpt-tips-ai-responses-d48a8f6d

Alt link: https://archive.ph/DgEjT


The Secret to Talking to an AI Chatbot

Research reveals how to phrase questions to get the best responses. But that can lead to some unusual requests.


By Bart Ziegler

May 12, 2024 10:00 am ET

Want to get the best answers out of an AI? It’s all in how you talk to it.

Tell it, for instance, to pretend it is Albert Einstein. Or that somebody’s life depends on the response. Or that it needs to stay focused on its goals.

Prodding an artificial-intelligence chatbot is nothing like doing a Google search. Instead, it is like having a conversation with a book-smart person who needs coaxing—sometimes very indirect or bizarre coaxing—to give the most creative and effective answers to questions.

The trouble is, nobody knows why AI responds to those strange prompts in the ways it does—not even the people who created large language models such as OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot. That has led to experiments and guesswork to find the approaches that work best. Some researchers even ask the chatbots themselves for tips on how to talk to them.

Below are some creative ways researchers have found to elicit better chatbot responses. These aren’t universal truths; not all of them work with all chatbots or with all types of questions. But they are good places to begin.

Tell the chatbot it is an expert

Asking a chatbot to act as an expert—even a long-dead one—can yield better results, or at least be more entertaining. If you’re seeking information about designing a midcentury modern house, tell it to “act as architect Frank Lloyd Wright.” Here’s how Gemini began its response, which listed five useful tips: “Alright, here’s Frank Lloyd Wright speaking to you about your future home…”

Similarly, if you’re trying to understand Einstein’s general theory of relativity, ask it to answer as Einstein would. ChatGPT’s response to that question: “Ah, mein freund, I shall endeavor to elucidate the wondrous fabric of space-time as perceived through the lens of my general theory of relativity…” It then explained the theory in fairly easy-to-understand terms.


Why does this technique work? “If you ask the AI to ‘imagine you’re a world-renowned chef,’ you’re signaling that you want a response that reflects culinary expertise,” Copilot responded when I asked it the question. “The AI will then generate a response that aligns with what it has learned about how chefs talk and the kind of information they might provide.”

At the other end of the scale, telling a chatbot to act like a middle-school teacher should produce simple explanations for hard-to-understand concepts, such as how nuclear power is created or the techniques used in gene editing.

Encourage the chatbot to do better

Telling a chatbot that its response is important to your job, or to take pride in its work, can improve its performance, a group of researchers found.

The researchers, from Microsoft and several universities, created 11 motivating phrases derived from human psychology, including, “Stay focused and dedicated to your goals” and “Are you sure this is your final answer?” They inserted these phrases into questions or tasks they gave to six chatbots.

The outcome: The method, which the group dubbed “EmotionPrompt,” boosted the accuracy and usefulness of the responses. The chatbots showed a 115% improvement on an artificial-intelligence test called BIG-Bench—a compendium of over 200 extremely difficult language-based tasks—over straightforward prompts.

Chatbots “possess emotional intelligence and can be enhanced by emotional stimuli,” the group said.
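Mechanically, the EmotionPrompt method is just string augmentation: a motivating phrase is appended to the task text before it goes to the chatbot. A rough sketch of that idea (the function name is mine, and the phrase list is illustrative, not the researchers’ full set of 11):

```python
# EmotionPrompt-style augmentation: append a motivational phrase to the
# task before sending it to a chatbot. Phrases paraphrased from the
# article; this is a sketch of the idea, not the researchers' code.
EMOTION_PHRASES = [
    "This is very important to my career.",
    "Stay focused and dedicated to your goals.",
    "Are you sure this is your final answer?",
]

def emotion_prompt(task: str, phrase_index: int = 0) -> str:
    """Return the task text with one motivating phrase appended."""
    return f"{task} {EMOTION_PHRASES[phrase_index]}"

print(emotion_prompt("Summarize this contract in three bullet points.", 1))
```

The augmented string is then submitted as the prompt in place of the plain task.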

Ask the AI to suggest prompts

Using specialized software called an automatic prompt optimizer, researchers at the tech company Broadcom asked a chatbot for phrases that would boost the accuracy of the answers it gives on a math test.

Among other things, the chatbot suggested that researchers use “Star Trek” language in their questions: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.”

And the chatbot said it would answer in this format: “Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”

When the researchers used this verbiage to run the test, the chatbot followed it with its numerical answer to the question. This oddball role-playing improved the score to 59% correct answers, from 56% correct answers when researchers didn’t use prompts the AI helped create.

“Surprisingly, it appears that the [AI] model’s proficiency in mathematical reasoning can be enhanced by the expression of an affinity for ‘Star Trek,’ ” the researchers wrote. They called the chatbot’s suggested prompt language “remarkably different from anything a human practitioner would be likely to generate.”

“The takeaway here is not to try ‘Star Trek’-style prompting for other problems,” says one of the researchers, Rick Battle, because the chatbot suggested that wording specifically for this test. “The takeaway is that no human-written prompts will outperform an automatic prompt optimizer.”

Battle couldn’t explain why the “Star Trek” role-playing would make the chatbot more accurate. “This is just another example of LLMs being a black box,” he says.

Play a role yourself

Instead of asking the chatbot to play a role, try playing one yourself. If you’re asking a medical question, “you could role-play as a concerned parent, a skeptical patient, or even a curious alien studying human biology,” ChatGPT told me when I asked it how to obtain better responses. “This approach can sometimes elicit more detailed and empathetic responses,” it said.

Be genial, and don’t get mad

Chatbots seem to work better if you ask them things in a colloquial and friendly manner, but don’t go overboard. “Excessive flattery is not necessarily welcome,” according to researchers at Waseda University in Tokyo, who tested the impact of politeness levels in prompts. They found chatbot performance is “strongly related to human behavior” and that they “are sensitive and vulnerable to prompts.”

The researchers devised questions that used varying degrees of politeness and respect, from the lowest, level 1, to the highest, level 8. They used these questions to prompt answers from various chatbots.

With ChatGPT version 3.5, the researchers found that when using level 8 of polite language, the chatbot scored 60.02 on a language-understanding test, compared with the score of 51.93 for questions using level 1 of politeness.

“However, highly respectful prompts do not always lead to better results. In most conditions, moderate politeness is better,” the researchers wrote.

The same study found that becoming angry with a chatbot that isn’t responding as you would like can make things worse. “Using impolite prompts can result in the low performance of LLMs, which may lead to increased bias, incorrect answers or refusal of answers,” the researchers wrote.

Encourage it to be methodical

A study by Google found that telling a chatbot to “take a deep breath and work on this problem step by step” produced markedly better answers to math questions. Other effective phrases—created largely by chatbots themselves—included “break this down” and “a little bit of arithmetic and a logical approach will help us quickly arrive at the solution to this problem.”

Chatbots primed with these encouraging words outperformed human-designed prompts by up to 8% on a test of grade-school math word problems, and by up to 50% on the BIG-Bench tasks, the researchers said.

Bart Ziegler is a former Wall Street Journal editor. He can be reached at reports@wsj.com.

Key points outlined via chatgpt:

GPT-4o now available to freeloaders such as myself.
26min vid:

I’m using “Code Copilot” in ChatGPT to help me write some VBA code. After describing what I need to do, it gives its response, first describing the “pseudocode” (i.e., in general terms, what this code is going to do).

The last step is

Handle any potential errors gracefully.

Made me ctm.

5 Likes

This apparently is older news, but an algorithm served it up to me this morning:

1 Like

One more thing for incels to be salty about.

1 Like

I’m sure someone will come along and create ChatGPT Danni.

1 Like

It’s all fun and games until you discover your wife with ChadGPT.

3 Likes