We’re using AI chatbots wrong. Here’s how to direct them. – The Denver Post




By Brian X. Chen, The New York Times Company

Anyone seduced by AI-powered chatbots like ChatGPT and Bard — wow, they can write essays and recipes! — eventually runs into what are known as hallucinations, the tendency for artificial intelligence to fabricate information.

The chatbots, which guess what to say based on information obtained from all over the internet, can’t help but get things wrong. And when they fail — by publishing a cake recipe with wildly inaccurate flour measurements, for instance — it can be a real buzzkill.

Yet as mainstream tech tools continue to integrate AI, it’s crucial to get a handle on how to use it to serve us. After testing dozens of AI products over the last two months, I concluded that most of us are using the technology in a suboptimal way, largely because the tech companies gave us poor directions.

The chatbots are least beneficial when we ask them questions and simply hope that whatever answers they come up with on their own are true — which, as it happens, is how they were designed to be used. But when directed to draw on information from trusted sources, such as credible websites and research papers, AI can carry out helpful tasks with a high degree of accuracy.

“If you give them the right information, they can do interesting things with it,” said Sam Heutmaker, the founder of Context, an AI startup. “But on their own, 70% of what you get is not going to be accurate.”

With the simple tweak of advising the chatbots to work with specific data, they generated intelligible answers and useful advice. That transformed me over the last few months from a cranky AI skeptic into an enthusiastic power user. When I went on a trip using a travel itinerary planned by ChatGPT, it went well because the recommendations came from my favorite travel websites.

Directing the chatbots to specific high-quality sources like websites from well-established media outlets and academic publications can also help reduce the production and spread of misinformation. Let me share some of the approaches I used to get help with cooking, research and travel planning.
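In practice, the "simple tweak" of pointing a chatbot at specific data often amounts to pasting the trusted material directly into the prompt and telling the bot to answer only from it. As a rough sketch (the helper function, its name, and the prompt wording here are illustrative, not taken from the article or any particular product):

```python
def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Combine a question with trusted source excerpts so the
    chatbot answers from the supplied text instead of guessing."""
    excerpts = "\n\n".join(
        f"Source {i + 1}:\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say so.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )

# Example: grounding a recipe question in a trusted excerpt
prompt = build_grounded_prompt(
    "How much flour does the cake recipe call for?",
    ["The cake recipe calls for 2 cups (250 g) of all-purpose flour."],
)
print(prompt)
```

The resulting prompt can be pasted into any chatbot; the key design choice is instructing the model to refuse rather than invent an answer when the supplied sources fall short.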

Meal planning

Chatbots like ChatGPT and Bard can write recipes that look good in theory but don’t work in practice. In an experiment by The New York Times’ Food desk in November, an early AI model created recipes for a Thanksgiving menu that included an extremely dry turkey and a dense cake.

I also ran into underwhelming results with AI-generated seafood recipes. But that changed when I experimented with ChatGPT plug-ins, which are essentially third-party apps that work with the chatbot. (Only subscribers who pay $20 a month for ChatGPT Plus, which includes access to GPT-4, the latest version of the chatbot, can use plug-ins, which can be activated in the settings menu.)

