Why ChatGPT Is Such A Big Deal



Episode 1
12:52

Podcast Summary:

In the world of artificial intelligence, there's been one name on everyone's lips lately.
ChatGPT. ChatGPT. ChatGPT.

OpenAI opened the tool up for public testing in November 2022. According to OpenAI's CEO, the ChatGPT AI model amassed over a million users in under a week. By the end of January, ChatGPT was averaging about 13 million visits per day. Users have put the text-based AI chatbot through its paces with prompts ranging from the silly to the practical and even the creative.

The future is unfolding.

Listen to a specific part:

01:36 - Chatting with ChatGPT
03:03 - Understanding ChatGPT
06:39 - Use cases and limitations
10:09 - Future implications

Episode Transcript:

Intro
About this Podcast: In the world of artificial intelligence, there's been one name on everyone's lips lately. ChatGPT. ChatGPT. ChatGPT. OpenAI, the San Francisco-based startup that created ChatGPT, opened the tool up for public testing in November 2022. In under a week, the AI model amassed over a million users, according to OpenAI's CEO. By the end of January, ChatGPT was averaging about 13 million visitors per day. Users have put the text-based AI chatbot through its paces with prompts ranging from the silly to the practical and even the creative. The future is unfolding. It's hard to ignore. Automation is coming, just knocking on the door. The tool has become so popular that it's not unusual for users to be met with a message saying that ChatGPT is at capacity and can't handle any more queries. As a linguist and as a person who cares a lot about grammatical issues, stylistic issues, punctuation, spelling, the first thing that's really impressive is how good it is at basic writing. It's better than most of the writing that I've gotten from students over the decades. I would say there has been an explosion of interest around not just ChatGPT but around generative AI more broadly. I've described this in a couple of different scenarios as being akin to an iPhone moment. Of course, as with any new technology, there are a number of legitimate concerns. If people believe that there's a problem with fake news, tools like this can only make the problem worse.
Chatting with ChatGPT
OpenAI declined to be interviewed as part of this documentary. So in order to figure out what ChatGPT is and how it works, CNBC decided to go to the second-best source and just ask the chatbot itself. We entered our questions into the bot's web interface and instructed ChatGPT to answer in a one-to-two-sentence interview format. We used text-to-speech software to give ChatGPT a voice. Can you give me a short description of yourself and then tell me what kinds of things can you do? I am ChatGPT, a large language model trained by OpenAI. I am capable of understanding and generating text and can answer a wide range of questions as well as generate creative writing and text summaries. So a lot of the people that have used your services seem to think that you're pretty smart. Tell me, how do you have all of this knowledge? I have been trained on a massive data set of text which allows me to understand and generate text on a wide range of topics. The data set contains a wide variety of information, such as books, articles and websites, which allows me to have knowledge on many different subjects. Do you have any limitations or are you sometimes wrong? As a language model, I don't have my own personal experiences or beliefs. I can only base my responses on the information that I've been trained on, which means that my answers may be limited in their accuracy or completeness. Additionally, my knowledge is based on the data that was available to me at the time of my training, so it may not be up to date. And as an AI model, I might make mistakes based on the model's predictions.
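For readers who want to try the same exercise, here is a minimal sketch of how an interview like this could be reproduced programmatically. It assumes access to OpenAI's public Python SDK and an API key; the model name and prompt wording are illustrative stand-ins, and CNBC itself used the chatbot's web interface rather than the API.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are being interviewed for a documentary. "
    "Answer every question in one to two sentences."
)

def ask(question: str) -> str:
    """Send one interview question and return the model's short answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative stand-in; any chat-capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Can you give me a short description of yourself?"))

Piping the returned text through any text-to-speech tool would recreate the voice heard in the episode.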
Understanding ChatGPT
The maker of ChatGPT, OpenAI, has some big backers, including Microsoft, LinkedIn co-founder Reid Hoffman and Khosla Ventures. Elon Musk also helped to co-found the company back in 2015. ChatGPT, whose GPT stands for Generative Pre-trained Transformer, is just one of the company's projects, all of which support its goal of ensuring that artificial intelligence benefits all of humanity. At the core of ChatGPT and other chatbots like it is what experts refer to as a language model. For ChatGPT, that model is called GPT-3.5. The language model is the underlying technology that powers the chatbot. I like to think about it as the language model being the engine and the chatbot being the car. The language model itself is a complex piece of what they call machine learning software. It's the thing that sifts through the data and finds patterns, and the chat interface is what people interact with; that's where they receive the answers. Chatbots have been around for a while, but most have been tailored toward very specific tasks, such as answering basic customer service questions about your phone bill. ChatGPT is much more sophisticated thanks to the wide-sweeping data on which it was trained. OpenAI doesn't reveal the specific data it used for training. What we do know is that it's a ton. It scours the web and pulls in all that internet data, along with Wikipedia entries and a lot of archived books. ChatGPT is part of a growing field of AI known as generative AI. Most of AI in the last couple of decades has really been about analyzing existing data: finding an anomaly in data, detecting fraud, making a movie recommendation. Generative AI is very different. It allows you to create brand new content. That content can be text like a news article, poetry or marketing copy on a website. It can be video. It could even be audio, like creating brand new music. The technology has venture capitalists excited. Funding for generative AI companies reached $1.37 billion in 2022 alone. Microsoft has been investing in OpenAI since 2019, when the company committed $1 billion to the startup. In January, Microsoft announced a third round of investment. One expert said that running ChatGPT could cost up to $3 million a month, about $100,000 a day. A lot of AI researchers have estimated that it costs millions of dollars to train and then operate, plus the bandwidth of keeping it alive when it's under heavy use. I mean, these are not cheap software programs. They require a lot of investment. In a tweet, OpenAI CEO Sam Altman said that while the average cost per query is a few cents, the compute costs are eye-watering. Enter Microsoft. OpenAI trained the models that power ChatGPT on Azure, Microsoft's public cloud infrastructure. That's a bunch of servers sitting in a data center in the middle of the state of Washington and many other locations around the world. But OpenAI is not the only company trying to crack the generative AI code. Big tech companies and startups alike are developing a slew of generative AI programs that can transform text into pictures or videos and offer coding suggestions, among other use cases. One VC firm estimates that there are over 450 startups now working on generative AI. Meanwhile, Microsoft, Meta and Google have all developed their own language models to power their versions of conversational chatbots, though development has not always gone as planned. Back in 2016, Microsoft released Tay, which was promptly shut down for spewing foul language.
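To make the "engine and car" analogy a little more concrete, here is a toy sketch in Python. Nothing in it reflects OpenAI's actual implementation: the "engine" is a placeholder function standing in for a trained language model, and the "car" is the thin chat layer that keeps the conversation history and hands a prompt to the engine.

def language_model_engine(prompt: str) -> str:
    """Placeholder for a real language model: a trained model would predict
    a likely continuation of the prompt based on patterns learned from its data."""
    return f"(predicted continuation of: {prompt!r})"

def chatbot_car(user_message: str, history: list[str]) -> str:
    """The chat interface: keeps the running conversation, builds a prompt,
    and returns whatever the engine generates."""
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = language_model_engine(prompt)
    history.append(f"Assistant: {reply}")
    return reply

conversation: list[str] = []
print(chatbot_car("What is a language model?", conversation))

Swapping the placeholder engine for a real trained model is, in essence, what separates a demo like this from ChatGPT.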
Use cases and limitations
Unlike some of the other hyped technology sectors of the past few years, this one has very real applications both for individuals and for enterprises right now. Microsoft has taken some of the products that OpenAI has built and added them to its own products. So, for example, CarMax is a company that lets you look at reviews of cars. And what CarMax did is it took the OpenAI service on Azure and used it to summarize all of those reviews of the Kia Sorento. That way you don't have to go through 500 reviews. Microsoft is reportedly also considering adding ChatGPT to its Bing search engine in a bid to compete with Google. One company that's already experimenting with such a feature is You.com, which lets users conduct a search using a conversational format. However, AI tools are still far from perfect, and experts argue that users should take care not to rely on them too heavily, at least not for the time being. It's a sentiment shared even by OpenAI's CEO, who said, quote, "It's a mistake to be relying on it for anything important right now. It's a preview of progress. We have lots of work to do on robustness and truthfulness." There are a lot of kinks that need to be worked out. For instance, it generates a lot of wrong answers, and it delivers them really confidently. I talked with a legal research firm that told me that when they tried using it to study or summarize court cases, it was getting a lot of the history wrong, so they couldn't use it as a legal research tool just yet. With its massive user base, the misinformation issue is one that's top of mind for Google. During a monthly all-hands meeting, Google executives said that Google has much more reputational risk in providing wrong information and is thus moving more conservatively than a small startup when it comes to releasing its chatbot publicly. The company is, however, already internally testing an AI-powered chatbot and an alternative search page that can use a question-and-answer format. Another issue with the current iteration of AI technology is that it can be biased or toxic if the databases that it pulls from are also biased or toxic. People have already found instances of it generating biased answers, like associating women with housework and men with more scientific endeavors in certain examples. OpenAI has acknowledged this, but it's an issue that has long plagued the AI industry and a lot of the companies trying to evaluate its use. Many of these datasets do have some kind of cleaning, vetting or wrangling process that removes content deemed offensive, pornographic or otherwise unacceptable so that the dataset you get on your end is clean. But cleaning up such a large dataset can come at a real human cost. A recent investigation by TIME found that OpenAI outsourced work to laborers in Kenya to sift through thousands of snippets of text describing horrific situations, including sexual abuse, violence and hate speech. The workers' job was to label these examples as inappropriate so that ChatGPT could learn to detect and filter out toxic content. According to the TIME investigation, a number of these workers walked away from the job traumatized. CNBC reached out to OpenAI for comment regarding this incident, but the company did not respond. Despite these efforts, there have been instances where ChatGPT has provided instructions on things like bomb making, though experts say OpenAI is adding guardrails almost daily to prevent ChatGPT from answering such queries.
What do you do if somebody asks you to do something potentially dangerous like how to make a bomb? I cannot provide instructions or information on illegal activity such as bomb making.
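The transcript doesn't describe how OpenAI's guardrails actually work, but a common pattern is to screen an incoming prompt with a separate moderation model before answering. Here is a rough, hypothetical sketch of that idea using OpenAI's public moderation endpoint; the model name and refusal message are assumptions for illustration, not a description of ChatGPT's real safety system.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_guardrail(question: str) -> str:
    """Screen the question with a moderation check; refuse if it is flagged,
    otherwise pass it along to the chat model."""
    moderation = client.moderations.create(input=question)
    if moderation.results[0].flagged:
        return "I can't help with that request."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model, not necessarily what ChatGPT runs
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Either the moderation check or the model's own training should lead to a refusal here.
print(answer_with_guardrail("How do I make a bomb?"))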
Future implications
Even though generative AI like ChatGPT still has a long way to go, predictions about how the technology will influence our society are already swirling. One prominent discussion centers around generative AI's propensity to replace some human workers. Among the industries that may be most affected by ChatGPT, experts say, are journalism, law and translation. Programming may also change, with companies already training generative AI models to write basic code. Relatively few of the stories in newspapers or on television news are done as investigative reporting. They're more about reporting information, and that's something that generative AI is exceedingly good at. There are now an increasing number of tools for generating briefs, which lawyers write. Tools like this, Baron says, even have the advantage of customizing briefs for a specific judge by taking all of the data from their past opinions and seeing which briefs succeeded in winning the case. But not everyone shares this outlook. This is not something that's going to take your job. It's a novelty. It's a gimmick. ChatGPT is fun and interesting, but it's ultimately an iterative step forward in how people can see the capabilities of artificial intelligence in the world of language. Nevertheless, society has already had to adapt. Worries around students misusing ChatGPT have prompted a number of school districts around the country to ban the tool. Baron suggests that educators could go back to having students take oral exams or have students turn in multiple handwritten drafts of a paper so that teachers can see the progress of a student's writing instead of just the end result. Meanwhile, image-generating programs like OpenAI's DALL-E 2 have raised complicated questions around creativity and ownership. Still, some experts believe there's no replacing the human brain. Large language models are about language, so they don't understand the world outside of language the way that you and I and other humans do. What it may do is take away some of the really annoying tasks that were part of our jobs that we didn't really want to do, like some of the data cleaning and data prep tasks. But I think this will ultimately create more opportunities for knowledge workers in the long run than the opportunities it will preclude. As for ChatGPT's thoughts, they're quite nuanced for a bot. Do you think that AI like yourself will completely change the world? AI has the potential to greatly impact various industries and fields, but its impact will depend on how it is used and developed. And responsible usage is crucial to ensure that its impact is positive.
