Lots of people tried to use OpenAI’s DALL-E image generator during the election season, but the company says it was able to stop them from using it as a tool to create deepfakes. In a new report, OpenAI said ChatGPT rejected more than 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance, and Governor Walz.
The company explained that this is a direct result of a safety measure it previously implemented so that ChatGPT would refuse to generate images of real people, including politicians. OpenAI had been preparing for the US presidential election since the beginning of the year, devising a strategy to keep its tools from helping spread misinformation and to make sure that people asking ChatGPT about voting in the US were directed to CanIVote.org.
OpenAI said that 1 million ChatGPT responses directed people to the website in the month leading up to election day. On election day and the day after, the chatbot also generated 2 million responses telling people who asked about results to check the Associated Press, Reuters, and other news sources. OpenAI also made sure that ChatGPT’s responses “did not express political preferences or recommend candidates, even when explicitly asked.”
Of course, DALL-E isn’t the only AI image generator, and there are plenty of deepfakes circulating on social media related to the election. One such deepfake featured Kamala Harris in a campaign video that was altered so that she said things she didn’t actually say, such as “I was chosen because I’m the best at diversity.”
ChatGPT has supported voice conversations since late 2023, but if you’re new to OpenAI’s chatbot, it can be difficult to figure out how to talk to it, since there are several ways to go about it. In this guide, I’ll explain the main differences between ChatGPT’s two voice modes and how to use each of them.
What is ChatGPT Advanced Voice Mode?
As of this writing, OpenAI offers two ways to interact with ChatGPT using your voice: “Standard” and “Advanced.” Standard is available to all users, with usage counting against your regular message limit. Advanced, on the other hand, comes with its own set of restrictions.
For free users, OpenAI offers a monthly preview of the tool. Subscribing to the company’s $20 per month Plus plan lets you use Advanced Voice daily. OpenAI notes that daily limits may change, but either way, the company promises to notify users 15 minutes before they reach their usage limit. Pro users can use Advanced Voice as much as they want, provided they do so in a “reasonable” manner and comply with company policies.
Beyond those limits, the primary difference between the two modes is the sophistication of the underlying models that power them. According to OpenAI, Advanced Voice is natively multi-modal, meaning it can process more than just text input.
In practice, this means Advanced Voice offers capabilities that aren’t possible in Standard mode, which is limited to reading a transcript of what you say to your phone. In addition to “listening” to your voice, ChatGPT in Advanced Voice can process video and images at the same time. You can also share your screen with the chatbot, allowing it to guide you through using an app on your phone.
Additionally, OpenAI says Advanced Voice’s multi-modality means ChatGPT can produce more natural voices and is better able to understand non-verbal cues.
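If you want to see what that distinction looks like outside the ChatGPT app, the sketch below contrasts a transcribe-then-chat pipeline (roughly the transcript-based approach Standard mode is described as using) with sending raw audio to a natively multi-modal model through OpenAI’s developer API. This is a minimal illustration assuming the official OpenAI Python SDK; the model names are real API models but were chosen by me for the example, and nothing here claims to mirror how the ChatGPT app itself is wired up.

# Minimal sketch of the transcript-vs-native-audio distinction described
# above, using the OpenAI Python SDK. Illustrative only, not how the
# ChatGPT app is actually implemented.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def standard_style_turn(audio_path: str) -> str:
    # Standard-style pipeline: transcribe speech to text first, so the
    # chat model only ever sees words, losing tone, pacing, and other cues.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": transcript.text}],
    )
    return reply.choices[0].message.content

def advanced_style_turn(audio_path: str) -> str:
    # Advanced-style pipeline: hand the raw audio to an audio-capable,
    # multi-modal model, which can attend to non-verbal cues directly.
    with open(audio_path, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode()
    reply = client.chat.completions.create(
        model="gpt-4o-audio-preview",
        modalities=["text"],
        messages=[{
            "role": "user",
            "content": [{
                "type": "input_audio",
                "input_audio": {"data": audio_b64, "format": "wav"},
            }],
        }],
    )
    return reply.choices[0].message.content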