Tag Archives: ChatBots
Could ChatGPT supercharge false narratives?
Many warn of the tool’s potential to be a misinformation superspreader, capable of instantly producing news articles, blogs and political speeches.
ChatGPT, a new artificial intelligence application from OpenAI, has captured the imagination of the internet. Some have suggested it's the largest technological advancement in modern history. In a recent interview, Noam Chomsky called it "basically high tech plagiarism." Others have suggested large language models like ChatGPT spell the end for Google search, because they eliminate the need for users to filter through multiple websites to find digestible information.
The technology is built by training on vast quantities of text drawn from the internet; from that data it learns statistical patterns of language, which it then uses to generate new content in response to user prompts. Users can ask it to produce almost any kind of text-based content.
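At a very high level, such models generate text one token at a time, with each choice conditioned on the prompt and on everything generated so far. The toy sketch below uses a tiny hand-written bigram table (nothing like ChatGPT's actual architecture or scale, which involves billions of learned parameters) purely to illustrate that autoregressive loop:

```python
import random

# A toy bigram "language model": for each word, the words that may follow it.
# Real large language models learn these statistics from internet-scale text;
# this hand-written table only illustrates the generation loop itself.
BIGRAMS = {
    "the": ["model", "prompt", "internet"],
    "model": ["generates", "predicts"],
    "generates": ["text"],
    "predicts": ["the"],
    "prompt": ["guides"],
    "guides": ["the"],
    "text": [],
    "internet": [],
}

def generate(prompt_word, max_tokens=8, seed=0):
    """Extend a one-word prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(out[-1], [])
        if not candidates:  # no known continuation: stop generating
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The key point is that nothing in the loop checks whether the output is true, only whether each word plausibly follows the last, which is exactly why fluent misinformation is so cheap to produce.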
Given its clear creative power, many are warning of ChatGPT's potential to be a misinformation superspreader, capable of instantly producing news articles, blogs, eulogies and political speeches in the style of particular politicians, writing whatever the user desires. It's not hard to see how, with only slight advances, AI-powered bot accounts on social media could become virtually indistinguishable from humans. Read More
Battle of the Behemoths
The tech giants are girding their loins for battle in the AI search space.
Microsoft announced: "Today, we're launching an all new, AI-powered Bing search engine and Edge browser, available in preview now at Bing.com, to deliver better search, more complete answers, a new chat experience and the ability to generate content. We think of these tools as an AI copilot for the web."
“AI will fundamentally change every software category, starting with the largest category of all – search,” said Satya Nadella, Chairman and CEO, Microsoft. “Today, we’re launching Bing and Edge powered by AI copilot and chat, to help people get more from search and the web.” Read More
Meanwhile, Google’s CEO, Sundar Pichai, announced Bard, a ChatGPT competitor, in a blog post today, describing the tool as an “experimental conversational AI service” that will answer users’ queries and take part in conversations. The software will be available to a group of “trusted testers” today, says Pichai, before becoming “more widely available to the public in the coming weeks.”
It's not clear exactly what capabilities Bard will have, but it seems the chatbot will be just as free-ranging as OpenAI's ChatGPT. A screenshot encourages users to ask Bard practical questions, like how to plan a baby shower or what kind of meals could be made from a list of lunch ingredients. Read More
Not to be outdone, China's largest search engine company plans to debut a ChatGPT-style application in March, initially embedding it into its main search services, according to a person familiar with the matter, who asked not to be identified discussing private information. The tool, whose name hasn't been decided, will allow users to get conversation-style search results much like OpenAI's popular platform. Read More
Exclusive Interview: OpenAI’s Sam Altman Talks ChatGPT And How Artificial General Intelligence Can ‘Break Capitalism’
As CEO of OpenAI, Sam Altman captains the buzziest — and most scrutinized — startup in the fast-growing generative AI category, the subject of a recent feature story in the February issue of Forbes.
After visiting OpenAI’s San Francisco offices in mid-January, Forbes spoke to the recently press-shy investor and entrepreneur about ChatGPT, artificial general intelligence and whether his AI tools pose a threat to Google Search. Read More
Extracting Training Data from Diffusion Models
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training. Read More
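The "generate-and-filter" idea in the abstract above can be sketched schematically: sample many outputs from a model, then flag any that land suspiciously close to a known training example. The toy code below uses 2-D vectors in place of images; the paper's actual pipeline samples from real diffusion models and uses far more sophisticated similarity measures, so all names and thresholds here are illustrative only.

```python
# Schematic sketch of a generate-and-filter memorization check,
# using toy 2-D points in place of images.

def l2_distance(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_memorized(generated, training_set, threshold=0.1):
    """Flag generated samples lying unusually close to some training example."""
    flagged = []
    for sample in generated:
        if any(l2_distance(sample, item) < threshold for item in training_set):
            flagged.append(sample)
    return flagged

training = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
outputs = [(0.01, 0.0), (3.0, 3.0), (5.0, 5.05)]  # first and last are near-copies
print(find_memorized(outputs, training))  # → [(0.01, 0.0), (5.0, 5.05)]
```

The privacy concern is precisely the flagged cases: outputs that are not novel syntheses but near-verbatim emissions of individual training items.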
#chatbots, #nlp, #diffusion
ChatGPT in one infographic!
Google is asking employees to test potential ChatGPT competitors, including a chatbot called ‘Apprentice Bard’
- Google is testing ChatGPT-like products that use its LaMDA technology, according to sources and internal documents acquired by CNBC.
- The company is also testing new search page designs that integrate the chat technology.
- More employees have been asked to help test the efforts internally in recent weeks.
#big7, #chatbots
How To Delegate Your Work To ChatGPT (Use These Prompts) with Rob Lennon
Outthink ChatGPT
- ChatGPT tries to give you results an average person would expect. If you want to write something that's novel, you almost have to start from the point of view that you have a semi-adversarial relationship with the way it's designed.
- You need to be thinking ‘Okay, how can I get past what it thinks first? How can I get into the deeper stuff that’s less average or less expected or less predictable?’
- Use a prompt where you ask something like ‘What are the counter-intuitive things here? What would I not think of on this topic? What’s something that most people believe that’s untrue? What are some uncommon answers to the same question?’
- Then you get the real list. You almost need to give it a chance to get those bad ideas out to get to the real meat of something.
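The questioning patterns in the bullets above can be turned into reusable templates so the same "outthink the average answer" probes can be applied to any topic. A minimal sketch (the function name and exact wording are my own, not from the podcast):

```python
# Hypothetical helper that wraps a topic in the counter-intuitive
# questions suggested above, ready to paste into ChatGPT.
COUNTER_PROMPTS = [
    "What are the counter-intuitive things about {topic}?",
    "What would I not think of on the topic of {topic}?",
    "What is something most people believe about {topic} that is untrue?",
    "What are some uncommon answers to common questions about {topic}?",
]

def build_prompts(topic):
    """Fill each template with the given topic."""
    return [p.format(topic=topic) for p in COUNTER_PROMPTS]

for prompt in build_prompts("email marketing"):
    print(prompt)
```

Running each generated prompt as a separate query, then comparing the answers, is one way to get past the model's most predictable first responses.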
#chatbots, #podcasts
An Indigenous Perspective on Generative AI
Earlier this month, Getty Images, one of the world's most prominent suppliers of editorial photography, stock images, and other forms of media, announced that it had commenced legal proceedings in the High Court of Justice in London against Stability AI, a British startup that says it builds AI solutions using "collective intelligence." Getty claims Stability AI infringed on its intellectual property rights by including content owned or represented by Getty Images in its training data, unlawfully copying and processing millions of copyright-protected images and their associated metadata without a license, to the detriment, Getty says, of the content's creators. The notion at the heart of Getty's assertion, that generative AI tools like Stable Diffusion and OpenAI's DALL-E 2 are in fact exploiting the creators of the images their models are trained on, could have significant implications for the field.
Earlier this month I attended a symposium on Existing Law and Extended Reality, hosted at Stanford Law School. There, I met today’s guest, Michael Running Wolf, who brings a unique perspective to questions related to AI and ownership, as a former Amazon software engineer, a PhD student in computer science at McGill University, and as a Northern Cheyenne man intent on preserving the language and culture of native people. Read More
OpenAI releases tool to detect AI-generated text, including from ChatGPT
After telegraphing the move in media appearances, OpenAI has launched a tool that attempts to distinguish between human-written and AI-generated text, such as the text produced by the company's own ChatGPT and GPT-3 models. The classifier isn't particularly accurate (its success rate is around 26%, OpenAI notes), but OpenAI argues that, when used in tandem with other methods, it could be useful in helping prevent AI text generators from being abused.
“The classifier aims to help mitigate false claims that AI-generated text was written by a human. However, it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool,” an OpenAI spokesperson told TechCrunch via email. “We’re making this initial classifier available to get feedback on whether tools like this are useful, and hope to share improved methods in the future.” Read More
