As CEO of OpenAI, Sam Altman captains the buzziest — and most scrutinized — startup in the fast-growing generative AI category, the subject of a recent feature story in the February issue of Forbes.
After visiting OpenAI’s San Francisco offices in mid-January, Forbes spoke to the recently press-shy investor and entrepreneur about ChatGPT, artificial general intelligence and whether his AI tools pose a threat to Google Search. Read More
Extracting Training Data from Diffusion Models
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training. Read More
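The paper's generate-and-filter idea can be illustrated with a toy sketch: sample many images for the same prompt, then flag generations that are nearly identical to one another, since a cluster of near-duplicates suggests the model is emitting a memorized training image rather than a novel sample. This is a minimal illustration, not the authors' actual pipeline; the similarity measure and threshold here are invented for the demo.

```python
import numpy as np

def near_duplicates(images, threshold=0.05):
    """Return index pairs of generations that are nearly identical.

    Many near-identical samples for one prompt suggest the model is
    emitting a memorized training image rather than a novel one.
    """
    pairs = []
    for i in range(len(images)):
        for j in range(i + 1, len(images)):
            # Normalized L2 distance between flattened images.
            a, b = images[i].ravel(), images[j].ravel()
            dist = np.linalg.norm(a - b) / np.sqrt(a.size)
            if dist < threshold:
                pairs.append((i, j))
    return pairs

# Toy demo: two near-identical "images" and one unrelated one.
rng = np.random.default_rng(0)
base = rng.random((8, 8))
samples = [base, base + 0.001, rng.random((8, 8))]
print(near_duplicates(samples))  # the first two samples match
```

In the paper's setting the candidates come from a real diffusion sampler and the filter operates on perceptual embeddings rather than raw pixels, but the structure — generate in bulk, then filter for suspicious clusters — is the same.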
#chatbots, #nlp, #diffusion

ChatGPT in one infographic!
Google is asking employees to test potential ChatGPT competitors, including a chatbot called ‘Apprentice Bard’
- Google is testing ChatGPT-like products that use its LaMDA technology, according to sources and internal documents acquired by CNBC.
- The company is also testing new search page designs that integrate the chat technology.
- More employees have been asked to help test the efforts internally in recent weeks.
#big7, #chatbots
How To Delegate Your Work To ChatGPT (Use These Prompts) with Rob Lennon
Outthink ChatGPT
- ChatGPT tries to give you results an average person would expect. If you want to write something that’s novel, you almost have to start from the point of view that you have a semi-adversarial relationship with the way it’s designed.
- You need to be thinking ‘Okay, how can I get past what it thinks first? How can I get into the deeper stuff that’s less average or less expected or less predictable?’
- Use a prompt where you ask something like ‘What are the counter-intuitive things here? What would I not think of on this topic? What’s something that most people believe that’s untrue? What are some uncommon answers to the same question?’
- Then you get the real list. You almost need to give it a chance to get those bad ideas out to get to the real meat of something.
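The prompting pattern above can be packaged into a small helper that turns any topic into that battery of "counter-intuitive" questions. The exact wording is illustrative, not a fixed recipe from the episode.

```python
def counterintuitive_prompt(topic):
    """Build a prompt that pushes past the model's 'average' answers."""
    questions = [
        f"What are the counter-intuitive things about {topic}?",
        f"What would most people not think of on {topic}?",
        f"What is something most people believe about {topic} that is untrue?",
        f"What are some uncommon answers to common questions about {topic}?",
    ]
    return "\n".join(questions)

print(counterintuitive_prompt("remote work"))
```

Sending the combined question list in one message nudges the model to surface the less predictable answers before you ask for the final piece of writing.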
#chatbots, #podcasts
An Indigenous Perspective on Generative AI
Earlier this month, Getty Images, one of the world’s most prominent suppliers of editorial photography, stock images, and other media, announced that it had commenced legal proceedings in the High Court of Justice in London against Stability AI, a British startup that says it builds AI solutions using “collective intelligence.” Getty claims Stability AI infringed its intellectual property rights by including content owned or represented by Getty Images in its training data: the company says Stability AI unlawfully copied and processed millions of copyright-protected images and their associated metadata without a license, to the detriment of the content’s creators. The notion at the heart of Getty’s assertion, that generative AI tools like Stable Diffusion and OpenAI’s DALL-E 2 are in fact exploiting the creators of the images their models are trained on, could have significant implications for the field.
Earlier this month I attended a symposium on Existing Law and Extended Reality, hosted at Stanford Law School. There, I met today’s guest, Michael Running Wolf, who brings a unique perspective to questions related to AI and ownership, as a former Amazon software engineer, a PhD student in computer science at McGill University, and as a Northern Cheyenne man intent on preserving the language and culture of native people. Read More
OpenAI releases tool to detect AI-generated text, including from ChatGPT
After telegraphing the move in media appearances, OpenAI has launched a tool that attempts to distinguish between human-written and AI-generated text, such as the text produced by the company’s own ChatGPT and GPT-3 models. The classifier isn’t particularly accurate — its success rate is around 26%, OpenAI notes — but OpenAI argues that, used in tandem with other methods, it could help prevent AI text generators from being abused.
“The classifier aims to help mitigate false claims that AI-generated text was written by a human. However, it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool,” an OpenAI spokesperson told TechCrunch via email. “We’re making this initial classifier available to get feedback on whether tools like this are useful, and hope to share improved methods in the future.” Read More
ChatGPT: Netscape Moment or Nothing Really Original
As the sudden explosion of public interest in ChatGPT continues to excite millions, we ask: Is this the tipping point for machine-driven conversation (and more)? Is ChatGPT the Netscape of our time?
In Fortune’s The inside story of ChatGPT: How OpenAI founder Sam Altman built the world’s hottest technology with billions from Microsoft, author Jeremy Kahn helpfully explains OpenAI’s history, structure, financing, and much more — at 6K words, the article covers a lot of territory. Kahn cuts straight to The Big Moment scenario in his opening paragraph [emphasis mine]:
“A few times in a generation, a product comes along that catapults a technology from the fluorescent gloom of engineering department basements, the fetid teenage bedrooms of nerds, and the lonely man caves of hobbyists — into something that your great-aunt Edna knows how to use. There were web browsers as early as 1990. But it wasn’t until Netscape Navigator came along in 1994 that most people discovered the internet. There were MP3 players before the iPod debuted in 2001, but they didn’t spark the digital music revolution. There were smartphones before Apple dropped the iPhone in 2007 too — but before the iPhone, there wasn’t an app for that.” Read More
AI Detector Pro is latest tool to detect ChatGPT-written content
A newly available online tool can purportedly detect AI-written content from ChatGPT and similar systems. Called AI Detector Pro, it works by identifying the stylistic patterns and wording characteristic of OpenAI’s GPT-based models.
… Similarly, Stanford researchers announced DetectGPT to identify content created by large language models like ChatGPT. Read More
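To make the idea of style-based detection concrete, here is a toy sketch that scores text by how many known "stock" phrases it contains. Real detectors such as AI Detector Pro and DetectGPT rely on far more sophisticated signals (DetectGPT, for example, probes a model's probability curvature); the phrase list below is an invented example, not their actual feature set.

```python
# Invented example phrases; not the actual features of any real detector.
STOCK_PHRASES = [
    "as an ai language model",
    "it is important to note",
    "in conclusion",
    "delve into",
]

def stock_phrase_score(text):
    """Fraction of the known stock phrases that appear in the text."""
    lower = text.lower()
    hits = sum(phrase in lower for phrase in STOCK_PHRASES)
    return hits / len(STOCK_PHRASES)

sample = "In conclusion, it is important to note that results vary."
print(stock_phrase_score(sample))  # 0.5
```

A higher score would merely flag text for review; as the OpenAI classifier story above illustrates, no single signal of this kind is reliable enough to act on alone.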
ChatGPT passes exams from law and business schools
ChatGPT is smart enough to pass prestigious graduate-level exams – though not with particularly high marks.
The powerful new AI chatbot tool recently passed law exams in four courses at the University of Minnesota and another exam at the University of Pennsylvania’s Wharton School of Business, according to professors at the schools.
To test how well ChatGPT could generate answers on exams for the four courses, professors at the University of Minnesota Law School recently graded the tests blindly. After completing 95 multiple choice questions and 12 essay questions, the bot performed on average at the level of a C+ student, achieving a low but passing grade in all four courses.
ChatGPT fared better during a business management course exam at Wharton, where it earned a B to B- grade. In a paper detailing the performance, Christian Terwiesch, a Wharton business professor, said ChatGPT did “an amazing job” at answering basic operations management and process-analysis questions but struggled with more advanced prompts and made “surprising mistakes” with basic math. Read More
