How To Delegate Your Work To ChatGPT (Use These Prompts) with Rob Lennon

Outthink ChatGPT

  • ChatGPT tries to give you results an average person would expect. If you want to write something that’s novel, you almost have to start from the point of view that you have a semi-adversarial relationship with the way it’s designed.
  • You need to be thinking ‘Okay, how can I get past what it thinks first? How can I get into the deeper stuff that’s less average or less expected or less predictable?’
  • Use a prompt where you ask something like ‘What are the counter-intuitive things here? What would I not think of on this topic? What’s something that most people believe that’s untrue? What are some uncommon answers to the same question?’ (A minimal API sketch of this kind of prompt follows the list.)
  • Then you get the real list. You almost need to give it a chance to get those bad ideas out to get to the real meat of something.
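
For readers who want to experiment with this outside the chat interface, here is a minimal sketch of the ‘counter-intuitive angles’ prompt sent through the OpenAI Python SDK (v1.x). The model name, topic, and exact wording are illustrative placeholders, not Rob Lennon’s verbatim prompts.

```python
# Minimal sketch: ask the model for the less-average, less-predictable takes
# on a topic, per the episode's advice. Model name and prompt wording are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def uncommon_angles(topic: str) -> str:
    """Push the model past its 'average' first answers on a topic."""
    prompt = (
        f"On the topic of {topic}:\n"
        "1. What are the counter-intuitive things here?\n"
        "2. What would most people not think of?\n"
        "3. What's something most people believe that's untrue?\n"
        "4. What are some uncommon answers to the usual questions?"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(uncommon_angles("email marketing"))
```

A follow-up turn along the lines of ‘now discard the obvious answers and go deeper’ mirrors the episode’s point about letting the model get its average ideas out first before pushing past them.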
Read More

#chatbots, #podcasts

An Indigenous Perspective on Generative AI

Earlier this month, Getty Images, one of the world’s most prominent suppliers of editorial photography, stock images, and other media, announced that it had commenced legal proceedings in the High Court of Justice in London against Stability AI, a British startup that says it builds AI solutions using “collective intelligence.” Getty claims Stability AI infringed on its intellectual property rights by including content owned or represented by Getty Images in its training data, unlawfully copying and processing millions of copyrighted images and their associated metadata without a license, to the detriment of the content’s creators. The notion at the heart of Getty’s assertion, that generative AI tools like Stable Diffusion and OpenAI’s DALL-E 2 are in fact exploiting the creators of the images their models are trained on, could have significant implications for the field.

Earlier this month I attended a symposium on Existing Law and Extended Reality, hosted at Stanford Law School. There, I met today’s guest, Michael Running Wolf, who brings a unique perspective to questions related to AI and ownership as a former Amazon software engineer, a PhD student in computer science at McGill University, and a Northern Cheyenne man intent on preserving the language and culture of Native people. Read More

#gans, #podcasts, #chatbots, #nlp

OpenAI releases tool to detect AI-generated text, including from ChatGPT

After telegraphing the move in media appearances, OpenAI has launched a tool that attempts to distinguish between human-written and AI-generated text, such as the text produced by the company’s own ChatGPT and GPT-3 models. The classifier isn’t particularly accurate (it correctly flags only about 26% of AI-written text, OpenAI notes), but the company argues that, used in tandem with other methods, it could help prevent AI text generators from being abused.

“The classifier aims to help mitigate false claims that AI-generated text was written by a human. However, it still has a number of limitations — so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool,” an OpenAI spokesperson told TechCrunch via email. “We’re making this initial classifier available to get feedback on whether tools like this are useful, and hope to share improved methods in the future.” Read More

#chatbots, #fake