What We Learned from a Year of Building with LLMs (Part I)

It’s an exciting time to build with large language models (LLMs). Over the past year, LLMs have become “good enough” for real-world applications. The pace of improvements in LLMs, coupled with a parade of demos on social media, will fuel an estimated $200B investment in AI by 2025. LLMs are also broadly accessible, allowing everyone, not just ML engineers and scientists, to build intelligence into their products. While the barrier to entry for building AI products has been lowered, creating products that are effective beyond a demo remains a deceptively difficult endeavor.

We’ve identified some crucial, yet often neglected, lessons and methodologies informed by machine learning that are essential for developing products based on LLMs. … Our goal is to make this a practical guide to building successful products around LLMs, drawing from our own experiences and pointing to examples from around the industry. We’ve spent the past year getting our hands dirty and gaining valuable lessons, often the hard way. While we don’t claim to speak for the entire industry, here we share some advice and lessons for anyone building products with LLMs. — Read More

#strategy

Meta says it removed six influence campaigns, including those from Israel and China

Meta says it cracked down on propaganda campaigns on its platforms, including one that used AI to influence political discourse and create the illusion of wider support for certain viewpoints, according to its quarterly threat report published today. Some campaigns pushed political narratives about current events, including campaigns coming from Israel and Iran that posted in support of the Israeli government.

The networks used Facebook and Instagram accounts to try to influence political agendas around the world. The campaigns — some of which also originated in Bangladesh, China, and Croatia — used fake accounts to post in support of political movements, promote fake news outlets, or comment on the posts of legitimate news organizations. — Read More

#cyber

In a first, OpenAI removes influence operations tied to Russia, China and Israel

Online influence operations based in Russia, China, Iran, and Israel are using artificial intelligence in their efforts to manipulate the public, according to a new report from OpenAI.

Bad actors have used OpenAI’s tools, which include ChatGPT, to generate social media comments in multiple languages, make up names and bios for fake accounts, create cartoons and other images, and debug code.

OpenAI’s report is the first of its kind from the company, which has swiftly become one of the leading players in AI. ChatGPT has gained more than 100 million users since its public launch in November 2022. — Read More

#cyber