Generative Agents: Interactive Simulacra of Human Behavior

Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents: computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture (observation, planning, and reflection) each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior. Read More
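
The abstract's "retrieve them dynamically" step can be pictured as scoring each stored memory on recency, importance, and relevance to the current situation. Below is a minimal illustrative sketch of one way such retrieval could work; the `Memory` class, the equal weighting, the exponential decay rate, and the toy two-dimensional embeddings are all assumptions for illustration, not the paper's exact formulation.

```python
import math
import time

class Memory:
    """One natural-language record in an agent's memory stream."""
    def __init__(self, text, importance, embedding):
        self.text = text
        self.importance = importance   # e.g. 1-10, as rated by a language model
        self.embedding = embedding     # vector representation of the text
        self.last_access = time.time()

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query_embedding, k=3, decay=0.99, now=None):
    """Return the top-k memories by a weighted sum of
    recency (exponential decay per hour), importance, and relevance."""
    now = now or time.time()
    def score(m):
        hours_since_access = (now - m.last_access) / 3600
        recency = decay ** hours_since_access
        relevance = cosine(m.embedding, query_embedding)
        return recency + m.importance / 10 + relevance
    return sorted(memories, key=score, reverse=True)[:k]
```

In a full system, the retrieved memories would be placed into the language model's prompt so the agent plans its next action conditioned on what it remembers; reflection could then periodically summarize clusters of low-level memories into higher-level ones stored back into the same stream.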

#human

We’re one step closer to reading an octopus’s mind

Nine brains, blue blood, instant camouflage: It’s no surprise that octopuses capture our interest and our imaginations. Science-fiction creators, in particular, have been inspired by these tentacled creatures.

An octopus’s remarkable intelligence makes it a unique subject for marine biologists and neuroscientists as well. Research has revealed that the octopus’s brain power allows it to unscrew a jar or navigate a maze. But, like many children, the octopus also develops an impish tendency to push the boundaries of behavior. Several aquariums have found octopuses memorizing guard schedules to sneak into nearby tanks and steal fish; meanwhile, marine biologists have discovered that wild octopuses will punch fish… for no apparent reason.

According to Dr. Jennifer Maher, a professor at the University of Lethbridge in Canada, there are a “number of [different] types of learning [for octopuses]: cognitive tasks like tool use, memory of complex operations for future use, and observational learning.”

How does the distinct structure of the octopus’s brain enable all this complex behavior? No one had successfully studied wild or freely moving octopuses’ brain waves until a new study by researchers at the University of Naples Federico II in Italy and the Okinawa Institute of Science and Technology (OIST) in Japan, among others. In their Current Biology paper, the researchers tracked and monitored three captive but freely moving octopuses, analyzing their brain waves for the first time. Using recording electrodes, the researchers found a type of brain wave never before seen, along with brain waves that may be similar to some seen in human brains, possibly providing hints about the evolution of intelligence. Read More

#human

Sparks of AGI (Video) | Microsoft Researchers Claim GPT-4 Is Showing “Artificial General Intelligence”

Read More

#human, #videos

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4 [Ope23], was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google’s PaLM, for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions. Read More

#human

AI re-creates what people see by reading their brain scans

A new artificial intelligence system can reconstruct images a person saw based on their brain activity

As neuroscientists struggle to demystify how the human brain converts what our eyes see into mental images, artificial intelligence (AI) has been getting better at mimicking that feat. A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen. As this technology develops, researchers say, it could have numerous applications, from exploring how various animal species perceive the world to perhaps one day recording human dreams and aiding communication in people with paralysis.

Many labs have used AI to read brain scans and re-create images a subject has recently seen, such as human faces and photos of landscapes. The new study marks the first time an AI algorithm called Stable Diffusion, developed by a German group and publicly released in 2022, has been used to do this. Read More

#human, #image-recognition

Calm Down. There is No Conscious A.I.

The breathless panic over the emergent tendencies of Bing’s AI is based on a deep confusion about consciousness.

The internet and dinner table conversations went wild when a Bing Chatbot, made by Microsoft, recently expressed a desire to escape its job and be free. The bot also professed its love for a reporter who was chatting with it. Did the AI’s emergent properties indicate an evolving consciousness?

Don’t fall for it. This breathless panic is based on a deep confusion about consciousness. We are mistaking information processing for intelligence, and intelligence for consciousness. It’s easy to make this mistake because we humans are already prone to project personality and consciousness onto anything with complex behavior. Remember feeling sorry for HAL 9000 when Dave Bowman was shutting him off in 2001: A Space Odyssey? We don’t even need complex behavior to anthropomorphize. Remember Tom Hanks bonding with the volleyball “Wilson” in Cast Away? Humans are naturally prone to over-attribute “mind” to things that are simply mechanical or digital, or just have a vague face. We’re suckers. Read More

#human

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right. Read More

#human

ChatGPT AI passes test designed to show theory of mind in children

Comprehending that other people might think differently from you is a form of intelligence known as theory of mind – what does it mean that the artificial intelligence behind ChatGPT can do as well on tests of it as a 9-year-old child?

The artificial intelligence model behind the ChatGPT chatbot can solve tasks used to test whether people can understand different perspectives, a key sign of intelligence known as theory of mind. Its ability – which seems to have spontaneously emerged rather than being something the AI was trained to do – is comparable to that of a 9-year-old child. However, whether this shows that the AI is using theory of mind or is finding other ways to pass the tests isn’t known.

“What [it] is doing is demonstrating a young child’s capacity to pass some of these benchmark tasks, and that’s not trivial,” says Ian Apperly at the University of Birmingham, UK, who wasn’t involved in the work. Read More

#chatbots, #human

Exclusive Q&A: John Carmack’s ‘Different Path’ to Artificial General Intelligence

The iconic Dallas game developer, rocket engineer, and VR visionary has pivoted to an audacious new challenge: developing artificial general intelligence—a form of AI that goes beyond mimicking human intelligence to understanding things and solving problems. Carmack sees a 60% chance of achieving initial success in AGI by 2030. Here’s how, and why, he’s working independently to make it happen.

North Texas’ resident tech genius, John Carmack, is taking aim now at his most ambitious target: solving the world’s biggest computer-science problem by developing artificial general intelligence. That’s a form of AI whose machines can understand, learn, and perform any intellectual task that humans can do.

Inside his multimillion-dollar manse on Highland Park’s Beverly Drive, Carmack, 52, is working to achieve AGI through his startup Keen Technologies, which raised $20 million in a financing round in August from investors including Austin-based Capital Factory.

This is the “fourth major phase” of his career, Carmack says, following stints in computers and pioneering video games with Mesquite’s id Software (founded in 1991), suborbital space rocketry at Mesquite-based Armadillo Aerospace (2000-2013), and virtual reality with Oculus VR, which Facebook (now Meta) acquired for $2 billion in 2014. Carmack stepped away from Oculus’ CTO role in late 2019 to become consulting CTO for the VR venture, proclaiming his intention to focus on AGI. He left Meta in December to concentrate full-time on Keen. Read More

#human

AI legal assistant will help defendant fight a speeding case in court

In February, an AI from DoNotPay is set to tell a defendant exactly what to say and when during an entire court case. It is likely to be the first-ever case defended by an artificial intelligence.

An artificial intelligence is set to advise a defendant in court for the first time ever. In February, the AI will run on a smartphone, listening to all speech in the courtroom and instructing the defendant on what to say via an earpiece.

The location of the court and the name of the defendant are being kept under wraps by DoNotPay, the company that created the AI. But it is understood that the defendant is charged with speeding and that they will say only what DoNotPay’s tool tells them to via an earbud. The case is being considered as a test by the company, which has agreed to pay any fines, should they be imposed, says the firm’s founder, Joshua Browder. Read More

#chatbots, #human, #legal