ChatGPT Gets a Computer

… Computers are, at their core, incredibly dumb; the transistors, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, each state represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.

… It is mathematical logic that reduces all of math to a series of logical statements, which is what allows that math to be computed using transistors.
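To make that reduction concrete, here is a minimal sketch in Python (illustrative, not from the article): a one-bit half adder built from nothing but logical operations, the same gates that transistor circuits implement in silicon.

```python
# A half adder: one-bit addition expressed purely as on/off logic.
# Transistor circuits implement exactly these gates in hardware.

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits using only XOR and AND."""
    total = a ^ b   # sum bit: 1 when exactly one input is 1
    carry = a & b   # carry bit: 1 only when both inputs are 1
    return total, carry

# 1 + 1 = binary 10: sum bit 0, carry bit 1
print(half_adder(1, 1))  # (0, 1)
```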

… ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. The whole point here is that there’s a great way to solve this problem: by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”.
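The pattern itself is simple to sketch (this is a conceptual illustration, not OpenAI’s actual plug-in interface): route questions that demand a precise answer to an external computation engine, and let the language model handle the conversational rest. Every name below is a hypothetical stand-in.

```python
# A conceptual sketch of tool delegation; `compute_engine` stands in for a
# real Wolfram|Alpha-style API call. Not OpenAI's actual plug-in API.

def looks_computational(question: str) -> bool:
    """Crude router: does the question demand a precise numeric answer?"""
    return any(tok in question for tok in ("+", "-", "*", "/", "sqrt"))

def compute_engine(expression: str) -> str:
    """Hypothetical stand-in for a computational knowledge engine."""
    return str(eval(expression, {"__builtins__": {}}, {}))  # toy only

def chat(question: str) -> str:
    if looks_computational(question):
        return f"The computed value is {compute_engine(question)}."
    return "…free-form model reply…"

print(chat("3.46e9 * 60"))  # e.g. transistor flips per minute at 3.46 GHz
```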

… That’s exactly what OpenAI has done, by adding support for plug-ins to ChatGPT. Read More

#chatbots

Google C.E.O. Sundar Pichai on Bard, A.I. ‘Whiplash’ and Competing With ChatGPT

For years, Google was seen as one of the most cutting-edge developers of A.I. But with OpenAI’s release of ChatGPT, and other chatbots beating Google to market, does that reputation still hold? Google’s chief executive is in an unenviable position: scramble to catch up or, in the face of potentially harmful technology, move slowly.

Today, Sundar Pichai on Google’s delicate balance between A.I. innovation and safety. Read More

#chatbots, #podcasts

Cerebras releases seven large language models for generative AI, trained on its specialized hardware

Artificial intelligence chipmaker Cerebras Systems Inc. today announced it has trained and now released seven GPT-based large language models for generative AI, making them available to the wider research community.

The new LLMs are notable as they are the first to be trained using CS-2 systems in the Cerebras Andromeda AI supercluster, which are powered by the Cerebras WSE-2 chip, specifically designed to run AI software. In other words, they’re among the first LLMs to be trained without relying on graphics processing unit-based systems. Cerebras said it’s sharing not only the models but also the weights and the training recipe used, under a standard Apache 2.0 license. Read More
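Because the weights ship under Apache 2.0, a released checkpoint can be pulled down like any other open model. A minimal sketch, assuming the models are hosted on the Hugging Face Hub under IDs like cerebras/Cerebras-GPT-1.3B (an assumption based on the public release, not something the article states):

```python
# Minimal sketch: load a Cerebras-GPT checkpoint with Hugging Face
# transformers and generate text. The model ID is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/Cerebras-GPT-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```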

#chatbots

MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action

We propose MM-REACT, a system paradigm that integrates ChatGPT with a pool of vision experts to achieve multimodal reasoning and action. In this paper, we define and explore a comprehensive list of advanced vision tasks that are intriguing to solve, but may exceed the capabilities of existing vision and vision-language models. To achieve such advanced visual intelligence, MM-REACT introduces a textual prompt design that can represent text descriptions, textualized spatial coordinates, and aligned file names for dense visual signals such as images and videos. MM-REACT’s prompt design allows language models to accept, associate, and process multimodal information, thereby facilitating the synergetic combination of ChatGPT and various vision experts. Zero-shot experiments demonstrate MM-REACT’s effectiveness in addressing the specified capabilities of interest and its wide application in different scenarios that require advanced visual understanding. Furthermore, we discuss and compare MM-REACT’s system paradigm with an alternative approach that extends language models for multimodal scenarios through joint finetuning. Code, demo, video, and visualization are available at https://multimodal-react.github.io/.
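The key mechanism is easy to picture: a vision expert’s outputs (labels, bounding boxes, file names) are serialized into plain text so a text-only model like ChatGPT can reason over them. A minimal sketch, using a made-up prompt format rather than the paper’s exact one:

```python
# Illustrative sketch of MM-REACT's core idea: textualize vision experts'
# outputs (labels, box coordinates, file names) for a text-only LLM.
# The prompt format below is assumed, not quoted from the paper.

detections = [
    {"label": "dog", "box": (34, 50, 210, 320)},
    {"label": "frisbee", "box": (180, 40, 260, 110)},
]

prompt_lines = ["Image: photo_001.jpg"]
for d in detections:
    x1, y1, x2, y2 = d["box"]
    prompt_lines.append(f"{d['label']} at ({x1},{y1})-({x2},{y2})")
prompt_lines.append("Question: What is the dog about to catch?")

prompt = "\n".join(prompt_lines)  # fed to ChatGPT as plain text
print(prompt)
```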

Read More

#chatbots

Superhuman: What can AI do in 30 minutes?

The thing that we have to come to grips with in a world of ubiquitous, powerful AI tools is how much it can do for us. The multiplier on human effort is unprecedented, and potentially disruptive. But this fact can often feel abstract.

So I decided to run an experiment. I gave myself 30 minutes and tried to accomplish as much as I could during that time on a single business project. At the end of 30 minutes I would stop. The project: to market the launch of a new educational game. AI would do all the work; I would just offer directions.

And what it accomplished was superhuman. I will go through the details in a moment, but, in 30 minutes it: did market research, created a positioning document, wrote an email campaign, created a website, created a logo and “hero shot” graphic, made a social media campaign for multiple platforms, and scripted and created a video. In 30 minutes. Read More

#chatbots, #augmented-intelligence

OpenAI says 80% of workers could see their jobs impacted by AI. These are the jobs most affected

OpenAI, the company behind the popular chatbot ChatGPT, has crunched the numbers on different jobs’ exposure to artificial intelligence (AI) – and those numbers are eye-opening.

Using its latest large language model (LLM), the recently released GPT-4, as well as human expertise, researchers investigated the potential implications of language models for occupations within the US job market.

While the researchers stress the paper is not a prediction, they found around 80 per cent of the US workforce could have at least 10 per cent of their work tasks affected by GPTs, or Generative Pre-trained Transformers. Read More

#chatbots

Try Bard and share your feedback

We’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We’re beginning with the U.S. and the U.K., and will expand to more countries and languages over time.

This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people. Read More

#chatbots

The genie escapes: Stanford copies the ChatGPT AI for less than $600

Stanford’s Alpaca AI performs similarly to the astonishing ChatGPT on many tasks – but it’s built on an open-source language model and cost less than US$600 to train up. It seems these godlike AIs are already frighteningly cheap and easy to replicate.

Six months ago, only researchers and boffins were following the development of large language models. But ChatGPT’s launch late last year sent a rocket up humanity’s backside: machines are now able to communicate in a way pretty much indistinguishable from humans. They’re able to write text and even programming code across a dizzying array of subject areas in seconds, often of a very high standard. They’re improving at a meteoric rate, as the launch of GPT-4 illustrates, and they stand to fundamentally transform human society like few other technologies could, by potentially automating a range of job tasks – particularly among white-collar workers – that people might previously have thought of as impossible to automate.

Many other companies – notably Google, Apple, Meta, Baidu and Amazon, among others – are not too far behind, and their AIs will soon be flooding into the market, attached to every possible application and device. … But what about a language model you can build yourself for 600 bucks? Read More

GitHub Here

#chatbots

Lightning AI CEO slams OpenAI’s GPT-4 paper as ‘masquerading as research’

Shortly after OpenAI’s surprise release of its long-awaited GPT-4 model yesterday, there was a raft of online criticism about what accompanied the announcement: a 98-page technical report about the “development of GPT-4.” 

Many said the report was notable mostly for what it did not include. In a section called “Scope and Limitations of this Technical Report,” it says: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

“I think we can call it shut on ‘Open’ AI: the 98 page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set,” tweeted Ben Schmidt, VP of information design at Nomic AI.  Read More

#chatbots

OpenAI’s GPT-4 Just Smoked Basically Every Test and Exam Anyone’s Ever Taken

OpenAI’s GPT-4 is officially here — and the numbers speak for themselves.

Hot on the heels of its announcement, OpenAI has released a bunch of stats about its even-more-powerful new large language model — and reader, we’re both spooked and skeptical in equal measure.

According to a new white paper, the algorithm got incredibly good scores on a number of exams including the Bar, the LSATs, the SAT’s Reading and Math tests, and the GRE. Read More

#chatbots