AGI, or artificial general intelligence, is one of the hottest topics in tech today. It’s also one of the most controversial. A big part of the problem is that few people agree on what the term even means. Now a team of Google DeepMind researchers has put out a paper that cuts through the cross talk with not just one new definition for AGI but a whole taxonomy of them.
In broad terms, AGI typically means artificial intelligence that matches (or outmatches) humans on a range of tasks. But specifics about what counts as human-like, what tasks, and how many all tend to get waved away: AGI is AI, but better.
To come up with the new definition, the Google DeepMind team started with prominent existing definitions of AGI and drew out what they believe to be their essential common features.
The team also outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals). They note that no level beyond emerging AGI has been achieved. — Read More
Read the Paper
A Coder Considers the Waning Days of the Craft
I have always taken it for granted that, just as my parents made sure that I could read and write, I would make sure that my kids could program computers. It is among the newer arts but also among the most essential, and ever more so by the day, encompassing everything from filmmaking to physics. Fluency with code would round out my children’s literacy—and keep them employable. But as I write this, my wife is pregnant with our first child, due in about three weeks. I code professionally, but, by the time that child can type, coding as a valuable skill might have faded from the world.
I first began to believe this on a Friday morning this past summer, while working on a small hobby project. A few months back, my friend Ben and I had resolved to create a Times-style crossword puzzle entirely by computer. In 2018, we’d made a Saturday puzzle with the help of software and were surprised by how little we contributed—just applying our taste here and there. Now we would attempt to build a crossword-making program that didn’t require a human touch.
… Something strange started happening. Ben and I would talk about a bit of software we wanted for the project. Then, a shockingly short time later, Ben would deliver it himself. At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly. — Read More
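The article doesn’t show the code GPT-4 produced, but the task it describes is simple enough to sketch. A minimal Python version, assuming a standard Unix word list at /usr/share/dict/words (an assumption, not the authors’ actual setup), might look like this:

```python
#!/usr/bin/env python3
"""Print a hundred random lines from a dictionary file.

A minimal sketch of the task described above; the default path and
the count are assumptions, not the code GPT-4 actually produced.
"""
import random
import sys


def random_lines(path: str, count: int = 100) -> list[str]:
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f]
    # random.sample raises ValueError if the file has fewer than `count` lines
    return random.sample(lines, count)


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/usr/share/dict/words"
    print("\n".join(random_lines(path)))
```

On most Unix systems the one-liner `shuf -n 100 /usr/share/dict/words` does the same job, which is part of the point: the task is routine, and the machine now handles the routine.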
Ten Ways AI Will Change Democracy
Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.
When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.
Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology, and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming the routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride. — Read More
Generative AI will level up cyber attacks, according to new Google report
As technology gets smarter with developments such as generative AI, so do cybersecurity attacks. Google’s new cybersecurity forecast reveals the rise of AI brings new threats you should be aware of.
On Wednesday, Google launched its Google Cloud Cybersecurity Forecast 2024, a report put together through a collaboration with numerous Google Cloud security teams that deep dives into the cyber landscape for the upcoming year. — Read More
Nvidia announces new HGX H200 computing platform, with advanced memory to handle AI workloads
Nvidia Corp. today announced the introduction of the HGX H200 computing platform, a new powerful system that features the upcoming H200 Tensor Core graphics processing unit based on its Hopper architecture, with advanced memory to handle the massive amounts of data needed for artificial intelligence and supercomputing workloads.
The company announced the new platform during today’s Supercomputing 2023 conference in Denver, Colorado. It revealed that the H200 will be the first GPU built with HBM3e memory, a high-speed memory designed to accelerate large language models and high-performance computing capabilities for scientific and industrial endeavors.
The H200 is the next generation after the H100 GPU, Nvidia’s first GPU built on the Hopper architecture. It includes a new feature called the Transformer Engine, designed to speed up natural language processing models. With the addition of the new HBM3e memory, the H200 offers 141 gigabytes of memory at 4.8 terabytes per second, nearly double the capacity and 2.4 times the bandwidth of the Nvidia A100 GPU. — Read More
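As a rough sanity check on those ratios, here is a small sketch comparing the quoted H200 figures against the A100’s publicly listed specs (80 GB at roughly 2.0 TB/s for the 80 GB part, a baseline the article itself does not state):

```python
# Rough check of the capacity and bandwidth ratios quoted above.
# The A100 baseline (80 GB, ~2.0 TB/s) is taken from Nvidia's public
# spec sheet for the 80 GB part; it is an assumption, not from the article.
h200_memory_gb, h200_bandwidth_tb_s = 141, 4.8
a100_memory_gb, a100_bandwidth_tb_s = 80, 2.0

print(f"capacity ratio:  {h200_memory_gb / a100_memory_gb:.2f}x")            # ~1.76x, "nearly double"
print(f"bandwidth ratio: {h200_bandwidth_tb_s / a100_bandwidth_tb_s:.2f}x")  # 2.40x
```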
Steve Jobs President & CEO, NeXT Computer Corp and Apple. MIT Sloan Distinguished Speaker Series
How producers used AI to finish The Beatles’ ‘last’ song, ‘Now And Then’
The Beatles finally released their hotly anticipated “last” song, and as many fans speculated, the record is the completed version of John Lennon’s love song called “Now And Then.”
… As producer Giles Martin explains, a big part of why “Now And Then” has been in production limbo for so long is due to the poor quality of the cassette tape.
“The very original recording is just John playing the piano with TV in the background,” Martin tells World Cafe. “That’s part of this technology — we could now extract John from the piano and from the television.” — Read More
Charts No 1
CogVLM: Visual Expert for Pretrained Language Models
We introduce CogVLM, a powerful open-source visual language foundation model. Different from the popular shallow alignment method, which maps image features into the input space of the language model, CogVLM bridges the gap between the frozen pretrained language model and image encoder with a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision and language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B. Code and checkpoints are available at this https URL. — Read More
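The abstract’s key architectural idea is that image tokens get their own trainable attention and FFN weights (the visual expert) while text tokens keep the frozen language-model weights, with attention computed jointly over the mixed sequence. A rough PyTorch sketch of that idea, with shapes and module names that are illustrative assumptions rather than CogVLM’s actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualExpertAttention(nn.Module):
    """Attention layer with a separate, trainable QKV branch for image tokens.

    A conceptual sketch of the "visual expert" described in the abstract,
    not CogVLM's actual implementation.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.qkv_text = nn.Linear(dim, 3 * dim)   # stands in for the frozen pretrained LM weights
        self.qkv_image = nn.Linear(dim, 3 * dim)  # trainable visual expert branch
        self.out = nn.Linear(dim, dim)
        for p in self.qkv_text.parameters():      # keep the language-model branch frozen
            p.requires_grad = False

    def forward(self, x: torch.Tensor, image_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); image_mask: (batch, seq) bool, True where the token is visual
        qkv = torch.where(image_mask.unsqueeze(-1), self.qkv_image(x), self.qkv_text(x))
        q, k, v = qkv.chunk(3, dim=-1)
        b, s, d = x.shape
        h = self.num_heads
        q, k, v = (t.view(b, s, h, d // h).transpose(1, 2) for t in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v)  # joint attention over text and image tokens
        return self.out(attn.transpose(1, 2).reshape(b, s, d))
```

The same routing idea applies to the FFN layers, so pure-text inputs pass through the frozen language model unchanged, which is how the deep fusion avoids degrading NLP performance.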
Large Language Models, ALBERT — A Lite BERT for Self-supervised Learning
In recent years, the evolution of large language models has skyrocketed. BERT became one of the most popular and efficient models, making it possible to solve a wide range of NLP tasks with high accuracy. After BERT, a number of other models appeared on the scene, also demonstrating outstanding results.
The obvious trend is that, over time, large language models (LLMs) become more complex, with rapid growth in the number of parameters and the amount of data they are trained on. Research in deep learning has shown that such scaling usually leads to better results. Unfortunately, it also brings problems of its own: scalability has become the main obstacle to training, storing, and using these models effectively.
As a consequence, new LLMs have recently been developed to tackle these scalability issues. In this article, we will discuss ALBERT, which was introduced in 2020 with the goal of significantly reducing the number of BERT parameters. — Read More
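As a quick illustration of that reduction, one can compare parameter counts directly. This is a minimal sketch assuming the Hugging Face transformers library and the public bert-base-uncased and albert-base-v2 checkpoints:

```python
# Compare parameter counts of BERT-base and ALBERT-base.
# Assumes the Hugging Face `transformers` library is installed and the
# public bert-base-uncased / albert-base-v2 checkpoints can be downloaded.
from transformers import AlbertModel, BertModel


def count_params(model) -> int:
    return sum(p.numel() for p in model.parameters())


bert = BertModel.from_pretrained("bert-base-uncased")    # roughly 110M parameters
albert = AlbertModel.from_pretrained("albert-base-v2")   # roughly 12M parameters

print(f"BERT-base:   {count_params(bert) / 1e6:.0f}M parameters")
print(f"ALBERT-base: {count_params(albert) / 1e6:.0f}M parameters")
```

Most of the difference comes from ALBERT’s factorized embedding parameterization and cross-layer parameter sharing, the two techniques at the heart of its design.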
Nvidia to release new AI chips for Chinese market after export ban
Nvidia is expected to introduce new high-end AI chips for Chinese customers after its current ones were blocked from being sold in the country. China, together with Taiwan and the U.S., ranks among Nvidia’s top markets. — Read More