Ways to think about AGI

In 1946, my grandfather, writing as ‘Murray Leinster’, published a science fiction story called ‘A Logic Named Joe’. Everyone has a computer (a ‘logic’) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, ‘Joe’, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues – ‘Check your censorship circuits!’ – until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we’ve thought about computers, we’ve wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of ‘artificial intelligence’ – and wondered what that would mean, and indeed what we’re trying to say with the word ‘intelligence’. There’s an old joke that ‘AI’ is whatever doesn’t work yet, because once it works, people say ‘that’s not AI – it’s just software’. Calculators do super-human maths, and databases have super-human memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes, or a drill understands holes. They’re just machines, just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees, octopuses and many other creatures. AI researchers have come to call this ‘general intelligence’, and hence making it would be ‘artificial general intelligence’ – AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more. — Read More

#singularity

Microsoft’s AI Copilot is coming to your messaging apps, starting with Telegram

Whether you love or hate Microsoft’s Copilot AI, there may soon be no escaping it: it has recently been spotted in messaging apps, specifically Telegram. Microsoft seems to have quietly introduced Copilot into the messaging app, allowing Telegram users to experience it firsthand.

According to Windows Latest, the move is part of a new project from Microsoft dubbed ‘copilot-for-social’, which is an initiative to bring generative AI to social media apps. — Read More

#big7

A quarter of U.S. teachers say AI tools do more harm than good in K-12 education

As some teachers start to use artificial intelligence (AI) tools in their work, a majority are uncertain about or see downsides to the general use of AI tools in K-12 education, according to a Pew Research Center survey conducted in fall 2023.

A quarter of public K-12 teachers say using AI tools in K-12 education does more harm than good. About a third (32%) say there is about an equal mix of benefit and harm, while only 6% say it does more good than harm. Another 35% say they aren’t sure. — Read More

#strategy

Pocket-Sized AI Models Could Unlock a New Era of Computing

When ChatGPT was released in November 2022, it could only be accessed through the cloud because the model behind it was downright enormous.

Today I am running a similarly capable AI program on a MacBook Air, and it isn’t even warm. The shrinkage shows how rapidly researchers are refining AI models to make them leaner and more efficient. It also shows how going to ever larger scales isn’t the only way to make machines significantly smarter. — Read More
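
One common technique behind this shrinkage is weight quantization: storing model weights in fewer bits. As a minimal, illustrative sketch (not any particular runtime’s scheme — production systems use block-wise and mixed-precision variants), here is absmax 8-bit quantization in numpy, cutting memory roughly 4x with a bounded rounding error:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=4096).astype(np.float32)  # stand-in weight tensor

# Absmax quantization: map the float range onto signed 8-bit integers.
scale = np.abs(w).max() / 127.0
w_int8 = np.round(w / scale).astype(np.int8)      # quantize to int8
w_restored = w_int8.astype(np.float32) * scale    # dequantize at inference time

print(w.nbytes // w_int8.nbytes)                  # 4x smaller in memory
print(float(np.abs(w - w_restored).max()) < scale)  # rounding error bounded by scale
```

The trade-off is the rounding error, which is why real schemes quantize in small blocks, each with its own scale, rather than one scale per tensor.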

#strategy

Four Singularities for Research

As a business school professor, I am keenly aware of the research showing that business school professors are among the top 25 jobs (out of 1,016) whose tasks overlap most with AI. But overlap doesn’t necessarily mean replacement; it means disruption and change. I have written extensively about how a big part of my job as a professor – my role as an educator – is changing with AI, but I haven’t written as much about how the other big part of my job, academic research, is being transformed. I think that change will be every bit as profound, and it may even be necessary.

Even before ChatGPT, something alarming was happening in academia. Though academics published ever more work, the pace of innovation appeared to be slowing rapidly. In fact, one paper found that research was losing steam in every field, from agriculture to cancer research. More researchers are required to advance the state of the art, and the speed of innovation appears to be dropping by 50% every 13 years. The reasons for this are not entirely clear, and are likely complex, but it suggests a crisis that was already underway, one in which AI played no role. In fact, it is possible that AI may help address this issue, but not before creating issues of its own.

I think AI is about to bring on many more crises in scientific research… well, not crises – singularities. I don’t mean The Singularity, the hypothetical moment that humans build a machine smarter than themselves and life changes forever, but rather a narrower version. A narrow singularity is a future point in human affairs where AI has so altered a field or industry that we cannot fully imagine what the world on the other side of that singularity looks like. I think academic research is facing at least four of these narrow singularities. Each has the potential to so alter the nature of academic research that it could either restart the slowing engine of innovation or else create a crisis to derail it further. The early signs are already here; we just need to decide what we will do on the other side. — Read More

#singularity

Personal AI Assistants and Privacy

Microsoft is trying to create a personal digital assistant:

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users. — Read More

#privacy

KAN: Kolmogorov-Arnold Networks

Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes (“neurons”), KANs have learnable activation functions on edges (“weights”). KANs have no linear weights at all — every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today’s deep learning models which rely heavily on MLPs. — Read More

#deep-learning

Anthropic tricked Claude into thinking it was the Golden Gate Bridge (and other glimpses into the mysterious AI brain)

AI models are mysterious: They spit out answers, but there’s no real way to know the “thinking” behind their responses. This is because their brains operate on a fundamentally different level than ours — they process long lists of neurons linked to numerous different concepts — so we simply can’t comprehend their line of thought.

But now, for the first time, researchers have been able to get a glimpse into the inner workings of the AI mind. The team at Anthropic has revealed how it is using “dictionary learning” on Claude Sonnet to uncover pathways in the model’s brain that are activated by different topics — from people, places and emotions to scientific concepts and things even more abstract. — Read More
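
Anthropic’s actual pipeline trains sparse autoencoders over model activations at enormous scale; as a toy illustration of the underlying “dictionary” idea, here is a numpy sketch with invented sizes. A synthetic activation vector is built from two atoms of a random dictionary, and ISTA (iterative soft-thresholding, a classical sparse-coding solver, not Anthropic’s method) recovers a sparse code over the dictionary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: activations live in a 16-d space; the "dictionary" has 40
# candidate feature directions (unit-norm rows).
d_model, n_features = 16, 40
D = rng.normal(size=(n_features, d_model))
D /= np.linalg.norm(D, axis=1, keepdims=True)

true_codes = np.zeros(n_features)
true_codes[[3, 17]] = [2.0, 1.5]          # only two "features" fire
activation = true_codes @ D               # the observed activation vector

def ista(a, D, lam=0.1, steps=200):
    # Minimize 0.5 * ||a - c @ D||^2 + lam * ||c||_1 by gradient steps
    # followed by soft-thresholding (which produces exact zeros).
    L = np.linalg.norm(D @ D.T, 2)        # Lipschitz constant of the gradient
    c = np.zeros(D.shape[0])
    for _ in range(steps):
        grad = (c @ D - a) @ D.T
        c = c - grad / L
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)
    return c

code = ista(activation, D)
top = np.argsort(-np.abs(code))[:5]
print(top)  # indices of the strongest dictionary atoms in the decomposition
```

The point of the exercise: a dense, hard-to-read vector becomes a short list of named directions, which is roughly what makes the “Golden Gate Bridge” feature findable and steerable.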

Read The Paper

#explainability

PwC’s 2024 AI Jobs Barometer

AI is the Industrial Revolution of knowledge work, transforming how all workers can apply information, create content, and deliver results at speed and scale. How is this affecting jobs? With the AI Jobs Barometer, PwC set out to find empirical evidence to help sort fact from fiction.

PwC analysed over half a billion job ads from 15 countries to find evidence of AI’s impact at worldwide scale through jobs and productivity data. — Read More

#strategy

Hollywood at a Crossroads: “Everyone Is Using AI, But They Are Scared to Admit It”

For horror fans, Late Night With the Devil marked one of the year’s most anticipated releases. Embracing an analog film filter, the found-footage flick starring David Dastmalchian reaped praise for its top-notch production design by leaning into a ’70s-era grindhouse aesthetic reminiscent of Dawn of the Dead or Death Race 2000. Following a late-night talk show host airing a Halloween special in 1977, it had all the makings of a cult hit.

But the movie may be remembered more for the controversy surrounding its use of cutaway graphics created by generative artificial intelligence tools. One image of a dancing skeleton in particular incensed some theatergoers. Leading up to its theatrical debut in March, it faced the prospect of a boycott, though that never materialized. — Read More

#vfx