Artificial intelligence isn’t a good argument for basic income

We’re flooded with guaranteed income pilot experiments that offer some promising results but don’t seem to be moving us any closer to actual federal policy. Yet findings published today from the largest randomized basic income experiment in the US to date, backed by Sam Altman and OpenAI, deserve your attention.

The study, held from November 2020 through October 2023, gave 1,000 recipients $1,000 per month, no strings attached. It’s one of the biggest and longest trials ever run on direct cash giving. Many other basic income pilots have given people $500 or less, and rarely for more than a year or two. — Read More

#strategy

OpenAI’s “Strawberry” Model: Stage 2 Of 5-Level AI Development?

OpenAI, the creator of ChatGPT, is developing a new AI model named “Strawberry.” This initiative aims to advance AI tools towards human-level intelligence through enhanced reasoning capabilities. Building on the previous Q* project, Strawberry is designed to autonomously scan the internet and perform “deep research.”

Strawberry is a cutting-edge AI model intended to tackle complex real-world problems at scale. The Q* project it builds on was previously hailed as a technical breakthrough, enabling the creation of “far more powerful” AI models. — Read More

#strategy

Folk psychological attributions of consciousness to large language models

Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations (‘phenomenal consciousness’). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality—but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions—with potential implications for the legal and ethical status of AI. — Read More

#strategy

AI supercharges data center energy use – straining the grid and slowing sustainability efforts

The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.

The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research institute. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.
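To put the per-query figure in perspective, here is a back-of-the-envelope sketch using the EPRI numbers cited above (2.9 Wh per AI request, roughly 10x a traditional search). The daily query volume is a purely illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope energy comparison using the EPRI figure cited above.
WH_PER_AI_QUERY = 2.9                        # watt-hours per ChatGPT request (EPRI)
WH_PER_SEARCH_QUERY = WH_PER_AI_QUERY / 10   # "about 10 times" a traditional query

def daily_energy_kwh(queries_per_day: float, wh_per_query: float) -> float:
    """Total energy in kilowatt-hours for a given daily query volume."""
    return queries_per_day * wh_per_query / 1000

# Hypothetical volume of 100 million requests per day (illustrative only).
QUERIES_PER_DAY = 100_000_000
ai_kwh = daily_energy_kwh(QUERIES_PER_DAY, WH_PER_AI_QUERY)
search_kwh = daily_energy_kwh(QUERIES_PER_DAY, WH_PER_SEARCH_QUERY)
print(f"AI: {ai_kwh:,.0f} kWh/day vs. search: {search_kwh:,.0f} kWh/day")
```

At that assumed volume, AI queries would draw about 290,000 kWh per day versus 29,000 kWh for the same number of traditional searches, which is why per-query efficiency matters at data center scale.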

The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a nuclear reactor at the Three Mile Island power plant, the site of the infamous 1979 accident.

Data centers have grown continuously for decades, but the magnitude of growth in the still-young era of large language models has been exceptional. AI requires far more computational and data storage resources than the pre-AI pace of data center construction could supply. — Read More

#strategy

What is AI?

Everyone thinks they know but no one can agree. And that’s a problem

AI is sexy, AI is cool. AI is entrenching inequality, upending the job market, and wrecking education. AI is a theme-park ride, AI is a magic trick. AI is our final invention, AI is a moral obligation. AI is the buzzword of the decade, AI is marketing jargon from 1955. AI is humanlike, AI is alien. AI is super-smart and as dumb as dirt. The AI boom will boost the economy, the AI bubble is about to burst. AI will increase abundance and empower humanity to maximally flourish in the universe. AI will kill us all.

What the hell is everybody talking about? — Read More

#strategy

Why AI can’t replace science

The scientific revolution has increased our understanding of the world immensely and improved our lives immeasurably. Now, many argue that science as we know it could be rendered passé by artificial intelligence. 

… Today, AI is being increasingly integrated into scientific discovery to accelerate research, helping scientists generate hypotheses, design experiments, gather and interpret large datasets, and write papers. But the reality is that science and AI have little in common and AI is unlikely to make science obsolete. The core of science is theoretical models that anyone can use to make reliable descriptions and predictions. … The core of AI, in contrast, is data mining. … However, without an underlying causal explanation, we don’t know whether a discovered pattern is a meaningful reflection of an underlying causal relationship or meaningless serendipity. — Read More

#strategy

ChatGPT is bullshit

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems. — Read More

#strategy

SITUATIONAL AWARENESS: The Decade Ahead

You can see the future first in San Francisco. 

Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war. — Read More

#strategy

She Built an AI Product Manager Bringing in Six Figures—As A Side Hustle

How Claire Vo created ChatPRD while working a demanding job

Claire Vo built ChatPRD—an on-demand chief product officer powered by AI. It’s now used by over 10,000 product managers and is pulling in six figures in revenue. 

The best part?

Claire has a demanding day job as the chief product officer at LaunchDarkly. So she built all of ChatPRD herself—over the weekend—with AI. — Read More

#podcasts, #strategy

More New Open Models

A trio of powerful open and semi-open models give developers new options for both text and image generation. Nvidia and Alibaba released high-performance large language models (LLMs), while Stability AI released a slimmed-down version of its flagship text-to-image generator.

… Nvidia offers the Nemotron-4 340B family of language models, which includes a 340-billion-parameter base model as well as versions fine-tuned to follow instructions and to serve as a reward model in reinforcement learning from human feedback. … Alibaba introduced the Qwen2 family of language models. Qwen2 includes base and instruction-tuned versions of five models that range in size from 500 million to 72 billion parameters and process context lengths between 32,000 and 128,000 tokens. … Stability AI launched the Stable Diffusion 3 Medium text-to-image generator, a 2-billion-parameter model based on the technology that underpins Stable Diffusion 3. — Read More

#strategy