AI isn’t coming for your current job. It’s coming for your next one — and has already wrecked it
… According to a wide variety of institutions and publications, the past two years have featured the strongest labor environment in decades. The Commerce Department announced in February of 2023 that “Unemployment is at its lowest level in 54 years.” When this April’s official numbers showed that the U.S. recorded its 27th straight month of sub-4% unemployment, tying the second-longest streak since World War II, the Center for Economic and Policy Research was but one of a multitude of sources celebrating: “This matches the streak from November 1967 to January 1970, often viewed as one of the most prosperous stretches in US history.” In June, Investopedia practically gushed that “U.S. workers are in the midst of one of the best job markets in history. They haven’t had this much job security since the 1960s, and haven’t seen a longer stretch of low unemployment since the early 1950s.”
Arguments about statistical methodology aside, there’s nothing to suggest that those headline numbers were incorrect to any significant extent. But raw unemployment is considered a lagging economic indicator, and there is quite a bit of evidence supporting the premise that, below the surface, the biggest drivers of new employment — online job listings — have become elaborate façades destined to cause more problems than they solve for those seeking work. — Read More
DeepMind hits milestone in solving maths problems — AI’s next grand challenge
After beating humans at everything from the game of Go to strategy board games, Google DeepMind now says it is on the verge of besting the world’s top students at solving mathematics problems.
The London-based machine-learning company announced on 25 July that its artificial intelligence (AI) systems had solved four of the six problems that were given to school students at the 2024 International Mathematical Olympiad (IMO) in Bath, UK, this month. The AI produced rigorous, step-by-step proofs that were marked by two top mathematicians and earned a score of 28/42 — just one point shy of the gold-medal range.
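For context, the 28/42 figure follows directly from the competition’s scoring: each of the six IMO problems is graded out of 7 points, so four complete solutions earn 28 of a possible 42, one point below the 29-point gold cutoff implied by the article. A minimal sketch of that arithmetic, for illustration only:

```python
# Back-of-the-envelope check of the IMO score cited above.
# Each IMO problem is graded out of 7 points; there are 6 problems per contest.
POINTS_PER_PROBLEM = 7
NUM_PROBLEMS = 6

problems_fully_solved = 4                     # per the article
score = problems_fully_solved * POINTS_PER_PROBLEM
max_score = NUM_PROBLEMS * POINTS_PER_PROBLEM
gold_cutoff = 29                              # implied by 28 being "one point shy" of gold

print(f"Score: {score}/{max_score}")                    # Score: 28/42
print(f"Points short of gold: {gold_cutoff - score}")   # Points short of gold: 1
```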
… DeepMind and other companies are in a race to eventually have machines give proofs that would solve substantial research questions in maths. Problems set at the IMO — the world’s premier competition for young mathematicians — have become a benchmark for progress towards that goal, and have come to be seen as a “grand challenge” for machine learning, the company says. — Read More
Artificial intelligence isn’t a good argument for basic income
We’re flooded with guaranteed income pilot experiments that offer some promising results, but don’t seem to be moving us any closer to actual federal policy. Yet findings published today from the largest randomized basic income experiment in the US to date, backed by Sam Altman and OpenAI, should get your attention.
The study, held from November 2020 through October 2023, gave 1,000 recipients $1,000 per month, no strings attached. It’s one of the biggest and longest trials ever run on direct cash giving. Many other basic income pilots have given people $500 or less, and rarely for more than a year or two. — Read More
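To get a rough sense of the scale involved, the numbers quoted above imply an outlay on the order of tens of millions of dollars. A minimal sketch of that arithmetic, assuming every recipient received all 36 monthly payments between November 2020 and October 2023 (the article does not spell out the payment schedule, so this is an estimate):

```python
# Rough scale of the cash transfers described above.
# Assumes 36 monthly payments per recipient (Nov 2020 - Oct 2023), which the
# article implies but does not state outright.
recipients = 1_000
monthly_payment_usd = 1_000
months = 36

total_disbursed = recipients * monthly_payment_usd * months
print(f"Approximate total disbursed: ${total_disbursed:,}")  # ~$36,000,000
```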
OpenAI’s “Strawberry” Model: Stage 2 Of 5-Level AI Development?
OpenAI, the creator of ChatGPT, is developing a new AI model named “Strawberry.” This initiative aims to advance AI tools towards human-level intelligence through enhanced reasoning capabilities. Building on the previous Q* project, Strawberry is designed to autonomously scan the internet and perform “deep research.”
Strawberry is a cutting-edge AI model intended to tackle complex real-world problems at scale. It builds on the Q* project, previously hailed as a technical breakthrough that enabled the creation of “far more powerful” AI models. — Read More
Folk psychological attributions of consciousness to large language models
Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations (‘phenomenal consciousness’). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality—but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions—with potential implications for the legal and ethical status of AI. — Read More
AI supercharges data center energy use – straining the grid and slowing sustainability efforts
The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.
The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research firm. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.
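To make the per-query comparison concrete, here is a minimal back-of-the-envelope sketch. The 2.9 Wh figure is the EPRI estimate quoted above; the ~0.3 Wh per traditional search and the daily request volume are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope comparison of per-query energy use.
chatgpt_wh_per_query = 2.9   # watt-hours per ChatGPT request (EPRI, per the article)
search_wh_per_query = 0.3    # assumed watt-hours per traditional search (implied by the ~10x claim)

ratio = chatgpt_wh_per_query / search_wh_per_query
print(f"An AI query uses roughly {ratio:.0f}x the electricity of a traditional search")

# Illustrative scale-up under a hypothetical volume of 100 million requests per day
# (this volume is an assumption, not a figure from the article).
daily_requests = 100_000_000
daily_mwh = daily_requests * chatgpt_wh_per_query / 1_000_000  # Wh -> MWh
print(f"{daily_requests:,} requests/day ≈ {daily_mwh:,.0f} MWh/day")  # ~290 MWh/day
```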
The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a nuclear reactor at the Three Mile Island power plant, the site of the infamous 1979 accident.
Data centers have had continuous growth for decades, but the magnitude of growth in the still-young era of large language models has been exceptional. AI requires a lot more computational and data storage resources than the pre-AI rate of data center growth could provide. — Read More
What is AI?
Everyone thinks they know but no one can agree. And that’s a problem
AI is sexy, AI is cool. AI is entrenching inequality, upending the job market, and wrecking education. AI is a theme-park ride, AI is a magic trick. AI is our final invention, AI is a moral obligation. AI is the buzzword of the decade, AI is marketing jargon from 1955. AI is humanlike, AI is alien. AI is super-smart and as dumb as dirt. The AI boom will boost the economy, the AI bubble is about to burst. AI will increase abundance and empower humanity to maximally flourish in the universe. AI will kill us all.
What the hell is everybody talking about? — Read More
Why AI can’t replace science
The scientific revolution has increased our understanding of the world immensely and improved our lives immeasurably. Now, many argue that science as we know it could be rendered passé by artificial intelligence.
… Today, AI is being increasingly integrated into scientific discovery to accelerate research, helping scientists generate hypotheses, design experiments, gather and interpret large datasets, and write papers. But the reality is that science and AI have little in common and AI is unlikely to make science obsolete. The core of science is theoretical models that anyone can use to make reliable descriptions and predictions. … The core of AI, in contrast, is data mining. … However, without an underlying causal explanation, we don’t know whether a discovered pattern is a meaningful reflection of an underlying causal relationship or meaningless serendipity. — Read More
ChatGPT is bullshit
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems. — Read More
SITUATIONAL AWARENESS: The Decade Ahead
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
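Taken literally, “another zero every six months” is a tenfold jump in planned cluster cost each half-year. A toy sketch of how quickly that compounds, starting from the $10 billion figure in the text and treating the trend as a clean exponential (an assumption made purely for illustration):

```python
# Toy illustration of the "another zero every six months" scaling claim.
# The $10B starting point is from the text; extrapolating the 10x cadence
# forward as a smooth exponential is an assumption for illustration only.
cost_usd = 10e9  # $10 billion clusters
for half_years in range(4):
    print(f"after {half_years * 6:>2} months: ~${cost_usd:,.0f}")
    cost_usd *= 10
# -> ~$10B, ~$100B, ~$1T, ~$10T at 0, 6, 12 and 18 months
```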
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war. — Read More