ChatGPT unleashed a tidal wave of innovation with large language models (LLMs). More companies than ever before are bringing the power of natural language interaction to their products. The adoption of language model APIs is creating a new stack in its wake. To better understand the applications people are building and the stacks they are using to do so, we spoke with 33 companies across the Sequoia network, from seed stage startups to large public enterprises. We spoke with them two months ago and last week to capture the pace of change. As many founders and builders are in the midst of figuring out their AI strategies themselves, we wanted to share our findings even as this space is rapidly evolving. — Read More
Monthly Archives: June 2023
AI unlikely to gain human-like cognition, unless connected to real world through robots
Connecting artificial intelligence systems to the real world through robots and designing them using principles from evolution is the most likely way AI will gain human-like cognition, according to research from the University of Sheffield.
In a paper published in Science Robotics, Professor Tony Prescott and Dr Stuart Wilson from the University’s Department of Computer Science say that AI systems are unlikely to resemble real brain processing, no matter how large their neural networks or training datasets become, if they remain disembodied. — Read More
I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI
Last year, Meta’s Chief AI Scientist Yann LeCun proposed a new architecture intended to overcome key limitations of even the most advanced AI systems today. His vision is to create machines that can learn internal models of how the world works so that they can learn much more quickly, plan how to accomplish complex tasks, and readily adapt to unfamiliar situations.
We’re excited to introduce the first AI model based on a key component of LeCun’s vision. This model, the Image Joint Embedding Predictive Architecture (I-JEPA), learns by creating an internal model of the outside world, which compares abstract representations of images (rather than comparing the pixels themselves). I-JEPA delivers strong performance on multiple computer vision tasks, and it’s much more computationally efficient than other widely used computer vision models. The representations learned by I-JEPA can also be used for many different applications without needing extensive fine-tuning. For example, we train a 632M parameter visual transformer model using 16 A100 GPUs in under 72 hours, and it achieves state-of-the-art performance for low-shot classification on ImageNet, with only 12 labeled examples per class. Other methods typically take two to ten times more GPU-hours and achieve worse error rates when trained with the same amount of data.
Our paper on I-JEPA will be presented at CVPR 2023 next week, and we’re also open-sourcing the training code and model checkpoints today. — Read More
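The distinguishing idea above — that I-JEPA’s objective compares abstract representations of image regions rather than raw pixels — can be illustrated with a toy sketch. This is a hypothetical simplification for intuition only: the real I-JEPA uses Vision Transformer encoders, masked patch prediction, and an exponential-moving-average target encoder; here the “encoders” are just fixed linear maps, and all names (`encoder`, `W_pred`, patch sizes) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy stand-in for an image encoder: a fixed nonlinear projection
    from flattened pixels to an abstract embedding."""
    return np.tanh(x @ W)

D_PIX, D_EMB = 64, 16                                # pixels per patch, embedding dims
W_target = rng.normal(size=(D_PIX, D_EMB)) * 0.1     # target-encoder weights
W_context = W_target.copy()                          # context encoder (EMA copy in practice)

context_patch = rng.normal(size=(D_PIX,))            # visible region of the image
target_patch = rng.normal(size=(D_PIX,))             # masked region to be predicted

# A predictor maps the context embedding to a guess at the target embedding.
W_pred = rng.normal(size=(D_EMB, D_EMB)) * 0.1

z_context = encoder(context_patch, W_context)
z_target = encoder(target_patch, W_target)           # treated as a fixed target (no gradient)
z_pred = z_context @ W_pred

# JEPA-style loss: measured in representation space, not pixel space.
jepa_loss = np.mean((z_pred - z_target) ** 2)

# Contrast: a generative/pixel-reconstruction objective compares raw pixels.
pixel_loss = np.mean((context_patch - target_patch) ** 2)

print(f"embedding-space loss over {D_EMB} dims: {jepa_loss:.4f}")
print(f"pixel-space loss over {D_PIX} dims:     {pixel_loss:.4f}")
```

Because the loss lives in the low-dimensional embedding space, the model never has to account for pixel-level detail it cannot predict — one intuition for the efficiency gains the post describes.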
Mistral AI secures €105M in Europe’s largest-ever seed round
Mistral AI was founded only four weeks ago by a trio of AI researchers. The company has yet to develop its first product. It hopes to take on OpenAI with truly open-sourced models and datasets, setting itself apart by targeting enterprises instead of consumers. — Read More
EU lawmakers sign off on world’s first comprehensive AI rules
European lawmakers on Wednesday gave approval to the world’s first comprehensive rules governing AI. The legislation still needs to go through negotiations before a final version is passed later this year. …[T]he AI Act gained strong backing from European Parliament members, with 499 votes in favor, 28 against, and 93 abstentions. — Read More
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. — Read More
The Beatles will release a new and ‘final record’ this year, Paul McCartney says — with a little help from AI
It’s the news fans of the Fab Four thought they would never see: The Beatles will release a new song this year featuring vocals from John Lennon, with a little help from artificial intelligence, Paul McCartney said Tuesday.
Speaking to BBC Radio 4, the 80-year-old McCartney confirmed that the band — whose cultural influence may have been unmatched in the 20th century — will release “the final Beatles record” this year, having used cutting-edge technology to extract Lennon’s voice from an old demo recording.
“We just finished it up and it’ll be released this year,” he said. — Read More
Loneliness, insomnia linked to work with AI systems
Employees who frequently interact with artificial intelligence systems are more likely to experience loneliness that can lead to insomnia and increased after-work drinking, according to research published by the American Psychological Association.
Researchers conducted four experiments in the U.S., Taiwan, Indonesia and Malaysia. Findings were consistent across cultures. The research was published online in the Journal of Applied Psychology. — Read More
Geoffrey Hinton – Two Paths to Intelligence
Why trying to “shape” AI innovation to protect workers is a bad idea
Instead, we should empower workers and create mechanisms for redistribution.
I’ve been to a number of meetings and panels recently where intellectuals from academia, industry, media, and think tanks gather to discuss technology policy and the economics of AI. The Chatham House Rule prevents me from saying who said what (and even without that rule, I don’t like to name names), but one perspective I’ve encountered increasingly often is the idea that we should try to “shape” or “steer” the direction of AI innovation in order to make sure it augments workers instead of replacing them. The economist Daron Acemoglu has been advocating very similar things recently:
According to Acemoglu and [his coauthor] Johnson, the absence of new tasks created by technologies designed solely to automate human work will…simply dislocate the human workforce and redirect value from labour to capital. On the other hand, technologies that not only enhance efficiency but also generate new tasks for human workers have a dual advantage of increasing marginal productivity and yielding more positive effects on society as a whole… — Read More