Digital artists and visual effects pros acknowledge that artificial intelligence-driven tools can contribute to the creative process. But in a new episode of The Hollywood Reporter‘s podcast series Behind the Screen, they lament that jobs will be lost, that ethics will be challenged, and that the technology could lead to a “dehumanization of art.” The episode is an edited version of a candid panel discussion about AI, recorded Oct. 19 at the View VFX and computer graphics conference in Torino, Italy. — Read More
The Humane AI Pin apparently runs GPT-4 and flashes a ‘Trust Light’ when it’s recording
Humane’s first gadget, the AI Pin, is currently slated to launch on November 9th, but we just got our best look at it yet thanks to a somewhat unexpected source. Before it has even been announced, the AI Pin is one of Time Magazine’s “Best Inventions of 2023,” along with everything from the Framework Laptop 16 to the Samsung Galaxy Z Flip 5 to the Bedtime Buddy alarm clock.
The write-up is brief and relatively light on specifics, but it does reveal a few new details, along with the best photo we’ve seen yet of the device. It appears the AI Pin will attach magnetically to your clothing and use “a mix of proprietary software and OpenAI’s GPT-4” to power its many features. (If you remember, that includes everything from making calls to translating speech to understanding the nutritional information in a candy bar.) — Read More
Minds of machines: The great AI consciousness conundrum
David Chalmers was not expecting the invitation he received in September of last year. As a leading authority on consciousness, Chalmers regularly circles the world delivering talks at universities and academic meetings to rapt audiences of philosophers—the sort of people who might spend hours debating whether the world outside their own heads is real and then go blithely about the rest of their day. This latest request, though, came from a surprising source: the organizers of the Conference on Neural Information Processing Systems (NeurIPS), a yearly gathering of the brightest minds in artificial intelligence.
Less than six months before the conference, an engineer named Blake Lemoine, then at Google, had gone public with his contention that LaMDA, one of the company’s AI systems, had achieved consciousness. Lemoine’s claims were quickly dismissed in the press, and he was summarily fired, but the genie would not return to the bottle quite so easily—especially after the release of ChatGPT in November 2022. Suddenly it was possible for anyone to carry on a sophisticated conversation with a polite, creative artificial agent.
Chalmers was an eminently sensible choice to speak about AI consciousness. He’d earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible. — Read More
This new data poisoning tool lets artists fight back against generative AI
The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
A new tool lets artists add invisible changes to the pixels in their art before uploading it online; if the work is then scraped into an AI training set, those changes can cause the resulting model to break in chaotic and unpredictable ways.
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix. — Read More
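Nightshade’s actual perturbation method isn’t described in the excerpt above; it optimizes pixel changes specifically to corrupt the concept associations a model learns. Purely as a toy illustration of the more basic idea of “invisible changes to the pixels,” here is a minimal sketch that adds noise bounded tightly enough to be imperceptible (the file names and the `eps` bound are illustrative, not Nightshade’s):

```python
# Toy illustration only: bounded, human-imperceptible pixel noise.
# This is NOT Nightshade's algorithm, which optimizes perturbations
# to mislead what an image-generating model learns, not random noise.
import numpy as np
from PIL import Image

def perturb(in_path: str, out_path: str, eps: int = 4) -> None:
    """Shift each channel value by at most +/-eps, then save losslessly."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-eps, eps + 1, size=img.shape, dtype=np.int16)
    out = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_path, format="PNG")  # PNG avoids lossy recompression

perturb("artwork.png", "artwork_poisoned.png")  # hypothetical file names
```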
Improving Wikipedia verifiability with AI
Verifiability is a core content policy of Wikipedia: claims need to be backed by citations. Maintaining and improving the quality of Wikipedia references is an important challenge and there is a pressing need for better tools to assist humans in this effort. We show that the process of improving references can be tackled with the help of artificial intelligence (AI) powered by an information retrieval system and a language model. This neural-network-based system, which we call SIDE, can identify Wikipedia citations that are unlikely to support their claims, and subsequently recommend better ones from the web. We train this model on existing Wikipedia references, therefore learning from the contributions and combined wisdom of thousands of Wikipedia editors. Using crowdsourcing, we observe that for the top 10% most likely citations to be tagged as unverifiable by our system, humans prefer our system’s suggested alternatives compared with the originally cited reference 70% of the time. To validate the applicability of our system, we built a demo to engage with the English-speaking Wikipedia community and find that SIDE’s first citation recommendation is preferred twice as often as the existing Wikipedia citation for the same top 10% most likely unverifiable claims according to SIDE. Our results indicate that an AI-based system could be used, in tandem with humans, to improve the verifiability of Wikipedia. — Read More
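The abstract describes the shape of the system: a verifier scores how well a cited source supports a claim, and a retriever proposes better-scoring alternatives from the web for citations flagged as weak. As a rough sketch of that loop, with a crude lexical-overlap score standing in for the paper’s neural verifier (all names and thresholds here are illustrative, not SIDE’s actual code):

```python
# Hypothetical sketch of a SIDE-style citation-improvement loop.
# The overlap "verifier" is a crude stand-in for the learned model.

def support_score(claim: str, passage: str) -> float:
    """Crude lexical-overlap stand-in for a trained verification model."""
    claim_words = set(claim.lower().split())
    passage_words = set(passage.lower().split())
    return len(claim_words & passage_words) / max(len(claim_words), 1)

def improve_citation(claim: str, cited: str, candidates: list[str],
                     flag_threshold: float = 0.5) -> str | None:
    """Flag a weakly supported claim and return a better source, if any."""
    current = support_score(claim, cited)
    if current >= flag_threshold:
        return None                       # existing citation looks adequate
    best = max(candidates, key=lambda c: support_score(claim, c), default=None)
    if best is not None and support_score(claim, best) > current:
        return best                       # recommend the replacement
    return None

claim = "The Eiffel Tower was completed in 1889."
weak_source = "Paris is the capital of France."
pool = ["Construction of the Eiffel Tower was completed in March 1889."]
print(improve_citation(claim, weak_source, pool))
```

The real system’s gains come from the verifier and retriever trained on existing Wikipedia references; the control flow above is just the scaffolding.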
Skynet in China: a nightmare surveillance network out of a Black Mirror episode and the Terminator movies
In recent years, China has made significant strides in developing and deploying a massive surveillance network known as “Skynet.” No kidding: it really is named after the AI in the Terminator movies. This ambitious project aims to enhance public security and control through the widespread use of advanced technologies such as artificial intelligence (AI), facial recognition, and big data analysis. — Read More
Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data
This edition explores forecasts and implications around: (1) business models likely to become antiquated as AI proliferates in more industries, (2) reflections on another round of AI launches in the creative world, and (3) some provocative data and surprises at the end, as always.
If you’re new, here’s the rundown on what to expect. This ~monthly analysis is written for founders + investors I work with, colleagues, and a small group of subscribers. I aim for quality, density, and provocation vs. frequency and trendiness. My goal is to ignite discussion and add some kindling to the fire of feedback and serendipitous dot connecting. — Read More
ScaleAI wants to be America’s AI arms dealer
Alexandr Wang grew up in the shadow of the Los Alamos National Laboratory — the birthplace of the nuclear bomb. Now, the 26-year-old CEO of artificial intelligence company ScaleAI intends to play a key role in the next major age of geopolitical conflict.
Scale, which was co-founded by Wang in 2016 to help other companies organize and label data to train AI algorithms, has been aggressively pitching itself as the company that will help the U.S. military in its existential battle with China, offering to help the Pentagon pull better insights out of the reams of information it generates every day, build better autonomous vehicles and even create chatbots that can help advise military commanders during combat.
… In May, Scale became the first AI company to have a “large language model” — the tech behind chatbots such as ChatGPT — deployed on a classified network after it signed a deal with the Army’s XVIII Airborne Corps. Scale’s chatbot, known as “Donovan,” is meant to summarize intelligence and help commanders make decisions faster. — Read More
“Math is hard” — if you are an LLM – and why that matters
Some Reply Guy on X assured me yesterday that “transformers can multiply”. He even pointed me to a paper, allegedly offering proof.
The paper turns out to be pretty great, doing exactly the right test, but it doesn’t prove what its title alleges. More like the opposite.
The paper alleges “GPT Can Solve Mathematical Problems Without a Calculator.” But it doesn’t really show that, except in the sense that I can shoot free throws in the NBA. Sure, I can toss the ball in the air, and sometimes I might even sink a shot, the more so with practice; but I am probably going to miss a lot, too. And 70% would be great for free throws; for multiplication it sucks. 47323 * 19223 = 909690029 and it shall always be; no partial credit for coming close. — Read More
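The point about partial credit is easy to make concrete: multiplication has exactly one right answer, and exact arithmetic is something any calculator (or three lines of Python) gets right every time:

```python
# The multiplication from the text: one exact answer, no partial credit.
a, b = 47323, 19223
print(a * b)               # 909690029, every time
print(909690031 == a * b)  # False: off by two is "close", but simply wrong
```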
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. — Read More
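The headline figures (roughly 80% of workers with at least 10% of tasks affected, 19% with at least 50%) are threshold shares computed over per-occupation task exposure ratings. A hedged sketch of that aggregation step, on invented data (the paper’s rubric for rating task exposure is the substantive contribution and is not reproduced here):

```python
# Hypothetical sketch of the aggregation behind the headline numbers:
# given per-worker task exposure flags (invented data, not the paper's),
# compute the share of workers with at least min_fraction of tasks exposed.

def share_of_workers(task_exposure: dict[str, list[bool]],
                     min_fraction: float) -> float:
    """Fraction of workers whose exposed-task share meets min_fraction."""
    hit = sum(
        1 for tasks in task_exposure.values()
        if tasks and sum(tasks) / len(tasks) >= min_fraction
    )
    return hit / len(task_exposure)

workers = {
    "paralegal":  [True, True, True, False],
    "welder":     [False, False, True, False],
    "copywriter": [True, True, True, True],
}
print(share_of_workers(workers, 0.10))  # share with >=10% of tasks affected
print(share_of_workers(workers, 0.50))  # share with >=50% of tasks impacted
```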