Artificial neural networks modeled on real brains can perform cognitive tasks
By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern and applied it to an artificial neural network (ANN). They trained the ANN to perform a cognitive memory task and observed how it went about completing it. These ‘neuromorphic’ neural networks were able to use the same underlying architecture to support a wide range of learning capacities across multiple contexts. Read More
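The summary does not spell out how a connectivity pattern would be "applied" to an ANN, but one common approach is to mask a recurrent network's weights with the empirical connectome so that units can only talk to each other where the brain data shows an edge. The sketch below illustrates that idea; the `ConnectomeRNN` class, the binary `connectivity` mask, and the toy recall task are assumptions for illustration, not the study's actual code.

```python
# Minimal sketch, assuming "connectivity" is a binary region-by-region matrix
# derived from MRI data and the task is a toy working-memory recall.
import torch
import torch.nn as nn

class ConnectomeRNN(nn.Module):
    def __init__(self, connectivity: torch.Tensor, input_dim: int, output_dim: int):
        super().__init__()
        n = connectivity.shape[0]                       # number of brain regions / units
        self.register_buffer("mask", connectivity.float())
        self.w_in = nn.Linear(input_dim, n)
        self.w_rec = nn.Parameter(torch.randn(n, n) * 0.01)
        self.w_out = nn.Linear(n, output_dim)

    def forward(self, x):                               # x: (time, batch, input_dim)
        h = torch.zeros(x.shape[1], self.mask.shape[0], device=x.device)
        for t in range(x.shape[0]):
            # Recurrent weights exist only where the connectome has an edge.
            h = torch.tanh(self.w_in(x[t]) + h @ (self.w_rec * self.mask))
        return self.w_out(h)                            # predict the remembered item
```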
The AI-Augmented Author. Writing With GPT-3 With Paul Bellow
How can authors use AI writing tools like GPT-3? What’s the best way to prompt the models to output usable text? Are there copyright issues with this approach?
Author Paul Bellow explains how he is using the tools and how authors need to embrace the possibilities rather than reject them. Read More
DeepMind unveils PonderNet, just please don’t call it ‘pondering’
DeepMind scientists suggest a way for a computer program to calculate whether or not to give up calculating. But Edgar Allan Poe would not have recognized it as “pondering.”
If you’re going to follow the news in artificial intelligence, you had better have a copy of an English dictionary with you, and maybe a couple of etymological dictionaries as well.
Today’s deep learning forms of AI are proliferating uses of ordinary words in ways that can be deeply misleading. That includes suggesting that the machine is actually doing something a person does, such as thinking, reasoning, knowing, seeing, or wondering.
The latest example is a new program from DeepMind, the AI unit of Google based in London. DeepMind researchers on Thursday unveiled what they call PonderNet, a program that can make a choice about whether to explore possibilities for a problem or to give up. Read More
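The article describes PonderNet only as a program that decides whether to keep computing or give up; the core mechanism is a learned, per-step halting probability. A rough sketch of that loop follows; the `AdaptiveComputation` class, the GRU step, and the 0.5 threshold are illustrative assumptions, not DeepMind's implementation (which trains the halting distribution end to end).

```python
# Minimal sketch of the "keep computing or stop" idea, in the spirit of PonderNet.
import torch
import torch.nn as nn

class AdaptiveComputation(nn.Module):
    def __init__(self, dim: int, max_steps: int = 20):
        super().__init__()
        self.step = nn.GRUCell(dim, dim)        # one unit of "thinking"
        self.halt = nn.Linear(dim, 1)           # learned probability of stopping now
        self.out = nn.Linear(dim, dim)
        self.max_steps = max_steps

    def forward(self, x):                       # x: (batch, dim)
        h = torch.zeros_like(x)
        for n in range(self.max_steps):
            h = self.step(x, h)
            p_halt = torch.sigmoid(self.halt(h)).mean()
            if p_halt > 0.5:                    # the program chooses to give up computing
                break
        return self.out(h), n + 1               # answer plus how many steps were spent
```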
Samsung Has Its Own AI-Designed Chip. Soon, Others Will Too
Samsung is using artificial intelligence to automate the insanely complex and subtle process of designing cutting-edge computer chips.
The South Korean giant is one of the first chipmakers to use AI to create its chips. Samsung is using AI features in new software from Synopsys, a leading chip design software firm whose tools are used by many companies. “What you’re seeing here is the first of a real commercial processor design with AI,” says Aart de Geus, the chairman and co-CEO of Synopsys.
Others, including Google and Nvidia, have talked about designing chips with AI. But Synopsys’ tool, called DSO.ai, may prove the most far-reaching because Synopsys works with dozens of companies. The tool has the potential to accelerate semiconductor development and unlock novel chip designs, according to industry watchers. Read More
Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI
Every technology fits, in its own unique way, into a far-flung network of different sites of social practice. Some technologies are employed in a specific site, and in those cases we often feel that we can warrant clear cause-and-effect stories about the transformations that have accompanied them, either in that site or others. Other technologies are so ubiquitous — found contributing to the evolution of the activities and relationships of so many distinct sites of practice — that we have no idea how to begin reckoning their effects upon society, assuming that such a global notion of “effects” even makes sense.
Computers fall in this latter category of ubiquitous technologies. In fact, from an analytical standpoint, computers are worse than that. Computers are representational artifacts, and the people who design them often start by constructing representations of the activities that are found in the sites where they will be used. This is the purpose of systems analysis, for example, and of the systematic mapping of conceptual entities and relationships in the early stages of database design. A computer, then, does not simply have an instrumental use in a given site of practice; the computer is frequently about that site in its very design. In this sense computing has been constituted as a kind of imperialism; it aims to reinvent virtually every other site of practice in its own image. Read More
Yale researchers say social media’s outrage machine has the biggest influence on moderate groups
No, you’re not imagining things: Social media is getting more extreme—and there’s a scientific reason for that.
A new study out of Yale University suggests that your Facebook and Twitter feeds are now laden with scathing political diatribes and lengthy personal commentary because we’ve been subtly trained to post them, through a system of rewards powered by “likes” and “shares.” Simply put, because content with “expressions of moral outrage” is more popular, we publish more of it. Read More
This High-Tech Chameleon Robot Can Blend in With Its Surroundings Like the Real Thing
China overtakes US in AI research
China is overtaking the U.S. in artificial intelligence research, setting off alarm bells on the other side of the Pacific as the world’s two largest economies jockey for AI supremacy.
In 2020, China topped the U.S. for the first time in the number of times academic articles on AI are cited by other researchers, a measure of research quality. Until recently, the U.S. had been far ahead of other countries in AI research. Read More
Not All Memories are Created Equal: Learning to Forget by Expiring
Attention mechanisms have shown promising results in sequence modeling tasks that require long term memory. However, not all content in the past is equally important to remember. We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information. This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently, as not all states from previous timesteps are preserved. We demonstrate that Expire-Span can help models identify and retain critical information and show it can achieve strong performance on reinforcement learning tasks specifically designed to challenge this functionality. Next, we show that Expire-Span can scale to memories that are tens of thousands in size, setting a new state of the art on incredibly long context tasks such as character-level language modeling and a frame-by-frame moving objects task. Finally, we analyze the efficiency of Expire-Span compared to existing approaches and demonstrate that it trains faster and uses less memory. Read More
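To make the abstract concrete, here is a rough sketch of the expiring-memory idea: each cached hidden state gets a predicted span in timesteps, and states older than their span are dropped before attention. The `ExpireSpanMemory` class and its hard cutoff are simplifying assumptions, not the paper's code; the actual method uses a differentiable soft mask so the spans can be learned during training.

```python
# Minimal sketch, assuming each memory is a cached hidden state with an age
# (timesteps since it was written) and a learned maximum lifetime.
import torch
import torch.nn as nn

class ExpireSpanMemory(nn.Module):
    def __init__(self, dim: int, max_span: int = 10000):
        super().__init__()
        self.span_predictor = nn.Linear(dim, 1)    # how long each memory should live
        self.max_span = max_span

    def forward(self, memories: torch.Tensor, ages: torch.Tensor):
        # memories: (num_memories, dim); ages: (num_memories,)
        spans = torch.sigmoid(self.span_predictor(memories)).squeeze(-1) * self.max_span
        keep = ages <= spans                       # expired memories are forgotten
        return memories[keep], ages[keep]

mem = ExpireSpanMemory(dim=64)
states = torch.randn(100, 64)
ages = torch.arange(100).float()
kept_states, kept_ages = mem(states, ages)         # only unexpired states stay attendable
```

Because expired states are never attended over, the attention cost grows with the number of retained memories rather than the full history, which is what lets the method scale to tens of thousands of timesteps.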