Functional magnetic resonance imaging (fMRI) was used to measure brain activity in seven people while they listened to more than 2 hours of stories from The Moth Radio Hour. This data was used to estimate voxel-wise models that predict brain activity in each voxel (volumetric pixel) based on the meaning of the words in the stories. Read the paper describing this research here.
This site provides an interactive 3D viewer for models fit to one subject’s brain. Read More
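The voxel-wise modeling described above can be sketched as regularized linear regression from word-level semantic features to each voxel's response. The sketch below is a minimal illustration of that general technique using synthetic data, not the paper's actual pipeline; the dimensions and feature construction are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative dimensions (assumptions, not the experiment's):
# T fMRI time points, D semantic feature dimensions, V voxels.
rng = np.random.default_rng(0)
T, D, V = 300, 50, 1000

# X: stimulus features, e.g. word-embedding vectors pooled within
# each fMRI volume (a stand-in for the semantic features used in
# voxel-wise encoding models).
X = rng.standard_normal((T, D))

# Y: measured responses, one column per voxel. Synthetic here:
# a linear map of the features plus noise.
true_weights = rng.standard_normal((D, V))
Y = X @ true_weights + 0.5 * rng.standard_normal((T, V))

# Fit a ridge regression per voxel (sklearn fits all voxels at once
# when Y is 2-D). The L2 penalty guards against overfitting the
# high-dimensional feature space.
model = Ridge(alpha=1.0)
model.fit(X, Y)

# model.coef_ has shape (V, D): one weight vector per voxel, which
# is what a viewer like this one visualizes on the cortical surface.
predictions = model.predict(X)  # shape (T, V)
print(model.coef_.shape, predictions.shape)
```

In practice such models are evaluated by predicting held-out brain responses, voxel by voxel, rather than on the training stimuli as in this toy example.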
Daily Archives: September 20, 2022
DeepMind Says It Had Nothing to Do With Research Paper Saying AI Could End Humanity
After a researcher with a position at DeepMind—the machine intelligence firm owned by Google parent Alphabet—co-authored a paper claiming that AI could feasibly wipe out humanity one day, DeepMind is distancing itself from the work.
The paper was published recently in the peer-reviewed AI Magazine, and was co-authored by researchers at Oxford University and by Marcus Hutter, an AI researcher who works at DeepMind. The first line of Hutter’s website states the following: “I am Senior Researcher at Google DeepMind in London, and Honorary Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra.” The paper, which currently lists his affiliation as DeepMind and ANU, runs through some thought experiments about humanity’s future with a superintelligent AI that operates using schemes similar to those of today’s machine learning programs, such as reward-seeking. It concluded that this scenario could erupt into a zero-sum game between humans and AI that would be “fatal” if humanity loses out. Read More
Read the Paper
D-ID, the company behind Deep Nostalgia, lets you create AI-generated videos from a single image
Israeli AI company D-ID, which provided technology for projects like Deep Nostalgia, is launching a new platform where users can upload a single image and text to generate video. With this new site, called Creative Reality Studio, the company is targeting sectors such as corporate training and education, internal and external corporate communications, and product marketing and sales.
The platform is pretty simple to use: Users can upload an image of a presenter or select one from the pre-created presenters to start the video creation process. Paid users can access premium presenters that are more “expressive,” with better facial expressions and hand movements than the default ones. After that, users can either type out a script or simply upload an audio clip of someone’s speech. Users can then select a language (the platform supports 119 languages), a voice, and a style such as cheerful, sad, excited, or friendly.
The company’s AI-based algorithms will generate a video based on these parameters. Users can then distribute the video anywhere. The firm claims that the algorithm takes only half the video’s duration to generate a clip, but in our tests, it took a couple of minutes to generate a one-minute video. This could change depending on the type of presenter and language you select. Read More
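The workflow described above, which includes a presenter image, a text script or audio clip, and a language, voice, and style, maps naturally onto a request-style API call. The sketch below only assembles such a request as a plain dictionary; the field names and values are assumptions for illustration, not D-ID's documented API.

```python
# Hypothetical sketch of the image-plus-script workflow described
# above. Field names and values are illustrative assumptions, not
# D-ID's documented API.
import json

def build_video_request(image_url, script_text, language="en-US",
                        voice="illustrative-voice", style="cheerful"):
    """Assemble the parameters the platform workflow asks for:
    a presenter image, a text script (or an audio clip), and a
    language, voice, and speaking style."""
    return {
        "presenter_image": image_url,  # uploaded or pre-created presenter
        "script": {"type": "text", "input": script_text},
        "language": language,          # the platform supports 119 languages
        "voice": voice,
        "style": style,                # e.g. cheerful, sad, excited, friendly
    }

payload = build_video_request(
    "https://example.com/presenter.jpg",
    "Welcome to our onboarding course.",
)
print(json.dumps(payload, indent=2))
```

A real integration would send this payload to the provider's video-generation endpoint and poll for the finished clip, since generation takes on the order of minutes per minute of video in our tests.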