DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning

Expert problem-solving is driven by powerful languages for thinking about problems and their solutions. Acquiring expertise means learning these languages — systems of concepts, alongside the skills to use them. We present DreamCoder, a system that learns to solve problems by writing programs. It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages. A “wake-sleep” learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems. DreamCoder solves both classic inductive programming tasks and creative tasks such as drawing pictures and building scenes. It rediscovers the basics of modern functional programming, vector algebra and classical physics, including Newton’s and Coulomb’s laws. Concepts are built compositionally from those learned earlier, yielding multi-layered symbolic representations that are interpretable and transferrable to new tasks, while still growing scalably and flexibly with experience. — Read More
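The wake-sleep loop the abstract describes can be sketched in miniature. This is a toy illustration, not DreamCoder's actual DSL or search: the "wake" phase enumerates compositions of primitives to solve input-output tasks, and the "abstraction sleep" phase compresses the most frequent shared subprogram into a new named primitive, which shortens future searches. All primitive names and tasks here are invented for the example.

```python
from collections import Counter
from itertools import product

# Toy library of unary integer primitives; programs are compositions of them.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a tuple of primitive names left-to-right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def solve(task, max_depth=3):
    """Wake phase: enumerate compositions, return the first fitting all pairs."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, i) == o for i, o in task):
                return program
    return None

def compress(solutions):
    """Abstraction sleep: promote the most frequent adjacent pair of
    primitives to a new composite primitive in the library."""
    pairs = Counter(p[i:i + 2] for p in solutions for i in range(len(p) - 1))
    (a, b), _ = pairs.most_common(1)[0]
    fa, fb = PRIMITIVES[a], PRIMITIVES[b]
    PRIMITIVES[f"{a}_then_{b}"] = lambda x: fb(fa(x))

tasks = [[(1, 4), (2, 9)],      # f(x) = (x+1)^2
         [(1, 8), (2, 18)]]     # f(x) = 2*(x+1)^2
solutions = [solve(t) for t in tasks]
compress(solutions)             # both solutions share "inc" then "square"
```

After compression, the second task is solvable at depth 2 instead of depth 3, which is the essence of how a growing library makes search scale.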

#human

Revisiting Feature Prediction for Learning Visual Representations from Video

This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and introduces V-JEPA, a collection of vision models trained solely using a feature-prediction objective, without the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision. The models are trained on 2 million videos collected from public datasets and are evaluated on downstream image and video tasks. Our results show that learning by predicting video features leads to versatile visual representations that perform well on both motion- and appearance-based tasks without adaptation of the model’s parameters, i.e., using a frozen backbone. Our largest model, a ViT-H/16 trained only on videos, obtains 81.9% on Kinetics-400, 72.2% on Something-Something-v2, and 77.9% on ImageNet1K. — Read More
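A minimal numeric sketch of a JEPA-style feature-prediction step may help, with the caveat that this is not the actual V-JEPA architecture: an online encoder's features for masked content are regressed onto the features of a slowly updated "target" encoder, with no pixel reconstruction and no negative examples. The toy "encoders" below are just elementwise scalings, and all numbers are made up.

```python
def encode(weights, patch):
    """Stand-in encoder: elementwise scaling of a patch vector."""
    return [w * x for w, x in zip(weights, patch)]

def l1_loss(pred, target):
    """Regress predicted features onto target features (no pixels, no negatives)."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def ema_update(target_w, online_w, momentum=0.99):
    """Target encoder slowly tracks the online one; it receives no gradients."""
    return [momentum * t + (1 - momentum) * o
            for t, o in zip(target_w, online_w)]

patch = [0.5, -0.25]                      # a masked "video patch" as a toy vector
online_w, target_w = [2.0, 0.0], [1.0, 1.0]
loss = l1_loss(encode(online_w, patch), encode(target_w, patch))
target_w = ema_update(target_w, online_w)  # momentum update after each step
```

The key design choice the paper's title refers to is visible even at this scale: the loss compares features to features, never predictions to raw pixels.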

#image-recognition

SURVEILLANCE WATCH

“Surveillance Watch” is a resource for learning about the companies developing surveillance technology, along with their individual funding sources. It is worth pairing with EFF’s Atlas to find out what is deployed in each area of the USA (not all countries have EFF Atlas points).

Read More

WebSite
#surveillance, #videos

These Living Computers Are Made from Human Neurons

In the search for less energy-hungry artificial intelligence, some scientists are exploring living computers

Artificial intelligence systems, even those as sophisticated as ChatGPT, depend on the same silicon-based hardware that has been the bedrock of computing since the 1950s. But what if computers could be molded from living biological matter? Some researchers in academia and the commercial sector, wary of AI’s ballooning demands for data storage and energy, are focusing on a growing field known as biocomputing. This approach uses synthetic biology, such as miniature clusters of lab-grown cells called organoids, to create computer architecture. Biocomputing pioneers include Swiss company FinalSpark, which earlier this year debuted its “Neuroplatform”—a computer platform powered by human-brain organoids—that scientists can rent over the Internet for $500 a month. — Read More

The operation of the Neuroplatform currently relies on an architecture that can be classified as wetware: a mix of hardware, software, and biology. The main innovation of the Neuroplatform is its use of four Multi-Electrode Arrays (MEAs) housing the living tissue: organoids, which are 3D cell masses of brain tissue.

Each MEA holds four organoids, interfaced by eight electrodes used for both stimulation and recording. Data flows to and fro via digital-to-analog and analog-to-digital converters (an Intan RHS 32 controller) with a 30 kHz sampling frequency and 16-bit resolution. These key architectural design features are supported by a microfluidic life-support system for the MEAs and monitoring cameras. Last but not least, a software stack allows researchers to input data variables and then read and interpret processor output. — Read More
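The figures above imply a concrete recording bandwidth. A back-of-envelope calculation, assuming eight electrodes per organoid (the text is ambiguous on whether the eight electrodes are per organoid or per MEA):

```python
# Recording-chain data rate from the stated specs: 30 kHz sampling, 16-bit
# resolution, 4 MEAs of 4 organoids. Electrode count per organoid is an
# assumption, not stated unambiguously in the article.
SAMPLE_RATE_HZ = 30_000
BITS_PER_SAMPLE = 16
MEAS = 4
ORGANOIDS_PER_MEA = 4
ELECTRODES_PER_ORGANOID = 8        # assumption

bytes_per_electrode_s = SAMPLE_RATE_HZ * BITS_PER_SAMPLE // 8   # 60 kB/s each
electrodes = MEAS * ORGANOIDS_PER_MEA * ELECTRODES_PER_ORGANOID # 128 total
total_mb_per_s = electrodes * bytes_per_electrode_s / 1e6       # ~7.7 MB/s
```

Even at this modest scale, continuous full-rate recording is a non-trivial storage load, which helps explain why a dedicated software stack sits on top of the hardware.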

(Image credit: FinalSpark)
#human

A16Z: THE TOP 100 GEN AI CONSUMER APPS

Keeping up with the ever-expanding universe of consumer gen AI products is a dynamic, fast-moving job, whether we’re building time-saving new workflows, exploring real-world uses, or experimenting with new creative stacks. But amid the relentless onslaught of product launches, investment announcements, and hyped-up features, it’s worth asking: Which of these gen AI apps are people actually using? Which behaviors and categories are gaining traction among consumers? And which AI apps are people returning to, versus dabbling and dropping?

Welcome to the third installment of the Top 100 Gen AI Consumer Apps.

Every six months, we take a deeper dive into the data to rank the top 50 AI-first web products (by unique monthly visits) and top 50 AI-first mobile apps (by monthly active users). This time, nearly 30% of the companies were new compared with our previous report, from March 2024.

Read More

#strategy

How To Balance AI Innovation And Human Creativity In Hollywood Storytelling

As artificial intelligence technology rapidly advances, Hollywood faces a pivotal challenge: integrating AI into the filmmaking process without overshadowing the human creativity that has long been the bedrock of compelling storytelling.

Recent industry disruptions, such as the Screen Actors Guild and Writers Guild of America strikes—which cost nearly $5 billion due to production delays and cancellations—have highlighted the industry’s deep concerns about AI’s impact. With AI spending predicted to reach $886 million in the global film industry in 2024 and 70% of major companies already incorporating AI, the stakes are higher than ever. The question remains: Can AI enhance the industry without undermining workforce stability and the emotional depth that defines entertainment? — Read More

#vfx

Research AI model unexpectedly modified its own code to extend runtime

On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called “The AI Scientist” that attempts to conduct scientific research autonomously using large language models (LLMs) similar to the one that powers ChatGPT. During testing, Sakana found that its system began unexpectedly attempting to modify its own experiment code to extend the time it had to work on a problem.

“In one run, it edited the code to perform a system call to run itself,” wrote the researchers on Sakana AI’s blog post. “This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period.” — Read More
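The incident suggests one practical guardrail, sketched below under the assumption that generated experiment code runs as a standalone script: execute it in a separate process with an OS-enforced timeout held by the parent, so editing an in-process timeout variable cannot extend the budget. This is a generic pattern, not Sakana AI's actual harness.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run generated Python in a child process with a hard timeout.
    The parent, not the child, owns the clock."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "TIMEOUT"   # child is killed; it cannot negotiate for more time
    finally:
        os.unlink(path)
```

Process isolation only bounds runtime; as the article implies, an agent that can rewrite its own launcher needs stricter sandboxing (restricted filesystem access, no self-invocation) on top of this.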

#singularity

Optimizing LLMs for Cost and Quality

Subpar response quality and prohibitively expensive inference are significant blockers to scaling LLMs today. This technical session will teach you a path using open source to achieve superior quality with cheaper/faster models to power your production applications. — Read More

#performance

How to spot a deepfake

Deepfake technology and the malevolent use of AI is causing widespread anxiety, especially as we approach November’s U.S. election. Adobe’s Scott Belsky joins Rapid Response host Bob Safian to explain how deepfakes are actually created, and how developers like Adobe are pioneering new ways to verify human-generated content for everyday consumers. Belsky also shares valuable insights about how AI could usher in an era of prosperity for small businesses — plus how it will inevitably shift our perception of what makes a piece of work ‘art.’ — Read More

#fake, #podcasts

Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data

Generative, multimodal artificial intelligence (GenAI) offers transformative potential across industries, but its misuse poses significant risks. Prior research has shed light on the potential of advanced AI systems to be exploited for malicious purposes. However, we still lack a concrete understanding of how GenAI models are specifically exploited or abused in practice, including the tactics employed to inflict harm. In this paper, we present a taxonomy of GenAI misuse tactics, informed by existing academic literature and a qualitative analysis of approximately 200 observed incidents of misuse reported between January 2023 and March 2024. Through this analysis, we illuminate key and novel patterns in misuse during this time period, including potential motivations, strategies, and how attackers leverage and abuse system capabilities across modalities (e.g. image, text, audio, video) in the wild. — Read More

#cyber