QCon London: Drawing on his 8 years of experience in AI, Paul Iusztin breaks down the core components of a scalable architecture, emphasizing the importance of RAG. He shares practical patterns, including the Feature/Training/Inference (FTI) architecture, and provides a detailed use case for creating a “Second Brain” AI assistant, covering everything from data pipelines to observability and agentic layers. — Read More
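As a rough illustration of the FTI pattern the talk refers to, the sketch below splits a RAG-style “Second Brain” into an offline feature pipeline that embeds notes into a vector store and an online inference pipeline that retrieves context before generation (a training pipeline for fine-tuning would sit between the two). The library choices (sentence-transformers, chromadb) and every name here are assumptions made for illustration, not the speaker’s actual stack.

```python
# Minimal sketch of the Feature/Training/Inference split for a RAG assistant.
# Libraries and names are illustrative assumptions, not the talk's own design.
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
store = chromadb.Client().create_collection("second_brain")

def feature_pipeline(docs: list[str]) -> None:
    """Offline: embed raw notes/articles and load them into the vector store."""
    store.add(
        ids=[str(i) for i in range(len(docs))],
        documents=docs,
        embeddings=embedder.encode(docs).tolist(),
    )

# (A training pipeline that fine-tunes the model on curated features is omitted.)

def inference_pipeline(question: str, generate) -> str:
    """Online: retrieve the top chunks and ground the LLM answer on them (RAG)."""
    hits = store.query(
        query_embeddings=embedder.encode([question]).tolist(),
        n_results=3,
    )
    context = "\n".join(hits["documents"][0])
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")
```

The value of the split is that each pipeline can be scheduled, scaled, and monitored on its own, which is what makes the overall architecture observable rather than a single opaque service.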
AI-Ready Data: A Technical Assessment. The Fuel and the Friction.
Most organizations operate data ecosystems built over decades of system acquisitions, custom development, and integration projects. These systems were designed for transactional processing and business reporting, not for the real-time, high-quality, semantically rich data requirements of modern AI applications.
Research shows that 50% of organizations are classified as “Beginners” in data maturity, 18% are “Dauntless” with high AI aspirations but poor data foundations, 18% are “Conservatives” with strong foundations but limited AI adoption, and only 14% are “Front Runners” achieving both data maturity and AI scale. — Read More
When the government can see everything: How one company – Palantir – is mapping the nation’s data
When the U.S. government signs contracts with private technology companies, the fine print rarely reaches the public. Palantir Technologies, however, has attracted more and more attention over the past decade because of the size and scope of its contracts with the government.
Palantir’s two main platforms are Foundry and Gotham. Each does different things. Foundry is used by corporations in the private sector to help with global operations. Gotham is marketed as an “operating system for global decision making” and is primarily used by governments.
I am a researcher who studies the intersection of data governance, digital technologies and the U.S. federal government. I’m observing how the government is increasingly pulling together data from various sources, and the political and social consequences of combining those data sources. Palantir’s work with the federal government using the Gotham platform is amplifying this process. — Read More
Hollywood Battles AI in Film
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only condition (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, analyzed the essays using NLP, and scored them with help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning. — Read More
SpikingBrain Technical Report: Spiking Brain-inspired Large Models
Mainstream Transformer-based large language models (LLMs) face significant efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly. These constraints limit their ability to process long sequences effectively. In addition, building large models on non-NVIDIA computing platforms poses major challenges in achieving stable and efficient training and deployment. To address these issues, we introduce SpikingBrain, a new family of brain-inspired models designed for efficient long-context training and inference. SpikingBrain leverages the MetaX GPU cluster and focuses on three core aspects: i) Model Architecture: linear and hybrid-linear attention architectures with adaptive spiking neurons; ii) Algorithmic Optimizations: an efficient, conversion-based training pipeline compatible with existing LLMs, along with a dedicated spike coding framework; iii) System Engineering: customized training frameworks, operator libraries, and parallelism strategies tailored to the MetaX hardware.
Using these techniques, we develop two models: SpikingBrain-7B, a linear LLM, and SpikingBrain-76B, a hybrid-linear MoE LLM. These models demonstrate the feasibility of large-scale LLM development on non-NVIDIA platforms. SpikingBrain achieves performance comparable to open-source Transformer baselines while using exceptionally low data resources (continual pre-training of ∼150B tokens). Our models also significantly improve long-sequence training efficiency and deliver inference with (partially) constant memory and event-driven spiking behavior. For example, SpikingBrain-7B achieves more than 100× speedup in Time to First Token (TTFT) for 4M-token sequences. Our training framework supports weeks of stable large-scale training on hundreds of MetaX C550 GPUs, with the 7B model reaching a Model FLOPs Utilization (MFU) of 23.4%. In addition, the proposed spiking scheme achieves 69.15% sparsity, enabling low-power operation. Overall, this work demonstrates the potential of brain-inspired mechanisms to drive the next generation of efficient and scalable large model design. — Read More
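For intuition only, the toy below shows how an integrate-and-fire style encoder turns dense activations into a sparse, event-driven spike train, which is the property behind claims like 69.15% sparsity and low-power operation. It is a generic illustration of spike coding, not SpikingBrain’s actual coding scheme or its adaptive spiking neurons; the threshold, step count, and reset rule are invented for the example.

```python
# Generic integrate-and-fire style spike coding of activations, to illustrate
# how spiking induces sparsity. This is NOT SpikingBrain's actual scheme.
import numpy as np

def spike_encode(activations: np.ndarray, threshold: float = 1.0, steps: int = 4):
    """Accumulate each activation over `steps` timesteps and emit a binary
    spike whenever the running membrane potential crosses `threshold`."""
    potential = np.zeros_like(activations)
    spikes = []
    for _ in range(steps):
        potential += activations
        fired = potential >= threshold                 # event-driven: most entries stay 0
        spikes.append(fired.astype(np.float32))
        potential = np.where(fired, potential - threshold, potential)  # reset by subtraction
    return np.stack(spikes)                            # shape: (steps, *activations.shape)

acts = np.maximum(np.random.randn(8), 0.0)             # toy post-ReLU activations
spike_train = spike_encode(acts)
print("sparsity:", 1.0 - spike_train.mean())           # fraction of silent (non-spiking) events
```

Because downstream compute only needs to touch the timesteps and units that actually fire, higher sparsity translates directly into fewer operations and lower energy use.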
Sam Altman says that bots are making social media feel ‘fake’
X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted.
The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex, the software programming service OpenAI launched in May to take on Anthropic’s Claude Code. — Read More
Will OpenAI’s Critterz make or break AI filmmaking?
You may have missed the AI movie Critterz when it appeared as a short animation a couple of years ago. It didn’t exactly set the world on fire, with comments on YouTube including “I’d call this garbage, but that’d be an insult to garbage” and “This was the worst 5 minutes I will never get back”.
Nevertheless, it seems OpenAI, the maker of ChatGPT, saw potential in the ‘nature documentary turned comedy’. It’s putting its name behind the experimental short’s expansion into a feature-length movie intended for a debut at the Cannes Film Festival in May 2026, followed by a full cinema release. Will it show that AI is ready to take on Hollywood and slash the costs of filmmaking, or will it do the opposite, like the ‘Netflix of AI’ Showrunner? — Read More
OpenAI Is Bringing an AI-Driven Feature-Length Animated Movie to Cannes
You knew it was bound to happen, and now, it has. The Wall Street Journal reports that OpenAI is lending its services to the production of a feature-length animated film called Critterz, which is aiming to be done in time for next year’s Cannes Film Festival. That would put its production time at nine months, which is unheard of for a feature-length animated film, but that’s because it’ll be created using AI.
According to the paper, using OpenAI’s resources, production companies Vertigo Films and Native Foreign will hire actors to voice characters created by feeding original drawings into generative AI software. The entire film is expected to cost less than $30 million and will only take about 30 people to complete. — Read More
RL-as-a-Service will outcompete AGI companies (and that’s good)
Companies drive AI development today. There are two stories you could tell about the mission of an AI company:
AGI: AI labs will stop at nothing short of Artificial General Intelligence. With enough training and iteration AI will develop a general ability to solve any (feasible) task. We can leverage this general intelligence to solve any problem, including how to make a profit.
Reinforcement Learning-as-a-Service (RLaaS)[1]: AI labs have an established process for training language models to attain high performance on clean datasets. By painstakingly creating benchmarks for problems of interest, they can solve any given problem with RL leveraging language models as a general-purpose prior. This is essentially a version of the CAIS model. — Read More
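The recipe in the second story reduces to: a customer supplies a benchmark with a programmatic reward, and RL shifts the model’s probability mass toward answers that score well. The toy below illustrates that loop with a softmax over three canned answers and a simplified bandit-style update; the reward function, answers, and learning rate are invented for illustration and bear no relation to any lab’s actual RLaaS pipeline.

```python
# Toy illustration of the RLaaS loop: a benchmark verifier supplies the reward,
# and a simplified bandit-style update raises the probability of rewarded
# answers. The "model" is a softmax over canned answers, purely for illustration.
import math, random

answers = ["A", "B", "C"]
logits = {a: 0.0 for a in answers}                 # stand-in for model parameters

def reward(task: str, answer: str) -> float:
    """Benchmark verifier (assumed): 1.0 if the answer passes, else 0.0."""
    return 1.0 if answer == "B" else 0.0           # "B" is the correct answer here

def sample() -> str:
    """Sample an answer from the softmax over logits."""
    z = sum(math.exp(v) for v in logits.values())
    r, acc = random.random(), 0.0
    for a, v in logits.items():
        acc += math.exp(v) / z
        if r <= acc:
            return a
    return answers[-1]

for step in range(200):                            # simplified reward-weighted updates
    a = sample()
    advantage = reward("toy-task", a) - 1.0 / len(answers)
    logits[a] += 0.1 * advantage                   # raise prob of rewarded answers

print(max(logits, key=logits.get))                 # converges to "B"
```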