Recent Updates
The secret to sustainable AI may have been in our brains all along
Researchers have developed a new method for training artificial intelligence that dramatically improves its speed and energy efficiency by mimicking the structured wiring of the human brain. The approach, detailed in the journal Neurocomputing, creates AI models that can match or even exceed the accuracy of conventional networks while using a small fraction of the computational resources.
The study was motivated by a growing challenge in the field of artificial intelligence: sustainability. Modern AI systems, such as the large language models that power generative AI, have become enormous. They are built with billions of connections, and training them can require vast amounts of electricity and cost tens of millions of dollars. As these models continue to expand, their financial and environmental costs are becoming a significant concern. — Read More
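The wiring idea can be illustrated with a toy sketch (not the paper's actual method; a random mask stands in here for the structured, brain-like wiring): compare the connection count of a fully dense layer with one that keeps only a small fraction of connections.

```python
import random

random.seed(0)

def count_dense(n_in, n_out):
    # Conventional fully connected layer: every input wired to every output.
    return n_in * n_out

def sparse_mask(n_in, n_out, density=0.1):
    # Sparse wiring: keep each connection with probability `density`.
    # (The paper's approach is structured, not random; this is illustrative.)
    return [[random.random() < density for _ in range(n_out)]
            for _ in range(n_in)]

n_in, n_out = 1024, 1024
mask = sparse_mask(n_in, n_out, density=0.1)
active = sum(sum(row) for row in mask)

print("dense connections:", count_dense(n_in, n_out))
print("sparse connections kept:", active)
```

With roughly 90% of the connections gone, the layer stores and multiplies about a tenth of the weights, which is where the compute and energy savings come from.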
The End of Cloud Inference
Most people picture the future of AI the same way they picture the internet: somewhere far away, inside giant buildings full of humming machines. Your phone or laptop sends a request, a distant data center does the thinking, and the answer streams back. That story has been useful for getting AI off the ground, but it’s not how it ends. For a lot of everyday tasks, the smartest place to run AI will be where the data already lives: on your device.
We already accept this in another computationally demanding field: graphics. No one renders every frame of a video game in a warehouse and streams the pixels to your screen. Your device does the heavy lifting locally because it’s faster, cheaper, and more responsive. AI is heading the same way. The cloud won’t disappear, but it will increasingly act like a helpful backup or “bigger battery,” not the default engine for everything. — Read More
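The economics behind that shift can be sketched with a toy latency budget. All numbers below are illustrative assumptions, not measurements:

```python
# Illustrative latency budget, in milliseconds (assumed, not measured).
cloud = {
    "network_round_trip": 60,   # device <-> distant data center
    "queueing": 20,             # waiting for a shared GPU
    "inference": 15,            # large model on fast hardware
}
local = {
    "network_round_trip": 0,    # data never leaves the device
    "queueing": 0,              # no shared resource to wait on
    "inference": 45,            # smaller model on a slower chip
}

print("cloud total:", sum(cloud.values()), "ms")
print("local total:", sum(local.values()), "ms")
```

Even when the remote model computes faster, the round trip and queueing can dominate; for everyday tasks, the smaller on-device model wins on responsiveness, which mirrors the graphics analogy above.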
AI Broke Interviews
Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with LeetCode-style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers. I still remember the “How Would You Move Mount Fuji?” era. Tech has never really had good interviewing, but the question remains: how do you actually evaluate someone’s ability to reason about data structures and algorithms without asking algorithmic questions? I don’t know. People say engineers don’t need DSA. Perhaps. Nonetheless, it’s still taught in CS programs for a reason.
In real work, I’ve used data structure-y thinking maybe a handful of times, but I had more time to think. Literally days. Companies, on the other hand, need to make fast decisions. It’s not perfect. It was never perfect. But it worked well enough. Well, at least, I thought it did.
And then AI detonated the whole thing. Sure, people could technically cheat before. You could have friends feeding you hints or solving the problem with you, but even that was limited. You needed friends who could code. And if your friend was good enough to help you, chances are you weren’t completely incompetent yourself. Cheating still filtered for a certain baseline. AI destroyed that filtration layer.
Everyone now has access to perfect code, perfect explanations, perfect system design diagrams, and even perfect behavioural answers. — Read More
Anonymous credentials: rate-limiting bots and agents without compromising privacy
The way we interact with the Internet is changing. Not long ago, ordering a pizza meant visiting a website, clicking through menus, and entering your payment details. Soon, you might just ask your phone to order a pizza that matches your preferences. A program on your device or on a remote server, which we call an AI agent, would visit the website and orchestrate the necessary steps on your behalf.
Of course, agents can do much more than order pizza. Soon we might use them to buy concert tickets, plan vacations, or even write, review, and merge pull requests. While some of these tasks will eventually run locally, for now, most are powered by massive AI models running in the biggest datacenters in the world. As agentic AI increases in popularity, we expect to see a large increase in traffic from these AI platforms and a corresponding drop in traffic from more conventional sources (like your phone).
This shift in traffic patterns has prompted us to assess how to keep our customers online and secure in the AI era. — Read More
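One way to see how anonymous credentials square rate limiting with privacy is a deliberately simplified sketch: an issuer hands out a fixed budget of one-time tokens, and the verifier enforces spend-once without learning who is redeeming. This is not the real protocol (production schemes such as Privacy Pass use blind signatures so even the issuer cannot link issuance to redemption), and every name here is illustrative:

```python
import hashlib
import hmac
import secrets

# Hypothetical issuer key; in a real deployment the issuer and the
# origin verifying tokens would be separate parties.
ISSUER_KEY = secrets.token_bytes(32)

def issue_tokens(n):
    # Client receives n one-time tokens, e.g. after passing a challenge.
    tokens = [secrets.token_bytes(16) for _ in range(n)]
    sigs = [hmac.new(ISSUER_KEY, t, hashlib.sha256).digest() for t in tokens]
    return list(zip(tokens, sigs))

spent = set()

def redeem(token, sig):
    # Verifier checks authenticity and enforces one use per token;
    # the token itself carries no client identity.
    expected = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    if token in spent:
        return False
    spent.add(token)
    return True

wallet = issue_tokens(3)            # agent gets a budget of 3 requests
results = [redeem(t, s) for t, s in wallet]
replay = redeem(*wallet[0])         # reusing a token is rejected
print(results, replay)              # [True, True, True] False
```

The budget caps an agent (or bot) at a fixed number of requests, while the verifier never learns which user is behind any given token; the blind-signature step that this sketch omits is what keeps the issuer equally in the dark.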
Why You’ll Never Have a FAANG Data Infrastructure and That’s the Point | Part 1
This is Part 1 of a series on FAANG data infrastructures. In this series, we’ll break down the state-of-the-art designs, processes, and cultures that FAANGs, and similar technology-first organisations, have developed over decades. In doing so, we’ll uncover why enterprises desire such infrastructures, whether those desires are feasible, and which routes can deliver state-of-the-art outcomes without the decades invested or the millions spent on experimentation. This is an introductory piece, touching on the fundamental questions; in the upcoming pieces, we’ll pick one FAANG at a time, break down its infrastructure to surface common patterns and design principles, and illustrate replicable paths to those outcomes. — Read More
Wharton AI Study: Gen AI Fast-Tracks into the Enterprise
Three years ago, in the wake of ChatGPT’s debut, we launched our initial study to push past the headlines, asking business leaders how they were actually using Gen AI and soliciting their expectations around the technology’s future applications in their businesses.
As Gen AI fast-tracks into budgets, processes, and training, executives need benchmarks, not anecdotes. Our unique, year-over-year, repeated cross-sectional lens now shows where the common use cases are, where returns are emerging, and which people-and-process levers could convert mainstream use into durable ROI. We will track these shifts each year in this ongoing research initiative. — Read More
Thinking Machines challenges OpenAI’s AI scaling strategy: ‘First superintelligence will be a superhuman learner’
While the world’s leading artificial intelligence companies race to build ever-larger models, betting billions that scale alone will unlock artificial general intelligence, a researcher at one of the industry’s most secretive and valuable startups delivered a pointed challenge to that orthodoxy this week: The path forward isn’t about training bigger — it’s about learning better.
“I believe that the first superintelligence will be a superhuman learner,” Rafael Rafailov, a reinforcement learning researcher at Thinking Machines Lab, told an audience at TED AI San Francisco on Tuesday. “It will be able to very efficiently figure out and adapt, propose its own theories, propose experiments, use the environment to verify that, get information, and iterate that process.” — Read More
The Faustian bargain of AI
Christopher Marlowe, the author of Doctor Faustus, never lived to see his play published or performed. He was murdered after a dispute over a bill spiraled out of control, ending in a fatal stab to the eye. He would never see how Doctor Faustus would live on for centuries. In a modern sense, we too may never fully grasp the long-term consequences of today’s technologies—AI included, for better or worse.
The social contract we are signing with artificial intelligence is changing life rapidly. And while we can guess where it will take us, we aren’t entirely sure. So we look to the past to find truth.
Take the COVID-19 pandemic, for example. As rumors of shutdowns began to swirl, I found myself reaching for Albert Camus’ The Plague. Long before we knew what was coming, Camus had already offered a striking portrait of the very world we were about to enter: lockdowns, political division, and uncertain governments.
Just as Camus captured the psychological toll of unseen forces, Marlowe explores the cost of human ambition when we seek control over the uncontrollable. Why did Faustus give up his soul? Why did he cling to his pact, even as doubt crept in? And what did he gain, briefly, in exchange for something eternal?
In Marlowe’s tragedy, we find a reflection of our own choices as we navigate the promises and perils of AI. — Read More
The 7 Secret Knobs That Control Every AI Response
Every time you hit “send” to ChatGPT, Claude, or any LLM, seven invisible parameters are silently shaping the response. Change one number, and you go from genius insights to nonsensical rambling.
Most people never touch these settings. They stick with defaults and wonder why AI sometimes feels “dumb.” Master these 7 parameters, and you’ll get better outputs than 99% of users. — Read More
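The article doesn’t enumerate the seven here, but the best known of these knobs is temperature, which rescales the model’s next-token distribution before sampling. A minimal sketch with hypothetical logits (not any particular provider’s API):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature divides the logits before normalising: low values
    # sharpen the distribution toward the top token, high values
    # flatten it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # hypothetical next-token scores

cold = softmax(logits, temperature=0.2)  # near-greedy: top token dominates
hot = softmax(logits, temperature=2.0)   # near-uniform: more "creative"

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At temperature 0.2 the top token gets nearly all the probability mass (predictable, focused output); at 2.0 the three options are almost evenly weighted, which is exactly the “genius insights to nonsensical rambling” swing the author describes.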