In recent years, there has been tremendous interest in using generative AI, and particularly large language models (LLMs), in software engineering; indeed, several commercially available tools now exist, and many large companies have also created proprietary ML-based tools for their own software engineers. While ML for common tasks such as code completion is available in commodity tools, there is growing interest in applying LLMs for more bespoke purposes. One such purpose is code migration.
This article is an experience report on using LLMs for code migrations at Google. It is not a research study, in the sense that we do not carry out comparisons against other approaches or evaluate research questions/hypotheses. Rather, we share our experiences in applying LLM-based code migration in an enterprise context across a range of migration cases, in the hope that other industry practitioners will find our insights useful. Many of these learnings apply to any application of ML in software engineering. We see evidence that the use of LLMs can significantly reduce the time needed for migrations, and can lower the barriers to starting and completing migration programs. — Read More
What to expect from Neuralink in 2025
In November, a young man named Noland Arbaugh announced he’d be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom.
The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer cursor on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk’s brain-interface company.
The possibility of listening to neurons and using their signals to move a computer cursor was first demonstrated more than 20 years ago in a lab setting. Now, Arbaugh’s livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people’s daily ability to roam the web and play games, giving them what the company has called “digital freedom.”
But this is not yet a commercial product. — Read More
AI Founder’s Bitter Lesson. Chapter 1 – History Repeats Itself
- Historically, general approaches always win in AI.
- Founders in the AI application space are now repeating the mistakes AI researchers made in the past.
- Better AI models will enable general-purpose AI applications. At the same time, the added value of the software around the AI model will diminish.
Recent AI progress has enabled new products that solve a broad range of problems. I saw this firsthand while watching over 100 pitches at YC alumni Demo Day. These problems share a common thread: they’re simple enough to be solved with constrained AI. Yet the real power of AI lies in its flexibility. While products with fewer constraints generally work better, current AI models aren’t reliable enough to build such products at scale. We’ve been here before with AI, many times. Each time, the winning move has been the same. AI founders need to learn this history, or I fear they’ll discover these lessons the hard way. — Read More