EARLIER THIS MONTH, Sundar Pichai was struggling to write a letter to Alphabet’s 180,000 employees. The 51-year-old CEO wanted to laud Google on its 25th birthday, which could have been easy enough. Alphabet’s stock market value was around $1.7 trillion. Its vast cloud-computing operation had turned its first profit. Its self-driving cars were ferrying people around San Francisco. And then there was the usual stuff—Google Search still dominated the field, as it had for every minute of this century, sucking up almost 40 percent of all global digital advertising revenue.
But not all was well on Alphabet’s vast Mountain View campus. The US government was about to put Google on trial for abusing its monopoly in search. And the comity that once pervaded Google’s workforce was frayed. Some high-profile employees had left, complaining that the company moved too slowly. Perhaps most troubling, Google—a long-standing world leader in artificial intelligence—had been rudely upstaged by an upstart outsider, OpenAI. Google’s longtime rival Microsoft had beaten it to the punch with a large language model built into its also-ran search engine Bing, causing panic in Mountain View. Microsoft CEO Satya Nadella boasted, “I want people to know we made Google dance.” — Read More
Monthly Archives: September 2023
AI co-created Coca-Cola® Y3000
New from Coca-Cola® Creations, look into the year 3000 with Coca-Cola® Y3000 – the first limited-edition Coke flavor from the future. It was created to show an optimistic vision of what’s to come, in which humanity and technology are more connected than ever. For the first time, Coca-Cola® Y3000 was co-created with artificial intelligence to help bring the flavor of tomorrow to Coke fans. Taste the Future now. Coca-Cola® Y3000 will be available for a limited time only, so pick up a Coca-Cola® Y3000 and get a glimpse into the future. — Read More
Moonshots with Peter Diamandis: A Conversation With My AI Clone on the Future of AI
On Robots Killing People
The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.
… Robots—”intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet. — Read More
LLM Training: RLHF and Its Alternatives
I frequently reference a process called Reinforcement Learning from Human Feedback (RLHF) when discussing LLMs, whether in research news or tutorials. RLHF is an integral part of the modern LLM training pipeline due to its ability to incorporate human preferences into the optimization landscape, which can improve the model’s helpfulness and safety.
In this article, I will break down RLHF in a step-by-step manner to provide a reference for understanding its central idea and importance. Following up on the previous Ahead of AI article that featured Llama 2, this article will also include a comparison between ChatGPT’s and Llama 2’s way of doing RLHF. — Read More
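The reward-model stage at the heart of RLHF is often trained with a pairwise preference loss: human labelers pick the better of two responses, and the model learns to score the chosen one above the rejected one. A minimal sketch of that loss (the function name and scalar inputs are illustrative, not code from the article being summarized):

```python
import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss, -log(sigmoid(r_chosen - r_rejected)).
    The loss shrinks as the reward model scores the human-preferred
    response increasingly above the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When both responses score equally, the loss is log(2) ≈ 0.693;
# a clear preference for the chosen response drives it toward zero.
print(reward_model_loss(2.0, 0.0))  # small loss: ranking is correct
print(reward_model_loss(0.0, 2.0))  # large loss: ranking is inverted
```

In practice the rewards come from a neural network scoring full prompt-response pairs, and the trained reward model then supplies the optimization signal (e.g., for PPO) in the reinforcement-learning stage.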
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
In the debate over slowing down AI, we often hear the same argument against regulation. “What about China? We can’t let China get ahead.” To dig into the nuances of this argument, Tristan and Aza speak with academic researcher Jeffrey Ding and journalist Karen Hao, who take us through what’s really happening in Chinese AI development. They address China’s advantages and limitations, what risks are overblown, and what, in this multi-national competition, is at stake as we imagine the best possible future for everyone. — Read More
The Novel Written about—and with—Artificial Intelligence
THREE DISTINCT personalities, all female, walk into a bar together in Do You Remember Being Born? and emerge with fat paycheques, a collaborative long poem slyly titled “Self-portrait,” and a lot of nagging doubt. Actually, the proverbial bar in Sean Michaels’s dizzying new novel is not a bar but the Mind Studio, an entry-by-key-card-and-retina-scan-only room on an unnamed tech giant’s San Francisco campus. And one of the three personalities, a “2.5-trillion-parameter neural network” named Charlotte, is better described as feminine than female. But the doubt, tucked under a lot of surface-level optimism, is real, instilled in characters and readers alike by the author. — Read More
Meta’s VR technology is helping to train surgeons and treat patients, though costs remain a hurdle
Just days before assisting in his first major shoulder-replacement surgery last year, Dr. Jake Shine strapped on a virtual reality headset and got to work.
As a third-year orthopedics resident at Kettering Health Dayton in Ohio, Shine was standing in the medical center’s designated VR lab with his attending physician, who would oversee the procedure.
Both doctors were wearing Meta Quest 2 headsets as they walked through a 3D simulation of the surgery.
… Ultimately, there were no complications in the procedure and the patient made a full recovery.
While consumer VR remains a niche product and a massive money-burning venture for Meta CEO Mark Zuckerberg, the technology is proving to be valuable in certain corners of health care. — Read More
LLMs and Tool Use
Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date.
The Microsoft project aims to teach AI how to use any and all digital tools in one fell swoop, a clever and efficient approach. Today, LLMs can do a pretty good job of recommending pizza toppings to you if you describe your dietary preferences and can draft dialog that you could use when you call the restaurant. But most AI tools can’t place the order, not even online. In contrast, Google’s seven-year-old Assistant tool can synthesize a voice on the telephone and fill out an online order form, but it can’t pick a restaurant or guess your order. By combining these capabilities, though, a tool-using AI could do it all. An LLM with access to your past conversations and tools like calorie calculators, a restaurant menu database, and your digital payment wallet could feasibly judge that you are trying to lose weight and want a low-calorie option, find the nearest restaurant with toppings you like, and place the delivery order. If it has access to your payment history, it could even guess at how generously you usually tip. If it has access to the sensors on your smartwatch or fitness tracker, it might be able to sense when your blood sugar is low and order the pie before you even realize you’re hungry.
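The glue between an LLM and external tools is typically a dispatch loop: the model emits a structured tool call, a controller looks up and executes the matching function, and the result is fed back into the conversation for the next model turn. A minimal, hypothetical sketch of the controller side (the JSON call format and the tool names are assumptions for illustration, not Microsoft's or Google's actual APIs):

```python
import json

# Hypothetical tool registry mapping tool names to callables.
TOOLS = {
    "calorie_lookup": lambda item: {"pizza_slice": 285}.get(item),
}

def dispatch(model_output: str):
    """Parse a model-emitted tool call (assumed here to be JSON with
    'tool' and 'arguments' keys) and execute the matching function.
    Its return value would be appended to the conversation so the
    model can use it in its next response."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["tool"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['tool']}")
    return fn(**call["arguments"])

result = dispatch(
    '{"tool": "calorie_lookup", "arguments": {"item": "pizza_slice"}}'
)  # → 285
```

Cataloging millions of APIs, as the Microsoft project proposes, amounts to populating a registry like this at enormous scale and teaching the model which entry to call, with which arguments, and when.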
Perhaps the most compelling potential applications of tool use are those that give AIs the ability to improve themselves. — Read More