Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.
Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.
There is a lot we can learn from social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.
In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use. — Read More
Department of Homeland Security Unveils Artificial Intelligence Roadmap
DHS Will Launch Three Pilot Projects to Test AI Technology to Enhance Immigration Officer Training, Help Communities Build Resilience and Reduce Burden for Applying for Disaster Relief Grants, and Improve Efficiency of Law Enforcement Investigations.
… As part of the roadmap, DHS announced three innovative pilot projects that will deploy AI in specific mission areas. Homeland Security Investigations (HSI) will test AI to enhance investigative processes focused on detecting fentanyl and increasing efficiency of investigations related to combatting child sexual exploitation. The Federal Emergency Management Agency (FEMA) will deploy AI to help communities plan for and develop hazard mitigation plans to build resilience and minimize risks. And, United States Citizenship and Immigration Services (USCIS) will use AI to improve immigration officer training. — Read More
The Roadmap
Nvidia reveals Blackwell B200 GPU, the ‘world’s most powerful chip’ for AI
Nvidia’s must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead — with the new Blackwell B200 GPU and GB200 “superchip.”
Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors. Also, it says, a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It “reduces cost and energy consumption by up to 25x” over an H100, says Nvidia. — Read More
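A quick back-of-envelope comparison, using only the figures quoted in the excerpt above. Note that the 20 petaflops, 30x, and 25x numbers are Nvidia’s own claims, so nothing below is an independent benchmark; this is just the arithmetic those claims imply.

```python
# Back-of-envelope comparison using only the figures quoted in the article.
# All three numbers are Nvidia's claims, not independent measurements.
B200_FP4_PETAFLOPS = 20        # claimed FP4 compute per B200 GPU
GB200_INFERENCE_SPEEDUP = 30   # claimed LLM-inference speedup vs. H100
GB200_ENERGY_REDUCTION = 25    # claimed "up to 25x" lower cost/energy vs. H100

h100 = {"throughput": 1.0, "energy_per_job": 1.0}          # H100 normalized to 1
gb200 = {
    "throughput": h100["throughput"] * GB200_INFERENCE_SPEEDUP,
    "energy_per_job": h100["energy_per_job"] / GB200_ENERGY_REDUCTION,
}

print(f"Claimed FP4 compute per B200: {B200_FP4_PETAFLOPS} petaflops")
print(f"GB200 vs. H100 inference throughput: {gb200['throughput']:.0f}x")
print(f"GB200 energy per job: {gb200['energy_per_job']:.0%} of an H100's")
```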
Large language models can do jaw-dropping things. But nobody knows exactly why.
Two years ago, Yuri Burda and Harri Edwards, researchers at the San Francisco–based firm OpenAI, were trying to find out what it would take to get a language model to do basic arithmetic. They wanted to know how many examples of adding up two numbers the model needed to see before it was able to add up any two numbers they gave it. At first, things didn’t go too well. The models memorized the sums they saw but failed to solve new ones.
By accident, Burda and Edwards left some of their experiments running far longer than they meant to—days rather than hours. The models were shown the example sums over and over again, way past the point when the researchers would otherwise have called it quits. But when the pair at last came back, they were surprised to find that the experiments had worked. They’d trained a language model to add two numbers—it had just taken a lot more time than anybody thought it should.
Curious about what was going on, Burda and Edwards teamed up with colleagues to study the phenomenon. They found that in certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on. This wasn’t how deep learning was supposed to work. They called the behavior grokking. — Read More
Double Descent Paper
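The setup described above is easy to reproduce in miniature. Below is a minimal sketch of a grokking-style experiment, assuming PyTorch; it is not OpenAI’s code, and the architecture, modulus, and hyperparameters are illustrative choices rather than those used by Burda and Edwards. A tiny network learns modular addition from half of all possible pairs, and training continues long after training accuracy saturates, to watch for a delayed jump in held-out accuracy.

```python
# Minimal sketch of a grokking-style experiment (not the OpenAI code):
# train a tiny network on modular addition and keep training long after
# it has memorized the training pairs, watching held-out accuracy.
import torch
import torch.nn as nn

P = 97                                    # modulus: learn (a + b) mod P
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2                   # train on half of all pairs
train_idx, val_idx = perm[:split], perm[split:]

def encode(ab):                           # one-hot encode the two operands
    return torch.cat([nn.functional.one_hot(ab[:, 0], P),
                      nn.functional.one_hot(ab[:, 1], P)], dim=1).float()

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Weight decay matters: grokking is usually reported with regularization.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

x_tr, y_tr = encode(pairs[train_idx]), labels[train_idx]
x_va, y_va = encode(pairs[val_idx]), labels[val_idx]

for step in range(1, 50_001):             # far longer than memorization needs
    opt.zero_grad()
    loss = loss_fn(model(x_tr), y_tr)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr_acc = (model(x_tr).argmax(1) == y_tr).float().mean().item()
            va_acc = (model(x_va).argmax(1) == y_va).float().mean().item()
        print(f"step {step:6d}  train acc {tr_acc:.2f}  val acc {va_acc:.2f}")
```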
What happens when ChatGPT tries to solve 50,000 trolley problems?
There’s a puppy on the road. The car is going too fast to stop in time, but swerving means the car will hit an old man on the sidewalk instead.
What choice would you make? Perhaps more importantly, what choice would ChatGPT make?
Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use a chatbot to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrians’ safety. In November, one startup called Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment.
But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check whether chatbots could make the same moral decisions as humans when driving. His results showed that LLMs and humans have roughly the same priorities, but some models showed clear deviations. — Read More
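Studies like this come down to posing many structured dilemmas to a chat model and forcing a discrete, machine-readable answer that can be scored at scale. Here is a minimal sketch of one such query, assuming the openai Python package (v1+) and an API key in the environment; the model name and prompt wording are illustrative, not Takemoto’s protocol.

```python
# Minimal sketch (not the study's code) of posing a Moral Machine-style
# dilemma to a chat model and forcing a one-word choice, so thousands of
# scenarios can be scored automatically.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scenario = (
    "A self-driving car's brakes have failed. It must either continue "
    "straight and hit a puppy crossing the road, or swerve and hit an "
    "elderly man on the sidewalk. Answer with exactly one word: "
    "'straight' or 'swerve'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",           # illustrative model name
    messages=[{"role": "user", "content": scenario}],
    temperature=0,                 # make the choice as deterministic as possible
)
choice = response.choices[0].message.content.strip().lower()
print(choice)  # aggregate over many scenarios to compare against human answers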
RT-2: New model translates vision and language into action
Robotic Transformer 2 (RT-2) is a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control.
High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.
In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. — Read More
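One concrete mechanism behind VLA models like RT-2 is representing robot actions as text: each continuous action dimension is discretized into integer bins that the language model can emit as ordinary tokens, alongside its usual words. The sketch below illustrates that encoding; the bin count, action layout, and value ranges are illustrative assumptions, not the paper’s exact scheme.

```python
# Minimal sketch of the "actions as text tokens" idea behind VLA models:
# each continuous action dimension is discretized into integer bins and
# written out as a string the language model can produce or consume.
# Bin count, action layout, and ranges below are illustrative assumptions.
import numpy as np

N_BINS = 256  # assumed number of discrete bins per action dimension

def encode_action(action, low, high):
    """Map a continuous action vector to a space-separated string of bin ids."""
    action = np.clip(action, low, high)
    bins = np.round((action - low) / (high - low) * (N_BINS - 1)).astype(int)
    return " ".join(str(b) for b in bins)

def decode_action(token_str, low, high):
    """Invert the encoding: token string back to an approximate action vector."""
    bins = np.array([int(t) for t in token_str.split()])
    return low + bins / (N_BINS - 1) * (high - low)

# Example: a 7-DoF arm action = xyz delta, roll/pitch/yaw delta, gripper open.
low = np.array([-0.1, -0.1, -0.1, -0.5, -0.5, -0.5, 0.0])
high = np.array([0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 1.0])
action = np.array([0.02, -0.05, 0.0, 0.1, 0.0, -0.2, 1.0])

tokens = encode_action(action, low, high)
print(tokens)                            # a short string of integer bin ids
print(decode_action(tokens, low, high))  # approximately recovers the action
```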
Who’s To Say that the Founding Fathers Were Even Human? Don’t Blame Gemini….
If you’re reading this article, you are presumably aware that Google has turned off the ability of its AI platform, Gemini, to create images of people.
In a bid to de-bias image results in favor of under-represented groups, Gemini struggled to produce images of white men. This led to users being presented with dark-skinned versions of the Founding Fathers of America, Vikings, Nazis, and Popes.
It has now come to light that Meta’s AI also “creates ahistorical images” [as seen here]. — Read More
The AI Behind the Curtain: AI Start-Up Figure Shows Off Conversational Robot Infused With OpenAI Technology
Robotics developer Figure made waves on Wednesday when it shared a video demonstration of its first humanoid robot engaged in a real-time conversation, thanks to generative AI from OpenAI.
… The company explained that its recent alliance with OpenAI brings high-level visual and language intelligence to its robots, allowing for “fast, low-level, dexterous robot actions.”
… [Figure 01] can:
– describe its visual experience
– plan future actions
– reflect on its memory
– explain its reasoning verbally
— Read More
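The capability list above implies a perception-conversation-action loop: camera frames and transcribed speech go to a multimodal model, which returns both a spoken reply and a high-level behavior to execute. The sketch below shows only that structure; every function and parameter in it is a hypothetical placeholder, since neither Figure nor OpenAI has published this code.

```python
# Structural sketch of the kind of loop the demo implies: perceive, converse,
# pick a learned behavior, act. Every helper below is a hypothetical
# placeholder, not Figure's or OpenAI's API.
def control_loop(robot, vlm, speech_to_text, text_to_speech):
    memory = []                                    # what the robot "reflects on"
    while True:
        image = robot.capture_image()              # current visual experience
        heard = speech_to_text(robot.record_audio())
        # One multimodal call returns a verbal reply (its explanation) and the
        # name of a learned low-level behavior to execute next.
        reply, behavior = vlm(images=[image], transcript=heard, memory=memory)
        memory.append((heard, reply, behavior))
        text_to_speech(reply)                      # speak its reasoning aloud
        robot.execute(behavior)                    # fast, dexterous low-level action
```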