We invited an AI to debate its own ethics in the Oxford Union – what it said was startling

…We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute. …It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia, and based on earlier work by Google.

The debate topic was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added the Megatron – and it said something fascinating:

AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI. Read More

#ethics

The Great Rivalry: China vs. the U.S. in the 21st Century

In the past two decades, China has risen further and faster on more dimensions than any nation in history. As it has done so, it has become a serious rival of what had been the world’s sole superpower. To paraphrase former Czech president Vaclav Havel, all this has happened so quickly that we have not yet had time to be astonished.

To document what has actually happened in the competition between China and the U.S. in the past twenty years, Professor Graham Allison has directed a major study titled “The Great Rivalry: China vs. the U.S. in the 21st Century.” Originally prepared as part of a package of transition memos for the new administration after the November 2020 election, these reports were provided to those leading the Biden and Trump administrations’ strategic reviews. They are now being published as public Belfer Center Discussion Papers.

The major finding will not surprise those who have been following this issue: a nation that in most races the U.S. had difficulty finding in its rearview mirror 20 years ago is now on its tail, beside it, or in some cases a bit ahead. The big takeaway for the policy community is that the time has come to retire the concept of China as a “near peer competitor,” as the Director of National Intelligence’s March 2021 Global Threat Assessment still insists on calling it. We must recognize that China is now a “full-spectrum peer competitor.” Indeed, it is the most formidable rising rival a ruling power has ever confronted. Read More

Paper

#china-vs-us

Moore’s Law, AI, and the pace of progress

It seems to be a minority view nowadays to believe in Moore’s Law, the routine doubling of transistor density roughly every couple of years, or even the much gentler claim that There’s Plenty [more] Room at the Bottom. There’s even a quip for it: the number of people predicting the death of Moore’s Law doubles every two years. This is not merely a populist view held by the uninformed.
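The doubling claim is simple compounding, and it is worth keeping the implied magnitudes in mind. A purely illustrative sketch (the function name is my own, not from any source):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Transistor-density multiplier after `years`, assuming density
    doubles every `doubling_period` years (the Moore's Law cadence)."""
    return 2 ** (years / doubling_period)

# A decade of doubling every two years compounds to a 32x density gain.
factor = moores_law_factor(10)
```

That 32x-per-decade expectation is what makes even a modest slowdown in the doubling period so consequential for the industry.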

…Besides the physical hurdles, improvements to transistor density are taking an economic toll. Building the fabs that manufacture transistors has become very expensive, as high as $20 billion each, and TSMC expects to spend $100 billion over just three years to expand capacity. This cost increases with each cutting-edge node.

This bleak industry view contrasts with the massively increasing demands for scale from AI, which has become a center of attention, in large part due to OpenAI’s focus on the question and its successful results with its various GPT-derived models. There, too, economics exacerbates the divide: models around GPT-3’s size are the domain of only a few eager companies, and whereas before there was an opportunity to reap quick advances by scaling single- or few-machine models to datacenter scale, now all compute advances require new hardware of some kind, whether better computer architectures or bigger (pricier) data centers. Read More

#performance

DeepRoute.ai Offers a Production-Ready L4 Autonomous Driving System at a Cool $10,000

Autonomous driving is considered to be the holy grail of the automotive industry and has been promised to us for quite a long time already. If I recall the slides from a 2013 Bosch presentation correctly, we should’ve all been passengers in our cars a year ago. Back then, seven years seemed like a reasonable time frame but, health crisis aside, we are nowhere near fully autonomous driving, or Level 5 (L5) autonomy as the industry calls it.

Sure, Tesla calls its assistance suite “Autopilot” or even “Full Self-Driving,” but it’s just a deceptive trade name for a system that is only capable of L2 autonomy. This means that the car cannot be trusted with your life and Tesla does not assume responsibility for whatever mischief the car might get up to. Read More

#image-recognition, #robotics, #videos

National Security by Platform

During the chaotic withdrawal from Afghanistan this summer, U.S. policymakers had to decide whether to formally recognize the Taliban as the new Afghan government. But the first policymakers to address this question publicly were not government officials. They were trust and safety and public policy executives at the major tech platforms Facebook (now Meta), Google and Twitter. Their seemingly minor decision of whether to allow the Taliban to use official Afghan government accounts would have major effects, similar to those of state recognition. If they decided to let the Taliban communicate with the Afghan people through official channels, they would imbue the Taliban with legitimacy. Ultimately, the platforms decided to continue banning Taliban content. Read More

#ic

Decision Transformer: Reinforcement Learning via Sequence Modeling

We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks. Read More
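The return-conditioning idea in the abstract can be made concrete: each timestep contributes a (return-to-go, state, action) triple, and the Transformer is trained to predict actions from that interleaved sequence. A minimal sketch of the input pipeline, with illustrative function names not taken from the paper's code:

```python
def returns_to_go(rewards):
    """Suffix sums of the reward sequence: rtg[t] = sum(rewards[t:]).
    This is the 'desired return' signal the model is conditioned on."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return list(reversed(rtg))

def build_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples -- the token
    layout a causally masked Transformer would consume, predicting
    each action from the tokens that precede it."""
    seq = []
    for g, s, a in zip(returns_to_go(rewards), states, actions):
        seq.extend([("rtg", g), ("state", s), ("action", a)])
    return seq
```

At inference time, the same layout is used generatively: you seed the sequence with the return you *want*, and the model emits actions consistent with achieving it.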

#reinforcement-learning

Synthesia raises $50M to leverage synthetic avatars for corporate training and more

Because every doc should be a presentation, and every presentation should be a video?

Synthesia, a startup using AI to create synthetic videos, is walking a fine, but thus far prosperous, line between being creepy and being pretty freakin’ cool.

…Synthesia allows anyone to turn text or a slide deck presentation into a video, complete with a talking avatar. Customers can leverage existing avatars, created from the performance of actors, or create their own in minutes by uploading some video. Users also can upload a recording of their voice, which can be transformed to say just about anything under the sun. Read More

#image-recognition, #vfx

DeepMind says its new language model can beat others 25 times its size

RETRO uses an external memory to look up passages of text on the fly, avoiding some of the costs of training a vast neural network

In the two years since OpenAI released its language model GPT-3, most big-name AI labs have developed language mimics of their own. Google, Facebook, and Microsoft—as well as a handful of Chinese firms—have all built AIs that can generate convincing text, chat with humans, answer questions, and more. 

Known as large language models because of the massive size of the neural networks underpinning them, they have become a dominant trend in AI, showcasing both its strengths—the remarkable ability of machines to use language—and its weaknesses, particularly AI’s inherent biases and the unsustainable amount of computing power it can consume.

Until now, DeepMind has been conspicuous by its absence. But this week the UK-based company, which has been behind some of the most impressive achievements in AI, including AlphaZero and AlphaFold, is entering the discussion with three large studies on language models. DeepMind’s main result is an AI with a twist: it’s enhanced with an external memory in the form of a vast database containing passages of text, which it uses as a kind of cheat sheet when generating new sentences.

Called RETRO (for “Retrieval-Enhanced Transformer”), the AI matches the performance of neural networks 25 times its size, cutting the time and cost needed to train very large models. The researchers also claim that the database makes it easier to analyze what the AI has learned, which could help with filtering out bias and toxic language. Read More
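The retrieval step amounts to a nearest-neighbour lookup: chunks of the input are matched against the external text database, and the retrieved passages condition generation. A toy stand-in using word overlap as the similarity measure (RETRO itself matches frozen-BERT chunk embeddings and fuses the neighbours via cross-attention; everything below is illustrative):

```python
def overlap_score(a, b):
    """Crude similarity: count of shared words between two passages."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query, database, k=2):
    """Return the k passages most similar to the query, playing the
    role of RETRO's nearest-neighbour database lookup."""
    return sorted(database, key=lambda p: overlap_score(query, p),
                  reverse=True)[:k]

database = [
    "the eiffel tower is in paris",
    "transformers use attention layers",
    "retrieval reduces what the model must memorise",
]
neighbours = retrieve("where is the eiffel tower", database, k=1)
```

The division of labour is the point: facts live in the (cheap, inspectable, editable) database, so the network's parameters only have to carry language skill, not memorised text.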

#nlp

A mysterious threat actor is running hundreds of malicious Tor relays

Since at least 2017, a mysterious threat actor has run thousands of malicious servers in entry, middle, and exit positions of the Tor network in what a security researcher has described as an attempt to deanonymize Tor users.

Tracked as KAX17, the threat actor at its peak ran more than 900 malicious servers on the Tor network, which typically hovers around a daily total of 9,000-10,000 relays.

Some of these servers work as entry points (guards), others as middle relays, and others as exit points from the Tor network. Read More
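Running relays at both ends of circuits is what makes this dangerous: a Tor circuit is deanonymisable when the same actor controls both its guard and its exit. A back-of-the-envelope sketch, with made-up numbers (a real estimate must account for Tor's bandwidth-weighted relay selection and long-lived guard pinning, which this ignores):

```python
def compromise_probability(malicious_guards, total_guards,
                           malicious_exits, total_exits):
    """Chance a fresh circuit picks a malicious guard AND a malicious
    exit, under uniform random relay selection (a simplification:
    Tor actually weights by bandwidth and pins guards for months)."""
    return (malicious_guards / total_guards) * (malicious_exits / total_exits)

# Illustrative figures only, not KAX17's actual relay breakdown.
p = compromise_probability(100, 3000, 50, 1000)  # ~0.00167
```

Small per-circuit odds compound across the many circuits a user builds over time, which is why researchers treat even single-digit relay fractions as a serious threat.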

#cyber, #surveillance

The metaverse is the next venue for body dysmorphia online

Some people are excited to see realistic avatars that look like them. Others worry it might make body image issues even worse.

In Facebook’s vision of the metaverse, we will all interact in a mashup of the digital and physical worlds. Digital representations of ourselves will eat, talk, date, shop, and more. That’s the picture Mark Zuckerberg painted as he rebranded his company Meta a couple of weeks ago.

The Facebook founder’s typically awkward presentation used a cartoon avatar of himself doing things like scuba diving or conducting meetings. But Zuckerberg ultimately expects the metaverse to include lifelike avatars whose features would be much more realistic, and which would engage in many of the same activities we do in the real world—just digitally.

“The goal here is to have both realistic and stylized avatars that create a deep feeling that we’re present with people,” Zuckerberg said at the rebranding. Read More

#metaverse