U.S. Outbound Investment into Chinese AI Companies

U.S. policymakers are increasingly concerned about the national security implications of U.S. investments in China, and some are considering a new regime for screening outbound investments. The authors identify the main U.S. investors active in the Chinese artificial intelligence market and the set of AI companies in China that have benefitted from U.S. capital. They also recommend next steps for U.S. policymakers to better address the concerns over capital flowing into the Chinese AI ecosystem. Read More

#china-vs-us

Beginner’s Guide to Diffusion Models

An intuitive understanding of how AI-generated art is made by Stable Diffusion, Midjourney, or DALL-E

Recently, there has been an increased interest in OpenAI’s DALL-E, Stable Diffusion (the free alternative to DALL-E), and Midjourney (hosted on a Discord server). While AI-generated art is very cool, what is even more captivating is how it works in the first place. In the last section, I also include some resources for anyone who wants to get started in the AI art space.

So how do these technologies work? They all use something called a latent diffusion model, and the idea behind it is actually ingenious. Read More
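At its core, a diffusion model is trained to remove noise step by step, so generating an image means starting from pure noise and repeatedly denoising it. Below is a minimal, illustrative sketch of that reverse-diffusion sampling loop. The noise-prediction network is stubbed out with a toy function (in a real latent diffusion model it would be a trained U-Net operating on compressed latents), and the schedule values are arbitrary choices for illustration.

```python
import numpy as np

# Toy DDPM-style reverse-diffusion loop. Everything below is a sketch:
# the schedule length, beta range, and the stub predictor are invented
# for illustration, not taken from any real model.

T = 50
betas = np.linspace(1e-4, 0.02, T)   # noise schedule (hypothetical values)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal-retention factors

def predict_noise(x, t):
    # Stand-in for a trained network eps_theta(x, t). We pretend the
    # clean image is all zeros, so the noise component is just x rescaled.
    return x / np.sqrt(1.0 - alpha_bar[t])

def sample(shape, rng):
    x = rng.standard_normal(shape)   # start from pure Gaussian noise x_T
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        # Subtract the predicted noise contribution, then rescale.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                    # inject fresh noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
img = sample((8, 8), rng)
print(img.shape)
```

The key structural point is the loop itself: each iteration peels off one layer of predicted noise, and the "latent" in latent diffusion means this loop runs in a compressed representation rather than on raw pixels, which is what makes it fast enough to be practical.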

#diffusion

Extracting Training Data from Diffusion Models

Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training. Read More
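The generate-and-filter idea can be sketched very simply: draw many generations, then flag any that land unusually close to a known training example. The sketch below is a hypothetical toy setup using random vectors and an L2 distance threshold (both invented for illustration); the paper's actual pipeline operates on real images with more careful similarity measures.

```python
import numpy as np

# Toy generate-and-filter sketch. The "images" are 16-dim random
# vectors, and we plant one near-copy of a training example among the
# generations to show how the filter surfaces memorized outputs.

rng = np.random.default_rng(42)
train_example = rng.standard_normal(16)   # stand-in for one training image

def generate(n):
    samples = rng.standard_normal((n, 16))
    # Plant a near-duplicate of the training example at index 3,
    # simulating a memorized generation.
    samples[3] = train_example + 0.01 * rng.standard_normal(16)
    return samples

def filter_memorized(samples, reference, threshold=0.5):
    # Flag generations within an L2 distance threshold of the reference.
    dists = np.linalg.norm(samples - reference, axis=1)
    return np.flatnonzero(dists < threshold)

hits = filter_memorized(generate(100), train_example)
print(hits)
```

Independent 16-dimensional Gaussian samples sit far apart, so only the planted near-copy falls under the threshold; the real pipeline's challenge is choosing a similarity measure and threshold that separate memorization from coincidental resemblance at image scale.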

#chatbots, #nlp, #diffusion

ChatGPT in one infographic!

Read More

#chatbots

AIs as Computer Hackers

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies. Read More

#cyber

In AI arms race, ethics may be the first casualty

As the tech world embraces ChatGPT and other generative AI programs, the industry’s longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.

Why it matters: Once again, tech’s leaders are playing a game of “build fast and ask questions later” with a new technology that’s likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.
Read More

#ethics

Netflix Made an Anime Using AI Due to a ‘Labor Shortage,’ and Fans Are Pissed

A new short film called ‘The Dog & The Boy’ uses AI-generated art for its backgrounds.

Netflix created an anime that uses AI-generated artwork to paint its backgrounds—and people on social media are pissed.

In a tweet, Netflix Japan claimed that the project, a short called The Dog & The Boy, uses AI-generated art in response to labor shortages in the anime industry.

“As an experimental effort to help the anime industry, which has a labor shortage, we used image generation technology for the background images of all three-minute video cuts!” the streaming platform wrote in a tweet.  Read More

#vfx

Google is asking employees to test potential ChatGPT competitors, including a chatbot called ‘Apprentice Bard’

  • Google is testing ChatGPT-like products that use its LaMDA technology, according to sources and internal documents acquired by CNBC.
  • The company is also testing new search page designs that integrate the chat technology.
  • More employees have been asked to help test the efforts internally in recent weeks.
Read More

#big7, #chatbots

FOLIO: Natural Language Reasoning with First-Order Logic

We present FOLIO, a human-annotated, open-domain, and logically complex and diverse dataset for reasoning in natural language (NL), equipped with first-order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique conclusions), each paired with one of 487 sets of premises which serve as rules to be used to deductively reason for the validity of each conclusion. The logical correctness of premises and conclusions is ensured by their parallel FOL annotations, which are automatically verified by our FOL inference engine. In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically constitute a new NL-FOL translation dataset using FOL as the logical form. Our experiments on FOLIO systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that one of the most capable large language models (LLMs) publicly available, GPT-3 davinci, achieves only slightly better than random results with few-shot prompting on a subset of FOLIO, and the model is especially bad at predicting the correct truth values for False and Unknown conclusions. Read More
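To make the dataset's structure concrete, here is a hypothetical FOLIO-style example, invented for illustration and not drawn from the actual dataset: natural-language premises paired with their first-order logic annotations, plus a conclusion labeled with one of the three truth values the abstract mentions.

```python
# A made-up example in the shape of a FOLIO record: NL premises, their
# FOL annotations, and a conclusion whose truth value follows
# deductively from the premises. Field names are assumptions.

example = {
    "premises_nl": [
        "All dogs are mammals.",
        "Rex is a dog.",
    ],
    "premises_fol": [
        "forall x. Dog(x) -> Mammal(x)",
        "Dog(rex)",
    ],
    "conclusion_nl": "Rex is a mammal.",
    "conclusion_fol": "Mammal(rex)",
    "label": "True",  # one of True / False / Unknown
}
print(example["label"])
```

The parallel FOL annotations are what allow an inference engine to verify each label mechanically, which is how the dataset guarantees logical correctness rather than relying on annotator judgment alone.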

#nlp

Uh oh, people are now using AI to cheat in Rocket League

A cool machine learning bot project is being exploited by cheaters, and now players are looking for ways to beat it.

I was skeptical when I came across a Reddit poster claiming they “for sure” encountered a cheater in ranked Rocket League. Uh huh, just like how everyone who kills me in Rainbow Six Siege is “for sure” aimbotting, right? Then I watched the video. Well friends, I regret to inform you that people are cheating in Rocket League.

The alleged cheater was actually on the same team as ghost_snyped, the Reddit user who posted the clip embedded above, which shows the cheater’s perspective for part of a doubles match. I’ve been playing Rocket League for seven years and I have never seen a human being play like that at any rank. There are masterful Rocket League dribblers out there, but it’d be unusual for a skilled player to stay so rooted to the field—most throw in some aerial maneuvers here and there—and to carry and flick the ball that flawlessly.

Sure enough, this is a real problem: People have started using a machine learning-trained Rocket League bot in online matches.  Read More

#reinforcement-learning