Has AI found a new Foundation?

In August, 32 faculty and 117 research scientists, postdocs, and students at Stanford University, long one of the biggest players in AI, declared that there has been a “sweeping paradigm shift in AI”. They coined a new term, “Foundation Models,” to characterize the new paradigm, joined forces in a “Center for Research on Foundation Models”, and published the massive 212-page report “On the Opportunities and Risks of Foundation Models.”

Although the term is new, the general approach is not. You train a big neural network (like the well-known GPT-3) on an enormous amount of data, and then you adapt (“fine-tune”) the model to a bunch of more specific tasks (in the words of the report, “a foundation model …[thus] serves as [part of] the common basis from which many task-specific models are built via adaptation”). The basic model thus serves as the “foundation” (hence the term) of AIs that carry out more specific tasks. The approach started to gather momentum in 2018, when Google developed the natural language processing model called BERT, and it became even more popular with the introduction last year of OpenAI’s GPT-3. Read More
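To make the recipe concrete, here is a minimal sketch of the pretrain-then-adapt pattern using the Hugging Face transformers library. The base model, dataset, and hyperparameters below are illustrative choices, not anything prescribed by the Stanford report:

```python
# A minimal sketch of pretrain-then-adapt with the Hugging Face
# transformers library. The base model, dataset, and hyperparameters
# are illustrative choices, not taken from the Stanford report.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pretrained body + fresh task head

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# A small slice of IMDB reviews stands in for the task-specific data.
train_data = load_dataset("imdb", split="train[:1000]").map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-imdb-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_data,
)
trainer.train()  # adaptation ("fine-tuning"): the only task-specific step
```

Everything above the last line reuses the pretrained “foundation”; only the final call, plus the small classification head, is specific to the task.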

#artificial-intelligence, #machine-learning

A way to spot computer-generated faces

A small team of researchers from the State University of New York at Albany, the State University of New York at Buffalo, and Keya Medical has found a common flaw in computer-generated faces by which they can be identified. The group has written a paper describing their findings and uploaded it to the arXiv preprint server.

…The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors, and for social media sites to use it to remove such content. Unfortunately, now that these irregularities have been identified, the people creating the fake pictures can simply add a feature that ensures round pupils. Read More
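As a rough illustration of how such detection software might work (a simplified sketch, not the authors’ published method), one can score how close a segmented pupil’s outline is to a circle; the segmentation step and any decision threshold are assumed to exist elsewhere:

```python
# A simplified sketch (not the authors' published method): score how close
# a segmented pupil's outline is to a perfect circle. Real pupils score
# close to 1.0; the irregular pupils in many GAN faces score lower.
# Locating and segmenting the pupil is assumed to happen elsewhere.
import cv2
import numpy as np

def pupil_circularity(mask: np.ndarray) -> float:
    """Return 4*pi*area/perimeter**2 for the largest contour (1.0 = circle)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    pupil = max(contours, key=cv2.contourArea)
    perimeter = cv2.arcLength(pupil, closed=True)
    if perimeter == 0:
        return 0.0
    return 4 * np.pi * cv2.contourArea(pupil) / perimeter ** 2

# Example: a clean synthetic disk scores near 1.0 (slightly below, due to
# pixelation); a jagged GAN pupil would score noticeably lower.
disk = np.zeros((100, 100), np.uint8)
cv2.circle(disk, (50, 50), 30, 255, thickness=-1)
print(round(pupil_circularity(disk), 3))
```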

#fake, #gans, #image-recognition

From Motor Control to Team Play in Simulated Humanoid Football

Read More

Read Paper

#videos, #machine-learning

Google launches ‘digital twin’ tool for logistics and manufacturing

Google today announced Supply Chain Twin, a new Google Cloud solution that lets companies build a digital twin — a representation of their physical supply chain — by organizing data to get a more complete view of suppliers, inventories, and events like weather. Arriving alongside it is the Supply Chain Pulse module, which works with Supply Chain Twin to provide dashboards, analytics, alerts, and collaboration in Google Workspace. Read More
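Google has not published Supply Chain Twin’s internals at this level of detail, but the digital-twin idea itself is easy to sketch: merge supplier, inventory, and event feeds into one queryable model of the physical chain. Every name and field below is invented for illustration:

```python
# Hypothetical illustration of a supply-chain digital twin: one object that
# merges supplier, inventory, and event feeds into a queryable view.
# Every class, field, and value here is invented; none of this is the
# Supply Chain Twin API.
from dataclasses import dataclass, field

@dataclass
class SupplyChainTwin:
    suppliers: dict = field(default_factory=dict)  # supplier_id -> metadata
    inventory: dict = field(default_factory=dict)  # sku -> stock record
    events: list = field(default_factory=list)     # e.g. weather alerts

    def ingest_event(self, event: dict) -> None:
        self.events.append(event)

    def at_risk_skus(self) -> list:
        """SKUs whose supplier sits in a region with an active alert."""
        alert_regions = {e["region"] for e in self.events if e["active"]}
        return [sku for sku, rec in self.inventory.items()
                if self.suppliers[rec["supplier_id"]]["region"] in alert_regions]

twin = SupplyChainTwin()
twin.suppliers["s1"] = {"region": "gulf-coast"}
twin.inventory["widget-42"] = {"supplier_id": "s1", "units": 120}
twin.ingest_event({"type": "hurricane", "region": "gulf-coast", "active": True})
print(twin.at_risk_skus())  # ['widget-42']
```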

#big7, #iot

Ex-Google exec describes 4 top dangers of artificial intelligence

In a new interview, AI expert Kai-Fu Lee — who worked as an executive at Google (GOOG, GOOGL), Apple (AAPL), and Microsoft (MSFT) — explained the top four dangers of burgeoning AI technology: externalities, personal data risks, inability to explain consequential choices, and warfare.

“The single largest danger is autonomous weapons,” he says. Read More

#strategy

Cold war echoes as Aukus alliance focuses on China deterrence

Analysis: military alliance is more wide-ranging than Five Eyes agreement and may come to define future approach to Indo-Pacific security.

For those who study the history of the cold war, Washington’s new initiative with London and Canberra – known by its acronym “Aukus” – has eerie echoes of an intelligence-sharing agreement signed 75 years ago, now more commonly known as the Five Eyes partnership.

…“At a first glance it looks like a traditional security partnership; however, if you look at other areas mentioned – cyber and AI [artificial intelligence], for example – they mirror China’s Belt and Road Initiative. It covers more than security and it is about deterrence,” Kobayashi said. Read More

#china-vs-us