U.S. AI Workforce: Policy Recommendations

This policy brief addresses the need for a clearly defined artificial intelligence education and workforce policy by providing recommendations designed to grow, sustain, and diversify the U.S. AI workforce. The authors employ a comprehensive definition of the AI workforce—spanning technical and nontechnical occupations—and provide data-driven policy goals. Their recommendations are designed to leverage opportunities within the U.S. education and training system while mitigating its challenges, and they prioritize equitable access to AI education and AI careers. Read More

#workforce

Brain cell differences could be key to learning in humans and AI

Imperial researchers have found that variability between brain cells might speed up learning and improve the performance of the brain and future artificial intelligence (AI).

The new study found that simulated brain networks learned faster when the electrical properties of individual cells were varied than when every cell was identical. Read More
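
The intuition can be illustrated with a toy sketch (my illustration, not the study's actual model): a bank of leaky-integrator "cells" whose decay constants are either all identical or spread across a range, with a linear readout fit by least squares on a simple memory task. Diverse time constants give the readout a richer set of temporal features to combine.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.standard_normal(T)          # random input stream
target = np.roll(x, 5)              # task: recall the input from 5 steps ago
target[:5] = 0.0

def leaky_states(x, alphas):
    """Run a bank of leaky integrators y[t] = a*y[t-1] + (1-a)*x[t]."""
    states = np.zeros((len(x), len(alphas)))
    y = np.zeros(len(alphas))
    for t, xt in enumerate(x):
        y = alphas * y + (1 - alphas) * xt
        states[t] = y
    return states

def readout_error(states, target):
    """Fit a linear readout by least squares; return its mean-squared error."""
    w, *_ = np.linalg.lstsq(states, target, rcond=None)
    return np.mean((states @ w - target) ** 2)

n = 20
homogeneous = np.full(n, 0.5)                 # identical "cells"
heterogeneous = np.linspace(0.05, 0.95, n)    # diverse time constants

err_hom = readout_error(leaky_states(x, homogeneous), target)
err_het = readout_error(leaky_states(x, heterogeneous), target)
print(f"homogeneous MSE: {err_hom:.3f}, heterogeneous MSE: {err_het:.3f}")
```

The heterogeneous population reconstructs the delayed input far more accurately, because identical cells all compute the same function of the input and so contribute only one effective feature.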

#human

US Leadership in Artificial Intelligence is Still Possible

Open-source algorithms are disrupting the meaning of global artificial intelligence (AI) leadership. Here’s how the US government can use the next wave of AI to its advantage.

What does it mean to be first in developing applications of artificial intelligence (AI), and does it matter? In a recent interview, Nicolas Chaillan, the former Chief Software Officer of the U.S. Air Force, stated that he resigned in part because he believed that “We have no competing chance against China in fifteen to twenty years. Right now, it’s already a done deal; it is already over.” He reasoned that the U.S. Department of Defense’s (DoD) failure to follow through on stated intentions to build up in AI and cyber means many departments within DoD still operate at what Chaillan considers a “kindergarten level.” Those are strong words, but Chaillan’s overall assessment misses the mark—the United States becoming an AI also-ran is not a foregone conclusion. Leadership in AI is not necessarily achieved by the first adopter.

There is No AI Arms Race Read More

#china-vs-us, #dod

Missing the Point

When AI manipulates free speech, censorship is not the solution. Better code is.

Every issue is easy — if you just ignore the facts. And Glenn Greenwald has now given us a beautiful example of this eternal, and increasingly vital, truth.

In his Substack, Glenn attacks the Facebook whistleblower (he doesn’t call her that; he calls her a quote-whistleblower-unquote), Frances Haugen, for being an unwitting dupe of the Vast Leftwing Conspiracy that is now focused so intently on censoring free speech. To criticize what Facebook has done, in Glenn’s simple world, is to endorse the repeal of the First Amendment. To regulate Facebook is to start us down the road, if not to serfdom, then certainly to a Substack-less world.

But all this looks so simple to Glenn, because he’s so good at ignoring how technology matters — to everything, and especially to modern media. Glenn doesn’t do technology.  Read More

#bias, #big7

DeepMind and Alphabet: who needs markets?

DeepMind, the artificial intelligence company founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman, and acquired by Google in 2014 for $650 million, has published its financial results, revealing what might be politely called a “creative accounting” issue.

In principle, it all sounds very promising: after a few years, DeepMind is now apparently profitable, with revenues of $1.13 billion in 2020, three times 2019’s $361 million, in the face of relatively restrained expenses that rose from $976 million in 2019 to $1.06 billion in 2020. Seen in this light, the picture is one of a cutting-edge company that, after years of heavy investment and significant losses, achieves profitability thanks to strong revenue growth and relative containment of its expenses. At last, Alphabet can put DeepMind among the companies that, under its umbrella, generate revenue. From red to black in just a few years. When all is said and done, it is fairly common for pioneering companies like this one to spend long periods investing and incurring heavy losses. Read More

#big7, #investing

Buffalo Wild Wings is Allowing Robots to Take Over the Deep Fryer

Artificial intelligence-enhanced robots have already overtaken the burger flippers of America — and they’re coming for the deep fryers at Buffalo Wild Wings next.

Miso Robotics launched its burger-cooking robot arm Flippy back in 2018, as an easy way for restaurants to cut labor costs. Even Walmart tested Flippy in its many kitchens. Then in 2020, White Castle hired its own fleet of Flippys.

Now, the company is moving on to the next step of its world robotics domination plan with a new robot called Flippy Wings, or “Wingy.” Read More

#robotics

Artificial intelligence sheds light on how the brain processes language

Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.
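
To make “predicting the next word” concrete, here is a deliberately minimal bigram sketch (the models in the study are vastly more sophisticated neural networks, and this toy corpus is my own): it predicts whichever word most often followed the current word in its training text.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Count word-pair frequencies: counts[w] maps each follower of w to its count.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Modern predictive models replace these raw counts with learned neural representations conditioned on long stretches of context, which is what enables the richer behavior described in the article.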

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion. 

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain. Read More

#nlp, #human

Examining algorithmic amplification of political content on Twitter

As we shared earlier this year, we believe it’s critical to study the effects of machine learning (ML) on the public conversation and share our findings publicly. This effort is part of our ongoing work to look at algorithms across a range of topics. We recently shared the findings of our analysis of bias in our image cropping algorithm and how they informed changes in our product. 

Today, we’re publishing learnings from another study: an in-depth analysis of whether our recommendation algorithms amplify political content. The first part of the study examines Tweets from elected officials* in seven countries (Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States). Since Tweets from elected officials cover just a small portion of political content on the platform, we also studied whether our recommendation algorithms amplify political content from news outlets. Read More

#bias

AI That Can Learn Cause-and-Effect: These Neural Networks Know What They’re Doing

A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.

Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.

… Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions. Read More

#human

Never invest your time in learning complex things.

The data scientist hype train has come to a grinding halt. It has been a joy ride for me, for I was one of the people who got hooked on data science as soon as it took off. Math, engineering and the ability to predict stuff was very attractive indeed for a self-professed geek. I couldn’t resist, and soon I was devouring one book after the other. I started with Springer publications (Max Kuhn), Trevor Hastie and a lot of O’Reilly books, and followed it up with statistics and math courses until I had the math and the techniques (linear/logistic regression, SVMs, random forests, decision trees and some 20 others) down pat. Sounds great, right? Not quite.

Then came the deep learning revolution. I was first exposed to it thanks to Jeremy Howard, who in my opinion still runs the best damn deep learning course on the internet. He explains vision, NLP and even machine learning on structured data. The guy is literally able to translate gobbledygook for the masses (me :-)). Plug: https://www.fast.ai/. Read More

#data-science, #training