America Could Lose the Tech Contest With China

The United States is in the midst of a high-stakes competition with China for dominance in the next wave of technological innovation. Despite a flurry of activity at the federal level over the past three years, Washington has for the most part been playing catch-up.

This summer, with the CHIPS and Science Act, the U.S. government committed to provide the semiconductor chip industry with more than $50 billion in federal investment over the next five years. But that was only after a supply-chain crisis had roiled the U.S. economy for two years as a result of the COVID-19 pandemic, and after the Pentagon had warned that it had become dependent on East Asian suppliers for 98 percent of the commercial chips it uses.

In 2019, the United States ramped up a diplomatic campaign to thwart China’s bid to dominate the world’s 5G infrastructure. But that was only after the massively state-subsidized Chinese companies Huawei and ZTE undercut major Western competitors, seemingly cemented positions in the communications networks of U.S. allies, and flooded the zone in standard-setting bodies.

And last year, the National Security Commission on Artificial Intelligence (on which both of us served) delivered its final report, calling for a comprehensive approach to sustaining U.S. leadership in education, research, and applications in AI that would mean an infusion of billions in new federal investment and a sustained government focus. But that report came out four years after China had already launched its national strategy on artificial intelligence, which generated billions in new funding, identified national-champion companies, and integrated AI into Beijing’s military-civil fusion strategy.

This reactive approach is hardly a recipe for future success. The United States needs to win on these tech battlegrounds and make sure it is not caught by surprise again. Read More

#china-vs-us

A memory prosthesis could restore memory in people with damaged brains

A unique form of brain stimulation appears to boost people’s ability to remember new information—by mimicking the way our brains create memories.

The “memory prosthesis,” which involves inserting an electrode deep into the brain, also seems to work in people with memory disorders—and is even more effective in people who had poor memory to begin with, according to new research. In the future, more advanced versions of the memory prosthesis could help people with memory loss due to brain injuries or as a result of aging or degenerative diseases like Alzheimer’s, say the researchers behind the work.

…It works by copying what happens in the hippocampus—a seahorse-shaped region deep in the brain that plays a crucial role in memory. The brain structure not only helps us form short-term memories but also appears to direct memories to other regions for long-term storage. Read More

#human

Alphabet CEO Sundar Pichai says ‘broken’ Google Voice assistant proves that A.I. isn’t sentient

Alphabet CEO Sundar Pichai said the company’s artificial intelligence technology is not anywhere near being sentient and may never get there, even as he touted A.I. as central to the $1.4 trillion company’s future.

“LaMDA is not sentient by any stretch of the imagination,” Pichai said during an onstage interview at Vox Media’s Code conference in Beverly Hills on Tuesday evening, referring to the name of one of Google’s A.I. technologies. Read More

#human

Dumb AI is a bigger risk than strong AI

The year is 2052. The world has averted the climate crisis thanks to finally adopting nuclear power for the majority of power generation. Conventional wisdom is now that nuclear power plants are a problem of complexity; Three Mile Island is now a punchline rather than a disaster. Fears around nuclear waste and plant blowups have been alleviated primarily through better software automation. What we didn’t know is that the software for all nuclear power plants, made by a few different vendors around the world, all share the same bias. After two decades of flawless operation, several unrelated plants all fail in the same year. The council of nuclear power CEOs has realized that everyone who knows how to operate Class IV nuclear power plants is either dead or retired. We now have to choose between modernity and unacceptable risk. Read More

#strategy

Inside Fog Data Science, the Secretive Company Selling Mass Surveillance to Local Police

A data broker has been selling raw location data about individual people to federal, state, and local law enforcement agencies, EFF has learned. This personal data isn’t gathered from cell phone towers or tech giants like Google — it’s obtained by the broker via thousands of different apps on Android and iOS app stores as part of the larger location data marketplace.

The company, Fog Data Science, has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and associate. Fog sells access to this data via a web application, called Fog Reveal, that lets customers point and click to access detailed histories of regular people’s lives. This panoptic surveillance apparatus is offered to state highway patrols, local police departments, and county sheriffs across the country for less than $10,000 per year. Read More

#surveillance

Using AI to decode speech from brain activity

Every year, more than 69 million people around the world suffer traumatic brain injury, which leaves many of them unable to communicate through speech, typing, or gestures. These people’s lives could dramatically improve if researchers developed a technology to decode language directly from noninvasive brain recordings. Today, we’re sharing research that takes a step toward this goal. We’ve developed an AI model that can decode speech from noninvasive recordings of brain activity.

From three seconds of brain activity, our results show that our model can decode the corresponding speech segments with up to 73 percent top-10 accuracy from a vocabulary of 793 words, i.e., a large portion of the words we typically use on a day-to-day basis.
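The top-10 accuracy metric above means the correct word appears among the model's ten highest-scoring candidates. A minimal sketch of how this metric is typically computed (the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def top_k_accuracy(scores, targets, k=10):
    """Fraction of examples whose true label index is among the k
    highest-scoring candidates. `scores` has shape (n_examples, vocab_size);
    `targets` holds the true vocabulary index for each example."""
    # Indices of the k best-scoring vocabulary entries per example
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = [t in row for t, row in zip(targets, top_k)]
    return float(np.mean(hits))

# Toy example: 4 segments scored over a 793-word vocabulary
rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 793))
targets = np.array([5, 100, 42, 700])
print(top_k_accuracy(scores, targets, k=10))
```

With random scores the expected value is roughly 10/793; the reported 73 percent shows how far the model's predictions rise above chance.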

Decoding speech from brain activity has been a long-standing goal of neuroscientists and clinicians, but most of the progress has relied on invasive brain-recording techniques, such as stereotactic electroencephalography and electrocorticography. These devices provide clearer signals than noninvasive methods but require neurosurgical interventions. While results from that work suggest that decoding speech from recordings of brain activity is feasible, decoding speech with noninvasive approaches would provide a safer, more scalable solution that could ultimately benefit many more people. This is very challenging, however, since noninvasive recordings are notoriously noisy and can greatly vary across recording sessions and individuals for a variety of reasons, including differences in each person’s brain and where the sensors are placed. Read More

Read the Paper

#human, #nlp