Artificial Intelligence (AI) and Machine Learning (ML) are among the emerging trends in business and marketing. Yet much of this cleverness lives in the cloud: in big server farms with high-end processing capabilities. In the not-too-distant future, applications will enter our lives that require more intelligence and computation to be implemented closer to the user, be it for reasons of speed, energy efficiency or privacy. Think of self-driving vehicles that have to respond faster than the time it takes to send data up to the cloud and back. Or privacy-sensitive tasks such as voice analysis and face or fingerprint recognition, where legal or user constraints may keep you from sending data over the air. In a way, this leads to the question we engineers are now facing: ‘how do you get a server rack in your back pocket?’ Read More
Tag Archives: Nvidia
This Tesla Mod Turns a Model S Into a Mobile ‘Surveillance Station’
Automatic license plate reader cameras are controversial enough when law enforcement deploys them, given that they can create a panopticon of transit throughout a city. Now one hacker has found a way to put a sample of that power—for safety, he says, and for surveillance—into the hands of anyone with a Tesla and a few hundred dollars to spare. Read More
Meet Tesla's self-driving car computer and its two AI brains
Designing your own chips is hard. But Tesla, one of the most aggressive developers of autonomous vehicle technology, thinks it’s worth it. The company shared details Tuesday about how it fine-tuned the design of its AI chips so two of them are smart enough to power its cars’ upcoming “full self-driving” abilities.
Tesla Chief Executive Elon Musk and his colleagues revealed the company’s third-generation computing hardware in April. But at the Hot Chips conference Tuesday, chip designers showed how heavy optimizations in Tesla’s custom AI chips dramatically boosted performance, by a factor of 21 compared to the earlier Nvidia chips. As a bonus, they are only 80% of the cost. Read More
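Taken together, those two figures imply a large jump in performance per dollar. A quick back-of-the-envelope sketch, using only the numbers quoted above:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above:
# a 21x performance gain at 80% of the cost of the earlier Nvidia-based hardware.
perf_gain = 21.0    # relative performance vs. the previous hardware
cost_ratio = 0.8    # relative cost vs. the previous hardware

perf_per_dollar = perf_gain / cost_ratio
print(f"performance per dollar: ~{perf_per_dollar:.1f}x")  # ~26.3x
```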
To power AI, this startup built a really, really big chip
Computer chips are usually small. The processor that powers the latest iPhones and iPads is smaller than a fingernail; even the beefy devices used in cloud servers aren’t much bigger than a postage stamp. Then there’s this new chip from a startup called Cerebras: It’s bigger than an iPad all by itself.
The silicon monster is almost 22 centimeters—roughly 9 inches—on each side, making it likely the largest computer chip ever, and a monument to the tech industry’s hopes for artificial intelligence. Cerebras plans to offer it to tech companies trying to build smarter AI more quickly. Read More
Brain Talker Makes “Mind Reading” Possible—Tianjin Creates the World’s First Brain-Computer Codec Chip
The world’s first Brain-Computer Codec Chip (BC3), Brain Talker, was announced on May 17, 2019, during the 3rd World Intelligence Congress in Tianjin. The Brain Talker is a joint effort of Tianjin University and China Electronics Corporation, with fully independent intellectual property.
The BC3 chip was specially designed to advance Brain-Computer Interface (BCI) technology, which aims to decode a user’s mental intent solely from neural electrical signals, without using the body’s natural neuromuscular pathways. Read More
This autonomous bicycle shows China’s rising expertise in AI chips
It might not look like much, but this wobbly self-driving bicycle is a symbol of growing Chinese expertise in advanced chip design.

One chip to rule them all: It natively runs all types of AI software
We tend to think of AI as a monolithic entity, but it has actually developed along multiple branches. One of the main branches involves performing traditional calculations but feeding the results into another layer of units, each of which takes input from multiple calculations and weighs them before performing its own calculation and forwarding the result on. Another branch involves mimicking the behavior of biological neurons: many small units communicating in bursts of activity called spikes, and keeping track of the history of past activity.
Each of these, in turn, has different branches based on the structure of its layers and communications networks, types of calculations performed, and so on. Rather than being able to act in a manner we would recognize as intelligent, many of these are very good at specialized problems, like pattern recognition or playing poker. And processors that are meant to accelerate the performance of the software can typically only improve a subset of them.
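As a rough illustration of the two branches described above, the sketch below contrasts a conventional layered computation with a leaky integrate-and-fire neuron, a common simplified model of the spiking approach. All names and parameter values are illustrative and are not drawn from any of the chips discussed here.

```python
import numpy as np

# Branch 1: a conventional layered ("deep learning") computation.
# Each layer is a weighted sum of its inputs passed through a nonlinearity,
# and the result is fed forward to the next layer.
def dense_layer(x, weights, bias):
    return np.maximum(0.0, weights @ x + bias)  # ReLU activation

# Branch 2: a leaky integrate-and-fire neuron, a simplified model of the
# spiking approach: it accumulates input over time, leaks a little charge
# each step, and emits a discrete spike when it crosses a threshold, so its
# output is a history of spike times rather than a single number.
def lif_neuron(input_current, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for t, current in enumerate(input_current):
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(t)   # record when the neuron fired
            potential = 0.0    # reset after the spike
    return spikes

# Illustrative inputs; the sizes and values are arbitrary.
x = np.array([0.5, -0.2, 1.0])
w = np.ones((4, 3)) * 0.5
b = np.zeros(4)
print("dense layer output:", dense_layer(x, w, b))
print("spike times:", lif_neuron([0.3] * 20))
```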
That last division may have come to an end with the development of Tianjic by a large team of researchers primarily based in China. Read More
Machine learning training puts Google and Nvidia on top
Artificial intelligence (AI) has advanced to the point where leading research universities and dozens of technology companies including Google and Nvidia are taking part in comparisons of their chips.
Results of the latest round of benchmarks, released this week, showed that both Nvidia and Google can cut the compute time needed to train deep neural networks used in some common AI applications from days to hours.
“The new results are truly impressive,” Karl Freund, senior analyst for machine learning at Moor Insights & Strategy, wrote in a commentary posted on EE Times. Of the six benchmarks, Nvidia and Google each racked up three top spots. Nvidia reduced its run-time by up to 80% using the V100 TensorCore accelerator in the DGX2h building block. Read More
A Benchmark for Machine Learning from an Academic/Industry Cooperative
MLPerf is a consortium of more than 40 leading companies and university researchers that has released several rounds of results. MLPerf’s goals are:
Accelerate progress in ML via fair and useful measurement
Encourage innovation across state-of-the-art ML systems
Serve both industrial and research communities
Enforce replicability to ensure reliable results
Keep benchmarking effort affordable so all can play
Read More
The Vision Behind MLPerf
A broad ML benchmark suite for measuring the performance of ML software frameworks, ML hardware accelerators, and ML cloud and edge platforms.
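As a rough sketch of the kind of measurement such a suite standardizes, MLPerf’s training benchmarks report how long a system takes to train a model to a fixed quality target. The toy example below shows that “time-to-target-quality” style of measurement; the model, synthetic data and 0.95 accuracy target are illustrative placeholders, not actual MLPerf workloads or rules.

```python
import time
import numpy as np

# Toy "time-to-target-quality" measurement: train a tiny logistic-regression
# model on synthetic data and record how long it takes to reach an accuracy
# target. Everything here is an illustrative placeholder, not an MLPerf rule.
def time_to_target(target_accuracy=0.95, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels
    w = np.zeros(2)

    start = time.perf_counter()
    for epoch in range(1, 1001):
        pred = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (pred - y) / len(y)     # gradient-descent step
        accuracy = np.mean((pred > 0.5) == y)
        if accuracy >= target_accuracy:
            return epoch, time.perf_counter() - start
    return None, time.perf_counter() - start

epochs, seconds = time_to_target()
print(f"reached target quality after {epochs} epochs in {seconds:.3f} s")
```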
“… since 2012 the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.” Read More
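The arithmetic behind those figures is straightforward exponential compounding. A minimal sketch, in which the elapsed-time figure is inferred from the numbers in the quote rather than stated in it:

```python
import math

# The growth described above compounds as 2 ** (elapsed_months / doubling_time).
def growth(elapsed_months, doubling_time_months):
    return 2 ** (elapsed_months / doubling_time_months)

# A >300,000x increase at a 3.5-month doubling time corresponds to roughly
# 64 months of compounding (this elapsed time is inferred, not quoted).
elapsed_months = 3.5 * math.log2(300_000)
print(f"elapsed time: ~{elapsed_months:.0f} months")

# Over that same window, an 18-month (Moore's-Law-like) doubling period
# yields only about a 12x increase, matching the figure in the quote.
print(f"18-month doubling over the same window: ~{growth(elapsed_months, 18):.0f}x")
```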