As a human, you instinctively know that a leopard is closer to a cat than to a motorbike, but the way we train most AI systems makes them oblivious to these kinds of relations. Building the concept of similarity into our algorithms could make them far more capable, writes the author of a new paper in Science Robotics.
Convolutional neural networks have revolutionized the field of computer vision, to the point that machines now outperform humans on some of the most challenging visual tasks. But the way we train them to analyze images is very different from the way humans learn, says Atsuto Maki, an associate professor at KTH Royal Institute of Technology.
“Imagine that you are two years old and being quizzed on what you see in a photo of a leopard,” he writes. “You might answer ‘a cat’ and your parents might say, ‘yeah, not quite but similar’.” Read More
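The Science Robotics paper's actual method isn't spelled out in the summary above, but the core idea can be sketched. Below is a minimal, hypothetical PyTorch illustration of training against similarity-smoothed soft labels instead of one-hot targets, so that confusing a leopard with a cat is penalized less than confusing it with a motorbike. The class list, similarity values, and loss are ours for illustration, not the paper's.

```python
# Toy illustration: spread target probability mass across semantically
# similar classes instead of using one-hot labels. The similarity matrix
# here is hand-made and purely hypothetical.
import torch
import torch.nn.functional as F

classes = ["cat", "leopard", "motorbike"]

# Hypothetical pairwise similarities; rows are normalized into soft targets.
similarity = torch.tensor([
    [1.0, 0.6, 0.0],   # cat
    [0.6, 1.0, 0.0],   # leopard
    [0.0, 0.0, 1.0],   # motorbike
])
soft_targets = similarity / similarity.sum(dim=1, keepdim=True)

def similarity_aware_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against similarity-smoothed targets rather than one-hot."""
    targets = soft_targets[labels]            # (batch, n_classes)
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()

# Example: a batch of two images labeled "leopard" and "motorbike".
logits = torch.randn(2, 3, requires_grad=True)
labels = torch.tensor([1, 2])
loss = similarity_aware_loss(logits, labels)
loss.backward()  # gradients now reflect class similarity
```

In practice, such a similarity matrix might come from a semantic taxonomy or from learned embeddings rather than being written by hand.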
Building Better Deep Learning Requires New Approaches Not Just Bigger Data
In its rush to solve all the world’s problems through deep learning, Silicon Valley is increasingly embracing the idea of AI as a universal solver that can be rapidly adapted to any problem in any domain simply by taking a stock algorithm and feeding it relevant training data. The problem with this assumption is that today’s deep learning systems are little more than correlative pattern extractors that search large datasets for basic patterns and encode them into software. While impressive by the standards of previous eras, these systems are still extraordinarily limited, capable only of identifying simplistic correlations rather than semantically understanding their problem domain. In turn, the hand-coded era’s focus on domain expertise, ethnographic codification and deeply understanding a problem domain has given way to parachute programming, in which deep learning specialists take an off-the-shelf algorithm, shove in a pile of training data, dump out the resulting model and move on to the next problem. Truly advancing the state of deep learning, and the way in which companies make use of it, will require a return to the previous era’s focus on understanding problems rather than merely churning canned models off assembly lines. Read More
Ethical Artificial Intelligence Becomes A Supreme Competitive Advantage
Ethical AI ensures more socially conscious approaches to customer and employee interactions and, in the long run, may be the ultimate competitive differentiator as well, a recent survey suggests. Three in five consumers who perceive their AI interactions to be ethical place higher trust in the company, spread positive word of mouth, and are more loyal. More than half of the consumers surveyed say they would purchase more from a company whose AI interactions they deem ethical.
That’s the word coming out of a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. As organizations move to harness the benefits of AI, consumers, employees and citizens are watching closely and are ready to reward or punish behavior. Those surveyed said they would be more loyal to, purchase more from, or advocate for organizations whose AI interactions they deem ethical. Read More
AI+EI – A recipe for success or disaster?
If one thing is for sure, it is that businesses are reaping the benefits of AI’s ability to free us from the more repetitive tasks in the workplace. AI is changing the nature of work. It’s helping to remove the mundane, enabling us to make more informed decisions with its analytical capabilities and its ability to wade through large amounts of data through machine learning.
Yet, according to a report from Gartner, emotional intelligence (EI) accounts for more than 90% of a person’s performance and success in technical and leadership roles. With this in mind, it is unlikely that AI will completely replace human beings in the workplace at this stage, given its lack of emotional intelligence (among other things). Emotional intelligence, deep domain expertise and a set of “soft skills” cannot yet be automated by current AI technologies. Read More
AI for AI: IBM debuts AutoAI in Watson Studio
Today’s machine learning models are rapidly becoming highly complex, involving labor-intensive data preparation and feature engineering. As a result, enterprises are quickly deploying sophisticated neural network architectures with tens of millions of parameters. Consistent breakthroughs from researchers produce new machine learning methods and new architectures for neural networks designed to solve unique problems.
Faced with these complex challenges, your team’s process for getting the most from AI involves designing, optimizing, and governing models.
AI for AI makes it possible to automate the end-to-end data science and AI process, allowing your business to take the next steps in complementing human-led expertise and innovation with machine-generated insights. Read More
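IBM’s announcement doesn’t expose AutoAI’s internals, but the general shape of automated model selection is easy to sketch. The following minimal illustration uses scikit-learn rather than any IBM API: it scores a handful of candidate pipelines by cross-validation and keeps the best, which AutoAI-style tools do at far larger scale, adding automated data preparation, feature engineering and hyperparameter optimization.

```python
# A generic, minimal sketch of automated model selection: the kind of
# search AutoML tools perform at much larger scale. Illustration only,
# not IBM's AutoAI API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate pipelines: automated data preparation (scaling) + estimator.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": make_pipeline(RandomForestClassifier(n_estimators=200,
                                                          random_state=0)),
}

# Score every candidate and keep the best, as an AutoML loop would.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (cv accuracy {scores[best]:.3f})")
```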
China’s tech sector faces ‘hangover after the party’, with trade war and economic slowdown hitting employment
Tech sector demand for new hires down 25 per cent in first quarter from a year earlier, while job seekers up 37 per cent, meaning supply outpaces demand
Baidu, Tencent and JD.com are all ‘optimising’ their workforces, as analysts point to a sector in decline after years of expanding at an unrealistic pace Read More
Data can now be stored inside the molecules that power our metabolism
DNA isn’t the only molecule we could use for digital storage. It turns out that solutions containing sugars, amino acids and other small molecules could replace hard drives too.
Jacob Rosenstein and his colleagues at Brown University, Rhode Island, stored and retrieved pictures of an Egyptian cat, an ibex and an anchor using an array of these small molecules. They say the approach could yield storage that is less vulnerable to hacking and able to function in more extreme environmental conditions.
Inspired by recent research showing that it is possible to store data on DNA, Rosenstein’s team wanted to see if smaller and simpler molecules could also encode abstract information. Read More
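The article doesn’t detail the team’s actual encoding chemistry, but a toy version of presence/absence encoding can be sketched. In the hypothetical Python sketch below, each byte of a message is one “spot,” and the set of small molecules present at that spot encodes the byte’s bits; the eight-molecule panel is invented for illustration and is not the Brown team’s library.

```python
# Toy sketch of presence/absence encoding: each bit of a message maps to
# whether a particular small molecule is present at a position. The
# molecule panel below is hypothetical.
MOLECULES = ["glucose", "sucrose", "glycine", "alanine",
             "serine", "valine", "leucine", "proline"]  # 8 molecules = 1 byte/spot

def encode(message: str) -> list[set[str]]:
    """One 'spot' per byte: the set of molecules present encodes the bits."""
    spots = []
    for byte in message.encode("utf-8"):
        present = {m for i, m in enumerate(MOLECULES) if byte & (1 << i)}
        spots.append(present)
    return spots

def decode(spots: list[set[str]]) -> str:
    """Read each spot back into a byte by checking which molecules are present."""
    data = bytes(sum(1 << i for i, m in enumerate(MOLECULES) if m in spot)
                 for spot in spots)
    return data.decode("utf-8")

spots = encode("ibex")
assert decode(spots) == "ibex"
```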
3 Under-the-Radar Artificial Intelligence Stocks to Buy
Artificial intelligence (AI) is one of the fastest-growing markets in the world, with global revenues zooming from $3.22 billion in 2016 to an estimated $11.28 billion this year. AI sales are expected to more than double again by 2021, and rise to nearly $90 billion by 2025. Given that growth, it makes sense to invest in artificial intelligence stocks.
But you can’t just invest in the usual suspects. Those include the likes of Alphabet (GOOG), Microsoft (MSFT), IBM (IBM), Salesforce.com (CRM) and Nvidia (NVDA). Don’t get me wrong—those are all fine companies and some are solid investments (to varying degrees). But they’re not artificial intelligence stocks, per se; in other words, they’re large and diversified enough companies that AI is just one segment—albeit a fast-growing one—of what they do.
The purer artificial intelligence plays are less diversified and highly levered to the AI boom. As the artificial intelligence industry has exploded over the last couple of years, the following three stocks have all more than doubled the market’s return. And all three are coming off huge starts to 2019. Read More
Weight Agnostic Neural Networks
Not all neural network architectures are created equal; some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution, and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. In the supervised learning domain, we find architectures that achieve much higher than chance accuracy on MNIST using random weights. Read More
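The abstract describes the evaluation step precisely: populate every connection with a single shared weight sampled from a uniform distribution and measure expected performance. Here is a minimal numpy sketch of just that scoring step, applied to a hypothetical fixed 2-4-1 topology on a toy XOR task; the actual method searches over many candidate architectures, and this shows only how one of them would be scored.

```python
# Sketch of the shared-weight evaluation protocol: set EVERY connection to
# one shared weight value w, and average performance over w sampled from a
# uniform range. The tiny 2-4-1 topology is a stand-in, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w):
    """Feedforward net in which every connection shares the weight w."""
    h = np.tanh(x @ (w * np.ones((2, 4))))
    return np.tanh(h @ (w * np.ones((4, 1))))

# Toy XOR task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Expected performance over shared weights sampled from U(-2, 2).
losses = []
for w in rng.uniform(-2, 2, size=100):
    pred = forward(X, w)
    losses.append(float(np.mean((pred - y) ** 2)))
print(f"mean loss over sampled shared weights: {np.mean(losses):.3f}")
```

An architecture scores well under this protocol only if its structure, rather than finely tuned weights, encodes the solution, which is exactly the question the paper poses.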
Are Weights Really Important to Neural Networks?
Architecture and weights are two essential considerations for artificial neural networks. Architecture is akin to the innate human brain and contains the neural network’s initial settings, such as hyperparameters, layers and node connections (or wiring). Weights, meanwhile, are the relative strengths of the different connections between nodes after model training, which can be likened to a human brain that has learned, for example, how to multiply numbers or speak French.
As with the age-old “nature versus nurture” debate, AI researchers want to know whether architecture or weights play the main role in the performance of neural networks. In a blow to the “nurture” side, Google researchers have now demonstrated that a neural network which has not learned weights through training can still achieve satisfactory results in machine learning tasks. Read More