Closing Keynote | AIDC 2018 | Andrew Ng, CEO

Read More

#artificial-intelligence, #videos

AI in Five, Fifty and Five Hundred Years — Part One

Prediction is a tricky business.

You have to step outside of your own limitations, your own beliefs, your own flawed and fragmented angle on the world and see it from a thousand different perspectives. You have to see giant abstract patterns and filter through human nature, politics, technology, social dynamics, trends, statistics and probability.

It’s so mind-numbingly complex that our tiny little simian brains stand very little chance of getting it right. Even predicting the future five or ten years out is amazingly complicated. Read More

#artificial-intelligence

When AI Becomes a Part of Our Daily Lives

As we live longer and technology continues its rapid arc of development, we can imagine a future where machines will augment our human abilities and help us make better life choices, from health to wealth. Instead of conducting a stilted question-and-answer exchange with a device on the countertop, we will be able to converse naturally with a virtual assistant that is fully embedded in our physical environment. Through our dialogue and digital breadcrumbs, it will understand our life goals and aspirations, our obligations and limitations. It will seamlessly and automatically help us budget and save for different life events, so we can spend more time enjoying life’s moments.

While we can imagine this future, the technology itself is not without challenges, at least for now. One hurdle is artificial intelligence’s ability to understand the complexities and nuances of human conversation. There are 7,111 known living languages in the world today, according to Ethnologue. Adding to the intricacy are the varied ways words are used across different cultures, along with differences in grammar and in the education and speaking style of the speakers. Google Duplex, the technology behind Google Assistant’s ability to place phone calls using a natural-sounding human voice instead of a robotic one, is an early attempt to address such challenges in human communication. But these are just initial whispers in voice AI’s long journey. Read More

#artificial-intelligence

Revisiting Unreasonable Effectiveness of Data in Deep Learning Era

The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in the representational capabilities of models and the computational capabilities of GPUs. But the size of the biggest dataset has, surprisingly, remained constant. What will happen if we increase the dataset size by 10× or 100×? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between ‘enormous data’ and visual deep learning. By exploiting the JFT-300M dataset, which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data were used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that performance on vision tasks increases logarithmically with the volume of training data. Second, we show that representation learning (or pretraining) still holds a lot of promise: one can improve performance on many vision tasks just by training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks, including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires the vision community not to undervalue the data and to make a collective effort toward building larger datasets. Read More
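
To make the paper’s logarithmic claim concrete, here is a minimal sketch, using invented accuracy numbers rather than the paper’s actual results, of fitting task performance against orders of magnitude of training data:

```python
import numpy as np

# Hypothetical accuracy figures (not the paper's results), one per 10x
# increase in training-set size, to illustrate log-linear scaling.
dataset_sizes = np.array([3e6, 3e7, 3e8])      # number of training images
performance   = np.array([0.62, 0.67, 0.72])   # accuracy on some task

# Fit: performance = a * log10(size) + b
a, b = np.polyfit(np.log10(dataset_sizes), performance, deg=1)
print(f"gain per 10x more data: {a:.3f}")

# Extrapolate (cautiously) one more order of magnitude.
print(f"predicted performance at 3B images: {a * np.log10(3e9) + b:.3f}")
```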

#artificial-intelligence

The Unreasonable Effectiveness of Data

Eugene Wigner’s article “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” [1] examines why so much of physics can be neatly explained with simple mathematical formulas such as f = ma or e = mc². Meanwhile, sciences that involve human beings rather than elementary particles have proven more resistant to elegant mathematics. Economists suffer from physics envy over their inability to neatly model human behavior. An informal, incomplete grammar of the English language runs over 1,700 pages [2]. Perhaps when it comes to natural language processing and related fields, we’re doomed to complex theories that will never have the elegance of physics equations. But if that’s so, we should stop acting as if our goal is to author extremely elegant theories, and instead embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data.

One of us, as an undergraduate at Brown University, remembers the excitement of having access to the Brown Corpus, containing one million English words [3]. Since then, our field has seen several notable corpora that are about 100 times larger, and in 2006, Google released a trillion-word corpus with frequency counts for all sequences up to five words long [4]. In some ways this corpus is a step backwards from the Brown Corpus: it’s taken from unfiltered Web pages and thus contains incomplete sentences, spelling errors, grammatical errors, and all sorts of other errors. It’s not annotated with carefully hand-corrected part-of-speech tags. But the fact that it’s a million times larger than the Brown Corpus outweighs these drawbacks. A trillion-word corpus—along with other Web-derived corpora of millions, billions, or trillions of links, videos, images, tables, and user interactions—captures even very rare aspects of human behavior. So, this corpus could serve as the basis of a complete model for certain tasks—if only we knew how to extract the model from the data. Read More
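
As a toy illustration of how such frequency counts are assembled, here is a short sketch that counts every n-gram up to length five over a tiny token list; the web-scale version is the same idea at vastly larger scale, and the function name is ours, not from the paper:

```python
from collections import Counter

def ngram_counts(tokens, max_n=5):
    """Count all n-grams of length 1..max_n in a token sequence."""
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

tokens = "the cat sat on the mat and the cat slept".split()
counts = ngram_counts(tokens)

# The most frequent bigrams in our ten-word 'corpus'.
bigrams = {g: c for g, c in counts.items() if len(g) == 2}
print(sorted(bigrams.items(), key=lambda kv: -kv[1])[:3])
```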

#artificial-intelligence

In Ten Years: The Future of AI and ML

When you take a minute to stop and look around, the technological advancements of today could be perceived as something out of a futuristic novel. Cars are learning to drive, hands-free devices can turn on your lights or toast your bread, and flying drones are circling the skies. This is 2018. While the full potential of Artificial Intelligence (AI) and Machine Learning (ML) has yet to be realized, impressive progress has certainly been made.

As a location technology platform, we at Foursquare understand the power that AI and ML can have on the way people live and move throughout the world. Take, for instance, our own Pilgrim SDK, a sophisticated contextual awareness engine. Pilgrim’s robust understanding of where consumers go in the physical world was built with machine learning algorithms, enabling brands to gain a deeper understanding of their audience, drive foot traffic and increase engagement. Technology can change the way people interact with their surroundings forever. Read More

#artificial-intelligence

Visualizing the AI Revolution in One Infographic

Read More

#artificial-intelligence

AI Knowledge Map: How To Classify AI Technologies

Read More

#artificial-intelligence

The next wave of Artificial Intelligence: on-device AI

Artificial Intelligence is no longer a science-fiction term. It has become a basic part of our reality through cell phones, smartwatches, and tablets, to name a few examples. Our lives revolve around these devices to a remarkable degree. Use of virtual personal assistants like Siri and Cortana is on the rise, and many of us would be lost without Google Maps to guide us. Put plainly, AI is progressing quickly and changing the way we live. Today’s smart devices are far more capable than their predecessors, and rapid advances in hardware and software have set off an era in which intelligence is moving from the cloud onto the device itself.

By adding new capabilities to existing solutions, on-device AI makes them smarter and faster. That could mean more intelligent assistants, safer vehicles, enhanced security, leaps in robotics, advances in healthcare solutions, and much more. ML and data processing in the cloud are not going away, but on-device AI delivers personalized experiences with some enormous advantages, including vastly improved performance, particularly for those AI use cases that cannot afford even a microsecond of lag, such as automotive safety. On-device AI also bolsters security and privacy, keeping sensitive information like voice IDs and face scans, which could be compromised in the cloud, on the device. And when your AI power is in your hand or at your fingertips, reliability is no longer a question of network availability or bandwidth.

It is a common belief that AI is all about big data and the cloud. On the contrary, AI can also run locally, right in the palms of our hands on our cell phones. There has been a steady movement of AI toward edge devices, made possible by an increase in computing power combined with improvements in AI algorithms and the maturing of supporting software and hardware. These advances have made it feasible to run machine learning solutions on cell phones and in cars instead of in the cloud, and the trend is on the rise. Read More
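
As a rough sketch of what on-device inference can look like, the snippet below runs a TensorFlow Lite model entirely locally, with no network round trip; the model file and input are placeholders, and TensorFlow Lite is just one illustrative framework, not a stack the article names:

```python
import numpy as np
import tensorflow as tf

# Load a hypothetical quantized model shipped with the app; everything
# below executes on the device, so latency does not depend on a network.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input shaped and typed to whatever the model expects.
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()

prediction = interpreter.get_tensor(output_details[0]["index"])
print("on-device prediction:", prediction)
```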

#artificial-intelligence, #iot

Artificial Intelligence Can Now Write Amazing Content — What Does That Mean For Humans?

If you believe anything can and will be automated with artificial intelligence (AI), then you might not be surprised to learn how many notable media organizations, including The New York Times, Associated Press, Reuters, Washington Post, and Yahoo! Sports, already use AI to generate content. The Press Association, for example, can now produce 30,000 local news stories a month using AI. You might think that these are formulaic who, what, where and when stories, and you are right: some of them certainly are. But today, AI-written content has expanded beyond formulaic writing to more creative endeavors such as poetry and novels. Read More
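
For a feel of the formulaic end of that spectrum, here is a toy template-filling sketch; the template and field names are invented, and this is not any news organization’s actual pipeline:

```python
# Formulaic 'who, what, where, when' story generation from structured
# data -- a toy sketch, not a production news-writing system.
TEMPLATE = ("{home} beat {away} {home_score}-{away_score} at {venue} "
            "on {date}, with {star} scoring {goals} goals.")

game = {
    "home": "Rivertown FC", "away": "Lakeside United",
    "home_score": 3, "away_score": 1,
    "venue": "Rivertown Stadium", "date": "Saturday",
    "star": "J. Doe", "goals": 2,
}

print(TEMPLATE.format(**game))
```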

#artificial-intelligence, #nlp