Four Architectures that Showcase Meta AI’s Progress in Multimodal Deep Learning

One of the marvels of human cognition is our ability to simultaneously process information from different sensory inputs. In most cognitive tasks, humans natively combine information in different forms such as audio, language, or speech. Recreating this ability has been one of the traditional goals of machine learning (ML). However, the current generation of ML models is dominated by supervised techniques specialized for a single task in a specific domain. This challenge is well known, and several companies are advancing the agenda in multi-modal ML. Among those, Meta (Facebook) AI Research (FAIR) has been pioneering several techniques that can work with diverse data inputs such as text, images, video, or audio. Recently, FAIR published a blog post summarizing some of its top contributions to the multi-modal deep learning field.

FAIR's contributions to multi-modal deep learning methods are part of a more ambitious plan to develop intelligent systems that resemble the way humans learn. Among the multi-modal techniques created by the FAIR team, four lay down the path to more immersive, interactive, and intelligent models. Read More
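To make the core idea concrete, the sketch below shows a minimal late-fusion multi-modal model in PyTorch: two modality-specific encoders produce embeddings that are concatenated and passed to a shared head. The layer sizes and module names are illustrative assumptions, not FAIR's actual architectures.

```python
# Minimal late-fusion sketch: two modality-specific encoders whose embeddings
# are concatenated and passed to a shared classification head.
# Layer sizes and names are illustrative, not FAIR's actual models.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=300, image_dim=2048, hidden=256, n_classes=10):
        super().__init__()
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, text_feats, image_feats):
        t = self.text_encoder(text_feats)     # (batch, hidden)
        v = self.image_encoder(image_feats)   # (batch, hidden)
        fused = torch.cat([t, v], dim=-1)     # combine modalities
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 300), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 10])
```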

#metaverse

The Great Data Debate

Over a decade after the idea of “big data” was first born, data has become the central nervous system for decision-making in organizations of all sizes. But the modern data stack is still evolving, and which infrastructure trends and technologies will ultimately win out remains to be decided. Five leaders in data infrastructure debate the future of the modern data stack. Read More

#podcasts

Here’s How an Algorithm Guides a Medical Decision

Artificial intelligence algorithms are everywhere in healthcare. They sort through patients’ data to predict who will develop medical conditions like heart disease or diabetes, they help doctors figure out which people in an emergency room are the sickest, and they screen medical images to find evidence of diseases. But even as AI algorithms become more important to medicine, they’re often invisible to people receiving care. 

To help demystify the AI tools used in medicine today, we’re going to break down the components of one specific algorithm and see how it works. We picked an algorithm that flags patients in the early stages of sepsis — a life-threatening complication from an infection that results in widespread inflammation through the body. It can be hard for doctors to identify sepsis because the signs are subtle, especially early on, so it’s a common target for artificial intelligence-based tools. This particular program also uses mathematical techniques, like neural networks, that are typical of medical algorithms.  Read More
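For readers who want a rough mental model, the snippet below is a hedged sketch of the general kind of tool the article describes: a small neural network that scores a patient's sepsis risk from a handful of vital signs. The feature set, network size, and alert threshold are invented for illustration and are not the actual algorithm the article breaks down.

```python
# Hedged sketch of a sepsis early-warning classifier: a small neural network
# scoring patients from vital signs. Features, sizes, and the alert threshold
# are illustrative assumptions, not the algorithm described in the article.
import torch
import torch.nn as nn

FEATURES = ["heart_rate", "resp_rate", "temperature", "wbc_count", "lactate"]

class SepsisRiskNet(nn.Module):
    def __init__(self, n_features=len(FEATURES), hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # risk score in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = SepsisRiskNet()
vitals = torch.tensor([[112.0, 24.0, 38.9, 15.2, 3.1]])  # one patient's features
risk = model(vitals).item()
if risk > 0.8:  # hypothetical alert threshold chosen by the deploying hospital
    print(f"Flag patient for sepsis review (risk={risk:.2f})")
```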

#artificial-intelligence

Towards Realistic Market Simulations: a Generative Adversarial Networks Approach

Simulated environments are increasingly used by trading firms and investment banks to evaluate trading strategies before approaching real markets. Backtesting, a widely used approach, consists of simulating experimental strategies while replaying historical market scenarios. Unfortunately, this approach does not capture the market response to the experimental agents’ actions. In contrast, multi-agent simulation presents a natural bottom-up approach to emulating agent interaction in financial markets. It allows setting up pools of traders with diverse strategies to mimic the financial market trader population and testing the performance of new experimental strategies. Since individual agent-level historical data is typically proprietary and not available for public use, it is difficult to calibrate multiple market agents to obtain the realism required for testing trading strategies. To address this challenge, we propose a synthetic market generator based on Conditional Generative Adversarial Networks (CGANs) trained on real aggregate-level historical data. A CGAN-based “world” agent can generate meaningful orders in response to an experimental agent. We integrate our synthetic market generator into ABIDES, an open-source simulator of financial markets. By means of extensive simulations, we show that our proposal outperforms previous work in terms of stylized facts reflecting market responsiveness and realism. Read More
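A conditional GAN pairs a generator that produces samples given a conditioning input with a discriminator that judges (sample, condition) pairs. The sketch below is a minimal, generic CGAN in PyTorch in which a synthetic order is conditioned on a summary of recent market state; the dimensions, order representation, and module shapes are assumptions for illustration, not the paper's implementation or the ABIDES API.

```python
# Minimal conditional-GAN sketch: the generator emits a synthetic order vector
# conditioned on a market-state summary; the discriminator scores (state, order)
# pairs. Dimensions and features are illustrative, not the paper's setup.
import torch
import torch.nn as nn

STATE_DIM, NOISE_DIM, ORDER_DIM = 16, 8, 3  # e.g. (price offset, size, side)

generator = nn.Sequential(
    nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, ORDER_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(STATE_DIM + ORDER_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),  # real/fake logit
)

def fake_order(market_state):
    """Sample a synthetic order conditioned on the current market state."""
    noise = torch.randn(market_state.shape[0], NOISE_DIM)
    return generator(torch.cat([market_state, noise], dim=-1))

state = torch.randn(4, STATE_DIM)    # batch of market-state summaries
orders = fake_order(state)           # generated "world agent" orders
score = discriminator(torch.cat([state, orders], dim=-1))
print(orders.shape, score.shape)     # torch.Size([4, 3]) torch.Size([4, 1])
```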

#investing

‘No-Code’ Brings the Power of A.I. to the Masses

A growing number of new products allow anyone to apply artificial intelligence without having to write a line of computer code. Proponents believe the “no-code” movement will change the world.


This article is part of a new series on how artificial intelligence has the potential to solve everyday problems.

Tools such as Teachable Machine from Google and Lobe from Microsoft, in addition to natural-language low-code options like those from OpenAI and DeepMind, are making application development increasingly accessible. Read More

#devops

LandingLens for Machine Vision

The LandingLens platform includes a wide array of features that help teams develop and deploy reliable, repeatable deep learning-based inspection systems for a broad range of tasks in a production environment. We describe this software tool as a composition of three modules: Data, Model, and Deployment. With a data-centric approach throughout, LandingLens manages data, accelerates troubleshooting, and scales to deployment. Read More

Paper
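To illustrate the three-module composition the post describes (Data, Model, Deployment), here is a minimal sketch of how such a pipeline could be wired together. The class and method names are hypothetical and are not the LandingLens API.

```python
# Illustrative sketch of a Data -> Model -> Deployment composition.
# Class and method names are hypothetical; this is not the LandingLens API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataModule:
    """Manages labeled inspection images."""
    images: List[str] = field(default_factory=list)

    def add(self, path: str, label: str) -> None:
        self.images.append(f"{path}:{label}")

@dataclass
class ModelModule:
    """Trains and evaluates a defect-detection model on the data."""
    def train(self, data: DataModule) -> str:
        return f"model trained on {len(data.images)} images"

@dataclass
class DeploymentModule:
    """Packages a trained model for use on the production line."""
    def deploy(self, model_summary: str) -> str:
        return f"deployed: {model_summary}"

data = DataModule()
data.add("part_001.png", "scratch")
summary = ModelModule().train(data)
print(DeploymentModule().deploy(summary))
```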

#mlops