Google rolls out AI improvements to aid with Search safety and ‘personal crisis’ queries

Google today announced it will be rolling out improvements to its AI model to make Google Search a safer experience and one that’s better at handling sensitive queries, including those around topics like suicide, sexual assault, substance abuse and domestic violence. It’s also using other AI technologies to improve its ability to remove unwanted explicit or suggestive content from Search results when people aren’t specifically seeking it out.

Currently, when people search for sensitive information — like suicide, abuse or other topics — Google displays the contact information for the relevant national hotlines above its search results. But the company explains that people in crisis situations may search in all kinds of ways, and it’s not always obvious to a search engine that they’re in need, even when their queries would raise flags for a human reader. With machine learning and the latest improvements to its AI model MUM (Multitask Unified Model), Google says it will be able to detect a wider range of personal crisis searches automatically and more accurately, because MUM is better at understanding the intent behind people’s questions and queries. Read More
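Google hasn’t published MUM’s internals, but as a toy illustration of the general idea (scoring the intent behind a free-text query), here is a minimal sketch that uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model choice, candidate labels and alert threshold are illustrative assumptions, not Google’s system.

```python
# Illustrative sketch only: a generic zero-shot intent classifier,
# not Google's MUM. Requires: pip install transformers torch
from transformers import pipeline

# Assumed model choice; any NLI-based zero-shot model would work here.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical intent labels and alert threshold.
labels = ["personal crisis", "medical information", "shopping", "navigation"]
query = "where can I get help after an assault"

result = classifier(query, candidate_labels=labels)
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label == "personal crisis" and top_score > 0.7:
    print("Surface crisis-hotline contact information above the results")
```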

#big7

Otter.ai rolls out a new AI-generated meeting summary feature and more collaboration tools

AI-powered voice transcription service Otter.ai is releasing a set of new meeting-focused features to boost collaboration, the company announced on Tuesday. Most notably, the company is adding a new “Automatic Outline” feature that uses Otter’s proprietary AI to automatically create a meeting summary. The new feature aims to give you an idea of what your colleagues said during a meeting without having to listen to a recording or read an entire transcript. The new meeting summaries will be displayed in the “Outline” panel on the platform.

Otter.ai is also introducing a new “Meeting Gems” panel to capture action items, decisions and key moments from meetings. You can use the panel to assign items, add comments or ask questions. Users are able to generate a Meeting Gem directly from their meeting by highlighting snippets within the notes. Read More

#nlp

Artificial intelligence beats eight world champions at bridge

Victory marks milestone for AI as bridge requires more human skills than other strategy games

An artificial intelligence has beaten eight world champions at bridge, a game in which human supremacy has resisted the march of the machines until now.

The victory represents a new milestone for AI because, in bridge, players work with incomplete information and must react to the behaviour of several other players – a scenario far closer to human decision-making. Read More

#human

The Puzzling Reason AI May Never Compete With Human Consciousness

Two immersive thought experiments lead us right into a flurry of questions surrounding the human mind. You decide where you stand.

Constructing humanlike artificial intelligence often starts with deconstructing humans. Take fingerprints: When holding soapy dishes, we intuitively adjust our grip based on our fingerprint structure. It just doesn’t cross our mind, because we chalk it up to reflex – and for the longest time, so did scientists. No one had any equations to unravel how this works because, well, it didn’t matter much. But the rise of robotics has complicated things.

For a robot to do this, we have to figure out precisely what’s going on, and even turn that knowledge into writable code. Now decoding fingerprint grip matters, and researchers are finally trying to find a new law of physics to explain it. Read More

#human

NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

Instant NeRF is a neural rendering model that learns a high-resolution 3D scene in seconds — and can render images of that scene in a few milliseconds.

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds.

Known as inverse rendering, the process uses AI to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken at different angles. The NVIDIA Research team has developed an approach that accomplishes this task almost instantly — making it one of the first models of its kind to combine ultra-fast neural network training and rapid rendering. Read More
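Instant NeRF’s speed comes from NVIDIA-specific engineering (notably a multiresolution hash encoding of the scene), but the underlying NeRF recipe can be sketched briefly. Below is a minimal, illustrative PyTorch version of that recipe, not NVIDIA’s code: a small network maps a 3D point and viewing direction to color and density, and a pixel’s color is composited from samples along its camera ray. All names and sizes are assumptions.

```python
# Minimal NeRF-style sketch of inverse rendering (illustrative, untrained).
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # input: xyz + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # output: RGB + density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # color in [0, 1]
        sigma = torch.relu(out[..., 3])        # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Composite color along one camera ray (volume-rendering quadrature)."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction           # samples along the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = t[1] - t[0]                                # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)            # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans                            # per-sample contribution
    return (weights[:, None] * rgb).sum(dim=0)         # final pixel color

# Training would minimize the difference between rendered pixels and the
# corresponding pixels of the input photos; this model is untrained.
pixel = render_ray(TinyNeRF(), torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```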

#image-recognition, #nvidia

Trends in AI—March 2022

A monthly selection of ML papers by Zeta Alpha: Audio generation, Gradients without Backprop, Mixture of Experts, Multimodality, Information Retrieval, and more. Read More

#artificial-intelligence

Four Architectures that Showcase Meta AI’s Progress in Multimodal Deep Learning

One of the marvels of human cognition is our ability to simultaneously process information from different sensory inputs. In most cognitive tasks, humans natively combine information in different forms such as audio, language or speech. Recreating this ability has been one of the traditional goals of machine learning (ML). However, the current generation of ML models is dominated by supervised techniques that specialize in a single task in a specific domain. This challenge is well known, and several companies are advancing the agenda in multimodal ML. Among them, Meta (Facebook) AI Research (FAIR) has been pioneering techniques that can work with diverse data inputs such as text, images, video or audio. Recently, FAIR published a blog post summarizing some of its top contributions to the multimodal deep learning field.

FAIR’s contributions to multimodal deep learning are part of a more ambitious plan to develop intelligent systems that resemble the way humans learn. Of the multimodal techniques created by the FAIR team, four lay down the path to more immersive, interactive and intelligent models. Read More

#metaverse

The Great Data Debate

Over a decade after the idea of “big data” was born, data has become the central nervous system for decision-making in organizations of all sizes. But the modern data stack is still evolving, and which infrastructure trends and technologies will ultimately win out remains to be decided. Five leaders in data infrastructure debate the future of the modern data stack. Read More

#podcasts

Here’s how an algorithm guides a medical decision

Artificial intelligence algorithms are everywhere in healthcare. They sort through patients’ data to predict who will develop medical conditions like heart disease or diabetes, they help doctors figure out which people in an emergency room are the sickest, and they screen medical images to find evidence of diseases. But even as AI algorithms become more important to medicine, they’re often invisible to people receiving care. 

To help demystify the AI tools used in medicine today, we’re going to break down the components of one specific algorithm and see how it works. We picked an algorithm that flags patients in the early stages of sepsis — a life-threatening complication from an infection that results in widespread inflammation through the body. It can be hard for doctors to identify sepsis because the signs are subtle, especially early on, so it’s a common target for artificial intelligence-based tools. This particular program also uses mathematical techniques, like neural networks, that are typical of medical algorithms. Read More
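The article walks through one specific commercial model; as a generic illustration of the shape such a tool often takes (a small neural network that scores a patient’s recent vitals and labs, with an alert threshold on the output), here is a minimal sketch. The feature list, architecture and threshold are assumptions, and an untrained network’s score is meaningless until it has been fitted to labelled patient records.

```python
# Illustrative sketch of a sepsis risk scorer, not any vendor's algorithm.
import torch
import torch.nn as nn

# Hypothetical input features summarizing recent vitals and labs.
FEATURES = ["heart_rate", "resp_rate", "temperature", "wbc_count", "lactate"]

class SepsisRiskNet(nn.Module):
    def __init__(self, n_features=len(FEATURES), hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vitals):
        # Squash the output to a probability-like risk score in [0, 1].
        return torch.sigmoid(self.net(vitals))

model = SepsisRiskNet()  # in practice: trained on labelled patient records
patient = torch.tensor([[112.0, 24.0, 38.9, 14.2, 3.1]])  # one patient's values
risk = model(patient).item()
if risk > 0.8:  # alert threshold would be tuned with clinicians
    print(f"Sepsis alert: risk score {risk:.2f}")
```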

#artificial-intelligence

Towards Realistic Market Simulations: a Generative Adversarial Networks Approach

Simulated environments are increasingly used by trading firms and investment banks to evaluate trading strategies before approaching real markets. Backtesting, a widely used approach, consists of simulating experimental strategies while replaying historical market scenarios. Unfortunately, this approach does not capture the market’s response to the experimental agents’ actions. In contrast, multi-agent simulation presents a natural bottom-up approach to emulating agent interaction in financial markets. It allows researchers to set up pools of traders with diverse strategies that mimic the financial-market trader population, and to test the performance of new experimental strategies. Since individual agent-level historical data is typically proprietary and not available for public use, it is difficult to calibrate multiple market agents to obtain the realism required for testing trading strategies. To address this challenge, we propose a synthetic market generator based on Conditional Generative Adversarial Networks (CGANs) trained on real aggregate-level historical data. A CGAN-based “world” agent can generate meaningful orders in response to an experimental agent. We integrate our synthetic market generator into ABIDES, an open-source simulator of financial markets. Through extensive simulations, we show that our proposal outperforms previous work in terms of stylized facts reflecting market responsiveness and realism. Read More
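The paper’s exact architecture and features aren’t reproduced here; the following is a minimal sketch of the conditional-GAN pattern the abstract describes: a generator conditioned on a summary of recent market state proposes the next order, and a discriminator judges (state, order) pairs against history. Dimensions, features and the stand-in training batch are assumptions.

```python
# Illustrative CGAN "world agent" sketch, not the authors' implementation.
import torch
import torch.nn as nn

STATE_DIM = 16   # e.g. recent returns, spread, order-book imbalance (assumed)
ORDER_DIM = 3    # e.g. side, size, price offset from mid (assumed)
NOISE_DIM = 8

class Generator(nn.Module):
    """Maps (market state, noise) to the features of the next order."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ORDER_DIM),
        )

    def forward(self, state, noise):
        return self.net(torch.cat([state, noise], dim=-1))

class Discriminator(nn.Module):
    """Scores (market state, order) pairs: real history vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ORDER_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit
        )

    def forward(self, state, order):
        return self.net(torch.cat([state, order], dim=-1))

# One conditional-GAN training step on a stand-in batch; real training would
# use historical (state, order) pairs built from aggregate market data.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

state = torch.randn(32, STATE_DIM)       # placeholder market snapshots
real_order = torch.randn(32, ORDER_DIM)  # placeholder historical orders

# Discriminator step: tell real orders from generated ones, given the state.
fake_order = G(state, torch.randn(32, NOISE_DIM))
d_loss = (bce(D(state, real_order), torch.ones(32, 1)) +
          bce(D(state, fake_order.detach()), torch.zeros(32, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: produce orders the discriminator accepts as real.
g_loss = bce(D(state, fake_order), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```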

#investing