Reframing Superintelligence — Comprehensive AI Services as General Intelligence

Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a basis for a different approach to understanding them: we can consider how AI systems are produced (through the work of research and development), what they do (broadly, provide services by performing tasks), and what they will enable (including incremental yet potentially thorough automation of human tasks).

Because tasks subject to automation include the tasks that comprise AI research and development, current trends in the field promise accelerating AI-enabled advances in AI technology itself, potentially leading to asymptotically recursive improvement of AI technologies in distributed systems, a prospect that contrasts sharply with the vision of self-improvement internal to opaque, unitary agents. Read More

#human, #singularity

Trust, control and personalization through human-centric AI

Our virtual lives lie in the hands of algorithms that govern what we see and don’t see, how we perceive the world, and which life choices we make. Artificial intelligence decides which movies are of interest to you, what your social media feeds should look like, and which advertisements have the highest likelihood of convincing you. These algorithms are controlled either by corporations or by governments, each of which tends to have goals that differ from the individual’s objectives.

In this article, we dive into the world of human-centric AI, leading to a new era where the individual not only controls the data, but also steers the algorithms to ensure fairness, privacy and trust. Breaking free from filter bubbles and detrimental echo chambers that skew the individual’s worldview allows the user to truly benefit from today’s AI revolution.

While the devil is in the implementation and many open questions remain, the main purpose of this think piece is to spark a discussion and lay out a vision of how AI can be employed in a human-centric way. Read More

#explainability, #trust

The Seven Patterns Of AI

Read More

#governance, #standards

Complete Hands-Off Automated Machine Learning

Here’s a proposal for real ‘zero touch’, ‘set-’em-and-forget-’em’ machine learning from the researchers at Amazon. If you have an environment as fast-changing as e-retail and a huge number of models matching buyers and products, you could achieve real cost savings and revenue increases by making the refresh cycle faster and more accurate with automation. This capability will likely be coming soon to your favorite AML platform. Read More

Read Amazon’s paper here
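
As a rough sketch of what a ‘zero touch’ refresh could look like (the threshold, data, and function names below are hypothetical illustrations, not Amazon’s actual pipeline), the core idea is to retrain automatically whenever live performance drifts:

```python
# Hypothetical sketch of an automated refresh loop; the drift threshold,
# synthetic data, and names are illustrative, not Amazon's actual design.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

DRIFT_THRESHOLD = 0.05  # tolerated accuracy drop before auto-retraining

def refresh_if_stale(model, X_new, y_new, baseline_acc):
    """Retrain automatically when live accuracy drifts below the baseline."""
    live_acc = accuracy_score(y_new, model.predict(X_new))
    if baseline_acc - live_acc > DRIFT_THRESHOLD:
        model.partial_fit(X_new, y_new)  # incremental refresh, no human in the loop
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)
model = SGDClassifier(random_state=0).fit(X, y)
baseline = accuracy_score(y, model.predict(X))
model = refresh_if_stale(model, X + 0.5, y, baseline)  # shifted data simulates drift
```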

#deep-learning, #machine-learning

What's New In Gartner's Hype Cycle For AI, 2019

Between 2018 and 2019, organizations that have deployed artificial intelligence (AI) grew from 4% to 14%, according to Gartner’s 2019 CIO Agenda survey.

Conversational AI remains at the top of corporate agendas, spurred by the worldwide success of Amazon Alexa, Google Assistant, and others.

Enterprises are making progress with AI as it grows more widespread, and they’re also making more mistakes that contribute to their accelerating learning curve. Read More

#strategy

Logchain: Blockchain-assisted Log Storage

During the normal operation of a Cloud solution, no one usually pays attention to the logs except the technical department, which may periodically check them to ensure that the performance of the platform conforms to the Service Level Agreements. However, the moment the status of a component changes from acceptable to unacceptable, or a customer complains about the accessibility or performance of a platform, the importance of logs increases significantly. Depending on the scope of the issue, all departments, including management, customer support, and even the actual customer, may turn to the logs to find out what has happened, how it has happened, and who is responsible for the issue. The party at fault may be motivated to tamper with the logs to hide their fault. Given the number of logs generated by Cloud solutions, there are many tampering possibilities. While a tamper-detection solution can be used to detect any changes in the logs, we argue that the critical nature of logs calls for immutability. In this work, we propose a blockchain-based log system, called Logchain, that collects logs from different providers and prevents log tampering by sealing the logs cryptographically and adding them to a hierarchical ledger, hence providing an immutable platform for log storage. Read More
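
A minimal sketch of the sealing idea, assuming a simple SHA-256 hash chain (the block layout and genesis value below are illustrative assumptions, not Logchain’s exact scheme):

```python
# Illustrative hash-chain sealing in the spirit of Logchain; the block
# fields and genesis hash are assumptions, not the paper's exact design.
import hashlib
import json
import time

def seal(prev_hash: str, log_entry: str) -> dict:
    """Seal one log entry by binding it to the previous block's hash."""
    block = {"ts": time.time(), "log": log_entry, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain: list) -> bool:
    """Recompute each block's hash and check every link to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("ts", "log", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [seal("0" * 64, "service started")]             # genesis block
chain.append(seal(chain[-1]["hash"], "SLA check: OK"))
chain[0]["log"] = "tampered"                            # any edit breaks the chain
print(verify(chain))                                    # False
```

Because each block’s hash covers the previous block’s hash, rewriting any earlier log entry invalidates every later seal, which is what makes the ledger effectively immutable.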

#blockchain

Improving VIX Futures Forecasts using Machine Learning Methods

The problem of forecasting market volatility is a difficult task for most fund managers. Volatility forecasts are used for risk management, alpha (risk) trading, and the reduction of trading friction. Improving forecasts of future market volatility assists fund managers in adding or reducing risk in their portfolios, as well as in increasing hedges to protect their portfolios in anticipation of a market sell-off event. Our analysis compares three existing financial models that forecast future market volatility using the Chicago Board Options Exchange Volatility Index (VIX) to six machine/deep learning supervised regression methods, to determine which models provide the best market volatility forecasts. Using VIX futures and options data along with other technical indicators, our analysis compares multiple forecasting models for estimating the 1-month VIX futures contract (UX1) both 3 and 5 days forward. This analysis finds that the machine/deep learning methods of Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) provide improved results over existing linear regression, principal components analysis (PCA), and ARIMA methods. Comparing estimated versus actual test data, both the RNN and LSTM methods show lower mean squared error (MSE), lower mean absolute error (MAE), higher explained variance, and higher correlation. Finally, an accuracy matrix was generated for each model, showing that RNN and LSTM had better overall accuracy due to high true positive and negative forecasts as well as much lower false positive forecasts. Read More
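
The comparison methodology can be illustrated with a toy sketch: fit a linear baseline and an LSTM on the same lag windows and score both with MSE/MAE. Everything below (synthetic series, window size, horizon, model sizes) is an illustrative assumption, not the paper’s data or features:

```python
# Toy sketch of the model-comparison setup; synthetic data stands in for
# the UX1 series, and all hyperparameters here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
series = 20 + np.cumsum(rng.normal(0, 0.5, 1000)) * 0.1  # stand-in for UX1

def make_windows(x, lookback=10, horizon=3):
    """Build (lookback window -> value `horizon` steps ahead) pairs."""
    X, y = [], []
    for i in range(len(x) - lookback - horizon):
        X.append(x[i:i + lookback])
        y.append(x[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

X, y = make_windows(series)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Linear-regression baseline on the raw lag window.
lin_pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)

# Small LSTM on the same windows, shaped (samples, timesteps, features).
lstm = Sequential([LSTM(16, input_shape=(X.shape[1], 1)), Dense(1)])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X_tr[..., None], y_tr, epochs=10, verbose=0)
lstm_pred = lstm.predict(X_te[..., None], verbose=0).ravel()

for name, pred in [("linear", lin_pred), ("LSTM", lstm_pred)]:
    print(name, "MSE:", mean_squared_error(y_te, pred),
          "MAE:", mean_absolute_error(y_te, pred))
```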

#investing

Quantum supremacy using a programmable superconducting processor

The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor [1]. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits [2–7] to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy [8–14] for this specific computational task, heralding a much-anticipated computing paradigm. Read More
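
As a back-of-envelope check on the quoted state-space size (my arithmetic, not from the paper), 53 qubits do give roughly 10^16 amplitudes, which is also why brute-force classical simulation is memory-bound:

```python
# Back-of-envelope check (not from the paper): a full state vector for
# 53 qubits stores 2**53 complex amplitudes.
n_qubits = 53
amplitudes = 2 ** n_qubits              # ~9.0e15, i.e. about 10^16 states
bytes_needed = amplitudes * 8           # assuming 8-byte complex64 amplitudes
print(f"{amplitudes:.2e} amplitudes, {bytes_needed / 2**50:.0f} PiB of memory")
```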

#big7, #quantum

Keras vs. tf.keras: What’s the difference in TensorFlow 2.0?

The intertwined relationship between Keras and TensorFlow

Just in case you didn’t hear, the long-awaited TensorFlow 2.0 was officially released on September 30th.

And while it’s certainly a time for celebration, many deep learning practitioners such as Jeremiah are scratching their heads:

— What does the TensorFlow 2.0 release mean for me as a Keras user?
— Am I supposed to use the keras package for training my own neural networks?
— Or should I be using the tf.keras submodule inside TensorFlow 2.0 instead?
— Are there TensorFlow 2.0 features that I should care about as a Keras user?

The transition from TensorFlow 1.x to TensorFlow 2.0 is going to be a bit of a rocky one, at least to start, but with the right understanding, you’ll be able to navigate the migration with ease. Read More
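
For a concrete sense of the change, here is a minimal sketch (assuming a TensorFlow 2.0 install; the layer sizes are arbitrary) of building a model through the tf.keras submodule rather than the standalone keras package:

```python
# Minimal sketch assuming TensorFlow 2.0 is installed: the Keras API ships
# inside TensorFlow as tf.keras, so no separate `keras` import is needed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),  # arbitrary sizes
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()  # runs eagerly by default in TF 2.0; no Session required
```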

#frameworks

China’s Surveillance State Has Tens of Millions of New Targets

One evening in the summer of 2017, local police in China made a surprise inspection of a small private language school, checking the visas of all non-Chinese attendees. Among those present was a foreign doctoral student, who had left his passport at his hotel. “Not to worry,” said the officer. “What’s your name?” The officer took out a handheld device and entered the student’s name. “Is this you?” Displayed on the screen was the researcher’s name, his passport number, and the address of his hotel.

This kind of incident is common in Xinjiang, where China has extensively deployed technology against Muslim minorities. But this episode took place in Yunnan province, near China’s southern border with Myanmar. In fact, public security bureaus—the network of agencies in China that deal with domestic security and intelligence—across the country are using electronic databases coupled with handheld tools to keep track of certain categories of people. These “key individuals,” as they are officially known, range from paroled criminals and users of drugs to foreigners, petitioners, and religious believers. Read More

#china, #surveillance