Forgot Password

#humor, #videos

How Will You Measure Your Life? Clay Christensen at TEDxBoston

#ted-talks, #videos

Start with why — how great leaders inspire action | Simon Sinek | TEDxPugetSound

#ted-talks, #videos

Artificial Intelligence is the New Electricity — Andrew Ng

How artificial intelligence (AI) is transforming industry and business.

On Wednesday, January 25, Andrew Ng — former Baidu Chief Scientist, Coursera co-founder, and Stanford Adjunct Professor — gave a talk at the Stanford MSx Future Forum, where he shared his views on how artificial intelligence (AI) is transforming industry and business. Read More

#artificial-intelligence

AI is the New Electricity – Dr. Andrew Ng

#artificial-intelligence, #videos

The Mythos of Model Interpretability

Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. Read More

#model-interpretability

Towards reverse-engineering black-box neural networks

Many deployed learned models are black boxes: given an input, they return an output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly, as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. Read More
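The idea can be sketched with a toy metamodel, loosely mirroring the paper's approach: train many small networks that differ in one hidden attribute (here the activation function, standing in for the attributes the paper targets), query each on a fixed probe set, and train a classifier that maps the query responses back to the attribute. A minimal sketch; the probe set, model sizes, and scikit-learn stand-ins are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
probes = rng.normal(size=(20, 8))  # fixed query set sent to every black box

def train_black_box(activation, seed):
    # Each "black box" is trained on its own data; only its outputs are observed.
    X, y = make_classification(n_samples=300, n_features=8, random_state=seed)
    return MLPClassifier(hidden_layer_sizes=(16,), activation=activation,
                         max_iter=1000, random_state=seed).fit(X, y)

# Build the metamodel's training set: query responses -> hidden attribute.
feats, attrs = [], []
for seed in range(40):
    for activation in ("relu", "tanh"):
        bb = train_black_box(activation, seed)
        feats.append(bb.predict_proba(probes).ravel())  # responses only
        attrs.append(activation)

# The metamodel never sees weights, only the 80 response vectors.
meta = LogisticRegression(max_iter=1000).fit(feats[:60], attrs[:60])
print(meta.score(feats[60:], attrs[60:]))  # accuracy at inferring the attribute
```

How cleanly the attribute separates depends on the probe inputs and model family; the point is only that query responses alone carry a learnable signal about model internals.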

#model-attacks

Stealing Machine Learning Models via Prediction APIs

Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service (“predictive analytics”) systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. Read More
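A minimal sketch of the retraining flavour of such an extraction attack: label synthetic queries through the exposed interface, then fit a surrogate to the responses. The victim model, the prediction_api stand-in, and the synthetic data below are all illustrative assumptions, not a real service:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Hypothetical confidential "victim" model sitting behind a query interface.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_api(queries):
    # Stand-in for a pay-per-query endpoint that returns labels.
    return victim.predict(queries)

# Attacker: label synthetic queries via the API, then train a surrogate.
rng = np.random.default_rng(1)
queries = rng.normal(size=(2000, 10))
stolen_labels = prediction_api(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement between surrogate and victim on fresh inputs.
test = rng.normal(size=(1000, 10))
agreement = np.mean(surrogate.predict(test) == prediction_api(test))
print(f"surrogate agrees with victim on {agreement:.1%} of queries")
```

Confidence scores, where an API returns them, leak even more per query than bare labels, which is why pay-per-query pricing alone is weak protection.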

#model-attacks

Stealing Hyperparameters in Machine Learning

Hyperparameters are critical in machine learning, as different hyperparameters often result in models with significantly different performance. Hyperparameters may be deemed confidential because of their commercial value and the confidentiality of the proprietary algorithms that the learner uses to learn them. In this work, we propose attacks that steal the hyperparameters learnt by a learner. Read More
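The core trick is that the learnt parameters sit at a stationary point of the training objective, so setting the gradient to zero and solving for the hyperparameter recovers it. A minimal sketch for ridge regression, where this works in closed form (the synthetic data and the use of scikit-learn's Ridge are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

# The "secret" regularisation strength the attacker wants to recover.
secret_alpha = 3.0
model = Ridge(alpha=secret_alpha, fit_intercept=False).fit(X, y)
w = model.coef_

# Ridge minimises ||y - Xw||^2 + alpha * ||w||^2, so at the learnt w
# the gradient vanishes: -2 X^T (y - Xw) + 2*alpha*w = 0, which gives
#   alpha = w^T X^T (y - Xw) / (w^T w)
residual_term = X.T @ (y - X @ w)
alpha_hat = (w @ residual_term) / (w @ w)
print(alpha_hat)  # ~3.0, recovered from X, y, and w alone
```

The same stationarity argument extends to other regularised objectives, which is what makes learnt hyperparameters hard to keep confidential once the parameters and training data are known.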

#model-attacks

Confidentiality and Privacy Threats in Machine Learning

New threat models in machine learning. Read More

#model-attacks