Star Trek actor William Shatner is celebrating his 90th birthday by creating an AI-powered version of himself that’ll live forever.
We use the term AI pretty loosely, though. No, scientists aren’t going to scan Shatner’s brain and build an AI from his neural activity. (Nor is his head ending up in a green jar like it did in the sci-fi cartoon Futurama.)
Instead, a company called StoryFile is taping interviews with Shatner to create an interactive video program discussing his life. The AI component lets viewers ask the video program a question aloud; StoryFile’s system then searches the footage for the video segment that supplies the best response. Read More
Daily Archives: March 22, 2021
Data Scientists are Increasingly Deserting their Jobs. But Why?
According to a report, data scientists spend two hours a week searching for new jobs
‘I’m a data scientist’ still feels prestigious to say, doesn’t it? Then why has there been a downward trend in data science professions recently, especially among data scientists? Over the past couple of years, data scientists have been quitting their jobs at top technology companies. Despite being paid handsomely, many choose to walk out; in the worst cases, they don’t even complete a full year at the company.
…Data scientist was named the ‘sexiest job of the 21st century’ by Harvard Business Review not long ago. From Fortune 500 companies to retail stores, organizations around the world want to build teams of top data science professionals to drive their companies toward success.
Despite all this attention, the trend has taken a U-turn in recent years. According to a Financial Times investigation, data scientists spend an average of two hours a week looking for a new job. Machine learning specialists topped the list of developers who said they were looking for a new job, at 14.3%, with data scientists close behind at 13.2%. Read More
Why machine learning struggles with causality
When you look at a baseball player hitting the ball, you can make inferences about causal relations between different elements. For instance, you can see the bat and the baseball player’s arm moving in unison, but you also know that it is the player’s arm causing the bat’s movement and not the other way around. You also don’t need to be told that the bat is causing the sudden change in the ball’s direction.
Likewise, you can think about counterfactuals, such as what would happen if the ball flew a bit higher and didn’t hit the bat.
Such inferences come to us humans intuitively. We learn them at a very early age, without being explicitly instructed by anyone, just by observing the world. But for machine learning algorithms, which have managed to outperform humans in complicated tasks such as Go and chess, causality remains a challenge. Machine learning algorithms, and deep neural networks in particular, are especially good at ferreting out subtle patterns in huge sets of data. They can transcribe audio in real time, label thousands of images and video frames per second, and examine X-ray and MRI scans for cancerous patterns. But they struggle to make simple causal inferences like those in the baseball example above.
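The gap between correlation and causation can be made concrete with a toy sketch (a NumPy illustration of the general point, not an example from the paper; the variables and numbers here are invented for demonstration). A confounder Z drives both X and Y, so a model trained on observational data learns a strong X-to-Y "effect" that evaporates the moment we intervene and set X ourselves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder Z causes both X and Y; X has NO causal effect on Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = 2 * z + 0.1 * rng.normal(size=n)

# A pattern-matching model regresses Y on X and finds a strong "effect".
slope = (x @ y) / (x @ x)
print(f"learned slope: {slope:.2f}")  # close to 2: pure correlation

# Intervention do(X=x): set X by hand, cutting the Z -> X link.
x_int = rng.normal(size=n)                 # X no longer tied to Z
y_int = 2 * z + 0.1 * rng.normal(size=n)   # Y unchanged, since X never caused it

pred_error = np.mean((slope * x_int - y_int) ** 2)
baseline = np.mean(y_int ** 2)             # error of the true causal answer (zero effect)
print(f"model MSE under intervention: {pred_error:.2f} vs true-effect MSE: {baseline:.2f}")
```

Under the intervention, the regression model does worse than simply predicting "no effect": it has memorized a correlation, not a causal mechanism, which is the failure mode the paper's authors are targeting.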
In a paper titled “Towards Causal Representation Learning,” researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations. Read More
Data Science vs. Artificial Intelligence – What are the Differences?
With technological advancement, many new career opportunities have emerged. You have surely heard of artificial intelligence and data science: two of the most important technologies trending today. Both are in high demand across the globe, which is why individuals with the relevant skills are in demand too. If you are wondering what exactly the difference between the two is, this post explores it. Data science uses artificial intelligence in some of its operations, but not all of them, and data science also contributes to AI to some extent. Many people assume that contemporary data science is nothing but artificial intelligence, but that is not true. Let us look at Data Science vs. Artificial Intelligence for clarity. Read More
Is AI Adoption Going Way Too Fast?
The COVID-19 pandemic has accelerated the pace of AI adoption, but many industry insiders find the speed of adoption a bit overwhelming, according to a KPMG survey.
The KPMG report, based on a survey of 950 full-time business/IT decision-makers with at least a moderate amount of AI knowledge working at companies with over $1 billion in revenue, analysed the uptake, concerns, and confidence in AI across seven industries: tech, government, retail, financial services, industrial manufacturing, healthcare, and life sciences.
According to Traci Gusher, Principal of AI at KPMG, industries are experiencing a COVID-19 ‘whiplash’, with AI adoption skyrocketing due to the pandemic. Meanwhile, experts have expressed confidence in AI’s ability to solve significant business challenges. Read More
Deeper Neural Networks Lead to Simpler Embeddings
Recent research is increasingly investigating how neural networks, as over-parameterized as they are, manage to generalize. According to traditional statistics, the more parameters a model has, the more it overfits. That notion is directly contradicted by a working assumption of deep learning: increased parameterization improves generalization.
Although it may not be explicitly stated anywhere, this is the intuition behind why researchers keep scaling models up to make them more powerful.
There have been many efforts to explain exactly why this is so. Most are quite interesting; the recently proposed Lottery Ticket Hypothesis holds that a large network is effectively a giant lottery containing a well-initialized winning subnetwork, and another paper argues through theoretical proof that this phenomenon is built into the nature of deep learning.
Perhaps the most intriguing proposal, though, is that deeper neural networks lead to simpler embeddings. This is also known as the “simplicity bias”: neural network parameters are biased toward simpler mappings. Read More
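A small, well-known relative of this simplicity bias can be seen even in a linear model (a NumPy toy of implicit regularization, not an example from the paper being discussed; the sizes and seed are arbitrary). With far more parameters than data points there are infinitely many weight vectors that fit the data perfectly, yet plain gradient descent started from zero quietly selects the simplest one, the minimum-norm interpolant:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 50                        # far more parameters than data points
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Gradient descent on squared loss, starting from zero weights.
w = np.zeros(d)
lr = 0.01
for _ in range(20_000):
    w -= lr * X.T @ (X @ w - y) / n

# Infinitely many w interpolate the data; GD converges to the
# minimum-norm solution, which the pseudoinverse computes directly.
w_min_norm = np.linalg.pinv(X) @ y
print(np.allclose(w, w_min_norm, atol=1e-4))  # the two solutions match
print(np.allclose(X @ w, y, atol=1e-4))       # and the data is fit exactly
```

The mechanism is that gradient updates never leave the row space of X, so among all interpolating solutions the iterates can only reach the one with the smallest norm. The papers above study far richer versions of this effect in deep networks, but the flavor is the same: the training procedure itself prefers simple solutions.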