OpenAI’s Chief Scientist Claimed AI May Be Conscious — and Kicked Off a Furious Debate

A month ago, Ilya Sutskever tweeted that large neural networks may be “slightly conscious.” As a co-founder and Chief Scientist of OpenAI, and a co-author of the landmark paper that sparked the deep learning revolution, he certainly knew that such a bold claim, accompanied by neither evidence nor explanation, would attract the attention of the AI community, cognitive scientists, and philosophy lovers alike. Within days, the tweet had drawn more than 400 responses and twice that number of retweets.

People in AI’s vanguard circles like to ponder the future of the field: When will we achieve artificial general intelligence (AGI)? What are the capabilities and limitations of large transformer-based systems like GPT-3 and superhuman reinforcement learning models like AlphaZero? When, if ever, will AI develop consciousness? Read More

#human

The Metaverse Isn’t a Destination. It’s a Metaphor

Is this the hype peak of the metaverse? Or are we seeing something emerge that’s been evolving for a long time?

It was about as meta as it gets. After donning VR headsets, Stanford University Professor Jeremy Bailenson and I “stood” in front of his students in a virtual classroom, our avatars watching theirs discuss the nature of virtual existence. Except his students weren’t “there.” The discussion was a recording. The professor and I stood as living avatars among ghosts.

Bailenson, who founded Stanford’s Virtual Human Interaction Lab, then paused the recording and walked through the class. His avatar gliding, he explained how these playbacks will yield insights into what social life will mean in the “metaverse.” Of course, he doesn’t know what he’ll discover, just like the many companies now busily touting this much-hyped but as-yet-unformed next evolution of the internet. Read More

#metaverse

Deep Learning on Electronic Medical Records Is Doomed to Fail

A few years ago, I worked on a project investigating the potential of machine learning to transform healthcare by modeling electronic medical records. I walked away deeply disillusioned with the whole field: I don’t think it needs machine learning right now. What it does need is plenty of IT support, and even that isn’t enough. Here are some of the structural reasons why I don’t think deep learning models on EMRs will be useful any time soon.

  • Data is fragmented
  • Data is Workflow, Workflow is Data (with apologies to Lisp)
  • Data reflects an adversarial process
  • Data encodes clinical expertise
  • Causal inference is hard
Read More

#deep-learning