Is AI an Existential Threat?

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. Answering that question requires understanding the technology behind Machine Learning (ML) and recognizing the human tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which exists today and is already cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI.

… Focused on whatever operation they are responsible for, ANI systems are unable to use generalized learning to take over the world. That is the good news; the bad news is that, because they rely on human operators, these systems are susceptible to biased data, human error, or, even worse, a rogue human operator. Read More

#artificial-intelligence, #human

GPT3 and AGI: Beyond the Dichotomy

Earlier this week, I spoke at an interesting online event organized by Khaleej Times (the UAE’s longest-running daily English newspaper).

This two-part blog post is based on the talk. I addressed a hard topic, and one which I hope sparks some discussion.

To summarize my talk:

  • The discussion of whether GPT3 is AGI or not is dominated by either hype or dichotomy.
  • We need to think past both these polarizing mindsets because hype misleads discussion and dichotomy stifles discussion.
  • If we do so, then what are the implications for AGI?
Read More: Part 1 | Part 2

#human

GPT-3: The First Artificial General Intelligence?

If you had asked me a year or two ago when Artificial General Intelligence (AGI) would be invented, I’d have told you that we were a long way off. I wasn’t alone in that judgment. Most experts were saying that AGI was decades away, and some were saying it might not happen at all. The consensus is — was? — that all the recent progress in AI concerns so-called “narrow AI,” meaning systems that can only perform one specific task. An AGI, or a “strong AI,” which could perform any task as well as a human being, is a much harder problem. It is so hard that there isn’t a clear roadmap for achieving it, and few researchers are openly working on the topic. GPT-3 is the first model to seriously shake that status quo. Read More

#human, #nlp

What Is the Sound of Thought?

Reading linguistic thought directly from the brain has brought us closer to answering an age-old question — and has opened the door to many more.

Why do we include the sounds of words in our thoughts when we think without speaking? Are they just an illusion induced by our memory of overt speech?

These questions have long pointed to a mystery, one relevant to our endeavor to identify impossible languages — that is, languages that cannot take root in the human brain. This mystery is equally relevant from a methodological perspective, since to address it requires radically changing our approach to the relationship between language and the brain. It requires shifting from identifying (by means of neuroimaging techniques) where neurons are firing to identifying what neurons are firing when we engage in linguistic tasks. Read More

#human, #nlp

A robot wrote this entire article. Are you scared yet, human?

We asked GPT-3, OpenAI’s powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me. Read More

#human, #nlp

The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?” Read More

#human

We’re entering the AI twilight zone between narrow and general AI

With recent advances, the tech industry is leaving the confines of narrow artificial intelligence (AI) and entering a twilight zone, an ill-defined area between narrow and general AI.

To date, all the capabilities attributed to machine learning and AI have been in the category of narrow AI, no matter how sophisticated. …To date there are no examples of an AGI system, and most believe this threshold is still a long way off. …Nevertheless, there are experts who believe the industry is at a turning point, shifting from narrow AI to AGI. Read More

#human, #strategy

Modeling the Mental Lexicon as Part of Long-Term and Working Memory and Simulating Lexical Access in a Naming Task Including Semantic and Phonological Cues

To produce and understand words, humans access the mental lexicon. From a functional perspective, the long-term memory component of the mental lexicon comprises three levels: the concept level, the lemma level, and the phonological level. At each level, a different kind of word information is stored. Semantic as well as phonological cues can help to facilitate word access during a naming task, especially when neural dysfunctions are present. The processing corresponding to word access occurs in specific parts of working memory. Neural models for simulating speech processing help to uncover the complex relationships that exist between neural dysfunctions and corresponding behavioral patterns.

The Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA) are used to develop a quantitative neural model of the mental lexicon and its access during speech processing. By simulating a picture-naming task (WWT 6-10), the influence of cues is investigated by introducing neural dysfunctions within the neural model at different levels of the mental lexicon. Read More
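The three-level structure and the facilitating role of cues can be illustrated with a deliberately simple sketch. This is a toy lookup, not the NEF/SPA neural model from the paper; the entries, activation values, and threshold are all invented for illustration:

```python
from dataclasses import dataclass

# Toy three-level lexicon entry (illustrative only, not the NEF/SPA model)
@dataclass
class Entry:
    concept: str     # concept level (semantic category)
    lemma: str       # lemma level (the abstract word)
    phonology: str   # phonological level (sound form)

LEXICON = [
    Entry("animal:feline", "cat", "/kaet/"),
    Entry("animal:canine", "dog", "/dog/"),
    Entry("vehicle:car",   "car", "/kar/"),
]

def name_picture(target_concept, semantic_cue=None, phonological_cue=None,
                 threshold=1.5):
    """Simulate a naming task: the pictured concept gives base activation,
    and a semantic or phonological cue adds a boost. Retrieval succeeds
    only if total activation clears the threshold (modeling word-finding
    difficulty that cues can overcome)."""
    best, best_act = None, 0.0
    for e in LEXICON:
        act = 1.0 if e.concept == target_concept else 0.0
        if semantic_cue and semantic_cue in e.concept:
            act += 0.5   # semantic cue boosts semantically matching entries
        if phonological_cue and e.phonology.startswith(phonological_cue):
            act += 0.5   # phonological cue boosts matching sound forms
        if act > best_act:
            best, best_act = e, act
    return best.lemma if best and best_act >= threshold else None
```

With these made-up numbers, naming fails without a cue (activation 1.0 stays below the threshold of 1.5), while either a semantic cue ("animal") or a phonological cue ("/k") lifts the target word "cat" over the threshold, mirroring how cues facilitate access when retrieval alone is insufficient.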

#human

MIT AGI: Cognitive Architecture (Nate Derbinsky)

Read More

Lecture Slides

#human, #videos

Toward a machine learning model that can reason about everyday actions

Researchers train a model to reach human-level performance at recognizing abstract concepts in video.

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.

…In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them. Read More

#human