GPT-3: The First Artificial General Intelligence?

If you had asked me a year or two ago when Artificial General Intelligence (AGI) would be invented, I’d have told you that we were a long way off. I wasn’t alone in that judgment. Most experts were saying that AGI was decades away, and some were saying it might not happen at all. The consensus was that all the recent progress in AI concerns so-called “narrow AI,” meaning systems that can only perform one specific task. An AGI, or a “strong AI,” which could perform any task as well as a human being, is a much harder problem. It is so hard that there isn’t a clear roadmap for achieving it, and few researchers are openly working on the topic. GPT-3 is the first model to seriously shake that status quo. Read More

#human, #nlp

What Is the Sound of Thought?

Reading linguistic thought directly from the brain has brought us closer to answering an age-old question — and has opened the door to many more.

Why do we include the sounds of words in our thoughts when we think without speaking? Are they just an illusion induced by our memory of overt speech?

These questions have long pointed to a mystery, one relevant to our endeavor to identify impossible languages — that is, languages that cannot take root in the human brain. This mystery is equally relevant from a methodological perspective, since to address it requires radically changing our approach to the relationship between language and the brain. It requires shifting from identifying (by means of neuroimaging techniques) where neurons are firing to identifying what neurons are firing when we engage in linguistic tasks. Read More

#human, #nlp

A robot wrote this entire article. Are you scared yet, human?

We asked GPT-3, OpenAI’s powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me. Read More

#human, #nlp

The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?” Read More

#human

We’re entering the AI twilight zone between narrow and general AI

With recent advances, the tech industry is leaving the confines of narrow artificial intelligence (AI) and entering a twilight zone, an ill-defined area between narrow and general AI.

To date, all the capabilities attributed to machine learning and AI, no matter how sophisticated, have been in the category of narrow AI. …To date there are no examples of an AGI system, and most believe there is still a long way to go before reaching that threshold. …Nevertheless, there are experts who believe the industry is at a turning point, shifting from narrow AI to AGI. Read More

#human, #strategy

Modeling the Mental Lexicon as Part of Long-Term and Working Memory and Simulating Lexical Access in a Naming Task Including Semantic and Phonological Cues

To produce and understand words, humans access the mental lexicon. From a functional perspective, the long-term memory component of the mental lexicon consists of three levels: the concept level, the lemma level, and the phonological level. At each level, different kinds of word information are stored. Semantic as well as phonological cues can facilitate word access during a naming task, especially when neural dysfunctions are present. The processing corresponding to word access occurs in specific parts of working memory. Neural models for simulating speech processing help to uncover the complex relationships that exist between neural dysfunctions and corresponding behavioral patterns.

The Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA) are used to develop a quantitative neural model of the mental lexicon and its access during speech processing. By simulating a picture-naming task (WWT 6-10), the influence of cues is investigated by introducing neural dysfunctions within the neural model at different levels of the mental lexicon. Read More

#human

MIT AGI: Cognitive Architecture (Nate Derbinsky)

Read More

Lecture Slides

#human, #videos

Toward a machine learning model that can reason about everyday actions

Researchers train a model to reach human-level performance at recognizing abstract concepts in video.

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.

…In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them. Read More

#human

Kids’ brains may hold the secret to building better AI

Four-year-olds can learn things even the most intelligent machine can’t. It’s time AI researchers took note.

The mathematician and computer science pioneer Alan Turing hit on a promising direction for artificial intelligence research way back in 1950. “Instead of trying to produce a program to simulate the adult mind,” he wrote, “why not rather try to produce one which simulates the child’s?”

Now AI researchers are finally putting Turing’s ideas into action. They’re realizing that by paying attention to how children process information, they can pick up valuable lessons about how to create machines that learn. Read More

#human

Material found by scientists ‘could merge AI with human brain’

Scientists have discovered a ground-breaking bio-synthetic material that they claim can be used to merge artificial intelligence with the human brain.

The breakthrough, presented today at the American Chemical Society Fall 2020 virtual expo, is a major step towards integrating electronics with the body to create part human, part robotic “cyborg” beings. Read More

#human