This week, Forbes reported that a Russian spyware company called Social Links has begun using ChatGPT to conduct sentiment analysis: the creepy practice by which cops and spies collect and analyze social media data to gauge how web users feel about things. Sentiment analysis is one of the sketchier use cases yet to emerge for the chatbot.
Social Links, which was previously kicked off Meta’s platforms for alleged surveillance of users, showed off its unconventional use of ChatGPT at a security conference in Paris this week. The company weaponized the chatbot’s text-summarization and analysis abilities to trawl through large chunks of data and digest them quickly. — Read More
Monthly Archives: November 2023
AI Hallucinations
Sam Altman, Mira Murati, Emmett Shear — 3 CEOs in 3 Days
Amazon will host free ‘AI Ready’ courses in an effort to boost the AI talent pool
OpenAI may grab all the headlines, but Amazon has been quietly toiling on AI across all its divisions and even using AI-powered robots in its warehouses. Now, in a bid to expand the AI talent pool, the company is launching a free program called “AI Ready,” with the aim of providing generative AI training to two million people globally by 2025.
The program consists of eight free courses, available through Amazon’s learning website and open to non-Amazon employees as well. They’ll teach AI skills, including the generative AI technology that powers ChatGPT and other language models. — Read More
Microsoft hires former OpenAI CEO Sam Altman
Microsoft is hiring former OpenAI CEO Sam Altman and co-founder Greg Brockman.
Altman was fired from OpenAI on Friday, after the board said it “no longer has confidence in his ability to continue leading OpenAI.” After a weekend of negotiations to potentially bring Altman back to OpenAI, Microsoft CEO Satya Nadella announced that both Altman and Brockman will join Microsoft to lead a new advanced AI research team. — Read More
NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities
We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). Our novel system demonstrates success in an expansive array of 20 challenging, everyday household activities, including cooking, cleaning, personal care, and entertainment. The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing for NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication. Project website: this https URL. — Read More
Generative AI Passes the Legal Ethics Exam in Study by LegalOn Technologies
In a groundbreaking development, researchers at LegalOn Technologies have demonstrated that both OpenAI’s GPT-4 and Anthropic’s Claude 2 can pass the legal ethics exam, a test nearly all US lawyers are required to pass, alongside the bar exam. This milestone underscores the potential for AI to assist lawyers in legal work and demonstrates the increasingly advanced capabilities of large language models applied to law.
Earlier this year, research found that the generative AI model GPT-4 could surpass law students in passing the Uniform Bar Examination. LegalOn’s study extends this discovery, revealing that these models can also navigate complex rules and fact patterns around professional responsibility. — Read More
Exclusive poll: AI is already great at faking video and audio, experts say
Nearly every respondent (95%) in a new Axios-Generation Lab-Syracuse University AI Experts Survey described AI’s audio and video deepfake capabilities as “advanced.”
Driving the news: 68% said the capabilities are moderately advanced; 27% said they are highly advanced. — Read More
Domain Adaptation of A Large Language Model
Large language models (LLMs) like BERT are usually pre-trained on general-domain corpora like Wikipedia and BookCorpus. If we apply them to more specialized domains like medicine, there is often a drop in performance compared to models adapted for those domains.
In this article, we will explore how to adapt a pre-trained LLM like DeBERTa base to the medical domain using the HuggingFace Transformers library. Specifically, we will cover an effective technique called intermediate pre-training, where we do further pre-training of the LLM on data from our target domain. This adapts the model to the new domain and improves its performance.
This is a simple yet effective technique to tune LLMs to your domain and gain significant improvements in downstream task performance. — Read More
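Intermediate pre-training of a model like DeBERTa typically means continuing the masked-language-modeling (MLM) objective on domain text. As a dependency-free illustration of the core data-preparation step, here is a sketch of BERT-style masking with a toy vocabulary; the token list, mask rate, and function name are assumptions for this sketch, not taken from the article:

```python
import random

# Toy stand-ins: in a real run these come from the DeBERTa tokenizer
# and the medical-domain corpus (both assumptions of this sketch).
MASK_TOKEN = "[MASK]"
TOY_VOCAB = ["the", "patient", "was", "given", "aspirin", "twice", "daily"]

def mlm_mask(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking used during MLM-based intermediate pre-training.

    Each token is selected with probability `mask_prob`; of the selected
    tokens, 80% become [MASK], 10% become a random vocabulary token, and
    10% stay unchanged. `labels` holds the original token at selected
    positions and None elsewhere (HuggingFace uses -100 rather than None,
    so the loss ignores unselected positions).
    """
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must predict the original token here
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = MASK_TOKEN
            elif roll < 0.9:
                inputs[i] = rng.choice(TOY_VOCAB)
            # else: keep the original token, but still predict it
    return inputs, labels

sentence = TOY_VOCAB * 4  # pretend this is tokenized domain text
masked, targets = mlm_mask(sentence)
```

In the full recipe the article describes, this masking step would typically be handled by Transformers’ `DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)`, with the model loaded via `AutoModelForMaskedLM` and further pre-trained on the domain corpus using `Trainer` before fine-tuning on the downstream task.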
Exploring GPTs: ChatGPT in a trench coat?
The biggest announcement from last week’s OpenAI DevDay (and there were a LOT of announcements) was GPTs. Users of ChatGPT Plus can now create their own, custom GPT chat bots that other Plus subscribers can then talk to.
My initial impression of GPTs was that they’re not much more than ChatGPT in a trench coat—a fancy wrapper for standard GPT-4 with some pre-baked prompts.
Now that I’ve spent more time with them I’m beginning to see glimpses of something more than that. The combination of features they provide can add up to some very interesting results. — Read More
500 chatbots read the news and discussed it on social media. Guess how that went.
On a simulated day in July of a 2020 that didn’t happen, 500 chatbots read the news — real news, our news, from the real July 1, 2020. ABC News reported that Alabama students were throwing “COVID parties.” On CNN, President Donald Trump called Black Lives Matter a “symbol of hate.” The New York Times had a story about the baseball season being canceled because of the pandemic.
Then the 500 robots logged into something very much (but not totally) like Twitter, and discussed what they had read. Meanwhile, in our world, the not-simulated world, a bunch of scientists were watching. — Read More