AI is finally useful for business, and everyone is likely underestimating its impact. But unless AI is open-source and truly owned by the end users, the future for everyone but the software providers looks grim.
The last time your author opined about the state of artificial intelligence I predicted that commercial success required two things: first, that AI researchers focus on solving a specific business problem, and second, that enough data exists for that specific business problem. The premise for this prediction was that researchers needed to develop an intuition of the business process involved so they could encode that intuition into their models. In other words, a general-purpose solution would not crack every business problem. This might have been true temporarily, but it is doomed to be wrong in the long run. I missed a recurring pattern in the history of AI: eventually, enough computational power wins. In the same way that chess engines which tried to encode heuristics about the game eventually lost to models with enough computational power, these AI models for “specific business problems” have all just lost to the 175 billion parameters of GPT-3.
I am not known for being overly bullish on technology, but I struggle to think of everyday sorts of business examples where such a large language model would not do well. It is true that in the above example the model did terribly on questions requiring basic arithmetic (converting rent per square foot per month to rent per square metre per year, for example), but dwelling on these limitations misses the point. Computers are known to be adequate arithmetic-performing machines (hence the name), and surely future models will correct this and other deficiencies. Artificial intelligence is now generally useful for business, and I am probably not thinking broadly enough about where it will end up.
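For what it is worth, the conversion the model stumbled on is straightforward arithmetic; here is a minimal sketch in Python (the rent figure is hypothetical, chosen only to illustrate the calculation):

```python
# Minimal sketch of the conversion the model got wrong.
# 1 square metre = 10.7639 square feet; 12 months in a year.
SQFT_PER_SQM = 10.7639

def rent_per_sqm_per_year(rent_per_sqft_per_month: float) -> float:
    """Convert a rent quoted in $/sqft/month into $/sqm/year."""
    return rent_per_sqft_per_month * SQFT_PER_SQM * 12

# Hypothetical figure for illustration: $3.50/sqft/month is roughly $452/sqm/year.
print(round(rent_per_sqm_per_year(3.50), 2))
```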
One decent guess, however, might be augmented intelligence – the idea that AI is best deployed as a tool to increase the power and productivity of human operators rather than replace them. Read More
ChatGPT for Robotics: Design Principles and Model Abilities
This paper presents an experimental study regarding the use of OpenAI’s ChatGPT [1] for robotics applications. We outline a strategy that combines design principles for prompt engineering with the creation of a high-level function library which allows ChatGPT to adapt to different robotics tasks, simulators, and form factors. We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies towards the execution of various types of robotics tasks. We explore ChatGPT’s ability to use free-form dialog, parse XML tags, and synthesize code, in addition to the use of task-specific prompting functions and closed-loop reasoning through dialogues. Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents. We show that ChatGPT can be effective at solving several such tasks, while allowing users to interact with it primarily via natural language instructions. In addition to these studies, we introduce an open-sourced research tool called PromptCraft, which contains a platform where researchers can collaboratively upload and vote on examples of good prompting schemes for robotics applications, as well as a sample robotics simulator with ChatGPT integration, making it easier for users to get started with using ChatGPT for robotics. Read More
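To make the “high-level function library” idea concrete, here is a minimal sketch of the pattern the abstract describes: the prompt exposes a small, named API of robot skills, and ChatGPT is asked to compose them into code that a human reviews before execution. The function names, prompt wording, and task below are hypothetical illustrations, not taken from the paper or from PromptCraft.

```python
# Hypothetical high-level skill library exposed to the language model.
# The model never touches low-level control; it only composes these calls.

def get_position(object_name: str) -> tuple[float, float, float]:
    """Return the (x, y, z) position of a named object in the scene."""
    raise NotImplementedError  # backed by the simulator or robot API

def fly_to(x: float, y: float, z: float) -> None:
    """Command the drone to fly to the given coordinates."""
    raise NotImplementedError

def land() -> None:
    """Land the drone at its current position."""
    raise NotImplementedError

PROMPT = """You control a drone only through these Python functions:
get_position(object_name), fly_to(x, y, z), land().
Write code that hovers one metre above the chimney, then lands."""

# A plausible model completion, to be run only after human review:
#   x, y, z = get_position("chimney")
#   fly_to(x, y, z + 1.0)
#   land()
```

The closed-loop part of the strategy then amounts to feeding execution results or user feedback back into the dialogue so the model can revise the code it wrote.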
I Made an AI Clone of Myself
I spent a day recording videos in front of a green screen and reading all types of scripts to create a digital clone of myself, using a platform called Synthesia, that can say anything I want her to.
In November, a company called Synthesia emailed Motherboard and offered “an exclusive date with your AI twin.”
“Hello, ever thought about creating your own digital twin? You’ve been invited to Synthesia’s New York studio to build your own virtual avatar, like me!” an AI clone of Synthesia spokesperson Laura Morelli said in a video embedded in the email. “Don’t miss out on learning more about the new sexy sector. Lock in your one-hour slot now to build your own avatar with Synthesia. Hurry now because spots are limited and filling up fast.” Read More
AI-created images lose U.S. copyrights in test for new technology
Images in a graphic novel that were created using the artificial-intelligence system Midjourney should not have been granted copyright protection, the U.S. Copyright Office said in a letter seen by Reuters.
“Zarya of the Dawn” author Kris Kashtanova is entitled to a copyright for the parts of the book Kashtanova wrote and arranged, but not for the images produced by Midjourney, the office said in its letter, dated Tuesday.
The decision is one of the first by a U.S. court or agency on the scope of copyright protection for works created with AI, and comes amid the meteoric rise of generative AI software like Midjourney, Dall-E and ChatGPT. Read More
Will Russian President Vladimir Putin use nuclear weapons in Ukraine? What ChatGPT thinks
NEW DELHI: When President Vladimir Putin ordered a full-scale invasion of Ukraine a year ago, most of the world expected Kyiv to fall within a few days and the superior Russian forces to prevail on the battlefield – similar to the Taliban’s lightning-quick takeover of Afghanistan.
But a resilient Kyiv rewrote Putin’s script by putting up a brave front and eventually pushing back the Russian forces with the help of Western aid.
… For now, the US thinks that Russia will not resort to using nuclear weapons.
…Amid the raging ‘will he, won’t he’ debate, we asked ChatGPT about the possibility of Putin using nuclear weapons in Ukraine and when the war is likely to end. Here’s what it said… Read More
From retail to transport: how AI is changing every corner of the economy
The high-profile race to enhance their search products has underscored the importance of artificial intelligence to Google and Microsoft – and to the rest of the economy, too. Two of the world’s largest tech companies announced plans for AI-enhanced search this month, ratcheting up a tussle for supremacy in the artificial intelligence space. However, the debut of Google’s new chatbot, Bard, was scuppered when an error appeared in a promotional demo, knocking $163bn (£137bn) off the market value of parent company Alphabet. The stock’s plunge showed how crucial investors think AI could be to Google’s future.
However, the increasing prominence of AI has implications for every corner of the economy. From retail to transport, here’s how AI promises to usher in a wave of change across industries. Read More
AI #1: Sydney and Bing
Recent AI-related posts: Jailbreaking ChatGPT on Release Day, Next Level Seinfeld, Escape Velocity From Bullshit Jobs, Movie Review: Megan, On AGI Ruin: A List of Lethalities.
Microsoft and OpenAI released the chatbot Sydney as part of the search engine Bing. It seems to sometimes get more than a little bit unhinged. A lot of people are talking about it. A bunch of people who had not previously freaked out are now freaking out.
This is an attempt to be a roundup of Sydney and the AI-related events of the past week. Read More
Is ChatGPT the future of cheating or the future of teaching?
ChatGPT, the cutting-edge chatbot from OpenAI that was released in November 2022, can solve math equations, write a history term paper, compose a sonnet, and do almost everything in between. So it’s not surprising that many educators support banning the chatbot in schools to prevent plagiarism, cheating and just plain inaccuracy.
In response to these concerns, some major districts have banned the chatbot in schools. In December, the Los Angeles Unified School District “preemptively” blocked access to ChatGPT while “a risk/benefit assessment is conducted,” a district spokesperson told the Washington Post. And in January, New York City Public Schools banned access to ChatGPT from devices and networks that the school owns, per the Washington Post. A spokesperson for the NYC Department of Education told Chalkbeat that the decision was made “due to concerns about negative impacts on student learning and concerns regarding the safety and accuracy of content.”
But not everyone is on board with a complete ban — some in the education world say that instead of banning it, schools should teach kids how to use it smartly and fairly, and that it could then become a beneficial educational tool. Read More
Sci-fi publisher Clarkesworld halts pitches amid deluge of AI-generated stories
Founding editor says 500 pitches rejected this month and their ‘authors’ banned, as influencers promote ‘get rich quick’ schemes
One of the most prestigious publishers of science fiction short stories has closed itself to submissions after a deluge of AI-generated pitches overwhelmed its editorial team.
… In a typical month, the magazine would receive 10 or so submissions deemed to have plagiarised other authors, he wrote in a blogpost. But since the release of ChatGPT last year pushed AI language models into the mainstream, the rate of rejections has rocketed. Read More
Could Big Tech be liable for generative AI output?
In a surprise moment during today’s Supreme Court hearing about a Google case that could impact online free speech, Justice Neil M. Gorsuch touched upon potential liability for generative AI output, according to Will Oremus at the Washington Post.
In the Gonzalez v. Google case in front of the Court, the family of an American killed in a 2015 ISIS terrorist attack in Paris argued that Google and its subsidiary YouTube did not do enough to remove or stop promoting ISIS terrorist videos seeking to recruit members. According to attorneys representing the family, this violated the Anti-Terrorism Act.
In lower court rulings, Google won with the argument that Section 230 of the Communications Decency Act shields it from liability for what its users post on its platform. Read More