Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight

A rival chatbot has shaken Google out of its routine, with the founders who left three years ago re-engaging and more than 20 A.I. projects in the works.

Last month, Larry Page and Sergey Brin, Google’s founders, held several meetings with company executives. The topic: a rival’s new chatbot, a clever A.I. product that looked as if it could be the first notable threat in decades to Google’s $149 billion search business.

Mr. Page and Mr. Brin, who had not spent much time at Google since they left their daily roles with the company in 2019, reviewed Google’s artificial intelligence product strategy, according to two people with knowledge of the meetings who were not allowed to discuss them. They approved plans and pitched ideas to put more chatbot features into Google’s search engine. And they offered advice to company leaders, who have put A.I. front and center in their plans.

The re-engagement of Google’s founders, at the invitation of the company’s current chief executive, Sundar Pichai, emphasized the urgency felt among many Google executives about artificial intelligence and that chatbot, ChatGPT. Read More

Google’s Blog


#big7, #chatbots

What Happens When AI Has Read Everything?

The dream of an artificial mind may never become a reality if AI runs out of quality prose to ingest—and there isn’t much left.

Artificial intelligence has in recent years proved itself to be a quick study, although it is being educated in a manner that would shame the most brutal headmaster. Locked into airtight Borgesian libraries for months with no bathroom breaks or sleep, AIs are told not to emerge until they’ve finished a self-paced speed course in human culture. On the syllabus: a decent fraction of all the surviving text that we have ever produced.

When AIs surface from these epic study sessions, they possess astonishing new abilities. People with the most linguistically supple minds—hyperpolyglots—can reliably flip back and forth between a dozen languages; AIs can now translate between more than 100 in real time. They can churn out pastiche in a range of literary styles and write passable rhyming poetry. DeepMind’s Ithaca AI can glance at Greek letters etched into marble and guess the text that was chiseled off by vandals thousands of years ago. Read More

#nlp

Machine Learning AI Has Beat Chess, but Now It’s Close to Beating Physics-Based Sports Games as Well

A machine learning-based AI called Nexto is so supremely good at Rocket League that even top-tier players are having trouble in online matches.

Artificial intelligence has already beaten chess. Hell, the most sophisticated AI systems have beaten the world’s top players in the incredibly complicated game of Go.

But, in the uber-complicated car-based soccer game of Rocket League, can an AI do a boosted 360 aerial bicycle kick power shot from the midline? Can it pinch a ball off the side ramp so precisely it sails into the goal at 90 MPH? No, at least not yet, but AI can apparently dribble like a madman. It can fake out legitimately skilled players and score goals by flicking the ball off the hood and into the net. Read More

#robotics, #vfx

AI Claude Passes Law and Economics Exam

The Claude AI from Anthropic earned a marginal pass on a recent George Mason University (GMU) law and economics exam! Read More

… Claude was created using a technique Anthropic developed called “constitutional AI.” As the company explains in a recent Twitter thread, “constitutional AI” aims to provide a “principle-based” approach to aligning AI systems with human intentions, letting AI similar to ChatGPT respond to questions using a simple set of principles as a guide.

To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of “constitution” (hence the name “constitutional AI”). The principles haven’t been made public, but Anthropic says they’re grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice). Read More
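
Anthropic’s actual principles and training code are not public, but the idea described above, a model that drafts an answer and then critiques and revises it against a short list of principles, can be pictured roughly as in the sketch below. The principle wording, the generate() stand-in, and the loop structure are illustrative assumptions, not Anthropic’s implementation.

```python
# Rough sketch of a principle-guided critique-and-revise loop, in the spirit of
# the "constitutional AI" idea described above. The principle list and the
# generate() stand-in are hypothetical; Anthropic's actual principles and
# training pipeline have not been published.

PRINCIPLES = [
    "Be as helpful as possible to the person asking.",           # beneficence
    "Do not give advice that could cause harm.",                 # nonmaleficence
    "Respect the person's freedom to make their own choices.",   # autonomy
]


def generate(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    return f"[model output for: {prompt[:60]}...]"


def answer_with_constitution(question: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    answer = generate(question)
    for principle in PRINCIPLES:
        critique = generate(
            f"Principle: {principle}\nQuestion: {question}\n"
            f"Answer: {answer}\nCritique the answer against the principle."
        )
        answer = generate(
            f"Rewrite the answer to better satisfy the principle.\n"
            f"Critique: {critique}\nOriginal answer: {answer}"
        )
    return answer


print(answer_with_constitution("Is it legal to record a phone call?"))
```

In Anthropic’s published description, critique-and-revision passes like this are used to produce training data for the model rather than being run on every user question; the loop above is only meant to illustrate the “principles as a guide” idea from the excerpt.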

#chatbots

Lions and Tigers and Lawsuits, Oh my!

AI Art Generators Hit With Copyright Suit Over Artists’ Images

A group of artists is taking on AI generators Stability AI Ltd., Midjourney Inc., and DeviantArt Inc. in what would be a first-of-its-kind copyright infringement class action over using copyrighted images to train AI tools.

Sarah Andersen, author of the web comic “Sarah’s Scribbles,” along with fellow artists Kelly McKernan and Karla Ortiz, sued the AI companies in a purported class action that claims they downloaded and used billions of copyrighted images without obtaining the consent of or compensating any of the artists. Read More

Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content

Getty Images claims Stability AI ‘unlawfully’ scraped millions of images from its site. It’s a significant escalation in the developing legal battles between generative AI firms and content creators.

Getty Images is suing Stability AI, creators of popular AI art tool Stable Diffusion, over alleged copyright violation.

In a press statement shared with The Verge, the stock photo company said it believes that Stability AI “unlawfully copied and processed millions of images protected by copyright” to train its software and that Getty Images has “commenced legal proceedings in the High Court of Justice in London” against the firm. Read More

#legal, #image-recognition

AI Lawyer: It’s Starting as a Stunt, but There’s a Real Need

People have a hard time getting help from lawyers. Advocates say AI could change that.

Next month, AI will enter the courtroom, and the US legal system may never be the same. 

An artificial intelligence chatbot, technology programmed to respond to questions and hold a conversation, is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. The two will wear a wireless headphone, which will relay what the judge says to the chatbot being run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The headphone will then play the chatbot’s suggested responses to the judge’s questions, which the individuals can then choose to repeat in court.  Read More

#chatbots, #legal

AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers

From steam power and electricity to computers and the internet, technological advancements have always disrupted labor markets, pushing out some jobs while creating others. Artificial intelligence remains something of a misnomer – the smartest computer systems still don’t actually know anything – but the technology has reached an inflection point where it’s poised to affect new classes of jobs: artists and knowledge workers.

Specifically, the emergence of large language models – AI systems that are trained on vast amounts of text – means computers can now produce human-sounding written language and convert descriptive phrases into realistic images. The Conversation asked five artificial intelligence researchers to discuss how large language models are likely to affect artists and knowledge workers. And, as our experts noted, the technology is far from perfect, which raises a host of issues – from misinformation to plagiarism – that affect human workers. Read More

#artificial-intelligence, #strategy

GitHub Code Brushes uses ML to update code ‘like painting with Photoshop’

GitHub Next has unveiled a project called Code Brushes, which uses machine learning to update code “like painting with Photoshop”.

Using the feature, developers can “brush” over their code to see it update in real-time.

Several different brushes are included to achieve various aims. For example, one brush makes code more readable—especially important when coding as part of a team or contributing to open-source projects.

… Code Brushes also supports the creation of custom brushes. One example is a brush to make a form “more accessible” automatically. Read More
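
GitHub has not published the implementation behind Code Brushes, but the idea in the excerpt, a named “brush” applied to whatever code the developer paints over, can be thought of as a prompt template fed to a code model. The brush names, the complete_with_model() stand-in, and the custom “accessible” brush below are illustrative assumptions, not GitHub Next’s actual API.

```python
# Illustrative sketch only: a "brush" modeled as a prompt template that a code
# model applies to the selected span of code. Brush names and
# complete_with_model() are hypothetical, not GitHub Next's implementation.

BRUSHES = {
    "readable": "Rewrite this code so it is easier to read, without changing its behavior:",
    "comment":  "Add explanatory comments to this code:",
}


def complete_with_model(prompt: str) -> str:
    """Stand-in for a call to a code-generation model."""
    return f"# (model-rewritten code for: {prompt[:40]}...)"


def apply_brush(brush_name: str, selected_code: str) -> str:
    """Apply the named brush to the code the developer 'painted' over."""
    template = BRUSHES[brush_name]
    return complete_with_model(f"{template}\n\n{selected_code}")


# A custom brush is just another template registered at runtime,
# for example one aimed at making a form more accessible.
BRUSHES["accessible"] = "Update this form code so it is more accessible:"

print(apply_brush("readable", "def mx(a,b):return a if a>b else b"))
```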

#devops

Company creates 2 artificial intelligence interns: ‘They are hustling and grinding’

Codeword created two interns to work in editorial and engineering.

Artificial intelligence isn’t just making inroads in technology. Soon, AI may replace human beings in jobs, as evidenced by one company that has created two AI interns.

Kyle Monson, co-founder of the digital marketing company Codeword, appeared on ABC News’ daily podcast “Start Here” to talk about the creation of AI interns Aiden and Aiko, who will be assisting in editorial and engineering. Their creation comes amid the sensation of the artificial intelligence-driven program ChatGPT, which has gone viral for responding to user prompts, utilizing Shakespeare and poetry in its efforts to recreate human interaction.

Monson spoke about the implications of these digital hires that mirror humans and whether they could eventually replace human workers. Read More

#chatbots, #nlp

Abstracts written by ChatGPT fool scientists

Researchers cannot always differentiate between AI-generated and original abstracts.

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. Read More

#chatbots, #nlp