AI Lawyer: It’s Starting as a Stunt, but There’s a Real Need

People have a hard time getting help from lawyers. Advocates say AI could change that.

Next month, AI will enter the courtroom, and the US legal system may never be the same. 

An artificial intelligence chatbot — technology programmed to respond to questions and hold a conversation — is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. Each will wear a wireless earpiece that relays what the judge says to the chatbot, which is run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The earpiece will then play the chatbot's suggested responses to the judge's questions, which the individuals can choose to repeat in court. Read More

#chatbots, #legal

Company creates 2 artificial intelligence interns: ‘They are hustling and grinding’

Codeword created two interns to work in editorial and engineering.

Artificial intelligence isn’t just making inroads in technology. AI may soon stand in for human beings on the job, as one company suggests by creating two AI interns.

Kyle Monson, co-founder of the digital marketing company Codeword, appeared on ABC News’ daily podcast “Start Here” to talk about the creation of AI interns Aiden and Aiko, who will be assisting in editorial and engineering. Their creation comes amid the sensation around ChatGPT, the artificial intelligence-driven program that has gone viral for responding to user prompts with everything from Shakespearean verse to original poetry in its effort to mimic human interaction.

Monson spoke about the implications of these human-like digital hires and whether they could eventually displace human workers. Read More

#chatbots, #nlp

Abstracts written by ChatGPT fool scientists

Researchers cannot always differentiate between AI-generated and original abstracts.

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. Read More

#chatbots, #nlp

Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods

Advances in natural language generation (NLG) have resulted in machine-generated text that is increasingly difficult to distinguish from human-authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine-generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine-generated text detection methods to date. This survey places machine-generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability. Read More

#adversarial, #chatbots, #nlp

Elon Musk, Pikachu, God, and more are waiting to talk with you in Character AI

Thanks to Character AI, the science-fiction dream of collaborative interactions and open-ended dialogues with machines is becoming a reality. Although the service is still in beta, the results are outstanding.

You can chat with Elon Musk, learn English from Pikachu, or even talk to God!

… Character AI is a chatbot web application built on a neural language model that can produce text responses that sound like those of real people and engage in natural conversation. The beta model was created by Noam Shazeer and Daniel De Freitas, who had previously worked on Google’s LaMDA. It was released to the public in September 2022. Read More

#chatbots

ChatGPT is enabling script kiddies to write functional malware

For a beta, ChatGPT isn’t all that bad at writing fairly decent malware.

Since its beta launch in November, AI chatbot ChatGPT has been used for a wide range of tasks, including writing poetry, technical papers, novels, and essays; planning parties; and learning about new topics. Now we can add malware development and the pursuit of other types of cybercrime to the list.

Researchers at security firm Check Point Research reported Friday that within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. Read More

#chatbots, #cyber

People are already trying to get ChatGPT to write malware

Analysis of chatter on dark web forums shows that efforts are already under way to use OpenAI’s chatbot to help script malware.

The ChatGPT AI chatbot has generated plenty of excitement in the short time it has been available, and now it seems some are enlisting it in attempts to generate malicious code.

ChatGPT is an AI-driven natural language processing tool that interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays, and code. Read More

#chatbots, #cyber

Top AI conference bans ChatGPT in paper submissions (and why it matters)

A machine learning conference debating the use of machine learning? While that might seem meta, in its call for paper submissions on Monday, the International Conference on Machine Learning did, indeed, note that “papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”

It didn’t take long for a brisk social media debate to brew, in what may be a perfect example of what businesses, organizations, and institutions of all sizes, across verticals, will have to grapple with going forward: How will humans deal with the rise of large language models that can help communicate — or borrow, or expand on, or plagiarize, depending on your point of view — ideas? Read More

#chatbots

ChatGPT banned from New York City public schools’ devices and networks

A spokesperson for OpenAI, which developed ChatGPT, said it is “already developing mitigations to help anyone identify text generated by that system.”

New York City’s Department of Education announced a ban on the wildly popular chatbot ChatGPT — which some have warned could inspire more student cheating — from its schools’ devices and networks.

Jenna Lyle, a spokesperson for the department, said the decision to ban ChatGPT, which is able to generate conversational responses to text prompts, stemmed from concerns about the “negative impacts on student learning.”

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” Lyle said in an email statement. Read More

#chatbots

AI legal assistant will help defendant fight a speeding case in court

In February, an AI from DoNotPay is set to tell a defendant exactly what to say and when during an entire court case. It is likely to be the first-ever case defended by an artificial intelligence.

An artificial intelligence is set to advise a defendant in court for the first time ever. In February, the AI will run on a smartphone, listen to all speech in the courtroom, and instruct the defendant on what to say via an earpiece.

The location of the court and the name of the defendant are being kept under wraps by DoNotPay, the company that created the AI. But it is understood that the defendant is charged with speeding and will say only what DoNotPay’s tool tells them to via an earbud. The company considers the case a test and has agreed to pay any fines, should they be imposed, says the firm’s founder, Joshua Browder. Read More

#chatbots, #human, #legal