Homework Machine Hand Writes AI-Generated Assignments

Devadath’s homework machine generates text in the user’s own handwriting for more convincing penmanship.

I believe that laziness should be encouraged in many situations. Hardworking people will spend hours laboring on a project, but lazy people will find clever ways to achieve the same result with minimal effort. Laziness gave us tools, machines, computers, and ChatGPT. If you’re a lazy student, then ChatGPT is a tempting solution for essay assignments. But most teachers don’t share my enlightened principles, so they require that students write out their essays by hand in order to thwart ChatGPT submissions. To give those students a viable workaround, Devadath P R designed a homework machine that handwrites ChatGPT essays convincingly.

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. — Read More

#chatbots

On-boarding your AI Intern

There’s a somewhat weird alien who wants to work for free for you. You should probably get started.

Let’s get to work.

In previous posts, I have made the argument that, for a variety of reasons, it is better to think of AI as a person (even though it isn’t) than a piece of software. In fact, perhaps one of the most interesting aspects of our current AI moment is that several billion people just got free interns. They are weird, somewhat alien interns that work infinitely fast and sometimes lie to make you happy, but interns nonetheless.

So, how can you figure out how to best use your intern? — Read More

#chatbots

‘Low Background’ Content

and the possibility of a self-referential AI death spiral …

One of the unexpected side-effects of humanity’s entry into the nuclear age was a scramble for so-called ‘low-background’ steel. After the bombings of Hiroshima and Nagasaki and prolific atmospheric nuclear testing, new radioactive elements filled our atmosphere. As a result, due to the air injection process in steelmaking, any steel made after the summer of 1945 had an increased radioactive signature. For most uses, like cars or buildings, this didn’t matter. But for certain sensitive scientific and medical equipment, steel’s radioactivity became a real issue. Thus arose a market need for steel produced in the less-radioactive atmosphere before 1945—low-background steel. Interestingly, a big source of this important resource came from the enthusiastic scrapping of sunken battleships, including the scuttled WWI German fleet.

I mention this because we are now entering another new age—the age of AI. The story of low-background steel came to mind recently as I started working with AI/ML companies in my consulting business. Like anybody with a healthy sense of self-preservation, I’ve been immersing myself in this extraordinary, fast-moving revolution. I lived through several previous ones (personal computer, internet, smartphone), but I don’t remember any of them moving quite this fast. It’s obviously going to change just about every aspect of our lives. And the more I delve into the mechanics of Large Language Models and generative AI—and the more I watch AI’s light-speed propagation into daily life—the more I wonder if we’ve crossed a line (in roughly the spring of 2022) where any content that existed before that moment should be considered “low-background content.” That is to say, content that was certifiably created by actual human beings rather than AI. Everything after should be considered suspect. — Read More

#chatbots

ChatGPT Plugins Mega Guide

Read More

#chatbots

Open-Source AI Is Gaining on Google and ChatGPT

In February, Meta Platforms set off an explosion of artificial intelligence development when it gave academics access to sophisticated machine-learning models that can understand conversational language. Within weeks, the academics turned those models into open-source software that powered free alternatives to ChatGPT and other proprietary AI software.

Free AI models are now “reasonably close” in performance to proprietary models from Google and ChatGPT creator OpenAI, and most software developers will eventually opt to use the free ones, said Ion Stoica, a professor of computer science at University of California, Berkeley, who helped develop a key open-source AI model using Meta’s technology. — Read More

#chatbots

Google’s Sundar Pichai talks Search, AI, and dancing with Microsoft

AI is one of the deepest platform shifts ever, says Google’s CEO, and he’s not worried about being first.

Sundar Pichai is the CEO of Google and Alphabet. We spoke the day after Google I/O, the company’s big developer conference, where Sundar introduced new generative AI features in virtually all of the company’s products.

It’s an important moment for Google, which invented a lot of the core technology behind the current AI moment. The company is very quick to point out that the “T” in ChatGPT stands for transformer, the large language model technology first invented at Google, but OpenAI and others have been first to market with generative AI products, and OpenAI has partnered with Microsoft on a new version of Bing that feels like the first real competitor to Google Search in a long time. I wanted to know what Sundar thinks of this moment and, in particular, what he thinks of the future of Search, which is the heart of Google’s business. — Read More

#big7, #chatbots

ChatGPT vs. Bard: A realistic comparison

Let’s see how Bard does vs. ChatGPT, without preconceptions or hype. One person’s totally unscientific, anecdotal, but realistic field experiment.

… This is not a scientific study, clearly. Once upon a time, I enjoyed doing controlled, in-depth, technical comparisons of ML models, but those days are past. In this post, I’m going to take about an hour to explore a few use cases, make a decision, and move on to the rest of my long to-do list. — Read More

#chatbots

Anthropic Claude: Introducing 100K Context Windows

We’ve expanded Claude’s context window from 9K to 100K tokens, corresponding to around 75,000 words! This means businesses can now submit hundreds of pages of materials for Claude to digest and analyze, and conversations with Claude can go on for hours or even days.

The average person can read 100,000 tokens of text in ~5+ hours, and then they might need substantially longer to digest, remember, and analyze that information. Claude can now do this in less than a minute. For example, we loaded the entire text of The Great Gatsby into Claude-Instant (72K tokens) and modified one line to say Mr. Carraway was “a software engineer that works on machine learning tooling at Anthropic.” When we asked the model to spot what was different, it responded with the correct answer in 22 seconds. — Read More
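The “100K tokens ≈ 75,000 words ≈ 5 hours of reading” claim is easy to sanity-check. A minimal sketch of that back-of-the-envelope arithmetic, assuming the common rules of thumb of roughly 0.75 English words per token and a reading speed of about 250 words per minute (neither figure comes from the announcement itself):

```python
# Back-of-the-envelope check of the 100K-token reading-time claim.
# Both constants are rules of thumb, not figures from Anthropic's post.
TOKENS = 100_000
WORDS_PER_TOKEN = 0.75     # rough words-per-token ratio for English text
READING_SPEED_WPM = 250    # typical adult reading speed, words per minute

words = TOKENS * WORDS_PER_TOKEN                  # ~75,000 words
reading_hours = words / READING_SPEED_WPM / 60    # ~5 hours

print(f"{words:,.0f} words, about {reading_hours:.1f} hours to read")
```

With these assumptions the numbers line up: 75,000 words at 250 wpm is 300 minutes, or 5 hours, matching the post’s estimate.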

#chatbots

Enter PaLM 2 (New Bard): Full Breakdown – 92 Pages Read and Gemini Before GPT 5? Google I/O

Read More

#chatbots, #videos

Can AI actually write good fanfiction?


Since artificial intelligence-powered text-generation tools were made widely available to the public in the past few months, they’ve been heralded by some as the future of email, internet search, and content generation. But these AI-powered tools also have some clear shortcomings: They tend to be incorrect, and often generate answers that reinforce racial biases, for example. There are also serious ethical concerns about their unspecified training data.

It is not surprising that debates over using these tools have also been happening in fandom spaces. Excited fans almost immediately turned to them as a new way of exploring their favorite characters. With the right prompt, AI can spit out a few paragraphs of fic-like writing. But just as quickly, many fanfic writers began to speak out against the practice. — Read More

#nlp, #chatbots