Robots learn to perform chores by watching YouTube

Learning has been a holy grail in robotics for decades. If these systems are going to thrive in unpredictable environments, they’ll need to do more than just respond to programming — they’ll need to adapt and learn. What’s become clear the more I read and speak with experts is that true robotic learning will require a combination of many approaches.

Video is an intriguing solution that’s been the centerpiece of a lot of recent work in the space. Roughly this time last year, we highlighted WHIRL (in-the-Wild Human Imitating Robot Learning), a CMU-developed algorithm designed to train robotic systems by watching a recording of a human executing a task.

This week, CMU Robotics Institute assistant professor Deepak Pathak is showcasing VRB (Vision-Robotics Bridge), an evolution of WHIRL. — Read More

#observational-learning, #robotics

AI and Moore’s Law: It’s the Chips, Stupid

Moore’s Law, which began with an offhand observation by the late Intel co-founder Gordon Moore that transistor densities on silicon substrates were doubling every 18 months, has over the intervening 60+ years been borne out, yet it has also shifted from a lithography benchmark into an economic law. It’s getting harder to etch ever-thinner lines, so we’ve taken, as a culture, to emphasizing the cost side of Moore’s Law: chips drop in price by 50 percent on an area basis (dollars per acre of silicon) every 18 months. We accomplish this economic effect through a variety of techniques, including multiple cores, system-on-chip design, and unified memory — anything to keep prices going down.
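The cost version of Moore’s Law stated above is just exponential decay with an 18-month half-life. A minimal sketch of that arithmetic (the starting price and time points are made-up illustration values, not figures from the article):

```python
def cost_per_area(initial_cost: float, years: float, halving_years: float = 1.5) -> float:
    """Cost per unit area of silicon after `years` have elapsed,
    assuming the price halves every `halving_years` (18 months)."""
    return initial_cost * 0.5 ** (years / halving_years)

if __name__ == "__main__":
    # Hypothetical starting cost of 100 (arbitrary units) per area.
    for y in (0.0, 1.5, 3.0, 6.0):
        print(f"after {y:4.1f} years: {cost_per_area(100.0, y):6.2f}")
```

Under that assumption, cost falls to a quarter in three years and to 1/16 in six — the kind of curve the piece argues the industry now sustains by architectural tricks rather than by thinner lines alone.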

I predict that Generative Artificial Intelligence is going to go a long way toward keeping Moore’s Law in force and the way this is going to happen says a lot about the chip business, global economics, and Artificial Intelligence, itself. — Read More

#strategy, #china-vs-us

The people paid to train AI are outsourcing their work… to AI

A significant proportion of people paid to train AI models may themselves be outsourcing that work to AI, a new study has found.

It takes an incredible amount of data to train AI systems to perform specific tasks accurately and reliably. Many companies pay gig workers on platforms like Mechanical Turk to complete tasks that are typically hard to automate, such as solving CAPTCHAs, labeling data and annotating text. This data is then fed into AI models to train them. The workers are poorly paid and are often expected to complete lots of tasks very quickly. 

No wonder some of them may be turning to tools like ChatGPT to maximize their earning potential. But how many? — Read More

#strategy, #training

AI could shore up democracy – here’s one way

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?

You’d be forgiven if you’re distraught about society’s ability to grapple with this new technology. So far, there’s no lack of prognostications about the democratic doom that AI may wreak on the U.S. system of government. There are legitimate reasons to be concerned that AI could spread misinformation, break public comment processes on regulations, inundate legislators with artificial constituent outreach, help to automate corporate lobbying, or even generate laws in a way tailored to benefit narrow interests.

But there are reasons to feel more sanguine as well. — Read More

#strategy

Inflection debuts its own foundation AI model to rival Google and OpenAI LLMs

Inflection, a well-funded AI startup aiming to create “personal AI for everyone,” has taken the wraps off the large language model powering its Pi conversational agent. It’s hard to evaluate the quality of these things in any way, let alone objectively and systematically, but a little competition is a good thing.

Inflection-1, as the model is called, is roughly GPT-3.5 (AKA ChatGPT) in size and capabilities — as measured by the computing power used to train them. The company claims it is competitive with or superior to other models in this tier, backing that up with a “technical memo” describing some benchmarks it ran on its model, GPT-3.5, LLaMA, Chinchilla, and PaLM-540B. — Read More

#chatbots

Do Foundation Model Providers Comply with the Draft EU AI Act?

Foundation models like ChatGPT are transforming society with their remarkable capabilities, serious risks, rapid deployment, unprecedented adoption, and unending controversy. Simultaneously, the European Union (EU) is finalizing its AI Act as the world’s first comprehensive regulation to govern AI, and just yesterday the European Parliament adopted a draft of the Act by a vote of 499 in favor, 28 against, and 93 abstentions. The Act includes explicit obligations for foundation model providers like OpenAI and Google.

In this post, we evaluate whether major foundation model providers currently comply with these draft requirements and find that they largely do not. — Read More

#legal