Leo, Brave’s browser-native AI assistant, is now available in the Nightly version for testing

Brave Leo is a chat assistant hosted by Brave without the use of third-party AI services, available to Brave users on the desktop Nightly channel. The model behind Leo is Llama 2, a source-available large language model released by Meta with a special focus on safety. We’ve made sure that user inputs are always submitted anonymously through a reverse proxy to our inference infrastructure. In this way, Brave can offer an AI experience with unparalleled privacy.

We’ve specifically tuned the model prompt to adhere to Brave’s core values. However, as with any other LLM, the outputs of the model should be treated with care for potential inaccuracies or errors. — Read More
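The privacy claim above rests on the reverse-proxy design: the inference backend receives the prompt but never the user's identity. A minimal sketch of that idea, assuming a header-stripping step at the proxy — this is purely illustrative, not Brave's actual implementation, and all names here are hypothetical:

```python
# Hypothetical sketch: a reverse proxy strips identifying metadata before
# forwarding a chat request to the inference backend. NOT Brave's code.

IDENTIFYING_HEADERS = {
    "cookie", "authorization", "x-forwarded-for", "x-real-ip",
    "user-agent", "referer",
}

def anonymize_request(headers: dict, body: bytes) -> tuple[dict, bytes]:
    """Drop headers that could identify the user; pass the prompt through."""
    clean = {k: v for k, v in headers.items()
             if k.lower() not in IDENTIFYING_HEADERS}
    # Because the proxy terminates the client connection, the backend only
    # ever sees the proxy's own address, never the user's IP.
    return clean, body

# Example: the forwarded request keeps content headers but loses identity.
incoming = {
    "Content-Type": "application/json",
    "Cookie": "session=abc123",
    "X-Forwarded-For": "203.0.113.7",
}
forwarded, _ = anonymize_request(incoming, b'{"prompt": "hello"}')
```

The design choice is that anonymization happens at a network hop the user can verify is separate from the model host, rather than relying on the inference servers to discard identifying data after receiving it.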

#big7

AI Cameras Took Over One Small American Town. Now They’re Everywhere

Spread across four computer monitors arranged in a grid, a blue and green interface shows the location of more than 50 different surveillance cameras. Ordinarily, these cameras and others like them might be disparate, their feeds only available to their respective owners: a business, a government building, a resident and their doorbell camera. But the screens, overlooking a pair of long conference tables, bring them all together, allowing law enforcement to tap into cameras owned by different entities around the entire town at once.

This is a demonstration of Fusus, an AI-powered system that is rapidly springing up across small-town America and major cities alike. Fusus’ product not only funnels live feeds from usually siloed cameras into one central location, but also adds the ability to scan for people wearing certain clothes or carrying a particular bag, or to search for a certain vehicle. — Read More

#surveillance

Image of Palestinian carrying children out of rubble shows signs of AI

An image of a man carrying children through rubble has been shared tens of thousands of times in social media posts linking it to Israel’s bombing of the Gaza Strip, which the Hamas-run health ministry says has killed more than 3,700 children. But experts say the image shows signs of artificial intelligence — and it was not published by news organizations with photographers covering the war, which was triggered by a deadly Hamas attack on Israel. — Read More

#fake

Google DeepMind boss hits back at Meta AI chief over ‘fearmongering’ claim

The boss of Google DeepMind pushed back on a claim from Meta’s artificial intelligence chief that the company is playing up worries about AI’s existential threats to humanity in order to control the narrative on how best to regulate the technology.

In an interview with CNBC’s Arjun Kharpal, Demis Hassabis said that DeepMind wasn’t trying to achieve “regulatory capture” when it came to the discussion on how best to approach AI. It comes as DeepMind is closely advising the U.K. government on its approach to AI ahead of a pivotal summit on the technology due to take place on Wednesday and Thursday.

Over the weekend, Yann LeCun, Meta’s chief AI scientist, said that DeepMind’s Hassabis, along with OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, were “doing massive corporate lobbying” to ensure only a handful of big tech companies end up controlling AI. — Read More

#strategy

Stopping Innovation is how companies are trying to get ahead in AI

Ben Thompson (Stratechery) discusses how big tech companies and AI researchers are lobbying the government to heavily regulate AI development, likely to lock in their market positions.

What’s going on here?
Large tech companies and AI labs are urging the government to regulate AI development in the name of safety. However, their calls for regulation align closely with their business interests, indicating an ulterior motive of stifling competition. — Read More

#strategy

Regulating AI by Executive Order is the Real AI Risk

The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation.

This week President Biden released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” as widely anticipated.

I wanted to offer some thoughts on this because, as a technologist, student of innovation, and executive who has long experienced the impact of regulation on innovation, I feel there is much to consider when seeing such an order and approach to technology innovation. — Read More

#strategy

What the executive order means for openness in AI

The Biden-Harris administration has issued an executive order on artificial intelligence. It is about 20,000 words long and tries to address the entire range of AI benefits and risks. It is likely to shape every aspect of the future of AI, including openness: Will it remain possible to publicly release model weights while complying with the EO’s requirements? How will the EO affect the concentration of power and resources in AI? What about the culture of open research?

We cataloged the space of AI-related policies that might impact openness and grouped them into six categories. The EO includes provisions from all but one of these categories. Notably, it does not include licensing requirements. On balance, the EO seems to be good news for those who favor openness in AI.

But the devil is in the details. We will know more as agencies start implementing the EO. And of course, the EO is far from the only policy initiative worldwide that might affect AI openness. — Read More

#strategy