Spread across four computer monitors arranged in a grid, a blue and green interface shows the location of more than 50 different surveillance cameras. Ordinarily, these cameras and others like them might be disparate, their feeds only available to their respective owners: a business, a government building, a resident and their doorbell camera. But the screens, overlooking a pair of long conference tables, bring them all together, allowing law enforcement to tap into cameras owned by different entities across the entire town at once.
This is a demonstration of Fusus, an AI-powered system that is rapidly springing up across small-town America and major cities alike. Fusus’ product not only funnels live feeds from usually siloed cameras into one central location, but also adds the ability to scan for people wearing certain clothes or carrying a particular bag, or to look for a certain vehicle. — Read More
Image of Palestinian carrying children out of rubble shows signs of AI
An image of a man carrying children through rubble has been shared tens of thousands of times in social media posts linking it to Israel’s bombing of the Gaza Strip, which the Hamas-run health ministry says has killed more than 3,700 children. But experts say the image shows signs of artificial intelligence — and it was not published by news organizations with photographers covering the war, which was triggered by a deadly Hamas attack on Israel. — Read More
Google DeepMind boss hits back at Meta AI chief over ‘fearmongering’ claim
The boss of Google DeepMind, Demis Hassabis, pushed back on a claim from Meta’s artificial intelligence chief that the company is pushing worries about AI’s existential threats to humanity to control the narrative on how best to regulate the technology.
In an interview with CNBC’s Arjun Kharpal, Hassabis said that DeepMind wasn’t trying to achieve “regulatory capture” in the discussion on how best to approach AI. It comes as DeepMind is closely advising the U.K. government on its approach to AI ahead of a pivotal summit on the technology due to take place on Wednesday and Thursday.
Over the weekend, Yann LeCun, Meta’s chief AI scientist, said that DeepMind’s Hassabis, along with OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, were “doing massive corporate lobbying” to ensure only a handful of big tech companies end up controlling AI. — Read More
Stopping Innovation is how companies are trying to get ahead in AI
Ben Thompson (Stratechery) discusses how big tech companies and AI researchers are lobbying the government to heavily regulate AI development, likely to lock in their market positions.
What’s going on here?
Large tech companies and AI labs are urging the government to regulate AI development in the name of safety. However, their calls for regulation align closely with their business interests, indicating an ulterior motive of stifling competition. — Read More
Regulating AI by Executive Order is the Real AI Risk
The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation.
This week President Biden released the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” as widely anticipated.
I wanted to offer some thoughts on this because, as a technologist, student of innovation, and executive who has long experienced the impact of regulation on innovation, I feel there is much to consider when seeing such an order and approach to technology innovation. — Read More
What the executive order means for openness in AI
The Biden-Harris administration has issued an executive order on artificial intelligence. It is about 20,000 words long and tries to address the entire range of AI benefits and risks. It is likely to shape every aspect of the future of AI, including openness: Will it remain possible to publicly release model weights while complying with the EO’s requirements? How will the EO affect the concentration of power and resources in AI? What about the culture of open research?
We cataloged the space of AI-related policies that might impact openness and grouped them into six categories. The EO includes provisions from all but one of these categories. Notably, it does not include licensing requirements. On balance, the EO seems to be good news for those who favor openness in AI.
But the devil is in the details. We will know more as agencies start implementing the EO. And of course, the EO is far from the only policy initiative worldwide that might affect AI openness. — Read More
Artists Lose First Round of Copyright Infringement Case Against AI Art Generators
Artists suing generative artificial intelligence art generators have hit a stumbling block in a first-of-its-kind lawsuit over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems, with a federal judge’s dismissal of most claims.
U.S. District Judge William Orrick on Monday found that copyright infringement claims cannot move forward against Midjourney and DeviantArt, concluding the accusations are “defective in numerous respects.” Among the issues are whether the AI systems they run on actually contain copies of copyrighted images that were used to create infringing works and if the artists can substantiate infringement in the absence of identical material created by the AI tools. Claims against the companies for infringement, right of publicity, unfair competition and breach of contract were dismissed, though they will likely be reasserted. — Read More
The Next Big Questions in AI Research with Andrew Ng
Introducing EdgeLLama – An Open Standard for Decentralized AI
We, the GPU poor, have come up with a peer-to-peer network design to enable running Mistral7B and other models, which will make AI use more free, both as in beer and as in speech. We believe in e/acc, and we want to make AI abundant. This is the moment in time when we start taking back control from the few powerful AI companies.
Right now, our AI use is a function of expensive monthly subscriptions and the rate and usage limits imposed by datacenter-cloud-run AI companies. This gives them the power to decide what we can prompt with and how much of AI we even have access to. The immense power they wield also imposes an emotional burden on them, and they are now appealing to the government to impose stifling regulations (a concept called “regulatory capture”; see @bgurley’s talk).
Well, we, a bunch of AI and open-network aficionados, want to make their lives easier and take that power away from them. Think BitTorrent from the early 2000s, when you could make your own computer available and effortlessly share files in an open network. The advent of that technology, which was used by over 100 million people running nodes on their home computers, imposed a forcing function on entertainment business models in general. Better user experiences emerged, providing unlimited access to top-tier content for insanely low fees. — Read More
Sweeping new Biden order aims to alter the AI landscape
The White House is poised to make an all-hands effort to impose national rules on a fast-moving technology, according to a draft executive order.
President Joe Biden will deploy numerous federal agencies to monitor the risks of artificial intelligence and develop new uses for the technology while attempting to protect workers, according to a draft executive order obtained by POLITICO.
The order, expected to be issued as soon as Monday, would streamline high-skilled immigration, create a raft of new government offices and task forces and pave the way for the use of more AI in nearly every facet of life touched by the federal government, from health care to education, trade to housing, and more. — Read More