Sam Hammond of the Foundation for American Innovation published his 95 Theses on AI last week. I believe that this post, like some of Hammond’s other writing, suffers from misplaced negativity and overconfidence in some of its assertions (biology, for example, is always more complicated than you think). …[O]ne of the theses, on regulatory approaches to AI, deserves greater attention:
The dogma that we should only regulate technologies based on “use” or “risk” may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches (see: the EU AI Act)
Zvi Mowshowitz picked up on this too: …”When you regulate ‘use’ or ‘risk’ you need to check on everyone’s ‘use’ of everything, and you make a lot of detailed micro interventions, and everyone has to file lots of paperwork and do lots of dumb things, and the natural end result is universal surveillance and a full ‘that which is not compulsory is forbidden’ regime across much of existence.”
… This is a serious misunderstanding. — Read More
The AI Power Paradox
Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?
It’s 2035, and artificial intelligence is everywhere. AI systems run hospitals, operate airlines, and battle each other in the courtroom. Productivity has spiked to unprecedented levels, and countless previously unimaginable businesses have scaled at blistering speed, generating immense advances in well-being. New products, cures, and innovations hit the market daily, as science and technology kick into overdrive. And yet the world is growing both more unpredictable and more fragile, as terrorists find new ways to menace societies with intelligent, evolving cyberweapons and white-collar workers lose their jobs en masse.
Just a year ago, that scenario would have seemed purely fictional; today, it seems nearly inevitable. Generative AI systems can already write more clearly and persuasively than most humans and can produce original images, art, and even computer code based on simple language prompts. And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.
Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. — Read More
Seven AI companies commit to safeguards at the White House’s request
Microsoft, Google, Meta and OpenAI pledge to abide by certain measures.
Microsoft, Google and OpenAI are among the leaders in the US artificial intelligence space that have committed to certain safeguards for their technology, following a push from the White House. The companies will voluntarily agree to abide by a number of principles, though the agreement will expire when Congress passes legislation to regulate AI.
The Biden administration has placed a focus on making sure that AI companies develop the technology responsibly. Officials want to make sure tech firms can innovate in generative AI in a way that benefits society without negatively impacting the safety, rights and democratic values of the public. — Read More
Brookings Institution — AI Governance
Artificial intelligence, machine learning, and data analytics are upending everything from education and transportation to health care and finance. In this series led by Governance Studies Vice President Darrell West, scholars from inside and outside Brookings will identify key governance and norm issues related to AI and propose policy remedies to address the complex challenges associated with emerging technologies. Read More
DeepCode taps AI for code reviews
By leveraging artificial intelligence to help clean up code, DeepCode aims to become to programming what writing assistant Grammarly is to written communications.
Likened to a spell checker for developers, DeepCode’s cloud service reviews code and provides alerts about critical vulnerabilities, with the intent of stopping security bugs from making it into production. The goal is to enable safer, cleaner code and deliver it faster. Read More
The Seven Patterns Of AI
Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation. Read More
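To make the framework concrete, here is a minimal sketch of what a model card might look like in code. This is an illustration only: the class and field names (`ModelCard`, `intended_use`, `disaggregated_metrics`, `worst_group`) are my own assumptions, not the paper's schema, but the sections mirror the ones the abstract describes, including intended use, evaluation procedure, and benchmarked metrics disaggregated across groups.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card. Field names are assumptions,
    loosely following the sections described in the paper."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_procedure: str = ""
    # Metrics disaggregated across demographic or other groups, e.g.
    # {"accuracy": {"overall": 0.91, "group_a": 0.94, "group_b": 0.85}}
    disaggregated_metrics: dict = field(default_factory=dict)

    def worst_group(self, metric: str):
        """Return the (group, score) pair with the lowest score for a
        metric -- a quick check for performance disparities."""
        groups = self.disaggregated_metrics.get(metric, {})
        return min(groups.items(), key=lambda kv: kv[1]) if groups else None

card = ModelCard(
    model_name="smile-detector-v1",
    intended_use="Detect smiling faces in consumer photos",
    out_of_scope_uses=["emotion recognition", "surveillance"],
    evaluation_procedure="Held-out test set, accuracy per group",
    disaggregated_metrics={
        "accuracy": {"overall": 0.91, "group_a": 0.94, "group_b": 0.85},
    },
)
print(card.worst_group("accuracy"))  # ('group_b', 0.85)
```

The point of disaggregated reporting is visible in the last line: an overall accuracy of 0.91 can hide a meaningfully lower score for one group, which a model card surfaces before deployment.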
Datasheets for Datasets
The machine learning community currently has no standardized process for documenting datasets. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating characteristics, test results, recommended uses, and other information. By analogy, we propose that every dataset be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability. Read More
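By analogy with the model-card sketch above, a datasheet could be represented the same way. Again, the names here (`Datasheet`, `is_documented`) are hypothetical, not a standardized schema; the fields simply mirror the sections the abstract lists: motivation, composition, collection process, and recommended uses.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Illustrative datasheet for a dataset, mirroring the
    electronics-datasheet analogy. Field names are assumptions."""
    dataset_name: str
    motivation: str = ""
    composition: dict = field(default_factory=dict)  # e.g. counts by type
    collection_process: str = ""
    recommended_uses: list = field(default_factory=list)

    def is_documented(self) -> bool:
        """True only if every core section is filled in -- a release
        could be gated on this kind of completeness check."""
        return all([self.motivation, self.composition,
                    self.collection_process, self.recommended_uses])

sheet = Datasheet(
    dataset_name="toy-faces",
    motivation="Benchmark smile detection",
    composition={"images": 10_000, "classes": 2},
    collection_process="Photos collected with subjects' consent",
    recommended_uses=["academic benchmarking"],
)
print(sheet.is_documented())  # True
```

A completeness check like `is_documented()` is one way dataset consumers could mechanically verify that a release ships with the documentation the datasheet proposal calls for.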
Artificial intelligence is no silver bullet for governance
There is considerable interest among policymakers and scientists around the world in how artificial intelligence is going to transform their work. In their haste to jump on the AI bandwagon, however, everybody is forgetting that we have not solved some older, deeper problems about data that will stymie attempts to get the technology off the ground. Read More
