This story is the introduction to MIT Technology Review’s series on AI colonialism, which was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. You can read part one here.
…The AI industry does not seek to capture land as the conquistadors of the Caribbean and Latin America did, but the same desire for profit drives it to expand its reach. The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources—data—it can harvest from their activities, their movements, and even their bodies.
Neither does the industry still exploit labor through mass-scale slavery, which necessitated the propagation of racist beliefs that dehumanized entire populations. But it has developed new ways of exploiting cheap and precarious labor, often in the Global South, shaped by implicit ideas that such populations don’t need—or are less deserving of—livable wages and economic stability.
MIT Technology Review’s new AI Colonialism series, which will be published throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid. Read More
Trends in AI—March 2022
A monthly selection of ML papers by Zeta Alpha: Audio generation, Gradients without Backprop, Mixture of Experts, Multimodality, Information Retrieval, and more. Read More
Here’s How an Algorithm Guides a Medical Decision
Artificial intelligence algorithms are everywhere in healthcare. They sort through patients’ data to predict who will develop medical conditions like heart disease or diabetes, they help doctors figure out which people in an emergency room are the sickest, and they screen medical images to find evidence of diseases. But even as AI algorithms become more important to medicine, they’re often invisible to people receiving care.
To help demystify the AI tools used in medicine today, we’re going to break down the components of one specific algorithm and see how it works. We picked an algorithm that flags patients in the early stages of sepsis — a life-threatening complication from an infection that results in widespread inflammation through the body. It can be hard for doctors to identify sepsis because the signs are subtle, especially early on, so it’s a common target for artificial intelligence-based tools. This particular program also uses mathematical techniques, like neural networks, that are typical of medical algorithms. Read More
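To make the idea concrete, here is a minimal sketch of the kind of model described above: a small neural network that turns a handful of vital signs into a 0-to-1 risk score and raises a flag above a threshold. The feature set, architecture, and alert threshold are illustrative assumptions, not the actual clinical algorithm, and the untrained model here produces a meaningless score; a real system would be trained on labeled patient records.

```python
import torch
import torch.nn as nn

class SepsisRiskNet(nn.Module):
    """Toy sepsis-risk scorer: vitals in, probability-like score out."""
    def __init__(self, n_features: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Sigmoid(),  # squash output to a 0-1 risk score
        )

    def forward(self, x):
        return self.net(x)

model = SepsisRiskNet()  # untrained here; real systems train on patient records
# Hypothetical vitals: heart rate, temperature, respiratory rate, WBC count, lactate
vitals = torch.tensor([[112.0, 38.9, 24.0, 14.2, 2.8]])
risk = model(vitals).item()
if risk > 0.7:  # the alert threshold would be tuned clinically
    print(f"Flag patient for sepsis review (risk={risk:.2f})")
```

The key design point is the output: rather than a hard diagnosis, the network emits a continuous score, and the hospital decides where to set the alert threshold, trading off false alarms against missed cases.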
AI Researchers Portal
The National AI Initiative Office’s official site connecting AI researchers to federal resources that can support their work, from grant funding and datasets to computing and testbeds. Read More
America Needs AI Literacy Now
Can artificial intelligence (AI) replace a doctor in the operating room? Are some AI algorithms inherently biased, or are they merely trained on biased data? If you’re not sure about the answers to these questions, you are not alone. We recently conducted a national survey of 1,547 US adults with Echelon Insights, including a twenty-question ‘True/False/Don’t Know’ quiz, and found that most Americans are remarkably ill-informed about AI. Only 16% of participants “passed” the test (scoring above 60%), indicating that the majority of Americans are AI illiterate. Read More
Beethoven’s last symphony finished with the help of artificial intelligence
Has AI found a new Foundation?
In August, 32 faculty and 117 research scientists, postdocs, and students at Stanford University, long one of the biggest players in AI, declared that there has been a “sweeping paradigm shift in AI”. They coined a new term, “Foundation Models,” to characterize the new paradigm, joined forces in a “Center for Research on Foundation Models”, and published the massive 212-page report “On the Opportunities and Risks of Foundation Models.”
Although the term is new, the general approach is not. You train a big neural network (like the well-known GPT-3) on an enormous amount of data, and then you adapt (“fine-tune”) the model to a bunch of more specific tasks (in the words of the report, “a foundation model …[thus] serves as [part of] the common basis from which many task-specific models are built via adaptation”). The basic model thus serves as the “foundation” (hence the term) of AIs that carry out more specific tasks. The approach started to gather momentum in 2018, when Google developed the natural language processing model called BERT, and it became even more popular with the introduction last year of OpenAI’s GPT-3. Read More
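In code, the pretrain-then-adapt workflow the report describes looks roughly like the sketch below: load a large pretrained model (BERT here) and fine-tune it on a narrow labeled task. The choice of IMDB sentiment data, the subset size, and the hyperparameters are convenient assumptions for illustration, not anything prescribed by the report.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# The "foundation": a large model pretrained on broad data.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Any labeled task-specific corpus could stand in here; IMDB sentiment
# is just a convenient public example.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    # A small subset keeps the sketch cheap to run.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()  # the adaptation step: the pretrained foundation is fine-tuned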
Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI
Every technology fits, in its own unique way, into a far-flung network of different sites of social practice. Some technologies are employed in a specific site, and in those cases we often feel that we can warrant clear cause-and-effect stories about the transformations that have accompanied them, either in that site or others. Other technologies are so ubiquitous — found contributing to the evolution of the activities and relationships of so many distinct sites of practice — that we have no idea how to begin reckoning their effects upon society, assuming that such a global notion of “effects” even makes sense.
Computers fall in this latter category of ubiquitous technologies. In fact, from an analytical standpoint, computers are worse than that. Computers are representational artifacts, and the people who design them often start by constructing representations of the activities that are found in the sites where they will be used. This is the purpose of systems analysis, for example, and of the systematic mapping of conceptual entities and relationships in the early stages of database design. A computer, then, does not simply have an instrumental use in a given site of practice; the computer is frequently about that site in its very design. In this sense computing has been constituted as a kind of imperialism; it aims to reinvent virtually every other site of practice in its own image. Read More
AI Conference Recap: Google, Microsoft, Facebook, and Others at ICLR 2021
At the recent International Conference on Learning Representations (ICLR), research teams from several tech companies, including Google, Microsoft, IBM, Facebook, and Amazon, presented nearly 250 papers out of a total of 860 on a wide variety of AI topics related to deep learning.
The conference was held online in early May and featured a “round-the-clock” program of live talks and Q&A sessions, in addition to pre-recorded videos for all accepted papers. Each day of the four-day conference featured two Invited Talks from leading deep-learning researchers. Although most of the papers were from academia, many prominent tech companies were well represented by their AI researchers: Google contributed over 100 papers, including several winning Outstanding Paper awards, Microsoft 53, IBM 35, Facebook 23, Salesforce 7, and Amazon 4. Read More
Composite AI: What Is It, and Why You Need It
You might have noticed a new term, “composite AI,” floating around the cybersphere. Don’t worry: it’s not a complex new technology that you must master. In fact, while the term may be new, the core idea behind it is not. Nevertheless, it’s likely a technique that you should be thinking about incorporating into your enterprise AI processes.
Gartner helped put composite AI on the map last summer, when it published its 2020 Hype Cycle for Emerging Technologies. Simply put, composite AI refers to the “combination of different AI techniques to achieve the best result,” according to Gartner. That’s it. Simple enough, right? Read More
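A toy sketch of that definition, assuming a fraud-screening setting invented purely for illustration: one decision combines a learned statistical classifier with a hand-written symbolic rule, two different AI techniques working together.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [transaction amount, hour of day] -> fraud label.
X = np.array([[20, 14], [35, 10], [900, 3], [1200, 2], [15, 16], [1100, 4]])
y = np.array([0, 0, 1, 1, 0, 1])
clf = LogisticRegression().fit(X, y)  # technique 1: a learned classifier

def is_fraud(amount: float, hour: int) -> bool:
    ml_score = clf.predict_proba([[amount, hour]])[0, 1]  # statistical signal
    rule_hit = amount > 1000 and hour < 6                 # technique 2: symbolic rule
    # Composite decision: either technique alone can raise the flag.
    return ml_score > 0.8 or rule_hit

print(is_fraud(1500, 3))  # flagged: the symbolic rule fires
print(is_fraud(25, 12))   # not flagged by either component
```

The combination logic is the point: each technique covers blind spots of the other, which is exactly the “best result from multiple techniques” idea Gartner describes.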