Can an AI write an episode of Stargate?

The process of writing a television show typically involves a writers' room and a lot of time, as humans work out the plot and dialogue that make a show work.

For the cult classic Stargate science fiction franchise, which spanned three series (SG-1, Stargate Atlantis, and Stargate Universe), character and plot development was helmed by Stargate co-creator Brad Wright. In 2021, Wright publicly posted a message on Twitter asking whether it was possible for AI to write an episode of Stargate that would appear on the sci-fi insider site The Companion.

None other than Laurence Moroney, AI lead at Google, responded by picking up the gauntlet to prove what AI could do, though he wasn't initially worried that AI would replace him or other writers. Read More

#nlp

Artificial intelligence predicts patients’ race from their medical images

Study shows AI can identify self-reported race from medical images that contain no indications of race detectable by human experts.

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors the unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as white defendants. When an AI used cost as a proxy for health needs, it falsely labeled Black patients as healthier than equally sick white ones, because less money was spent on them. Even an AI used to write a play relied on harmful stereotypes for casting.

Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough? 

Examples of bias in natural language processing are boundless, but MIT scientists have investigated another important, largely underexplored modality: medical images. Using both private and public datasets, the team found that AI can accurately predict the self-reported race of patients from medical images alone. Using imaging data from chest X-rays, limb X-rays, chest CT scans, and mammograms, the team trained a deep learning model to identify race as white, Black, or Asian, even though the images themselves contained no explicit mention of the patient's race. This is a feat even the most seasoned physicians cannot perform, and it is not clear how the model does it. Read More
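
To make the setup concrete, here is a minimal sketch of the kind of experiment the summary describes: fine-tuning an off-the-shelf image classifier to predict a three-class self-reported race label from X-ray images. This is not the MIT team's actual code; PyTorch, the pretrained ResNet-50, the `xray_data/` folder layout, and the hyperparameters are all illustrative assumptions.

```python
# Sketch only: fine-tune a pretrained classifier on labeled X-rays.
# The dataset path and folder layout below are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# X-rays are grayscale, so replicate the channel to match the
# three-channel input the pretrained network expects.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: xray_data/train/<white|black|asian>/*.png
train_set = datasets.ImageFolder("xray_data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained ResNet-50 with a new 3-way classification head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The unsettling part of the finding is not the pipeline, which is entirely standard, but that a model trained this way can recover race labels from images in which expert radiologists can detect no racial signal.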

#bias, #ethics