Transparency: being clear about what you’ve done and what the impact is. It sounds wonky, but it matters enormously. Companies like Microsoft often pay lip service to “transparency,” yet provide precious little actual insight into how their systems work, how they are trained, or how they are tested internally, let alone what harm they may have caused.
We need to know what goes into [AI] systems, so we can understand their biases (political and social), their reliance on purloined works, and how to mitigate their many risks. We need to know how they are tested, so we can know whether they are safe.
Companies don’t really want to share, though they often pretend otherwise. … [Y]ou can’t find out enough about what the models were trained on to do good science (e.g., in order to figure out how well the models are reasoning versus whether they simply regurgitate what they were trained on).