Monday, January 25, 2021 - 4 p.m. to 5 p.m.
Computer vision has ceased to be a purely academic endeavor. From law enforcement to border control, employment, healthcare diagnostics, and the assignment of trust scores, computer vision systems are being rapidly integrated into all aspects of society. In research, some works purport to determine a person's sexuality from their social network profile images, while others claim to classify "violent individuals" from drone footage. These works were published in high-impact journals, and some were presented at workshops at top-tier computer vision conferences such as CVPR.
A critical public discourse surrounding the use of computer vision-based technologies has also been mounting. For example, the use of facial recognition technologies by policing agencies has been heavily critiqued, and in response, companies such as Microsoft, Amazon, and IBM have pulled or paused their facial recognition software services. Gender Shades showed that commercial gender classification systems have large disparities in error rates by skin type and gender, and other works discuss the harms caused by the mere existence of automatic gender recognition systems. Recent papers have also exposed shockingly racist and sexist labels in popular computer vision datasets, resulting in the removal of some. In this talk, I will highlight some of these issues along with proposed solutions for mitigating bias, and show how some of the proposed fixes could exacerbate the problem rather than mitigate it.
Computer Scientist; former Co-Lead of the Ethical AI Research Team, Google Brain; Founder of Black in AI
Timnit Gebru was a senior research scientist at Google, where she co-led the Ethical Artificial Intelligence research team. Her work focuses on mitigating the potential negative impacts of machine learning-based systems. Timnit is also the co-founder of Black in AI, a nonprofit supporting Black researchers and practitioners in artificial intelligence. Prior to this, she did a postdoc at Microsoft Research, New York City in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications underlying any data mining project. She received her Ph.D. from the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Prior to joining Fei-Fei's lab, she worked at Apple designing circuits and signal processing algorithms for various Apple products, including the first iPad.