Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem
February 16, 2020
Artificial intelligence (AI) excels at finding patterns, such as unusual human behavior or abnormal incidents. But it can also reflect human flaws and inconsistencies, including 180 known types of bias. Biased AI is everywhere, and like humans, it can discriminate on the basis of gender, race, age, disability and ideology.
AI bias has enormous potential to negatively affect women, minorities, people with disabilities, the elderly and other groups. Computer vision systems, for example, produce more false-positive facial identifications for women and people of color, according to research by MIT and Stanford University.
Sixty-three percent of organizations will deploy AI in at least one area of cybersecurity this year, according to Capgemini. AI can scale security operations and augment human skills, but it can also introduce risk. Cybersecurity AI requires diverse data and context to act effectively, and that is only possible with diverse cyber teams who can recognize subtle examples of bias in security algorithms. The cybersecurity diversity problem isn't new, but if left unchecked, it is about to create huge issues with biased cybersecurity AI.
Read more: Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem (securityintelligence.com)