The Hidden Bias in AI
(and how Actuate's working to solve it)
AI bias occurs when errors in a computer system produce unfair outcomes, such as privileging one group over others.
Facial recognition is a prime example of how algorithmic bias can undermine the accuracy of AI technology, with concerning implications. A 2018 MIT study found that gender classification algorithms had error rates of up to 34% for dark-skinned women, roughly 49 times the error rate for white men.
Actuate is built from the ground up to avoid algorithmic and user bias, respect privacy, and remain compliant by design. Here's how we did it.
Actuate works by detecting objects and actions. The system doesn't analyze individuals, meaning there is no facial recognition or search functionality that could introduce bias or invade privacy.
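To make the distinction concrete, here is a minimal sketch of object-level detection using an off-the-shelf torchvision model (an assumption for illustration only; Actuate's actual models and pipeline are not public). The point it shows: the output describes what is in a frame (class labels and boxes), with no face embeddings or identity lookups.

```python
# Sketch of object-level detection (illustrative; NOT Actuate's pipeline).
# The detector emits class labels and bounding boxes for objects,
# never identities -- there is no face embedding or identity lookup.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def detect_objects(frame: torch.Tensor, score_threshold: float = 0.8):
    """Return (label, score, box) triples for one video frame.

    `frame` is a 3xHxW float tensor with values in [0, 1].
    The result says *what* is present (e.g. 'backpack'), not *who*.
    """
    with torch.no_grad():
        preds = model([frame])[0]
    categories = weights.meta["categories"]  # COCO class names
    return [
        (categories[int(label)], float(score), box.tolist())
        for label, score, box in zip(
            preds["labels"], preds["scores"], preds["boxes"]
        )
        if score >= score_threshold
    ]
```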
Actuate is unbiased by design, but we're mindful of edge cases, such as algorithm accuracy across skin tones. We test with diverse datasets to ensure that results are not biased by the racial, ethnic, or gender identity of people captured in footage.
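One common way to run that kind of check is to compute error rates per annotated demographic group on a test set and flag any group whose rate diverges from the best-performing one. The sketch below is a hedged illustration; the function names, test-set format, and threshold are hypothetical, not Actuate's methodology.

```python
# Sketch of a subgroup-accuracy audit (illustrative; hypothetical format).
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group_label, was_correct) pairs drawn from
    a demographically annotated test set. Returns error rate per group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, max_ratio=1.25):
    """Flag groups whose error rate exceeds the best group's by max_ratio."""
    baseline = min(rates.values())
    if baseline == 0:
        return {g: r for g, r in rates.items() if r > 0}
    return {g: r for g, r in rates.items() if r / baseline > max_ratio}
```

A ratio threshold like this is only one simple disparity criterion; in practice teams typically track several subgroup metrics side by side before signing off on a model.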