Addressing AI Bias

Actuate is built from the ground up to respect privacy and eliminate key biases, including:

Algorithmic Bias

Errors in a computer system that create unfair outcomes, such as privileging one group over others

Human Bias

Even an unbiased system can be used in ways that introduce biases, such as selective enforcement of alerts

Overview

AI bias is a complex issue, especially in security and building management applications. Eliminating algorithmic bias by itself is not sufficient: There are also concerns that unbiased algorithms can be used in ways that introduce bias and compromise privacy.

Actuate is built from the ground up to avoid algorithmic and user bias, respect privacy, and remain compliant by design. Here’s how we did it.

How Actuate Addresses AI Bias and Privacy:

Detects Objects and Actions, Not Individuals

Actuate works by detecting objects and actions. The system doesn't analyze individuals: There is no facial recognition or search functionality that can result in bias or invade privacy.

Learn how Actuate compares with Facial Recognition
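
To make this concrete, here is a minimal, hypothetical sketch in Python of what a detection-only output could look like; the Detection fields and threat labels below are illustrative assumptions, not Actuate's actual schema. The point is structural: each result carries an object class, a confidence score, and a bounding box, with no identity or biometric fields that could be stored or searched.

```python
from dataclasses import dataclass

# Hypothetical detection record (illustrative, not Actuate's schema):
# the only fields are an object class, a confidence score, and a
# bounding box -- no face embeddings, no identity fields, nothing
# that could be used to look up or track a specific person.
@dataclass(frozen=True)
class Detection:
    label: str         # e.g. "handgun", "knife", "person"
    confidence: float  # model confidence in [0, 1]
    box: tuple         # (x1, y1, x2, y2) pixel coordinates

# Assumed threat classes for this sketch.
THREAT_LABELS = {"handgun", "rifle", "knife"}

def is_threat(det: Detection, threshold: float = 0.9) -> bool:
    """Flag a detection by what it is, never by who it is."""
    return det.label in THREAT_LABELS and det.confidence >= threshold
```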
Analytics-Focused User Experience

Actuate only sends alerts when a threat is detected, such as a weapon. For Building Management and COVID-19 Response, the system instead surfaces key trends. This approach accelerates decision-making and eliminates bias that can come from watching raw video.

Explore the Actuate Analytics UI
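
Continuing the hypothetical sketch above, an analytics-focused experience can be read as two simple operations: aggregate everything into trend counts, and surface only threat detections as alerts. The function names here are assumptions for illustration, reusing the Detection type and is_threat check from the previous sketch.

```python
from collections import Counter
from typing import Iterable, List

def summarize(detections: Iterable[Detection]) -> Counter:
    """Roll detections up into trend counts (e.g. occupancy by class)
    so operators review statistics rather than raw footage."""
    return Counter(det.label for det in detections)

def triage(detections: Iterable[Detection]) -> List[Detection]:
    """Surface only threat detections as alerts; everything else stays
    in the aggregate trends and never reaches a human as video."""
    return [det for det in detections if is_threat(det)]
```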
Extensive Bias Testing

Actuate is unbiased by design, but we're mindful of edge cases, such as algorithm accuracy across skin tones. We test with diverse datasets to ensure that results show no bias based on the racial, ethnic, or gender identity of people captured on footage.

Learn how Actuate is Compliant by Design
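
A common way to run this kind of test is disaggregated evaluation: measure accuracy separately for each annotated subgroup of a labeled test set and flag any gap beyond a tolerance. The sketch below illustrates that general technique in Python; the subgroup labels, the tolerance, and the function itself are assumptions, not Actuate's actual test harness.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def disaggregated_accuracy(
    results: Iterable[Tuple[str, bool]], max_gap: float = 0.02
) -> Tuple[Dict[str, float], float, bool]:
    """Compute accuracy per annotated subgroup and flag large gaps.

    `results` is an iterable of (group, correct) pairs from a labeled
    evaluation set; `max_gap` is an assumed tolerance for this sketch.
    """
    totals: Dict[str, int] = defaultdict(int)
    hits: Dict[str, int] = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        hits[group] += int(correct)
    accuracy = {g: hits[g] / totals[g] for g in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap, gap <= max_gap

# Example with made-up labels: the 0.5 gap here would fail the check.
per_group, gap, passed = disaggregated_accuracy(
    [("group_a", True), ("group_a", True),
     ("group_b", True), ("group_b", False)]
)
```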

Our Team's Perspective

We regularly publish articles on bias and privacy.


We Don’t Have to Sacrifice Students’ Privacy for Campus Security

Colleges and universities face a distinctly modern conundrum: They want and need to keep students safe, but smart security technologies that can track and monitor students’ activities on and off campus threaten their right to privacy. Schools and technology vendors must collaborate to find solutions that increase campus security while also protecting individual privacy.

Read more at ReadWrite

The False Choice Between AI Bias, Privacy and Safety in Smart Surveillance

Is there a choice between privacy, bias and safety in surveillance? As artificial intelligence continues to advance, we’ve seen an increased focus on bias and privacy. A 2019 report from the American Civil Liberties Union contains an ominous warning: Artificial intelligence-enabled video surveillance will soon compromise our civil liberties in a dangerous way. We consider this a false choice between privacy and safety in smart surveillance.

Read more at ReadWrite



Privacy-First Tech Doesn’t Have to Suck

The world is on the cusp of a reckoning about the value and sensitivity of data. In Europe, GDPR is causing companies to think more about how they store and manage data, especially across borders. Even in the United States, long the Wild West of data privacy and regulation, high-profile voices are starting to call for regulation. How should leaders respond?

Read more at Propmodo

Have Questions about AI Bias, Privacy, and Compliance?