Can AI Screening Tools Protect Campaigns Against Extremists?

Social media screening company Ferretly has launched a tool to help officials weed out extremists who apply for election-season jobs such as canvassing and poll watching, the latest example of technology aimed at securing elections.

Thieves. Anti-Semites. People who make threats.

Election campaigns are difficult enough without being weighed down by criminals and extremists.

That’s why Ferretly, a Maryland-based social media screening company, has launched a tool designed to help election officials weed out potential workers who might bring nothing but embarrassment, disruption and other forms of trouble.

Ferretly’s new Election Workforce Screening Platform uses artificial intelligence, including trained large language models, to “evaluate campaign and election personnel’s online presence and activities, helping to ensure high character, integrity and alignment with your organizational values,” according to the company.

The product is the latest example of technology being deployed in the service of election transparency, security and monitoring.

The launch comes amid reports that an Arizona election worker allegedly stole a voting-related security fob, an incident that highlights one of the many apparent risks to free and fair elections.

Ferretly says its new tool can spot “red flags” from campaign workers such as canvassers and poll watchers. Such flags could include hate speech, online bullying, drug use, violence, nudity and ties to extremist groups.

Clients of the company would decide whether such signals disqualify potential employees.

“Our philosophy is that you should have total freedom of speech, but you can’t have freedom from consequences,” Darrin Lipscomb, Ferretly CEO, told Government Technology.

The company, which was founded in 2019 and has raised $2.5 million, already provides screening for the National Football League and other organizations.

The new product uses what the company calls an “AI-based social profile search” along with facial comparison and fuzzy matching techniques.
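
Ferretly doesn’t publish implementation details, but “fuzzy matching” generally means tolerating small spelling variations when linking an applicant to candidate profiles. A minimal sketch of the idea in Python, using the standard library’s difflib; the profile names and the 0.8 threshold are invented for illustration:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two names, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical profiles returned by a social profile search.
profiles = ["Jon Smyth", "Jonathan Smith", "J. Smith", "Jane Smitherman"]
applicant = "John Smith"

# Keep profiles whose names score above an illustrative threshold,
# so near-miss spellings aren't discarded by an exact comparison.
matches = [p for p in profiles if name_similarity(applicant, p) >= 0.8]
print(matches)  # ['Jon Smyth', 'Jonathan Smith']
```

A real platform would presumably combine a score like this with other signals, such as the facial comparison the company mentions, before treating a profile as a match.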

The company touts its ability to analyze text and images on social media via “advanced machine learning algorithms,” which can then identify such behaviors as disparaging comments, threats and bigotry. The product also finds web and news articles about potential employees.
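
The company doesn’t say which algorithms it uses, but text screening of this kind is commonly built on supervised classifiers trained on labeled posts. A toy sketch using scikit-learn, with invented posts and labels standing in for what would be a much larger training corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, hand-labeled posts; a production system would train on a
# far larger, professionally labeled corpus and handle images separately.
posts = [
    "I will hurt you if you show up tomorrow",
    "those people are subhuman and don't belong here",
    "great turnout at the polls today",
    "thanks to all of our volunteers",
]
labels = ["threat", "bigotry", "benign", "benign"]

# TF-IDF turns each post into a weighted word-count vector; the
# logistic regression model then learns which words signal which label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["show up tomorrow and I will hurt you"]))
# Likely ['threat'], though a four-example model is far too small to trust.
```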

Content analysis also draws on keywords relevant to each client, as sketched below. The company additionally provides guidance to clients so they don’t run afoul of anti-discrimination rules from the Equal Employment Opportunity Commission.
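
How client keywords feed into the analysis isn’t spelled out, but the simplest form of keyword screening is a lookup against client-supplied term lists. A brief sketch, with categories and phrases invented for illustration:

```python
# Hypothetical client-supplied keyword lists mapped to flag categories;
# these terms are placeholders, not Ferretly's actual taxonomy.
CLIENT_KEYWORDS = {
    "threats": ["going to hurt", "you'll regret"],
    "ballot_interference": ["stuff the ballot", "burn the ballots"],
}

def flag_post(text: str) -> list[str]:
    """Return every flag category whose keywords appear in a post."""
    lowered = text.lower()
    return [
        category
        for category, terms in CLIENT_KEYWORDS.items()
        if any(term in lowered for term in terms)
    ]

print(flag_post("If they let me near the boxes I'll burn the ballots."))
# ['ballot_interference']
```

Under the company’s stated approach, hits like these would be surfaced to the client, who decides whether they are disqualifying.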

Lipscomb knocked down any comparison of his company’s social media screening technology with the so-called “social credit” systems reportedly being deployed in China.

The idea behind that effort is to use signals from a person’s daily life, including financial data and violations of the law, to determine how trustworthy that person is. That rating, in turn, can influence business and job opportunities, among other activities, though there remains significant dispute about how robust and pervasive the system, or systems, really are.

For starters, he said, the Chinese system focuses heavily on a person’s financial status, at least according to reports in Western media. The new Ferretly tool also has a much narrower purpose: to help employers find the best workers for their organizations, in this case election campaigns.

“The main thing is to weed out folks who are extremists on both ends,” he said.
Thad Rueter writes about the business of government technology. He covered local and state governments for newspapers in the Chicago area and Florida, as well as e-commerce, digital payments and related topics for various publications. He lives in Wisconsin.