By Bill Bratton, as seen in The Hill
Artificial intelligence (AI) has quickly become a transformative technology, touching many aspects of our lives by augmenting processes and tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and language translation. This technology allows machines to learn from experience, adjust to new data inputs and perform tasks with almost human-like responsiveness.
As companies increasingly rely on AI to solve their most complex and pressing business challenges, law enforcement has turned to AI as a tool to help carry out the multifaceted mission of modern-day policing. However, for all the potential that AI holds for law enforcement, we are still at the early stages of achieving fully viable and legally permissible options to meet law enforcement needs — particularly when it comes to capabilities such as video analytics and facial recognition.
Both have introduced challenges related to accuracy and bias, generating skepticism among the public and, in some cases, legal action or bans by elected officials in pockets around the country. The latest development garnering attention at the intersection of AI and law enforcement is “deception analysis,” which uses AI to assess an individual’s truthfulness in criminal investigations and in judicial and administrative proceedings.
In policing, we have benefited from AI’s ability to rapidly analyze large data sets to help identify individuals, predict criminal activity and facilitate enhanced communications. As police commissioner of the New York Police Department (NYPD), I embraced AI technology to help manage our precision policing methodology through CompStat — an accountability and crime-reduction approach that leverages intelligence and crime data to inform the rapid deployment of police resources. However, using AI to assess truthfulness via deception analysis is yet another example of the near-term limitations of this technology.
Deception analysis, which uses algorithms to classify facial micro expressions (there are seven universal micro expressions: disgust, anger, fear, sadness, happiness, surprise and contempt) together with vocal patterns to indicate an individual’s truthfulness, is attempting to find its way into the criminal justice process. Deception systems currently being tested by the Department of Homeland Security try to detect changes in a suspect’s eye movement, voice and body posture to assess how likely the individual is to be acting deceptively.
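To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of scoring pipeline such a system implies. Every feature name, weight and input value below is invented for illustration; none of it reflects the DHS systems or any real product.

```python
# Hypothetical sketch: combine micro-expression scores and vocal features
# into a single "deception" number. All weights and features are invented.

MICRO_EXPRESSIONS = ["disgust", "anger", "fear", "sadness",
                     "happiness", "surprise", "contempt"]

def deception_score(expression_scores: dict, vocal_features: dict) -> float:
    """Blend per-expression scores (0-1) and vocal features into one score."""
    # Average the seven universal micro-expression scores.
    facial = sum(expression_scores.get(e, 0.0)
                 for e in MICRO_EXPRESSIONS) / len(MICRO_EXPRESSIONS)
    # Invented vocal features; a real system would need validated ones.
    vocal = 0.5 * vocal_features.get("pitch_variability", 0.0) \
          + 0.5 * vocal_features.get("pause_rate", 0.0)
    # Invented blend weights: with no scientific ground truth for deception,
    # there is no principled way to set these.
    return 0.6 * facial + 0.4 * vocal

# Plausible-looking inputs yield a number, but the number has no validated
# relationship to whether the subject is actually telling the truth.
print(deception_score({"fear": 0.7, "contempt": 0.4},
                      {"pitch_variability": 0.3, "pause_rate": 0.5}))
```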
However, as with polygraph examinations — commonly known as “lie detector tests” — results from these deception analyses can be skewed by various physiological and psychological factors. AI-based deception systems face a similar criticism: To date, there is no scientific evidence of a consistent relationship between an individual’s internal mental state and intent and any kind of external behavioral indicators. As a result, models and algorithms designed to predict or identify deceptiveness may be deemed unreliable.
For instance, machine learning, which is based on the idea that systems can identify patterns and make decisions with minimal human intervention, needs to ingest baseline data to “learn” behavior patterns or other indicators of, in this instance, “deception.” In the absence of a data set demonstrating a predictable correlation between individuals’ intentions, actions and motivations and their deceptiveness, the results of deception analysis remain of limited judicial admissibility.
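A toy training example makes the baseline-data problem visible. The sketch below assumes scikit-learn is available and uses fabricated feature vectors and fabricated “deceptive” labels; the point is that the model can only be as trustworthy as those labels, which is exactly the ground truth the field currently lacks.

```python
# Sketch of the supervised "learning" step the paragraph describes.
# The data and labels here are placeholders, not real measurements.
from sklearn.linear_model import LogisticRegression

# Hypothetical baseline rows: [eye_movement, voice_stress, posture_shift]
X = [[0.2, 0.1, 0.3],
     [0.8, 0.7, 0.6],
     [0.4, 0.5, 0.2],
     [0.9, 0.8, 0.7]]
y = [0, 1, 0, 1]  # 0 = "truthful", 1 = "deceptive": the labels are the weak link

model = LogisticRegression().fit(X, y)
# The output is a probability conditioned on disputed labels, not a verdict.
print(model.predict_proba([[0.6, 0.6, 0.5]]))
```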
Additional challenges to the reliability of AI-based deception technology include the process by which the systems are built and deployed, as well as the legality of the information that law enforcement can collect and use under changing privacy regulations.
Deception analysis is one type of risk assessment process, seeking to draw a conclusion about an individual’s truthfulness or dishonesty based on certain inputs, assumptions and logic. Law enforcement and other government agencies must fully understand these concerns — and their potential legal and ethical implications — before adopting an automated process for assessing an individual’s tendency toward deception, especially when attempting to adjudicate a suspect’s guilt or innocence.
In my nearly 50 years of law enforcement experience, I can attest that not everyone behaves in the same manner, especially when trying to hide the truth. Thus, finding a baseline pattern of behavior from which to develop machine-learning algorithms remains a difficult task. My concern is that a high probability that someone is lying is not the same as certainty that the person is untruthful. And when it comes to enforcing the law, any mistake can exact a significant toll on individual lives and on overall public safety and trust.
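A back-of-the-envelope calculation, with invented numbers, shows how wide the gap between a high probability and certainty can be.

```python
# Illustrative base-rate arithmetic: even a detector that is right 90% of
# the time produces many false alarms when most subjects are truthful.
# Both rates below are assumptions chosen for illustration.

accuracy = 0.90    # assumed rate of correctly flagging liars and clearing truth-tellers
base_rate = 0.10   # assumed fraction of subjects who are actually being deceptive

true_flags = base_rate * accuracy               # liars correctly flagged
false_flags = (1 - base_rate) * (1 - accuracy)  # truthful people wrongly flagged
ppv = true_flags / (true_flags + false_flags)

print(f"Chance a flagged person is actually lying: {ppv:.0%}")  # -> 50%
```

Under these hypothetical numbers, half of the people the system flags as deceptive are in fact telling the truth — an error rate no court or police department could accept as proof.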
Much as San Francisco recently banned the use of facial recognition technology by police and other city agencies, I foresee near-horizon legal challenges to the use of AI-based deception technology. Agencies that rush to deploy micro-facial AI-based technology for deception identification, without the necessary testing and data validation, may bear the brunt of those legal challenges. However, these challenges will not negate or stop further development of this technology. Instead, they will provide precedents for resolving future privacy and developmental concerns so that AI-based deception technology eventually can become a viable tool for law enforcement.
Going forward, the adoption of artificial intelligence by law enforcement agencies will ultimately help align safety and mitigation strategies in a dynamically changing threat environment. However, the legal, technical and ethical challenges that accompany deception analysis and facial recognition capabilities today should guide law enforcement’s implementation of AI as investigative support, not investigative conclusion. In the realms of law enforcement and justice — where the stakes are measured in human lives, and both nuance and precision are paramount — there is no room for uncertainty or error.
William J. Bratton is executive chairman of Teneo Risk Advisory, a global consulting firm headquartered in New York, and vice chairman of the U.S. Secretary of Homeland Security's advisory council. He was twice police commissioner of the City of New York, 2014-16 and 1994-96, and was police chief in Los Angeles for seven years — the only person ever to lead the nation's two largest police departments. His 46-year career in law enforcement includes serving as Boston's police commissioner and New York City's transit police chief.