Whilst most AI systems are built to organise, categorise, and optimise processes to provide information and insight, some groups are using AI for criminal purposes, attempting to circumvent security and financial controls. Examples of potential misuse include the creation of fake audio.
Synthetic media
AI is being used to generate voice fonts for specific voices, to simulate a particular person. This is done by sampling the target's speech and reconstructing a voice font, which can then generate set phrases that sound like, and so impersonate, the victim.
Implications: as these techniques advance and become more widely available, it will be easy to create natural-sounding fake audio of events that is impossible to distinguish from genuine recordings.
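In practice, a "voice font" is typically a compact numerical representation of a speaker, often called a speaker embedding, computed from audio samples. As a minimal sketch, assuming the open-source resemblyzer package and local recordings sample.wav and other.wav (illustrative choices, not tools discussed in the white paper), a voice "fingerprint" can be derived like this:

```python
# Minimal sketch: deriving a speaker embedding (a "voice font") from a
# short recording. Assumes `pip install resemblyzer` and local audio
# files; both are illustrative choices, not tools named in this article.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

wav = preprocess_wav("sample.wav")        # load and normalise the recording
encoder = VoiceEncoder()                  # pretrained speaker-encoder network
embedding = encoder.embed_utterance(wav)  # 256-dimensional voice "fingerprint"

# Embeddings are L2-normalised, so a dot product gives cosine similarity,
# i.e. how alike two voices sound to the model.
other = encoder.embed_utterance(preprocess_wav("other.wav"))
print("voice similarity:", float(np.dot(embedding, other)))
```

Notably, the same kind of embedding used to clone a voice also underpins legitimate speaker-verification systems.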
AI impersonation
AI is used to analyse a person's speech patterns and conversational habits so that their style can be mimicked. When combined with a voice font, this can generate artificial conversation in phrasing and a voice similar to the victim's.
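To illustrate how phrasing can be mimicked from captured text, here is a minimal sketch using only the Python standard library: a simple Markov-chain model that learns which words follow which in the source material and then emits text with similar word patterns. The sample text is invented for illustration; real attacks would use far more capable language models.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` words to the words seen to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=25):
    """Emit text that statistically resembles the training material."""
    state = random.choice(list(model.keys()))
    output = list(state)
    for _ in range(length):
        followers = model.get(state)
        if not followers:          # dead end: no observed continuation
            break
        output.append(random.choice(followers))
        state = tuple(output[-len(state):])
    return " ".join(output)

# Invented sample of a victim's writing style (illustrative only).
samples = ("thanks for flagging that I will look into it and get back to "
           "you shortly please can you send the invoice over and I will "
           "approve the payment today thanks again for your patience")
print(generate(build_model(samples)))
```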
Unintentional misuse
AI can introduce bias into decision-making, particularly if the training dataset is skewed. This reinforces the need for high-quality, representative data.
Example: Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. In essence, Amazon’s system taught itself that male candidates were preferable. It penalised résumés that included the word “women’s”, as in “women’s chess club captain”.
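To make the mechanism concrete, here is a minimal sketch using scikit-learn and synthetic data (all numbers and feature names are invented for illustration, not taken from the Amazon system): a classifier trained on historically skewed hiring outcomes learns a negative weight for a gendered token.

```python
# Minimal sketch: bias learned from skewed training data.
# Assumes `pip install scikit-learn numpy`; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
has_womens = rng.integers(0, 2, n)   # résumé contains the token "women's"
skill = rng.normal(0, 1, n)          # genuine skill signal

# Historical (biased) outcomes: past hiring favoured résumés without the token.
hired = (skill + 1.5 * (1 - has_womens) + rng.normal(0, 1, n) > 1.0).astype(int)

X = np.column_stack([has_womens, skill])
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the token is strongly negative: the model has
# "taught itself" to penalise it, much as described above.
print("weight for the \"women's\" token:", round(model.coef_[0][0], 2))
```

Nothing in the code names gender as a target; the bias is inherited entirely from the skewed labels, which is why data quality matters more than model choice here.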
Interested in more information? Please check out this link for access to our full white paper on AI in Policing: https://lnkd.in/ew6PsJyC