20200123

[OPINION] 'Deep fake' imagery manipulation poses threat to society not just military, US General warns

https://www.telegraph.co.uk/news/2020/01/15/deep-fake-imagery-manipulation-poses-threat-society-not-just/

‘Deep fakes’ pose a major threat to elections and even world peace, a US military leader has warned.

The manipulation of digital pictures to show misleading ‘deep fake’ images of people will become more widespread as artificial intelligence (AI) systems develop, says a senior American official.

Speaking on a visit to Nato headquarters in Brussels, Lieutenant General Jack Shanahan said he was “deeply concerned” about the “corrosive influence” of disinformation campaigns on political election cycles.

"What if a senior leader was to come on and announce that the nation was at war, but it was a deep fake?" he asked.

“It's something I think a lot about because the level of realism and fidelity has vastly increased from just a year ago," the Director of the US Joint Artificial Intelligence Center said.

"People have such a growing cynicism and scepticism about what they're reading, seeing and hearing, that this could become such a corrosive effect over time that nobody knows what is reality anymore.

"Those are areas that are of increasing concern across the whole of society, not just the US military.”

The process of creating a deep fake image or video requires many pictures of a person's face, which makes politicians and celebrities particularly vulnerable.

However, the process is not yet foolproof and many such false images leave visible signs they are fake. Blurring or flickering around a person's face is a common indicator an image has been doctored, especially when the face changes angles rapidly.

Lt Gen Shanahan warned it is increasingly likely society will be influenced by false imagery as hackers’ expertise and the sophistication of AI technology increase.

“Within probably 30 minutes you can get online and start developing fairly high fidelity deep fakes," he said.

The General also sought to provide reassurance that, as AI capability developed, the US military would not seek to develop autonomous “killer robots”.

“There are aspects of AI that feel different - the black box aspect of machine learning - but overall we have the process and policies in place to ensure that we [stick to] the laws of war, rules of engagement and proportionality.

“We will not violate those core principles.

“Humans will be held accountable. It will not be something that we say ‘the black box did it, no-one will be held accountable’. Just like in every mistake that has happened on a battlefield in our history, there will be accountability.

“We are not looking to go to this future of... killer robots: unsupervised, independent, self-targeting systems.

“Lethal, autonomous weapon systems, right now for the Department of Defence, is not something we are working actively towards.”
