Fighting a problem that never existed before
“We work in an arena that will only grow in the future,” says Giorgio Patrini, the CEO and chief scientist for Deeptrace. “It’s about technology creating a problem which in turn forces us to apply technology to defend ourselves and solve that problem.”
Deeptrace is confronting a very topical challenge – finding fake videos made to look real through the use of digital technology – and has received international attention from news organisations, social media giants, brands and law enforcement. “Only by all of us working together as a united front will we be able to control this problem,” says Patrini.
Giorgio Patrini, the CEO and chief scientist for Deeptrace
Identifying more than 44,000 fake videos
As of late May 2020, Deeptrace had identified over 44,000 deepfakes. That number continues to rise as the technology becomes more accessible and user-friendly. Meanwhile, these AI-generated synthetic videos are also growing more sophisticated – and increasingly difficult to detect.
Over 95% of such videos are sex-related, whether it’s putting celebrity faces on adult-film actors or producing fake sexual content to publicly shame an ex. The remainder is usually deployed for comedic and/or political ends – for example, an Oval Office address emphasising the orange-ness of the president. More recently, Extinction Rebellion released a deepfake video of the Belgian Prime Minister linking COVID-19 with the climate crisis.
The criminal connection to deepfakes
Deepfake audio is also increasingly being used for fraud and identity theft. Last year, for example, the chief executive of a UK subsidiary of a German energy firm transferred around €220,000 to a Hungarian bank account after taking a phone call from a voice he believed to be that of his German CEO.
“We’re also seeing more and more grim stuff appearing,” says Patrini. “An increasing number of serious criminals are using the technologies.” Perhaps the most dangerous aspect of deepfakes is a side-effect: real video evidence of a crime can now simply be dismissed as fabricated.
At the forefront of machine learning and computer vision
Patrini, originally from Italy, came to Amsterdam as a machine-learning researcher after getting his PhD in Australia. “Actually, Amsterdam is a place where much of the innovation around deep learning and ‘generative models’ is occurring. Those technologies can be used to create synthetic media, such as images of things that do not exist yet [but are as] realistic as real photos. One example of this is deepfakes. And we’re using these same tools to fight their misuse.”
Patrini is referring to how the University of Amsterdam is at the forefront of machine learning and computer vision. “This community, with Max Welling being one of the top scientists, is coming up with new techniques around generative models, GANs and neural networks,” says Patrini. Of course, in this context, these technologies are being developed for the greater good – from modelling dark matter to identifying disease.
The Deeptrace team in their Amsterdam office
Finding support for startups in Amsterdam
Meanwhile, the work of Deeptrace is currently getting more attention – and much of its funding – from outside the Netherlands. “At one point, we even considered relocating, but the government here is very supportive via tax relief for startups – which is not very common, especially in Europe. And as for funding, that’s shifting too. We just closed another round, and actually, one of the main investors is from Amsterdam – but I can’t talk about any specifics yet,” smiles Patrini.
Getting ready for the digital arms race
Though it was founded just over 18 months ago, Deeptrace has already seen the deepfake landscape change radically. “It’s basically an arms race; the analogy is that it’s a cybersecurity problem, so we have to keep updating our defences – like antivirus software,” says Patrini. “You have to stay up to date.”
Last year, there was a brief period when people thought the deepfake problem was solved: it was discovered that deepfakes did not blink, because the models behind them are trained on thousands or millions of photos of people, who usually have their eyes open. Within days, deepfakes started to blink.
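The blink cue described above can be sketched in a few lines. This is an illustrative toy, not Deeptrace’s method: it assumes some upstream face-landmark tracker has already produced six (x, y) eye landmarks per frame, from which the widely used eye-aspect-ratio (EAR) heuristic flags clips whose subject never blinks.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks.

    Assumes the common 6-point layout: indices 0 and 3 are the
    horizontal corners; 1/5 and 2/4 are vertical pairs.  EAR drops
    sharply when the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame EAR values.

    A blink is registered when EAR stays below `threshold` for at
    least `min_frames` consecutive frames.  A talking head that never
    blinks over a long clip would have been a red flag for the early
    deepfakes described above.
    """
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Open eyes sit around EAR 0.3; closed eyes dip towards 0.1.
ears = [0.31, 0.30, 0.12, 0.09, 0.11, 0.29, 0.30, 0.32]
print(count_blinks(ears))  # → 1
```

As the article notes, this particular weakness was patched within days, which is exactly why single-artefact heuristics like this one stop working almost as soon as they are published.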
“As soon as a weakness is found, someone fixes it,” says Patrini. “So, I don’t think this is a problem that will ever be solved completely.”
A two-pronged approach
Meanwhile, Deeptrace approaches the problem in two ways. “The first is detection: finding those artefacts in a potential deepfake that show it was manipulated by a machine. The other is providing threat intelligence: who are the actors, what tech are they using, and for what reasons?”
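The two prongs Patrini describes can be pictured as a simple report structure. This is a hypothetical sketch for illustration only – all class and field names are invented and do not reflect Deeptrace’s actual product or API:

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    """Prong one: per-video manipulation evidence (hypothetical fields)."""
    video_id: str
    manipulation_score: float  # 0.0 = likely authentic, 1.0 = likely synthetic
    artefacts: list = field(default_factory=list)  # e.g. ["blending boundary"]

@dataclass
class ThreatIntel:
    """Prong two: who made it, with what tools, and why (hypothetical fields)."""
    suspected_actor: str
    generation_tool: str
    likely_motive: str

@dataclass
class DeepfakeReport:
    detection: DetectionResult
    intel: ThreatIntel

    def is_flagged(self, threshold: float = 0.5) -> bool:
        """Flag the video when the manipulation score crosses a threshold."""
        return self.detection.manipulation_score >= threshold

report = DeepfakeReport(
    DetectionResult("clip-001", 0.87, ["blending boundary", "irregular blinking"]),
    ThreatIntel("unknown", "open-source face-swap model", "disinformation"),
)
print(report.is_flagged())  # → True
```

The point of pairing the two records is the one Patrini makes: a score alone says a video is fake, while the intelligence record says who is making fakes, with what, and why.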
With the 2020 US presidential elections serving as a deadline of sorts, how does Patrini see the year playing out? “Well, it’s very hard to say what will happen in practice. I think it will be a year of experimenting to see how we can solve the problem as a community. From a political standpoint, there’s a lot of pressure and it seems that policymakers are becoming more aware of the issue. Without legislation, it will stay the Wild West.”
From Amsterdam to the world
Patrini is optimistic that the major social media platforms – Twitter, Facebook and YouTube – recently all came out with policies around deepfakes, essentially banning any synthetic video shared with “malicious intent”.
“The problem of disinformation is so huge [that] it’s about fighting it as a community. Social platforms, traditional media and security companies need to all work together to address this problem. It must be a united effort.”
At the same time, Patrini wants his company to grow. “That’s another reason we love Amsterdam: the recruitment possibilities. The universities here are just factories when it comes to pumping out AI talent – engineers who are already experienced in cutting-edge technologies. Here, we’re close to the source.”