Sensity deepfake detection
Sensity is enjoying the international spotlight thanks to its deepfake detection tools, which use deep learning and computer vision to find and monitor AI-generated synthetic videos. StartupAmsterdam spoke to co-founder Giorgio Patrini about life on the vanguard of a new technological arms race and the advantages of being headquartered in Amsterdam.
Fighting a problem that never existed before
Sensity is confronting a new type of challenge. Its ability to spot deepfakes – fake videos made to look real through digital manipulation – has received international attention from news organisations, social media giants, brands and law enforcement. Giorgio Patrini, CEO and chief scientist of Sensity, says: “We work in an arena that will only grow in the future. It’s about technology creating a problem which in turn forces us to apply technology to defend ourselves and solve that problem.” “Only by all of us working together as a united front will we be able to control this problem,” Patrini adds.
Identifying thousands of false identities
Sensity has identified over 44,000 deepfakes, a number that continues to rise exponentially as the technology becomes more available and user-friendly. At the same time, growing sophistication makes these AI-generated synthetic videos increasingly difficult to detect. Over 95% of such videos are sex-related, whether it’s putting celebrity faces on adult film actors or producing fake sexual content to publicly shame an ex. The rest are mostly deployed for comedic or political ends – for example, Extinction Rebellion released a deepfake video of the Belgian Prime Minister linking COVID-19 with the climate crisis.
The criminal connection to deepfakes
Deepfake audio is also increasingly used for fraud and identity theft. In one case, the chief executive of a UK subsidiary of a German energy firm transferred around €220,000 to a Hungarian bank account after receiving a phone call from a voice he believed to be that of his German CEO.
“We’re also seeing more and more grim stuff appearing. An increasing number of serious criminals are using the technologies.” Perhaps the most dangerous aspect of deepfakes is a side-effect: real video evidence of a crime can now simply be dismissed as fabricated.
At the forefront of machine learning and computer vision
Originally from Italy, Patrini came to Amsterdam as a machine-learning researcher after getting his PhD in Australia. “Amsterdam is a place where much of the innovation around deep learning and ‘generative models’ is occurring. Those technologies can be used to create synthetic media, such as images of things that do not exist yet [but are as] realistic as real photos. One example of this is deepfakes. And we’re using these same tools to fight their misuse.”
The University of Amsterdam is at the forefront of machine learning and computer vision. “This community, with Max Welling being one of the top scientists, is coming up with new techniques around generative models, GANs and neural networks.” Of course, in this context, these technologies are being developed for the greater good – from modelling dark matter to identifying disease.
Finding support for startups in Amsterdam
Sensity's work is garnering more attention – and much of its funding – from outside the Netherlands. “At one point, we even considered relocating, but the government here is very supportive via tax relief for startups – which is not very common, especially in Europe. And as for funding, that’s shifting too. We just closed another round, and actually, one of the main investors is from Amsterdam – but I can’t talk about any specifics yet.” Patrini says this last part with a smile.
Getting ready for the digital arms race
In its time as a company, Sensity has already seen the deepfake landscape change radically. “It’s basically an arms race. The analogy is that it’s a cybersecurity problem, and we have to keep updating our defences – like antivirus software. You have to stay up to date.”
Last year, there was a brief period when people thought the deepfake problem was solved: it was discovered that deepfakes did not blink, because the models behind them are trained on thousands or millions of photos of people, who usually have their eyes open. Within days, deepfakes started to blink.
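The blink heuristic described above can be illustrated with the eye aspect ratio (EAR), a simple measure widely used in blink-detection research. Below is a minimal sketch, not Sensity's actual method: it assumes six (x, y) eye landmarks per frame have already been extracted by some face-landmark model, and the function names and threshold are illustrative choices.

```python
import math

def eye_aspect_ratio(eye):
    """EAR: ratio of the eye's vertical openings to its horizontal width.
    `eye` is six (x, y) landmarks ordered: left corner, top-left, top-right,
    right corner, bottom-right, bottom-left. Low EAR means a closed eye."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2):
    """Count transitions from open to closed across a sequence of per-frame
    EAR values. A clip of several seconds with zero blinks is suspicious."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks
```

A real clip of a person should register at least one blink every few seconds, so a zero count would flag the video for closer inspection. As the article notes, generators quickly learned to produce blinking, so a heuristic like this is no longer reliable on its own.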
“As soon as a weakness is found, someone fixes it. So, I don’t think this is a problem that will ever be solved completely.”
From Amsterdam to the world
Patrini is optimistic about the future, especially about the next generation of AI scientists and about how the major social media platforms have recently handled policies against deepfakes.
“That’s another reason we love Amsterdam: the recruitment possibilities. The universities here are just factories when it comes to pumping out AI talent – engineers who are already experienced in cutting-edge technologies. Here, we’re close to the source.”