With deepfakes expected to pose a major challenge in next year's elections and beyond, Facebook has announced one way it plans to tackle the problem. As part of a new partnership that includes MIT, Microsoft, and the University of Oxford, among others, Facebook will spend more than $10 million on an industry-wide effort to combat deepfakes.
The program, dubbed the Deepfake Detection Challenge (DFDC), aims to produce open-source tools that governments, companies, and media organizations can use to better identify when a video has been manipulated. Facebook's contribution to the project includes commissioning developers to create videos that researchers can use to test the detection tools they build.
The need for large data sets in fighting deepfakes was underscored by a recent breakthrough. In June, researchers from the USC Information Sciences Institute (USC ISI) developed an algorithm they claimed could detect fake videos with nearly 96 percent accuracy. To train their system, the team used a data set of more than 1,000 manipulated videos. As the tools bad actors use to create deepfakes grow more sophisticated, researchers will likely need even more data to build effective detection methods.
On a related note, with AI tools making it ever easier to create fake explicit images, the problem of revenge porn is also getting worse. Virginia recently expanded its law against harassment via the sharing of sexual images to cover deepfake videos and photos.
The ban took effect on July 1, 2019. Previously, the law criminalized using nude or sexual images to “harass, coerce, or intimidate” another person; language has now been added so that it also covers pictures that include “a falsely created videographic or still image.”