
AI fake-face generators can be rewound to reveal the real faces they were trained on


However, this assumes that you can get hold of the training data, Kautz says. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, that requires no access to the training data at all.

Instead, they developed an algorithm that can re-create the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what’s in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges, to shapes, to more recognizable features.

Kautz’s team found that they could interrupt a model in the middle of these steps and reverse it, reconstructing the input image from the model’s internal data. They tested the approach on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately reconstruct images from ImageNet, one of the best-known image-recognition datasets.
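To make the idea concrete, the sketch below shows a generic inversion-by-optimization loop: a pretrained classifier is truncated at an intermediate layer, and a guessed image is optimized until its internal activations match those recorded for the original image. This is only an illustration of the general technique, assuming PyTorch and a torchvision ResNet-18; the split point and optimizer settings are arbitrary, and it is not Nvidia’s actual algorithm.

```python
# Illustrative feature inversion: recover an approximation of an input image
# from the intermediate activations a model produced for it. Generic sketch,
# not the method described in the paper.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained classifier, truncated "in the middle of these steps".
# children() of ResNet-18: conv1, bn1, relu, maxpool, layer1, layer2, ...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
truncated = nn.Sequential(*list(backbone.children())[:6])  # up to layer2

def reconstruct(target_image, steps=500, lr=0.05):
    """Optimize a random image until its activations match the target's."""
    with torch.no_grad():
        target_feats = truncated(target_image)  # the "internal data" an attacker sees

    guess = torch.randn_like(target_image, requires_grad=True)
    opt = torch.optim.Adam([guess], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(truncated(guess), target_feats)
        loss.backward()
        opt.step()
    return guess.detach()

# Usage: pass a (1, 3, 224, 224) tensor normalized like ImageNet inputs;
# the returned tensor is the reconstructed approximation of that image.
```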

Images from ImageNet (top) alongside reconstructions of those images created by rewinding a model trained on ImageNet (bottom)

As in Webster’s work, the re-created images closely resemble the real ones. “We were surprised by the final quality,” Kautz says.

The researchers argue that this kind of attack is not merely hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, models are sometimes only half-processed on the device itself and then sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone because only the model is shared, Kautz says. But his attack shows that this isn’t the case.
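The sketch below illustrates the split-computing setup the attack targets: the early layers run on the phone, and only the resulting activation tensor is sent to the cloud, which finishes the computation. The model, split point, and names are hypothetical choices for illustration; the point is that the tensor crossing the network is exactly the internal data an inversion attack like the one above starts from.

```python
# Toy illustration of split computing: the phone runs the first half of a
# network and ships only intermediate activations to the cloud, which runs
# the rest. Hypothetical split point and model; real deployments vary.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
layers = list(backbone.children())

on_device = nn.Sequential(*layers[:6])                               # runs on the phone
in_cloud = nn.Sequential(*layers[6:-1], nn.Flatten(), layers[-1])    # runs remotely

def classify_split(image):
    # The only thing that leaves the phone is this activation tensor...
    activations = on_device(image)
    # ...yet it is the same kind of internal data that reconstruct() inverts.
    logits = in_cloud(activations)
    return logits.argmax(dim=1)
```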

Kautz and his colleagues are now working on ways to prevent models from leaking private data. “We wanted to understand the risks so we could reduce vulnerabilities,” he says.

Although they use very different techniques, he thinks his work and Webster’s complement each other well. Webster’s team showed that private data could be found in a model’s output; Kautz’s team showed that private data could be revealed by working in reverse, reconstructing the input. “Exploring both directions is important to gain a better understanding of how to prevent attacks,” Kautz says.


