University of Wisconsin-Madison

04/26/2024 | News release

Popular social media apps use AI to analyze photos on your phone, introducing both bias and errors

Vision models built into the social media apps Instagram and TikTok can instantly categorize different concepts extracted from photos. TikTok predicts age and gender while Instagram predicts more than 500 different traits in photos. Photo by Joel Hallberg/UW-Madison

Digital privacy and security engineers at the University of Wisconsin-Madison have found that the artificial intelligence-based systems that TikTok and Instagram use to extract personal and demographic data from user images can misclassify aspects of the images. This could lead to mistakes in age verification systems or introduce other errors and biases into platforms that use these types of systems for digital services.

Led by Kassem Fawaz, an associate professor of electrical and computer engineering at UW-Madison, the researchers studied the two platforms' mobile apps to understand what types of information their machine learning vision models collect about users from their photographs - and importantly, whether the models accurately recognize demographic differences and age.

The team will present its findings at the IEEE Symposium on Security and Privacy in San Francisco in May 2024. The findings are also available on the preprint server arXiv.

Many mobile applications use machine learning or AI systems called "vision models" to look at images on a user's phone and extract data, which can be useful in facial recognition or in verifying a user's age. These models can collect a lot of other information too, including demographic details, objects in a photo and possible locations, though it's not clear what, if anything, this data is used for, according to Fawaz. Not long ago, this processing happened in the cloud: apps would send user images to an offsite server, where vision models analyzed them.

"Nowadays, phones are fast enough where they can actually do the machine learning directly on the device, which not only saves the platforms money, but it also allows for more data to be used, and for different types of data to be produced," says PhD student Jack West, who worked on the project with PhD student Shimaa Ahmed and Fawaz.

Because that processing now happens on people's devices, it also means researchers can look more closely at the AI vision models and the types of data they collect and process.

The UW-Madison team analyzed the two platforms' models to determine what types of information they collect and how they process information. West created a custom operating system to track information fed into the vision model and to collect the model's output. The team did not try to extract or reverse-engineer the vision model itself, which would violate the apps' terms of service.

"We opened the app and found where the input is happening and what the output is," explains Fawaz. "We were basically watching the apps in action."


They found that when users choose a photo from their phone's camera app to upload to TikTok, its vision model automatically predicts the age and gender of any person or people in that image. Using that understanding, the researchers ran a data set of more than 40,000 faces through the vision model and found that it made more mistakes classifying people under 18 than people over 18. It often classified people ages 0 to 2 as being between 12 and 18 years old.
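As a rough sketch of how such an error analysis might work (not the authors' actual code), the snippet below tallies how often a model's predicted age range misses the true age, split into under-18 and 18-plus groups. The data and the range format are made up for illustration.

from collections import defaultdict

def bracket(age):
    return "under_18" if age < 18 else "18_plus"

def error_rates(samples):
    # samples: iterable of (true_age, predicted_low, predicted_high) tuples
    errors = defaultdict(int)
    totals = defaultdict(int)
    for true_age, lo, hi in samples:
        group = bracket(true_age)
        totals[group] += 1
        if not (lo <= true_age <= hi):  # predicted range misses the true age
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy example: a 1-year-old and a 2-year-old both predicted as 12-18
print(error_rates([(1, 12, 18), (25, 21, 27), (15, 12, 18), (2, 12, 18)]))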

An analysis of Instagram found that its vision model categorized more than 500 different "concepts," including age and gender, time of day, background images and even what foods people were eating in the photographs.

"That's a lot of information," says Ahmed. "We found 11 of these concepts to be related to facial features, like hair color, having a beard, eyeglasses, jewelry, et cetera."

The researchers showed Instagram's model a set of AI-generated images of people representing different ethnicities, then gauged whether it could correctly determine the 11 face-related characteristics. While Instagram was much better than TikTok at classifying images by age, it had its own set of issues.

"It didn't perform as well across all demographics, and seemed biased against one group," says Ahmed.

So what, exactly, are the apps doing with this information? It's not totally clear.

"The moment you select a photo on Instagram, regardless of whether you discard it, the app analyzes the photo and grows a local cache of information," says West. "The data is stored locally, on your device - and we have no evidence it was accessed or sent. But it's there."

If Instagram and TikTok are using the data for purposes like age or identity verification, the researchers believe the technology has room for improvement. Decreasing bias in these types of vision models, they say, can help ensure all users receive fair and accurate digital services in the future.

Other UW-Madison authors include Maggie Bartig and Professor Suman Banerjee; Lea Thiemt of the Technical University of Munich is also an author.