NEW YORK — Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.
But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.
Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn made with the technology began spreading across the internet several years ago, when a Reddit user shared clips that placed the faces of female celebrities on the bodies of porn actors.
Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.
The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.
“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”
Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when, out of curiosity, she ran a Google search on an image of herself. To this day, Martin says she doesn’t know who created the fake images, or the videos of her engaging in sexual intercourse that she would later find. She suspects someone took a picture posted on her social media page or elsewhere and doctored it into porn.
Horrified, Martin contacted different websites over a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, only for her to find them back up soon after.
“You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”
The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.
Eventually, Martin turned her attention toward legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they didn’t comply with removal notices for such content from online safety regulators.
But governing the internet is next to impossible when countries have their own laws for content that's sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.
In the meantime, some AI companies say they’re already curbing access to explicit images.
OpenAI says it removed explicit content from the data used to train its image-generating tool DALL-E, which limits users’ ability to create those types of images. The company also filters requests, and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another image generator, blocks the use of certain keywords and encourages users to flag problematic images to moderators.
Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. The change followed reports that some users were creating celebrity-inspired nude pictures with the technology.
Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”
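Bishara’s description, keyword screening plus image recognition with a blurred image returned on a hit, maps onto a common two-stage moderation pattern. Below is a minimal sketch of that pattern, not Stability AI’s actual code: the keyword list, the 0.8 threshold and the nsfw_score classifier supplied by the caller are all assumptions, and the blur uses the Pillow imaging library.

```python
from typing import Callable

from PIL import Image, ImageFilter

# Illustrative keyword list; the real filter's terms are not public.
BLOCKED_KEYWORDS = {"nude", "nsfw", "explicit"}

def filter_output(prompt: str,
                  image: Image.Image,
                  nsfw_score: Callable[[Image.Image], float],
                  threshold: float = 0.8) -> Image.Image:
    """Return the generated image, blurred if either check trips."""
    # Stage 1: keyword screening of the text prompt.
    flagged = any(word in prompt.lower() for word in BLOCKED_KEYWORDS)
    # Stage 2: an image-recognition model scores the picture itself.
    if not flagged:
        flagged = nsfw_score(image) > threshold
    # On detection, hand back a heavily blurred copy instead of the original.
    return image.filter(ImageFilter.GaussianBlur(radius=30)) if flagged else image
```

Because the company publishes its code, a determined user can run the model without any such wrapper, which is exactly the workaround Bishara acknowledges.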
Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.
TikTok said in March all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.
Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.
How AI predicts what you’ll buy
It’s a jungle out there, and few corners of it more so than the world of “smart” advertising.
There, marketing geniuses have developed increasingly sophisticated algorithms that take all the information gathered about you online or from your phone and piece together a customer profile that may include everything from your favorite pair of socks to your children’s names.
Analyzing current market practices, Wicked Reports explored how artificial intelligence, or AI, can be wielded to gather data and make sales predictions across the internet. Some of the techniques may be familiar, such as persistent cookies that turn your browser into a beacon for the websites you visit. Others are far more sophisticated, building a profile of your characteristics from what you’ve bought in the past, what you’ve put in your cart and abandoned, and what you’ve searched for. From there, advertisers can even model similar customers and market to them as well.
The digital advertising industry is expected to crest $20 billion in 2022. That’s far from enough to crack the top 10 biggest industries in the U.S., but it’s a substantial amount of money—particularly when compared to the big-ticket ad buys of the past in splashy magazine spreads. Companies today are more eager than ever to spend what it takes to bring in ideal customers.
Continue reading to discover some of the tactics AI uses to predict buying behaviors.

Compiling user movement across the web
You may know about cookies: tiny text files that websites deposit on your computer as a way to track online behavior.
When you visit websites from Europe, for example, a law there mandates that you click through a cookie agreement that’s much more transparent than in the U.S. There are session cookies lasting one browsing “session” (until you restart your computer or browser) and persistent cookies that stay until you delete them. Think of a cookie as a waving arm each time you visit the same website. Together, they form a heat map of how often and when you visit every website in your browsing history. They can even flag your presence to other websites as a way to combine your data.
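Mechanically, the only difference between the two kinds of cookie is an expiry attribute on the Set-Cookie header the server sends. A small illustration using Python’s standard http.cookies module; the cookie names and values are invented:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Session cookie: no Max-Age or Expires, so it dies with the browser session.
cookie["session_id"] = "abc123"

# Persistent cookie: Max-Age keeps it on disk until it expires or is deleted.
cookie["visitor_id"] = "u-98765"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # one year, in seconds

# Each entry becomes a Set-Cookie header sent with the server's response.
print(cookie.output())
# Set-Cookie: session_id=abc123
# Set-Cookie: visitor_id=u-98765; Max-Age=31536000
```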
Identifying user characteristics
User characteristics and something called demographic segmentation are key to how online advertising targets you. User characteristics are any of your qualities, from your gender and age to the car you drive and the pets you own. These characteristics feed the advertising concept of demographic segmentation, in which companies can buy lists of very specific groups of people.
Are you a 25-year-old white man with one dog, a full-time job as an auto tech, and an apartment rental in a “transitional” neighborhood? We have just the plaid shirt for you.
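Under the hood, buying a list of very specific people amounts to stacking filters on a customer table until only the target segment remains. A toy illustration with the pandas library; the columns and records are invented:

```python
import pandas as pd

# Invented sample records; real segment lists come from data brokers.
customers = pd.DataFrame([
    {"age": 25, "gender": "M", "pets": "dog", "job": "auto tech", "renter": True},
    {"age": 40, "gender": "F", "pets": "cat", "job": "teacher",   "renter": False},
    {"age": 26, "gender": "M", "pets": "dog", "job": "auto tech", "renter": True},
])

# Stack filters until only the desired segment remains.
segment = customers[
    (customers["age"].between(24, 28))
    & (customers["gender"] == "M")
    & (customers["pets"] == "dog")
    & customers["renter"]
]
print(len(segment), "people match; send them the plaid-shirt ad")
```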
Mapping user location data
If you’ve used GPS in your smartphone or any of the hyperlocal dating apps, you’ve leveraged location data to your advantage—at least for now.
How does your phone know where you are? Cellphone towers ping your phone when you’re nearby. Your home Wi-Fi network’s physical location has likely been logged in positioning databases, and the same is true of any Wi-Fi network you join during your errands, at school, at work, and so forth. Beyond that, GPS can pinpoint your phone to an alarmingly small area as you carry it around: not just your home, but one corner of one room.
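Once an app has a GPS fix, serving a hyperlocal ad is largely a distance check against a point of interest. Here is a sketch of that check using the standard haversine formula; the store coordinates, the phone’s position and the 1-kilometer radius are invented for illustration:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius is ~6371 km

store = (40.7580, -73.9855)  # invented point of interest
phone = (40.7527, -73.9772)  # reported GPS fix
if haversine_km(*store, *phone) <= 1.0:
    print("Phone is within 1 km; serve the hyperlocal ad")
```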
Matching new users to known customers who look and act in similar ways
Some items on this list are not very surprising, or we’re so used to being told about them that they no longer seem as insidious and scary as they once did. But many people would still be surprised by the lengths companies will go to in order to advertise to you more effectively. Your favorite clothing store, for example, might put together a complete data “picture” of you: what you’ve purchased from them, what size you shop for, where you live, and more. Then they can reverse engineer someone just like you and buy a demographically matching list.
The data can be filtered until exactly the desired customer base remains, and then the ads are bought.
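One common way to reverse engineer “someone just like you” is lookalike modeling: represent known customers as numeric feature vectors, then rank prospects by how close their vectors sit. A minimal sketch with scikit-learn’s NearestNeighbors; the features and figures are invented, and a production system would also normalize the features so no single one dominates the distance:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Invented feature vectors: [age, annual spend in dollars, visits per month]
known_customers = np.array([
    [25, 900.0, 4],
    [31, 450.0, 2],
    [27, 1100.0, 5],
])
prospects = np.array([
    [26, 950.0, 4],   # closely resembles the first known customer
    [60, 100.0, 1],
])

# Fit on the existing customer base, then score prospects by proximity.
model = NearestNeighbors(n_neighbors=1).fit(known_customers)
distances, _ = model.kneighbors(prospects)

# Smaller distance means a closer lookalike, hence a better ad target.
for prospect, dist in zip(prospects, distances.ravel()):
    print(prospect, "-> distance to nearest known customer:", round(float(dist), 1))
```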
IP address targeting by network connection
How much do you know about your IP address? Many of us are old enough to remember a time when connecting to the internet required knowing a specific IP address and typing it into our PC settings.
Today, the router in your home has a public IP address assigned by your internet provider, and providers hand those addresses out in regional blocks. Geolocation databases map those blocks to physical areas, so companies that buy or license that information can use your address to guess roughly where you live. Apple is among the tech companies pushing back on IP targeting of this nature by masking IP addresses in its Safari browser.
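The lookup behind IP-based targeting is conceptually simple: match an address against a table of blocks with known locations. A toy version with Python’s standard ipaddress module; the blocks are reserved documentation ranges and the place names are made up, while real geo-IP databases hold millions of entries:

```python
import ipaddress

# Invented block-to-place table; commercial geo-IP feeds are far larger.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "Springfield, USA",
    ipaddress.ip_network("198.51.100.0/24"): "Perth, Australia",
}

def locate(ip: str) -> str:
    """Return the place mapped to the block containing this address, if any."""
    addr = ipaddress.ip_address(ip)
    for network, place in GEO_TABLE.items():
        if addr in network:
            return place
    return "unknown"

print(locate("203.0.113.42"))  # -> Springfield, USA
```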
This story originally appeared on Wicked Reports and was produced and distributed in partnership with Stacker Studio.

