Trying to conceptualize the Internet is like trying to draw the universe. At any given moment, it is expanding, growing more complex, and mutating.
Unfortunately, within that expanse lies an unavoidable seedy underbelly: the shadows in which the worst of humanity skulks and multiplies. At the bottom of that deep, dark pit is the taboo subject of child sexual abuse (CSA) content - or child porn - an issue that is imperative to tackle.
While the Internet is not limited to the World Wide Web, it is the tool that many of us use to interact with it, and the two have become somewhat synonymous.
The World Wide Web was created by Tim Berners-Lee in 1989 while he was at the European Organization for Nuclear Research - better known as CERN - and opened to the public in 1993. It was designed as a “universal linked information system” and, while initially used by CERN and other academic and scientific institutions, became widespread once the protocol and code were made royalty-free.
People were able to make websites for whatever they wanted, and boy, did they take that literally.
While many turned to the Internet for quaint blogs or legitimate business offerings, any new avenue for committing crime will most certainly be exploited by those so inclined.
The Internet Watch Foundation (IWF) has dedicated itself to a particular niche: CSA content, and more specifically, its removal.
Keeping watch
“We were founded really because, once the Internet started to become available mainstream, there started to be reports of child sexual abuse material being found on there. At the time, the Internet service providers in the UK were the ones held responsible and told to do something about it, so the IWF was founded as a hotline that the public could report to,” Dan Sexton, CTO at IWF, tells DCD.
“Now, we’ve got more than 200 members, including all the big tech companies, ISPs in the UK, and increasingly from around the world. Our mission is an Internet free from child sexual abuse.”
Initially, IWF was a sort of ‘hotline.’ People would report content that they came across on the Internet, and IWF would then get it taken down.
“Around 10 years ago, we were getting reports of content and we would action those reports and see links on the website for other content, or additional images and videos, but we couldn’t do anything. We couldn’t work outside the remit of the content that was reported to us,” says Sexton.
“So we requested permission from the UK government and Crown Prosecution Service to allow us to start doing proactive searching, so now we can use the initial intelligence for further investigation.”
Once the IWF has found content, and has established that it is indeed imagery or videos of CSA, the foundation then goes about getting it removed.
“It’s finding where it is on the Internet. So if we find imagery or videos on a website, it’s working out who owns that website, what country it is registered in, and how we facilitate the removal of that.
“With the open web, you can track down website addresses, and IP addresses to regions and registers to work out the owner and administrators, and then alert authorities along the way that the content is illegal.”
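That open-web detective work can be illustrated with a minimal sketch. The snippet below is a simplified illustration, not IWF's actual tooling: it resolves a placeholder domain to an IP address and shells out to the standard whois tool (assumed to be installed on a Unix-like system) to pull registration and network records - the kind of data used to identify the owner, the hosting network, and the country involved.

```python
# Minimal sketch of open-web attribution: resolve a domain to an IP, then
# query WHOIS for the domain and the IP. Assumes the `whois` CLI is installed.
import socket
import subprocess

def lookup(domain: str) -> None:
    # DNS resolution: which server is currently answering for this domain?
    ip = socket.gethostbyname(domain)
    print(f"{domain} resolves to {ip}")

    # WHOIS for the domain (registrar, registrant country, abuse contact)
    # and for the IP (hosting network / region), so the right host or
    # registrar can be notified.
    for target in (domain, ip):
        result = subprocess.run(["whois", target], capture_output=True, text=True)
        print(result.stdout[:500])  # print only the first part of each record

if __name__ == "__main__":
    lookup("example.com")  # placeholder domain, not a real case
```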
Notably, the majority of the content IWF deals with is not hosted on websites dedicated to CSA (although these do play an important part in the problem). Instead, it is shared by people on websites that allow user-generated content.
These cases are a bit easier to handle - simply alerting the host and getting the image removed. When it is an entire website, “these are potentially monetized. Those ones we take much more seriously. Again, in those cases, it's tracking down where they are and getting them blocked and removed from the Internet. In this case, the entire site needs to get blocked, removed, and deregistered.”
Once the CSA content has been found, it has to be reviewed and tagged with detailed data about what is happening in the content, the age and race of the child, and any other descriptive data. Once this is done, the information can be turned into a “hash” - an identification number - which is kept in IWF’s massive database.
“In very simple terms, our hash list takes an image, and we run our algorithm against it, and it turns it into a number. A database of those numbers is provided to industry members and then every time an image or file is shared on a member’s website they can compare it to our database,” he explains.
“That means that once we have found an image, you can block it everywhere.”
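The sketch below illustrates that basic flow, with an important caveat: IWF and industry members use perceptual hashing (technologies such as Microsoft's PhotoDNA), which matches visually identical images even after resizing or re-encoding, whereas this simplified example uses an exact cryptographic hash. The hash value and blocklist here are purely illustrative.

```python
# Simplified sketch of hash-list matching. Real deployments use perceptual
# hashes (e.g. PhotoDNA) so visually identical copies still match after
# re-encoding; SHA-256 here only matches byte-identical files.
import hashlib

# Hypothetical hash list distributed to member platforms (hex digests).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(data: bytes) -> str:
    """Turn an uploaded file's bytes into a fixed-length number (hex string)."""
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes) -> bool:
    """Check an upload against the hash list before it is published."""
    return file_hash(data) in KNOWN_HASHES

# The bytes b"test" hash to the digest listed above, so they are "blocked".
print(should_block(b"test"))        # True
print(should_block(b"new upload"))  # False
```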
The value of this work goes without saying. But because IWF also works with the police and the Child Abuse Image Database (CAID), the content must also be stored.
Storing unsavory content
Understandably, IWF is not comfortable sharing the exact infrastructure underpinning its storage systems. This data is, after all, very personal - or as Sexton says - “It’s about as secure and sensitive as any data could be.”
IWF’s infrastructure is seemingly separate from CAID, with Sexton noting - “[CAID] is a unique facility where all the law enforcement agencies in the UK can find content. It all gets centrally uploaded to help law enforcement in their investigations… All of our findings are shared with CAID.”
He adds: “We have this really great relationship with CAID where we are contributing to CAID to help law enforcement, and then that law enforcement data is being shared with the IWF.”
Sexton explains that, while he can't go into detail about the “exact infrastructure that we use,” IWF does keep everything within a “very secure network.”
“We work on the Internet, but we have a very robust pipeline to ensure that any content we assess is passed through various stages and kept in a secure area. It’s a really big concern. This is highly illegal content, and it’s also highly sensitive content.”
Interestingly, while this might be expected to mean only on-premise, IWF’s attitude towards this seems to have shifted. During a 2022 interview with Tech Monitor, Sexton said that the foundation couldn’t really embrace cloud computing - “The really sensitive stuff is all kept on-premise, and we have a dedicated, secure air-gap network where all the data and images are stored.”
However, he seems more open to the concept of cloud computing three years later.
“As far as on-premise and the cloud, which is an ongoing conversation within the community, there have always been concerns about the security of the cloud but I think as an industry we have sort of moved past that,” he says.
“It’s more and more accepted that secure public cloud is a thing that is very much being used - I mean it's being used by governments and organizations around the world. Our stance is that it needs to be very secure, so at the moment that sees a lot of on-premise being used as a solution, but that could well change. It certainly seems that the public cloud is being used for more and more sensitive information, and there are definitely advantages to the cloud such as leveraging greater processing power.”
CAID itself is also nearing the end of its hosting agreement. In a tender published in May 2024, CAID noted that its current hosting contract would expire in March 2026, and was looking to procure these services.
While details here are similarly limited, the requirements are for “hosted and managed scalable infrastructure,” to be “hosted in UK-based data centers with Police Assured Secured Facility status, including the use of Tier 1 public cloud providers,” and that it should be “Separated from other tenants so that underlying infrastructure providers have no access to the CAID applications or data.”
Most importantly, the tender notes that the previous legal opinion, last updated in 2020, said that “CAID data should not be held on cloud-hosted infrastructure and should remain within Police-owned data centers.”
“Since this advice was given there has been considerable development in both cloud hosting technology and the needs and approaches of law enforcement efforts to combat child sexual abuse and exploitation. As such, this opinion has been revisited and it is now considered that removing these restrictions can be compliant with data protection requirements.”
While both the Police and IWF are working to remove CSA content from the Internet, some of that content remains untraceable.
Sexton explains to DCD: “We can do this on the open web, but we can’t do this on the dark web or Tor networks. They are designed to make it anonymous, and that's why you see all these horrendous reports of illegal content on the dark web. We can find that content, but we can’t see who runs the website or where it is located. It's much, much harder to get that removed.”
Those websites only accessible via the dark web, while keeping their location hidden, are still being physically stored somewhere, and by someone.
The web is dark and deep
The term Bulletproof Hosting, according to Netscout’s director of threat intelligence Richard Hummel, was previously used to describe networks that were “very clearly criminal.”
“The point of these networks was that anyone subscribing could randomize their IP addresses and domain names. So, as a security professional, if you were trying to track command and control or track a specific adversary, you would have to know the algorithm in order to figure out what the IP address would be next,” says Hummel, adding, however, that the “nomenclature has morphed over time.”
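A toy version of the kind of algorithm Hummel describes - often called a domain generation algorithm (DGA) - is sketched below. It is not modeled on any real criminal network; it simply shows the principle: client and server derive the same ever-changing domain from a shared seed, so a defender who has not reversed the algorithm cannot predict tomorrow's address.

```python
# Toy domain generation algorithm (DGA) sketch. Real criminal DGAs differ,
# but the idea is the same: both ends derive the same rotating domain from a
# shared seed and the date, so defenders must reverse the algorithm to
# predict (and pre-register or block) upcoming domains.
import hashlib
from datetime import date, timedelta

SEED = "example-shared-secret"  # hypothetical seed baked into the client

def domain_for(day: date) -> str:
    digest = hashlib.md5(f"{SEED}-{day.isoformat()}".encode()).hexdigest()
    return f"{digest[:12]}.example"  # 12 hex chars plus a placeholder TLD

today = date.today()
for offset in range(3):  # today's domain plus the next two days
    print(domain_for(today + timedelta(days=offset)))
```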
Bulletproof Hosting is now more typically used to refer to a provider that is very resilient.
“They are resistant to takedowns, and they don’t typically adhere to law enforcement requests. They turn a blind eye to what their users are doing.”
The actual service of Bulletproof Hosting is not illegal, and many that fall into the category are something of a grey area. “These networks will sometimes have legitimate purposes, and then turn the other way when users do certain things. Then there are the Bulletproof networks that very clearly know that their user is doing bad stuff and they just don’t care.”
This is reiterated by Gerald Beuchelt, CISO of cybersecurity firm Acronis, who tells DCD that Bulletproof Hosters typically don’t subject their customers to the same level of scrutiny that other, more above-board, providers might.
“These providers are popping up all over the world, and depending on who the users are, might be in ‘unfriendly’ countries,” says Beuchelt, offering Russia and China as examples.
“There have also been quite a few Bulletproof Hosters in Western countries. The Netherlands is particularly known for having a regulatory regime that enables Bulletproof Hosters to exist,” he says, recalling a discussion at a conference in the Netherlands in the last couple of years about how such providers could be de-anonymized.
Hummel adds that one of the ways to tell if a provider is a Bulletproof Hosting Provider (BPH) is if its website states that it is not one - “it's kind of a giveaway. If they specifically say 'we aren’t this,' they probably are.”
Beuchelt affirms that “they are probably not advertising it [that they are BPH] on the front page of the Wall Street Journal, but if you go on the darknet, you will find marketplaces where it is publicly advertised.”
Some Bulletproof networks will use legitimate cloud hosting accounts and “tumble their traffic through them” instead of operating their own infrastructure, though Hummel notes that in the case of CSA, this is unlikely. “That’s probably going to be very much ‘underground.'”
There have been some cases of CSA content, and the websites dedicated to it, being successfully taken down, even on the dark web.
Freedom Hosting, which was operated by Eric Eoin Marques and, according to a Krebs on Security report, had a reputation as being a “safe haven” for hosting CSA content, is one of them. Marques, described in an FBI warrant as “the largest facilitator of child porn on the planet” was arrested in Ireland in 2013, and in 2021 was sentenced to 27 years imprisonment.
In 2019, a “CyberBunker” facility in Traben-Trarbach, western Germany, was raided by more than 600 police officers, eventually leading to eight convictions. Among the illegal services allegedly hosted at the German data center were Cannabis Road, Fraudsters, Flugsvamp, Flight Vamp 2.0, orangechemicals, and the world's second-largest narcotics marketplace, Wall Street Market, and according to reports - CSA sites.
Built by the West German military in the 1970s, the site was used by the Bundeswehr’s meteorological division until 2012. A year later, it was sold to Herman-Johan Xennt, who told locals he would build a web hosting business there. In total, around 200 servers were seized.
Perhaps the most famous is the “Welcome to Video” case. Welcome to Video was a South Korean website owned and operated by Son Jung-woo. Son hosted the site on servers in his home in Chungchongnam-do, South Korea, and between 2015 and 2018 distributed around 220,000 pieces of CSA content, which were available for purchase with cryptocurrency.
The cryptocurrency transactions were found by the US Internal Revenue Service's Criminal Investigations department, which asked Homeland Security to investigate. The servers hosting Welcome to Video had the IP address embedded in the source code, enabling the Korean National Police Agency (KNPA) to arrest Son.
In total, eight terabytes of CSA content were seized, 45 percent of which had never been seen by law enforcement before, and 337 site users were arrested.
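Operational slips like that hard-coded IP address are exactly the kind of detail investigators look for. As a purely illustrative sketch (the HTML below is invented and uses a reserved documentation IP range, not material from the actual site), flagging literal IPv4 addresses in a page's source can be as simple as a regular expression:

```python
# Sketch: flag IPv4 addresses hard-coded in a page's HTML - the kind of
# operational mistake reported in the Welcome to Video case.
import re

html = '<html><body><img src="http://192.0.2.10/thumbs/1.jpg"></body></html>'

# Loose IPv4 pattern; a real tool would also validate that each octet is <= 255.
ipv4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
print(ipv4.findall(html))  # ['192.0.2.10']
```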
Since the publication of DCD’s magazine, the Kidflix platform was shut down by State Criminal Police of Bavaria (Bayerisches Landeskriminalamt) and the Bavarian Central Office for the Prosecution of Cybercrime (ZCB) with help from Europol after an investigation starting in 2022.
Kidflix was described as one of the largest paedophile platforms in the world, and its closure led to 79 arrests, 1,393 suspect identifications, and the seizure of more than 3,000 electronic devices.
Actually breaking down the infrastructure and arresting those behind the hosting providers is crucial to making an impact. Hummel explains that, otherwise, these services are very resilient, and will simply move to another network and “continue as if nothing happened.”
“If we succeed at actually confiscating the infrastructure, which has happened before, and shut down servers - to actually get a Bulletproof Hosting Provider shut down, maybe you could make some arrests of the people that actually established that service and then I think you could start to feel some of the effects of that,” he says, adding “But it’s very problematic. It’s very difficult to do, and I imagine law enforcement has just as much frustration with this.”
A Herculean battle
In Greek mythology, the Hydra of Lerna is a many-headed serpentine lake monster, and in some iterations, chopping its head off would see the Hydra grow two more.
While the takedown of sites hosting CSA cannot be described in quite the same terms, the issue is ramping up. The Internet continues to expand - like the universe - and attempting to monitor it is a never-ending challenge.
As IWF’s Sexton puts it: “Right now, the Internet is so big that it’s sort of anonymity with obscurity.”
While some emerging (and already emerged) technologies such as AI can play a role in assisting those working on the side of the light - for example, the IWF has tested using AI for triage when assessing websites with thousands of images, and AI can be trained for content moderation by industry and others - the proliferation of AI has also added to the problem.
AI-generated content has now also entered the scene. From a legal standpoint, it is treated the same as other CSA content. Just because an AI created it does not mean that it is permitted - at least in the UK, where IWF primarily operates.
“The legislation in the UK is robust enough to cover both real material, photo-realistic synthetic content, or sheerly synthetic content. The problem it does create is one of quantity. Previously, to create CSA, it would require someone to have access to a child and conduct abuse.
“Then with the rise of the Internet we also saw an increase in self-generated content. Now, AI has the ability to create it without any contact with a child at all. People now have effectively an infinite ability to generate this content.”
Sexton adds: “From the perspective of law enforcement, their job isn't just to find content and remove it, it's also safeguarding children and arresting those who are abusing them and distributing that content. This is much harder if you can't tell whether a child is real, and there is a real risk that time will be spent chasing synthetic children that don’t exist, or indeed not following up on real abuse because they think it looks like, or appears to be, AI-generated.”
Ultimately, while the Hydra isn’t being killed, it is still important to keep chopping its heads off no matter how many new ones grow back.
Sexton remains optimistic about the IWF’s work: “It's a continual battle of adding friction and making it harder. And if it's too hard, the hope is that they will just stop doing it.”