nsfw ai systems also pose privacy risks around the collection, processing, and storage of personal data. To detect pornographic material accurately, these models depend on terabytes of data, including pictures, videos, text, and metadata, which may contain sensitive personal information. In the absence of appropriate legal frameworks, over 80% of content moderation systems (nsfw ai models among them) collect user information such as IP addresses, browsing history, and device details [1], according to a 2022 report by the Electronic Frontier Foundation [2]. The threat of privacy breaches is significant, especially when such data is misused or accessed by unauthorized parties.
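One way to shrink this exposure is data minimization: the classifier receives only what it needs to label the content, never network or device identifiers. Below is a minimal Python sketch of the idea; the request fields and the `build_moderation_request` helper are illustrative assumptions, not any platform's real API.

```python
import hashlib

# A minimal data-minimization sketch (illustrative, not a real API):
# the moderation request carries the content plus a pseudonymous ID,
# and nothing else.
def build_moderation_request(image_bytes: bytes, user_id: str) -> dict:
    """Send the classifier only what it needs to label the content."""
    return {
        # Pseudonymous subject ID: lets a decision be appealed later
        # without storing the raw account identifier next to the content.
        "subject": hashlib.sha256(user_id.encode()).hexdigest(),
        "payload": image_bytes,
        # Deliberately omitted: IP address, browsing history, and
        # device fingerprint -- none are needed to classify an image.
    }

# Usage: the request contains two fields, no matter what else the
# platform happens to know about the user.
req = build_moderation_request(b"\x89PNG...", "user-42")
print(sorted(req.keys()))  # ['payload', 'subject']
```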
nsfw ai systems are also likely to process user-generated content that is personal and private. For example, nsfw ai systems deployed on social networks and instant messaging platforms may sift through private messages, photos, or even videos without users realizing it. In 2021, a case study of Facebook’s content moderation system showed that automated systems could access private conversations. Even if these platforms do not peer directly into your personal life, it raises the question of how far they really are from that information. Data-privacy experts such as Dr. Julia Angwin, editor-in-chief at The Markup, argue that using AI to monitor content leaves users vulnerable to unnecessary surveillance with little to no regulation.
The data retention practices tied to nsfw ai present another privacy concern. A 2020 report from Ireland's Data Protection Commission found that some AI systems retained data for very long periods, even indefinitely, as a way of improving their algorithms. Holding personal data for that long creates opportunities for misuse of private information. It also raises the risk of data leaks, as in the 2021 hack of a massive AI dataset of private information collected by content moderation systems; some of the 300 million stolen images and videos were private and explicit in nature.
One proposed way to curtail these risks is tighter privacy law. The General Data Protection Regulation (GDPR), which establishes rules for processing personal data within the European Union, requires that personal data be kept in a form that permits identification of data subjects for no longer than is necessary for the purposes for which the data are processed. Unfortunately, a 2023 survey by the International Association of Privacy Professionals (IAPP) found that only one in four nsfw ai systems comply with data retention and user consent requirements, among other obligations, which suggests companies are not taking user privacy very seriously.
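To make that storage-limitation principle concrete, here is a short Python sketch of purpose-bound retention: each kind of record carries a retention window, and anything older is purged. The record kinds, field names, and windows are assumptions chosen for illustration, not values from any regulation or product.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows per record kind (assumed values).
# A record kind with no declared purpose gets no window and is dropped.
RETENTION = {
    "moderation_log": timedelta(days=90),
    "training_sample": timedelta(days=30),
}

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records still inside their declared retention window."""
    now = datetime.now(timezone.utc)
    return [
        r for r in records
        if r["kind"] in RETENTION
        and now - r["created_at"] <= RETENTION[r["kind"]]
    ]

# Usage: a 31-day-old training sample is dropped, a fresh log survives.
records = [
    {"kind": "training_sample",
     "created_at": datetime.now(timezone.utc) - timedelta(days=31)},
    {"kind": "moderation_log",
     "created_at": datetime.now(timezone.utc) - timedelta(days=1)},
]
print(purge_expired(records))  # only the moderation_log entry remains
```

The design choice that matters here is the default: data with no declared purpose is deleted rather than kept "just in case," which is the opposite of the indefinite retention the Irish report described.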
Some experts suggest that growing reliance on content moderation systems like nsfw ai will require organizations to adopt a privacy-first approach built on express user consent. Twitter, for instance, adopted a user-consent model in 2022, requiring users to opt in before their data could be processed by its nsfw ai. While this was considered a step in the right direction, more work is needed to make nsfw ai systems genuinely privacy-preserving.
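An opt-in model of this kind can be enforced in code by refusing to process content for any user who has not explicitly consented. The sketch below is a hypothetical consent gate, not Twitter's actual implementation; the `consent_store` and `classify` parameters are assumptions standing in for a real consent database and a real classifier.

```python
from typing import Callable

class ConsentError(Exception):
    """Raised when processing is attempted without an explicit opt-in."""

def process_if_consented(
    consent_store: dict[str, bool],
    user_id: str,
    content: bytes,
    classify: Callable[[bytes], str],
) -> str:
    """Run the classifier only for users who have opted in.

    The default for an unknown user is refusal, not processing,
    so consent must be recorded before any data is touched.
    """
    if not consent_store.get(user_id, False):
        raise ConsentError(f"user {user_id!r} has not opted in")
    return classify(content)

# Usage: only the opted-in user's content is ever classified.
store = {"alice": True}
print(process_if_consented(store, "alice", b"...", lambda _: "safe"))
```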
Check out nsfw ai for more details about how it works and the privacy features it includes!