TL;DR:
Cosplan now uses the Open-NSFW AI model to automatically assess the safety of public content. Content is scored 0–100%, and items above a threshold are reviewed by humans before being published. The system helps keep Cosplan a safe space. All data stays within Cosplan’s infrastructure; nothing is shared with third parties.
Introduction:
In today’s social media landscape, ensuring user safety, especially for younger users, is a top priority for Cosplan. Our platform must remain a safe and welcoming space for everyone, while still allowing diverse content to be shared.
Content Classification on Cosplan:
To better manage published content, Cosplan uses a three-level categorization system:
- Allowed and Safe: content with no issues, fully appropriate for all users
- Allowed but Not Safe: content that may include blood, weapons, or suggestive material
- Prohibited: gore, explicit nudity, off-topic advertisements, and other inappropriate content
Each user can choose to filter their feed to view only “Allowed and Safe” content or also include “Not Safe” content.
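The feed-filtering rule described above can be sketched in a few lines. This is a minimal illustration, not Cosplan’s actual implementation; the class and function names (`ContentCategory`, `visible_in_feed`) are hypothetical:

```python
from enum import Enum

class ContentCategory(Enum):
    """The three content levels described above (hypothetical names)."""
    ALLOWED_SAFE = "allowed_safe"
    ALLOWED_NOT_SAFE = "allowed_not_safe"
    PROHIBITED = "prohibited"

def visible_in_feed(category: ContentCategory, include_not_safe: bool) -> bool:
    """Decide whether an item appears in a user's feed.

    Prohibited content is never shown; "Allowed but Not Safe" content
    is shown only when the user has opted in to seeing it.
    """
    if category is ContentCategory.PROHIBITED:
        return False
    if category is ContentCategory.ALLOWED_NOT_SAFE:
        return include_not_safe
    return True
```

With this rule, a default feed (opt-in disabled) only ever shows “Allowed and Safe” items, and prohibited content is excluded regardless of user preference.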
The Need for Automatic Verification:
In this context, Cosplan needed a tool to automatically assess the “safety” level of content before it is made public. After extensive research, we decided to collaborate with open-nsfw.com and integrate the Open-NSFW prediction model, originally developed by Yahoo.

How the Model Works:
The Open-NSFW model assigns each piece of content a score from 0 to 100%, indicating how likely it is to contain unsafe material. Content whose score exceeds a certain threshold is flagged for human moderation, and a moderator then decides whether it can be published on Cosplan.
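The scoring-and-routing step can be illustrated with a short sketch. The threshold value and function name (`route_content`) are hypothetical, chosen only for illustration; the actual threshold is a tuning decision made by the moderation team:

```python
# Hypothetical review threshold on the model's 0.0-1.0 score scale
# (equivalent to 50% on the 0-100% scale described above).
REVIEW_THRESHOLD = 0.5

def route_content(nsfw_score: float) -> str:
    """Route an item based on its model score.

    Scores above the threshold are queued for human moderation;
    everything else is published automatically.
    """
    if not 0.0 <= nsfw_score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    return "human_review" if nsfw_score > REVIEW_THRESHOLD else "auto_publish"
```

For example, an item scored at 0.9 would be held for a moderator, while one scored at 0.1 would go live without manual intervention.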
Privacy:
The prediction model is fully installed within Cosplan’s infrastructure and operates in a sealed environment. No user data is shared with third parties, ensuring complete privacy.
Conclusion:
Through this collaboration with Open-NSFW, Cosplan takes another step toward providing a safe and enjoyable environment for all users, while still allowing controlled freedom of expression and content sharing.