Manage graphic content

Media on Check can be graphic, disturbing, or otherwise sensitive. To protect your mental health, Check allows you to add content warnings to such media, either manually or automatically, preventing Check users from inadvertently viewing disturbing content.

Manually add a content warning to media

To add a content warning to media:

  1. On the media’s item page, click the crossed-out eye icon overlaid on the sensitive media.

  2. Click Enable content warning.

  3. Select a sensitivity category: violence, nudity, medical, or a custom category.

  4. Click Save.

Automatically detect sensitive media and add a content warning

To automatically detect and flag sensitive media, use Rules:

  1. Navigate to the Rules settings page.

  2. Click New Rule.

  3. Name the rule.

  4. As your If condition, choose Item has been detected as.

  5. Then select the flag to detect. Check can detect adult content, medical conditions or procedures, and violence.

Check uses the Google Cloud Vision API to detect three categories of sensitive content:

  • Adult: Sexual intercourse, nudity, and adult content in cartoon images, like hentai (this filter will generally not flag people in bathing suits).

  • Medical: Explicit images of surgery, diseases, or body parts. This filter primarily identifies graphic photographs of open wounds, genital close-ups, and egregious disease symptoms.

  • Violence: Depictions of killing, shooting, blood, and gore.

To learn more about what content we can and can’t detect, read this article on Google Cloud Vision’s inappropriate content filters.
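To see what this detection looks like in practice, below is a minimal sketch of a SafeSearch request against the Google Cloud Vision API using its Python client. It only illustrates the underlying signal Check's detection is built on; it is not Check's own code, and the file name is a placeholder.

    # Minimal SafeSearch sketch; "submission.jpg" is a placeholder file.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("submission.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    annotation = client.safe_search_detection(image=image).safe_search_annotation

    # Each category comes back as a Likelihood enum,
    # ranging from VERY_UNLIKELY to VERY_LIKELY.
    for category in ("adult", "medical", "violence"):
        likelihood = getattr(annotation, category)
        print(category, vision.Likelihood(likelihood).name)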

  6. Choose a likelihood of Low, Medium, or High, which sets how sensitive the rule is.

A High likelihood causes more content to be flagged, while a Low likelihood causes less (the sketch after these steps shows one possible threshold mapping).

  7. As your Then action, choose Add content warning cover. Consider also adding the Ban submitter Then action.

  8. Click Save.
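For reference, here is one plausible way a rule's Low, Medium, or High setting could map onto the Vision API's likelihood scale, as mentioned in step 6. The exact thresholds Check applies are an assumption in this sketch, not documented behavior.

    # Hedged sketch: possible mapping of rule sensitivity to Vision likelihoods.
    # The actual thresholds Check uses are an assumption, not documented here.
    from google.cloud import vision

    # A higher sensitivity accepts weaker likelihoods, so more media is covered.
    THRESHOLDS = {
        "High": vision.Likelihood.POSSIBLE,     # assumed: flag anything plausible
        "Medium": vision.Likelihood.LIKELY,
        "Low": vision.Likelihood.VERY_LIKELY,   # assumed: flag near-certain hits only
    }

    def should_add_content_warning(annotation, sensitivity="Medium"):
        """Return True if any monitored category meets the rule's threshold."""
        threshold = THRESHOLDS[sensitivity]
        return any(
            getattr(annotation, category) >= threshold
            for category in ("adult", "medical", "violence")
        )

In this reading, a High setting accepts weaker likelihoods, which is why it flags more content than a Low setting.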

Notes:

  • Rules will not apply to media added before the rule was created.

  • Automated content warning covers are also applied when a tipline user is blocked for sending excessive messages on a tipline; media from blocked users is flagged as SPAM.
