Instagram has added a new way for users to manage their experience in the app with its ‘Sensitive Content Control’, which provides three options to restrict the content displayed in the app.
As you can see here, new sensitive content control options are now available in the latest version of the app.
If you go to Settings > Account > Sensitive Content Control, you’ll now be able to choose from these options:
- Allow – You may see more photos or videos that could be upsetting or offensive
- Limit (default) – You may see some photos or videos that could be upsetting or offensive
- Limit Even More – You may see fewer photos or videos that could be upsetting or offensive
“Limit” is the default setting for all users, while only those over the age of 18 are able to select the “Allow” option, which removes all restrictions on the content displayed.
To be clear, Instagram notes that it already has various rules and processes in place to protect users from offensive content, with specific parameters for both your regular feed/Stories and for Explore.
“We do not allow hate speech, harassment or other content that may pose a risk of harm to people. We also have rules about what content we show you in places like Explore; we call these our Recommendation Guidelines. These guidelines are designed to ensure that we don’t show you sensitive content from accounts you don’t follow. You can think of sensitive content as posts that do not necessarily violate our policies, but may bother some people – such as posts that may be sexually suggestive or violent.”
So it’s about providing an extra layer of protection for users who would rather not see this type of material, with Instagram’s systems now able to automatically detect certain types of content and keep it out of view for those who tighten their sensitivity settings.
Instagram’s systems have improved over time. As early as 2019, Instagram demonstrated how its image recognition systems were increasingly able to recognize content that came close to violating its community guidelines without quite crossing the line.
This borderline content often has its reach reduced as part of Instagram’s efforts to limit users’ exposure to offensive in-app content – but to ensure this doesn’t unfairly penalize creators, Instagram needs its systems to detect these elements in posts with a high level of accuracy.
To that end, Instagram’s content moderators flag borderline content as part of their regular work, which Instagram then uses to train its AI systems. Over time, this process has made the platform better at limiting the reach of borderline material, and it has now progressed to the point where Instagram can offer users more control options based on this improved system understanding.
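Instagram hasn’t published how this works internally, but the general idea – a classifier assigns each post a sensitivity score, and the user’s chosen control level sets the cutoff for what gets recommended – can be sketched as follows. The threshold values, field names and function here are all hypothetical, purely for illustration:

```python
# Illustrative sketch only; Instagram has not disclosed its actual implementation.
# A model assigns each post a sensitivity score between 0 and 1, and the user's
# chosen control level maps to a cutoff for what is shown in recommendations.

# Hypothetical cutoffs; the real system's values are unknown.
THRESHOLDS = {
    "allow": 0.9,            # only the most clearly sensitive posts are hidden
    "limit": 0.6,            # default: some borderline posts are hidden
    "limit_even_more": 0.3,  # most borderline posts are hidden
}

def filter_recommendations(posts, setting="limit"):
    """Return posts whose model-assigned sensitivity score falls below
    the cutoff for the user's chosen control level."""
    cutoff = THRESHOLDS[setting]
    return [p for p in posts if p["sensitivity_score"] < cutoff]

posts = [
    {"id": 1, "sensitivity_score": 0.1},
    {"id": 2, "sensitivity_score": 0.5},
    {"id": 3, "sensitivity_score": 0.8},
]

print([p["id"] for p in filter_recommendations(posts, "limit")])            # [1, 2]
print([p["id"] for p in filter_recommendations(posts, "limit_even_more")])  # [1]
```

The point of the sketch is simply that the three user-facing options don’t require three separate models – one score plus a per-user threshold is enough.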
It won’t always get it right. As with any AI system, there will be false positives, but if you want to avoid this type of material, it could be an easy way to limit your exposure and improve your in-app experience.
And given the younger skew of Instagram’s audience, and the reach it now sees, it’s important for Instagram to protect its users wherever it can – though it’s also an element worth noting in your marketing approach, especially if you like to push boundaries, or if your visuals could potentially be mistakenly identified as offensive based on the above categories.