AP: Bridgy ignores sensitive content label #1100
Comments
Great feature request, like #1099. Keyword scanning and detection is less likely; that takes some NLP work and word-list curation to do well, not to mention localization. But just translating the …
Probably fine, as long as CWs continue to mean the images aren't bridged. AFAIK most people do apply a CW if there's actual nudity beyond suggestiveness, so translating …
Closing this since it's working as per #1099 (comment). It's unfortunate that Bluesky specifies "non-sexual nudity" in the UI, but this seems as good as it gets.
Hm, actually quite interesting on that end: Bluesky's UI says 'Adult Content', 'Non-sexual Nudity', 'Adult Content' for the three levels (see here), so I assume Bridgy Fed now labels at the 'Nudity' level. I'm going to file an issue there to get it differentiated and also ask which level should be applied if a service can't determine it accurately. (Edit: Added links to those discussions.) |
Ah I see, the warning is "Graphic Media" now due to snarfed/granary@48e57b7. I think that works, the labeller's description is "Explicit or potentially disturbing media." so it's quite general! |
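For anyone following along, the conversion that commit implies can be sketched roughly like this. To be clear, this is not granary's actual code; the function name and record shape here are my own illustration of mapping an AS2 `sensitive` flag to a Bluesky self-label:

```python
# Hypothetical sketch: map an AS2 object's "sensitive" flag to a Bluesky
# self-label record. NOT granary's actual implementation; the function
# name and structure are illustrative only.

def self_labels_for(as2_obj):
    """Return a self-labels record if the AS2 object is marked sensitive."""
    if not as2_obj.get('sensitive'):
        return None
    # Per snarfed/granary@48e57b7, the "graphic-media" label value is used,
    # whose description is "Explicit or potentially disturbing media."
    return {
        '$type': 'com.atproto.label.defs#selfLabels',
        'values': [{'val': 'graphic-media'}],
    }
```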
This is the inverse of #1099.
I made this post self-labelled as sensitive: https://tiggi.es/@Qazm/112546008212972221
[Image: The post in question.]
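The relevant part of the post's ActivityStreams 2.0 representation looks roughly like this. This is an illustrative reconstruction, not the actual JSON of that post; the media URL is a placeholder:

```python
# Illustrative sketch of a Mastodon post with the sensitive flag, as an
# AS2 object. Values are placeholders, not the real post's JSON.
sensitive_note = {
    '@context': 'https://www.w3.org/ns/activitystreams',
    'type': 'Note',
    'id': 'https://tiggi.es/users/Qazm/statuses/112546008212972221',
    'sensitive': True,   # the flag Bridgy was ignoring
    'summary': None,     # no CW text, just the media sensitivity flag
    'attachment': [{
        'type': 'Document',
        'mediaType': 'image/png',
        'url': 'https://example.com/media/image.png',  # placeholder
    }],
}
```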
This was bridged to Bluesky with no indication of sensitive content:
Which of course means that the image isn't hidden by default even though I've set all default labels to "Warn".
What I expected:
The post should have a sensitive content label applied like on this native Bsky post I just made:
[Image: The post in question, as a native Bluesky post with a sensitive content label.]
(Note: You probably should have nudity set to at least "Warn" if you look at my account beyond that page.)
This results in hiding the image or post depending on the viewer's settings:
[Image: The post in question again, this time with the image hidden behind a collapsible that reads "Adult Content".]
You probably should also do a keyword scan and apply higher categories of labels (i.e., "Nudity" or "Porn" instead of just "Sensitive") if the post mentions "nudity" or "porn" or related keywords anywhere in its summary, content, or hashtags.
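A minimal sketch of what I mean, with the caveat that the keyword lists and label choices here are placeholders I made up, not a vetted word list:

```python
import re

# Illustrative only: escalate from the generic label to a more specific
# Bluesky label value based on keywords in the post's summary, content,
# or hashtags. The patterns and label mapping are assumptions.
KEYWORD_LABELS = [
    (re.compile(r'\b(porn|nsfw)\b', re.I), 'porn'),
    (re.compile(r'\b(nude|nudity|naked)\b', re.I), 'nudity'),
]

def label_for(text, default='graphic-media'):
    """Return the first matching label value, else the generic default."""
    for pattern, label in KEYWORD_LABELS:
        if pattern.search(text):
            return label
    return default
```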
You may also want to ask the Bluesky devs to exclude Bridgy from their classifier training, since AP users may mark media as sensitive for other reasons, like violence, which I don't think can be self-labelled on Bsky yet.