Seems to be a keyword thing. I used "ugly" and got this.
Thanks. Still, quite weird.
This usually happens when your post includes something that can be interpreted as personally identifiable information (IP addresses, phone numbers, etc.) by a loosely defined auto-flagging rule (i.e. a regular expression).
These flags have to be cleared by a forum moderator (not to be confused with a community moderator), but they usually don't take too long to do so.
Yeah. Happens, disconcerting, but not often enough to be a real prob.
Sorry for the delay in reply here.
We use Discourse, which has a number of auto-flagging systems in place, including:
- a somewhat black-box anti-spam ML program that Discourse supports
- flags based on admin settings, i.e. scripts that get triggered by typical spammy behavior (e.g. a post that was obviously copy-pasted, or a new user creating too many new topics)
- watched words and regex triggers (e.g. curse words, private emails, private IP addresses, likely passwords)
- and posts flagged by users of a high enough trust level (which is incredibly helpful; flags are quite welcome).
For any of these triggers, a human needs to review the flag to allow the post to be visible.
If you're curious, the biggest source of false positives seems to be IP addresses, which are easily confused with package version numbers and with text that commonly appears in error messages and logs. We have had a few situations where users shared private IPs linked to sensitive data, and so decided to be better safe than sorry.
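To illustrate that confusion, here is a minimal sketch (a hypothetical rule, not Discourse's actual watched-words pattern): a naive IPv4 regex also matches four-part package version numbers.

```python
import re

# A naive IPv4 pattern: four dot-separated groups of 1-3 digits.
# Hypothetical example only, not the actual flagging rule.
ip_like = re.compile(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b")

samples = [
    "server at 10.0.0.5 refused the connection",  # a real private IP
    "upgraded to package 1.2.10.400",             # just a version number
]
for s in samples:
    # Both lines match, even though only the first contains an IP.
    print(bool(ip_like.search(s)), "->", s)
```

Distinguishing the two cases reliably would need context beyond the pattern itself, which is why a human review step catches these false positives.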
Your recent post in this thread (convert email to lower to get count) was flagged because it listed several email addresses. I saw that you had used fake emails, and so approved the post.
Why do we watch out for emails, IPs, and other personally identifying information?
- Often (way too often), users who have questions dealing with emails don't anonymize the email addresses in their reprex data.
- Additionally, we have a policy that users should not list their email in a reply post (only via DM or on their profile page), since it is sometimes "impossible" to delete a user's full activity. That is, if a user interacted with many threads and wiki pages, those interactions may not be deletable without deleting everything they touched. (Instead, we anonymize the user; GitHub works similarly with user PRs and issues.) Thus we ask users to be careful where they share any personally identifying information.
Ugly isn't a trigger... unless Discourse's ML thing goes insane!
I think it was this, validate name with email, which has those email addresses.
Sorry this is annoying. We actually do get a fair amount of spam posts, and those auto-flags do alert me to a number of issues that I address manually.
And again, I really appreciate all your individual flags as well.
I ran a test reply to you before; it seems even email patterns with invalid TLDs cause a flag.
Is there a way to update the email pattern to flag only valid TLDs, so that when users have questions involving data with email addresses, we can easily construct reprexes to help them without each post needing approval?
e.g. firstname.lastname@example.org looks like an email address, but the domain is reserved for examples and should be allowable.
"Watched words" (which are what trigger approval) are regular expressions, so yes: if there's a regexp that accounts for the >1,300 valid TLDs and filters only those out, that's doable.
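As a rough sketch of that idea (a hypothetical rule, not Discourse's actual one; a real allowlist would hold all >1,300 IANA TLDs rather than the tiny subset shown here), an email regex can be anchored to a TLD allowlist so that addresses using reserved names never trigger a flag:

```python
import re

# Illustrative subset only; a real rule would use the full IANA TLD list.
VALID_TLDS = ["com", "org", "net", "edu", "io"]

# Match an email-like pattern only when it ends in an allowlisted TLD.
email_if_valid_tld = re.compile(
    r"\b[\w.+-]+@[\w-]+(?:\.[\w-]+)*\.(?:" + "|".join(VALID_TLDS) + r")\b",
    re.IGNORECASE,
)

print(bool(email_if_valid_tld.search("jane.doe@company.com")))     # True: flagged
print(bool(email_if_valid_tld.search("jane.doe@reprex.invalid")))  # False: allowed
```

Reprex authors could then use addresses like user@reprex.invalid without tripping the filter, since RFC 2606 reserves .test, .example, .invalid, and .localhost for exactly this kind of use.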
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.
If you have a query related to it or one of the replies, start a new topic and refer back with a link.