I'd like to use Anubis to protect my instance from AI-powered plunder bots (and, more broadly, to mitigate DDoS), but deploying it as-is will certainly break API calls from clients, since they can't/won't execute JavaScript.
Fortunately, I can whitelist URLs and/or user agents with regexes in Anubis's botPolicies.json.
Here's an example configuration:
```json
{
  "bots": [
    {
      "name": "robots-txt",
      "path_regex": "^/robots\\.txt$",
      "action": "ALLOW"
    },
    {
      "name": "internal-traffic",
      "remote_addresses": ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"],
      "action": "ALLOW"
    },
    {
      "name": "generic-browser",
      "user_agent_regex": "Mozilla|Opera",
      "action": "CHALLENGE"
    }
  ]
}
```
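Rules are checked top to bottom and the first match wins: robots.txt fetches and LAN traffic pass straight through, while anything presenting a browser-like user agent has to solve the JavaScript proof-of-work challenge.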
Now, if I want to protect my Nextcloud instance, what do you think would be the best approach?
- Allowing the .well-known and API URLs? (see the sketch after this list)
  - Is there documentation listing the API endpoints?
- Allowing specific user agents (the Nextcloud Windows and Android client apps)?
  - Is there documentation covering all of the clients' user agents?
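In case it helps, here's the untested sketch I'd start from. The path list (.well-known redirects, the remote.php/ocs DAV and API endpoints, status.php, login flow v2) and the user-agent substrings (mirall/ for the desktop sync client, Nextcloud-android/ and Nextcloud-iOS/ for the mobile apps) are my own assumptions, not something from official docs, so verify them against your access logs before trusting them:

```json
{
  "name": "nextcloud-well-known",
  "path_regex": "^/\\.well-known/(carddav|caldav|webfinger|nodeinfo)$",
  "action": "ALLOW"
},
{
  "name": "nextcloud-api",
  "path_regex": "^/(remote\\.php/|public\\.php/|ocs/|status\\.php$|index\\.php/login/v2)",
  "action": "ALLOW"
},
{
  "name": "nextcloud-clients",
  "user_agent_regex": "mirall/|Nextcloud-android/|Nextcloud-iOS/",
  "action": "ALLOW"
}
```

Note that these entries would have to sit above the generic-browser rule in the bots array, because as far as I can tell the desktop client's user agent also starts with Mozilla/5.0 and would otherwise be served the challenge.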
Disclaimer: Anubis is not an all-powerful protection, and whitelisting specific URLs/user agents is like punching a big hole in it, BUT it's still better than getting DDoSed by some moron's botnet and waiting for them to calm down.