If your method hasn't blocked the site effectively, it might be because domains like this rotate through dynamic IP addresses, making them difficult to block with IP-based rules alone. A better approach could include:
- Use DNS filtering to block requests to or from the domain. This approach can be more effective as it doesn't rely on IP addresses, which can change frequently.
- Configure your WAF rules to block requests based on HTTP header values that uniquely identify traffic from the unwanted source. For example, the `Host` header might contain `news.grets.store`, or a `User-Agent` header might identify bot traffic.
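As a rough sketch of the header-based option, an AWS WAFv2 rule can match on the `Host` header with a byte-match statement. The rule below is a boto3-style dict; the rule name, priority, and metric name are placeholders, and you would pass it inside the `Rules` list of `wafv2.update_web_acl`:

```python
# Hypothetical WAFv2 rule: block requests whose Host header contains the
# unwanted domain. LOWERCASE transformation makes the match case-insensitive.
host_block_rule = {
    "Name": "block-grets-store-host",   # placeholder name
    "Priority": 1,                      # placeholder priority
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": b"news.grets.store",
            "FieldToMatch": {"SingleHeader": {"Name": "host"}},
            "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
            "PositionalConstraint": "CONTAINS",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockGretsStoreHost",
    },
}
```

A similar rule with `FieldToMatch: {"SingleHeader": {"Name": "user-agent"}}` would cover the `User-Agent` case.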
Hello, before proceeding, I want to highlight that when you are testing in production, it is always a good idea to put the rules in count mode first and analyze them for at least 1-2 weeks to confirm they work as you expect.
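Putting a rule in count mode is just a matter of swapping its action. A minimal illustrative helper (the function name is my own, not part of any AWS SDK) that takes a WAFv2 rule dict and returns a count-mode copy:

```python
def to_count_mode(rule: dict) -> dict:
    """Return a copy of a WAFv2 rule dict with its action set to Count,
    so matches are only logged and metered, not enforced."""
    observed = dict(rule)
    observed["Action"] = {"Count": {}}
    return observed

# Example: observe a blocking rule before enforcing it.
blocking_rule = {"Name": "example", "Action": {"Block": {}}}
count_rule = to_count_mode(blocking_rule)
```

After the observation window, you would switch the action back to `{"Block": {}}` (or `{"Captcha": {}}`).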
That being said, when it comes to bots, and even more so when they are non-legitimate ones (as seems to be the case here), you need to consider that both the user agent and the IP can vary, as can other headers. With this in mind, you need to adapt, and a good idea would be to place a rate-limit rule combined with CAPTCHA. This way you don't need to block an IP that could eventually change and end up belonging to a valid user. I also recommend reading this blog post, which covers best practices on how to use and prioritize rate-based rules.
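A rate-limit-plus-CAPTCHA rule along those lines can be sketched as the following boto3-style dict. The limit, name, and priority are placeholder values you would tune from the count-mode data:

```python
# Hypothetical WAFv2 rate-based rule: any single IP exceeding the limit
# (requests per 5-minute window) gets a CAPTCHA challenge instead of a block.
rate_limit_rule = {
    "Name": "rate-limit-with-captcha",  # placeholder name
    "Priority": 2,                      # placeholder priority
    "Statement": {
        "RateBasedStatement": {
            "Limit": 500,               # placeholder threshold, tune for your traffic
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Captcha": {}},          # challenge rather than hard-block
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitCaptcha",
    },
}
```

Because the challenge is tied to request rate rather than to a fixed IP list, it keeps working even as the bot rotates addresses, while real users who trip the limit can still pass the CAPTCHA.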
Finally, you might also want to consider Bot Control. Keep in mind it has extra costs (depending on whether you use the Common or Targeted inspection level, you will be charged based on the volume of requests analyzed; see here for pricing details), so you might want to place Bot Control rules below more specific ones to reduce the traffic that reaches them.
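For reference, Bot Control is attached as a managed rule group. A sketch of such a rule, again as a boto3-style dict with a deliberately low priority (i.e. evaluated after your more specific rules, which is what keeps the analyzed-request volume down); the name and priority are placeholders:

```python
# Hypothetical managed-rule-group reference for AWS WAF Bot Control.
# InspectionLevel "COMMON" is the cheaper tier; "TARGETED" analyzes more deeply.
bot_control_rule = {
    "Name": "bot-control",              # placeholder name
    "Priority": 10,                     # placed after more specific rules
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            "ManagedRuleGroupConfigs": [
                {"AWSManagedRulesBotControlRuleSet": {"InspectionLevel": "COMMON"}}
            ],
        }
    },
    "OverrideAction": {"None": {}},     # let the group's own rule actions apply
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BotControl",
    },
}
```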