Hello.
Looking at the document below, it appears that if the server returns "X-Robots-Tag: noindex" in the response header, the page will not be indexed.
In other words, one option is to add logic on the application side that returns "X-Robots-Tag: noindex" in the response header whenever a request arrives via a host name other than your own domain.
https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag
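As a concrete illustration of that application-side approach, here is a minimal sketch using a WSGI middleware (Python standard library only). The `CANONICAL_HOST` value and the sample `application` are illustrative assumptions, not something from the original question; adapt them to your framework.

```python
# Assumption: the canonical domain you want indexed. Requests arriving
# under any other Host header get an "X-Robots-Tag: noindex" response header.
CANONICAL_HOST = "howtoabroad.com"

def noindex_for_other_hosts(app):
    """Wrap a WSGI app so that responses served under a Host other than
    the canonical domain carry 'X-Robots-Tag: noindex'."""
    def wrapper(environ, start_response):
        # Strip an optional port (e.g. "example.com:443") before comparing.
        host = environ.get("HTTP_HOST", "").split(":")[0].lower()

        def patched_start_response(status, headers, exc_info=None):
            if host != CANONICAL_HOST:
                headers = list(headers) + [("X-Robots-Tag", "noindex")]
            return start_response(status, headers, exc_info)

        return app(environ, patched_start_response)
    return wrapper

@noindex_for_other_hosts
def application(environ, start_response):
    # Hypothetical minimal app used only to demonstrate the middleware.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```

This keeps the site reachable from other host names (useful if you still need them to work) while telling crawlers not to index those copies.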
Alternatively, you could configure the web server (Apache, etc.) to reject requests whose Host header does not match your domain:
<VirtualHost *:443>
    ServerName howtoabroad.com
    DocumentRoot /var/www/html

    SSLEngine on
    SSLCertificateFile /path/to/your/certificate.crt
    SSLCertificateKeyFile /path/to/your/private.key
    SSLCertificateChainFile /path/to/your/chainfile.pem

    <Directory "/var/www/html">
        # Return 403 Forbidden for any request whose Host header
        # does not match the expected domain.
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^howtoabroad\.com$
        RewriteRule ^.*$ - [F,L]
    </Directory>
</VirtualHost>