Questions tagged with Website Provisioning
After adding a new CNAME record, the record [will not resolve](https://www.whatsmydns.net/#CNAME/outreach.charliehealth.com)
The new record:
* Record name: outreach
* Record type: CNAME
* Value: proxy-ssl.webflow.com
I also noticed some other records that are set do not resolve. For example, `links`: [links.charliehealth.com](https://www.whatsmydns.net/#CNAME/links.charliehealth.com).
Meanwhile, `www` *does* [resolve](https://www.whatsmydns.net/#CNAME/www.charliehealth.com).
Is there some kind of caching flush I can do? Or is there a possible misconfiguration?
[Here is a link](https://ibb.co/YDhr3hC) to a screenshot of all the records in the hosted zone
NOTE: I am only concerned with the CNAME record for **outreach**. I have been trying to get this subdomain to work for weeks now, and I have added the record exactly as Webflow support directed me to.
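In case it helps narrow down whether this is a caching issue or a zone misconfiguration, this is roughly the check I have in mind (a boto3 sketch; the hosted zone ID is a placeholder): if Route 53 itself answers with proxy-ssl.webflow.com but public resolvers do not, that would point at delegation or caching rather than the record itself.
```python
# Hypothetical sketch: ask Route 53 directly what it stores and would answer
# for the record, independent of any resolver caching.
import boto3

HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"            # placeholder
RECORD_NAME = "outreach.charliehealth.com"

route53 = boto3.client("route53")

# 1. Show the record set as it is stored in the hosted zone.
stored = route53.list_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    StartRecordName=RECORD_NAME,
    StartRecordType="CNAME",
    MaxItems="1",
)
print(stored["ResourceRecordSets"])

# 2. Ask Route 53 what it would return for a query against this zone.
answer = route53.test_dns_answer(
    HostedZoneId=HOSTED_ZONE_ID,
    RecordName=RECORD_NAME,
    RecordType="CNAME",
)
print(answer["ResponseCode"], answer["RecordData"])
```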
I am using the BuddyBoss app. There is an option to enable an API CDN, but the CloudFront URL I was provided is not being accepted.
What am I missing? Is there a different CDN URL I need to use?
Is there a way to bring down a Site-to-Site VPN tunnel manually? I want to test tunnel 2.
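For context, this is roughly how I would check which tunnel is up before and after a test (a boto3 sketch; the connection ID is a placeholder). My understanding is that the tunnel itself has to be brought down from the customer gateway side, so this only verifies the result.
```python
# Rough sketch: inspect tunnel telemetry for a Site-to-Site VPN connection.
import boto3

VPN_CONNECTION_ID = "vpn-0123456789abcdef0"  # placeholder

ec2 = boto3.client("ec2")
resp = ec2.describe_vpn_connections(VpnConnectionIds=[VPN_CONNECTION_ID])

# Each entry corresponds to one tunnel (outside IP) and its UP/DOWN status.
for telemetry in resp["VpnConnections"][0]["VgwTelemetry"]:
    print(
        telemetry["OutsideIpAddress"],
        telemetry["Status"],
        telemetry.get("StatusMessage", ""),
    )
```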
Thank You
May I ask how to solve this problem? My Apache servers are located in the east and the west respectively, each behind an ALB, and a Global Accelerator monitors the two ALBs to achieve load balancing. Now I need to configure failover: when both servers are down, the domain should resolve to a maintenance page. The method I used was to create an A record for the domain in the Route 53 hosted zone with a failover routing policy whose secondary target is S3. The S3 bucket is not set up as a static site itself; because the maintenance page is hosted by a third-party service provider (and is reachable normally), the bucket is configured to redirect to that third-party maintenance page, and the failover does switch over to it. However, after the web service is restored, traffic is still redirected to the maintenance page, and the browser cache has to be cleared before the site can be reached normally. Is there any way to have traffic return to the web service automatically once it is healthy again?
My third-party DNS service points at AWS Global Accelerator, which forwards to the ALB it listens to. Now the Route 53 failover will not fail over to the S3 static website. What is going on?
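For reference, the failover record pair described above would look roughly like this as a Route 53 change batch (a boto3 sketch; the domain, IDs, and hostnames are all placeholders, and the two alias hosted zone IDs must be the documented ones for Global Accelerator and for the S3 website endpoint in the bucket's region):
```python
# Hypothetical sketch of a PRIMARY/SECONDARY failover alias pair in Route 53.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",  # hosted zone for the domain (placeholder)
    ChangeBatch={"Changes": [
        {   # PRIMARY: alias to the Global Accelerator, health-evaluated
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "ZXXXXXXXXXXXXX",  # Global Accelerator alias zone ID
                    "DNSName": "aXXXXXXXXXXXXXXXX.awsglobalaccelerator.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {   # SECONDARY: alias to the S3 website endpoint (maintenance redirect)
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "ZXXXXXXXXXXXXX",  # S3 website endpoint zone ID for the region
                    "DNSName": "s3-website.us-east-2.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]},
)
```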
I am attempting to deploy a Flask app with AWS Elastic Beanstalk. I can get the demo to work, but when I attempt the same steps with my code, it breaks. My main file is called application.py.
In the logs I get `ModuleNotFoundError: No module named 'application'` and lots of `[error] 3501#3501: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.3.135, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "172.31.37.25"`, but I am assuming the latter is because of the former. When I SSH into the box and go to /var/app/current, I can see my other files are there, but application.py is not.
To deploy, like I said, I am following the AWS demo, which is basically just `eb create <name>`.
The only other major differences between mine and the demo are that I actually have a templates folder and more code, but I do not see how that would make a difference. I have not been able to find anything that would indicate I need to do something different with deployment or creation. Also, there is a similar question here, but my application.py is already in the root. Any help would be greatly appreciated.
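For reference, this is roughly the layout I understand the default Python platform expects (a minimal sketch; everything other than the application.py filename and the `application` callable is just illustrative):
```python
# application.py -- minimal sketch of the module the default Elastic Beanstalk
# Python platform tries to import (the server looks for a module named
# "application" exposing a WSGI callable named "application").
from flask import Flask

application = Flask(__name__)  # the callable must be named "application"

@application.route("/")
def index():
    # illustrative route only
    return "Hello from Elastic Beanstalk"

if __name__ == "__main__":
    application.run(debug=True)
```
My understanding is that `eb create` / `eb deploy` build the source bundle from the project's git index by default, so a file that is untracked or ignored would not show up in /var/app/current even if it exists locally.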
I purchased a domain name via AWS Route 53 that is similar to example.co.uk.
Is it possible for me to route any traffic from example.co.uk to www.example.co.uk and if so, how do I do so?
Or would I need to purchase the www.example.co.uk URL from elsewhere?
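From what I understand, www.example.co.uk is just another record in the same hosted zone, so nothing extra needs to be purchased. One common pattern for sending apex traffic to www is an empty S3 bucket that redirects everything, roughly like this (a boto3 sketch; the bucket name, region, and protocol are assumptions):
```python
# Hypothetical sketch: an empty S3 bucket named after the apex domain that
# redirects every request to the www host. The bucket name must match the
# domain for a later Route 53 alias record to target it.
import boto3

s3 = boto3.client("s3", region_name="eu-west-2")  # region is an assumption

s3.create_bucket(
    Bucket="example.co.uk",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
)
s3.put_bucket_website(
    Bucket="example.co.uk",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "www.example.co.uk",
            "Protocol": "https",
        }
    },
)
# Remaining step (not shown): a Route 53 alias A record for example.co.uk
# pointing at this bucket's S3 website endpoint.
```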
I have an S3 bucket that I have been using to serve static web pages for a couple of years. I finally decided to get a domain to make it easier to share the location. Following the documentation, I tried to create a simple record:
Record Type: A
Value/Route traffic to: Alias to website S3 endpoint
Region: US-East (Ohio) [us-east-2]
It should then show me available S3 endpoints, but it says "No resources found".
The static site is https://kghhome.s3.us-east-2.amazonaws.com/index.html
What I have tried so far:
- Entering variations of the S3 address in the search bar.
- Logging off and back in again.
- Waiting 48 hours in case the database mapping the endpoint and user was slow to update.
- Logging off and back in a second time.
The next thing that I can think of to try is to rebuild the static website in another bucket, but I'm hoping that there is something a little less obnoxious to try first.
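Before rebuilding anything, the next check I'm planning is roughly this (a boto3 sketch): confirming the bucket actually has static website hosting enabled, since my understanding is that the alias dropdown only lists S3 *website* endpoints, and the URL above is the REST endpoint rather than the website endpoint.
```python
# Rough sketch: check whether the bucket has static website hosting enabled.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-2")

try:
    config = s3.get_bucket_website(Bucket="kghhome")
    print("Website hosting is enabled:", config.get("IndexDocument"))
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchWebsiteConfiguration":
        print("Website hosting is NOT enabled on this bucket")
    else:
        raise
```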
Thanks,
Kai
I have hosted two apps (2 domains). The primary app (port 3000) and its domain work fine, but the secondary domain is not pointing to port 3001 as configured in its vhost file. I have created a separate vhost file for each app following this documentation: https://docs.bitnami.com/aws/infrastructure/nodejs/get-started/get-started/
But the secondary domain still does not point to 3001. If I access the secondary app through <serverip>:3001, I can see the app, but the secondary domain points to 3000 instead of 3001.
What am I doing wrong here?
Is there any documentation available for this setup?
I have a WordPress plugin that works with AWS, but I don't know whether my website is enabled in AWS: server --> AWS --> website with the correct name.
(Service: CloudFront)
I hope I am clear!
My website:
https://plprod74.fr
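Here is roughly the check I was thinking of running to see whether the site is actually served through CloudFront (a small Python sketch; it just looks for the response headers CloudFront normally adds):
```python
# Rough sketch: fetch the site and print the headers CloudFront typically adds
# (Via mentions CloudFront, and X-Cache / X-Amz-Cf-Id are CloudFront-specific).
import requests

resp = requests.get("https://plprod74.fr", timeout=10)
print(resp.status_code)
for header in ("Via", "X-Cache", "X-Amz-Cf-Id", "Server"):
    print(header, "=", resp.headers.get(header))
```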
Thank you.
I am trying to build an application in Amazon ECS that connects to the site chat.openai.com, but something is wrong when accessing the site: the request is denied with a 1020 error response code.
Would you help me out?
On my Lightsail instance I have tried to use the bncert-tool to set up an SSL certificate, but it fails on the final part, which is enabling auto-renewal. I got it working by manually renewing it (https://aws.amazon.com/premiumsupport/knowledge-center/lightsail-bitnami-renew-ssl-certificate/): it kept renewing successfully, but the new certificate would not show on the website except for the first time, and I have no idea why.
```
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: Trying renewal with 2158 hours remaining
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: Obtaining bundled SAN certificate
2023/03/16 22:59:39 [INFO] [MYDOMAIN] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: authorization already valid; skipping challenge
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: Validations succeeded; requesting certificates
2023/03/16 22:59:40 [INFO] [MYDOMAIN] Server responded with a certificate.
```
And now I've reached the limit of 5 certificates. I then tried to use bncert again, and now no method is working. Regardless, I would like to get the automatic method working if possible; it fails with:
```
Domain MYDOMAIN did not pass HTTP challenge validation
```
https://docs.bitnami.com/google/how-to/understand-bncert/#certificates-not-renewed-automatically
This page lists a solution, but I still can't manage to get it working. I'm not sure whether I have put the directives in the correct place.
```
RewriteCond %{REQUEST_URI} !^/\.well-known
```
```
ProxyPass /.well-known !
```
I placed them in my virtual host files:
myapp-https-vhost.conf
```
<VirtualHost _default_:443>
    RewriteCond %{REQUEST_URI} !^/\.well-known
    ServerAlias *
    SSLEngine on
    SSLCertificateFile "/opt/bitnami/apache/conf/MYDOMAIN.crt"
    SSLCertificateKeyFile "/opt/bitnami/apache/conf/MYDOMAIN.key"
    DocumentRoot "/home/bitnami/htdocs/staging-api"
    <Directory "/home/bitnami/htdocs/staging-api">
        Require all granted
    </Directory>
    ProxyPass /.well-known !
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```
myapp-http-vhost.conf
```
<VirtualHost _default_:80>
    RewriteCond %{REQUEST_URI} !^/\.well-known
    ServerAlias *
    DocumentRoot "/home/bitnami/htdocs/staging-api"
    <Directory "/home/bitnami/htdocs/staging-api">
        Require all granted
    </Directory>
    ProxyPass /.well-known !
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```
I also placed the RewriteCond in the public/.htaccess file because someone suggested it should go there.
```
Options -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.html [QSA,L]
RewriteCond %{REQUEST_URI} !^/\.well-known
```
I'm not really sure where these directives are meant to go.
All via Lightsail, I created an Instance, attached a Static IP to it, created a Distribution, then set a Custom Domain on the Distribution, creating a Certificate. I attached that Certificate to the Distribution and all was well. This was being done in an experimental fashion. Deciding I wanted to use a different type of Instance, I nuked everything. No more Instance, no more Distribution, etc...
BUT, when I try to create everything all over again, the Instance is created just fine. The Static IP is created fine and attached as well. The Distribution setup is a breeze. Everything is perfect, except for the final steps, pertaining to the Certificate.
When I create the Certificate, the system acts as though it was still hanging around because the DNS entries for validation are exactly the same as before. As a result, the certificate seems to become validated almost instantly, quicker than before. Then, when I try to attach the Certificate to the Distribution, it throws the following error:
```
AttachCertificateToDistribution[us-east-1]
Alternative Domain Names [thefullyqualified.domainname.here] have one or more parameter that is already associated with a different resource.
InvalidInputException
```
In the AWS dashboard GUI for Lightsail, when picking a Certificate to attach to the Distribution, it says the Certificate is "Valid, not in use". But it still throws this error.
So, I tried a different method where I made sure everything was detached and deleted via the AWS CLI. All seemed to be free and clear. Nothing hanging around that could be seen. I went through all of the normal steps that work via the AWS CLI to perform the same setup. Again, during the Certificate creation it seems to go much faster than usual, is instantly validated, and the validation CNAME record is the exact same as before. When I go to attach the Certificate via the AWS CLI, it gives this error:
```
An error occurred (InvalidInputException) when calling the AttachCertificateToDistribution operation: Alternative Domain Names [thefullyqualified.domainname.here] have one or more parameter that is already associated with a different resource.
```
I feel like either the Distribution (though deleted) is still hanging around and still attached to the domain, OR the Certificate is hanging around and somehow referencing what it used to be attached to (which I believe would be the CloudFront Distribution, which brings me back to my feeling that the Distribution itself is still hanging around even though it has been nuked via Lightsail).
Any idea what I can do to get this to move forward without having to just pick another domain to use? I'm concerned that I'm going to end up in this boat one day with something that's fully in production and I'll be stuck. Is this just the risk of using Lightsail versus putting in the extra effort of setting up the EC2 instance and other configuration outside of Lightsail?
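For what it's worth, here is roughly the next check I'm planning (a boto3 sketch), to see whether anything in the account still references the domain after the "nuke":
```python
# Rough sketch: list every Lightsail distribution and certificate to see
# whether the domain name is still referenced somewhere.
# Lightsail distributions and their certificates are managed via us-east-1.
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

for dist in lightsail.get_distributions()["distributions"]:
    print("distribution:", dist["name"],
          "cert:", dist.get("certificateName"),
          "alt names:", dist.get("alternativeDomainNames"))

for cert in lightsail.get_certificates(includeCertificateDetails=True)["certificates"]:
    detail = cert.get("certificateDetail", {})
    print("certificate:", cert["certificateName"],
          "status:", detail.get("status"),
          "in use by:", detail.get("inUseResourceCount"))
```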