Recent questions

  • I am using Lightsail for my WordPress site and was trying to configure the "Converter for Media" plugin with WebP and AVIF support. However, I get the following error after activating the plugin and I have no idea what it means. Can anyone shed some light on where to go from here? (A hedged config sketch follows this entry.)

```
It appears that your server does not support using .htaccess files from custom locations, or it requires additional configuration for the plugin to function properly.

If you are using Nginx server, please contact your hosting support (or server administrator) and provide them with the following message:

    I am trying to configure Converter for Media plugin with WebP and AVIF support. In order to do this, I need your help adding the required rules to the Nginx configuration of my website - https://www.aquavue.co. More information can be found in the plugin FAQ: https://wordpress.org/plugins/webp-converter-for-media/faq/ (in the question: Configuration for Nginx)

If you are using Apache server, this issue is usually related to the virtual host settings in the Apache configuration. In the .conf file appropriate for your VirtualHost, in the <Directory>...</Directory> section, replace the value of AllowOverride None with the value of AllowOverride All. In this case, please contact your server administrator.

---
Error codes: rewrites_not_executed
```
    0
    answers
    0
    votes
    3
    views
    asked 2 hours ago
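For reference, a minimal sketch of the Apache-side change the plugin message describes, assuming a stock virtual host file and document root (both hypothetical; on Bitnami-based Lightsail WordPress images the configuration lives under /opt/bitnami instead):

```
# /etc/apache2/sites-available/wordpress.conf (hypothetical path)
<VirtualHost *:443>
    ServerName www.aquavue.co
    DocumentRoot /var/www/html

    <Directory /var/www/html>
        # "AllowOverride None" makes Apache ignore .htaccess files;
        # the plugin needs its rewrite rules honored.
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

After editing, reload Apache, e.g. `sudo systemctl reload apache2`.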
  • I use mysqldump nightly to keep provider-redundant backups of my RDS MySQL instance. My last successful dump was Jan 26 02:26 (UTC). Now I get permission-denied errors, even as the DB administrator user. As the original user, I get

```
mysqldump: Couldn't execute 'FLUSH TABLES': Access denied; you need (at least one of) the RELOAD or FLUSH_TABLES privilege(s) for this operation (1227)
```

I tried to grant that user `FLUSH TABLES` but was unable to grant that privilege as the DB administrator. The DB administrator has `RELOAD`, so I tried the mysqldump as the DB administrator, but then I get

```
mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'dbadmin'@'%' (using password: YES) (1045)
```

My research turned up this Knowledge Center article: https://aws.amazon.com/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/ but I'm unable to follow the advice to exclude the `--master-data` argument, because I'm already not including it. My failing command line is

```
/usr/bin/mysqldump --login-path='{login_path}' \
    --ssl-ca=/etc/ssl/certs/rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY \
    --max_allowed_packet=1G --single-transaction --quick --lock-tables=false \
    --column-statistics=0 {database_name}
```

The most obvious culprit is a MySQL upgrade on the OS of the machine doing the dump, though it confuses me why the _client's_ permission needs would change:

```
dpkg.log
...
2023-01-26 06:45:09 upgrade mysql-client-core-8.0:amd64 8.0.31-0ubuntu0.20.04.2 8.0.32-0ubuntu0.20.04.1
...
```

So I'll roll back that upgrade, but if anyone has pointers on how to both keep the mysql client current _and_ continue to successfully mysqldump from RDS, I'd certainly appreciate it (a hedged rollback/pin sketch follows this entry).
Client: Ubuntu 20.04.5, mysqldump Ver 8.0.32-0ubuntu0.20.04.1 for Linux on x86_64 ((Ubuntu))
Server: RDS with MySQL engine version 8.0.28
TIA, AC
    1
    answers
    0
    votes
    2
    views
    asked 3 hours ago
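Not an answer to the privilege question itself, but a hedged sketch of the rollback-and-pin approach mentioned above, using the package name and versions from the dpkg.log and assuming the 8.0.31 package is still available in the Ubuntu archive:

```
# Roll the client back to the last known-good version:
sudo apt-get install mysql-client-core-8.0=8.0.31-0ubuntu0.20.04.2

# Hold it so unattended upgrades don't move it forward again:
sudo apt-mark hold mysql-client-core-8.0

# Later, once a fixed client is available:
sudo apt-mark unhold mysql-client-core-8.0
```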
  • Hello there, we are having issues in the Oregon region: Session Manager is too slow with all of our EC2 instances. I therefore tried to stop an instance and start it again; now it's not working, and I'm unable to connect to that instance via the serial console either. I can't see anything specific in the Health Dashboard/status page. [Update] We spotted an issue with an attached volume and fixed it, but Session Manager is still slow. We are able to access the instance via SSH without any issues.
    1
    answers
    0
    votes
    11
    views
    asked 3 hours ago
  • I've been trying to join an EC2 machine from AWS to my localhost (WSL on Windows) Docker Swarm cluster, but I keep getting: "Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node." The EC2 instance is not added as a node (and if I later try to add it again, it says it is already part of a cluster, even though it does not appear on my localhost). **What I've tried:** opened ports 2377, 7946, and 4789 (required by Docker Swarm) on my WSL and EC2 machines; allowed all traffic to all ports in my EC2 firewall; disabled my Windows firewall (I also tried to init a Windows cluster and add the EC2 instance, but that did not work either). **Additional information:** to open the ports on WSL/EC2 I mainly used ufw and telnet (see the hedged ufw sketch after this entry). I was able to join my Windows Docker to my WSL cluster. I can ping my EC2 IPv4 address from my localhost, but not my localhost IP from EC2. Any suggestions and solutions are welcome; I'm seriously HOURS into this, and any progress will make me happy. Systems: Ubuntu 18.04 on WSL and EC2, and Windows 11 ![a print of my ec2 security firewall](/media/postImages/original/IMUPeb2tCDQe653UdL_rAMjQ)
    0
    answers
    0
    votes
    5
    views
    asked 5 hours ago
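For reference, a hedged sketch of the port openings Docker Swarm documents, with protocols spelled out; a common pitfall is opening 7946 and 4789 as TCP only, and note that telnet can only test the TCP ones:

```
# Run on each node (the EC2 instance and the WSL host):
sudo ufw allow 2377/tcp   # cluster management (swarm join)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN overlay network traffic
sudo ufw reload

# If a node believes it already joined a swarm, reset it before retrying:
# docker swarm leave --force
```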
  • According to https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html, for a 1.24 EKS cluster with the eks.3 platform version, the Kubernetes version (control plane) should be 1.24.7. However, all our clusters (created using AWS CDK) are reporting 1.24.8 as the Kubernetes version (v1.24.8-eks-ffeb93d to be precise). I re-read that platform-versions page multiple times and cannot interpret it in such a way that what we're observing is expected behaviour. Would anyone be able to confirm whether that's just bad wording/documentation, and if so, what would be a way to get the possible Kubernetes version(s) for a given EKS platform version (i.e. 1.24/eks.3)? (A hedged CLI sketch follows this entry.) Many thanks, damjan
    0
    answers
    0
    votes
    5
    views
    damjan
    asked 5 hours ago
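Not a confirmation of the docs, but a hedged way to see what EKS itself reports for a cluster (cluster name hypothetical); the describe call returns only the minor version plus the platform version, while the API server reports the exact patch build:

```
# Minor version and platform version as EKS reports them:
aws eks describe-cluster --name my-cluster \
    --query '{version: cluster.version, platformVersion: cluster.platformVersion}'

# Exact control-plane build (e.g. v1.24.8-eks-...), straight from the API server:
kubectl version --short | grep 'Server Version'
```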
  • My landing page images will not load when I import files. ![problem](/media/postImages/original/IMfAzYHKBGQAWNSMsRIxQF9Q) How do I fix this?
    0
    answers
    0
    votes
    3
    views
    asked 6 hours ago
  • I created a PostgreSQL database instance and made sure to choose the free tier option. I put maybe 20 entries into one of my tables, but I got an email today saying my account has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of January.
    1
    answers
    0
    votes
    6
    views
    asked 7 hours ago
  • Months ago I went through [this documentation](https://betterprogramming.pub/integrating-amazon-cognito-with-ethereum-blockchain-7e87f1425422) and built a Cognito pool with four Lambda triggers that control signing in from a website. This mechanism adds the user to Cognito if they do not already exist in the pool, then authorizes them and gets a JWT from Cognito, which controls access to API Gateway APIs. The problem is that with this setup, the JWT takes *5 MINUTES* at minimum to expire, and there doesn't seem to be any valid way to expire the token before then. If this is indeed true, I effectively cannot rely on Cognito for sensitive APIs where I must make sure a user can only use them once with the credentials they are given. For instance, maybe I have an order-creation API. In my testing, a user can grab the token from F12's network response and make thousands of fake orders using Postman before the 5-minute expiration elapses after authorizing. I have seen documentation about token revocation, and found two API endpoints involved in cancelling tokens, but even after using them, this 'feature' of being able to reuse the token as much as you want still exists until the 5-minute timeout is over. (A hedged revocation sketch follows this entry.) To contrast, with OAuth it looks as though the token can be set to be short-lived and expire 10 seconds after issuance. I could give the user a 10-20 second valid window, and this would probably cover me, as it would take most of that time to break into F12 and get the JWT in the first place. Am I maybe using this wrong, and is there a way that the Cognito token doesn't provide wide-open access beyond its initial use? I'd prefer to use Cognito, but unless I can get around this, I have to look at other options. Thanks!
    0
    answers
    0
    votes
    12
    views
    oggie
    asked 9 hours ago
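For reference, a hedged sketch of the revocation call alluded to above (identifiers hypothetical). RevokeToken takes the *refresh* token; with revocation enabled on the app client, the access and ID tokens issued from that refresh token are also revoked, though whether a given authorizer actually checks revocation depends on how it validates tokens. The 5-minute floor matches the minimum configurable access-token validity on an app client:

```
aws cognito-idp revoke-token \
    --client-id <app_client_id> \
    --token <refresh_token>
```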
  • I've tried to use a Glue Spark job for very basic partitioning over about 50 GB of GZIP JSON data. The reason for trying a Glue job is that my data could have more than 100 partitions, and it is not really convenient to do this in Athena. The code is almost straight from the AWS template:

```
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import concat, col, lit, substring

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

my_partition_predicate = "partition='main' and year='2022' and month='12' and day='17'"

datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "my_db",
    table_name = "my_table",
    push_down_predicate = my_partition_predicate,
    transformation_ctx = "datasource0",
    additional_options = {"recurse": True})

datasource0 = datasource0.toDF()
datasource0.printSchema()
datasource0 = datasource0.withColumn("od",
    concat(substring('originalrequest', 9, 3), lit("_"), substring('originalrequest', 12, 3)))
datasource0 = DynamicFrame.fromDF(datasource0, glueContext, "datasource0")

datasink4 = glueContext.write_dynamic_frame.from_options(
    frame = datasource0,
    connection_type = "s3",
    connection_options = {"path": "s3://my_b/glue4", "partitionKeys": ["od"], "compression": "gzip"},
    format = "json")

job.commit()
```

The job was executed with 60 DPUs and timed out after 20 minutes... This failed execution cost $8.80. Meanwhile, exactly the same job was done in Athena in about 2 minutes and cost $0.25. Am I doing something wrong, or is Athena (Presto) leaps ahead of Spark in terms of speed and cost effectiveness? (A hedged repartition sketch follows this entry.)
    0
    answers
    0
    votes
    6
    views
    Smotrov
    asked 10 hours ago
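One hedged observation on the job above: writing gzip JSON with `partitionKeys` over a high-cardinality column like `od` tends to scatter every partition value across all executors, producing many small files and a lot of S3 overhead. A sketch (not a definitive fix) that slots in just before the write, repartitioning by the output column so each `od` value is written by fewer tasks:

```
from pyspark.sql.functions import col

# Group rows by the output partition column before writing so each
# "od" value lands in a small number of output files.
df = datasource0.toDF().repartition(col("od"))
datasource0 = DynamicFrame.fromDF(df, glueContext, "datasource0")
```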
  • I'm looking at a few articles where the authors describe how to route traffic from AWS API Gateway to Fargate tasks without any load balancing:
    * https://medium.com/@chetlo/ecs-fargate-docker-container-securely-hosted-behind-api-gateway-using-terraform-10d4963b65a3
    * https://medium.com/@toddrosner/ecs-service-discovery-1366b8a75ad6
    The solutions appear to rely on AWS service discovery which, from what I can tell, creates private DNS records. If my ECS service starts 3 Fargate tasks, is API Gateway smart enough to spread the traffic across all 3 tasks or not? (A hedged CLI sketch follows this entry.)
    0
    answers
    0
    votes
    14
    views
    asked 11 hours ago
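Not a definitive answer, but one hedged way to see what the integration's name actually resolves to is to query Cloud Map directly (namespace and service names hypothetical). If all three task IPs come back as instances, the DNS name carries all of them, and distribution then depends on how the caller uses those records rather than on explicit load balancing:

```
aws servicediscovery discover-instances \
    --namespace-name my-namespace \
    --service-name my-fargate-service
```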
  • Hello, I have an instance where SSH is not working. The problem is with the EBS volume: when I attach another EBS volume, it works!
    1
    answers
    0
    votes
    15
    views
    asked 14 hours ago
  • I'm unable to use AWS Lambda and cannot find a reasonable explanation.
    - The dashboard shows an error box with an empty message
    - The create form shows a spinner for 10 seconds, then stops and nothing happens
    - The API returns `{"message": null}`
    - The user has enough permissions to use Lambda
    ![AWS Lambda dashboard error](/media/postImages/original/IM_s3FBKPBRRm6boEVBcldQA) I can only ask whether someone has faced something similar, since support doesn't want to answer my ticket. (A hedged CLI sketch follows this entry.) Thanks
    2
    answers
    0
    votes
    30
    views
    asked 16 hours ago
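A hedged way to split a console problem from an account/API problem is to call the same API with the same user's credentials from the CLI (region hypothetical):

```
# If this succeeds, the Lambda API and IAM permissions are fine and the
# issue is on the console side; if it fails, the CLI error is usually
# more specific than the dashboard's empty error box.
aws lambda list-functions --region us-east-1
```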
  • Hi, I am trying to set up Lambda functions with API Gateway as the trigger. I'll be making external API calls from the functions and I need my IP to be allowlisted with the provider, so it should be static. I also need to provide them the hostname from which the API calls will originate, so the API Gateway will be using a custom domain. I have the domain registered on GoDaddy, and for this API Gateway I want to use a subdomain. At the moment, what I have done is:
    1. Created a VPC endpoint with subnets in all the Availability Zones in the region.
    2. Created a private REST API and assigned the above VPCE to it.
    3. Created the same number of Elastic IPs as Availability Zones.
    4. Requested a new certificate from ACM for the subdomain, put the CNAME records on GoDaddy, and got the certificate issued.
    5. Created a target group with IP as the target type, TLS as the protocol, and HTTPS as the health check protocol, and registered the default subnet IPs of each Availability Zone. I used 403 as the expected health check status, as this is the status returned when the API is invoked using the NLB's DNS for health checks. The health check comes out positive.
    6. Created an internet-facing, IPv4 Network Load Balancer. The listener was set up with TLS as the protocol. I assigned the above-created EIPs to this load balancer, and the above-generated certificate too.
    At this point, I am successfully able to invoke the private API Gateway using the NLB's domain. However, I get a security warning because the domain the certificate was issued for is not being used to invoke the API. I created a custom domain for the API and assigned the same certificate to it as well, but I still get the same warning on the client side. And if I try to invoke the API with the custom domain name, I get no response at all, because the name does not get resolved. If I had my domain registered on AWS Route 53, I would have been able to create an alias record that pointed to the NLB. Can I still do this with an external registrar, and will it even do anything for me (a hedged DNS sketch follows this entry)? Can somebody please guide me on what needs to be done to get this working? Really appreciate it & thanks in advance. PS. Sorry for the long detail if it's unnecessary.
    1
    answers
    0
    votes
    12
    views
    asked 16 hours ago
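For what it's worth, Route 53 alias records are mainly needed at a zone apex; for a subdomain, a plain CNAME at the external registrar generally achieves the same thing. A hedged sketch in zone-file notation (all names hypothetical):

```
; At GoDaddy: point the API subdomain at the NLB's DNS name.
api.example.com.    300    IN    CNAME    my-nlb-0123456789abcdef.elb.us-west-2.amazonaws.com.
```

Once the name resolves, the TLS warning should also clear, as long as the certificate on the NLB listener covers the subdomain the client actually uses.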
  • Good morning, I have an Elastic Beanstalk environment stuck updating. I switched the auto scaling to zero (desired, min, max) and the instance has been removed, but the status is still "updating". The last operation was to add a listener to the ELB through the EB UI to open port 443. I tried to manually remove the listeners from the ELB, but nothing changed. I can't deploy, abort the operation, or clone. What should I do? (A hedged CLI sketch follows this entry.)
    0
    answers
    0
    votes
    9
    views
    Ale
    asked 17 hours ago
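A hedged first step when the console refuses to cooperate: the same abort operation is exposed in the CLI, and the event stream often explains what the environment is waiting on (environment name hypothetical):

```
aws elasticbeanstalk abort-environment-update --environment-name my-env

# If it stays stuck, the recent events usually say why:
aws elasticbeanstalk describe-events --environment-name my-env --max-items 20
```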
  • Hello, when I click on the link for the Management Console in [this article](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/learn-whats-new.html), it takes me to the Console Home page. Are the two the same thing?
    1
    answers
    0
    votes
    3
    views
    asked 17 hours ago
  • I created a token signature with the command below:

```
echo -n tokenKeyValue | openssl dgst -sha256 -sign private-key.pem | openssl base64
```

Now I am testing the authorizer with test-invoke-authorizer in the AWS CLI:

```
aws iot test-invoke-authorizer \
    --authorizer-name my-new-authorizer \
    --token tokenKeyValue \
    --token-signature {created signature}
```

I am getting an error: `unknown options: tokenKeyValue`. Please guide. (A hedged quoting sketch follows this entry.)
    0
    answers
    0
    votes
    14
    views
    asked 18 hours ago
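`unknown options: ...` is typically what the AWS CLI prints when an argument value splits into extra tokens, and `openssl base64` wraps long output across multiple lines by default, which breaks the backslash-continued command. A hedged re-run that keeps the signature on one line (`-A` disables wrapping) and quotes it:

```
SIGNATURE=$(echo -n tokenKeyValue | openssl dgst -sha256 -sign private-key.pem | openssl base64 -A)

aws iot test-invoke-authorizer \
    --authorizer-name my-new-authorizer \
    --token tokenKeyValue \
    --token-signature "$SIGNATURE"
```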
  • I want to build a topology in AWS, but I'm not sure how to set up routes and AS paths to balance the traffic of instances across some specific instances. ![Enter image description here](/media/postImages/original/IMO7Y5b2umQ2i0yo_cTUgTEw)
    Expected: 192.168.0.100 (az1) & 192.168.1.100 (az2) announce the CIDR 10.0.0.0/24 to 192.168.3.1.
    az1: 192.168.0.1 --> 10.0.0.1 via 192.168.3.1 & 192.168.0.100
    az2: 192.168.1.1 --> 10.0.0.1 via 192.168.3.1 & 192.168.1.100
    When 192.168.1.100 is down, the traffic of az2 should be sent to 192.168.3.1 & 192.168.0.100.
    1
    answers
    0
    votes
    8
    views
    asked 18 hours ago
  • Hi, I am new here and to AWS in general. I need to know whether the VM's IP or network will be related to my PC; I need to stay anonymous.
    2
    answers
    0
    votes
    3
    views
    asked 20 hours ago
