Questions tagged with Amazon Relational Database Service


Browse through the questions and answers listed below or filter and sort to narrow down your results.

I use mysqldump nightly to ensure provider-redundant backups of my RDS MySQL instance. My last successful dump was Jan 26 02:26 (UTC). Now I get permission-denied errors, even as the DB administrator user. As the original user, I get

```
mysqldump: Couldn't execute 'FLUSH TABLES': Access denied; you need (at least one of) the RELOAD or FLUSH_TABLES privilege(s) for this operation (1227)
```

I tried to grant that user `FLUSH TABLES`, but was unable to grant that privilege as the DB administrator. The DB administrator has `RELOAD`, so I tried the mysqldump as the DB administrator, but then I get

```
mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'dbadmin'@'%' (using password: YES) (1045)
```

My research turned up this knowledge center article: https://aws.amazon.com/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/ But I'm unable to follow the advice to exclude the `--master-data` argument because I'm already not including it. My failing command line is

```
/usr/bin/mysqldump --login-path='{login_path}' --ssl-ca=/etc/ssl/certs/rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY --max_allowed_packet=1G --single-transaction --quick --lock-tables=false --column-statistics=0 {database_name}
```

The most obvious culprit is a MySQL upgrade on the OS of the machine doing the dump, though it confuses me why the _client's_ permission needs would change:

```dpkg.log
...
2023-01-26 06:45:09 upgrade mysql-client-core-8.0:amd64 8.0.31-0ubuntu0.20.04.2 8.0.32-0buntu0.20.04.1
...
```

So, I'll roll back that upgrade, but if anyone has pointers on how to both keep the MySQL client current _and_ continue to successfully mysqldump from RDS, I'd certainly appreciate it.

Client: Ubuntu 20.04.5, mysqldump Ver 8.0.32-0buntu0.20.04.1 for Linux on x86_64 ((Ubuntu))
Server: RDS with MySQL engine version 8.0.28

TIA, AC
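A note for anyone hitting the same wall: the mysql-client 8.0.32 upgrade is a plausible trigger, since that client release reportedly started requiring the RELOAD or FLUSH_TABLES privilege for consistent dumps unless GTID handling is switched off. A minimal sketch of one commonly reported workaround, assuming that client change is indeed the cause (not verified against RDS here), is to add `--set-gtid-purged=OFF` to the otherwise unchanged command:

```
# Hypothetical workaround, assuming the mysql-client 8.0.32 behavior change is the trigger.
# --set-gtid-purged=OFF skips writing GTID state into the dump, which is reported to avoid
# the FLUSH TABLES / RELOAD requirement. {login_path} and {database_name} are the same
# placeholders used in the question above.
/usr/bin/mysqldump --login-path='{login_path}' \
  --ssl-ca=/etc/ssl/certs/rds-combined-ca-bundle.pem \
  --ssl-mode=VERIFY_IDENTITY \
  --max_allowed_packet=1G \
  --single-transaction \
  --set-gtid-purged=OFF \
  --quick \
  --lock-tables=false \
  --column-statistics=0 \
  {database_name}
```

Note that `--set-gtid-purged=OFF` only omits the `SET @@GLOBAL.gtid_purged` statement from the dump output, so it is only appropriate if the dump isn't used to seed GTID-based replication.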
1
answers
0
votes
2
views
asked 4 hours ago
I created a PostgreSQL database instance. I made sure to choose the free tier option. I put maybe 20 entries into one of my tables but I got an email today saying my account has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of January.
1
answers
0
votes
6
views
asked 8 hours ago
Hi, I am new here, so excuse me if I am asking basic questions. I got a notice today that my Amazon RDS usage has exceeded 85% of the usage limit (17 GB out of 20 GB). The project currently has one developer working on it and the database size is less than 200 MB, so I don't see how that is even possible. I'm starting to think about switching to Firebase or even traditional DB hosting, as it does not make sense cost-wise if we go live at this rate. My questions: what does the usage limit include? How can I debug/check this claim? I also see "Amazon Relational Database Service 634 Hrs"; what does that mean exactly? I thought this was counted only each time we access the DB, not for the running time; otherwise, why do they even bother to count it? Thank you
1
answers
0
votes
14
views
asked a day ago
To connect to AWS RDS Oracle, I usually type sqlplus on the EC2 instance, get the user-name prompt, enter the full connection string, and can connect. Example:

```
[oracle@ip-xx-xx-xx-xx eeop]$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 27 15:45:52 2023
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.

Enter user-name: bhardwar/xxxxxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oxxxx-yxxxxxx.xxxxxxx.rds.amazonaws.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=xxxx)))

Last Successful login time: Fri Jan 27 2023 15:13:27 -05:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0
```

When I use the sqlldr command from the EC2 instance, I get an error:

```
[oracle@ip-xx-xx-xx-xx]$ sqlldr user_id=bhardwar/xxxxxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx-xxxx-xxxxx-xxxx.amazonaws.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=xxxx))) control=/app/loaddata/eeop/load_ocom_data.ctl log=/app/loaddata/eeop/load_ocom_data.log
-bash: syntax error near unexpected token `('
```

How do I use SQL*Loader to load data into an AWS RDS Oracle database? Thanks, Rajan
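The ``-bash: syntax error near unexpected token `('`` message comes from the shell, not from SQL*Loader: parentheses are special characters in bash, so the connect descriptor has to be quoted when it is passed on the command line (SQL*Loader's parameter is spelled `userid`). A minimal sketch, reusing the masked host and credentials from the question:

```
# Quote the whole connect string so bash does not try to interpret the parentheses.
sqlldr userid='bhardwar/xxxxxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx-xxxx-xxxxx-xxxx.amazonaws.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=xxxx)))' \
  control=/app/loaddata/eeop/load_ocom_data.ctl \
  log=/app/loaddata/eeop/load_ocom_data.log

# Alternatively, the EZCONNECT form avoids parentheses entirely
# (host and service name are masked here exactly as in the question).
sqlldr userid='bhardwar/xxxxxxx@//xxxx-xxxx-xxxxx-xxxx.amazonaws.com:1521/xxxx' \
  control=/app/loaddata/eeop/load_ocom_data.ctl \
  log=/app/loaddata/eeop/load_ocom_data.log
```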
0
answers
0
votes
9
views
raja
asked a day ago
Hi, per the announcement that these instance types are being deprecated, it is not clear what happens to the remaining days already purchased on Reserved Instances of these older types. I have a db.r3.large instance and a fully paid Reserved Instance for it with 462 days still remaining. The announcement email I received simply says "If you have Reserved Instances on M1, M2, M3, R3, or T1, you will be able to cancel them after you create new Reserved Instances for the same value or higher with M5, R5, or T3 respectively." However, what does that mean with respect to the already pre-paid days on the existing Reserved Instance? Does canceling the Reserved Instance issue me a refund or a credit towards the new Reserved Instance?
1
answers
0
votes
17
views
asked 2 days ago
We are using SQL Server RDS and looking to implement full-text search. We need to search a variety of file types, but mainly PDF. My understanding is that for PDF there is an Adobe iFilter plugin that allows this. I have accomplished this on premises before, but not on RDS. How would I install the Adobe iFilter plugin on RDS? Is it even possible to install this PDF iFilter plugin on RDS?
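For context: RDS for SQL Server does not give you OS-level access to the host, so installing a third-party iFilter such as the Adobe PDF iFilter yourself is, as far as I know, not possible; full-text search is limited to the document types the instance already exposes. A hedged way to check what is actually registered on your instance (run from any machine with `sqlcmd`; the endpoint, user, and password below are placeholders):

```
# List the document types / iFilters registered on the RDS SQL Server instance.
# '.pdf' will only appear in the output if a PDF iFilter is registered.
sqlcmd -S mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U admin -P '<password>' \
  -Q "SELECT document_type, path, manufacturer FROM sys.fulltext_document_types ORDER BY document_type;"
```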
1
answers
0
votes
12
views
asked 3 days ago
I recently started to use AWS services, and I periodically check the usage of the various resources that I have allocated. My AWS configuration consists of AppSync with a Lambda resolver which interacts with an RDS MySQL DB through an RDS Proxy. While the Lambda authenticates to the RDS Proxy through an IAM role, the authentication between the proxy and the MySQL database uses a password stored as a secret in AWS Secrets Manager. I am sure that my database has been queried fewer than 400 times; however, on the billing page I see that more than 60,000 API requests have been made to Secrets Manager. Why so many API requests? Is there a way to monitor the number of requests made to Secrets Manager?
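One way to see where the Secrets Manager calls are coming from, assuming CloudTrail is enabled in the account, is to look up recent `GetSecretValue` events and check the calling identity; RDS Proxy retrieves the secret on its own schedule, so its call volume may be unrelated to how often the database is actually queried. A minimal sketch with the AWS CLI:

```
# Look up recent GetSecretValue calls and who made them (requires CloudTrail,
# which records management events for roughly the last 90 days).
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue \
  --max-results 50 \
  --query 'Events[].{time:EventTime,user:Username,source:EventSource}' \
  --output table
```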
0
answers
0
votes
14
views
asked 3 days ago
Is anyone able to explain the EBS IO Balance (Percent) metric in RDS please? We have a MySQL t3.small instance with a 400 GB GP3 disk. We're running a process from an EC2 instance to pull data from an external API and store it in our DB. We're hitting 0 on the EBS IO Balance (Percent) metric and seeing slowdowns, which I presume is a result of the read and write latency metrics increasing when we hit 0. We're struggling to understand what this metric is, and therefore how we can improve our process. As for other metrics:

* We peak at around 5,000 IOPS (combined read and write), well under the 12k we get with our GP3 disk
* "EBS Byte Balance (Percent)" has fallen to 25% by the time our import process ends
* Write throughput peaks at around 160 MB/second
* Read throughput peaks at around 30 MB/second
* CPU utilisation peaks at around 50%, but our credit balance is very healthy
* Disk capacity is certainly not an issue

[Screenshotted metrics are here](https://imgur.com/a/ZnS0LdK) (import process started ~16:15:00). Any help / pointers would be much appreciated. ***I've asked elsewhere, and someone has suggested the issue might be related to volume striping?***
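In case it helps others hitting the same ceiling: on burstable instance classes, `EBSIOBalance%` and `EBSByteBalance%` track the instance's own EBS-optimized burst bucket (the t3.small's EBS bandwidth/IOPS allowance), not the gp3 volume's provisioned IOPS, which would explain why the volume's 12k IOPS headroom doesn't prevent the stall. A sketch for pulling the metric over the import window with the AWS CLI (the DB instance identifier and timestamps are placeholders):

```
# Pull the minimum EBSIOBalance% per 5-minute period over the import run.
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name "EBSIOBalance%" \
  --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
  --start-time 2023-01-27T16:00:00Z \
  --end-time 2023-01-27T20:00:00Z \
  --period 300 \
  --statistics Minimum \
  --output table
```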
1
answers
0
votes
15
views
asked 3 days ago
We recently migrated our RDS databases to 5.7+ to prepare for AWS's retirement of MySQL 5.6 support. We have snapshots of previous databases from the 5.6 days - will those be accessible down the line, or should we plan to upgrade them? Per the [announcement here](https://repost.aws/questions/QUImshxjRKSRq-t-AQppM6SA/announcement-deprecated-amazon-relational-database-service-rds-for-my-sql-5-6-end-of-life-date-is-august-3-2021):

> You can continue to restore your MySQL 5.6 snapshots as well as create read replicas with version 5.6 until the August 3, 2021 end of support date.

This makes it sound like we should prepare to restore, upgrade, and re-snapshot existing snapshots to be able to maintain access to them. Is this something Amazon is planning to automate, or should I make a ticket for our own teams to do it ourselves?
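For what it's worth, a restore-then-upgrade-then-re-snapshot pass over old snapshots can be scripted with the AWS CLI. A rough sketch, with placeholder identifiers and an assumed 5.7 target version:

```
# 1. Restore the old 5.6 snapshot to a temporary instance.
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier temp-upgrade-instance \
  --db-snapshot-identifier my-5-6-snapshot

# 2. Upgrade the restored instance to 5.7 (major version upgrade; the exact
#    target version is an assumption - pick one currently offered by RDS).
aws rds modify-db-instance \
  --db-instance-identifier temp-upgrade-instance \
  --engine-version 5.7.38 \
  --allow-major-version-upgrade \
  --apply-immediately

# 3. Take a fresh snapshot on the new engine version, then clean up.
aws rds create-db-snapshot \
  --db-instance-identifier temp-upgrade-instance \
  --db-snapshot-identifier my-5-7-snapshot
aws rds delete-db-instance \
  --db-instance-identifier temp-upgrade-instance \
  --skip-final-snapshot
```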
1
answers
0
votes
20
views
asked 3 days ago
I tried to upgrade from Aurora MySQL 5.7 (2.10.2) to Aurora MySQL 8.0 (3.02.2) and I got this pre-check error in the logs.

```
{
    "id": "engineMixupCheck",
    "title": "Tables recognized by InnoDB that belong to a different engine",
    "status": "OK",
    "description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.",
    "detectedProblems": [
        {
            "level": "Error",
            "dbObject": "mysql.general_log_backup",
            "description": "recognized by the InnoDB engine but belongs to CSV"
        }
    ]
},
```

Looking at the [MySQL shell code](https://github.com/mysql/mysql-shell/blob/8.0.23/modules/util/upgrade_check.cc#L1301-L1316) and running that SQL, I get this result.

```
SELECT a.table_schema, a.table_name,
       concat('recognized by the InnoDB engine but belongs to')
FROM information_schema.tables a
JOIN (SELECT substring_index(NAME, '/', 1) AS table_schema,
             substring_index(substring_index(NAME, '/', -1), '#', 1) AS TABLE_NAME
      FROM information_schema.innodb_sys_tables
      WHERE NAME like '%/%') b
  ON a.table_schema = b.table_schema AND a.table_name = b.table_name
WHERE a.engine != 'Innodb'

+--------------+--------------------+-----------------------------------------------------------+
| table_schema | table_name         | concat('recognized by the InnoDB engine but belongs to')   |
+--------------+--------------------+-----------------------------------------------------------+
| mysql        | general_log_backup | recognized by the InnoDB engine but belongs to             |
+--------------+--------------------+-----------------------------------------------------------+
1 row in set (0.11 sec)
```

And it is because this entry is present in information_schema.innodb_sys_tables, where it should not really be present in the first place.

```
mysql> select * from information_schema.innodb_sys_tables where NAME like '%general%';
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
| TABLE_ID | NAME                     | FLAG | N_COLS | SPACE | FILE_FORMAT | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE |
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
|    16462 | mysql/general_log_backup |   33 |      9 | 16448 | Barracuda   | Dynamic    |             0 | Single     |
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
1 row in set (0.09 sec)
```

Coincidentally, the release notes of [Aurora 3.02.0](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3020.html) say this:

> Fixed an issue that can cause upgrade failures from Aurora MySQL 2 to Aurora MySQL 3 due to schema inconsistency errors reported by upgrade prechecks for the general log and slow log tables.

While it says "schema inconsistency errors" and my error is "engineMixupCheck", it feels like the two are related, since both involve the general log. Also, when I look at [this question](https://repost.aws/questions/QUPC7D-_ZuTgCZSLALluxW9g/need-help-in-upgrading-the-aurora-mysql-5-7-to-mysql-8-urgent), it mentions

> mysql.general_log_backup recognized by the InnoDB engine but belongs to CSV.

which is exactly the error that I am getting, but it does not seem that a solution was provided. So, has anyone seen this error, and is there a workaround for it?
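Not a confirmed fix, but since the orphan entry is the `mysql.general_log_backup` table left behind by log rotation, one thing worth inspecting before retrying the upgrade (assuming the RDS log-rotation procedure is available on your Aurora MySQL version) is whether rotating the general log regenerates those tables cleanly and clears the stale dictionary entry:

```
# Hypothetical check-and-rotate from any client host; the endpoint and credentials
# are placeholders, and mysql.rds_rotate_general_log is assumed to exist on this
# Aurora MySQL version. The final SELECT re-runs the check from the question.
mysql -h my-aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW VARIABLES LIKE 'log_output';
      CALL mysql.rds_rotate_general_log;
      SELECT * FROM information_schema.innodb_sys_tables WHERE NAME LIKE '%general%';"
```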
0
answers
0
votes
65
views
asked 3 days ago
We have deployed a Django application in EKS and used RDS PostgreSQL with RDS Proxy as the database backend. Over the last month, we have started noticing occasional 500 "Internal Server Error" responses from our web app with the following error coming from Django: `django.db.utils.OperationalError: connection to server at "<proxy DNS name>" (<proxy IP address>), port 5432 failed: server closed the connection unexpectedly`

This suggests that RDS Proxy closed the client connection. In Django settings, the configured value of the `CONN_MAX_AGE` parameter is the default 0, which means Django opens a new database connection for every query - this means that the observed failures cannot be related to RDS Proxy's idle client connection timeout setting, which we have set to 30 minutes. To deal with this issue, we have implemented retries on the service mesh level (Istio). However, we would like to know more about the root cause of the failures and why we have seen an increased frequency of them during the last month - this almost never happened previously. Looking at the proxy and the database metrics in CloudWatch, it doesn't look like there was increased traffic during the failures.

Nevertheless, could the proxy close a client connection during a scaling operation? How can we get more insight into RDS Proxy's internal operations? Turning on Enhanced Logging keeps it enabled only for 24 hours and there is no guarantee that the error will occur during that time window - we are also a bit nervous about enabling it in production since it can degrade performance.
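In case it's useful to others debugging the same thing: the proxy's debug (enhanced) logging can be toggled from the CLI, so re-enabling it after the 24-hour auto-disable can be scripted while waiting for the error to reproduce. A sketch, with the proxy name as a placeholder:

```
# Turn on RDS Proxy debug (enhanced) logging; the service disables it again after ~24 hours,
# so this could be run on a schedule (cron / EventBridge) during the troubleshooting window.
aws rds modify-db-proxy \
  --db-proxy-name my-rds-proxy \
  --debug-logging

# To turn it back off explicitly once done:
aws rds modify-db-proxy \
  --db-proxy-name my-rds-proxy \
  --no-debug-logging
```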
1
answers
0
votes
19
views
nikos64
asked 5 days ago
Our instance with a single reader/writer had been humming along for some time without issue. Then last week, in the middle of the night, it got stuck in something of an automatic recovery loop, going through the full recovery process about 6 times. It did the same thing again this morning. I know we are responsible for managing these sorts of outages, and we plan to add a second instance per AWS recommendations and the recommendations here on similar threads, but this seems a bit abnormal. A single recovery due to bad hardware or an underlying systems change is one thing; getting stuck in a loop for several hours and recovering multiple times seems like another. The main message that kicks it off is: "Recovery of the DB instance has started. Recovery time will vary with the amount of data to be recovered." We don't see any issues in the underlying MySQL logs, just repeated startups. The recovery happens in the middle of the night (us-west-2), and we haven't made any recent changes except bumping to 5.7 several weeks ago.
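For anyone else trying to pin down what triggered repeated recoveries: the instance's RDS event history (including the "Recovery of the DB instance has started" messages and any preceding failover or storage events) can be pulled with the AWS CLI for up to 14 days back, which is sometimes more revealing than the MySQL error log. A sketch with a placeholder instance identifier:

```
# List RDS events for the instance over the last 7 days (--duration is in minutes, max 14 days).
aws rds describe-events \
  --source-type db-instance \
  --source-identifier my-db-instance \
  --duration 10080 \
  --query 'Events[].{time:Date,message:Message}' \
  --output table
```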
2
answers
0
votes
19
views
asked 5 days ago