Questions tagged with Database


Hi, I am new here, so excuse me if I am asking basic questions. I received a notice today that my Amazon RDS usage has exceeded 85% of the limit (17 GB out of 20 GB). The project currently has one developer working on it, and the database size is less than 200 MB, so I don't see how that is even possible. I am starting to think about switching to Firebase or even traditional DB hosting, because it does not make sense cost-wise if we go live at this rate. My questions: what does the usage limit include, and how can I debug/verify this claim? I also see "Amazon Relational Database Service 634 Hrs". What does that mean exactly? I thought hours were counted only each time we access the DB, not the whole running time; otherwise, why would they even bother counting it? Thank you
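The instance-hours figure in this question is worth a sanity check: RDS bills for every hour a DB instance is running, whether or not it is queried (the RDS free tier covers 750 instance-hours per month). A quick back-of-the-envelope calculation, assuming a single always-on instance, shows 634 hours is about 26 days of continuous uptime:

```python
# RDS instance-hours accrue whenever the instance is running, not per query.
# Sanity check: how many days of uptime does 634 hours correspond to?
hours_reported = 634
days_running = hours_reported / 24
print(f"{days_running:.1f} days of continuous uptime")  # about 26.4 days
```

That is consistent with one instance that has simply been left running since it was created, rather than with any per-access metering.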
1 answer · 0 votes · 14 views · asked a day ago
To connect to AWS RDS Oracle, I usually run `sqlplus` on the EC2 instance, get a user-name prompt, enter the full connection string, and I am connected. Example:

```
[oracle@ip-xx-xx-xx-xx eeop]$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 27 15:45:52 2023
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.

Enter user-name: bhardwar/xxxxxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oxxxx-yxxxxxx.xxxxxxx.rds.amazonaws.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=xxxx)))
Last Successful login time: Fri Jan 27 2023 15:13:27 -05:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0
```

But when I run `sqlldr` from the EC2 instance with the same connection string, I get an error:

```
[oracle@ip-xx-xx-xx-xx]$ sqlldr user_id=bhardwar/xxxxxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx-xxxx-xxxxx-xxxx.amazonaws.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=xxxx))) control=/app/loaddata/eeop/load_ocom_data.ctl log=/app/loaddata/eeop/load_ocom_data.log
-bash: syntax error near unexpected token `('
```

How do I use SQL*Loader to load data into an AWS RDS Oracle database? Thanks, Rajan
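The "syntax error near unexpected token `('" in this question comes from bash itself, not from sqlldr: the shell treats unquoted parentheses as subshell syntax, so the whole connect descriptor has to be quoted on the command line. A minimal sketch of building a correctly quoted command (credentials, host, and file paths below are hypothetical placeholders):

```python
import shlex

# Hypothetical connect descriptor. Unquoted, bash would parse '(' as the
# start of a subshell and abort with "syntax error near unexpected token `('".
connect = ("scott/tiger@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db.example.com)"
           "(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))")

# shlex.quote wraps the descriptor in single quotes so bash passes it to
# sqlldr as one literal argument instead of interpreting the parentheses.
cmd = "sqlldr userid=" + shlex.quote(connect) + " control=load.ctl log=load.log"
print(cmd)
```

Equivalently, typing the sqlldr command by hand with the descriptor wrapped in single quotes (or using a tnsnames.ora alias so no parentheses appear on the command line at all) should avoid the shell error.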
0 answers · 0 votes · 9 views · raja · asked a day ago
Hi, I have an application that connects to DocumentDB and starts listening to the change stream, but Performance Insights shows a high Database Load from the moment we start watching the change stream, as shown in these screenshots: ![Enter image description here](/media/postImages/original/IMZOzjTVuXQDex6Xp_CkClAQ) ![Enter image description here](/media/postImages/original/IMtKf2Dli4REKrIa9NDyRTdA) I would like to know the best way to read this chart. Apparently the change stream utilizes all CPUs of the instance, even when the database is idle and not receiving any queries. Do I need to worry about that high Database Load? Will enabling and reading the change stream make my database slower in general? What would the effects on performance be? The AWS docs seem to be lacking this information.
0 answers · 0 votes · 14 views · asked a day ago
We are using SQL Server RDS and looking to implement full-text search. We need to search a variety of file types, but mainly PDF. My understanding is that for PDF there is an Adobe iFilter plugin that enables this. I have accomplished this on-premises before, but not on RDS. How would I install the Adobe iFilter plugin on RDS? Is it even possible to install this PDF iFilter plugin on RDS?
1 answer · 0 votes · 12 views · asked 3 days ago
I recently started using AWS services, and I periodically check the usage of the various resources I have allocated. My AWS configuration consists of AppSync with a Lambda resolver that interacts with an RDS MySQL DB through an RDS Proxy. While the Lambda authenticates to the RDS Proxy through an IAM role, the authentication between the proxy and the MySQL database uses a password stored as a secret in AWS Secrets Manager. I am sure that my database has been queried fewer than 400 times; however, on the billing page I see that more than 60,000 API requests have been made to Secrets Manager. Why so many API requests? Is there a way to monitor the number of requests going to Secrets Manager?
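A call volume this far out of line with query volume usually points to a component polling the secret on a timer rather than fetching it once per query (Secrets Manager API calls are recorded in CloudTrail, so the exact caller and event names can be verified there). A quick back-of-the-envelope check of the reported numbers:

```python
# 60,000 Secrets Manager API calls in a 30-day month works out to roughly
# one call every 43 seconds - a timer-driven polling pattern, not per-query.
calls_per_month = 60_000
seconds_per_month = 30 * 24 * 60 * 60  # 2,592,000 seconds
interval = seconds_per_month / calls_per_month
print(f"one call every {interval:.1f} seconds")
```

A steady sub-minute cadence like that is consistent with an always-on service (such as the proxy) refreshing credentials in the background, independent of how often the application actually queries the database.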
1 answer · 0 votes · 15 views · asked 3 days ago
The error message I'm getting: `Error message not found: ATHENA_CLIENT_ERROR. Can't find bundle for base name com.simba.athena.athena.jdbc42.messages, locale en_US`. We have a data lake architecture that we stood up on AWS S3. When I run queries against the tables in the Curated DB in Athena, I get results. When I copy the same query and paste it into the Custom SQL function in Tableau, it gives me an error. This issue is affecting our business and needs to be resolved as soon as possible. Please send me an answer if you have dealt with this kind of issue before. Important details: I have the Simba JDBC 4.2 driver. I have an Athena properties file pointing to our S3 location. I have Java 8 installed with JAVA_HOME set. I have * access, meaning I have all access in AWS. I am able to connect to the tables in the database. I am able to view all the tables in the database. I have also made a couple of dashboards using this database.
0 answers · 0 votes · 8 views · asked 3 days ago
We recently migrated our RDS databases to 5.7+ to prepare for AWS' retirement of MySQL 5.6 support. We have snapshots of previous databases from the 5.6 days - will those be accessible down the line or should we plan to upgrade them? Per the [announcement here](https://repost.aws/questions/QUImshxjRKSRq-t-AQppM6SA/announcement-deprecated-amazon-relational-database-service-rds-for-my-sql-5-6-end-of-life-date-is-august-3-2021): > You can continue to restore your MySQL 5.6 snapshots as well as create read replicas with version 5.6 until the August 3, 2021 end of support date. This makes it sound like we should prepare to restore, upgrade, and re-snapshot existing snapshots to be able to maintain access to them. Is this something Amazon is planning to automate or should I make a ticket for our own teams to do it ourselves?
1 answer · 0 votes · 20 views · asked 3 days ago
The error message I'm getting: `Error message not found: ATHENA_CLIENT_ERROR. Can't find bundle for base name com.simba.athena.athena.jdbc42.messages, locale en_US`. We have a data lake architecture that we stood up on AWS S3. When I run queries against the tables in the Curated DB in Athena, I get results. When I copy the same query and paste it into the Custom SQL function in Tableau, it gives me an error. This issue is affecting our business and needs to be resolved as soon as possible. Please send me an answer if you have dealt with this kind of issue before. Other details: I have * access, meaning I have all access in AWS.
0 answers · 0 votes · 5 views · asked 3 days ago
I tried to upgrade from Aurora MySQL 5.7 (2.10.2) to Aurora MySQL 8.0 (3.02.2) and I got this pre-check error in the logs.

```
{
  "id": "engineMixupCheck",
  "title": "Tables recognized by InnoDB that belong to a different engine",
  "status": "OK",
  "description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.",
  "detectedProblems": [
    {
      "level": "Error",
      "dbObject": "mysql.general_log_backup",
      "description": "recognized by the InnoDB engine but belongs to CSV"
    }
  ]
}
```

Looking at the [MySQL shell code](https://github.com/mysql/mysql-shell/blob/8.0.23/modules/util/upgrade_check.cc#L1301-L1316) and running that SQL, I get this result.
```
SELECT a.table_schema, a.table_name, concat('recognized by the InnoDB engine but belongs to')
FROM information_schema.tables a
JOIN (SELECT substring_index(NAME, '/', 1) AS table_schema,
             substring_index(substring_index(NAME, '/', -1), '#', 1) AS TABLE_NAME
      FROM information_schema.innodb_sys_tables
      WHERE NAME like '%/%') b
  ON a.table_schema = b.table_schema AND a.table_name = b.table_name
WHERE a.engine != 'Innodb'

+--------------+--------------------+-----------------------------------------------------------+
| table_schema | table_name         | concat('recognized by the InnoDB engine but belongs to')  |
+--------------+--------------------+-----------------------------------------------------------+
| mysql        | general_log_backup | recognized by the InnoDB engine but belongs to            |
+--------------+--------------------+-----------------------------------------------------------+
1 row in set (0.11 sec)
```

And it is because this entry is present in the information_schema.innodb_sys_tables which should not really be present in the first place.
```
mysql> select * from information_schema.innodb_sys_tables where NAME like '%general%';
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
| TABLE_ID | NAME                     | FLAG | N_COLS | SPACE | FILE_FORMAT | ROW_FORMAT | ZIP_PAGE_SIZE | SPACE_TYPE |
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
|    16462 | mysql/general_log_backup |   33 |      9 | 16448 | Barracuda   | Dynamic    |             0 | Single     |
+----------+--------------------------+------+--------+-------+-------------+------------+---------------+------------+
1 row in set (0.09 sec)
```

Coincidentally, the release notes for [Aurora 3.02.0](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3020.html) say this:

> Fixed an issue that can cause upgrade failures from Aurora MySQL 2 to Aurora MySQL 3 due to schema inconsistency errors reported by upgrade prechecks for the general log and slow log tables.

While that note says "schema inconsistency errors" and my error is "engineMixupCheck", the two feel related, since both involve the general_log. Also, when I look at [this question](https://repost.aws/questions/QUPC7D-_ZuTgCZSLALluxW9g/need-help-in-upgrading-the-aurora-mysql-5-7-to-mysql-8-urgent), it mentions

> mysql.general_log_backup recognized by the InnoDB engine but belongs to CSV.

which is exactly the error I am getting, but no solution seems to have been provided. So, has anyone seen this error, and is there a workaround for it?
0 answers · 0 votes · 65 views · asked 3 days ago
It appears that an Aurora PostgreSQL major version upgrade requires us to first drop all replication slots, then perform the upgrade, and then recreate the replication slots. We use logical slots for Debezium/Kafka replication in inter-process workflows. When we drop and recreate these replication slots as part of a major version upgrade, how can we ensure replication restarts from where it left off (meaning replication resumes with incremental changes only) rather than forcing a FULL sync? We cannot afford a FULL sync due to large table sizes.
1 answer · 0 votes · 23 views · asked 3 days ago
Hi, I am getting the following error when testing a Lambda function that exports fields from a DynamoDB table if they are populated:

```
"errorMessage": "'Win'",
"errorType": "KeyError",
```

The Lambda function is:

```
import boto3

def lambda_handler(event, context):
    # Connect to DynamoDB
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('ticketing')

    # Get all items from the table
    items = table.scan()['Items']

    # Create a list to hold the filtered items
    filtered_items = []

    # Iterate through the items and check if the fields are not empty
    for item in items:
        if item['username'] and item['Win'] and item['Lose'] and item['Score']:
            filtered_items.append({'username': item['username'], 'Win': item['Win'],
                                   'Lose': item['Lose'], 'Score': item['Score']})

    # Return the filtered items
    return filtered_items
```

Can anyone shed some light on this please? Thank you :-)
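The `KeyError: 'Win'` in this question means at least one scanned item has no `Win` attribute at all: DynamoDB items are schemaless, so `item['Win']` raises as soon as the attribute is absent, before the truthiness check ever runs. A sketch of the filtering logic using `dict.get`, shown here on in-memory sample items (hypothetical data) rather than a live `table.scan()`:

```python
# DynamoDB items may lack attributes entirely, so use .get() instead of [].
# Sample items standing in for table.scan()['Items'] (hypothetical data).
items = [
    {'username': 'alice', 'Win': 3, 'Lose': 1, 'Score': 42},
    {'username': 'bob'},  # no Win/Lose/Score: item['Win'] would raise KeyError
]

fields = ('username', 'Win', 'Lose', 'Score')
filtered_items = [
    {f: item[f] for f in fields}
    for item in items
    if all(item.get(f) for f in fields)  # .get() returns None when absent
]
print(filtered_items)  # only alice's fully-populated item survives
```

Dropping this condition into the original loop in place of the chained `item['...']` checks should make the function skip incomplete items instead of crashing on them. Note that `all(item.get(f) ...)` also skips items where a field is present but falsy (e.g. a `Score` of 0); if zero is a valid value, test `item.get(f) is not None` instead.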
1 answer · 0 votes · 15 views · ND Bhav · asked 4 days ago
Hello Experts, this is an open-ended one. I am new to AWS, coming mainly from a database background, mostly Oracle in the past, and I am now trying to learn and work with the AWS databases (Aurora MySQL, PostgreSQL, Redshift, DynamoDB, etc.). What is the best way to get a quick understanding of these databases and start working with these technologies? I believe there is also an AWS certification for database specialists. Should I go for that, or is there a prerequisite, such as one of the more basic AWS certifications (Cloud Practitioner, Solutions Architect)? Also, are there any recommended books or tutorials that would help with these certifications and with getting up to speed quickly? Please suggest.
Accepted Answer · Database · 2 answers · 0 votes · 24 views · asked 4 days ago