Questions tagged with Database
I have set up a new instance in Lightsail with the Debian 11 LAMP stack, and have:
1. Connected to phpMyAdmin through a tunnelled PuTTY session and uploaded my data, which looks good.
2. Connected with FileZilla and uploaded some PHP, which works. Now I need to connect to MySQL from the command line.
I have run `mysql -u bitnami -p` with a password at the PuTTY command line and get `ERROR 1045 (28000): Access denied for user 'bitnami'@'localhost' (using password: YES)`.
I have tried every password I have used with this setup and am getting nowhere; I am not sure what I am missing. Can anyone help, please?
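On Bitnami-packaged stacks the MySQL admin user is normally `root` rather than `bitnami`, and the generated password is usually written to a file in the bitnami user's home directory. Below is a minimal sketch of testing that credential, assuming a standard Bitnami Lightsail image layout and the PyMySQL driver (both assumptions, not details from the question); verify the password file location on your own instance.

```python
# Sketch: try the Bitnami-generated password with the 'root' MySQL user.
# Assumes a standard Bitnami Lightsail image where the initial password is
# stored in /home/bitnami/bitnami_application_password, and that PyMySQL
# is installed (pip install pymysql). Adjust paths/credentials as needed.
import pymysql

PASSWORD_FILE = "/home/bitnami/bitnami_application_password"  # assumed location

with open(PASSWORD_FILE) as f:
    password = f.read().strip()

conn = pymysql.connect(host="127.0.0.1", user="root", password=password)
with conn.cursor() as cur:
    cur.execute("SELECT CURRENT_USER()")
    print(cur.fetchone())
conn.close()
```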
Where can I find the listener logs for AWS RDS Oracle databases? I can only find alert logs and audit logs in the AWS console, not listener logs.
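RDS for Oracle can publish several database log types to CloudWatch Logs, and if the listener log is a supported export type for the engine version in use, enabling that export is one way to reach it. A minimal boto3 sketch (the instance identifier is a placeholder, and the availability of the `listener` log type should be confirmed for your engine and version):

```python
# Sketch: ask RDS to publish the Oracle listener log to CloudWatch Logs.
# Assumes the engine/version supports the "listener" log export type and
# that the instance identifier below is replaced with your own.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-oracle-instance",  # placeholder identifier
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["listener"]},
    ApplyImmediately=True,
)

# Once published, the log group is typically named like:
#   /aws/rds/instance/my-oracle-instance/listener
```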
RDS databases have DNS names. The primary DB and read replicas have different DNS names.
1) Does my code need to connect to the read replicas directly, and if so, how?
2) Once I have created my read replicas, is read throughput scaled out automatically, or are other configuration changes required?
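Read replicas do not receive traffic automatically; the application has to send read-only statements to a replica's endpoint itself (or go through a proxy or driver that does that routing). A minimal sketch of explicit routing, assuming a PostgreSQL engine, the psycopg2 driver, and hypothetical endpoint names:

```python
# Sketch: route writes to the primary endpoint and reads to a replica endpoint.
# Endpoints, credentials, and the psycopg2 driver are assumptions; replicas
# are also eventually consistent, so very recent writes may lag slightly.
import psycopg2

PRIMARY_HOST = "mydb.abc123.us-east-1.rds.amazonaws.com"            # hypothetical
REPLICA_HOST = "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com"  # hypothetical

def connect(host):
    return psycopg2.connect(host=host, dbname="app", user="app", password="***")

# Writes go to the primary.
with connect(PRIMARY_HOST) as conn, conn.cursor() as cur:
    cur.execute("INSERT INTO events (payload) VALUES (%s)", ("hello",))

# Reads can go to a replica to scale read throughput.
with connect(REPLICA_HOST) as conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM events")
    print(cur.fetchone())
```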
Postgres database upgrade failed (11.6 -> 13.7) with error:
```
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 6052; 1259 720106 INDEX unq_nickname_ci dba
pg_restore: error: could not execute query: ERROR: function public.unaccent(unknown, text) does not exist
LINE 3: SELECT public.unaccent('public.unaccent', $1) -- schema-qua...
               ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
```
After the first failed upgrade, the index and function were deleted, so they no longer exist in the catalog, but subsequent upgrade attempts still end with the same error. Do you have any idea how to proceed?
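When an upgrade keeps failing on an object that should already be gone, it can help to confirm that nothing left in the catalogs still references the function, since the upgrade replays whatever index and function definitions it finds at the start of the run. A small diagnostic sketch, assuming psycopg2 and placeholder connection details:

```python
# Sketch: look for remaining index or function definitions that mention
# "unaccent" before retrying the upgrade. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(host="mydb.example.com", dbname="mydb",
                        user="dba", password="***")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT schemaname, indexname, indexdef "
        "FROM pg_indexes WHERE indexdef ILIKE %s", ("%unaccent%",))
    for row in cur.fetchall():
        print("index:", row)

    cur.execute(
        "SELECT n.nspname, p.proname "
        "FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace "
        "WHERE p.prosrc ILIKE %s", ("%unaccent%",))
    for row in cur.fetchall():
        print("function:", row)
conn.close()
```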
In Redshift, I'm trying to update a table using another table from another database. The error details:
```
SQL Error [XX000]: ERROR: Assert
Detail:
-----------------------------------------------
error: Assert
code: 1000
context: scan->m_src_id == table_id -
query: 17277564
location: xen_execute.cpp:5251
process: padbmaster [pid=30866]
```
The context is not helpful. I have used a similar join-based approach for other tables, and there the update statement has been working fine.
Update syntax used:
```
UPDATE ods.schema.tablename
SET
"TimeStamp" = GETDATE(),
"col" = S."col",
FROM ods.schema.tablename T
INNER JOIN stg.schema.tablename S
ON T.Col = S.Col;
```
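The assert may be related to the cross-database reference in the UPDATE; one workaround that sometimes sidesteps such errors is to stage the source rows into the local database first and update from that copy. A sketch, assuming the `redshift_connector` driver and reusing the (placeholder) object names from the question:

```python
# Sketch: stage the cross-database source table locally, then run a plain
# single-database UPDATE ... FROM against the staged copy.
# Connection details and object names are placeholders, not the real ones.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="ods", user="awsuser", password="***")
cursor = conn.cursor()

# Materialize the other database's table inside the current database.
cursor.execute("CREATE TEMP TABLE stage_tablename AS "
               "SELECT * FROM stg.schema.tablename")

# Update from the local staged copy instead of the cross-database reference.
cursor.execute("""
    UPDATE schema.tablename
    SET "TimeStamp" = GETDATE(),
        "col" = S."col"
    FROM stage_tablename S
    WHERE tablename.Col = S.Col
""")
conn.commit()
conn.close()
```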
Is there a Boto3 (Python) script available that gives the date and time a table in Amazon Redshift was last written to (INSERT, UPDATE, or DELETE)? I just need the date and time, not the content that was written.
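There is no single API call that returns this, but the cluster's own system tables record recent writes and can be queried from Python through the Redshift Data API. A sketch under a few assumptions: it relies on the `stl_insert`, `stl_delete`, and `stl_query` system tables (which only retain a few days of history), and the cluster identifier, database, user, and table name are placeholders:

```python
# Sketch: find the most recent INSERT/DELETE (an UPDATE shows up as both)
# against a given table by querying Redshift system tables through the
# Redshift Data API. Cluster, database, user, and table names are placeholders;
# stl_* tables only keep roughly a few days of history.
import time
import boto3

client = boto3.client("redshift-data")

SQL = """
SELECT MAX(q.endtime) AS last_write
FROM stl_query q
JOIN (SELECT query FROM stl_insert
      WHERE tbl = (SELECT table_id FROM svv_table_info WHERE "table" = 'my_table')
      UNION
      SELECT query FROM stl_delete
      WHERE tbl = (SELECT table_id FROM svv_table_info WHERE "table" = 'my_table')) w
  ON w.query = q.query;
"""

resp = client.execute_statement(
    ClusterIdentifier="my-cluster", Database="dev", DbUser="awsuser", Sql=SQL)

# Poll until the statement finishes, then print the result rows.
while client.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

print(client.get_statement_result(Id=resp["Id"])["Records"])
```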
The other day I went to apply the upgrade to MySQL 5.7. It failed due to some things I had missed. The log was:
```
1) MySQL 5.7 preupgrade check to catch orphan table issues
For the following tables, either the datadir directory or frm file was removed or corrupted.
More Information:
https://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html
[table_schema, table_name]
DB_NAME, - Check the server logs or examine datadir to determine the issue and correct it before upgrading.
2) The DB instance must have enough space to rebuild the largest table that uses an old temporal data format.
The DB instance must have enough space for the largest table that uses an old temporal data format to rebuild.
More Information:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html#USER_UpgradeDBInstance.MySQL.DateTime57
You must rebuild tables that use an old temporal data format, it might take an extended period of time. - Scale storage so that there is enough space for the largest table that uses an old temporal data format, and make sure the storage type is gp2 or io1, if possible.
3) Check for heavy load or a high number of write operations on instance before upgrade
* History list length
No issues found.
* Insert buffer size
No issues found.
Errors: 1
Warnings: 1
Database Objects Affected: 2
----------------------- END OF LOG ----------------------
```
That day I did what should have been done: I dropped all the orphan tables (there was only one) and updated all the tables and data that used the old date format. I then moved the maintenance window to the next morning so the upgrade would be applied automatically, because, and this is the most annoying part, since I manually applied the recommendation and it failed, the recommendation now shows as applied and I can no longer just click apply in the console.
Anyway, it didn't update the next morning. The compatibility log was last written the first time I ran the upgrade manually; there has been no update to it since. I then changed the maintenance window again to the next morning (this morning, the 15th of March). I got to work this morning and checked: it still hasn't been updated, and by the looks of the log, the upgrade hasn't even been attempted again.
That said, I did check a few more things and realized I had forgotten about storage space (I know it says it in the log). I only had 18 GB left, so I have now increased the storage so there is enough space for the rebuild. I have again changed the maintenance window to tomorrow morning, so hopefully it will update, though I'm not too hopeful since the logs haven't been touched on any of the other attempts.
Does anyone have any ideas on how I can manually apply the upgrade if it doesn't work again, or how I can get the recommendation back so I can use it?
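If the console no longer offers the recommendation, the upgrade can still be requested directly by modifying the instance's engine version. A minimal boto3 sketch (the instance identifier and target version are placeholders; this triggers the same pre-checks, so it will keep failing until the orphan-table, temporal-format, and storage issues are resolved):

```python
# Sketch: request the MySQL 5.6 -> 5.7 major version upgrade explicitly
# instead of waiting for the maintenance window. The identifier and target
# version are placeholders; pick a 5.7.x version that is a valid upgrade
# target for your instance.
import boto3

rds = boto3.client("rds")

# Optional: confirm the current engine version first.
info = rds.describe_db_instances(DBInstanceIdentifier="my-mysql-instance")
print(info["DBInstances"][0]["EngineVersion"])

rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",   # placeholder
    EngineVersion="5.7.38",                     # placeholder target version
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
```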
Newer versions of Amazon Aurora PostgreSQL-compatible edition are now available and database cluster(s) running Aurora PostgreSQL minor versions 11.13, 11.14, 11.15, 12.8, 12.10, 13.4, 13.5, and 13.6 need to be upgraded by September 15, 2023. These newer minor versions include important updates that will improve the operations of your Aurora PostgreSQL instances and workloads. We strongly encourage you to upgrade to at least a recommended minimum minor version at your earliest convenience.
* For PostgreSQL Minor Version 11.13, 11.14, and 11.15, the recommended minimum minor version is 11.18.
* For PostgreSQL Minor Version 12.8 and 12.10, the recommended minimum minor version is 12.13.
* For PostgreSQL Minor Version 13.4, 13.5 and 13.6, the recommended minimum minor version is 13.9.
Starting on or after 12:00 PM PDT on September 15, 2023, if your Amazon Aurora PostgreSQL cluster has not been upgraded to a newer minor version, we will schedule the relevant recommended minimum minor version to be automatically applied during your next maintenance window. Changes will apply to your cluster during your next maintenance window even if auto minor version upgrade is disabled.
Restoration of Amazon Aurora PostgreSQL 11.13, 11.14, 11.15, 12.8, 12.10, 13.4, 13.5, and 13.6 database snapshots after September 15, 2023 will result in an automatic upgrade of the restored database to a supported version at the time.
How to Determine Which Instances are Running These Minor Versions?
* In the Amazon RDS console, you can see details about a database cluster, including the Aurora PostgreSQL version of instances in the cluster, by choosing Databases from the console's navigation pane.
* To view DB cluster information by using the AWS CLI, use the describe-db-clusters command.
* To view DB cluster information using the Amazon RDS API, use the DescribeDBClusters operation. You can also query a database directly to get the version number by querying the aurora_version() system function i.e., "SELECT * FROM aurora_version();".
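As an illustration of the CLI/API checks above, a minimal boto3 sketch that lists each Aurora PostgreSQL cluster with its current engine version (credentials and region are taken from your environment):

```python
# Sketch: list Aurora PostgreSQL clusters and their engine versions so the
# ones still on 11.13-13.6 can be identified. Uses only describe_db_clusters.
import boto3

rds = boto3.client("rds")

paginator = rds.get_paginator("describe_db_clusters")
for page in paginator.paginate():
    for cluster in page["DBClusters"]:
        if cluster.get("Engine") == "aurora-postgresql":
            print(cluster["DBClusterIdentifier"], cluster["EngineVersion"])
```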
How to Apply a New Minor Version
You can apply a new minor version in the AWS Management Console, via the AWS CLI, or via the RDS API. Customers using CloudFormation are advised to apply updates in CloudFormation. We advise you to take a manual snapshot before upgrading. For detailed upgrade procedures, please see the available User Guide [1]. Please be aware that your cluster will experience a short period of downtime when the update is applied.
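As an illustration of the API route, a minimal boto3 sketch that requests the minor version upgrade on a cluster (the cluster identifier is a placeholder; the target version 13.9 is the recommended minimum for the 13.x versions listed above, and a manual snapshot should be taken first, as noted):

```python
# Sketch: upgrade an Aurora PostgreSQL cluster to a newer minor version.
# The cluster identifier below is a placeholder.
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-pg-cluster",  # placeholder
    EngineVersion="13.9",                        # recommended minimum for 13.x
    ApplyImmediately=True,                       # or False to wait for the window
)
```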
Visit the Aurora Version Policy [2] and the documentation [3] for more information and detailed release notes about minor versions, including existing supported versions.
If you have any questions or concerns, the AWS Support Team is available on AWS re:Post and via Premium Support [4].
[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.VersionPolicy.html
[3] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html
[4] https://aws.amazon.com/support
Hi Team,
I have two RDS Postgres instances, in the Oregon and Ireland regions. Both RDS instances are on engine version 9.6.12, and we need to upgrade them to Postgres engine version 13.3. We have tried multiple ways to upgrade the database version, from first upgrading to minor version 9.6.22 to upgrading to the major version directly from 9.6.12. However, when performing the minor version upgrade, the list of versions available to upgrade to is not shown, and when trying to upgrade to another minor version, the parameter group for the RDS instance is greyed out. This is therefore a blocker for us to upgrade the database.
A screenshot of the error is attached. I would appreciate your guidance.
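Listing the upgrade targets RDS actually allows from 9.6.12 may clarify why 13.3 is not offered, since direct jumps from older 9.6 minors to 13.x are often not available. A minimal boto3 sketch (it only inspects engine version metadata, so no instance identifiers are needed):

```python
# Sketch: show which engine versions RDS will accept as upgrade targets
# from PostgreSQL 9.6.12, and whether each target is a major or minor upgrade.
import boto3

rds = boto3.client("rds")

resp = rds.describe_db_engine_versions(Engine="postgres", EngineVersion="9.6.12")
for version in resp["DBEngineVersions"]:
    for target in version["ValidUpgradeTarget"]:
        kind = "major" if target["IsMajorVersionUpgrade"] else "minor"
        print(target["EngineVersion"], kind)
```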
Hello, I am running a job to apply an ETL to a semicolon-separated CSV on S3. However, when I read the file using the AWS Glue DynamicFrame feature and try to use any method like `printSchema` or `toDF`, I get the following error:
```
py4j.protocol.Py4JJavaError: An error occurred while calling o77.schema.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (52bff5da55da executor driver): com.amazonaws.services.glue.util.FatalException: Unable to parse file: s3://my-bucket/my-file.csv
```
I have already verified the encoding; it is UTF-8, so there should be no problem there. When I read the CSV using `spark.read.csv`, it works fine, and the crawlers can also recognize the schema. The data has some special characters that shouldn't be there, and removing them is part of the ETL I am looking to perform. Neither the `from_catalog` nor the `from_options` function from AWS Glue works; the problem is the same whether I run the job locally in Docker or in Glue Studio. My data has a date folder partition, so I would prefer not to read the data directly with Spark, and to take advantage of the Glue Data Catalog as well.
Thanks in advance.
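For reference, when reading with `from_options` the CSV separator has to be passed in `format_options`; with the default comma separator, parsing a semicolon-separated file can fail in ways that look like the error above. A minimal sketch under that assumption (the bucket path and header flag are placeholders):

```python
# Sketch: read a semicolon-separated CSV from S3 as a DynamicFrame,
# passing the separator explicitly. Paths and options are placeholders.
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/"], "recurse": True},
    format="csv",
    format_options={"separator": ";", "withHeader": True},
)
dyf.printSchema()
```

When reading `from_catalog` instead, the delimiter comes from the catalog table the crawler created, so it is worth checking that table's SerDe parameters as well.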
I am using the Aurora Blue/Green deployment process to upgrade my database from MySQL 5.7 to MySQL 8.0.26. This also upgrades the Aurora engine from version 2 to version 3.
The upgrade fails due to a pre-check failure:
```
{
"id": "engineMixupCheck",
"title": "Tables recognized by InnoDB that belong to a different engine",
"status": "OK",
"description": "Error: Following tables are recognized by InnoDB engine while the SQL layer believes they belong to a different engine. Such situation may happen when one removes InnoDB table files manually from the disk and creates e.g. a MyISAM table with the same name.\n\nA possible way to solve this situation is to e.g. in case of MyISAM table:\n\n1. Rename the MyISAM table to a temporary name (RENAME TABLE).\n2. Create some dummy InnoDB table (its definition does not need to match), then copy (copy, not move) and rename the dummy .frm and .ibd files to the orphan name using OS file commands.\n3. The orphan table can be then dropped (DROP TABLE), as well as the dummy table.\n4. Finally the MyISAM table can be renamed back to its original name.",
"detectedProblems": [
{
"level": "Error",
"dbObject": "mysql.general_log_backup",
"description": "recognized by the InnoDB engine but belongs to CSV"
}
]
}
```
As an Aurora user, it is not possible for me to delete, move, alter, or otherwise change any tables in the `mysql` tablespace, so the recommended remediation is not possible.
So my question is: how can I force the Blue/Green process to skip this check, or better still, how can I manually DROP the `mysql.general_log_backup` table, as I do not need it?
Please note I am using "FILE"-based logging in the DB parameters.
Steps to reproduce:
- Create an Aurora instance with engine version 5.7.mysql_aurora.2.10.3
- Start a Blue/Green deployment with:
  * engine version 8.0 and Aurora 3+
  * a custom cluster parameter group
  * a custom instance parameter group
- The Blue/Green environment is created
- The DB engine upgrade fails
Thanks!
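One workaround sometimes suggested for this particular check is to let RDS recreate the log tables itself: temporarily switch `log_output` to TABLE in the parameter group, flush the general log with the RDS stored procedure, then switch back to FILE. This is an assumption rather than a documented fix for the Aurora Blue/Green pre-check, so treat the sketch below (PyMySQL, placeholder connection details) as exploratory:

```python
# Exploratory sketch: rotate the table-based general log so RDS recreates
# mysql.general_log / general_log_backup with the expected CSV engine.
# Assumes log_output has already been set to TABLE via the parameter group
# and that mysql.rds_rotate_general_log is available on this Aurora MySQL
# version (an assumption). Connection details are placeholders.
import pymysql

conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin", password="***")
with conn.cursor() as cur:
    # Rotating twice cycles both general_log and general_log_backup.
    cur.execute("CALL mysql.rds_rotate_general_log")
    cur.execute("CALL mysql.rds_rotate_general_log")
conn.close()
# Afterwards, set log_output back to FILE in the parameter group and retry
# the Blue/Green deployment.
```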
```
select * from abdc a where a.employee_id in (select employee_id from abc)
```
The table **abdc** uses partition projection on employee_id.
**I got this error while executing the query:**
`Amazon Athena experienced an internal error while executing this query. Please try submitting the query again and if the issue reoccurs, contact AWS support for further assistance. We apologize for the inconvenience.`
The same query executed successfully when I ran:
```
select * from abdc a where a.employee_id in ('123456789')
```
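A common way around internal errors on `IN (subquery)` against a partition-projected column is to run the inner query first and inject its results as literals, which is effectively what the working query above does by hand. A boto3 sketch of that two-step approach (the database, output location, and table names are placeholders):

```python
# Sketch: run the inner query first, then rewrite the outer query with a
# literal IN list so partition projection can prune on employee_id.
# Database, output location, and table names are placeholders.
import time
import boto3

athena = boto3.client("athena")

def run(sql):
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "my_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    return athena.get_query_results(QueryExecutionId=qid)

# Step 1: fetch the employee ids (skip the header row in the results).
rows = run("SELECT DISTINCT employee_id FROM abc")["ResultSet"]["Rows"][1:]
ids = [r["Data"][0]["VarCharValue"] for r in rows]

# Step 2: run the outer query with literal values.
in_list = ", ".join("'{}'".format(i) for i in ids)
result = run("SELECT * FROM abdc a WHERE a.employee_id IN ({})".format(in_list))
```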