Questions tagged with Amazon Redshift
Redshift error: authentication method 13 not supported when creating a user with sha256 hashed password
The Redshift docs say that when creating a new user, you can specify the password as cleartext, an md5 hash, or a sha256 hash with a salt. Two valid formats they give for sha256 are `sha256|<cleartext password>` and `sha256|<digest>|<salt>`. I tried both formats when creating a user and got the same error: `error: authentication method 13 not supported`. I tried the psql that ships with Postgres 14 and the one that ships with Postgres 13; both reported the same error. I also tried connecting with Navicat, which reported the same error. Do we need to run `SET password_encryption TO sha256` or something similar?
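For the `sha256|<digest>|<salt>` form, the digest has to be computed client-side before it goes into `CREATE USER`. A minimal sketch of that step, assuming the digest is the hex SHA-256 of the password concatenated with the salt (verify this against the Redshift docs for your version — the exact concatenation order is an assumption here, and the user/password/salt values are placeholders):

```python
import hashlib

def sha256_password_spec(password: str, salt: str) -> str:
    """Build a 'sha256|<digest>|<salt>' password spec for CREATE USER.

    Assumes digest = hex(SHA-256(password || salt)); check the
    Redshift documentation for the expected digest construction.
    """
    digest = hashlib.sha256((password + salt).encode("utf-8")).hexdigest()
    return f"sha256|{digest}|{salt}"

spec = sha256_password_spec("Ez-Passw0rd", "abc123")
sql = f"CREATE USER analyst PASSWORD '{spec}';"
```

Note this only addresses building the statement; the `authentication method 13` error happens at connection time, which is why trying multiple clients (psql 13/14, Navicat) is a reasonable way to isolate it.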
AWS Real-Time Ad Tracker Architecture
Hello. I'm attempting to build an ad-tracking application that can attribute, store, and then query and analyze website visitor information in real or near real time. Unfortunately, I'm finding it difficult to design the application architecture, as I'm new to AWS overall. So far, I expect my application to look like this:

1. API Gateway to serve as a secure endpoint for websites and ad servers to send website visitor information (think UTM parameters, device resolution, internal IDs, etc.)
2. Lambda/Node.js to route and attribute session information
3. DynamoDB for its ability to handle high-volume write rates in a cost-efficient way
4. S3 to create frequent/on-demand backups of DynamoDB, which can then be analyzed by
5. ? I'm considering passing all S3 data back for client-side processing in my dashboard.

**However:** I just found [this case study with Nasdaq](https://aws.amazon.com/solutions/case-studies/nasdaq-case-study/?pg=ln&sec=c) utilizing [Redshift and other services shown here](https://aws.amazon.com/redshift/?p=ft&c=aa&z=3). Judging from the 'Data' label featured in the first illustration of the latter link (clickstreams, transactions, etc.), it appears to be exactly what I need. So, from a cost, simplicity, and efficiency standpoint, my question is: would it be easier to eliminate DynamoDB and S3 and instead configure my Lambda functions to send their data directly into Redshift? Any guidance would be greatly appreciated, thank you!
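The attribution step (2) in the pipeline above is mostly payload shaping, independent of whether the sink is DynamoDB or Redshift. A sketch of that step as a Lambda-style handler — the field names (`utm_source`, `device_res`, `session_id` as the key) are illustrative assumptions, not a prescribed schema, and the actual DynamoDB write is left as a comment:

```python
import json
import time
import uuid

def lambda_handler(event, context=None):
    """Shape one API Gateway hit into an attribution record."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "session_id": body.get("session_id") or str(uuid.uuid4()),
        "ts": int(time.time() * 1000),            # event time in ms, usable as a sort key
        "utm_source": body.get("utm_source", "direct"),
        "utm_campaign": body.get("utm_campaign", "none"),
        "device_res": body.get("device_res", "unknown"),
    }
    # In the real function you would now persist the item, e.g.:
    # boto3.resource("dynamodb").Table("visits").put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["session_id"]})}

resp = lambda_handler({"body": json.dumps({"utm_source": "adwords", "session_id": "s-1"})})
```

Because this step is sink-agnostic, swapping DynamoDB for Redshift later mostly changes the persistence call, not the handler shape.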
Using Redshift Serverless for Data Sharing
Can we use Redshift Serverless for data sharing? * Example 1: Redshift cluster producer and Redshift Serverless consumer * Example 2: Redshift Serverless producer and Redshift cluster consumer * Example 3: Redshift Serverless producer and Redshift Serverless consumer Note that a Redshift cluster producer with a Redshift cluster consumer is confirmed in the documentation. The limitations section of the [documentation](https://docs.aws.amazon.com/redshift/latest/dg/considerations.html) states "Amazon Redshift only supports data sharing on the ra3.16xlarge, ra3.4xlarge, and ra3.xlplus instance types for producer and consumer clusters." However, the serverless [documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-datasharing.html) has a section on how to share data using Serverless, yet it again mentions only those instance types.
Hi All, I was doing some reading related to stv_partitions (https://docs.aws.amazon.com/redshift/latest/dg/r_STV_PARTITIONS.html). The example in the link shows pct_used evenly distributed across the disk partitions. I tried the same query against the cluster I work on: we have an 8-node cluster with 2 disk partitions per node. Within each node, only one disk partition has a pct_used of 70 while the other shows 0. Is this something we should be concerned about?
Hi, our Amazon Redshift cluster is erroring out with "ERROR: Query (2842874) cancelled on user's request". We are using Redshift query editor v2. Is there any script, query, or log we can check that will tell us why this is happening? If there is any setting we need to change, please let us know. Thanks in advance.
Can we get user inputs for a Lambda function or Redshift PL/pgSQL procedure from a console/UI?
Is there any option to collect inputs from a user through an AWS service and pass those inputs to a Lambda function or a backend Redshift query, instead of creating a custom form and deploying it on a server? For example: with period = Jan-21 as the user input, list all ASINs shipped in Jan-21.
How to truncate a Redshift Serverless table with temporary credentials?
Hi, I connect to Redshift Serverless with the latest boto3 using temporary credentials. I saw the page below and only set WorkgroupName, Database, and Sql, **without DbUser**. https://dev.classmethod.jp/articles/quicksight-folder-group/ In this situation, the truncate operation failed with an error: "Error": "**ERROR: must be owner of relation** <table name>". I think the temporary user doesn't have the required grant. I can see the temporary user in Redshift via 'select * from pg_user;': IAMR:****** usesuper=false. When I used a Redshift cluster, I set DbUser=admin, who has the privileges (usesuper=true) to truncate the table. How can I truncate a table in Redshift Serverless? Thanks,
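With the Data API against Serverless, the caller's IAM identity becomes the database user (the `IAMR:...` entry in pg_user), and `DbUser` is not passed the way it is for a provisioned cluster. One approach, under the assumption that the IAM-mapped user must own the relation to truncate it, is to have an admin transfer ownership once, then run the truncate as the IAM user. A sketch that just builds the `ExecuteStatement` parameters — the workgroup, database, table, and role names are placeholders:

```python
def execute_statement_params(sql: str, *, workgroup: str, database: str) -> dict:
    """Build kwargs for redshift-data ExecuteStatement against Serverless.

    Serverless calls take WorkgroupName rather than ClusterIdentifier,
    and no DbUser: the caller's IAM identity is the database user.
    """
    return {"WorkgroupName": workgroup, "Database": database, "Sql": sql}

# Run once as an admin so the IAM-mapped user can truncate afterwards
# (an assumption to verify: TRUNCATE generally requires table ownership).
fix_owner = execute_statement_params(
    'ALTER TABLE my_table OWNER TO "IAMR:my-role";',
    workgroup="my-workgroup", database="dev",
)
truncate = execute_statement_params(
    "TRUNCATE TABLE my_table;", workgroup="my-workgroup", database="dev",
)
```

Each dict would then be passed as `client.execute_statement(**params)` with a `boto3.client("redshift-data")` client.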
Transfer data periodically from Redshift to DynamoDB
Hi Team, I have a table in my Redshift cluster for which I want to create a script that runs every hour, scans the table for data matching some conditions, and dumps the result to a table in DynamoDB. I was checking the source and target options under the AWS Glue jobs section, but when I select Redshift as the source, there is no DynamoDB option in the target. Is there any way to achieve this?
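Absent a native Glue Redshift-to-DynamoDB target, one common pattern is a scheduled (e.g. hourly EventBridge-triggered) Lambda that runs the query via the Redshift Data API and batch-writes the rows to DynamoDB. A sketch of just the row-to-item transform, assuming the `GetStatementResult` record shape (each cell a typed dict like `{"stringValue": ...}` or `{"longValue": ...}`); the column names are placeholders:

```python
def rows_to_items(columns, records):
    """Convert Redshift Data API result records into DynamoDB put requests.

    `records` mirrors GetStatementResult: a list of rows, each a list of
    single-key typed cells such as {"stringValue": "x"} or {"longValue": 3}.
    """
    items = []
    for row in records:
        item = {}
        for name, cell in zip(columns, row):
            # take whichever typed value the cell carries
            item[name] = next(iter(cell.values()))
        items.append({"PutRequest": {"Item": item}})
    return items

requests = rows_to_items(
    ["order_id", "qty"],
    [[{"stringValue": "o-1"}, {"longValue": 2}],
     [{"stringValue": "o-2"}, {"longValue": 5}]],
)
```

The resulting list is shaped for a `batch_write_item` call (chunked to its 25-item limit); with the low-level client you would additionally wrap each value in a DynamoDB type descriptor.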
How can I file bugs against Redshift?
Hello, I regularly run into bugs in AWS Redshift that I can work around. Is there some way to report issues to the AWS team? We're a startup and don't have a deep support plan where I can just hand it over to an account manager. For instance, when a correlated subquery references a column that doesn't exist in the table, Redshift returns a message that this type of correlated subquery isn't supported due to an internal error, rather than reporting that the column doesn't exist. I have about 10 of these.
Invalid operation when creating Stored Procedure
Similar to this question: https://repost.aws/questions/QUp2M_5mszQu6q8CHjbqVHkA/invalid-operation-when-creating-stored-procedure I am trying to write a stored procedure but am facing the very same error as in the above link. My Redshift cluster is on an updated version: PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.41881. Please help.
DataSync to copy data from one S3 bucket to another S3 bucket in the same account
Hi All, We are looking to copy approximately 500 TB of data from one S3 bucket to another S3 bucket in the same region. Do you think DataSync is the fastest and best available option for the transfer? And approximately how much time will it take to copy 500 TB of data? If we have to copy 7 folders, do we have to create 7 tasks, and can this impact the maximum throughput, which is 10 Gbps? Thanks, Rio
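For a back-of-the-envelope answer to the timing part, the wire-speed lower bound is just volume over bandwidth. A sketch (decimal units: 1 TB = 10^12 bytes, 1 Gbps = 10^9 bits/s; the `efficiency` factor is a hand-wavy discount for protocol overhead and per-object cost, since real DataSync throughput depends heavily on object sizes and task limits):

```python
def transfer_hours(terabytes: float, gbps: float, efficiency: float = 1.0) -> float:
    """Back-of-the-envelope transfer time in hours."""
    bits = terabytes * 1e12 * 8
    seconds = bits / (gbps * 1e9 * efficiency)
    return seconds / 3600.0

hours = transfer_hours(500, 10)  # ~111 hours (about 4.6 days) at a sustained 10 Gbps
```

So even at a perfectly sustained 10 Gbps, 500 TB is a multi-day transfer; many small objects would stretch this further.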