Questions tagged with Amazon Redshift
Can we use Redshift Serverless for data sharing? * Example 1: Redshift cluster producer and Redshift Serverless consumer * Example 2: Redshift Serverless producer and Redshift cluster consumer * Example 3: Redshift Serverless producer and Redshift Serverless consumer. Note that a Redshift cluster producer with a Redshift cluster consumer is confirmed in the documentation. The limitations section of the [documentation](https://docs.aws.amazon.com/redshift/latest/dg/considerations.html) states "Amazon Redshift only supports data sharing on the ra3.16xlarge, ra3.4xlarge, and ra3.xlplus instance types for producer and consumer clusters." However, the serverless [documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-datasharing.html) has a section on how to share data using serverless, but again mentions only those instance types.
Hi All, I was doing some reading related to stv_partitions (https://docs.aws.amazon.com/redshift/latest/dg/r_STV_PARTITIONS.html). The example in the link shows pct_used evenly distributed across the disk partitions. I tried the same against the instance I work on and see that we have an 8-node cluster with 2 disk partitions per node. Within each node, only one disk partition has a pct_used of 70 while the other shows 0. Is this something we should be concerned about?
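A minimal sketch, in Python, of how one might flag the per-node imbalance described above; it assumes the rows have already been fetched from STV_PARTITIONS (e.g. via a query computing pct_used from the used and capacity columns), and the 50-point spread threshold is an arbitrary illustration:

```python
# Sketch: flag nodes where disk usage is concentrated on one partition.
# Rows are (owner_node, disk_no, pct_used) tuples as they might come back
# from a query over stv_partitions computing used * 100.0 / capacity.
from collections import defaultdict

def imbalanced_nodes(rows, max_spread=50.0):
    """Return node ids whose partitions differ in pct_used by more than max_spread."""
    by_node = defaultdict(list)
    for node, disk, pct in rows:
        by_node[node].append(pct)
    return sorted(
        node for node, pcts in by_node.items()
        if max(pcts) - min(pcts) > max_spread
    )

rows = [
    (0, 0, 70.0), (0, 1, 0.0),   # node 0: all usage on one partition
    (1, 0, 35.0), (1, 1, 33.0),  # node 1: evenly spread
]
print(imbalanced_nodes(rows))  # → [0]
```

A 70/0 split on every node, as described, would be flagged by a check like this; whether it is actually a problem depends on total capacity headroom.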
Hi, our Amazon Redshift cluster is erroring out with "ERROR: Query (2842874) cancelled on user's request". We are using Redshift query editor v2. Is there any script, query, or log we can check that will tell us why this is happening? If there is any setting we need to change, please let us know. Thanks in advance.
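As a hedged starting point for the question above: Redshift logs WLM query monitoring rule aborts in STL_WLM_RULE_ACTION, and STL_QUERY records whether a query was aborted. A small Python sketch that just assembles those diagnostic queries for the query id from the error message:

```python
# Diagnostic queries (run these in the query editor) to investigate why a
# query was cancelled. STL_WLM_RULE_ACTION shows queries aborted by WLM
# query monitoring rules; STL_QUERY's aborted column shows abort status.
query_id = 2842874

diagnostics = [
    f"SELECT * FROM stl_wlm_rule_action WHERE query = {query_id};",
    f"SELECT query, aborted, starttime, endtime FROM stl_query WHERE query = {query_id};",
]
for sql in diagnostics:
    print(sql)
```

If STL_WLM_RULE_ACTION returns a row, a WLM rule (e.g. a timeout) aborted the query rather than an actual user cancellation.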
Is there any option to collect inputs from a user using an AWS service and pass those inputs to a Lambda function or a backend Redshift query, instead of creating a custom form and deploying it on a server? For example: the user enters period = Jan-21, and based on this input we list all ASINs shipped in Jan-21.
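One common pattern for the above is an API Gateway + Lambda front end: the user's input arrives as a query parameter in the event, and the Lambda forwards it to the backend query. A minimal sketch under that assumption; the event shape follows API Gateway's proxy integration, and the table/column names are purely illustrative:

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler: reads a 'period' query parameter (e.g. 'Jan-21')
    and builds the backend Redshift query for ASINs shipped in that period."""
    params = event.get("queryStringParameters") or {}
    period = params.get("period")
    if not period:
        return {"statusCode": 400, "body": json.dumps({"error": "period is required"})}

    # In a real setup this SQL would be run against Redshift (e.g. via the
    # Data API) with 'period' bound as a parameter; names here are assumed.
    sql = "SELECT asin FROM shipments WHERE ship_period = :period"
    return {"statusCode": 200, "body": json.dumps({"sql": sql, "period": period})}

# Example invocation, shaped like an API Gateway proxy event:
resp = lambda_handler({"queryStringParameters": {"period": "Jan-21"}}, None)
print(resp["statusCode"])  # → 200
```

This avoids hosting any custom form server; a static page (or even a plain URL with `?period=Jan-21`) can drive it.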
Hi, I connect to Redshift Serverless with the latest boto3 using temporary credentials. I saw the page below and only set WorkgroupName, Database, and Sql, **without DbUser**. https://dev.classmethod.jp/articles/quicksight-folder-group/ In this situation, the truncate operation failed with an error: "Error": "**ERROR: must be owner of relation** <table name>". I think the temporary user doesn't have the required grant. I can see the temporary user in Redshift via 'select * from pg_user;': IAMR:****** usesuper=false. When I used a Redshift cluster, I set DbUser=admin, who has the privileges (usesuper=true) to truncate the table. How can I truncate a table in Redshift Serverless? Thanks,
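For reference, a sketch of the Data API call shape the question describes. With Serverless, the database identity comes from the caller's IAM role (the `IAMR:<role>` user seen in pg_user) or a Secrets Manager secret, so the fix is to give that identity sufficient privilege on the table; the role and table names below are assumptions:

```python
def data_api_params(workgroup, database, sql, secret_arn=None):
    """Build kwargs for redshift-data execute_statement. For Serverless you
    pass WorkgroupName (not ClusterIdentifier), and DbUser is not used --
    identity comes from the caller's IAM role or SecretArn."""
    params = {"WorkgroupName": workgroup, "Database": database, "Sql": sql}
    if secret_arn:
        params["SecretArn"] = secret_arn
    return params

# Since TRUNCATE requires ownership (or equivalent privilege), one option is
# to run, once, as the admin user:
#   ALTER TABLE my_table OWNER TO "IAMR:my-role";
# after which the Data API call below should succeed. In a real script:
#   boto3.client("redshift-data").execute_statement(**params)
params = data_api_params("my-workgroup", "dev", "TRUNCATE TABLE my_table;")
print(sorted(params))  # → ['Database', 'Sql', 'WorkgroupName']
```

Alternatively, passing a SecretArn for a privileged database user keeps the IAM role unprivileged.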
Hi Team, I have a table in my Redshift cluster for which I want to create a script that runs every hour, scans this table for some data based on certain conditions, and dumps the results to a DynamoDB table. I was checking the source and target options under the AWS Glue jobs section, but when I select Redshift as the source, there is no DynamoDB option as a target. Is there any way to achieve this?
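Besides Glue, one workable pattern for the above is a Lambda on an hourly EventBridge schedule that queries Redshift via the Data API and writes to DynamoDB with boto3. The AWS-side calls need real credentials, but the record-mapping step in the middle is pure and can be sketched; column and field names here are illustrative:

```python
def records_to_items(columns, records):
    """Map Redshift Data API result records (lists of typed value dicts such
    as {'stringValue': ...} or {'longValue': ...}, as returned by
    get_statement_result) to plain dict items for DynamoDB."""
    items = []
    for record in records:
        item = {}
        for col, cell in zip(columns, record):
            # Each cell is a one-key dict like {'stringValue': 'abc'};
            # take whichever typed value is present.
            item[col] = next(iter(cell.values()))
        items.append(item)
    return items

columns = ["order_id", "qty"]
records = [[{"stringValue": "A1"}, {"longValue": 3}]]
print(records_to_items(columns, records))  # → [{'order_id': 'A1', 'qty': 3}]
```

The resulting items can then be written in batches via `table.batch_writer()` on a boto3 DynamoDB Table resource.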
We are planning Redshift chaos testing. We would like to know what feasible chaos experiments can be considered for testing. Thanks.
Hello, I regularly run into bugs I can work around in AWS Redshift. Is there some way to report issues to the AWS team? We're a startup and don't have a deep support plan where I can just hand it over to an account manager. For instance, when doing a correlated subquery on a table where the correlated subquery column doesn't exist, Redshift returns a message that this type of correlated subquery isn't supported due to an internal error, rather than reporting that the column doesn't exist. I have about 10 of these.
This question is similar to: https://repost.aws/questions/QUp2M_5mszQu6q8CHjbqVHkA/invalid-operation-when-creating-stored-procedure I am trying to write a stored procedure but am facing the very same error as in the above link. My Redshift cluster is on an updated version: PostgreSQL 8.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 20041017 (Red Hat 3.4.2-6.fc3), Redshift 1.0.41881. Please help.
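One common cause of "invalid operation" when creating procedures is the client splitting the statement at the semicolons inside the `$$ ... $$` body, so only a fragment reaches Redshift. A minimal known-valid procedure to test with (held in a Python string here only so it can be checked programmatically); if even this fails, try submitting the whole block as a single statement:

```python
# Minimal syntactically valid Redshift stored procedure for isolating the
# error: if this block also fails, the client is likely splitting at the
# inner semicolons rather than treating $$ ... $$ as one statement.
create_proc = """
CREATE OR REPLACE PROCEDURE test_sp(IN msg VARCHAR(100))
AS $$
BEGIN
  RAISE INFO 'message: %', msg;
END;
$$ LANGUAGE plpgsql;
"""

# Sanity-check the statement has exactly one $$-delimited body.
print(create_proc.count("$$"))  # → 2
```

Query editor v2 generally handles `$$` bodies; older clients may need the block pasted and run as one unit.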
Hi All, we are looking to copy approximately 500 TB of data from one S3 bucket to another S3 bucket in the same region. Do you think that DataSync is the fastest and best available option for the transfer? And how much time will it take approximately to copy 500 TB of data? If we have to copy 7 folders, do we have to create 7 tasks, and can this impact the maximum throughput, which is 10 Gbps? Thanks, Rio
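As a back-of-the-envelope bound for the question above, assuming a sustained 10 Gbps (real throughput varies with object sizes, task overhead, and parallelism):

```python
# Transfer time for 500 TB at a sustained 10 Gbps (decimal units).
tb = 500
bits = tb * 10**12 * 8          # 4e15 bits
throughput_bps = 10 * 10**9     # 10 Gbps
seconds = bits / throughput_bps # 400,000 s
hours = seconds / 3600
print(round(hours, 1))  # → 111.1  (roughly 4.6 days)
```

So even at the full 10 Gbps, 500 TB takes on the order of days; splitting into multiple tasks mostly affects how that shared bandwidth is divided, not the total.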
We are trying to copy a dataset from EMR to Redshift which consists of around 13 billion records and 20-25 columns. I tried copying the dataset with the traditional method using the COPY command through an S3 bucket pointing to EMR, but it is taking more than 24 hours to copy the dataset. The current EMR configuration is 1 main node of r5.4xlarge and 2 core nodes of r5.4xlarge, and the Redshift configuration is 2 ra3.xlplus nodes. We have also enabled a sort key, dist key, and compression. The dataset is stored in Parquet format in the S3 bucket. Please suggest a way to copy the dataset in as little time as possible.
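One thing to check for the slow COPY above: COPY parallelism scales with the number of input files, since each slice loads files in parallel. A sketch of the file-count reasoning, assuming 2 slices per ra3.xlplus node (an assumption worth verifying against the cluster's own slice count):

```python
# COPY loads one file per slice at a time, so the input should be split into
# at least as many files as there are slices -- ideally a multiple of it.
def minimum_file_count(nodes, slices_per_node, files_per_slice=1):
    """Smallest file count that keeps every slice busy during COPY."""
    return nodes * slices_per_node * files_per_slice

# Assumed figures for a 2-node ra3.xlplus cluster:
slices = minimum_file_count(nodes=2, slices_per_node=2)
print(slices)  # → 4
# A single huge Parquet file (or very few files) serializes the load.
# Re-writing the EMR output as many moderately sized files and loading with
#   COPY my_table FROM 's3://bucket/prefix/' IAM_ROLE '...' FORMAT AS PARQUET;
# lets all slices work in parallel (statement shown for illustration).
```

Also worth considering: loading into a table without the sort key and adding it afterwards, since sorting during load adds cost at this scale.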
Hi re:Post community, we are trying to figure out a way to automatically move data from Redshift to OpenSearch using AWS Glue. In our initial research so far we found that connecting to Redshift is possible in Glue, but connecting to OpenSearch is not natively supported. The suggested workaround is to use this connector from the marketplace: https://aws.amazon.com/marketplace/pp/prodview-v5ygernwn2gb6 However, we are seeing that this open source connector only supports legacy Elasticsearch versions and not OpenSearch. Is it possible to do this in Glue, or should we be using some other service altogether? Thanks in advance!
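One alternative to the marketplace connector is to write to OpenSearch directly from the Glue job's Python code using the `opensearch-py` client (which does target OpenSearch, unlike legacy Elasticsearch-only connectors), bulk-indexing the rows read from Redshift. The cluster call itself needs a live endpoint, but the action-formatting step can be sketched; index and field names below are examples:

```python
def to_bulk_actions(index, docs, id_field=None):
    """Format documents as actions suitable for opensearch-py's
    helpers.bulk(). Each action names the target index and optionally
    derives the document id from one of the fields."""
    actions = []
    for doc in docs:
        action = {"_index": index, "_source": doc}
        if id_field:
            action["_id"] = doc[id_field]
        actions.append(action)
    return actions

docs = [{"sku": "A1", "price": 10}]
actions = to_bulk_actions("products", docs, id_field="sku")
print(actions[0]["_id"])  # → A1
```

In the Glue job this would be followed by `helpers.bulk(client, actions)` with an `OpenSearch` client pointed at the domain endpoint; `opensearch-py` can be supplied via the job's additional Python modules setting.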