Questions tagged with AWS Data Pipeline

AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.

I have multiple CSVs about a single patient and I would like to know how to combine them, because the columns across the CSVs together make up all the information for one patient. The CSVs are...
1 answer · 0 votes · 528 views
CYN · asked a year ago
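For the question above: if each CSV carries a different set of columns for the same patients, the files can be joined horizontally on the patient identifier. A minimal dependency-free sketch; the key column name `patient_id`, and the assumption that every file contains it, are placeholders to adapt to the real data:

```python
import csv

def combine_patient_csvs(paths, key="patient_id"):
    """Merge CSVs that each hold different columns for the same patients.

    Rows are joined on the key column ('patient_id' is an assumption --
    use the real ID column). Returns (column_order, merged_rows).
    """
    merged = {}        # patient id -> combined row dict
    columns = [key]    # preserve column order across files
    for path in paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                pid = row[key]
                merged.setdefault(pid, {key: pid})
                for col, val in row.items():
                    if col != key:
                        if col not in columns:
                            columns.append(col)
                        merged[pid][col] = val
    return columns, list(merged.values())
```

With pandas available, the equivalent join is an outer `merge` of the frames on the key column, folded across the file list.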
I am getting the following error in my data pipeline: `We currently do not have sufficient t1.micro capacity in the Availability Zone you requested (ap-southeast-2a). Our system will be working on...
2 answers · 0 votes · 324 views
asked a year ago
I'm trying to delete all the data pipelines that show up in ``` aws datapipeline list-pipelines ``` in one go. How do I do that using the AWS CLI?
1 answer · 0 votes · 387 views
Kaviya · asked a year ago
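On the bulk-delete question above: one route is to chain two CLI calls (`aws datapipeline list-pipelines --query 'pipelineIdList[].id' --output text` piped through `xargs` into `aws datapipeline delete-pipeline --pipeline-id`). The same loop in Python with boto3, sketched with the client injected so any object exposing `list_pipelines`/`delete_pipeline` works, following the ListPipelines `marker`/`hasMoreResults` pagination:

```python
def delete_all_pipelines(client):
    """Delete every pipeline the account can list; returns the deleted ids.

    `client` is expected to behave like boto3.client("datapipeline").
    """
    deleted = []
    marker = None
    while True:
        kwargs = {"marker": marker} if marker else {}
        page = client.list_pipelines(**kwargs)
        for pipeline in page.get("pipelineIdList", []):
            client.delete_pipeline(pipelineId=pipeline["id"])
            deleted.append(pipeline["id"])
        if not page.get("hasMoreResults"):
            return deleted
        marker = page["marker"]
```

Invoke it as `delete_all_pipelines(boto3.client("datapipeline"))`; deletion is irreversible, so printing the id list before deleting is a sensible precaution.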
I want to get a notification through EventBridge only when a Data Pipeline fails. I used the following event pattern, but it didn't work. It can be done through the Data Pipeline SNS...
2 answers · 0 votes · 498 views
shyam · asked a year ago
We have one API Gateway that receives gzipped meter data 24x7, and the data comes in concurrently (sometimes 5,000 posts per second, sometimes far fewer). We are sure the compressed data won't...
1 answer · 1 vote · 446 views
nobady · asked a year ago
I'm trying to set up an AWS Data Pipeline so I can clone large Hugging Face repos to S3. I'm encountering issues when creating the permissions policy to use with a role for my data pipeline. [I'm...
2 answers · 0 votes · 361 views
Nas · asked a year ago
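On the permissions-policy question above: a role that writes cloned repos into a bucket typically needs object read/write plus bucket listing. A minimal sketch of such a policy; the bucket name `my-hf-mirror` is a placeholder, and large clones usually also need `s3:AbortMultipartUpload` for cleanup of interrupted uploads:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListTargetBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-hf-mirror"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:AbortMultipartUpload"],
      "Resource": "arn:aws:s3:::my-hf-mirror/*"
    }
  ]
}
```

Note the split: bucket-level actions go on the bucket ARN, object-level actions on the `/*` ARN; mixing them on one resource is a common cause of AccessDenied errors.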
Hello. I'm running SageMaker training jobs through a library called ZenML. The library is just there as an abstraction layer, so that the artifacts I return get automatically saved to S3. The...
0 answers · 0 votes · 156 views
asked a year ago
Hi, I'm running a data pipeline from a legacy Oracle DB to Redshift using AWS Glue. I want to test the connection to the legacy DB before executing the ETL, without a test query in working Python...
1 answer · 0 votes · 331 views
Woo · asked a year ago
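On the connection-test question above: one lightweight pre-check that avoids issuing any SQL is a plain TCP reachability probe against the Oracle listener (default port 1521) from a host on the same network path as the Glue job. A sketch; this only proves the port answers, not that credentials or the Glue connection object are valid:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would look like `can_connect("legacy-db.example.internal", 1521)` (hypothetical hostname); run it early in the job script and fail fast before the ETL starts.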
Hi Team, I have set up an **AWS DataPipeline** to run my EMR jobs on `On-Demand` instances. However, I now want to switch to using `Spot` Instances to reduce costs. I have configured the...
2 answers · 0 votes · 390 views
asked a year ago
Hello team, per our client's requirement we need to migrate data from QuickBooks to AWS RDS; for that we need to use the AppFlow service and its connector for QuickBooks. Has anyone been able to connect via...
0 answers · 0 votes · 141 views
asked a year ago
I have to convert a pipeline to a Step Functions state machine, and I have a step which loads a DynamoDB table with data from S3. I am not sure how to convert this to Step Functions. Below is the step. Any idea how...
1 answer · 0 votes · 441 views
asked a year ago
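On the conversion question above: Step Functions has no built-in bulk "S3 to DynamoDB" load state, so a common pattern is a Task state that invokes a Lambda function which reads the S3 object and writes items through a DynamoDB batch writer. A sketch of that Lambda's core, with the table object injected so it can be stubbed; bucket, key, and table names would come from the state input and are not shown:

```python
import csv
import io

def load_csv_into_table(table, csv_text):
    """Write each CSV row as a DynamoDB item via the table's batch writer.

    `table` is expected to behave like a boto3 DynamoDB Table resource.
    In real use the rows must contain the table's key attributes.
    """
    count = 0
    with table.batch_writer() as batch:
        for row in csv.DictReader(io.StringIO(csv_text)):
            batch.put_item(Item=row)
            count += 1
    return count
```

In the Lambda handler, `csv_text` would come from `s3.get_object(...)["Body"].read().decode()` and `table` from `boto3.resource("dynamodb").Table(...)`; for very large files, a Distributed Map over the objects is the managed alternative.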
Good afternoon. I would like to know: when we export a snapshot of a MySQL 5.7 RDS instance with 8 TB of allocated storage and restore it from S3 to a new RDS instance, can it only consume an instance that has...
1 answer · 0 votes · 348 views
asked a year ago