
Welcome to AWS re:Post



Recent questions


Manage Greengrass-V2 Components in central account

I'm currently trying to create a component in a tenant account using an artifact packaged in a central-account S3 bucket. The tenant account and central account are in the same AWS Organization. I've tried the following settings to enable the tenant accounts to access the S3 bucket:

1. On the central account S3 bucket (I wasn't sure which principal service/user was performing the access test, so I just "shotgunned" it):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "greengrass.amazonaws.com",
                    "iot.amazonaws.com",
                    "credentials.iot.amazonaws.com"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::MY-CENTRAL-ACCOUNT-BUCKET/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetObjectTorrent",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectAcl"
            ],
            "Resource": "arn:aws:s3:::MY-CENTRAL-ACCOUNT-BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "o-abc123def456"
                }
            }
        },
        ...
    ]
}
```

2. On the `GreengrassV2TokenExchangeRole` in the tenant account, I've added the `AmazonS3FullAccess` AWS managed policy (just to see if I could eliminate this role as the blocker).

I've verified that, as a user in the tenant account, I have access to the object in S3 and can run `aws s3 cp` as a tenant user (so the bucket policy doesn't seem to be blocking things). Whenever I try creating the component in the tenant account, I'm met with:

```
Invalid Input: Encountered following errors in Artifacts: {s3://MY-CENTRAL-ACCOUNT-BUCKET/com.example.my-component-name/1.0.0-dev.0/application.zip = Specified artifact resource cannot be accessed}
```

... using either the AWS IoT Greengrass console or the AWS CLI. What am I missing? Is there a different service-linked role I should be allowing in the S3 bucket resource policy? It seems like an access test during component creation rather than an actual attempt to access the resource.

I'm fairly certain that if I assumed the Greengrass TES role, I'd be able to download the artifact too (although I haven't explicitly tried that yet).
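For reference, the org-scoped statement is the one that grants tenant-account identities (such as the token exchange role) read access; a minimal sketch of generating just that statement programmatically, so it can be diffed against what's actually deployed. The bucket name and org ID below are placeholders, and this only reproduces the second statement from the policy above:

```python
import json

def org_scoped_artifact_statement(bucket: str, org_id: str) -> dict:
    """Build an org-scoped read statement for a Greengrass artifact bucket.

    Grants s3:GetObject/GetObjectVersion to any principal in the AWS
    Organization identified by org_id (placeholder values used below).
    """
    return {
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:GetObject", "s3:GetObjectVersion"],
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": org_id}},
    }

policy = {
    "Version": "2012-10-17",
    "Statement": [
        org_scoped_artifact_statement("MY-CENTRAL-ACCOUNT-BUCKET", "o-abc123def456")
    ],
}
print(json.dumps(policy, indent=2))
```

A statement generated this way can be pasted into the bucket policy editor or applied with `aws s3api put-bucket-policy`.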
0 answers · 0 votes · 6 views · asked 4 hours ago

Data Catalog schema table getting modified when I run my Glue ETL job

I created a Data Catalog with a table that I manually defined. I run my ETL job and all works well. I then added partitions to both the table in the Data Catalog and the ETL job. The job creates the partitions, and I see the folders being created in S3 as well. But my table's data types change. I originally had:

| column | data type |
| --- | --- |
| vid | string |
| altid | string |
| vtype | string |
| time | timestamp |
| timegmt | timestamp |
| value | float |
| filename | string |
| year | int |
| month | int |
| day | int |

But now, after the ETL job with partitions, my table ends up like so:

| column | data type |
| --- | --- |
| vid | string |
| altid | string |
| vtype | string |
| time | bigint |
| timegmt | bigint |
| value | float |
| filename | string |
| year | bigint |
| month | bigint |
| day | bigint |

Before this change of data types, I could run queries in Athena, including one like this:

```
SELECT * FROM "gp550-load-database"."gp550-load-table-beta"
WHERE vid IN ('F_NORTH', 'F_EAST', 'F_WEST', 'F_SOUTH', 'F_SEAST')
    AND vtype='LOAD'
    AND time BETWEEN TIMESTAMP '2021-05-13 06:00:00' AND TIMESTAMP '2022-05-13 06:00:00'
```

But now, with the data types changed, I get an error when trying to run a query like the above:

```
SYNTAX_ERROR: line 1:154: Cannot check if bigint is BETWEEN timestamp and timestamp

This query ran against the "gp550-load-database" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 2a5287bc-7ac2-43a8-b617-bf01c63b00d5
```

If I then go into the table and change the data type back to `timestamp` and run the query, I get a different error:

```
HIVE_PARTITION_SCHEMA_MISMATCH: There is a mismatch between the table and partition schemas. The types are incompatible and cannot be coerced. The column 'time' in table 'gp550-load-database.gp550-load-table-beta' is declared as type 'timestamp', but partition 'year=2022/month=2/day=2' declared column 'time' as type 'bigint'. This query ran against the "gp550-load-database" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f788ea2b-e274-43fe-a3d9-22d80a2bbbab
```

Does anyone know what is happening?
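One pattern consistent with both errors: if the ETL job writes `time` out as epoch values (64-bit integers), the partition schema gets recorded as `bigint` regardless of what the table declares, since Glue/Athena map 64-bit integers to `bigint`. A small plain-Python illustration (no Glue required; the epoch value is hypothetical) of how the same instant is an `int` until it is explicitly converted:

```python
from datetime import datetime, timezone

# Hypothetical value as it might appear in the partitioned output:
# epoch seconds stored as a 64-bit integer, which maps to bigint
# in the Glue/Athena schema.
epoch_seconds = 1652421600  # 2022-05-13 06:00:00 UTC

# Explicit conversion to a timestamp-typed value.
as_timestamp = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

print(type(epoch_seconds).__name__)   # int  -> bigint column
print(as_timestamp.isoformat())       # 2022-05-13T06:00:00+00:00
```

If that matches what is in the data, the usual fix is to cast the column to a timestamp type inside the job (e.g. in an ApplyMapping step) before writing the partitioned output, so the partition schema agrees with the table schema.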
0 answers · 0 votes · 6 views · asked 7 hours ago

Token Exchange Service (TES) failing to fetch credentials

**Gentlefolks, why is the token exchange service (TES) failing to fetch credentials?**

I have Greengrass installed on Ubuntu 20.x.x running on a virtual machine. At the end of the numbered items is a truncated error log obtained from `/greengrass/v2/logs/greengrass.log`. Thank you.

What I've done, or what I think you should know:

1. There exists a `GreenGrassServiceRole` which contains `AWSGreengrassResourceAccessRolePolicy` and every other policy containing the word "greengrass" in its name. It also has the trust relationship below. The role was created when I installed Greengrass, but I added additional policies.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "greengrass.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "<MY_ACCOUNT_NUMBER>"
                },
                "ArnLike": {
                    "aws:SourceArn": "<ARN_THAT_SHOWS_MY_REGION_AND_ACC_NUMBER>:*"
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "credentials.iot.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

2. The `aws.greengrass.Nucleus` component is deployed with the following configuration update. The role alias also exists.

```
{
    "reset": [],
    "merge": {
        "ComponentConfiguration": {
            "DefaultConfiguration": {
                "iotRoleAlias": "GreengrassV2TokenExchangeRoleAlias",
                "awsRegion": "us-east-1",
                "iotCredEndpoint": "https://sts.us-east-1.amazonaws.com",
                "iotDataEndpoint": "<ENDPOINT_OBTAINED_FROM_IOT_CORE_SETTINGS>"
            }
        }
    }
}
```

3. `aws.greengrass.TokenExchangeService` is deployed.

4. There's a custom component that uses the Greengrass SDK to publish to IoT Core. It has the following configuration update.

```
{
    "reset": [],
    "merge": {
        "ComponentDependencies": {
            "aws.greengrass.TokenExchangeService": {
                "VersionRequirement": "^2.0.0",
                "DependencyType": "HARD"
            }
        }
    }
}
```

5. There is an IoT policy from a previous exercise which is attached to the core device's (Ubuntu on a virtual machine) certificate. It allows **all** actions. There's also another policy, `GreengrassTESCertificatePolicyGreengrassV2TokenExchangeRoleAlias`, which is associated with the thing's certificate. It allows `iot:AssumeRoleWithCertificate`.

**ERROR LOG BELOW**

```
2022-05-21T21:07:39.592Z [ERROR] (pool-2-thread-28) com.aws.greengrass.tes.CredentialRequestHandler: Error in retrieving AwsCredentials from TES. {iotCredentialsPath=/role-aliases/GreengrassV2TokenExchangeRoleAlias/credentials, credentialData=Failed to get connection}
2022-05-21T21:08:38.071Z [WARN] (pool-2-thread-28) com.aws.greengrass.tes.CredentialRequestHandler: Encountered error while fetching credentials. {iotCredentialsPath=/role-aliases/GreengrassV2TokenExchangeRoleAlias/credentials}
com.aws.greengrass.deployment.exceptions.AWSIotException: Unable to get response
    at com.aws.greengrass.iot.IotCloudHelper.getHttpResponse(IotCloudHelper.java:95)
    at com.aws.greengrass.iot.IotCloudHelper.lambda$sendHttpRequest$1(IotCloudHelper.java:80)
    at com.aws.greengrass.util.BaseRetryableAccessor.retry(BaseRetryableAccessor.java:32)
    at com.aws.greengrass.iot.IotCloudHelper.sendHttpRequest(IotCloudHelper.java:81)
    at com.aws.greengrass.tes.CredentialRequestHandler.getCredentialsBypassCache(CredentialRequestHandler.java:207)
    at com.aws.greengrass.tes.CredentialRequestHandler.getCredentials(CredentialRequestHandler.java:328)
    at com.aws.greengrass.tes.CredentialRequestHandler.getAwsCredentials(CredentialRequestHandler.java:337)
    at com.aws.greengrass.tes.LazyCredentialProvider.resolveCredentials(LazyCredentialProvider.java:24)
    at software.amazon.awssdk.awscore.internal.AwsExecutionContextBuilder.resolveCredentials(AwsExecutionContextBuilder.java:165)
    at software.amazon.awssdk.awscore.internal.AwsExecutionContextBuilder.invokeInterceptorsAndCreateExecutionContext(AwsExecutionContextBuilder.java:102)
    at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.invokeInterceptorsAndCreateExecutionContext(AwsSyncClientHandler.java:69)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:78)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:175)
    at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76)
    at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
    at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56)
    at software.amazon.awssdk.services.s3.DefaultS3Client.getBucketLocation(DefaultS3Client.java:3382)
    at com.aws.greengrass.componentmanager.builtins.S3Downloader.lambda$getRegionClientForBucket$2(S3Downloader.java:134)
    at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:50)
    at com.aws.greengrass.componentmanager.builtins.S3Downloader.getRegionClientForBucket(S3Downloader.java:133)
    at com.aws.greengrass.componentmanager.builtins.S3Downloader.getDownloadSize(S3Downloader.java:115)
    at com.aws.greengrass.componentmanager.ComponentManager.prepareArtifacts(ComponentManager.java:420)
    at com.aws.greengrass.componentmanager.ComponentManager.preparePackage(ComponentManager.java:377)
    at com.aws.greengrass.componentmanager.ComponentManager.lambda$preparePackages$1(ComponentManager.java:338)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.UnknownHostException: https
    at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
    at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
    at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
    at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112)
    at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
    at jdk.internal.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:80)
    at com.sun.proxy.$Proxy15.connect(Unknown Source)
    at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
    at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
    at software.amazon.awssdk.http.apache.internal.impl.ApacheSdkHttpClient.execute(ApacheSdkHttpClient.java:72)
    at software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:253)
    at software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:106)
    at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:232)
    at com.aws.greengrass.iot.IotCloudHelper.getHttpResponse(IotCloudHelper.java:88)
    ... 27 more
2022-05-21T21:08:38.073Z [ERROR] (pool-2-thread-28) com.aws.greengrass.tes.CredentialRequestHandler: Error in retrieving AwsCredentials from TES. {iotCredentialsPath=/role-aliases/GreengrassV2TokenExchangeRoleAlias/credentials, credentialData=Failed to get connection}
```
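A side observation on the log: `Caused by: java.net.UnknownHostException: https` suggests the HTTP client is treating the literal string `https` as a hostname, which would happen if `iotCredEndpoint` contains a URL with a scheme rather than a bare hostname (and the Nucleus expects the IoT credentials endpoint from `aws iot describe-endpoint --endpoint-type iot:CredentialProvider`, not an STS endpoint). A small sketch of the check, treating the configured value as input:

```python
from urllib.parse import urlparse

def normalize_cred_endpoint(value: str) -> str:
    """Return a bare hostname suitable for iotCredEndpoint.

    If the configured value includes a scheme (e.g. "https://..."),
    strip it down to the host part; a scheme-prefixed value can make
    an HTTP client resolve "https" as the host, matching the
    UnknownHostException in the log above.
    """
    if "://" in value:
        return urlparse(value).netloc
    return value

# The value from the Nucleus configuration in item 2:
print(normalize_cred_endpoint("https://sts.us-east-1.amazonaws.com"))
# -> sts.us-east-1.amazonaws.com (scheme stripped; note this is still
#    an STS hostname, not an IoT credentials-provider endpoint)
```

This is only a diagnosis aid, not a fix: the corrected configuration would use the account-specific `*.credentials.iot.us-east-1.amazonaws.com` endpoint as the value.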
1 answer · 0 votes · 10 views · asked 7 hours ago

Elemental Mediaconvert job template for Video on Demand

I launched the fully managed video-on-demand template from https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/?did=sl_card&trk=sl_card. I have a bunch of questions on how to tailor this service to my use case; I'll ask each one separately.

Firstly, is it possible to use my own GUID as an identifier for the MediaConvert jobs and outputs? The default GUID tagged onto the videos in this workflow is independent of my application server, so it's difficult for the server to track who owns which video in the destination S3 bucket.

Secondly, I would like to compress the video input for cases where the resolution is higher than 1080p; for my service I don't want to process any videos above 1080p. Is there a way I can achieve this without adding a Lambda during the ingestion stage to compress it? I know it can be compressed on the client, but I'm hoping this can be achieved within the workflow, perhaps using MediaConvert.

Thirdly, based on some of the materials I came across about this service, aside from the HLS files MediaConvert generates, it's supposed to generate an MP4 version of my video for cases where a client wants to download the full video as opposed to streaming it. That is not the default behaviour; how do I achieve this?

Lastly, how do I add watermarks to my videos in this workflow?

Forgive me if some of these questions feel like things I could have easily researched and solved on my own. I did do some research, but I failed to get a clear understanding of any of it.
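On the first question: MediaConvert jobs accept a `UserMetadata` map of key/value pairs that is echoed back in the job's CloudWatch/EventBridge state-change events, which is one common way to carry an application-side ID through the workflow without replacing the solution's internal GUID. A hedged sketch of what such a `CreateJob` request body could look like (the template name, role ARN, input URI, and `appVideoId` key are placeholder assumptions, and in this solution the value would need to be passed through its ingest Lambda rather than set by hand):

```python
import json

def job_request_with_app_id(template: str, role_arn: str,
                            input_uri: str, app_video_id: str) -> dict:
    """Sketch of a MediaConvert CreateJob request carrying an
    application-side identifier in UserMetadata, so job-state events
    can be correlated with the app's own records.

    All names here are illustrative placeholders.
    """
    return {
        "JobTemplate": template,
        "Role": role_arn,
        "Settings": {"Inputs": [{"FileInput": input_uri}]},
        # UserMetadata is echoed back in job-state-change events.
        "UserMetadata": {"appVideoId": app_video_id},
    }

req = job_request_with_app_id(
    "vod-template",
    "arn:aws:iam::123456789012:role/MediaConvertRole",
    "s3://source-bucket/video.mp4",
    "my-guid-0001",
)
print(json.dumps(req["UserMetadata"]))
```

A request shaped like this would be passed to boto3's `mediaconvert` client `create_job(**req)`; the application server can then match the `userMetadata` field in the job-completed event against its own database.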
0 answers · 0 votes · 2 views · asked 8 hours ago