All Questions


Hello, I'm trying to set up Amazon Connect to integrate it with Zoho CRM. I want to claim a Honduras phone number, but Honduras isn't listed in the instance I created in us-east-1, even though it's listed as a supported country at https://aws.amazon.com/connect/pricing/
0 answers · 0 votes · 3 views · asked an hour ago
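To rule out the console UI in the question above, you can ask the API directly which numbers it will offer your instance. A minimal boto3 sketch (the instance ARN is a placeholder; whether Honduras numbers appear here or need a separate request to AWS Support is not something I can confirm):

```
import boto3

connect = boto3.client("connect", region_name="us-east-1")

# Hypothetical instance ARN -- replace with your own.
instance_arn = "arn:aws:connect:us-east-1:123456789012:instance/EXAMPLE"

resp = connect.search_available_phone_numbers(
    TargetArn=instance_arn,
    PhoneNumberCountryCode="HN",   # Honduras
    PhoneNumberType="DID",         # also try "TOLL_FREE"
    MaxResults=10,
)
for number in resp.get("AvailableNumbersList", []):
    print(number["PhoneNumber"])
```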
Hi, we use log4j to manage logging in our custom components for Greengrass (v2), e.g. `log.error("Something terrible happened")` or `log.info("Just something I like to log")`. However, I find that regardless of the level, everything shows up at "INFO" level in the Greengrass component logs in CloudWatch, e.g. `{"thread":"Copier","level":"INFO","eventType":"stdout","message":"ERROR [<cut>:64] Something terrible happened {error_message=<cut>}`. Is there a way to map the levels properly? Right now we have to sync all log levels to CloudWatch to do e.g. monitoring or CloudWatch alarms. Thanks, J
0 answers · 0 votes · 1 view · johans · asked an hour ago
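Note the `"eventType":"stdout"` in the log line above: the Greengrass nucleus tags everything a component writes to stdout as INFO (and, if I recall the docs correctly, stderr as WARN), regardless of the level log4j printed inside the message text. One partial workaround is to route higher-severity records to stderr; in log4j2 that would be a second Console appender with `target="SYSTEM_ERR"` for WARN and above. The sketch below shows the same idea in Python for brevity, as an illustration rather than a confirmed fix:

```
import logging
import sys

# Send WARNING and above to stderr and everything below to stdout, so the
# Greengrass log copier tags the two streams differently instead of
# folding every record into INFO.
root = logging.getLogger()
root.setLevel(logging.DEBUG)

stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(lambda record: record.levelno < logging.WARNING)

stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)

root.addHandler(stdout_handler)
root.addHandler(stderr_handler)

root.info("Just something I like to log")   # -> stdout
root.error("Something terrible happened")   # -> stderr
```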
Hi, I have an EC2 instance with an EBS volume attached. The instance is stopped, but the used disk size keeps increasing. I don't understand why the used volume size keeps growing while the instance is stopped. Thanks.
0 answers · 0 votes · 2 views · asked an hour ago
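For the question above, it would help to pin down where the "used size" number comes from (a CloudWatch agent disk metric, snapshot sizes, or the console). One way to verify whether anything is actually writing to the volume while the instance is stopped is to look at the volume's write metrics; a minimal boto3 sketch with a placeholder volume ID:

```
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

# Hypothetical volume ID -- replace with your own.
resp = cw.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeWriteBytes",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```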
Hello everyone! I'm using a Lambda function that saves a log to a bucket, but I'm having a problem saving these files with a specific name format and file extension. This is my code:
```
const createBucket = async (s3, bucket, path, data) => {
  // Despite its name, this puts an object into an existing bucket.
  const putObjectRequest = {
    Bucket: bucket,
    Key: path,
    Body: data,
    ContentType: 'text/plain',
  };
  return s3
    .putObject(putObjectRequest)
    .promise()
    .then(respo => respo)
    .catch(errs => {
      throw errs;
    });
};

const saveFile = async (bucket, current, date, historyData) => {
  const logName = util.format('ekt-%s_%s000000.log', current.toLowerCase(), date);
  const savedData = await createBucket(
    s3,
    bucket.logs,
    `${current}/${logName}`,
    historyData // note: the original snippet passed `data`, which is undefined in this scope
  );
  if (!savedData) {
    throw new Error(
      `The file hasn't been created: ${current}/${logName}`
    );
  }
  console.log(`The file has been created: ${current}/${logName}`);
};
```
I'm trying to save the file with this name and extension: **ekt-cp_20230112000000.log**, but in the end the file is created as **ekt-cp_20230112.txt**. I have no clue what's going on!
0 answers · 0 votes · 2 views · asked an hour ago
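For what it's worth, S3 stores an object under exactly the `Key` string it receives; putObject never infers or rewrites a name or extension. So if objects keep appearing as `ekt-cp_20230112.txt`, the key being sent at runtime must already look like that, which usually points at an older version of the function still being deployed. A small Python sketch of the same call to illustrate (bucket and key are hypothetical):

```
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; the key below is stored verbatim, ".log" included.
s3.put_object(
    Bucket="my-example-log-bucket",
    Key="CP/ekt-cp_20230112000000.log",
    Body=b"log line\n",
    ContentType="text/plain",
)
resp = s3.list_objects_v2(Bucket="my-example-log-bucket", Prefix="CP/")
print([obj["Key"] for obj in resp.get("Contents", [])])
```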
Hi, could anyone help with this? We're facing it while migrating to the Glue catalog: Athena cannot show the timestamp partition, which makes the table unselectable. `HIVE_INVALID_PARTITION_VALUE: Invalid partition value '2022-08-09 23%3A59%3A59' for TIMESTAMP partition key: xxx_timestamp=2022-08-09 23%253A59%253A59`
0 answers · 0 votes · 2 views · asked an hour ago
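The error string above suggests the partition value was URL-encoded twice: `%253A` is the percent-encoding of `%3A`, which in turn encodes `:`. A quick check in Python, just decoding the literal from the error message:

```
from urllib.parse import unquote

value = "2022-08-09 23%253A59%253A59"
once = unquote(value)   # '2022-08-09 23%3A59%3A59'  -- still encoded
twice = unquote(once)   # '2022-08-09 23:59:59'      -- the intended timestamp
print(once, twice, sep="\n")
```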
Is it compulsory to use Linux for the EC2 connection? On Windows it's showing a connection error. Can anybody help me with this task? I'm new to AWS, so I don't understand it.
0 answers · 0 votes · 2 views · Dhrupal · asked an hour ago
```
3:54:27 PM | CREATE_FAILED | AWS::S3::Bucket | loggingBucketE84AEEE7
acda-server-access-logging-all-buckets-beta-feamazon already exists
new Bucket (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/node_modules/monocdk/lib/aws-s3/lib/bucket.js:738:26)
\_ RequestCallbackStack.createLoggingBucket (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/dist/lib/stack/callback.js:106:24)
\_ new RequestCallbackStack (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/dist/lib/stack/callback.js:38:35)
\_ Object.<anonymous> (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/src/AlexaCustomerDataAggregatorCDK/dist/lib/app.js:99:31)
\_ Module._compile (internal/modules/cjs/loader.js:1085:14)
\_ Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
\_ Module.load (internal/modules/cjs/loader.js:950:32)
\_ Function.Module._load (internal/modules/cjs/loader.js:790:12)
\_ Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:75:12)
\_ internal/main/run_main_module.js:17:47

❌ AlexaCustomerDataAggregatorCDK-Callback-beta-PDX failed: Error: The stack named AlexaCustomerDataAggregatorCDK-Callback-beta-PDX failed to deploy: UPDATE_ROLLBACK_COMPLETE: acda-server-access-logging-all-buckets-beta-feamazon already exists
    at prepareAndExecuteChangeSet (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/build/AlexaCustomerDataAggregatorCDK/AlexaCustomerDataAggregatorCDK-1.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cdk-cli/node_modules/aws-cdk/lib/api/deploy-stack.ts:385:13)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at CdkToolkit.deploy (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/build/AlexaCustomerDataAggregatorCDK/AlexaCustomerDataAggregatorCDK-1.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cdk-cli/node_modules/aws-cdk/lib/cdk-toolkit.ts:209:24)
    at initCommandLine (/Users/xiaouye/workspace/AlexaCustomerDataAggregatorCDK/build/AlexaCustomerDataAggregatorCDK/AlexaCustomerDataAggregatorCDK-1.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/cdk-cli/node_modules/aws-cdk/lib/cli.ts:341:12)
```
We have a deployment issue in our beta environment: creating this S3 bucket keeps failing. The sequence of events was:
1. Deploy from the dev host. This deployment succeeded, but it skipped the bucket update and modified some existing resources, e.g.:
```
│ + │ ${loggingBucket.Arn}/* │ Allow │ s3:PutObject    │ Service:logging.s3.amazonaws.com │
│   │                        │       │ s3:PutObjectAcl │                                  │
```
2. I then found that the function change didn't work, so I pushed a manual deployment through the pipeline instead.
3. However, that pipeline build failed with the error above.
Is there any way to recover this deployment without deleting any of the current resources? And why does it keep trying to create resources that already exist?
0 answers · 0 votes · 4 views · asked 2 hours ago
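When a deployment fails with "already exists", a frequent cause (an assumption here, not confirmed by the trace) is that the construct's logical ID changed, so CloudFormation treats the bucket as a brand-new resource instead of updating the existing one. If the bucket should simply be reused, one recovery path that deletes nothing is to import a reference to it instead of declaring a new `Bucket`. A hedged sketch in CDK v2 Python syntax (the monocdk/TypeScript equivalent is `Bucket.fromBucketName(...)`; names are placeholders):

```
from aws_cdk import Stack, aws_s3 as s3
from constructs import Construct

class CallbackStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Reference the existing bucket instead of declaring a new Bucket
        # construct, so CloudFormation will not attempt to CREATE it again.
        logging_bucket = s3.Bucket.from_bucket_name(
            self, "LoggingBucket",
            "acda-server-access-logging-all-buckets-beta-feamazon",
        )
```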
Why this question? If `eb deploy` changes the IP address of the EC2 instance, then it makes sense to use `eb ssh` and select the instance number or ID, depending on how many instances there are in a single environment. If not, why not just directly `ssh -i <key.pem> ec2-user@<instance-ip>`? Am I missing something here?
0 answers · 0 votes · 1 view · asked 2 hours ago
I already trained a BERT model in Python 3.9.16 and saved the .pth files in the models directory (my model is about 417 MB). I also have my Dockerfile, requirements.txt, and app.py as follows:
# Dockerfile
```
FROM public.ecr.aws/lambda/python:3.9-x86_64
ENV TRANSFORMERS_CACHE=/tmp/huggingface_cache/
COPY requirements.txt .
#RUN pip install torch==1.10.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
RUN pip install torch==1.9.0
RUN pip install transformers==4.9.2
RUN pip install numpy==1.21.2
RUN pip install pandas==1.3.2
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}/dependencies"
COPY app.py ${LAMBDA_TASK_ROOT}
COPY models ${LAMBDA_TASK_ROOT}/dependencies/models
CMD [ "app.handler" ]
```
# requirements.txt
```
torch==1.9.0
transformers==4.9.2
numpy==1.21.2
pandas==1.3.2
```
# app.py
```
import torch
from transformers import BertTokenizer, BertForSequenceClassification, BertConfig
#from keras.preprocessing.sequence import pad_sequences
#from keras_preprocessing.sequence import pad_sequences
#from tensorflow.keras.preprocessing.sequence import pad_sequences
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
import numpy as np
import pandas as pd
from typing import Dict
import json

# Path to the directory containing the pre-trained model files
#model_dir = "./models/"
model_dir = "./dependencies/models/"
dict_path = f"{model_dir}/model_BERT_DAVID_v2.pth"
state_dict = torch.load(dict_path, map_location=torch.device('cpu'))
vocab_path = f"{model_dir}/vocab_BERT_DAVID_v2.pth"
vocab = torch.load(vocab_path, map_location=torch.device('cpu'))
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4, state_dict=state_dict)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, vocab=vocab)

def handler(event):
    #payload = json.loads(event)
    payload = event  # dict with the text
    text = payload['text']
    df = pd.DataFrame()
    df['TEXT'] = [text]
    sentences = df['TEXT'].values
    sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sentences]
    tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
    MAX_LEN = 256
    # Use the BERT tokenizer to convert the tokens to their index numbers in the BERT vocabulary
    input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Pad our input tokens
    #input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    input_ids = [torch.tensor(seq)[:MAX_LEN].clone().detach() for seq in input_ids]
    input_ids = torch.nn.utils.rnn.pad_sequence(input_ids, batch_first=True, padding_value=0)
    input_ids = torch.nn.functional.pad(input_ids, (0, MAX_LEN - input_ids.shape[1]), value=0)[:, :MAX_LEN]
    input_ids = input_ids.type(torch.LongTensor)
    # Create attention masks: 1 for each real token, followed by 0s for padding
    attention_masks = []
    for seq in input_ids:
        seq_mask = [float(i > 0) for i in seq]
        attention_masks.append(seq_mask)
    prediction_inputs = input_ids.to('cpu')  # cuda
    prediction_masks = torch.tensor(attention_masks, device='cpu')  # cuda
    batch_size = 32
    prediction_data = TensorDataset(prediction_inputs, prediction_masks)
    prediction_sampler = SequentialSampler(prediction_data)
    prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)
    # Prediction: put model in evaluation mode
    model.eval()
    # Tracking variables
    predictions = []
    for batch in prediction_dataloader:
        # Add batch to GPU
        #batch = tuple(t.to(device) for t in batch)
        batch = tuple(t for t in batch)
        # Unpack the inputs from our dataloader
        b_input_ids, b_input_mask = batch
        # Telling the model not to compute or store gradients, saving memory and speeding up prediction
        with torch.no_grad():
            # Forward pass, calculate logit predictions
            logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
        # Move logits to CPU
        logits = logits['logits'].detach().cpu().numpy()
        #label_ids = b_labels.to('cpu').numpy()
        # Store predictions
        predictions.append(logits)
        #true_labels.append(label_ids)
    key = {0: 'VERY_NEGATIVE', 1: 'SOMEWHAT_NEGATIVE', 2: 'NEUTRAL', 3: 'POSITIVE'}
    values = np.argmax(predictions[0], axis=1).flatten()  # maximum-likelihood prediction
    converted_values = [key.get(val) for val in values]  # dict value corresponding to the best likelihood
    # Obtain the score for the intensity
    exponents = np.exp(predictions)  # apply softmax to get the probabilities
    softmax = exponents / np.sum(exponents)
    intensity = {'VERY_NEGATIVE': softmax[0][0][0], 'SOMEWHAT_NEGATIVE': softmax[0][0][1],
                 'NEUTRAL': softmax[0][0][2], 'POSITIVE': softmax[0][0][3]}
    score = max(intensity.values())
    return converted_values[0]
```
Everything seems correct locally, but when I create the AWS Lambda function on the Python 3.9 runtime I get this error:
```
{
  "errorMessage": "invalid load key, 'v'.",
  "errorType": "UnpicklingError",
  "requestId": "",
  "stackTrace": [
    " File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
    " File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
    " File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
    " File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
    " File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
    " File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
    " File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
    " File \"/var/task/app.py\", line 25, in <module>\n state_dict = torch.load(dict_path,map_location=torch.device('cpu'))\n",
    " File \"/var/lang/lib/python3.9/site-packages/torch/serialization.py\", line 608, in load\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\n",
    " File \"/var/lang/lib/python3.9/site-packages/torch/serialization.py\", line 777, in _legacy_load\n magic_number = pickle_module.load(f, **pickle_load_args)\n"
  ]
}
```
I've tried multiple things with no solution so far. Can anyone help me?
0 answers · 0 votes · 7 views · asked 4 hours ago
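On the error above: `invalid load key, 'v'.` from `torch.load` means the very first byte of the file is a literal 'v', i.e. the .pth inside the image is not a pickle/zip checkpoint at all. A common culprit, though only an assumption here, is a Git LFS pointer file (plain text starting with `version https://git-lfs...`) getting copied into the image instead of the real 417 MB weights. A quick check to run inside the container:

```
# Print the first bytes of the model file as seen inside the image.
# A real PyTorch checkpoint starts with zip magic b"PK" or a pickle
# protocol byte; a Git LFS pointer starts with b"version https://git-lfs".
path = "./dependencies/models/model_BERT_DAVID_v2.pth"
with open(path, "rb") as f:
    print(f.read(64))
```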
Hi, I'm trying to deploy a Next.js (v12) app to Amplify. All the checkmarks are green, but when I visit the URL (provided by AWS), it 404s. My build settings:
```
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run buildamp
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```
The command `npm run buildamp` is simply `next build`, and the build directory is indeed .next. I have no /src directory. The app is quite big and old, with both TS and JS files, and there are some dubious settings in the tsconfig.json, jsconfig.json, and next.config files. What should I look into to find the source of the error? Thanks!
0 answers · 0 votes · 5 views · asked 5 hours ago
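One thing worth checking for the 404 above (an assumption on my part, since nothing in the question confirms it): server-side-rendered Next.js 12 apps need the Amplify app's hosting platform set to `WEB_COMPUTE`; an app still on the static `WEB` platform serves only static output and can 404 on SSR routes. You can inspect and change the platform via the Amplify API, e.g. with boto3 (the app id is a placeholder):

```
import boto3

amplify = boto3.client("amplify")

app = amplify.get_app(appId="d1example")  # hypothetical app id
print(app["app"]["platform"])             # e.g. WEB vs WEB_COMPUTE

# If it's an SSR Next.js 12+ app still on the static platform:
# amplify.update_app(appId="d1example", platform="WEB_COMPUTE")
```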
Hello AWS Team. We are trying to implement ABAC instead of RBAC. As of 2023, is it supported for all services on AWS? Regards, Diego
1 answer · 0 votes · 3 views · asked 5 hours ago
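For context on what "supported" means in the question above: ABAC only works where a service's actions honor tag-based condition keys such as `aws:ResourceTag/...` and `aws:PrincipalTag/...`, and coverage varies by service (the "Services that work with IAM" documentation page lists it per service). A minimal sketch of an ABAC-style policy, using a hypothetical `project` tag key, created with boto3:

```
import json
import boto3

iam = boto3.client("iam")

# ABAC: allow the actions only when the caller's "project" principal tag
# matches the resource's "project" tag. Tag key and policy name below are
# hypothetical examples.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
            }
        },
    }],
}

iam.create_policy(
    PolicyName="abac-project-match-example",
    PolicyDocument=json.dumps(policy),
)
```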
Is there an API call you can make that gives you the state of the request token bucket (used/available tokens, plus its capacity and refill rate)? CloudWatch monitors failures (RequestLimitExceeded), but only once you've exceeded the limit, so you can be at the 99% mark and see zero failures, then pass the limit without noticing.
1 answer · 0 votes · 2 views · Kobster · asked 6 hours ago
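As far as I know, EC2 doesn't expose the token bucket's remaining tokens or refill rate through any API. The closest approximation is the CloudWatch usage metrics in the `AWS/Usage` namespace, which count your calls per API operation, so you can alarm on approach to the limit rather than on throttles after the fact. A hedged boto3 sketch (the operation name is just an example):

```
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

# Count of EC2 DescribeInstances calls per minute over the last hour.
resp = cw.get_metric_statistics(
    Namespace="AWS/Usage",
    MetricName="CallCount",
    Dimensions=[
        {"Name": "Service", "Value": "EC2"},
        {"Name": "Type", "Value": "API"},
        {"Name": "Resource", "Value": "DescribeInstances"},  # example operation
        {"Name": "Class", "Value": "None"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["Sum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```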