Compute

Whether you are building enterprise, cloud-native, or mobile apps, or running massive data clusters, AWS provides Compute services that support virtually any workload. Use AWS Compute services to develop, deploy, run, and scale your applications and workloads.

Recent questions

  • I am having call-quality trouble in my call center. This call center has no agents online, so it transfers all calls to the agents' desk phones (Amazon Connect calls the agent's phone number) and the agents talk to the clients on their own phones. My question is: what can I do to improve call quality? I'm located in Mexico and using the Oregon (us-west-2) AWS Region. If I change to another Region, would that help? Thanks.
    0
    answers
    0
    votes
    3
    views
    asked 17 minutes ago
  • Hi, I have an EC2 instance with an associated EBS volume. The instance is stopped, but the used disk size keeps increasing. I don't understand why the used volume size keeps growing while the instance is stopped. Thanks.
    0
    answers
    0
    votes
    2
    views
    asked 2 hours ago
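One thing worth ruling out (a guess, not a diagnosis): a volume's provisioned size cannot grow on its own while the instance is stopped, but snapshot storage for that volume keeps accruing. A minimal sketch that totals snapshot storage per source volume; the input mirrors the shape of boto3's `describe_snapshots()['Snapshots']`, and the sample values here are hypothetical:

```python
def snapshot_storage_by_volume(snapshots):
    """Sum the provisioned size (GiB) of EBS snapshots per source
    volume.  Snapshot storage accrues even while the owning instance
    is stopped, which is one common reason a 'disk' number keeps
    growing.  `snapshots` mirrors boto3's
    describe_snapshots()['Snapshots'] response shape."""
    totals = {}
    for snap in snapshots:
        vol = snap["VolumeId"]
        totals[vol] = totals.get(vol, 0) + snap["VolumeSize"]
    return totals
```

Against a real account this would be fed from `boto3.client("ec2").describe_snapshots(OwnerIds=["self"])`, run in each region in use.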
  • Is it compulsory to use Linux to connect to EC2? On Windows it shows a connection error. Can anybody help me with this task? I am new to AWS, so I don't understand.
    0
    answers
    0
    votes
    2
    views
    Dhrupal
    asked 2 hours ago
  • Why this question? If `eb deploy` changes the IP address of the EC2 instance, then it makes sense to use `eb ssh` and select the instance number or ID, depending on how many instances there are in a single environment. If not, why not just directly `ssh -i <key.pem> ec2-user@<instance-ip>`? Am I missing something here?
    0
    answers
    0
    votes
    1
    views
    asked 3 hours ago
  • I already trained a BERT model in Python 3.9.16 and saved the .pth files in the models directory (my model is about 417 MB). I also have my Dockerfile and requirements.txt as follows:
    # Dockerfile
```
FROM public.ecr.aws/lambda/python:3.9-x86_64
ENV TRANSFORMERS_CACHE=/tmp/huggingface_cache/
COPY requirements.txt .
#RUN pip install torch==1.10.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
RUN pip install torch==1.9.0
RUN pip install transformers==4.9.2
RUN pip install numpy==1.21.2
RUN pip install pandas==1.3.2
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}/dependencies"
COPY app.py ${LAMBDA_TASK_ROOT}
COPY models ${LAMBDA_TASK_ROOT}/dependencies/models
CMD [ "app.handler" ]
```
    # requirements.txt
```
torch==1.9.0
transformers==4.9.2
numpy==1.21.2
pandas==1.3.2
```
    # app.py
```
import torch
from transformers import BertTokenizer, BertForSequenceClassification, BertConfig
#from keras.preprocessing.sequence import pad_sequences
#from keras_preprocessing.sequence import pad_sequences
#from tensorflow.keras.preprocessing.sequence import pad_sequences
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
import numpy as np
import pandas as pd
from typing import Dict
import json

# Path to the directory containing the pre-trained model files
#model_dir = "./models/"
model_dir = "./dependencies/models/"
dict_path = f"{model_dir}/model_BERT_DAVID_v2.pth"
state_dict = torch.load(dict_path, map_location=torch.device('cpu'))
vocab_path = f"{model_dir}/vocab_BERT_DAVID_v2.pth"
vocab = torch.load(vocab_path, map_location=torch.device('cpu'))
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4, state_dict=state_dict)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, vocab=vocab)

def handler(event):
    #payload = json.loads(event)
    payload = event  # dict with the text
    text = payload['text']
    df = pd.DataFrame()
    df['TEXT'] = [text]
    sentences = df['TEXT'].values
    sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sentences]
    tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
    MAX_LEN = 256
    # Use the BERT tokenizer to convert the tokens to their index numbers in the BERT vocabulary
    input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Pad our input tokens
    #input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    input_ids = [torch.tensor(seq)[:MAX_LEN].clone().detach() for seq in input_ids]
    input_ids = torch.nn.utils.rnn.pad_sequence(input_ids, batch_first=True, padding_value=0)
    input_ids = torch.nn.functional.pad(input_ids, (0, MAX_LEN - input_ids.shape[1]), value=0)[:, :MAX_LEN]
    input_ids = input_ids.type(torch.LongTensor)
    # Create attention masks: a mask of 1s for each token followed by 0s for padding
    attention_masks = []
    for seq in input_ids:
        seq_mask = [float(i > 0) for i in seq]
        attention_masks.append(seq_mask)
    prediction_inputs = input_ids.to('cpu')  # cuda
    prediction_masks = torch.tensor(attention_masks, device='cpu')  # cuda
    batch_size = 32
    prediction_data = TensorDataset(prediction_inputs, prediction_masks)
    prediction_sampler = SequentialSampler(prediction_data)
    prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)
    # Prediction: put model in evaluation mode
    model.eval()
    # Tracking variables
    predictions = []
    # Predict
    for batch in prediction_dataloader:
        # Add batch to GPU
        #batch = tuple(t.to(device) for t in batch)
        batch = tuple(t for t in batch)
        # Unpack the inputs from our dataloader
        b_input_ids, b_input_mask = batch
        # Telling the model not to compute or store gradients, saving memory and speeding up prediction
        with torch.no_grad():
            # Forward pass, calculate logit predictions
            logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
        # Move logits to CPU
        logits = logits['logits'].detach().cpu().numpy()
        #label_ids = b_labels.to('cpu').numpy()
        # Store predictions and true labels
        predictions.append(logits)
        #true_labels.append(label_ids)
    key = {0: 'VERY_NEGATIVE', 1: 'SOMEWHAT_NEGATIVE', 2: 'NEUTRAL', 3: 'POSITIVE'}
    values = np.argmax(predictions[0], axis=1).flatten()  # maximum-likelihood prediction
    converted_values = [key.get(val) for val in values]  # dict value for the maximum-likelihood index
    # Obtain the score for the intensity (softmax over the logits)
    exponents = np.exp(predictions)
    softmax = exponents / np.sum(exponents)
    intensity = {'VERY_NEGATIVE': softmax[0][0][0], 'SOMEWHAT_NEGATIVE': softmax[0][0][1],
                 'NEUTRAL': softmax[0][0][2], 'POSITIVE': softmax[0][0][3]}
    score = max(intensity.values())
    return converted_values[0]
```
    Everything seems correct locally, but when I create the AWS Lambda function with the Python 3.9 runtime I get this error:
```
{
  "errorMessage": "invalid load key, 'v'.",
  "errorType": "UnpicklingError",
  "requestId": "",
  "stackTrace": [
    "  File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n    return _bootstrap._gcd_import(name[level:], package, level)\n",
    "  File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
    "  File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
    "  File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
    "  File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
    "  File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
    "  File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
    "  File \"/var/task/app.py\", line 25, in <module>\n    state_dict = torch.load(dict_path,map_location=torch.device('cpu'))\n",
    "  File \"/var/lang/lib/python3.9/site-packages/torch/serialization.py\", line 608, in load\n    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\n",
    "  File \"/var/lang/lib/python3.9/site-packages/torch/serialization.py\", line 777, in _legacy_load\n    magic_number = pickle_module.load(f, **pickle_load_args)\n"
  ]
}
```
    I have tried multiple things but found no solution so far. Can anyone help me?
    0
    answers
    0
    votes
    8
    views
    asked 5 hours ago
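A hedged hunch about the error above: `UnpicklingError: invalid load key, 'v'` from `torch.load` usually means the .pth file that reached the container is not the real binary checkpoint but a small text file beginning with `version https://git-lfs...` (a Git LFS pointer), or an otherwise corrupted copy. A stdlib-only check (the file path is whatever `COPY models ...` put in the image):

```python
LFS_HEADER = b"version https://git-lfs"

def looks_like_lfs_pointer(path: str) -> bool:
    """Return True if the file starts with a Git LFS pointer header
    instead of binary pickle/zip data.  A pointer file starting with
    'version ...' is a common cause of
    UnpicklingError: invalid load key, 'v'."""
    with open(path, "rb") as f:
        head = f.read(len(LFS_HEADER))
    return head.startswith(LFS_HEADER)
```

If this returns True for the .pth inside the image (e.g. checked from a `docker run ... python` shell), re-fetching the real weights with `git lfs pull` before building would be the fix to try.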
  • Is there an API call that returns the current state of the request token bucket (tokens used/available, capacity, and refill rate)? CloudWatch monitors failures (RequestLimitExceeded), but only once you have exceeded the limit, so you can be at the 99% mark and see zero failures, then pass the limit without noticing.
    1
    answers
    0
    votes
    2
    views
    Kobster
    asked 7 hours ago
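For context, EC2 API throttling is documented as a token-bucket model, and the bucket state is tracked server-side. A toy sketch (capacity and refill numbers are hypothetical, not EC2's actual limits) of why only the refusal, not the remaining balance, is visible to the caller:

```python
class TokenBucket:
    """Toy model of a request token bucket: a fixed capacity drained
    by requests and refilled at a constant rate.  The caller only
    ever observes the boolean outcome, which is exactly the
    observability gap the question describes."""

    def __init__(self, capacity: float, refill_rate: float, now: float = 0.0):
        self.capacity = capacity
        self.refill_rate = refill_rate  # tokens per second
        self.tokens = capacity
        self.last = now

    def _refill(self, now: float) -> None:
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now

    def try_consume(self, now: float, n: float = 1.0) -> bool:
        self._refill(now)
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False  # externally this surfaces as RequestLimitExceeded
```

Since only the `False` branch is observable from outside, the usual workaround is client-side accounting of your own request rate plus retries with exponential backoff, rather than polling for remaining capacity.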
  • The login request is received, the username and password are correctly extracted from the request body, and a user with ID 1 is found in the database, but the form still eventually fails with a 504. My index.js, db.js, users.js, and login.html all seem fine. I'm on Lightsail, so unfortunately I've had to use SQL Workbench this whole time. Not sure if there's an issue with the Lightsail-to-DB communication? It's been a pain trying to figure out Lightsail with the 'module' stuff like databases.
    users.js:
```
const connection = require('./db');
const bcrypt = require('bcrypt');
const saltRounds = 10;

class User {
  constructor(id, username, password, email, createdAt, updatedAt) {
    this.id = id;
    this.username = username;
    this.password = password;
    this.email = email;
    this.createdAt = createdAt;
    this.updatedAt = updatedAt;
  }

  static create(username, password, email) {
    const now = new Date().toISOString();
    const sql = `INSERT INTO loginserver (username, password, email, created_at, updated_at) VALUES (?, ?, ?, ?, ?)`;
    bcrypt.hash(password, saltRounds, (err, hash) => {
      if (err) { console.error('Error hashing password:', err); return; }
      const values = [username, hash, email, now, now];
      connection.query(sql, values, (err, result) => {
        if (err) { console.error('Error creating user:', err); return; }
        console.log('User created with ID', result.insertId);
        const user = new User(result.insertId, username, hash, email, now, now);
        return user;
      });
    });
  }

  static getByUsername(username) {
    const sql = `SELECT * FROM loginserver WHERE username = ?`;
    connection.query(sql, [username], (err, results) => {
      if (err) { console.error('Error getting user by username:', err); return; }
      if (results.length === 0) { console.log('User not found'); return null; }
      const { id, username, password, email, created_at, updated_at } = results[0];
      console.log('User found with ID', id);
      const user = new User(id, username, password, email, created_at, updated_at);
      return user;
    });
  }

  checkPassword(password) {
    return new Promise((resolve, reject) => {
      bcrypt.compare(password, this.password, (err, isMatch) => {
        if (err) { console.error('Error checking password:', err); reject(err); }
        else { resolve(isMatch); }
      });
    });
  }

  update() {
    const now = new Date().toISOString();
    const sql = `UPDATE loginserver SET username = ?, password = ?, email = ?, updated_at = ? WHERE id = ?`;
    const values = [this.username, this.password, this.email, now, this.id];
    connection.query(sql, values, (err) => {
      if (err) { console.error('Error updating user:', err); return; }
      console.log('User updated with ID', this.id);
      this.updatedAt = now;
      return this;
    });
  }

  delete() {
    const sql = `DELETE FROM loginserver WHERE id = ?`;
    connection.query(sql, [this.id], (err) => {
      if (err) { console.error('Error deleting user:', err); return; }
      console.log('User deleted with ID', this.id);
      return;
    });
  }
}

module.exports = User;
```
    index.js:
```
const express = require('express');
const https = require('https');
const socketIO = require('socket.io');
const path = require('path');
const fs = require('fs');
const mysql = require('mysql');
const User = require('./server/users');
const bodyParser = require('body-parser');

const app = express();
const server = https.createServer({
  key: fs.readFileSync('/etc/letsencrypt/live/ispeedrun.tv/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/ispeedrun.tv/fullchain.pem')
}, app);
const io = socketIO(server);

// Add this before the routes
app.use((req, res, next) => {
  console.log('Request received');
  next();
});

app.use(express.static(path.join(__dirname, 'views', 'public')));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'views', 'index.html'));
});

app.get('/live', (req, res) => {
  res.sendFile(path.join(__dirname, 'views', 'live.html'));
});

const connection = mysql.createConnection({
  host: 'ls-7f5846c26112d5a110aa9ce17f20838297ce7c51.cdnunzehdfq0.us-east-2.rds.amazonaws.com',
  port: '3306',
  user: 'dbmasteruser',
  password: '',
  database: ''
});

connection.connect((err) => {
  if (err) { console.error('Failed to connect to MySQL:', err); return; }
  console.log('Connected to MySQL database');
});

io.on('connection', (socket) => {
  console.log('WebSocket connection established');
  socket.on('message', (msg) => {
    console.log('message: ' + msg);
    io.emit('message', msg);
  });
  socket.on('disconnect', () => {
    console.log('WebSocket connection closed');
  });
});

// add this route to handle form submission
app.post('/login', (req, res) => {
  console.log('Received login request');
  console.log('Login request received:', req.body); // Log the received request
  const { username, password } = req.body;
  User.getByUsername(username, (err, user) => {
    if (err) { console.error('Error getting user:', err); res.status(500).send('Internal server error'); return; }
    if (!user) { res.status(401).send('Invalid username or password'); return; }
    user.checkPassword(password, (err, isMatch) => {
      if (err) { console.error('Error checking password:', err); res.status(500).send('Internal server error'); return; }
      if (!isMatch) { res.status(401).send('Invalid username or password'); return; }
      res.status(200).send(); // Send a 200 status code to indicate a successful login
    });
  });
});

// Add this after the routes
app.use((req, res, next) => {
  console.log('Response sent');
  next();
});

const PORT = process.env.PORT || 6611;
server.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
```
    login.html:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>iSpeedrun.TV - Login</title>
<link rel="stylesheet" href="styles.css">
<style>
/* Keep the same styles as index.html */
.main-container { display: flex; flex-direction: row; }
.video-container { width: 1280px; height: 720px; margin-right: 20px; }
.video-container iframe { width: 100%; height: 100%; }
.sidebar { width: 300px; height: 720px; display: flex; flex-direction: column; justify-content: space-between; }
.sidebar-item { display: flex; align-items: center; padding: 10px; background-color: #222; color: #fff; font-size: 14px; }
.sidebar-item img { width: 60px; height: 60px; margin-right: 10px; }
header { display: flex; justify-content: space-between; align-items: center; background-color: #222; color: #fff; padding: 10px; }
nav ul { display: flex; list-style: none; padding: 0; margin: 0; }
nav li { margin-right: 20px; }
nav a { color: #fff; text-decoration: none; font-weight: bold; font-size: 16px; text-transform: uppercase; }
nav a:hover { color: #ff0000; }
.login-container { background-color: #fff; padding: 40px; border-radius: 10px; width: 70%; margin: 20px auto; box-shadow: 0 0 20px rgba(0, 0, 0, 0.5); }
.login-container label { font-size: 20px; margin-bottom: 20px; }
.login-container input[type="text"], .login-container input[type="password"] { width: 100%; height: 40px; margin-bottom: 30px; padding: 10px; font-size: 16px; border-radius: 5px; border: none; box-shadow: 1px 1px 5px rgba(0, 0, 0, 0.3); }
.login-container button[type="submit"] { display: block; width: 100%; height: 50px; background-color: #e74c3c; color: #fff; border: none; border-radius: 5px; font-size: 18px; cursor: pointer; transition: background-color 0.2s; }
.login-container button[type="submit"]:hover { background-color: #c0392b; }
#message { font-size: 18px; color: red; margin-bottom: 15px; }
</style>
</head>
<body>
<header>
  <h1>iSpeedrun.TV - Login</h1>
  <nav>
    <ul>
      <li><a href="index.html">Home</a></li>
      <li><a href="livestream.html">Live Streams</a></li>
      <li><a href="about.html">About Us</a></li>
      <li><a href="contact.html">Contact</a></li>
      <li><a href="login.html">Login</a></li>
    </ul>
  </nav>
</header>
<main class="main-container">
  <div class="sidebar">
    <div class="sidebar-item">
      <img src="https://via.placeholder.com/60x60.png?text=User+1" alt="User 1">
      <p>User 1</p>
    </div>
    <div class="sidebar-item">
      <img src="https://via.placeholder.com/60x60.png?text=User+2" alt="User 2">
      <p>User 2</p>
    </div>
    <div class="sidebar-item">
      <img src="https://via.placeholder.com/60x60.png?text=User+3" alt="User 3">
      <p>User 3</p>
    </div>
    <div class="sidebar-item">
      <img src="https://via.placeholder.com/60x60.png?text=User+4" alt="User 4">
      <p>User 4</p>
    </div>
  </div>
  <div class="video-container">
    <form class="login-container" action="/login" method="post" id="login-form">
      <label for="username">Username:</label>
      <input type="text" id="username" name="username">
      <label for="password">Password:</label>
      <input type="password" id="password" name="password">
      <div id="message"></div>
      <button type="submit">Login</button>
    </form>
  </div>
</main>
<script>
const form = document.getElementById('login-form');
const message = document.getElementById('message');
form.addEventListener('submit', async function(event) {
  console.log('Form submitted');
  event.preventDefault(); // Prevent the form from submitting normally
  const username = document.getElementById('username').value;
  const password = document.getElementById('password').value;
  try {
    console.log('Sending request to server');
    const response = await fetch('/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ username, password }),
    });
    console.log('Server responded with status:', response.status);
    if (response.status === 200) {
      localStorage.setItem('loggedIn', 'true');
      window.location.href = 'index.html';
    } else {
      const error = await response.json();
      message.textContent = error.message;
    }
  } catch (error) {
    console.error('Error:', error);
    message.textContent = 'An error occurred. Please try again.';
  }
});
</script>
</body>
</html>
```
    0
    answers
    0
    votes
    12
    views
    asked 7 hours ago
  • Hi, I attended the AWESOME program and was issued a voucher for a 50% discount on the cost of my AWS CCP exam, valid until 31 March 2023. I successfully redeemed the voucher at Pearson VUE when I scheduled my exam for 31 March 2023. If I now reschedule my exam to, say, 10 April 2023 (the voucher was valid until 31 March, and I have already redeemed it), will I have to pay the full fee, or will the exam be rescheduled without any extra charge?
    0
    answers
    0
    votes
    21
    views
    asked 8 hours ago
  • Please open port 25 on the EC2 instance in my account. My account Gmail is abrormuxtorov394@gmail.com. My Elastic IP is 13.48.51.229.
    1
    answers
    0
    votes
    8
    views
    asked 9 hours ago
  • My VPS does not work. When I open it, it takes two minutes just to open, and when it does I get a black screen; after a few seconds it turns off and returns me to my desktop. To sum it up, the VPS does not work at all. I don't have a problem inside the VPS, I have a problem with the VPS itself: it does not respond to anything and is completely unusable. I left my trading EA running on the VPS, and it does run, but now I need to get inside the VPS to turn it off because I am currently losing money because of this broken product. Please turn off my VPS and cancel my subscription immediately!
    0
    answers
    0
    votes
    20
    views
    Karlo
    asked 9 hours ago
  • I used AWS briefly and may use it again. However, for now I want to get rid of all my AWS services and not be charged until I need them again. I have deleted/removed every EC2 machine that I had. Every month I get charged about $0.40, and the description is just "EC2-Other". I have no idea what else to remove. How can I stop these charges? I would prefer not to delete my account.
    1
    answers
    0
    votes
    15
    views
    asked 10 hours ago
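For reference, "EC2-Other" line items commonly come from resources that outlive terminated instances: unattached EBS volumes, EBS snapshots, and idle Elastic IPs. A hedged helper that flags such leftovers; the argument shapes mirror boto3's `describe_volumes()['Volumes']`, `describe_addresses()['Addresses']`, and `describe_snapshots()['Snapshots']` responses, and it would need to be run per region against real responses:

```python
def leftover_resources(volumes, addresses, snapshots):
    """Flag resources that commonly produce 'EC2-Other' charges after
    all instances are gone: unattached EBS volumes, EBS snapshots,
    and Elastic IPs not associated with anything (idle EIPs are
    billed)."""
    findings = []
    for v in volumes:
        if not v.get("Attachments"):
            findings.append(("unattached-volume", v["VolumeId"]))
    for s in snapshots:
        findings.append(("snapshot", s["SnapshotId"]))
    for a in addresses:
        if "AssociationId" not in a:
            findings.append(("idle-elastic-ip", a.get("PublicIp")))
    return findings
```

Against a live account each list would come from `boto3.client("ec2")` in every region the account has used, since resources in unused regions are easy to miss.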
  • Hi guys, I deployed my Java app on EC2, and it calls Amazon Cognito to push user data. I am getting the error `Profile file contained no credentials for profile 'default': ProfileFile(profiles=[])`. When I try the same on my local machine, it works fine. My understanding was that if I gave the EC2 instance an IAM role (CognitoSuperUser) that has permission for Cognito, I wouldn't have to put credentials in the EC2 profile file. Am I wrong? Won't EC2 be able to call Cognito without any configuration, the same way I call S3 from EC2 by granting permission in the IAM role assigned to the instance? This is how I create the client; is there another way to make the call instead of **ProfileCredentialsProvider**?
```
this.cognitoClient = CognitoIdentityProviderClient.builder()
        .region(Region.US_EAST_2)
        .credentialsProvider(ProfileCredentialsProvider.create())
        .build();
```
    Thanks
    1
    answers
    0
    votes
    9
    views
    asked 10 hours ago
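The behaviour described matches how SDK credential chains generally work: pinning a profile-file provider skips the later fallbacks, including the EC2 instance profile, while omitting the provider lets the default chain fall through to the instance role. A toy Python model of that ordering (this is an illustration, not the actual SDK resolution code):

```python
def resolve_credentials(env=None, profile=None, instance_role=None):
    """Toy model of an AWS SDK default credential provider chain:
    environment variables, then the shared profile file, then the
    EC2 instance profile.  Pinning only the profile-file step (as
    ProfileCredentialsProvider does in the Java snippet above)
    skips the instance-profile fallback, which is why it fails on
    an instance that only has an IAM role attached."""
    for source in (env, profile, instance_role):
        if source is not None:
            return source
    raise RuntimeError("no credentials found in any provider")
```

In boto3 terms the analogous fix is simply creating the client with no explicit credentials, so the default chain can reach the instance metadata credentials.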
  • Hi, Is there a way to get the AWS Lambda Function URL string (or the bits to construct it) programmatically from the running instance of the Lambda itself? I tried the below options and neither of them had the necessary URL: 1. checked the input object in `handleRequest(Object input, Context context)` 2. checked the items in `System.getenv()` Thanks
    1
    answers
    0
    votes
    11
    views
    asked 10 hours ago
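One possible workaround (a sketch, not a documented pattern): the function's own name is available at runtime via `context.invoked_function_arn` or the `AWS_LAMBDA_FUNCTION_NAME` environment variable, and the URL can then be looked up with the Lambda `GetFunctionUrlConfig` API, provided the execution role allows `lambda:GetFunctionUrlConfig`. The ARN and function name below are hypothetical:

```python
def function_name_from_arn(arn: str) -> str:
    """Extract the function name from an invoked_function_arn of the
    form arn:aws:lambda:REGION:ACCOUNT:function:NAME[:qualifier]."""
    return arn.split(":")[6]

# Inside the handler (sketch; requires the lambda:GetFunctionUrlConfig
# permission on the function's execution role):
#   import boto3
#   name = function_name_from_arn(context.invoked_function_arn)
#   url = boto3.client("lambda").get_function_url_config(
#       FunctionName=name)["FunctionUrl"]
```

This costs one control-plane API call per cold start, so caching the result in a module-level variable would be the natural follow-up.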
  • I'd appreciate any help you can provide with this, as I'm stumped and I'm sure I'm missing something. I have a site-to-site VPN set up, and I can confirm it's connected to our on-premises router (DrayTek 3900). The VPN has a transit gateway, a customer gateway, and static routing. I've set up a new EC2 instance with its own VPC; I can access it via its public IP address, and it can access the internet. What I don't understand is how to enable this EC2 instance to route traffic over the VPN to on-prem and vice versa. I need to be able to share resources between the EC2 instance and the on-premises network. Thanks in advance for any help you can provide!
    1
    answers
    0
    votes
    14
    views
    asked 10 hours ago
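What usually makes this work on the VPC side (hedged sketch; IDs and CIDRs below are hypothetical) is a route in the VPC subnet's route table that sends the on-premises CIDR to the transit gateway attachment, alongside the TGW's own static route and security-group/on-prem firewall rules. A small helper over a `describe_route_tables`-shaped response to check whether that route exists:

```python
def has_route_to_onprem(route_table, onprem_cidr, tgw_id):
    """Check a route table (shaped like one entry of boto3's
    describe_route_tables()['RouteTables']) for a route that sends
    the on-premises CIDR to the given transit gateway."""
    for route in route_table.get("Routes", []):
        if (route.get("DestinationCidrBlock") == onprem_cidr
                and route.get("TransitGatewayId") == tgw_id):
            return True
    return False

# If the route is missing, it can be added (sketch):
#   import boto3
#   boto3.client("ec2").create_route(
#       RouteTableId="rtb-0123abcd",         # hypothetical
#       DestinationCidrBlock="10.0.0.0/16",  # on-prem CIDR, hypothetical
#       TransitGatewayId="tgw-0123abcd")     # hypothetical
```

The mirror image also has to exist: the on-prem router needs a route for the VPC CIDR back over the tunnel, and the instance's security group must allow the on-prem CIDR.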
  • We deployed a new version of a serverless (Python) application yesterday. All the CloudFormation events make it look like a successful deployment, but every lambda gets the error `Handler 'handler' missing on module 'routes/file_name'`. We have not made any changes to the structure of our code, nor any changes at all from the AWS console. The changes we made were to use a newer version of the Google Ads library, and we also deployed from a new machine that required an updated version of the `serverless` and `node` packages (plus whatever changed in their dependencies). ``` $node --version v16.19.1 $serverless --version Running "serverless" from node_modules 1.30.1 ``` I tried: rolling back one of the lambdas by specifying an older version in the API Gateway function that acts as a pass-through to the lambda; deploying a previous version of our code, which I believe is exactly what was already being used by the lambda; and creating an alias of an old version of the lambda, but I couldn't figure out what to do with it. I also double-checked the CloudWatch logs and verified that things were working correctly before the new deployment. Finally, we deployed another app with a single lambda that gets its input from an SQS queue, with the same (broken) result. This is causing a production outage of functionality that is important to our customers.
    1
    answers
    0
    votes
    11
    views
    asked 11 hours ago
  • Hi all. I have had multiple emails recently from AWS with the subject line "[ACTION REQUIRED] - Update your TLS connections to 1.2 to maintain AWS endpoint connectivity [AWS Account: 090759423501]". The key part of the email seems to be: "Please see the following for further details on the TLS 1.0 or TLS 1.1 connections detected from your account between February 25, 2023 and March 13, 2023 (the UserAgent may be truncated due to a limit in the number of characters that can be displayed): Region | Endpoint | API Event Name | TLS Version | Connection Count | UserAgent / eu-west-1 | dynamodb.eu-west-1.amazonaws.com | DescribeTable | TLSv1 | 324 | aws-sdk-dotnet-45/3.3.1.0 aws-sdk-dotnet-core/3.3.5.0 .NET_Runtime/4.0 .NET_Framework/4.0 OS/Microsoft_Windows_NT_10.0.14393.0 ClientSync Docu …" However, my reading of that is that it is a system call from a .NET runtime, and I'm not really sure what I can do about this. Your assistance would be appreciated. I typically use AWS resources in two ways: a) scheduled or triggered Lambda calls built directly in the AWS interface, or b) calls from C# .NET programs coded in Visual Studio. I followed some links to literature that the AWS account manager gave me, which suggested: 1) using the CloudTrail section of CloudWatch to find log entries where TLS 1.0 or 1.1 was used (I tried this but could find no matching records when I ran the query), and 2) checking the general account health dashboard (I did this but no problems are reported there). Can anyone suggest a course of action here? Thanks, Richard Abbott
    1
    answers
    0
    votes
    13
    views
    RBA
    asked 11 hours ago
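Separately from tracking down the caller, the client-side fix is to refuse anything below TLS 1.2. For the flagged .NET Framework 4.0 agent, the commonly suggested remedies are retargeting to a newer framework version or enabling strong crypto in the runtime configuration; purely as an illustration of the idea, here is a Python sketch that pins the minimum protocol version for outbound connections:

```python
import ssl

def tls12_client_context() -> ssl.SSLContext:
    """Build a client-side SSL context that refuses TLS 1.0/1.1, so
    any connection made through it can only negotiate TLS 1.2 or
    newer.  (This shows the concept in Python; the account's flagged
    traffic is from a .NET 4.0 runtime, which needs the equivalent
    fix on the .NET side.)"""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

A context like this can be handed to `http.client.HTTPSConnection(..., context=ctx)` or similar to verify that a given endpoint still works once older protocols are ruled out.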
  • I have deployed a container service with nginx as the public endpoint. nginx is configured with the timeouts shown below; however, when a long-running web request hits the server, it times out after 60 seconds regardless of overriding the default timeouts. The same setup works as expected, without the 60-second timeout, in a local Docker container configured identically.
```
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 256;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location /health/check {
        access_log off;
        return 200;
    }

    location /xyz {
        proxy_pass https://xyz;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_read_timeout 1200s;
        proxy_send_timeout 1200s;
        fastcgi_read_timeout 1200s;
        uwsgi_read_timeout 1200s;
    }

    location / {
        try_files $uri $uri/ /index.html =404;
    }

    upstream xyz {
        server xyz.domain.com:443;
        keepalive 1;
        keepalive_time 1100;
        keepalive_timeout 1200;
    }
```
    I am trying to understand why the same configuration works in a local container as opposed to the Lightsail container service.
    0
    answers
    0
    votes
    8
    views
    asked 12 hours ago
