All Content tagged with ML Ops with Amazon SageMaker and Kubernetes
Kubernetes is an open source system used to automate the deployment, scaling, and management of containerized applications. Kubeflow Pipelines is a workflow manager that offers an interface to manage and schedule machine learning (ML) workflows on a Kubernetes cluster. Using open source tools offers flexibility and standardization, but requires time and effort to set up infrastructure, provision notebook environments for data scientists, and stay up-to-date with the latest deep learning framework versions.
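To make the Kubeflow Pipelines workflow model concrete, below is a minimal sketch of a two-step pipeline using the kfp v2 SDK. The component names, base image, and output file name are illustrative assumptions, not anything prescribed by the tag description; the compiled YAML would be uploaded to a Kubeflow Pipelines instance running on the Kubernetes cluster.

```python
# Minimal Kubeflow Pipelines (kfp v2) sketch: two placeholder components
# chained into a pipeline, then compiled to a YAML spec.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.10")
def preprocess(msg: str) -> str:
    # Placeholder preprocessing step; a real step would read/write data.
    return msg.upper()


@dsl.component(base_image="python:3.10")
def train(data: str) -> str:
    # Placeholder training step.
    return f"model trained on: {data}"


@dsl.pipeline(name="minimal-ml-pipeline")
def minimal_pipeline(msg: str = "raw data"):
    prep = preprocess(msg=msg)
    train(data=prep.output)


if __name__ == "__main__":
    # The resulting YAML can be uploaded to a Kubeflow Pipelines deployment.
    compiler.Compiler().compile(minimal_pipeline, "minimal_pipeline.yaml")
```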
Wanted to check if AWS supports GPU inferencing via serverless compute (dynamic loading), since I don't want to spend $1.5/h on an EC2 instance that my client will use for no more than 5 minutes per mont...
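For context, a minimal sketch of SageMaker Serverless Inference is shown below, which bills per request rather than per instance-hour; at the time of writing, serverless endpoints are CPU-only, so GPU models generally fall back to real-time or asynchronous endpoints. The image URI, model artifact path, role, and endpoint name are placeholders.

```python
# Sketch: deploy a model behind a SageMaker serverless endpoint (CPU-only).
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

model = Model(
    image_uri="<inference-image-uri>",         # placeholder
    model_data="s3://<bucket>/model.tar.gz",   # placeholder
    role="<execution-role-arn>",               # placeholder
)

predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=4096,  # 1024-6144 MB, in 1 GB increments
        max_concurrency=5,       # concurrent invocations before throttling
    ),
    endpoint_name="my-serverless-endpoint",    # illustrative name
)
```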
Hello,
I am trying to serve a model using a SageMaker Endpoint.
I am using Triton Inference Server as the serving framework.
I know that I can enable Triton's gRPC protocol communication by setting the `SAGEMA...
I was trying to create a SageMaker project using the template "MLOps template for model building, training, and deployment with third-party Git repositories using Jenkins".
But I kept getting the err...
Hello AWS team!
I am trying to run a suite of inference recommendation jobs leveraging NVIDIA Triton Inference Server on a set of GPU instances (ml.g5.12xlarge, ml.g5.8xlarge, ml.g5.16xlarge) as well...
Hello,
I am trying to run a suite of inference recommendation jobs on a set of GPU instances (ml.g5.12xlarge, ml.g5.8xlarge, ml.g5.16xlarge) as well as AWS Inferentia machines (ml.inf2.2xlarge, ml.in...
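Both of the preceding questions concern SageMaker Inference Recommender. A minimal sketch of starting a recommendation job with boto3 is shown below; the job name, role, and model package ARN are placeholders, and a "Default" job runs a quick pass over recommended instance types, while an "Advanced" job lets you pin specific instance types (e.g. ml.g5.* or ml.inf2.*) and traffic patterns.

```python
# Sketch: kick off a SageMaker Inference Recommender job from boto3.
import boto3

sm = boto3.client("sagemaker")

sm.create_inference_recommendations_job(
    JobName="triton-gpu-recommendation",       # illustrative name
    JobType="Default",                         # or "Advanced" for custom instance lists
    RoleArn="<execution-role-arn>",            # placeholder
    InputConfig={
        # A versioned model package from the Model Registry is the usual input.
        "ModelPackageVersionArn": "<model-package-version-arn>",  # placeholder
    },
)
```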
Announcement for pre-built AWS solutions
Hi,
How would one go about designing a serverless ML application in AWS?
Currently, our project is using the [serverless framework](https://www.serverless.com/) and lambda functions to accomplish thi...
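One common pattern for the setup described above is a Lambda function that forwards requests to a SageMaker endpoint via the runtime client. The sketch below assumes an existing endpoint named `my-endpoint` and a JSON request shape with an `instances` field; both are illustrative, not part of the original question.

```python
# Sketch: Lambda handler that proxies a JSON payload to a SageMaker endpoint.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")


def handler(event, context):
    # "instances" is an assumed request field; adapt to your API contract.
    payload = json.dumps(event.get("instances", []))
    response = runtime.invoke_endpoint(
        EndpointName="my-endpoint",            # placeholder endpoint name
        ContentType="application/json",
        Body=payload,
    )
    prediction = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(prediction)}
```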
I want to create a training step in a SageMaker pipeline and use a custom processor like the one below. But instead of Python code I want to use Java code in place of [code = 'src/processing.py' ]. Is it po...
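One way to avoid the Python script entirely is to use the generic `Processor` class with a custom container whose entrypoint runs a JAR. The sketch below assumes a user-built ECR image and bucket paths; all names are placeholders, and this is one possible approach rather than the only answer to the question.

```python
# Sketch: pipeline ProcessingStep backed by a custom Java container.
from sagemaker.processing import Processor, ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep

java_processor = Processor(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-java-processor:latest",  # placeholder
    role="<execution-role-arn>",                                                    # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    entrypoint=["java", "-jar", "/opt/app/processing.jar"],  # no Python script needed
)

step_process = ProcessingStep(
    name="JavaProcessing",
    processor=java_processor,
    inputs=[ProcessingInput(source="s3://<bucket>/raw/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://<bucket>/processed/")],
)
```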
I am trying to build an architecture for a custom anomaly-detection AI on AWS for my startup. Please let me know if my way of thinking is correct or not:
1. Data Ingestion: Ingesting the data into AWS S3 in JSON fo...
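For the ingestion step mentioned above, a minimal sketch of writing JSON records to S3 with boto3 follows; the bucket name, key layout, and record fields are illustrative assumptions only.

```python
# Sketch: ingest a single JSON record into S3.
import json
import boto3

s3 = boto3.client("s3")

record = {"device_id": "sensor-42", "timestamp": "2024-01-01T00:00:00Z", "value": 3.7}
s3.put_object(
    Bucket="my-anomaly-raw-data",              # placeholder bucket
    Key="ingest/2024/01/01/sensor-42.json",    # date-partitioned key (illustrative)
    Body=json.dumps(record).encode("utf-8"),
    ContentType="application/json",
)
```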
Calling the SageMaker model endpoint with contentType `application/octet-stream`, which is also being captured in the Data Capture logs.
What would be the ideal way to transform the data such that model mo...
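As background for the question above: Data Capture writes JSON Lines files, and binary payloads such as `application/octet-stream` are stored base64-encoded under `captureData.endpointInput`. The sketch below decodes those records as a starting point for a preprocessing/transformation step; the local file name is a placeholder for a downloaded capture object.

```python
# Sketch: read a Data Capture JSONL file and decode base64-encoded payloads.
import base64
import json

with open("capture-file.jsonl") as f:          # placeholder local copy of the S3 object
    for line in f:
        record = json.loads(line)
        inp = record["captureData"]["endpointInput"]
        if inp.get("encoding") == "BASE64":
            raw_bytes = base64.b64decode(inp["data"])
        else:
            raw_bytes = inp["data"].encode("utf-8")
        # Transform raw_bytes into the tabular format expected by Model Monitor
        # (e.g. CSV/JSON features) before writing it back out for analysis.
        print(len(raw_bytes))
```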
Based on the AWS docs/examples (https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry-version.html), one can create/register a model that is generated by your training pipeline. First we need to cr...
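Following that documentation page, the usual first step is creating a model package group and then registering each trained model as a version in it. A minimal boto3 sketch is below; the group name, image URI, and model artifact path are placeholders.

```python
# Sketch: register a trained model as a version in the SageMaker Model Registry.
import boto3

sm = boto3.client("sagemaker")

# 1) The model package group is the container for versioned models.
sm.create_model_package_group(
    ModelPackageGroupName="my-model-group",                     # illustrative
    ModelPackageGroupDescription="Models from the training pipeline",
)

# 2) Each trained model is registered as a new version in that group.
sm.create_model_package(
    ModelPackageGroupName="my-model-group",
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [{
            "Image": "<inference-image-uri>",                   # placeholder
            "ModelDataUrl": "s3://<bucket>/model.tar.gz",       # placeholder
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
```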
Hi,
I'm working on an end-to-end ML project which, for the moment, goes from training (it takes already-processed train/val/test data from an S3 bucket) to deployment, passing through hyperparameter tun...
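For the hyperparameter tuning part of such a project, a minimal sketch with the SageMaker Python SDK is shown below. The training image, role, metric regex, hyperparameter names, and S3 paths are all assumptions for illustration; a real project would substitute its own training container and data channels.

```python
# Sketch: hyperparameter tuning with the SageMaker Python SDK.
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

estimator = Estimator(
    image_uri="<training-image-uri>",          # placeholder
    role="<execution-role-arn>",               # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://<bucket>/models/",       # placeholder
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    metric_definitions=[{"Name": "validation:auc",
                         "Regex": "validation-auc: ([0-9\\.]+)"}],  # assumed log format
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-4, 1e-1),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=10,
    max_parallel_jobs=2,
)

tuner.fit({"train": "s3://<bucket>/train/", "validation": "s3://<bucket>/val/"})
# The best model can then be deployed, e.g. via tuner.best_estimator().deploy(...).
```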