All Content tagged with Amazon SageMaker Deployment

Amazon SageMaker provides a broad selection of machine learning (ML) infrastructure and model deployment options to help meet your needs, whether real time or batch. Once you deploy a model, SageMaker creates persistent endpoints to integrate into your applications to make ML predictions (also known as inference). It supports the entire spectrum of inference, from low latency (a few milliseconds) and high throughput (hundreds of thousands of inference requests per second) to long-running inference for use cases such as natural language processing (NLP) and computer vision (CV). Whether you bring your own models and containers or use those provided by AWS, you can implement MLOps best practices using SageMaker to reduce the operational burden of managing ML models at scale.
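As a minimal sketch of the real-time inference flow described above: once an endpoint is deployed, an application serializes a request, invokes the endpoint, and parses the prediction. The endpoint name and JSON payload shape here are assumptions for illustration (containers differ in the formats they accept); the actual `invoke_endpoint` call via boto3 is shown as a comment since it requires a deployed endpoint and AWS credentials.

```python
import json

# Hypothetical endpoint name; a real endpoint must already be deployed.
ENDPOINT_NAME = "my-sagemaker-endpoint"

def build_request(features):
    """Serialize a feature vector into a JSON body. The {"instances": ...}
    shape is an assumption; the expected format depends on the serving
    container behind the endpoint."""
    return json.dumps({"instances": [features]})

def parse_response(body):
    """Extract predictions from an assumed {"predictions": ...} JSON body."""
    return json.loads(body)["predictions"]

# With boto3 installed and AWS credentials configured, a real-time
# invocation would look like this (not executed in this sketch):
#
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName=ENDPOINT_NAME,
#     ContentType="application/json",
#     Body=build_request([0.5, 1.2, 3.4]),
# )
# predictions = parse_response(response["Body"].read())

# Local demonstration of the serialization helpers alone:
print(build_request([0.5, 1.2, 3.4]))
print(parse_response('{"predictions": [[0.91]]}'))  # -> [[0.91]]
```

For batch or long-running workloads mentioned above, SageMaker offers other modes (such as batch transform and asynchronous inference) instead of a persistent real-time endpoint.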
