1 Answer
As a native service offering, Greengrass supports deploying models to the edge and running inference code against those models. Nothing prevents you from deploying your own code to the edge that trains a model, but I suspect you wouldn't be able to store it as a Greengrass local resource for later inference without doing a round trip to the cloud and redeploying to Greengrass.
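To illustrate the supported pattern (model trained in the cloud, deployed to the edge for inference), here is a sketch of a Greengrass v2 component recipe. The component name, bucket, and file names are hypothetical; the model artifact is produced in the cloud and pulled down at deployment time:

```yaml
---
RecipeFormatVersion: "2020-01-25"
ComponentName: com.example.LocalInference   # hypothetical component name
ComponentVersion: "1.0.0"
ComponentDescription: Runs local inference against a cloud-trained model.
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      # Run the inference script against the deployed model artifact
      Run: python3 -u {artifacts:path}/inference.py --model {artifacts:path}/model.tar.gz
    Artifacts:
      # Both artifacts are staged in S3 by the cloud-side pipeline (illustrative URIs)
      - URI: s3://example-bucket/models/model.tar.gz
      - URI: s3://example-bucket/scripts/inference.py
```

Updating the model means uploading a new artifact to S3 and creating a new component version and deployment, which is the cloud round trip described above; a model retrained on the device itself would not flow back into this mechanism on its own.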
answered 5 years ago