You can also train your own model in Amazon SageMaker. You can use built-in models, or bring your own model and fine-tune it on your images.
Hi Shimekan,
Here is how I would approach object positional analysis using AWS Rekognition:
Amazon Rekognition's object and scene detection capabilities can identify the location of common objects in images and videos by returning bounding box coordinates. This can be done using the DetectLabels API.
The bounding box information returned by DetectLabels can be used to infer the position and orientation of the detected objects. For example, you could analyze the relative positions and sizes of the bounding boxes to determine if an object is properly aligned.
To implement this, you would need to:
1. Use the DetectLabels API to detect objects in your images
2. Extract the bounding box coordinates for the objects of interest
3. Analyze the bounding box data to determine the object's alignment and orientation
4. Compare the observed orientation to your desired "pass" tolerance to classify the object as aligned or not
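The steps above can be sketched in Python. The sample response below is hypothetical, but it follows the documented DetectLabels shape (bounding box `Width`, `Height`, `Left`, and `Top` are ratios of the overall image dimensions); in practice you would obtain it from a real call such as `boto3.client("rekognition").detect_labels(...)`. The label name, expected center, and tolerance are illustrative assumptions, not fixed values:

```python
# Hypothetical sample mirroring the documented DetectLabels response
# structure; in a real pipeline this comes from the Rekognition API.
sample_response = {
    "Labels": [
        {
            "Name": "Bottle",
            "Confidence": 97.5,
            "Instances": [
                {
                    # Bounding box values are ratios of image width/height
                    "BoundingBox": {"Width": 0.10, "Height": 0.42,
                                    "Left": 0.46, "Top": 0.30},
                    "Confidence": 97.5,
                }
            ],
        }
    ]
}


def extract_boxes(response, label_name):
    """Step 2: collect bounding boxes for every instance of a given label."""
    boxes = []
    for label in response["Labels"]:
        if label["Name"] == label_name:
            for instance in label.get("Instances", []):
                boxes.append(instance["BoundingBox"])
    return boxes


def is_aligned(box, expected_center_x=0.5, tolerance=0.05):
    """Steps 3-4: pass/fail check against an assumed alignment rule --
    here, whether the box is horizontally centered within a tolerance."""
    center_x = box["Left"] + box["Width"] / 2.0
    return abs(center_x - expected_center_x) <= tolerance


boxes = extract_boxes(sample_response, "Bottle")
print([is_aligned(b) for b in boxes])
```

The alignment rule here (horizontal centering) is only one possibility; you could equally compare aspect ratios or relative positions of multiple boxes, depending on what "properly aligned" means for your parts.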
While Amazon Rekognition does not provide a pre-built API for this type of object positional analysis, you can build a custom solution using the existing Rekognition capabilities along with other AWS services like AWS Lambda, Amazon S3, and Amazon SageMaker.
I recommend reviewing the Amazon Rekognition documentation for more details on the DetectLabels API response structure and how to work with the bounding box data. You can find the latest documentation in the Amazon Rekognition Developer Guide.