
Object detection and model retraining with Amazon SageMaker and Amazon Augmented AI

Industries like healthcare, media, and social media platforms use image analysis workflows to identify objects and entities within pictures to understand the whole image. For example, an ecommerce website might use objects present in an image to surface relevant search results. Image analysis can be difficult when images are blurry or their content is nuanced. In these cases, you may need a human to complete the machine learning (ML) loop and advise on the image using their own judgment.

In this post, we use Amazon SageMaker to build, train, and deploy an ML model for object detection and use Amazon Augmented AI (Amazon A2I) to build and render a custom worker template that allows reviewers to identify or review objects found in an image. Amazon SageMaker is a fully managed service that provides developers and data scientists the ability to build, train, and deploy ML models quickly. Amazon SageMaker removes the heavy lifting from each step of the ML process to make it easier to develop high-quality models. Amazon A2I is a fully managed service that helps you build human review workflows to review and validate the ML models’ predictions.

You can also use Amazon Rekognition for object detection to identify objects from a predefined set of classes, or use Amazon Rekognition Custom Labels to train your custom model to detect objects and scenes in images that are specific to your business needs, simply by bringing your own data.

Some other common use cases that may require human workflows are content moderation in images and video, extracting text and entities from documents, translation, and sentiment analysis. Although you can use ML models to identify inappropriate content or extract entities, humans may need to validate the model predictions based on the use case. Amazon A2I helps you quickly create these human workflows.

You can also use Amazon A2I to send a random sample of ML predictions to human reviewers. You can use these results to inform stakeholders about the model’s performance and to audit model predictions.

Prerequisites

This post requires you to have the following prerequisites:

  • An IAM role – To create a human review workflow, you need to provide an AWS Identity and Access Management (IAM) role that grants Amazon A2I permission to access Amazon Simple Storage Service (Amazon S3) for reading objects to render in a human task UI and for writing the results. This role also needs an attached trust policy to give Amazon SageMaker permission to assume the role. This allows Amazon A2I to perform actions in accordance with permissions that you attach to the role. For more information and example policies, see Add Permissions to the IAM Role Used to Create a Flow Definition. A minimal boto3 sketch for creating such a role follows this list.
  • Accompanying object detection training notebook – This post provides an accompanying notebook for this walkthrough. For this post, we focus on using Amazon A2I and the importance of bringing human reviewers in the loop. Therefore, we take a trained object detection model from an S3 bucket and host it on an Amazon SageMaker hosted endpoint for real-time prediction. For more information about training an object detection model using the Amazon SageMaker built-in Single Shot MultiBox Detector (SSD) algorithm with the PASCAL VOC dataset and hosting it for real-time prediction, see the GitHub repo. If you’re interested in building your own model, follow the object detection training notebook. If you have a large dataset without Amazon SageMaker Ground Truth labels, you can use Ground Truth to efficiently label your images at scale.
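The following is a minimal, hedged sketch of creating such a role with boto3. The role name is hypothetical, and the managed AmazonS3FullAccess policy is attached only for brevity; in practice, scope the S3 permissions down to the buckets this walkthrough uses.

import json
import boto3

iam = boto3.client('iam')

# Trust policy that lets Amazon SageMaker (and therefore Amazon A2I) assume the role
assume_role_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'sagemaker.amazonaws.com'},
        'Action': 'sts:AssumeRole'
    }]
}

create_role_response = iam.create_role(
    RoleName='A2IDemoFlowDefinitionRole',  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(assume_role_policy)
)

# Broad S3 access for demo purposes only; restrict this in production
iam.attach_role_policy(
    RoleName='A2IDemoFlowDefinitionRole',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess'
)

workflow_role_arn = create_role_response['Role']['Arn']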

Walkthrough overview

To implement this solution, you complete the following steps:

  1. Host an object detection model on Amazon SageMaker.
  2. Create a worker task template.
  3. Create a private work team.
  4. Create a human review workflow.
  5. Call the Amazon SageMaker endpoint.
  6. Complete the human review.
  7. Process the JSON output for incremental training.

For this post, we ran the walkthrough in us-east-1, but Amazon A2I is available in many Regions. For more information, see Region Table.

Step 1: Host an object detection model on Amazon SageMaker

This step is available in the accompanying Jupyter notebook. To set up your endpoint, enter the following Python code (this may take a few minutes):

# provided trained model in the public bucket
source_model_data_s3_uri = 's3://aws-sagemaker-augmented-ai-example/model/model.tar.gz'
!aws s3 cp {source_model_data_s3_uri} {MODEL_PATH}/model.tar.gz
model_data_s3_uri = f'{MODEL_PATH}/model.tar.gz'

timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
endpoint_name = 'DEMO-object-detection-augmented-ai-' + timestamp

# the Docker image for the SageMaker built-in object detection algorithm
image = sagemaker.amazon.amazon_estimator.get_image_uri(region, 'object-detection', repo_version='latest')

# load a Model object from the provided trained model artifacts
model = sagemaker.model.Model(model_data_s3_uri,
                              image,
                              role=role,
                              predictor_cls=sagemaker.predictor.RealTimePredictor,
                              sagemaker_session=sess)

# deploy the model to one ml.m4.xlarge instance
object_detector = model.deploy(initial_instance_count=1,
                               instance_type='ml.m4.xlarge',
                               endpoint_name=endpoint_name)

# define the input content type; we send JPEG images to the endpoint
object_detector.content_type = 'image/jpeg'

When the endpoint is up and running, you should see the InService status on the Amazon SageMaker console. (The console link takes you to us-east-1, where we ran the demo, but Amazon A2I is available in many more Regions; be sure to switch to your Region.)

To see what object detection looks like, enter the following code. The predicted class and the prediction probability are visualized, along with the bounding box, using the helper functions visualize_detection and load_and_predict defined in the accompanying Jupyter notebook.

test_photos_index = ['980382', '276517', '1571457']
if not os.path.isdir('sample-a2i-images'):
    os.mkdir('sample-a2i-images')
for ind in test_photos_index:
    !curl https://images.pexels.com/photos/{ind}/pexels-photo-{ind}.jpeg > sample-a2i-images/pexels-photo-{ind}.jpeg

test_photos = ['sample-a2i-images/pexels-photo-980382.jpeg',   # motorcycle
               'sample-a2i-images/pexels-photo-276517.jpeg',   # bicycle
               'sample-a2i-images/pexels-photo-1571457.jpeg']  # sofa

results, detection_filtered, f = load_and_predict(test_photos[1], object_detector, threshold=0.2)
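The helper functions themselves are defined in the accompanying notebook. The following is only a simplified sketch of what load_and_predict does, assuming the built-in algorithm's JSON response format and the object_categories class list from the notebook; the real version also plots the boxes with visualize_detection and returns the matplotlib figure as a third value.

import json

def load_and_predict(file_name, predictor, threshold=0.5):
    # Send a local JPEG to the endpoint and keep detections scoring at least `threshold`
    with open(file_name, 'rb') as image:
        payload = image.read()
    results = json.loads(predictor.predict(payload))
    # Each detection is [class_index, score, xmin, ymin, xmax, ymax] with normalized coordinates
    detection_filtered = [[object_categories[int(d[0])], d[1]] + d[2:]
                          for d in results['prediction'] if d[1] >= threshold]
    return results, detection_filtered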

The following screenshot shows the output of an image with a label and bounding box.

We under-trained this SSD model for demonstration purposes in the object detection training notebook. Although the model identifies a bicycle in the image, a probability of 0.245 is too low to be considered a trustworthy prediction in modern computer vision. Furthermore, the localization of the object isn’t quite accurate; the bounding box doesn’t cover the front wheel and the saddle. However, this under-trained model serves as a perfect example of when to bring in human reviewers: when a model can’t make a prediction with high confidence.

Step 2: Create a worker task template

You can use Amazon A2I to incorporate a human review into any ML workflow. In this post, to integrate Amazon A2I with the Amazon SageMaker hosted endpoint, you need to create a custom task. When you use a custom task type, you create and start a human loop using the Amazon A2I Runtime API to send the data that requires review using a worker task template. For more information, see Use Amazon Augmented AI with Custom Task Types.

Crowd HTML elements are web components that provide several task widgets and design elements that you can tailor to the question you want to ask. You can use these crowd elements to create a custom worker template and integrate it with an Amazon A2I human review workflow to customize the worker console and instructions. We provide over 60 sample custom task templates in the GitHub repo that you can use as is or as a starting point to customize your own templates. For an object detection use case, the reviewer typically needs to select labels and draw bounding boxes. For this post, you use one of the sample task templates, bounding-box.liquid.html, from the repository and make some customizations. This template includes labeling instructions, labeling functionality (draw, zoom in and out, and label search) and reads an image from a given Amazon S3 path. You may also customize the template to display the bounding boxes with an initial-value so that the workers can start with a bounding box predicted by the ML model instead of drawing the bounding box from scratch.

This step is available in the accompanying Jupyter notebook. To create a custom worker template on the Amazon A2I console, complete the following steps:

  1. Navigate to Worker task templates.
  2. Choose Create template.
  3. For Template name, enter a name that is unique within the Region in your account; for example, a2i-demo-object-detection-ui.
  4. For Template type, choose Custom.
  5. In the Template editor, enter the sample task HTML templates from bounding-box.liquid.html.
    1. Modify the labels variable in the editor according to the classes included in the PASCAL VOC dataset and object detection model: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat','chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person','pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
  6. Choose Create.
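Alternatively, you can create the same worker task template programmatically with the CreateHumanTaskUi API. The following is a hedged sketch rather than the full sample: it embeds a stripped-down crowd-bounding-box template with the PASCAL VOC labels, whereas in practice you would paste in the full, customized contents of bounding-box.liquid.html.

import boto3

sagemaker_client = boto3.client('sagemaker')

# Minimal bounding-box template; replace with the full customized bounding-box.liquid.html content
template = r"""
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<crowd-form>
  <crowd-bounding-box
    name="annotatedResult"
    src="{{ task.input.taskObject | grant_read_access }}"
    header="Draw a bounding box around each object you recognize"
    labels="['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']">
    <full-instructions header="Instructions">
      Draw a tight box around every object that matches one of the labels.
    </full-instructions>
    <short-instructions>Draw a box around each labeled object.</short-instructions>
  </crowd-bounding-box>
</crowd-form>
"""

create_ui_response = sagemaker_client.create_human_task_ui(
    HumanTaskUiName='a2i-demo-object-detection-ui',
    UiTemplate={'Content': template}
)
human_task_ui_arn = create_ui_response['HumanTaskUiArn']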

Step 3: Create a private work team

You can easily route reviews to your private workforce with Amazon A2I. You can also access a workforce of over 500,000 independent contractors who are already performing ML-related tasks through Amazon Mechanical Turk. Alternatively, if your data requires confidentiality or special skills, you can use workforce vendors who are experienced with review projects and prescreened by AWS for quality and security procedures.

Whichever workforce type you choose, Amazon A2I takes care of sending tasks to workers. For this post, you create a work team using a private workforce and add yourself to the team to preview the Amazon A2I workflow.

You create and manage your private workforce on the Labeling workforces page on the Amazon SageMaker console. When following the instructions, you can create a private workforce by entering worker emails or by importing a pre-existing workforce from an Amazon Cognito user pool.

If you already have a work team created for Amazon SageMaker Ground Truth, you can use the same work team with Amazon A2I and skip to the following section.

This step is not available in the accompanying Jupyter notebook.

To create your private work team, complete the following steps:

  1. On the Amazon SageMaker console, navigate to the Labeling workforces page.
  2. On the Private tab, choose Create private team.
  3. Choose Invite new workers by email.
  4. For this post, enter your email address to work on your image review tasks.

You can enter a list of up to 50 email addresses, separated by commas, into the Email addresses box.

  5. Enter an organization name and contact email.
  6. Choose Create private team.

After you create the private team, you get an email invitation. The following screenshot shows an example email.

After you click the link and change your password, you are registered as a verified worker for this team. The following screenshot shows the updated information on the Private tab.

Your one-person team is now ready, and you can create a human review workflow.

Replace YOUR_WORKTEAM_ARN in the accompanying Jupyter notebook with the ARN of the work team you created:

WORKTEAM_ARN = 'YOUR_WORKTEAM_ARN'
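If you prefer not to copy the ARN from the console, you can look it up programmatically. This is a minimal sketch, assuming a boto3 SageMaker client and the team name you chose (for example, a2i-demo-1):

import boto3

sagemaker_client = boto3.client('sagemaker')
workteam = sagemaker_client.describe_workteam(WorkteamName='a2i-demo-1')['Workteam']
WORKTEAM_ARN = workteam['WorkteamArn']
print(WORKTEAM_ARN)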

Step 4: Create a human review workflow

A human review workflow is also referred to as a flow definition. You use the flow definition to configure your human work team and provide information about how to accomplish the review task. You can use a flow definition to create multiple human loops.
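You can create the flow definition on the Amazon A2I console, as described in the following steps, or programmatically with the CreateFlowDefinition API. The following is a hedged sketch: the output path and task settings are placeholder choices, WORKTEAM_ARN and human_task_ui_arn come from the previous steps, and role is an IAM role with the permissions described in the prerequisites.

import boto3

sagemaker_client = boto3.client('sagemaker')

create_workflow_response = sagemaker_client.create_flow_definition(
    FlowDefinitionName='a2i-demo-1',
    RoleArn=role,
    HumanLoopConfig={
        'WorkteamArn': WORKTEAM_ARN,            # private work team from Step 3
        'HumanTaskUiArn': human_task_ui_arn,    # worker task template from Step 2
        'TaskTitle': 'Object detection a2i demo',
        'TaskDescription': 'Draw bounding boxes around the objects in the image',
        'TaskCount': 1                          # number of workers who review each object
    },
    OutputConfig={'S3OutputPath': f's3://{BUCKET}/a2i-results'}
)
flowDefinitionArn = create_workflow_response['FlowDefinitionArn']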

This step is available in the accompanying Jupyter notebook. To do so on the Amazon A2I console, complete the following steps:

  1. Navigate to the Human review workflows page.
  2. Choose Create human review workflow.
  3. In the Workflow settings section, for Name, enter a unique workflow name; for example, a2i-demo-1.
  4. For S3 bucket, enter the S3 bucket where you want to store the human review results.

The bucket must be located in the same Region as the workflow. For example, if you create a bucket called a2i-demos, enter the path s3://a2i-demos/.

  5. For IAM role, choose Create a new role from the drop-down menu.

Amazon A2I can create a role automatically for you.

  6. For S3 buckets you specify, select Specific S3 buckets.
  7. Enter the S3 bucket you specified earlier; for example, a2i-demos.
  8. Choose Create.

You see a confirmation when role creation is complete, and your role is now pre-populated in the IAM role drop-down menu.

  9. For Task type, select Custom.

In the next steps, you select the UI template you created earlier.

  10. In the Worker task template section, select Use your own template.
  11. For Template, choose the template you created.
  12. For Task description, enter a short description of the task.
  13. In the Workers section, select Private.
  14. For Private teams, choose the work team you created earlier.
  15. Choose Create.

You are redirected to the Human review workflows page and see a confirmation message similar to the following screenshot.

Record your new human review workflow ARN, which you use to configure your human loop in the next section.

Step 5: Call the Amazon SageMaker endpoint

Now that you have set up your Amazon A2I human review workflow, you’re ready to call your object detection endpoint on Amazon SageMaker and start your human loops. For this use case, you only want to start a human loop if the highest prediction probability score returned by your model for the objects detected is less than 50% (SCORE_THRESHOLD). With a bit of logic (see the following code), you can check the response for each call to the Amazon SageMaker endpoint using the load_and_predict helper function, and if the highest prediction probability score is less than 50%, you create a human loop.

You use a human loop to start your human review workflow. When a human loop is triggered, human review tasks are sent to the workers as specified in the flow definition.

This step is available in the accompanying Jupyter notebook.

human_loops_started = []
SCORE_THRESHOLD = .50
for fname in test_photos:
    # Call the SageMaker endpoint and drop any detection with probability lower than 0.4
    response, score_filtered, fig = load_and_predict(fname, object_detector, threshold=0.4)
    # Sort by prediction score so that the first item has the highest probability
    score_filtered.sort(key=lambda x: x[1], reverse=True)
    # Our condition for triggering a human review:
    # if the highest probability is lower than the threshold, send the image to human review;
    # otherwise proceed to the next image
    if score_filtered[0][1] < SCORE_THRESHOLD:
        s3_fname = 's3://%s/a2i-results/%s' % (BUCKET, fname)
        print(s3_fname)
        humanLoopName = str(uuid.uuid4())
        inputContent = {
            "initialValue": score_filtered[0][0],
            "taskObject": s3_fname  # the S3 object is passed to the worker task UI to render
        }
        # start an A2I human review loop with an input
        start_loop_response = a2i.start_human_loop(
            HumanLoopName=humanLoopName,
            FlowDefinitionArn=flowDefinitionArn,
            HumanLoopInput={
                "InputContent": json.dumps(inputContent)
            }
        )
        human_loops_started.append(humanLoopName)
        print('Object detection confidence score of %s is less than the threshold of %.2f' % (score_filtered[0][0], SCORE_THRESHOLD))
        print(f'Starting human loop with name: {humanLoopName}\n')
    else:
        print('Object detection confidence score of %s is above the threshold of %.2f' % (score_filtered[0][0], SCORE_THRESHOLD))
        print('No human loop created.\n')

The preceding code uses a simple if-else statement, but for dynamic conditions, you can also use AWS Lambda to evaluate if an object needs a human review. When you decide that a human review is needed, you can create a human loop using a2i.start_human_loop.
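As a hedged illustration only, a Lambda function along the following lines could apply the same confidence-threshold logic outside the notebook. The event shape (top_label, top_score, image_s3_uri) and the flow definition ARN are assumptions you would replace with your own values.

import json
import uuid
import boto3

a2i = boto3.client('sagemaker-a2i-runtime')

FLOW_DEFINITION_ARN = 'arn:aws:sagemaker:us-east-1:111122223333:flow-definition/a2i-demo-1'  # placeholder
SCORE_THRESHOLD = 0.50

def lambda_handler(event, context):
    # Expects the caller to pass the highest-scoring detection and the image location
    label = event['top_label']
    score = event['top_score']
    if score >= SCORE_THRESHOLD:
        return {'human_review': False}

    response = a2i.start_human_loop(
        HumanLoopName=str(uuid.uuid4()),
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={'InputContent': json.dumps({
            'initialValue': label,
            'taskObject': event['image_s3_uri']
        })}
    )
    return {'human_review': True, 'humanLoopArn': response['HumanLoopArn']}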

Step 6: Complete the human review

After you send the images with low prediction probability to Amazon A2I via the start_human_loop call, you or the person assigned as the reviewer can log in to the labeling portal to review the images. You can find the URL on the Amazon SageMaker console, on the Private tab of the Labeling workforces page. You can also programmatically retrieve the URL with the following code:

print('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])

For this post, workteamName is a2i-demo-1.

To complete a human review, complete the following steps:

  1. When you navigate to the portal, you are prompted to log in with your username and password (if this is your first time visiting the portal, you need to create a new password).

You can see a new job for you in the Jobs section.

  2. Choose Object detection a2i demo.
  3. Choose Start working.

The page contains a customizable instruction panel, the image, and available labels.

  4. From the toolbar, choose Box.
  5. Under Labels, choose bicycle.
  6. Draw your bounding box around the object.
  7. Choose Submit.

After you complete all the image reviews, you can analyze the output of the human loop. Amazon A2I stores the results in your S3 bucket and sends you an Amazon CloudWatch event.
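Before parsing the results, you can poll the status of each loop with the Amazon A2I Runtime API and collect the completed ones, which is roughly what the accompanying notebook does to build the completed_human_loops list used in Step 7:

completed_human_loops = []
for human_loop_name in human_loops_started:
    resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
    print(f'HumanLoop name: {human_loop_name}, status: {resp["HumanLoopStatus"]}')
    if resp['HumanLoopStatus'] == 'Completed':
        completed_human_loops.append(resp)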

Your results should be available in the Amazon S3 output path specified in the human review workflow definition when all work is completed. The human answer, label, and bounding box are returned and saved in the JSON file. The following code shows a sample Amazon A2I output JSON file:

{'flowDefinitionArn': 'arn:aws:sagemaker:us-east-1:xxxxxxxx:flow-definition/fd-sagemaker-object-detection-demo-2020-05-01-18-08-47',
 'humanAnswers': [{'answerContent': {'annotatedResult': {'boundingBoxes': [{'height': 1801,
                                                                            'label': 'bicycle',
                                                                            'left': 1042,
                                                                            'top': 627,
                                                                            'width': 2869}],
                                                          'inputImageProperties': {'height': 2608,
                                                                                   'width': 3911}}},
                   'submissionTime': '2020-05-01T18:24:53.742Z',
                   'workerId': 'xxxxxxxxxx'}],
 'humanLoopName': 'xxxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx',
 'inputContent': {'initialValue': 'bicycle',
                  'taskObject': 's3://sagemaker-us-east-1-xxxxxxx/a2i-results/sample-a2i-images/pexels-photo-276517.jpeg'}}

You can retrieve this information and parse it for further analyses. In the next step, we show you how to use this human-reviewed data for the next retraining iteration of your model.

Step 7: Process the JSON output for incremental training

In the object detection training notebook, you used the Amazon SageMaker built-in object detection algorithm to train the first version of the model. You used the model to generate predictions on some random out-of-sample images and got an unsatisfactory prediction (low probability). You also used Amazon A2I to review and label the image based on your custom criteria (<50% confidence score threshold). The next step in a typical ML lifecycle is to include the cases with which the model had trouble in the next batch of training data for retraining purposes. This way, the model can learn from a set of new training data for continuous improvement. In ML, this is called incremental training.

This step is available in the accompanying Jupyter notebook.

You can supply the image data and annotations to the object detection algorithm in three different ways. For more information, see Input/Output Interface for the Object Detection Algorithm. For this post, we trained the original model with the RecordIO format because we converted the PASCAL VOC images and annotations into RecordIO format. For instructions on creating a custom RecordIO dataset, see Prepare custom datasets for object detection.

Alternatively, the object detection algorithm also takes a JSON file as input. You can create one JSON file per image or take advantage of Pipe mode by using an augmented manifest file as the input format. Pipe mode accelerates overall model training time by up to 35% by streaming the data into the training algorithm while it’s running instead of copying data to the Amazon Elastic Block Store (Amazon EBS) volume attached to the training instance. You can construct an augmented manifest file from the Amazon A2I output with the following code:

object_categories_dict = {str(i): j for i, j in enumerate(object_categories)}

def convert_a2i_to_augmented_manifest(a2i_output):
    annotations = []
    confidence = []
    for i, bbox in enumerate(a2i_output['humanAnswers'][0]['answerContent']['annotatedResult']['boundingBoxes']):
        object_class_key = [key for (key, value) in object_categories_dict.items() if value == bbox['label']][0]
        obj = {'class_id': int(object_class_key),
               'width': bbox['width'],
               'top': bbox['top'],
               'height': bbox['height'],
               'left': bbox['left']}
        annotations.append(obj)
        confidence.append({'confidence': 1})

    augmented_manifest = {
        'source-ref': a2i_output['inputContent']['taskObject'],
        'a2i-retraining': {
            'annotations': annotations,
            'image_size': [{'width': a2i_output['humanAnswers'][0]['answerContent']['annotatedResult']['inputImageProperties']['width'],
                            'depth': 3,
                            'height': a2i_output['humanAnswers'][0]['answerContent']['annotatedResult']['inputImageProperties']['height']}]},
        'a2i-retraining-metadata': {
            'job-name': 'a2i/%s' % a2i_output['humanLoopName'],
            'class-map': object_categories_dict,
            'human-annotated': 'yes',
            'objects': confidence,
            'creation-date': a2i_output['humanAnswers'][0]['submissionTime'],
            'type': 'groundtruth/object-detection'}}
    return augmented_manifest

This results in a JSON object like the following code, which is compatible with how Ground Truth outputs the result and how the SageMaker built-in object detection algorithm expects the input:

{'source-ref': 's3://sagemaker-us-east-1-xxxxxxx/a2i-results/sample-a2i-images/pexels-photo-276517.jpeg',
 'a2i-retraining': {
     'annotations': [{'class_id': 1, 'height': 1801, 'left': 1042, 'top': 627, 'width': 2869}],
     'image_size': [{'depth': 3, 'height': 2608, 'width': 3911}]},
 'a2i-retraining-metadata': {
     'class-map': {'0': 'aeroplane', '1': 'bicycle', '2': 'bird', '3': 'boat', '4': 'bottle',
                   '5': 'bus', '6': 'car', '7': 'cat', '8': 'chair', '9': 'cow',
                   '10': 'diningtable', '11': 'dog', '12': 'horse', '13': 'motorbike', '14': 'person',
                   '15': 'pottedplant', '16': 'sheep', '17': 'sofa', '18': 'train', '19': 'tvmonitor'},
     'creation-date': '2020-05-01T18:24:53.742Z',
     'human-annotated': 'yes',
     'job-name': 'a2i/fc3cea7e-ead8-4c5c-b52d-166ff6147ff0',
     'objects': [{'confidence': 1}],
     'type': 'groundtruth/object-detection'}}

The preceding code covers only one image. To create a cohort of training images from all the images re-labeled by human reviewers on the Amazon A2I console, you can loop through all the Amazon A2I output, convert each JSON file, and concatenate them into a JSON Lines file, with each line representing the results of one image. See the following code:

with open('augmented.manifest', 'w') as outfile:
    # completed_human_loops contains a list of responses from a2i.describe_human_loop() calls
    for resp in completed_human_loops:
        splitted_string = re.split('s3://' + BUCKET + '/', resp['HumanLoopOutput']['OutputS3Uri'])
        output_bucket_key = splitted_string[1]
        response = s3.get_object(Bucket=BUCKET, Key=output_bucket_key)
        content = response["Body"].read()
        json_output = json.loads(content)
        augmented_manifest = convert_a2i_to_augmented_manifest(json_output)
        json.dump(augmented_manifest, outfile)
        outfile.write('\n')
!head -n2 augmented.manifest
{"source-ref": "s3://sagemaker-us-east-1-xxxxxxx/a2i-results/sample-a2i-images/pexels-photo-276517.jpeg", "a2i-retraining": {"annotations": [{"class_id": 1, "width": 2869, "top": 627, "height": 1801, "left": 1042}], "image_size": [{"width": 3911, "depth": 3, "height": 2608}]}, "a2i-retraining-metadata": {"job-name": "a2i/fc3cea7e-ead8-4c5c-b52d-166ff6147ff0", "class-map": {"0": "aeroplane", "1": "bicycle", "2": "bird", "3": "boat", "4": "bottle", "5": "bus", "6": "car", "7": "cat", "8": "chair", "9": "cow", "10": "diningtable", "11": "dog", "12": "horse", "13": "motorbike", "14": "person", "15": "pottedplant", "16": "sheep", "17": "sofa", "18": "train", "19": "tvmonitor"}, "human-annotated": "yes", "objects": [{"confidence": 1}], "creation-date": "2020-05-21T18:36:33.834Z", "type": "groundtruth/object-detection"}}
{"source-ref": "s3://sagemaker-us-east-1-xxxxxxx/a2i-results/sample-a2i-images/pexels-photo-1571457.jpeg", "a2i-retraining": {"annotations": [{"class_id": 17, "width": 1754, "top": 1285, "height": 1051, "left": 657}], "image_size": [{"width": 3500, "depth": 3, "height": 2336}]}, "a2i-retraining-metadata": {"job-name": "a2i/8241d6e4-8078-4036-b065-ccdd5ebf955f", "class-map": {"0": "aeroplane", "1": "bicycle", "2": "bird", "3": "boat", "4": "bottle", "5": "bus", "6": "car", "7": "cat", "8": "chair", "9": "cow", "10": "diningtable", "11": "dog", "12": "horse", "13": "motorbike", "14": "person", "15": "pottedplant", "16": "sheep", "17": "sofa", "18": "train", "19": "tvmonitor"}, "human-annotated": "yes", "objects": [{"confidence": 1}], "creation-date": "2020-05-21T18:36:22.268Z", "type": "groundtruth/object-detection"}}

After you collect enough data points, you can construct a new Estimator for incremental training. For more information, see Easily train models using datasets labeled by Amazon SageMaker Ground Truth. In this post, we use exactly the same hyperparameters that were used to train the first model in the object detection training notebook, except that we initialize from the weights of the trained model instead of the pretrained weights that come with the algorithm (use_pretrained_model=0).

The following code example demonstrates incremental training with one or two new samples. Because we only reviewed two images in this post, this doesn’t yield a model with meaningful improvement.

s3_train_data = 's3://bucket/path/to/training/augmented.manifest'
s3_validation_data = 's3://bucket/path/to/validation/augmented.manifest'
s3_output_location = 's3://bucket/path/to/incremental-training/'
num_training_samples = 1234

from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(region, 'object-detection', repo_version='latest')

# Create an Estimator set to "Pipe" input mode because we are inputting augmented manifest files.
new_od_model = sagemaker.estimator.Estimator(training_image,
                                             role,
                                             train_instance_count=1,
                                             train_instance_type='ml.p3.2xlarge',
                                             train_volume_size=50,
                                             train_max_run=360000,
                                             input_mode='Pipe',
                                             output_path=s3_output_location,
                                             sagemaker_session=sess)

# the hyperparameters are the same as those used to train the original model
new_od_model.set_hyperparameters(base_network='resnet-50',
                                 use_pretrained_model=0,
                                 num_classes=20,
                                 mini_batch_size=32,
                                 epochs=1,
                                 lr_scheduler_step='3,6',
                                 lr_scheduler_factor=0.1,
                                 optimizer='sgd',
                                 momentum=0.9,
                                 weight_decay=0.0005,
                                 overlap_threshold=0.5,
                                 nms_threshold=0.45,
                                 image_shape=300,
                                 label_width=350,
                                 num_training_samples=num_training_samples)

train_data = sagemaker.session.s3_input(s3_train_data,
                                        distribution='FullyReplicated',
                                        content_type='application/x-recordio',
                                        record_wrapping='RecordIO',
                                        s3_data_type='AugmentedManifestFile',
                                        attribute_names=['source-ref', 'a2i-retraining'])
validation_data = sagemaker.session.s3_input(s3_validation_data,
                                             distribution='FullyReplicated',
                                             content_type='application/x-recordio',
                                             record_wrapping='RecordIO',
                                             s3_data_type='AugmentedManifestFile',
                                             attribute_names=['source-ref', 'a2i-retraining'])

# Use the output model from the previous training job to initialize the weights.
model_data = sagemaker.session.s3_input(model_data_s3_uri,
                                        distribution='FullyReplicated',
                                        content_type='application/x-sagemaker-model',
                                        s3_data_type='S3Prefix',
                                        input_mode='File')

data_channels = {'train': train_data, 'validation': validation_data, 'model': model_data}

new_od_model.fit(inputs=data_channels, logs=True)

After training, you get a new model in the s3_output_location. You can then deploy this model to a new inference endpoint or update an existing endpoint. There is no availability loss when you update an existing endpoint. To update an endpoint, you need to provide a new endpoint configuration. For more information, see UpdateEndpoint.
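As a hedged sketch of that update path (the model and endpoint configuration names here are hypothetical; new_od_model.model_data points to the artifacts produced by the incremental training job, and endpoint_name, training_image, and role come from the earlier steps):

import time
import boto3

sm = boto3.client('sagemaker')

timestamp = time.strftime('%Y-%m-%d-%H-%M-%S', time.gmtime())
new_model_name = 'DEMO-object-detection-retrained-' + timestamp
new_config_name = 'DEMO-object-detection-retrained-config-' + timestamp

# Register the retrained model artifacts
sm.create_model(
    ModelName=new_model_name,
    PrimaryContainer={
        'Image': training_image,                 # same built-in object detection image
        'ModelDataUrl': new_od_model.model_data  # artifacts from the incremental training job
    },
    ExecutionRoleArn=role
)

# Point a new endpoint configuration at the retrained model
sm.create_endpoint_config(
    EndpointConfigName=new_config_name,
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': new_model_name,
        'InitialInstanceCount': 1,
        'InstanceType': 'ml.m4.xlarge'
    }]
)

# Swap the running endpoint to the new configuration without taking it offline
sm.update_endpoint(EndpointName=endpoint_name, EndpointConfigName=new_config_name)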

Cleaning up

To avoid incurring future charges, delete resources such as the Amazon SageMaker endpoint, notebook instance, and the model artifacts in Amazon S3 when not in use.
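For example, a minimal cleanup sketch for the endpoint (this assumes the endpoint configuration created by model.deploy shares the endpoint name; stop or delete the notebook instance and remove S3 artifacts from the console separately):

import boto3

sagemaker_client = boto3.client('sagemaker')

# Delete the real-time endpoint and its configuration to stop hosting charges
sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
sagemaker_client.delete_endpoint_config(EndpointConfigName=endpoint_name)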

Conclusion

This post has merely scratched the surface of what Amazon A2I can do in a typical ML lifecycle. We demonstrated how to set up everything you need for a working human-in-the-loop framework: an Amazon A2I worker task template interface, a human review workflow, and a work team. We also showed how to trigger an Amazon A2I human loop programmatically after an Amazon SageMaker hosted object detection endpoint returns a low-confidence inference. Lastly, we walked through how to work with the Amazon A2I output JSON file to create a new batch of training data in augmented manifest format for incremental training using the Amazon SageMaker built-in object detection algorithm.

For video presentations, sample Jupyter notebooks, or more information about use cases like document processing, content moderation, sentiment analysis, text translation, and others, see Amazon Augmented AI Resources.

References

  • Everingham, Mark, et al. “The pascal visual object classes challenge: A retrospective.” International journal of computer vision 111.1 (2015): 98-136.
  • Liu, Wei, et al. “SSD: Single shot multibox detector.” European conference on computer vision. Springer, Cham, 2016.

About the authors

Michael Hsieh is a Senior AI/ML Specialist Solutions Architect. He works with customers to advance their ML journey with a combination of AWS ML offerings and his ML domain knowledge. As a Seattle transplant, he loves exploring the nature the city has to offer, such as the hiking trails, kayaking in South Lake Union, and sunsets at Shilshole Bay.

Anuj Gupta is Senior Product Manager for Amazon Augmented AI. He focuses on delivering products that make it easier for customers to adopt machine learning. In his spare time, he enjoys road trips and watching Formula 1.

Source: https://aws.amazon.com/blogs/machine-learning/object-detection-and-model-retraining-with-amazon-sagemaker-and-amazon-augmented-ai/
