Build an event-based tracking solution using Amazon Lookout for Vision

Amazon Lookout for Vision is a machine learning (ML) service that spots defects and anomalies in visual representations using computer vision (CV). With Amazon Lookout for Vision, manufacturing companies can increase quality and reduce operational costs by quickly identifying differences in images of objects at scale.

Many enterprise customers want to identify missing components in products, damage to vehicles or structures, irregularities in production lines, minuscule defects in silicon wafers, and other similar problems. Amazon Lookout for Vision uses ML to see and understand images from any camera as a person would, but with an even higher degree of accuracy and at a much larger scale. Amazon Lookout for Vision eliminates the need for costly and inconsistent manual inspection, while improving quality control, defect and damage assessment, and compliance. In minutes, you can begin using Amazon Lookout for Vision to automate inspection of images and objects—with no ML expertise required.

In this post, we look at how we can automate detecting anomalies in silicon wafers and notifying operators in real time.

Solution overview

Keeping track of product quality in a manufacturing line is a challenging task. Some process steps take images of the product that humans then review to ensure good quality. Thanks to artificial intelligence, you can automate these anomaly detection tasks, but human intervention may still be necessary after anomalies are detected. A standard approach is to send emails when problematic products are detected. These emails might be overlooked, which could cause a loss in quality in a manufacturing plant.

In this post, we automate the process of detecting anomalies in silicon wafers and notifying operators in real time using automated phone calls. The following diagram illustrates our architecture. We deploy a static website using AWS Amplify, which serves as the entry point for our application. Whenever a new image is uploaded via the UI (1), an AWS Lambda function invokes the Amazon Lookout for Vision model (2) and predicts whether this wafer is anomalous or not. The function stores each uploaded image to Amazon Simple Storage Service (Amazon S3) (3). If the wafer is anomalous, the function sends the confidence of the prediction to Amazon Connect and calls an operator (4), who can take further action (5).
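To make the architecture concrete, the following is a minimal sketch of what such a Lambda function can look like using boto3. The environment variable names, the S3 key scheme, and the assumption that the UI sends the image as a base64-encoded data URL are illustrative; the function shipped in the GitHub repository implements the same steps (2)–(4).

import base64
import json
import os

import boto3

# Hypothetical environment variable names -- the actual function in the
# repository may use different ones.
PROJECT_NAME = os.environ["LOOKOUT_PROJECT_NAME"]
MODEL_VERSION = os.environ.get("MODEL_VERSION", "1")
BUCKET = os.environ["IMAGE_BUCKET"]

lookoutvision = boto3.client("lookoutvision")
s3 = boto3.client("s3")

def handler(event, context):
    # The UI sends the image as a base64-encoded data URL (see the
    # index.html snippet later in this post).
    body = json.loads(event["body"])
    image_bytes = base64.b64decode(body["image"].split(",", 1)[1])

    # (2) Ask the Amazon Lookout for Vision model for a prediction
    result = lookoutvision.detect_anomalies(
        ProjectName=PROJECT_NAME,
        ModelVersion=MODEL_VERSION,
        Body=image_bytes,
        ContentType="image/png",
    )["DetectAnomaliesResult"]

    # (3) Keep a copy of every uploaded image in Amazon S3
    s3.put_object(
        Bucket=BUCKET,
        Key=f"uploads/{context.aws_request_id}.png",
        Body=image_bytes,
    )

    # (4) If result["IsAnomalous"] is true, the function also places an
    # outbound call via Amazon Connect (see the contact flow section below).
    return {
        "statusCode": 200,
        "body": json.dumps({
            "IsAnomalous": result["IsAnomalous"],
            "Confidence": result["Confidence"],
        }),
    }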

Set up Amazon Connect and the associated contact flow

To configure Amazon Connect and the contact flow, you complete the following high-level steps:

  1. Create an Amazon Connect instance.
  2. Set up the contact flow.
  3. Claim your phone number.

Create an Amazon Connect instance

The first step is to create an Amazon Connect instance. For the rest of the setup, we use the default values, but don’t forget to create an administrator login.

Instance creation can take a few minutes, after which we can log in to the Amazon Connect instance using the admin account we created.

Set up the contact flow

In this post, we have a predefined contact flow that we can import. For more information about importing an existing contact flow, see Import/export contact flows.

  1. Choose the file contact-flow/wafer-anomaly-detection from the GitHub repo.
  2. Choose Import.

The imported contact flow looks similar to the following screenshot.

  3. On the flow details page, expand Show additional flow information.

Here you can find the ARN of the contact flow.

  4. Record the contact flow ID and Amazon Connect instance ID, which you need later.

Claim your phone number

Claiming a number is easy and takes just a few clicks. Make sure to choose the previously imported contact flow while claiming the number.

If no numbers are available in the country of your choice, raise a support ticket.

Contact flow overview

The following screenshot shows our contact flow.

The contact flow performs the following functions:

  • Enable logging
  • Set the output Amazon Polly voice (for this post, we use the Kendra voice)
  • Get customer input using DTMF (only keys 1 and 2 are valid)
  • Based on the user’s input, the flow does one of the following:
    • Prompt a goodbye message stating no action will be taken and exit
    • Prompt a goodbye message stating an action will be taken and exit
    • Fail and deliver a fallback block stating that the machine will shut down and exit

Optionally, you can enhance your system with an Amazon Lex bot.
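When the model flags a wafer, the Lambda function triggers this contact flow programmatically. The following is a minimal sketch, assuming the confidence is passed as a contact attribute named confidence (the attribute name is an assumption, and the placeholder IDs are the values you recorded earlier):

import boto3

connect = boto3.client("connect")

def notify_operator(confidence: float) -> None:
    # Place an outbound call through the imported contact flow. The flow
    # can then read the value via $.Attributes.confidence in a prompt block.
    connect.start_outbound_voice_contact(
        DestinationPhoneNumber="YOUR_MOBILE_PHONE_NUMBER",  # E.164 format
        SourcePhoneNumber="YOUR_CLAIMED_NUMBER",            # E.164 format
        ContactFlowId="YOUR_FLOW_ID",
        InstanceId="YOUR_INSTANCE_ID",
        Attributes={"confidence": str(round(confidence, 2))},
    )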

Deploy the solution

Now that you have set up Amazon Connect, deployed your contact flow, and noted the information you need for the rest of the deployment, we can deploy the remaining components. In the cloned GitHub repository, edit the build.sh script and run it from the command line:

#Global variables
ApplicationRegion="YOUR_REGION"
S3SourceBucket="YOUR_S3_BUCKET-sagemaker"
LookoutProjectName="YOUR_PROJECT_NAME"
FlowID="YOUR_FLOW_ID"
InstanceID="YOUR_INSTANCE_ID"
SourceNumber="YOUR_CLAIMED_NUMBER"
DestNumber="YOUR_MOBILE_PHONE_NUMBER"
CloudFormationStack="YOUR_CLOUD_FORMATION_STACK_NAME"

Provide the following information:

  • Your Region
  • The S3 bucket name you want to use (make sure the name includes the word sagemaker)
  • The name of the Amazon Lookout for Vision project you want to use
  • The ID of your contact flow
  • Your Amazon Connect instance ID
  • The number you’ve claimed in Amazon Connect in E.164 format (for example, +12065550100)
  • A name for the AWS CloudFormation stack you create by running this script

This script then performs the following actions:

  • Create an S3 bucket for you
  • Build the .zip files for your Lambda function
  • Upload the CloudFormation template and the Lambda function to your new S3 bucket
  • Create the CloudFormation stack

After the stack is deployed, you can find the following resources created on the AWS CloudFormation console.

You can see that an Amazon SageMaker notebook instance called amazon-lookout-vision-create-project is also created.
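If you prefer to verify the deployment programmatically rather than on the console, a quick sketch (the stack name is the one you chose in build.sh):

import boto3

cloudformation = boto3.client("cloudformation")

# List every resource the stack created, including the SageMaker
# notebook instance mentioned above.
response = cloudformation.describe_stack_resources(
    StackName="YOUR_CLOUD_FORMATION_STACK_NAME"
)
for resource in response["StackResources"]:
    print(resource["ResourceType"], resource["PhysicalResourceId"])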

Build, train, and deploy the Amazon Lookout for Vision model

In this section, we see how to build, train, and deploy the Amazon Lookout for Vision model using the open-source Python SDK. For more information about the Amazon Lookout for Vision Python SDK, see this blog post.

You can build the model via the AWS Management Console. For programmatic deployment, complete the following steps:

  1. On the SageMaker console, on the Notebook instances page, access the SageMaker notebook instance that was created earlier by choosing Open Jupyter.

In the instance, you can find the automatically cloned GitHub repository of the Amazon Lookout for Vision Python SDK.

  2. Navigate into the amazon-lookout-for-vision-python-sdk/example folder.

The folder contains an example notebook that walks you through building, training, and deploying a model. Before you get started, you need to upload the images used to train the model to your notebook instance.

  3. In the example folder, create two new folders named good and bad.
  4. Navigate into both folders and upload your images accordingly.

Example images are in the downloaded GitHub repository.

  5. After you upload the images, open the lookout_for_vision_example.ipynb notebook.

The notebook walks you through the process of creating your model. An important first step is to provide the following information:

# Training & Inference
input_bucket = "YOUR_S3_BUCKET_FOR_TRAINING"
project_name = "YOUR_PROJECT_NAME"
model_version = "1" # leave this as "1" if you start right at the beginning

# Inference
output_bucket = "YOUR_S3_BUCKET_FOR_INFERENCE" # can be same as input_bucket
input_prefix = "YOUR_KEY_TO_FILES_TO_PREDICT/" # used in batch_predict
output_prefix = "YOUR_KEY_TO_SAVE_FILES_AFTER_PREDICTION/" # used in batch_predict

You can ignore the inference section, but feel free to also play around with this part of the notebook. Because you’re just getting started, you can leave model_version set to “1”.

For input_bucket and project_name, use the S3 bucket and Amazon Lookout for Vision project name that are provided as part of the build.sh script. You can then run each cell in the notebook, which successfully deploys the model.
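The notebook relies on the open-source SDK, which wraps the underlying service APIs. If you prefer working with boto3 directly, the equivalent calls look roughly like the following sketch; the manifest location is a hypothetical example, and training runs asynchronously, so the sketch polls until the model reaches the TRAINED status before hosting it:

import time

import boto3

lfv = boto3.client("lookoutvision")

project_name = "YOUR_PROJECT_NAME"
input_bucket = "YOUR_S3_BUCKET_FOR_TRAINING"

# Create the project and attach a training dataset from a Ground Truth
# style manifest (the manifest key below is a hypothetical example).
lfv.create_project(ProjectName=project_name)
lfv.create_dataset(
    ProjectName=project_name,
    DatasetType="train",
    DatasetSource={
        "GroundTruthManifest": {
            "S3Object": {"Bucket": input_bucket, "Key": "manifests/train.manifest"}
        }
    },
)

# Train a new model version; this call returns immediately while
# training runs in the background.
model = lfv.create_model(
    ProjectName=project_name,
    OutputConfig={"S3Location": {"Bucket": input_bucket, "Prefix": "model/"}},
)
version = model["ModelMetadata"]["ModelVersion"]

# Wait for training to finish (this can take a while).
while lfv.describe_model(ProjectName=project_name, ModelVersion=version)[
    "ModelDescription"
]["Status"] == "TRAINING":
    time.sleep(60)

# Host the model so it can serve real-time predictions.
lfv.start_model(ProjectName=project_name, ModelVersion=version, MinInferenceUnits=1)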

You can view the training metrics using the SDK, but you can also find them on the console. To do so, open your project, navigate to the models, and choose the model you’ve trained. The metrics are available on the Performance metrics tab.
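With boto3, the same metrics are also available programmatically; a short sketch:

import boto3

lookoutvision = boto3.client("lookoutvision")

# Performance metrics are computed against the test dataset during training.
description = lookoutvision.describe_model(
    ProjectName="YOUR_PROJECT_NAME", ModelVersion="1"
)["ModelDescription"]
performance = description["Performance"]
print("F1 score:", performance["F1Score"])
print("Precision:", performance["Precision"])
print("Recall:", performance["Recall"])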

You’re now ready to deploy a static website that can call your model on demand.

Deploy the static website

Your first step is to add your Amazon API Gateway endpoint to the static website’s source code.

  1. On the API Gateway console, find the REST API called LookoutVisionAPI.
  2. Open the API and choose Stages.
  3. On the stage’s drop-down menu (for this post, dev), choose the POST method.
  4. Copy the value for Invoke URL.

We add the URL to the HTML source code.

  5. Open the file html/index.html.

At the end of the file, you can find a section that uses jQuery to trigger an AJAX request. One key is called url, which has an empty string as its value.

  6. Enter the URL you copied as your new url value and save the file.

The code should look similar to the following:

$.ajax({
    type: 'POST',
    url: 'https://<API_Gateway_ID>.execute-api.<AWS_REGION>.amazonaws.com/dev/amazon-lookout-vision-api',
    data: JSON.stringify({coordinates: coordinates, image: reader.result}),
    cache: false,
    contentType: false,
    processData: false,
    success: function(data) {
        var anomaly = data["IsAnomalous"];
        var confidence = data["Confidence"];
        text = "Anomaly:" + anomaly + "<br>" + "Confidence:" + confidence + "<br>";
        $("#json").html(text);
    },
    error: function(data) {
        console.log("error");
        console.log(data);
    }
});
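If you want to exercise the endpoint outside the browser, you can replicate the same request from Python. This is a sketch under the assumption that the payload mirrors the jQuery call above; the coordinates field is passed through from the UI, so an empty placeholder is used here.

import base64
import json

import requests

API_URL = ("https://<API_Gateway_ID>.execute-api.<AWS_REGION>"
           ".amazonaws.com/dev/amazon-lookout-vision-api")

# Mimic the browser's FileReader.readAsDataURL() output: a data URL
# wrapping the base64-encoded image bytes.
with open("wafer.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = requests.post(
    API_URL,
    data=json.dumps({"coordinates": "", "image": data_url}),
)
print(response.json())  # expected keys: IsAnomalous, Confidence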

  7. Convert the index.html file to a .zip file.
  8. On the AWS Amplify console, choose the app ObjectTracking.

The front-end environment page of your app opens automatically.

  9. Select Deploy without Git provider.

You can enhance this step by connecting AWS Amplify to a Git provider to automate your whole deployment.

  10. Choose Connect branch.

  11. For Environment name, enter a name (for this post, we enter dev).
  12. For Method, select Drag and drop.
  13. Choose Choose files to upload the index.html.zip file you created.
  14. Choose Save and deploy.

After the deployment is successful, you can use your web application by choosing the domain displayed in AWS Amplify.

Detect anomalies

Congratulations! You just built a solution to automate the detection of anomalies in silicon wafers and alert an operator to take appropriate action. The data we use for Amazon Lookout for Vision is a wafer map taken from Wikipedia. A few “bad” spots have been added to mimic real-world scenarios in semiconductor manufacturing.

After deploying the solution, you can run a test to see how it works. When you open the AWS Amplify domain, you see a website that lets you upload an image. For this post, we present the result of detecting a bad wafer with a so-called donut pattern. After you upload the image, it’s displayed on your website.

If the image is detected as an anomaly, Amazon Connect calls your phone number and you can interact with the service.

Conclusion

In this post, we used Amazon Lookout for Vision to automate the detection of anomalies in silicon wafers and alert an operator in real time using Amazon Connect so they can take action as needed.

This solution isn’t bound to just wafers. You can extend it to object tracking in transportation, products in manufacturing, and countless other use cases.


About the Authors

Tolla Cherwenka is an AWS Global Solutions Architect who is certified in data and analytics. She uses an art-of-the-possible approach to work backwards from business goals and develop transformative event-driven data architectures that enable data-driven decisions. She is also passionate about creating prescriptive solutions for refactoring mission-critical monolithic workloads to microservices, and for supply chain and connected factory solutions that leverage IoT, machine learning, big data, and analytics services.

Michael Wallner is a Global Data Scientist with AWS Professional Services and is passionate about enabling customers on their AI/ML journey in the cloud to become AWSome. Besides having a deep interest in Amazon Connect, he likes sports and enjoys cooking.

Krithivasan Balasubramaniyan is a Principal Consultant at Amazon Web Services. He enables global enterprise customers in their digital transformation journey and helps architect cloud native solutions.

Source: https://aws.amazon.com/blogs/machine-learning/build-an-event-based-tracking-solution-using-amazon-lookout-for-vision/
