Automate weed detection in farm crops using Amazon Rekognition Custom Labels

Amazon Rekognition Custom Labels makes automated weed detection in crops easier. Instead of manually locating weeds, you can automate the process with Amazon Rekognition Custom Labels, which allows you to build machine learning (ML) models that can be trained with only a handful of images and yet are capable of accurately predicting which areas of a crop have weeds and need treatment. This saves farmers time, effort, and weed treatment costs.

Every farm has weeds. Weeds compete with crops and, if not controlled, can take up precious space, sunlight, water, and nutrients, reducing crop yield. Weeds grow much faster than crops and need immediate and effective control. Detecting weeds in crops is a lengthy, time-consuming process that is currently done manually. Although weed spray machines exist that can be programmed to go to an exact location in a field and spray weed treatment in just those spots, the process of locating where those weeds exist is not yet automated.

Weed location automation isn’t an easy process. This is where computer vision and AI come in. Amazon Rekognition is a fully managed computer vision service that allows developers to analyze images and videos for a variety of use cases, including face identification and verification, media intelligence, custom industrial automation, and workplace safety. Detecting custom objects and scenes can be hard. Training and improving the accuracy of a computer vision model requires a large amount of data and is a complex problem. Amazon Rekognition Custom Labels allows you to detect custom labeled objects and scenes with just a handful of training images.

In this post, we use Amazon Rekognition Custom Labels to build an ML model that detects weeds in crops. We’re presently helping researchers at a US university automate this process for local farmers.

Create and train a weed detection model

We solve this problem by feeding images of crops with and without weeds to Amazon Rekognition Custom Labels and building an ML model. After the model is built and deployed, we can perform inference by feeding it images from field cameras. This way, farmers can automate weed detection in their fields. Our experiments showed that highly accurate models can be built with as few as 32 images.

  1. On the Amazon Rekognition console, choose Use Custom Labels.
  2. Choose Projects.
  3. Choose Create project.
  4. For Project name, enter a name (for example, Weed-detection-in-crops).
  5. Choose Create project.
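If you prefer to script this step, you can also create the project with the AWS SDK for Python (Boto3). The following is a minimal sketch only; the Region is assumed to be us-east-2 to match the model ARN used later in this post.

import boto3

# Create the Custom Labels project (equivalent to the console steps above)
rekognition = boto3.client('rekognition', region_name='us-east-2')

response = rekognition.create_project(ProjectName='Weed-detection-in-crops')
project_arn = response['ProjectArn']
print('Created project:', project_arn)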

Next, we create a dataset.

  1. On the Amazon Rekognition Custom Labels console, choose Datasets.
  2. Choose Create dataset.
  3. Enter a name for your dataset, such as crop-weed-ds.
  4. Select your training data location (for this post, we select Upload images from your computer).
  5. Choose Add images to upload your images.

For this post, we use 32 field images, of which half are images of crops without weeds and half are weed-infected crops.

  6. After you upload your training images, choose Add labels to add labels to your training data.

For this post, we define two labels: good-crop and weed.

  7. Assign each uploaded image one of these two labels, depending on the image type.
  8. Save these changes.

We now have labeled images for both the classes we defined.

  9. Create another dataset for testing, called test-ds, which contains four labeled images.
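If your labeled images already live in Amazon S3 together with SageMaker Ground Truth-format manifest files, you can create both datasets programmatically instead of uploading images through the console. This is a sketch only; the bucket name, manifest keys, and project ARN below are placeholders.

import boto3

rekognition = boto3.client('rekognition', region_name='us-east-2')

# Placeholder ARN returned earlier by create_project
project_arn = 'arn:aws:rekognition:us-east-2:111122223333:project/Weed-detection-in-crops/1111111111111'

# Create the training and test datasets from Ground Truth manifests in S3.
# The bucket and manifest keys are hypothetical.
for dataset_type, manifest_key in [('TRAIN', 'train/output.manifest'),
                                   ('TEST', 'test/output.manifest')]:
    response = rekognition.create_dataset(
        ProjectArn=project_arn,
        DatasetType=dataset_type,
        DatasetSource={
            'GroundTruthManifest': {
                'S3Object': {'Bucket': 'crop-weed-bucket', 'Name': manifest_key}
            }
        },
    )
    print(dataset_type, 'dataset:', response['DatasetArn'])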

We’re now ready to train a new model.

  1. Select the project you created and choose Train new model.
  2. Choose the training dataset and test dataset that you created earlier.
  3. Choose Train.
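Once the training and test datasets are associated with the project, the same training run can also be started from the SDK. This is a minimal sketch; the version name, output bucket, and project ARN are assumptions.

import boto3

rekognition = boto3.client('rekognition', region_name='us-east-2')

# Placeholder project ARN from create_project
project_arn = 'arn:aws:rekognition:us-east-2:111122223333:project/Weed-detection-in-crops/1111111111111'
version_name = 'Weed-detection-in-crops.v1'

# Start training a new model version; Custom Labels trains on the project's
# associated TRAIN and TEST datasets. The output bucket is a placeholder.
response = rekognition.create_project_version(
    ProjectArn=project_arn,
    VersionName=version_name,
    OutputConfig={'S3Bucket': 'crop-weed-bucket', 'S3KeyPrefix': 'training-output/'},
)
model_arn = response['ProjectVersionArn']

# Block until training completes (this can take an hour or more)
waiter = rekognition.get_waiter('project_version_training_completed')
waiter.wait(ProjectArn=project_arn, VersionNames=[version_name])
print('Trained model:', model_arn)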

After the model is trained, we can see how it performed. Our model was near perfect, with an F1 score of 1.0. Precision and recall were 1.0 as well.

We can choose View test results to see how this model performed on our test data. The following screenshot shows that good crops were predicted accurately as good crops and weed-infected crops were detected as containing weeds.
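You can also read these evaluation metrics programmatically with DescribeProjectVersions. The sketch below reuses the placeholder project ARN and version name from the earlier training sketch; the per-label precision and recall are stored in a summary file in S3.

import boto3

rekognition = boto3.client('rekognition', region_name='us-east-2')

project_arn = 'arn:aws:rekognition:us-east-2:111122223333:project/Weed-detection-in-crops/1111111111111'

# Fetch the evaluation result for the trained version (names are placeholders)
response = rekognition.describe_project_versions(
    ProjectArn=project_arn,
    VersionNames=['Weed-detection-in-crops.v1'],
)
for version in response['ProjectVersionDescriptions']:
    print('Status:', version['Status'])
    evaluation = version.get('EvaluationResult', {})
    print('F1 score:', evaluation.get('F1Score'))
    # Per-label precision and recall live in the evaluation summary file in S3
    summary = evaluation.get('Summary', {}).get('S3Object', {})
    print('Summary file: s3://{}/{}'.format(summary.get('Bucket'), summary.get('Name')))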

Test the model via your browser

We offer an AWS CloudFormation template in the GitHub repo that allows you to test the model through a browser. Choose the appropriate template depending on your Region. The template launches the required resources for you to test the model.

The template asks for your email address when you launch it. When the stack is ready, it emails you the required credentials. The Outputs tab for the CloudFormation stack has a website URL for testing the model.

  1. On the browser front end, choose Start the model.
  2. Enter 1 for inference units.
  3. Choose Start the model.
  4. When the model is running, you can upload any image to it and get classification results.
  5. Stop the model once your testing is completed.
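The start and stop operations don't require the browser front end; they map to the StartProjectVersion and StopProjectVersion APIs. The following sketch uses the placeholder ARNs from the earlier examples.

import boto3

rekognition = boto3.client('rekognition', region_name='us-east-2')

# Placeholder project and model (project version) ARNs
project_arn = 'arn:aws:rekognition:us-east-2:111122223333:project/Weed-detection-in-crops/1111111111111'
model_arn = ('arn:aws:rekognition:us-east-2:111122223333:project/'
             'Weed-detection-in-crops/version/Weed-detection-in-crops.v1/2222222222222')

# Start the model with one inference unit and wait until it is running
rekognition.start_project_version(ProjectVersionArn=model_arn, MinInferenceUnits=1)
waiter = rekognition.get_waiter('project_version_running')
waiter.wait(ProjectArn=project_arn, VersionNames=['Weed-detection-in-crops.v1'])

# ... run inference while the model is running ...

# Stop the model when testing is complete; you are billed while it runs
rekognition.stop_project_version(ProjectVersionArn=model_arn)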

Perform inference using the SDK

Inference is also possible using the SDK. The following code runs inference on the same image as in the previous section:

import boto3


def show_custom_labels(model, bucket, image, min_confidence):
    client = boto3.client('rekognition')

    # Call DetectCustomLabels
    response = client.detect_custom_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': image}},
        MinConfidence=min_confidence,
        ProjectVersionArn=model)

    # Print the detected labels and their confidence scores
    for customLabel in response['CustomLabels']:
        print('Label ' + str(customLabel['Name']))
        print('Confidence ' + str(customLabel['Confidence']) + "\n")

    return len(response['CustomLabels'])


def main():
    bucket = 'crop-weed-bucket'
    image = "Weed-1.jpg"
    model = 'arn:aws:rekognition:us-east-2:xxxxxxxxxxxx:project/Weed-detection-in-crops/version/Weed-detection-in-crops.2021-03-30T10.02.49/yyyyyyyyyy'
    min_confidence = 1

    label_count = show_custom_labels(model, bucket, image, min_confidence)
    print("Custom labels detected: " + str(label_count))


if __name__ == "__main__":
    main()

The results from the SDK are the same as those from the browser:

Label weed
Confidence 92.1469955444336

Label good-crop
Confidence 7.852999687194824

Custom labels detected: 2

Best practices

Consider the following best practices when using Amazon Rekognition Custom Labels:

  • Use images that have high resolution
  • Crop out any background noise in the image
  • Have a good contrast between the object you’re trying to detect and other objects in the image
  • Delete any resources that you created once your project is complete (a cleanup sketch follows this list)
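The last item can be scripted as well. The following cleanup sketch reuses the placeholder ARNs from the earlier examples; note that stops and deletions are asynchronous, so in practice you may need to wait for the model version to reach a STOPPED state and for versions and datasets to finish deleting before removing the project.

import boto3
from botocore.exceptions import ClientError

rekognition = boto3.client('rekognition', region_name='us-east-2')

# Placeholder ARNs from the earlier examples
project_arn = 'arn:aws:rekognition:us-east-2:111122223333:project/Weed-detection-in-crops/1111111111111'
model_arn = ('arn:aws:rekognition:us-east-2:111122223333:project/'
             'Weed-detection-in-crops/version/Weed-detection-in-crops.v1/2222222222222')

# Stop the model if it is still running, then delete the model version
try:
    rekognition.stop_project_version(ProjectVersionArn=model_arn)
except ClientError:
    pass  # the model was already stopped
rekognition.delete_project_version(ProjectVersionArn=model_arn)

# Delete the project's datasets
projects = rekognition.describe_projects(ProjectNames=['Weed-detection-in-crops'])
for project in projects['ProjectDescriptions']:
    for dataset in project.get('Datasets', []):
        rekognition.delete_dataset(DatasetArn=dataset['DatasetArn'])

# Delete the project once its versions and datasets have been removed
rekognition.delete_project(ProjectArn=project_arn)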

Conclusion

In this post, we showed how you can automate weed detection in crops by building custom ML models with Amazon Rekognition Custom Labels. Amazon Rekognition Custom Labels takes care of the deep learning complexities behind the scenes, allowing you to build powerful image classification models with just a handful of training images. You can improve model accuracy by increasing the number of images in your training data and the resolution of those images. Farmers can deploy models like these in their weed spray machines to reduce cost and manual effort. To learn more, including other use cases and video tutorials, visit the Amazon Rekognition Custom Labels webpage.


About the Author

Raju Penmatcha is a Senior AI/ML Specialist Solutions Architect at AWS. He works with education, government, and nonprofit customers on machine learning and artificial intelligence related projects, helping them build solutions using AWS. When not helping customers, he likes traveling to new places.

Source: https://aws.amazon.com/blogs/machine-learning/automate-weed-detection-in-farm-crops-using-amazon-rekognition-custom-labels/
