Accelerate analytics on Amazon OpenSearch Service with AWS Glue through its native connector | Amazon Web Services

As the volume and complexity of analytics workloads continue to grow, customers are looking for more efficient and cost-effective ways to ingest and analyze data. Data flows from online systems such as databases, CRMs, and marketing systems into data stores such as data lakes on Amazon Simple Storage Service (Amazon S3), data warehouses in Amazon Redshift, and purpose-built stores such as Amazon OpenSearch Service, Amazon Neptune, and Amazon Timestream.

OpenSearch Service is used for multiple purposes, such as observability, search, analytics, consolidation, cost savings, compliance, and integration. OpenSearch Service also has vector database capabilities that let you implement semantic search and Retrieval Augmented Generation (RAG) with large language models (LLMs) to build recommendation and media search engines. Previously, to integrate with OpenSearch Service, you could use open source clients for specific programming languages such as Java, Python, or JavaScript, or use the REST APIs provided by OpenSearch Service.

Movement of data across data lakes, data warehouses, and purpose-built stores is achieved by extract, transform, and load (ETL) processes using data integration services such as AWS Glue. AWS Glue is a serverless data integration service that makes it straightforward to discover, prepare, and combine data for analytics, machine learning (ML), and application development. AWS Glue provides both visual and code-based interfaces to make data integration effortless. Using a native AWS Glue connector increases agility, simplifies data movement, and improves data quality.

In this post, we explore the AWS Glue native connector to OpenSearch Service and discover how it eliminates the need to build and maintain custom code or third-party tools to integrate with OpenSearch Service. This accelerates analytics pipelines and search use cases, providing instant access to your data in OpenSearch Service. You can now use data stored in OpenSearch Service indexes as a source or target within the AWS Glue Studio no-code, drag-and-drop visual interface or directly in an AWS Glue ETL job script. When combined with AWS Glue ETL capabilities, this new connector simplifies the creation of ETL pipelines, enabling ETL developers to save time building and maintaining data pipelines.

Solution overview

The new native OpenSearch Service connector is a powerful tool that can help organizations unlock the full potential of their data. It enables you to efficiently read data from and write data to OpenSearch Service without needing to install or manage OpenSearch Service connector libraries.

In this post, we demonstrate exporting the New York City Taxi and Limousine Commission (TLC) Trip Record Data dataset into OpenSearch Service using the AWS Glue native connector. The following diagram illustrates the solution architecture.

By the end of this post, your visual ETL job will resemble the following screenshot.

Prerequisites

To follow along with this post, you need a running OpenSearch Service domain. For setup instructions, refer to Getting started with Amazon OpenSearch Service. For simplicity, make the domain publicly accessible, and note the primary user name and password for later use.

Note that as of this writing, the AWS Glue OpenSearch Service connector doesn’t support Amazon OpenSearch Serverless, so you need to set up a provisioned domain.

Create an S3 bucket

We use an AWS CloudFormation template to create an S3 bucket to store the sample data. Complete the following steps:

  1. Choose Launch Stack.
  2. On the Specify stack details page, enter a name for the stack.
  3. Choose Next.
  4. On the Configure stack options page, choose Next.
  5. On the Review page, select I acknowledge that AWS CloudFormation might create IAM resources.
  6. Choose Submit.

The stack takes about 2 minutes to deploy.

Create an index in the OpenSearch Service domain

To create an index in the OpenSearch Service domain, complete the following steps:

  1. On the OpenSearch Service console, choose Domains in the navigation pane.
  2. Open the domain you created as a prerequisite.
  3. Choose the link under OpenSearch Dashboards URL.
  4. On the navigation menu, choose Dev Tools.
  5. Enter the following code to create the index:
PUT /yellow-taxi-index
{
  "mappings": {
    "properties": {
      "VendorID": {
        "type": "integer"
      },
      "tpep_pickup_datetime": {
        "type": "date",
        "format": "epoch_millis"
      },
      "tpep_dropoff_datetime": {
        "type": "date",
        "format": "epoch_millis"
      },
      "passenger_count": {
        "type": "integer"
      },
      "trip_distance": {
        "type": "float"
      },
      "RatecodeID": {
        "type": "integer"
      },
      "store_and_fwd_flag": {
        "type": "keyword"
      },
      "PULocationID": {
        "type": "integer"
      },
      "DOLocationID": {
        "type": "integer"
      },
      "payment_type": {
        "type": "integer"
      },
      "fare_amount": {
        "type": "float"
      },
      "extra": {
        "type": "float"
      },
      "mta_tax": {
        "type": "float"
      },
      "tip_amount": {
        "type": "float"
      },
      "tolls_amount": {
        "type": "float"
      },
      "improvement_surcharge": {
        "type": "float"
      },
      "total_amount": {
        "type": "float"
      },
      "congestion_surcharge": {
        "type": "float"
      },
      "airport_fee": {
        "type": "integer"
      }
    }
  }
}
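Note that the two date fields are mapped with the epoch_millis format, so any document written to this index must supply those timestamps as milliseconds since the Unix epoch. The following is a minimal Python sketch of that conversion; the helper function is our own illustration, while the field names come from the mapping above.

```python
from datetime import datetime, timezone

def to_epoch_millis(dt: datetime) -> int:
    """Convert a datetime to milliseconds since the Unix epoch,
    matching the epoch_millis format declared in the index mapping."""
    if dt.tzinfo is None:
        # Treat naive datetimes as UTC for a deterministic result
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

# Example: a pickup time shaped like a TLC trip record
pickup = datetime(2023, 1, 1, 0, 32, 10, tzinfo=timezone.utc)
doc = {
    "tpep_pickup_datetime": to_epoch_millis(pickup),
    "passenger_count": 1,
    "trip_distance": 2.5,
}
```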

Create a secret for OpenSearch Service credentials

In this post, we use basic authentication and store our authentication credentials securely using AWS Secrets Manager. Complete the following steps to create a Secrets Manager secret:

  1. On the Secrets Manager console, choose Secrets in the navigation pane.
  2. Choose Store a new secret.
  3. For Secret type, select Other type of secret.
  4. For Key/value pairs, enter two pairs: the key opensearch.net.http.auth.user with your domain's user name as the value, and the key opensearch.net.http.auth.pass with the password as the value.
  5. Choose Next.
  6. Complete the remaining steps to create your secret.
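If you prefer to script this step, the same secret can be created with the AWS SDK for Python (boto3). The secret name glue-opensearch-secret below is our own placeholder; the two keys must match the names used in step 4.

```python
import json

# The two keys below are the names the AWS Glue OpenSearch connection
# looks up in the secret; the values are your domain credentials.
secret_payload = {
    "opensearch.net.http.auth.user": "<primary-user-name>",
    "opensearch.net.http.auth.pass": "<password>",
}

def create_secret(name: str = "glue-opensearch-secret") -> str:
    """Create the Secrets Manager secret and return its ARN."""
    # boto3 is imported here so the rest of the module works without
    # the AWS SDK installed; requires valid AWS credentials to run.
    import boto3
    client = boto3.client("secretsmanager")
    response = client.create_secret(
        Name=name,
        SecretString=json.dumps(secret_payload),
    )
    return response["ARN"]
```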

Create an IAM role for the AWS Glue job

Complete the following steps to configure an AWS Identity and Access Management (IAM) role for the AWS Glue job:

  1. On the IAM console, create a new role.
  2. Attach the AWS managed policy AWSGlueServiceRole.
  3. Attach the following policy to the role. Replace each ARN with the corresponding ARN of the OpenSearch Service domain, Secrets Manager secret, and S3 bucket.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OpenSearchPolicy",
            "Effect": "Allow",
            "Action": [
                "es:ESHttpPost",
                "es:ESHttpPut"
            ],
            "Resource": [
                "arn:aws:es:<region>:<aws-account-id>:domain/<amazon-opensearch-domain-name>"
            ]
        },
        {
            "Sid": "GetDescribeSecret",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:<region>:<aws-account-id>:secret:<secret-name>"
        },
        {
            "Sid": "S3Policy",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:GetBucketAcl",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>",
                "arn:aws:s3:::<bucket-name>/*"
            ]
        }
    ]
}

Create an AWS Glue connection

Before you can use the OpenSearch Service connector, you need to create an AWS Glue connection for connecting to OpenSearch Service. Complete the following steps:

  1. On the AWS Glue console, choose Connections in the navigation pane.
  2. Choose Create connection.
  3. For Name, enter opensearch-connection.
  4. For Connection type, choose Amazon OpenSearch.
  5. For Domain endpoint, enter the domain endpoint of OpenSearch Service.
  6. For Port, enter HTTPS port 443.
  7. For Resource, enter yellow-taxi-index.

In this context, resource means the index of OpenSearch Service where the data is read from or written to.

  8. Select Wan only enabled.
  9. For AWS Secret, choose the secret you created earlier.
  10. Optionally, if you’re connecting to an OpenSearch Service domain in a VPC, specify a VPC, subnet, and security group to run AWS Glue jobs inside the VPC. For security groups, a self-referencing inbound rule is required. For more information, see Setting up networking for development for AWS Glue.
  11. Choose Create connection.

Create an ETL job using AWS Glue Studio

Complete the following steps to create your AWS Glue ETL job:

  1. On the AWS Glue console, choose Visual ETL in the navigation pane.
  2. Choose Create job and Visual ETL.
  3. On the AWS Glue Studio console, change the job name to opensearch-etl.
  4. Choose Amazon S3 for the data source and Amazon OpenSearch for the data target.

Between the source and target, you can optionally insert transform nodes. In this solution, we create a job that has only source and target nodes for simplicity.

  5. In the Data source properties section, specify the S3 bucket where the sample data is located, and choose Parquet as the data format.
  6. In the Data sink properties section, specify the connection you created in the previous section (opensearch-connection).
  7. Choose the Job details tab, and in the Basic properties section, specify the IAM role you created earlier.
  8. Choose Save to save your job, and choose Run to run the job.
  9. Navigate to the Runs tab to check the status of the job. When it finishes successfully, the run status shows Succeeded.
  10. After the job runs successfully, navigate to OpenSearch Dashboards and log in to the dashboard.
  11. Choose Dashboards Management on the navigation menu.
  12. Choose Index patterns, and choose Create index pattern.
  13. Enter yellow-taxi-index for Index pattern name.
  14. Choose tpep_pickup_datetime for the Time field.
  15. Choose Create index pattern. This index pattern will be used to visualize the index.
  16. Choose Discover on the navigation menu, and choose yellow-taxi-index.
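The visual job above corresponds to a generated ETL script. The following is a rough Python sketch of what AWS Glue Studio produces for a two-node job of this shape; the connection name opensearch-connection follows the walkthrough, the S3 path uses the same placeholder convention as the IAM policy, and the exact generated code may differ.

```python
# Options for the two nodes. "connectionName" refers to the AWS Glue
# connection created earlier, which carries the domain endpoint, the
# index (yellow-taxi-index), and the Secrets Manager secret.
SOURCE_OPTIONS = {"paths": ["s3://<bucket-name>/"], "recurse": True}
SINK_OPTIONS = {"connectionName": "opensearch-connection"}

def run_job():
    # awsglue and pyspark are only available inside the Glue job
    # runtime, so the imports are kept local to this function.
    import sys
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Source node: read the Parquet trip data from Amazon S3
    source = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        format="parquet",
        connection_options=SOURCE_OPTIONS,
    )

    # Target node: write to OpenSearch Service via the native connector
    glue_context.write_dynamic_frame.from_options(
        frame=source,
        connection_type="opensearch",
        connection_options=SINK_OPTIONS,
    )
    job.commit()
```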


You have now created an index in OpenSearch Service and loaded data into it from Amazon S3 in just a few steps using the AWS Glue OpenSearch Service native connector.
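To quickly verify the load, you can also run a count query in the same Dev Tools console you used to create the index; the number returned depends on how much of the TLC dataset you loaded:

```
GET /yellow-taxi-index/_count
```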

Clean up

To avoid incurring charges, clean up the resources in your AWS account by completing the following steps:

  1. On the AWS Glue console, choose ETL jobs in the navigation pane.
  2. From the list of jobs, select the job opensearch-etl, and on the Actions menu, choose Delete.
  3. On the AWS Glue console, choose Data connections in the navigation pane.
  4. Select opensearch-connection from the list of connections, and on the Actions menu, choose Delete.
  5. On the IAM console, choose Roles in the navigation pane.
  6. Select the role you created for the AWS Glue job and delete it.
  7. On the CloudFormation console, choose Stacks in the navigation pane.
  8. Select the stack you created for the S3 bucket and sample data and delete it.
  9. On the Secrets Manager console, choose Secrets in the navigation pane.
  10. Select the secret you created, and on the Actions menu, choose Delete.
  11. Reduce the waiting period to 7 days and schedule the deletion.

Conclusion

The integration of AWS Glue with OpenSearch Service adds the powerful ability to perform data transformation when integrating with OpenSearch Service for analytics use cases. This enables organizations to streamline data integration and analytics with OpenSearch Service. The serverless nature of AWS Glue means no infrastructure management, and you pay only for the resources consumed while your jobs are running. As organizations increasingly rely on data for decision-making, this native Spark connector provides an efficient, cost-effective, and agile solution to swiftly meet data analytics needs.


About the authors

Basheer Sheriff is a Senior Solutions Architect at AWS. He loves to help customers solve interesting problems leveraging new technology. He is based in Melbourne, Australia, and likes to play sports such as football and cricket.

Shunsuke Goto is a Prototyping Engineer working at AWS. He works closely with customers to build their prototypes and also helps customers build analytics systems.
