Prepare data from Snowflake for machine learning with Amazon SageMaker Data Wrangler

Data preparation remains a major challenge in the machine learning (ML) space. Data scientists and engineers need to write queries and code to get data from source data stores, and then write more queries and code to transform this data into features to be used in model development and training. None of this pipeline development work focuses on building ML models; it focuses on building the data pipelines necessary to make the data available to the models. Amazon SageMaker Data Wrangler makes it easier for data scientists and engineers to prepare data in the early phase of developing ML applications by using a visual interface.

Data Wrangler simplifies the process of data preparation and feature engineering using a single visual interface. Data Wrangler comes with over 300 built-in data transformations to help normalize, transform, and combine features without writing any code. You can now use Snowflake as a data source in Data Wrangler to easily prepare data in Snowflake for ML.

In this post, we use a simulated dataset that represents loans from a financial services provider, which has been provided by Snowflake. This dataset contains lender data about loans granted to individuals. We use Data Wrangler to transform and prepare the data for later use in ML models, first building a data flow in Data Wrangler, then exporting it to Amazon SageMaker Pipelines. First, we walk through setting up Snowflake as the data source, then explore and transform the data using Data Wrangler.

Prerequisites

This post assumes you have the following:

Set up permissions for Data Wrangler

In this section, we cover the permissions required to set up Snowflake as a data source for Data Wrangler. This section requires you to perform steps in both the AWS Management Console and Snowflake. The user in each environment should have permission to create policies, roles, and secrets in AWS, and the ability to create storage integrations in Snowflake.

All permissions for AWS resources are managed via your IAM role attached to your Amazon SageMaker Studio instance. Snowflake-specific permissions are managed by the Snowflake admin; they can grant granular permissions and privileges to each Snowflake user. This includes databases, schemas, tables, warehouses, and storage integration objects. Make sure that the correct permissions are set up outside of Data Wrangler.

AWS access requirements

Snowflake requires the following permissions on your output S3 bucket and prefix to be able to access objects in the prefix:

  • s3:GetObject
  • s3:GetObjectVersion
  • s3:ListBucket

You can add a bucket policy to ensure that Snowflake only communicates with your bucket over HTTPS. For instructions, see What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only?
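If you prefer to apply that bucket policy programmatically, the following boto3 sketch applies the standard deny-non-HTTPS statement. The bucket name is a placeholder, and put_bucket_policy overwrites any existing bucket policy, so merge statements if you already have one.

import json

import boto3

s3 = boto3.client("s3")
bucket = "MY-SAGEMAKER-BUCKET"  # placeholder: your Data Wrangler output bucket

# Deny any request to this bucket that doesn't use HTTPS (aws:SecureTransport = false)
ssl_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSSLRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(ssl_only_policy))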

Create an IAM policy allowing Amazon S3 access

In this section, we cover creating the policy required for Snowflake to access data in an S3 bucket of your choosing. If you already have a policy and role that allows access to the S3 bucket you plan to use for the Data Wrangler output, you can skip this section and the next section, and start creating your storage integration in Snowflake.

  1. On the IAM console, choose Policies in the navigation pane.
  2. Choose Create policy.
  3. On the JSON tab, enter the following JSON snippet, substituting your bucket and prefix name for the placeholders:
# Example policy granting Snowflake read/write access to your Data Wrangler output location
# Remove the angle brackets around <bucket> and <prefix> and replace them with your own
# bucket and prefix names (for example, MY-SAGEMAKER-BUCKET/MY-PREFIX)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": ["arn:aws:s3:::<bucket>/<prefix>/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<bucket>"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["<prefix>/*"]
        }
      }
    }
  ]
}

  4. Choose Next: Tags.
  5. Choose Next: Review.
  6. For Name, enter a name for your policy (for example, snowflake_datawrangler_s3_access).
  7. Choose Create policy.

Create an IAM role

In this section, we create an IAM role and attach it to the policy we created.

  1. On the IAM console, choose Roles in the navigation pane.
  2. Choose Create role.
  3. Select Another AWS account as the trusted entity type.
  4. For Account ID, enter your own AWS account ID.

You modify the trusted relationship and grant access to Snowflake later.

  5. Select the Require external ID option.
  6. Enter a dummy ID, such as your own account ID.

Later, we modify the trust relationship and specify the external ID for your Snowflake stage. An external ID is required to grant access to your AWS resources (such as Amazon S3) to a third party (Snowflake).

  7. Choose Next.
  8. Locate the policy you created previously for the S3 bucket and choose this policy.
  9. Choose Next.
  10. Enter a name and description for the role, then choose Create role.

You now have an IAM policy created for an IAM role, and the policy is attached to the role.

  11. Record the role ARN value located on the role summary page.
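If you prefer to script the role creation and policy attachment instead of using the console, the following boto3 sketch mirrors these steps. The role name and policy name are hypothetical placeholders, and the trust policy intentionally uses your own account and a dummy external ID until you update it for Snowflake later.

import json

import boto3

iam = boto3.client("iam")
account_id = boto3.client("sts").get_caller_identity()["Account"]

# Temporary trust policy: trust your own account with a dummy external ID.
# You replace the principal and external ID with the Snowflake values later.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": account_id}},
        }
    ],
}

role = iam.create_role(
    RoleName="snowflake-datawrangler-s3-role",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Allows Snowflake to access Data Wrangler output in S3",
)

# Attach the S3 access policy created earlier (substitute your policy name)
iam.attach_role_policy(
    RoleName="snowflake-datawrangler-s3-role",
    PolicyArn=f"arn:aws:iam::{account_id}:policy/snowflake_datawrangler_s3_access",
)

print(role["Role"]["Arn"])  # record this ARN for the Snowflake storage integration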

In the next step, you create a Snowflake integration that references this role.

Create a storage integration in Snowflake

A storage integration in Snowflake stores a generated IAM entity for external cloud storage, with an optional set of allowed or blocked locations, in Amazon S3. An AWS administrator in your organization grants permissions on the storage location to the generated IAM entity. With this feature, users don’t need to supply credentials when creating stages or when loading or unloading data.

Create the storage integration with the following code:

CREATE STORAGE INTEGRATION IF NOT EXISTS SAGEMAKER_DATAWRANGLER_INTEGRATION
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = S3
  STORAGE_AWS_ROLE_ARN = '<iam_role_arn>'
  ENABLED = true
  STORAGE_ALLOWED_LOCATIONS = ('s3://<your_s3_bucket>/<optional_path>/');

Retrieve the IAM user for your Snowflake account

Run the following DESCRIBE INTEGRATION command to retrieve the ARN for the IAM user that was created automatically for your Snowflake account:

DESC INTEGRATION SAGEMAKER_DATAWRANGLER_INTEGRATION;

Record the following values from the output:

  • STORAGE_AWS_IAM_USER_ARN – The IAM user created for your Snowflake account
  • STORAGE_AWS_EXTERNAL_ID – The external ID needed to establish a trust relationship

Update the IAM role trust policy

Now we update the trust policy.

  1. On the IAM console, choose Roles in the navigation pane.
  2. Choose the role you created.
  3. On the Trust relationship tab, choose Edit trust relationship.
  4. Modify the policy document as shown in the following code with the DESC STORAGE INTEGRATION output values you recorded in the previous step:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "<snowflake_user_arn>" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId": "<snowflake_external_id>" } } } ]
}

  5. Choose Update trust policy.
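The same trust policy update can be scripted. Here is a minimal boto3 sketch, assuming the placeholder values come from the DESC INTEGRATION output and the role name matches the hypothetical role created earlier:

import json

import boto3

iam = boto3.client("iam")

snowflake_user_arn = "<snowflake_user_arn>"        # STORAGE_AWS_IAM_USER_ARN
snowflake_external_id = "<snowflake_external_id>"  # STORAGE_AWS_EXTERNAL_ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {"AWS": snowflake_user_arn},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": snowflake_external_id}},
        }
    ],
}

iam.update_assume_role_policy(
    RoleName="snowflake-datawrangler-s3-role",  # hypothetical: the role created earlier
    PolicyDocument=json.dumps(trust_policy),
)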

Create an external stage in Snowflake

We use an external stage within Snowflake for loading data from an S3 bucket in your own account into Snowflake. In this step, we create an external (Amazon S3) stage that references the storage integration you created. For more information, see Creating an S3 Stage.

This requires a Snowflake role that has the CREATE STAGE privilege on the schema as well as the USAGE privilege on the storage integration. You can grant these privileges to the role as shown in the code in the next step.

Create the stage using the CREATE STAGE command with placeholders for the external stage and the S3 bucket and prefix. The stage also references a named file format object called my_csv_format:

grant create stage on schema public to role <snowflake_role>;
grant usage on integration SAGEMAKER_DATAWRANGLER_INTEGRATION to role <snowflake_role>;

create stage <external_stage>
  storage_integration = SAGEMAKER_DATAWRANGLER_INTEGRATION
  url = '<s3_bucket>/<prefix>'
  file_format = my_csv_format;

Create a secret for Snowflake credentials (Optional)

Data Wrangler allows you to connect to Snowflake either with the ARN of an AWS Secrets Manager secret or with a Snowflake account name, user name, and password. If you intend to use the account name, user name, and password option, skip to the next section, which covers adding the data source; with that option, Data Wrangler creates a Secrets Manager secret on your behalf by default.

To create a Secrets Manager secret manually, complete the following steps:

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Select secret type, select Other types of secrets.
  3. Specify the details of your secret as key-value pairs.

The key names are case-sensitive and must be lowercase. If you enter any of them incorrectly, Data Wrangler raises an error.

If you prefer, you can use the plaintext option and enter the secret values as JSON:

{ "username": "<snowflake username>", "password": "<snowflake password>", "accountid": "<snowflake account id>"
}

  4. Choose Next.
  5. For Secret name, add the prefix AmazonSageMaker (for example, our secret is AmazonSageMaker-DataWranglerSnowflakeCreds).
  6. In the Tags section, add a tag with the key SageMaker and value true.

  7. Choose Next.
  8. The rest of the fields are optional; choose Next until you have the option to choose Store to store the secret.

After you store the secret, you’re returned to the Secrets Manager console.

  9. Choose the secret you just created, then retrieve the secret ARN.
  10. Store this in the text editor of your choice for use later when you create the Data Wrangler data source.
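If you prefer to create the secret programmatically, a minimal boto3 sketch that follows the same naming and tagging rules might look like this (credentials shown as placeholders):

import json

import boto3

secrets = boto3.client("secretsmanager")

# Key names must be lowercase: username, password, accountid
secret_value = {
    "username": "<snowflake username>",
    "password": "<snowflake password>",
    "accountid": "<snowflake account id>",
}

response = secrets.create_secret(
    Name="AmazonSageMaker-DataWranglerSnowflakeCreds",  # name must start with AmazonSageMaker
    SecretString=json.dumps(secret_value),
    Tags=[{"Key": "SageMaker", "Value": "true"}],  # tag Data Wrangler looks for
)

print(response["ARN"])  # use this ARN when you add the Snowflake data source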

Set up the data source in Data Wrangler

In this section, we cover setting up Snowflake as a data source in Data Wrangler. This post assumes that you have access to SageMaker, an instance of Studio, and a user for Studio. For more information about prerequisites, see Get Started with Data Wrangler.

Create a new data flow

To create your data flow, complete the following steps:

  1. On the SageMaker console, choose Amazon SageMaker Studio in the navigation pane.
  2. Choose Open Studio.
  3. In the Launcher, choose New data flow.

Alternatively, on the File drop-down, choose New, then choose Data Wrangler Flow.

Creating a new flow can take a few minutes. After the flow has been created, you see the Import data page.

Add Snowflake as a data source in Data Wrangler

Next, we add Snowflake as a data source.

  1. On the Add data source menu, choose Snowflake.

  2. Add your Snowflake connection details.

Data Wrangler uses HTTPS to connect to Snowflake.

  3. If you created a Secrets Manager secret manually, choose the Authentication method drop-down menu and choose ARN.

  4. Choose Connect.

You’re redirected to the import menu.

Run a query

Now that Snowflake is set up as a data source, you can access your data in Snowflake directly from the Data Wrangler query editor. The query we write in the editor is what Data Wrangler uses to import data from Snowflake to start our data flow.

  1. On the drop-down menus, choose the data warehouse, database, and schema you want to use for your query.

For this post, our dataset is in the database FIN_LOANS, the schema is DEV, and the table is LOAN_INT_HV. Our data warehouse is called MOONMAXW_DEV_WH; depending on your setup, these names will likely differ.

Alternatively, you can specify the full path to the dataset in the query editor. Make sure you still choose the database and schema on the drop-down menus.

  2. In the query editor, enter a query and preview the results.

For this post, we retrieve all columns from 1,000 rows.

  3. Choose Import.

  4. Enter a dataset name when prompted (for this post, we use snowflake_loan_int_hv).
  5. Choose Add.

You’re taken to the Prepare page, where you can add transformations and analyses to the data.
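Outside of Data Wrangler, you can sanity-check the same query with the Snowflake Python connector. This is only a sketch; the connection values are placeholders and the object names are the ones used in this post.

import snowflake.connector

# Placeholders; in practice, read credentials from the Secrets Manager secret
conn = snowflake.connector.connect(
    account="<snowflake account id>",
    user="<snowflake username>",
    password="<snowflake password>",
    warehouse="MOONMAXW_DEV_WH",
    database="FIN_LOANS",
    schema="DEV",
)

try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM FIN_LOANS.DEV.LOAN_INT_HV LIMIT 1000")
    df = cursor.fetch_pandas_all()  # requires snowflake-connector-python[pandas]
    print(df.shape)
finally:
    conn.close()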

Add transformations to the data

Data Wrangler has over 300 built-in transformations. In this section, we use some of these transformations to prepare the dataset for an ML model.

On the Data Wrangler flow page, make sure you have chosen the Prepare tab. If you’re following the steps in the post, you’re directed here automatically after adding your dataset.

Convert data types

The first step we want to perform is to check that the correct data type was inferred on ingest for each column.

  1. Next to Data types, choose the plus sign.
  2. Choose Edit data types.

Looking through the columns, we identify that MNTHS_SINCE_LAST_DELINQ and MNTHS_SINCE_LAST_RECORD should most likely be represented as a number type, rather than string.

  3. On the right-hand menu, scroll down until you find MNTHS_SINCE_LAST_DELINQ and MNTHS_SINCE_LAST_RECORD.
  4. On the drop-down menu, choose Float.

Looking through the dataset, we can confirm that the rest of the columns appear to have been correctly inferred.

  5. Choose Preview to preview the changes.
  6. Choose Apply to apply the changes.
  7. Choose Back to data flow to see the current state of the flow.

Manage columns

The dataset we’re using has several columns that likely aren’t beneficial to future models, so we start our transformation process by dropping the columns that aren’t useful.

  1. Next to Data types, choose the plus sign.
  2. Choose Add transformation.

The transformation console opens. Here you can preview your dataset, select from the available transformations, and preview the transformations.

Looking through the data, we can see that the fields EMP_TITLE, URL, DESCRIPTION, and TITLE will likely not provide value to our model in our use case, so we drop them.

  3. On the Transform menu, choose Manage columns.
  4. On the Transform drop-down menu, leave Drop column chosen.
  5. Enter EMP_TITLE for Column to drop.
  6. Choose Preview to review the changes.
  7. Choose Add to add the step.
  8. If you want to see the step you added and previous steps, choose Previous steps on the Transform drop-down menu.

  9. Repeat these steps for the remaining columns (URL, DESCRIPTION, and TITLE).
  10. Choose Back to data flow to see the current state of the flow.

In the data flow view, we can see that this node in the flow has four steps, which represent the four columns we’re dropping for this part of the flow.

Format string

Next, we look for string columns that can be formatted to be more useful later. Looking through our dataset, we can see that INT_RATE might be useful in a future model as a float, but it has a trailing % character. Before we can use another built-in transformation (parse as type) to convert it to a float, we must strip the trailing character.

  1. Next to Steps, choose the plus sign.
  2. Choose Add transform.
  3. Choose Format string.
  4. On the Transform drop-down, choose Remove Symbols.
  5. On the Input column drop-down, choose the INT_RATE column.
  6. For Symbols, enter %.
  7. Optionally, in the Output field, enter the name of a column that this data is written to.

For this post, we keep the original column and set the output column to INT_RATE_PERCENTAGE to denote to future users of this data that this column is the interest rate as a percentage. Later, we convert this to a float.

  8. Choose Preview.

When Data Wrangler adds a new column, it’s automatically added as the rightmost column.

  9. Review the change to ensure accuracy.
  10. Choose Add.

Parse column as type

Continuing with the preceding example, we’ve identified that INT_RATE_PERCENTAGE should be converted to a float type.

  1. Next to Steps, choose the plus sign.
  2. Choose Add transform.
  3. Choose Parse Column as Type.
  4. On the Column drop-down, choose INT_RATE_PERCENTAGE.

The From field is automatically populated.

  5. On the To drop-down, choose Float.
  6. Choose Preview.
  7. Choose Add.
  8. Choose Back to data flow.

As you can see, we now have six steps in this portion of the flow, four that represent columns being dropped, one that represents string formatting, and one that represents parse column as type.
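For intuition, the following pandas sketch performs roughly the same remove-symbols and parse-as-type steps outside Data Wrangler, on a small hypothetical sample of the INT_RATE column:

import pandas as pd

# Hypothetical sample of the imported data
df = pd.DataFrame({"INT_RATE": ["13.5%", "7.2%", "10.0%"]})

# Remove the trailing % symbol, keeping the original column intact
df["INT_RATE_PERCENTAGE"] = df["INT_RATE"].str.replace("%", "", regex=False)

# Parse the cleaned column as a float
df["INT_RATE_PERCENTAGE"] = df["INT_RATE_PERCENTAGE"].astype(float)

print(df.dtypes)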

Encode categorical data

Next, we want to look for categorical data in our dataset. Data Wrangler has a built-in functionality to encode categorical data using both ordinal and one-hot encodings. Looking at our dataset, we can see that the TERM, HOME_OWNERSHIP, and PURPOSE columns all appear to be categorical in nature.

  1. Next to Steps, choose the plus sign.
  2. Choose Add transform.

The first column in our list, TERM, has two possible values: 60 months and 36 months. Perhaps our future model would benefit from having these values one-hot encoded and placed into new columns.

  3. Choose Encode Categorical.
  4. On the Transform drop-down, choose One-hot encode.
  5. For Input column, choose TERM.
  6. On the Output style drop-down, choose Columns.
  7. Leave all other fields and check boxes as is.
  8. Choose Preview.

We can now see two new columns, TERM_36 months and TERM_60 months, which one-hot encode the corresponding value in the TERM column.

  9. Choose Add.

The HOME_OWNERSHIP column has four possible values: RENT, MORTGAGE, OWN, and other.

  10. Repeat the preceding steps to apply a one-hot encoding approach on these values.

Lastly, the PURPOSE column has several possible values. For this data, we use a one-hot encoding approach as well, but we set the output to a vector, rather than columns.

  11. On the Transform drop-down, choose One-hot encode.
  12. For Input column, choose PURPOSE.
  13. On the Output style drop-down, choose Vector.
  14. For Output column, enter PURPOSE_VCTR.

This keeps the original PURPOSE column, in case we decide to use it later.

  15. Leave all other fields and check boxes as is.
  16. Choose Preview.

  17. Choose Add.
  18. Choose Back to data flow.

We can now see nine different transformations in this flow, and we still haven’t written a single line of code.
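The pandas equivalent of these encodings might look like the following sketch; the sample values are illustrative only.

import pandas as pd

df = pd.DataFrame(
    {
        "TERM": ["36 months", "60 months", "36 months"],
        "HOME_OWNERSHIP": ["RENT", "MORTGAGE", "OWN"],
        "PURPOSE": ["car", "house", "medical"],
    }
)

# One-hot encode TERM and HOME_OWNERSHIP into new columns, keeping the originals
encoded = pd.get_dummies(df[["TERM", "HOME_OWNERSHIP"]], prefix=["TERM", "HOME_OWNERSHIP"])
df = pd.concat([df, encoded], axis=1)

# Encode PURPOSE as a single vector-style column rather than separate columns
purpose_dummies = pd.get_dummies(df["PURPOSE"], prefix="PURPOSE")
df["PURPOSE_VCTR"] = purpose_dummies.values.tolist()

print(df.head())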

Handle outliers

As our last step in this flow, we want to handle outliers in our dataset. As part of the data exploration process, we can create an analysis (which we cover in the next section). In the following example scatter plot, we explore whether we can gain insights from the relationship between annual income, interest rate, and employment length. The graph plots the loan recipients’ INT_RATE_PERCENTAGE on the X axis and ANNUAL_INC on the Y axis, with the points color-coded by EMP_LENGTH. The dataset has some outliers that might skew the results of our model later. To address this, we use Data Wrangler’s built-in transformation for handling outliers.

  1. Next to Steps, choose the plus sign.
  2. Choose Add transform.
  3. Choose Handle outliers.
  4. On the Transform drop-down, choose Standard deviation numeric outliers.
  5. For Input column, enter ANNUAL_INC.
  6. For Output column, enter ANNUAL_INC_NO_OUTLIERS.

This is optional, but it’s good practice to notate that a column has been transformed for later consumers.

  7. On the Fix method drop-down, leave Clip chosen.

This option automatically clips values to the corresponding outlier detection bound, which we set next.

  8. For Standard deviations, leave the default of 4 to start.

This allows values within four standard deviations of the mean to be considered valid (and therefore not clipped). Values outside of this bound are clipped.

  9. Choose Preview.
  10. Choose Add.

The output column has an object type. We need to convert it to a float for it to be valid within our dataset and visualization.

  11. Follow the same steps as when parsing a column as type, this time using the ANNUAL_INC_NO_OUTLIERS column.
  12. Choose Back to data flow to see the current state of the flow.
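Conceptually, the clip fix behaves like the following pandas sketch, which bounds a synthetic ANNUAL_INC column to four standard deviations around its mean:

import numpy as np
import pandas as pd

# Synthetic income data standing in for ANNUAL_INC
rng = np.random.default_rng(0)
df = pd.DataFrame({"ANNUAL_INC": rng.lognormal(mean=11, sigma=0.6, size=1_000)})

n_std = 4  # matches the Standard deviations setting above
mean, std = df["ANNUAL_INC"].mean(), df["ANNUAL_INC"].std()
lower, upper = mean - n_std * std, mean + n_std * std

# Values inside the bounds are kept; values outside are clipped to the nearest bound
df["ANNUAL_INC_NO_OUTLIERS"] = df["ANNUAL_INC"].clip(lower=lower, upper=upper).astype(float)

print(df[["ANNUAL_INC", "ANNUAL_INC_NO_OUTLIERS"]].describe())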

Add analyses to the data

In this section, we walk through adding analyses to the dataset. We focus on visualizations, but there are several other options, including detecting target leakage, generating a bias report, or adding your own custom visualizations using the Altair library.
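As an example of the custom option, a minimal Altair scatter plot over a small hypothetical DataFrame might look like the following; the built-in scatter plot analysis used in the next section doesn’t require any of this code.

import altair as alt
import pandas as pd

# Hypothetical sample with the transformed columns used below
df = pd.DataFrame(
    {
        "INT_RATE_PERCENTAGE": [13.5, 7.2, 10.0, 21.4],
        "ANNUAL_INC_NO_OUTLIERS": [85000.0, 120000.0, 64000.0, 43000.0],
        "EMP_LENGTH": ["10+ years", "3 years", "< 1 year", "5 years"],
    }
)

chart = (
    alt.Chart(df)
    .mark_circle(size=40)
    .encode(
        x="INT_RATE_PERCENTAGE:Q",
        y="ANNUAL_INC_NO_OUTLIERS:Q",
        color="EMP_LENGTH:N",
    )
)

chart  # in a notebook cell, this renders the scatter plot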

Scatter plot

To create a scatter plot, complete the following steps:

  1. On the data flow page, next to Steps, choose the plus sign.
  2. Choose Add analysis.
  3. For Analysis type, choose Scatter plot.
  4. Using the preceding example, we name this analysis EmpLengthAnnualIncIntRate.
  5. For X Axis, enter INT_RATE_PERCENTAGE.
  6. For Y Axis, enter ANNUAL_INC_NO_OUTLIERS.
  7. For Color by, enter EMP_LENGTH.
  8. Choose Preview.

The following screenshot shows our scatter plot.

We can compare this to the earlier version of the plot, before the outliers were clipped.

So far this is looking good, but let’s add a facet to break out each category in the GRADE column into its own graph.

  9. For Facet by, choose GRADE.
  10. Choose Preview.

The following screenshot has been trimmed down for display purposes. The Y axis still represents ANNUAL_INC. For faceted plots, this is displayed on the bottommost plot.

  11. Choose Save to save the analysis.

Export the data flow

Finally, we export this whole data flow as a pipeline, which creates a Jupyter notebook with the code pre-populated. With Data Wrangler, you can also export your data flow to a Jupyter notebook as a SageMaker processing job, export it to SageMaker Feature Store, or export it directly to Python code.

  1. On the Data Flow console, choose the Export tab.
  2. Choose the steps to export. For our use case, we choose each box that represents a step.

  3. Choose Export step, then choose Pipeline.

The pre-populated Jupyter notebook loads and opens automatically, displaying all the generated steps and code for your data flow. The following screenshot shows the input section that defines the data source.
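Running the exported pipeline typically comes down to a couple of SageMaker Python SDK calls at the end of that notebook. The following is a hedged sketch, assuming the generated code has already defined a Pipeline object named pipeline and that your Studio execution role has the required permissions:

import sagemaker

role = sagemaker.get_execution_role()  # or any IAM role ARN with SageMaker permissions

pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # start a pipeline execution

print(execution.describe()["PipelineExecutionStatus"])
execution.wait()  # optionally block until the execution finishes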

Clean up

If your work with Data Wrangler is complete, shut down your Data Wrangler instance to avoid incurring additional fees.

Conclusion

In this post, we covered setting up Snowflake as a data source for Data Wrangler, adding transformations and analyses to a dataset, then exporting to the data flow for further use in a Jupyter notebook. We further improved our data flow after visualizing our dataset using the Data Wrangler built-in analysis functionality. Most notably, we built a data preparation pipeline without having to write a single line of code.

To get started with Data Wrangler, see Prepare ML Data with Amazon SageMaker Data Wrangler, and see the latest information on the Data Wrangler product page.

Data Wrangler makes it easy to ingest data and perform data preparation tasks such as exploratory data analysis, feature selection, and feature engineering. We’ve only covered a few of Data Wrangler’s capabilities in this post; you can also use it for more advanced data analysis, such as feature importance, target leakage, and model explainability, through an easy and intuitive user interface.


About the Authors

Maxwell Moon is a Senior Solutions Architect at AWS working with Independent Software Vendors (ISVs) to design and scale their applications on AWS. Outside of work, Maxwell is a dad to two cats, is an avid supporter of the Wolverhampton Wanderers Football Club, and tries to spend as much time playing music as possible.

Bosco Albuquerque is a Sr. Partner Solutions Architect at AWS with over 20 years of experience working with database and analytics products from enterprise database vendors and cloud providers. He has helped large technology companies design data analytics solutions and has led engineering teams in designing and implementing data analytics platforms and data products.
