Right-sizing resources and avoiding unnecessary costs in Amazon SageMaker

Amazon SageMaker is a fully managed service that allows you to build, train, deploy, and monitor machine learning (ML) models. Its modular design allows you to pick and choose the features that suit your use cases at different stages of the ML lifecycle. Amazon SageMaker offers capabilities that abstract the heavy lifting of infrastructure management and provides the agility and scalability you desire for large-scale ML activities with different features and a pay-as-you-use pricing model.

In this post, we outline the pricing model for Amazon SageMaker and offer some best practices on how you can optimize your cost of using Amazon SageMaker resources to effectively and efficiently build, train, and deploy your ML models. In addition, the post offers programmatic approaches for automatically stopping or detecting idle resources that are incurring costs, allowing you to avoid unnecessary charges.

Amazon SageMaker pricing

Machine Learning is an iterative process with different computational needs for prototyping the code and exploring the dataset, processing, training, and hosting the model for real-time and offline predictions. In a traditional paradigm, estimating the right amount of computational resources to support different workloads is difficult, and often leads to over-provisioning resources. The modular design of Amazon SageMaker offers flexibility to optimize the scalability, performance, and costs for your ML workloads depending on each stage of the ML lifecycle. For more information about how Amazon SageMaker works, see the following resources:

  1. What Is Amazon SageMaker?
  2. Amazon SageMaker Studio
  3. Get Started with Amazon SageMaker

The following diagram is a simplified illustration of the modular design for each stage of the ML lifecycle. The build, train (and tune), and deploy environments each use separate compute resources with different pricing.

For more information about the costs involved in your ML journey on Amazon SageMaker, see Lowering total cost of ownership for machine learning and increasing productivity with Amazon SageMaker.

With Amazon SageMaker, you pay only for what you use. Pricing is broken down by ML stage: building, processing, training, and model deployment (or hosting), as explained further in this section.

Build environment

Amazon SageMaker offers two environments for building your ML models: SageMaker Studio notebooks and on-demand notebook instances. Amazon SageMaker Studio is a fully integrated development environment (IDE) for ML that provides a collaborative, flexible, and managed Jupyter notebook experience. Access to Studio itself is free; you pay only for the AWS services that you use within it. For more information, see Amazon SageMaker Studio Tour.

Prices for compute instances are the same for both Studio and on-demand instances, as outlined in Amazon SageMaker Pricing. With Studio, your notebooks and associated artifacts such as data files and scripts are persisted on Amazon Elastic File System (Amazon EFS). For more information about storage charges, see Amazon EFS Pricing.

An Amazon SageMaker on-demand notebook instance is a fully managed compute instance running the Jupyter Notebook app. Amazon SageMaker manages creating the instance and related resources. Notebooks contain everything needed to run or recreate an ML workflow. You can use Jupyter notebooks in your notebook instance to prepare and process data, write code to train models, deploy models to Amazon SageMaker hosting, and test or validate your models.

Processing

Amazon SageMaker Processing lets you easily run your preprocessing, postprocessing, and model evaluation workloads on fully managed infrastructure. Amazon SageMaker manages the instances on your behalf, launching them for the job and terminating them when the job is done. For more information, see Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation.

Training and tuning

Depending on the size of your training dataset and how quickly you need the results, you can use resources ranging from a single general-purpose instance to a distributed cluster of GPU instances. Amazon SageMaker manages these resources on your behalf: it provisions, launches, and then stops and terminates the compute resources automatically for your training jobs. With Amazon SageMaker training and tuning, you pay only for the time the instances are consumed by training. For more information, see Train and tune a deep learning model at scale.

Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify on a cluster of instances you define. Similar to training, you only pay for the resources consumed during the tuning time.
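
As an illustration of how tuning resources are consumed, the following is a minimal sketch using the SageMaker Python SDK (v2) with the built-in XGBoost algorithm; the bucket paths, hyperparameter ranges, and instance settings are illustrative assumptions, not values from this post:

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Built-in XGBoost container (memory bound, so CPU instances are a better fit than GPUs)
xgb_image = sagemaker.image_uris.retrieve('xgboost', session.boto_region_name, version='1.0-1')

estimator = Estimator(
    image_uri=xgb_image,
    role=role,
    instance_count=1,
    instance_type='ml.m5.xlarge',
    output_path='s3://my-bucket/tuning-output/',
)
estimator.set_hyperparameters(objective='binary:logistic', num_round=200)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name='validation:auc',
    hyperparameter_ranges={
        'eta': ContinuousParameter(0.01, 0.3),
        'max_depth': IntegerParameter(3, 10),
    },
    max_jobs=20,          # total training jobs the tuner launches (and bills for)
    max_parallel_jobs=2,  # jobs running, and instances billed, at any one time
)

# Instances are provisioned per training job and released when each job finishes,
# so you pay only for the time consumed by the tuning jobs themselves.
tuner.fit({'train': 's3://my-bucket/train/', 'validation': 's3://my-bucket/validation/'})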

Deployment and hosting

You can perform model deployment for inference in two different ways:

  • ML hosting for real-time inference – After you train your model, you can deploy it to get predictions in real time using a persistent endpoint with Amazon SageMaker hosting services
  • Batch transform – You can use Amazon SageMaker batch transform to get predictions on an entire dataset offline

The Amazon SageMaker pricing model

The following table summarizes the pricing model for Amazon SageMaker.

| Stage | ML Compute Instance | Storage | Data Processing In/Out |
| --- | --- | --- | --- |
| Build (on-demand notebook instances) | Per instance-hour consumed while the notebook instance is running. | Cost per GB-month of provisioned storage. | No cost. |
| Build (Studio notebooks) | Per instance-hour consumed while the instance is running. | See Amazon Elastic File System (Amazon EFS) pricing. | No cost. |
| Processing | Per instance-hour consumed for each instance while the processing job is running. | Cost per GB-month of provisioned storage. | No cost. |
| Training and tuning | On-Demand Instances: per instance-hour consumed for each instance, from the time an instance becomes available for use until it is terminated or stopped; each partial instance-hour is billed per second. Spot Training: save up to 90% compared to On-Demand Instances by using managed spot training. | Cost per GB-month of provisioned storage. | No cost. |
| Batch transform | Per instance-hour consumed for each instance while the batch transform job is running. | No cost. | No cost. |
| Deployment (hosting) | Per instance-hour consumed for each instance while the endpoint is running. | Cost per GB-month of provisioned storage. | GB of data processed in and GB of data processed out of the endpoint instance. |

You can also get started with Amazon SageMaker with the free tier. For more information about pricing, see Amazon SageMaker Pricing.

Right-sizing compute resources for Amazon SageMaker notebooks, processing jobs, training, and deployment

With the pricing broken down based on time and resources you use in each stage of an ML lifecycle, you can optimize the cost of Amazon SageMaker and only pay for what you really need. In this section, we discuss general guidelines to help you choose the right resources for your Amazon SageMaker ML lifecycle.

Amazon SageMaker currently offers ML compute instances on the following instance families:

  • T – General-purpose burstable performance instances (when you don’t need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when you need them)
  • M – General-purpose instances
  • C – Compute-optimized instances (ideal for compute bound applications)
  • R – Memory-optimized instances (designed to deliver fast performance for workloads that process large datasets in memory)
  • P, G and Inf – Accelerated compute instances (using hardware accelerators, or co-processors)
  • EIA – Inference acceleration instances (used for Amazon Elastic Inference)

Instance type consideration for a computational workload running on an Amazon SageMaker ML compute instance is no different than for a workload running on an Amazon Elastic Compute Cloud (Amazon EC2) instance. For more information about instance specifications, such as the number of virtual CPUs and the amount of memory, see Amazon SageMaker Pricing.

Build environment

The Amazon SageMaker notebook instance environment is suitable for interactive data exploration, script writing, and prototyping of feature engineering and modeling. We recommend using notebooks with smaller compute instances for interactive building and leaving the heavy lifting to ephemeral training, tuning, and processing jobs with larger instances, as explained in the following sections. This way, you avoid keeping a large instance (or a GPU) constantly running alongside your notebook, and selecting the right instance keeps your build costs to a minimum.

For the building stage, the size of an Amazon SageMaker on-demand notebook instance depends on the amount of data you need to load in-memory for meaningful exploratory data analyses (EDA) and the amount of computation required. We recommend starting small with general-purpose instances (such as T or M families) and scale up as needed.

The burstable T family of instances is ideal for notebook activity because compute demand comes in bursts when you run a cell, and during those bursts you get full CPU power. For example, ml.t2.medium is sufficient for most basic data processing, feature engineering, and EDA on small datasets that fit within 4 GB of memory. You can select an instance with larger memory capacity, such as ml.m5.12xlarge (192 GB memory), if you need to load significantly more data into memory for feature engineering. If your feature engineering involves heavy computational work (such as image processing), you can use one of the compute-optimized C family instances, such as ml.c5.xlarge.

The benefit of Studio notebooks over on-demand notebook instances is that with Studio, the underlying compute resources are fully elastic and you can change the instance type on the fly. This lets you scale the compute up and down as your demand changes (for example, from ml.t3.medium to ml.g4dn.xlarge as your build compute demand increases) without interrupting your work or managing infrastructure. Moving from one instance to another is seamless, and you can continue working while the new instance launches. With on-demand notebook instances, you need to stop the instance, update the setting, and restart it with the new instance type.

To keep your build costs down, we recommend stopping your on-demand notebook instances or shutting down your Studio instances when you don’t need them. In addition, you can use AWS Identity and Access Management (IAM) condition keys as an effective way to restrict certain instance types, such as GPU instances, for specific users, thereby controlling costs. We go into more detail in the section Recommendations for avoiding unnecessary costs.

Processing environment

After you complete data exploration and prototyping with a subset of your data and are ready to apply the preprocessing and transformation to the entire dataset, you can launch an Amazon SageMaker Processing job with the processing script you authored during the EDA phase, without scaling up the relatively small notebook instance you have been using. Amazon SageMaker Processing dispatches everything needed to process the entire dataset, such as code, container, and data, to compute infrastructure separate from the Amazon SageMaker notebook instance, and takes care of resource provisioning, data and artifact transfer, and shutting down the infrastructure when the job finishes.

The benefit of using Amazon SageMaker Processing is that you only pay for the processing instances while the job is running, so you can take advantage of powerful instances without worrying too much about the cost. As a general recommendation, you can use an ml.m5.4xlarge for medium jobs (MBs to GBs of data), an ml.c5.18xlarge for workloads requiring heavy computational capacity, or an ml.r5.8xlarge when you want to load multiple GBs of data in memory for processing, and pay only for the duration of the processing job. Sometimes, using a larger instance gets the job done quicker, and you end up paying less in total for the job.

Alternatively, for distributed processing, you can use a cluster of smaller instances by increasing the instance count. For this purpose, you can shard input objects by Amazon Simple Storage Service (Amazon S3) key by setting s3_data_distribution_type='ShardedByS3Key' inside a ProcessingInput, so that each instance receives about the same number of input objects; this lets you use smaller instances in the cluster, leading to potential cost savings. Furthermore, you can run the processing job asynchronously with .run(…, wait=False): you submit the job and get your notebook cell back immediately for other activities, leading to more efficient use of your build compute instance time.
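
To make these options concrete, the following is a minimal sketch using the SageMaker Python SDK (v2); the script name, bucket paths, framework version, and instance settings are illustrative assumptions:

from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput

processor = SKLearnProcessor(
    framework_version='0.23-1',
    role=get_execution_role(),
    instance_type='ml.m5.xlarge',   # smaller instances...
    instance_count=4,               # ...spread across a small cluster
)

processor.run(
    code='preprocessing.py',        # the script you authored during EDA
    inputs=[ProcessingInput(
        source='s3://my-bucket/raw-data/',
        destination='/opt/ml/processing/input',
        s3_data_distribution_type='ShardedByS3Key',  # each instance gets a share of the input objects
    )],
    outputs=[ProcessingOutput(
        source='/opt/ml/processing/output',
        destination='s3://my-bucket/processed/',
    )],
    wait=False,  # submit asynchronously and get the notebook cell back immediately
)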

Training and tuning environment

The same compute paradigm and benefits that apply to Amazon SageMaker Processing apply to Amazon SageMaker training and tuning. When you use fully managed Amazon SageMaker training, it dispatches everything needed for a training job, such as code, container, and data, to compute infrastructure separate from the Amazon SageMaker notebook instance. Therefore, your training jobs aren't limited by the compute resources of the notebook instance. The Amazon SageMaker Python SDK also supports asynchronous training when you call .fit(…, wait=False): you get your notebook cell back immediately for other activities, such as calling .fit() again for another training job with a different ML compute instance (for profiling purposes) or a variation of the hyperparameter settings (for experimentation purposes). Because ML training is often a compute-intensive and time-consuming part of the ML lifecycle, and training jobs run asynchronously on remote compute infrastructure, you can safely shut down the notebook instance for cost-optimization purposes if starting a training job is the last task of your day. We discuss how to automatically shut down unused, idle on-demand notebook instances in the section Recommendations for avoiding unnecessary costs.
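
For example, a minimal sketch of launching an asynchronous training job with the SageMaker Python SDK (v2), assuming a script-mode scikit-learn job with illustrative paths and instance settings:

from sagemaker import get_execution_role
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point='train.py',          # your training script
    framework_version='0.23-1',
    role=get_execution_role(),
    instance_type='ml.m5.xlarge',
    instance_count=1,
)

# Returns immediately; training runs on remote, fully managed instances,
# so the notebook instance can be stopped without interrupting the job.
estimator.fit({'train': 's3://my-bucket/train/'}, wait=False)

# Later, you can re-attach to the job from any session by its name:
# attached_estimator = SKLearn.attach(training_job_name)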

Cost-optimization factors that you need to consider when selecting instances for training include the following:

  • Instance family – What type of instance is suitable for the training? You need to optimize for overall cost of training, and sometimes selecting a larger instance can lead to much faster training and thus less total cost; can the algorithm even utilize a GPU instance?
  • Instance size – What is the minimum compute and memory capacity your algorithm requires to run the training? Can you use distributed training?
  • Instance count – If you can use distributed training, what instance type (CPU or GPU) can you use in the cluster, and how many?

As for the choice of instance type, you could base your decision on what algorithms or frameworks you use for the workload. If you use the Amazon SageMaker built-in algorithms, which give you a head start without any sophisticated programming, see Instance types for built-in algorithms for detailed guidelines. For example, XGBoost currently only trains using CPUs. It is a memory-bound (as opposed to compute-bound) algorithm. So, a general-purpose compute instance (for example, M5) is a better choice than a compute-optimized instance (for example, C4).

Furthermore, we recommend having enough total memory in the selected instances to hold the training data. Although XGBoost supports the use of disk space to handle data that doesn't fit into main memory (the out-of-core feature available with the libsvm input mode), writing cache files onto disk slows the algorithm's processing time. For the object detection algorithm, Amazon SageMaker supports the following GPU instances for training:

  • ml.p2.xlarge
  • ml.p2.8xlarge
  • ml.p2.16xlarge
  • ml.p3.2xlarge
  • ml.p3.8xlarge
  • ml.p3.16xlarge

We recommend using GPU instances with more memory for training with large batch sizes. You can also run the algorithm on multi-GPU and multi-machine settings for distributed training.

If you’re bringing your own algorithms with script mode or with custom containers, you need to first clarify whether the framework or algorithm supports CPU, GPU, or both to decide the instance type to run the workload. For example, scikit-learn doesn’t support GPU, meaning that training with accelerated compute instances doesn’t result in any material gain in runtime but leads to overpaying for the instance. To determine the instance type and, if training in a distributed fashion, the number of instances for your workload, we highly recommend profiling your jobs to find the sweet spot between the number of instances and runtime, which translates directly to cost. For more information, see Amazon Web Services achieves fastest training times for BERT and Mask R-CNN. You should also find the balance between instance type, number of instances, and runtime. For more information, see Train ALBERT for natural language processing with TensorFlow on Amazon SageMaker.

When it comes to GPU-powered P and G families of instances, you need to consider the differences. For example, P3 GPU compute instances are designed to handle large distributed training jobs for fastest time to train, whereas G4 instances are suitable for cost-effective, small-scale training jobs.

Another factor to consider in training is that you can select from either On-Demand Instances or Spot Instances. On-demand ML instances for training let you pay for ML compute capacity based on the time the instance is consumed, at on-demand rates. However, for jobs that can be interrupted or don’t need to start and stop at specific times, you can choose managed Spot Instances (Managed Spot Training). Amazon SageMaker can reduce the cost of training models by up to 90% over On-Demand Instances, and manages the Spot interruptions on your behalf.
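
A minimal sketch of enabling Managed Spot Training with the SageMaker Python SDK (v2); the timeouts, checkpoint location, and other settings are illustrative assumptions:

from sagemaker import get_execution_role
from sagemaker.sklearn.estimator import SKLearn

spot_estimator = SKLearn(
    entry_point='train.py',
    framework_version='0.23-1',
    role=get_execution_role(),
    instance_type='ml.m5.xlarge',
    instance_count=1,
    use_spot_instances=True,        # request Managed Spot Training
    max_run=3600,                   # maximum training time, in seconds
    max_wait=7200,                  # maximum time to wait for Spot capacity (must be >= max_run)
    checkpoint_s3_uri='s3://my-bucket/checkpoints/',  # lets interrupted jobs resume
)

spot_estimator.fit({'train': 's3://my-bucket/train/'})
# The job log reports billable seconds versus total seconds, which shows the Spot savings.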

Deployment/hosting environment

In many cases, up to 90% of the infrastructure spend for developing and running an ML application is on inference, making the need for high-performance, cost-effective ML inference infrastructure critical. This is mainly because the build and training jobs aren’t frequent and you only pay for the duration of build and training, but an endpoint instance is running all the time (while the instance is in service). Therefore, selecting the right way to host and the right type of instance can have a large impact on the total cost of ML projects.

For model deployment, it’s important to work backwards from your use case. What is the frequency of the prediction? Do you expect live traffic to your application and real-time response to your clients? Do you have many models trained for different subsets of data for the same use case? Does the prediction traffic fluctuate? Is latency of inference a concern?

There are hosting options from Amazon SageMaker for each of these situations. If your inference data comes in batches, Amazon SageMaker batch transform is a cost-effective way to obtain predictions with fully managed infrastructure provisioning and tear-down. If you have trained multiple models for a single use case, a multi-model endpoint is a great way to save cost on hosting ML models that are trained on a per-user or per-segment basis. For more information, see Save on inference costs by using Amazon SageMaker multi-model endpoints.
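
For example, a minimal batch transform sketch with the SageMaker Python SDK (v2), assuming estimator is an estimator you have already trained and that the input is line-delimited CSV (paths and instance settings are illustrative):

transformer = estimator.transformer(
    instance_count=1,
    instance_type='ml.m5.xlarge',
    output_path='s3://my-bucket/batch-predictions/',
)

transformer.transform(
    data='s3://my-bucket/inference-data/',
    content_type='text/csv',
    split_type='Line',
)
transformer.wait()  # instances are provisioned for the job and torn down when it finishes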

After you decide how to host your models, load testing is the best practice for determining the appropriate instance type and fleet size, with or without auto scaling for your live endpoint, so that you avoid over-provisioning and paying for capacity you don’t need. Algorithms that train most efficiently on GPUs might not benefit from GPUs for efficient inference. It’s important to load test to determine the most cost-effective solution. The following flowchart summarizes the decision process.

Amazon SageMaker offers different instance families that you can use for inference, from general-purpose instances to compute-optimized and GPU-powered instances. Each family is optimized for a different application, and not all instance types are suitable for inference. For example, Amazon Inf1 instances offer high throughput and low latency and have the lowest cost per inference in the cloud. G4 instances have the lowest cost per inference among GPU instances, offering strong performance and low latency. P3 instances, by contrast, are optimized for training and designed to handle large distributed training jobs for the fastest time to train, so they are typically underutilized for inference.

Another way to lower inference cost is to use Elastic Inference for cost savings of up to 75% on inference jobs. Picking an instance type and size for inference may not be easy, given the many factors involved. For example, for larger models, the inference latency of CPUs may not meet the needs of online applications, while the cost of a full-fledged GPU may not be justified. In addition, resources like RAM and CPU may be more important to the overall performance of your application than raw inference speed. With Elastic Inference, you attach just the right amount of GPU-powered inference acceleration to any Amazon compute instance. This is also available for Amazon SageMaker notebook instances and endpoints, bringing acceleration to built-in algorithms and to deep learning environments. This lets you select the best price/performance ratio for your application. For example, an ml.c5.large instance configured with eia1.medium acceleration costs you about 75% less than an ml.p2.xlarge, but with only 10–15% slower performance. For more information, see Amazon Elastic Inference – GPU-Powered Deep Learning Inference Acceleration.
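
A minimal sketch of attaching Elastic Inference acceleration at deployment time with the SageMaker Python SDK (v2), assuming model is a SageMaker model you have already trained (the instance and accelerator sizes are illustrative):

predictor = model.deploy(
    initial_instance_count=1,
    instance_type='ml.c5.large',        # CPU host instance
    accelerator_type='ml.eia1.medium',  # attached GPU-powered inference acceleration
    endpoint_name='my-ei-endpoint',     # placeholder name
)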

In addition, you can use Auto Scaling for Amazon SageMaker to add and remove capacity or accelerated instances to your endpoints automatically, whenever needed. With this feature, instead of having to closely monitor inference volume and change the endpoint configuration in response, your endpoint automatically adjusts the number of instances up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values defined in the policy. For more information, see AWS Auto Scaling.
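
A minimal sketch of registering a target-tracking scaling policy for an endpoint variant with boto3; the endpoint name, variant name, capacity limits, and target value are illustrative assumptions:

import boto3

autoscaling = boto3.client('application-autoscaling')

# Scale the 'AllTraffic' variant of a hypothetical endpoint between 1 and 4 instances
resource_id = 'endpoint/my-endpoint/variant/AllTraffic'

autoscaling.register_scalable_target(
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    MinCapacity=1,
    MaxCapacity=4,
)

autoscaling.put_scaling_policy(
    PolicyName='InvocationsPerInstance-Target',
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        # Add or remove instances to keep invocations per instance near this target
        'TargetValue': 1000.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance'
        },
        'ScaleOutCooldown': 60,
        'ScaleInCooldown': 300,
    },
)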

Recommendations for avoiding unnecessary costs

Certain Amazon SageMaker resources (such as processing, training, tuning, and batch transform instances) are ephemeral: Amazon SageMaker automatically launches these instances and terminates them when the job is done. Other resources (such as build compute resources or hosting endpoints), however, aren’t ephemeral, and you control when they should be stopped or terminated. Therefore, knowing how to identify idle resources and stop them can lead to better cost optimization. This section outlines some useful methods for automating these processes.

Build environment: Automatically stopping idle on-demand notebook instances

One way to avoid the cost of idle notebook instances is to stop idle instances automatically using lifecycle configurations. With a lifecycle configuration in Amazon SageMaker, you can customize your notebook environment by installing packages or sample notebooks on your notebook instance, configuring networking and security for it, or otherwise using a shell script to customize it. Such flexibility gives you more control over how your notebook environment is set up and run.

AWS maintains a public repository of notebook lifecycle configuration scripts that address common use cases for customizing notebook instances, including a sample bash script for stopping idle notebooks.

You can configure your notebook instance using a lifecycle configuration to automatically stop itself if it’s idle for a certain period of time (a parameter that you set). The idle state for a Jupyter notebook is defined in the following GitHub issue. To create a new lifecycle configuration for this purpose, follow these steps:

  1. On the Amazon SageMaker console, choose Lifecycle configurations.
  2. Choose Create a new lifecycle configuration (if you are creating a new one).
  3. For Name, enter a name using alphanumeric characters and -, but no spaces. The name can have a maximum of 63 characters. For example, Stop-Idle-Instance.
  4. To create a script that runs when you create the notebook and every time you start it, choose Start notebook.
  5. In the Start notebook editor, enter the script.
  6. Choose Create configuration.

The bash script to use for this purpose can be found in the AWS Samples repository for lifecycle configuration samples. This script installs a cron job that runs every 5 minutes and stops the instance once it has been idle longer than the period defined by the IDLE_TIME parameter in the script. You can change this time to your preference and edit the script as needed on the Lifecycle configuration page.

For this script to work, the notebook should meet these two criteria:

  • The notebook instance has internet connectivity to fetch the example config Python script (autostop.py) from the public repository
  • The notebook instance execution role has permissions for SageMaker:StopNotebookInstance to stop the notebook and SageMaker:DescribeNotebookInstance to describe the notebook

If you create notebook instances in a VPC that doesn’t allow internet connectivity, you need to add the Python script inline in the bash script. The script is available on the GitHub repo. Enter it in your bash script as follows, and use this for lifecycle configuration instead:

#!/bin/bash
set -e

# PARAMETERS
IDLE_TIME=3600

echo "Creating the autostop.py"
cat << EOF > autostop.py
##
## [PASTE PYTHON SCRIPT FROM GIT REPO HERE]
##
EOF

echo "Starting the SageMaker autostop script in cron"
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/bin/python $PWD/autostop.py --time $IDLE_TIME --ignore-connections") | crontab -

The following screenshot shows how to choose the lifecycle configuration on the Amazon SageMaker console.

Alternatively, you can store the script on Amazon S3 and connect to the script through a VPC endpoint. For more information, see New – VPC Endpoint for Amazon S3.

Now that you have created the lifecycle configuration, you can assign it to an on-demand notebook instance when creating a new one or when updating an existing notebook. To create a notebook with your lifecycle configuration (for this post, Stop-Idle-Instance), you need to assign the script to the notebook under the Additional configuration section. All other steps are the same as outlined in Create an On-Demand Notebook Instance. To attach the lifecycle configuration to an existing notebook, first stop the on-demand notebook instance, then choose Update settings to make changes to the instance. You attach the lifecycle configuration in the Additional configuration section.
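
If you create notebook instances programmatically, you can attach the lifecycle configuration at creation time as well; a minimal boto3 sketch, where the notebook name, instance type, and role ARN are placeholders:

import boto3

sm = boto3.client('sagemaker')
sm.create_notebook_instance(
    NotebookInstanceName='dev-exploration',
    InstanceType='ml.t3.medium',
    RoleArn='arn:aws:iam::111122223333:role/MySageMakerExecutionRole',
    LifecycleConfigName='Stop-Idle-Instance',  # the configuration created above
)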

Build environment: Scheduling start and stop of on-demand notebook instances

Another approach is to schedule your notebooks to start and stop at specific times. For example, if you want to start your notebooks (such as notebooks of specific groups or all notebooks in your account) at 7:00 AM and stop all of them at 9:00 PM during weekdays (Monday through Friday), you can accomplish this by using Amazon CloudWatch Events and AWS Lambda functions. For more information about configuring your Lambda functions, see Configuring functions in the AWS Lambda console. To build the schedule for this use case, you can follow the steps in the following sections.

Starting notebooks with a Lambda function

To start your notebooks with a Lambda function, complete the following steps:

  1. On the Lambda console, create a Lambda function for starting on-demand notebook instances with specific keywords in their name. For this post, our development team’s on-demand notebook instances have names starting with dev-.
  2. Use Python as the runtime for the function, and name the function start-dev-notebooks.

Your Lambda function should have the SageMakerFullAccess policy attached to its execution IAM role.

  3. Enter the following script into the Function code editing area:
# Code to start stopped notebook instances that contain specific keywords in their name
# Change "dev-" in NameContains to your specific use case
import boto3

client = boto3.client('sagemaker')

def lambda_handler(event, context):
    try:
        response_nb_list = client.list_notebook_instances(
            NameContains='dev-',  # Change this to your specific use case
            StatusEquals='Stopped'
        )
        for nb in response_nb_list['NotebookInstances']:
            response_nb_start = client.start_notebook_instance(
                NotebookInstanceName=nb['NotebookInstanceName'])
        return {"Status": "Success"}
    except:
        return {"Status": "Failure"}

  4. Under Basic Settings, change Timeout to 15 minutes (the maximum).

This step makes sure the function has the maximum allowable timeout when stopping or starting multiple notebooks.

  5. Save your function.

Stopping notebooks with a Lambda function

To stop your notebooks with a Lambda function, follow the same steps, use the following script, and name the function stop-dev-notebooks:

# Code to stop InService notebook instances that contain specific keywords in their name
# Change "dev-" in NameContains to your specific use case
import boto3

client = boto3.client('sagemaker')

def lambda_handler(event, context):
    try:
        response_nb_list = client.list_notebook_instances(
            NameContains='dev-',  # Change this to your specific use case
            StatusEquals='InService'
        )
        for nb in response_nb_list['NotebookInstances']:
            response_nb_stop = client.stop_notebook_instance(
                NotebookInstanceName=nb['NotebookInstanceName'])
        return {"Status": "Success"}
    except:
        return {"Status": "Failure"}

Creating a CloudWatch event

Now that you have created the functions, you need to create an event to trigger these functions on a specific schedule.

We use cron expression format for the schedule. For more information about creating your custom cron expression, see Schedule Expressions for Rules. All scheduled events use UTC time zone, and the minimum precision for schedules is 1 minute.

For example, the cron expression for 7:00 AM, Monday through Friday throughout the year, is 0 7 ? * MON-FRI *, and for 9:00 PM on the same days is 0 21 ? * MON-FRI *.

To create the event for stopping your instances on a specific schedule, complete the following steps:

  1. On the CloudWatch console, under Events, choose Rules.
  2. Choose Create rule.
  3. Under Event Source, select Schedule, and then select Cron expression.
  4. Enter your cron expression (for example, 0 21 ? * MON-FRI * for 9:00 PM Monday through Friday).
  5. Under Targets, choose Lambda function.
  6. Choose your function from the list (for this post, stop-dev-notebooks).
  7. Choose Configure details.

  8. Add a name for your event, such as Stop-Notebooks-Event, and a description.
  9. Leave Enabled selected.
  10. Choose Create rule.

You can follow the same steps to create a scheduled event that starts your notebooks on a schedule, such as 7:00 AM on weekdays, so that when your staff start their day, the notebooks are ready and in service.
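
If you prefer to create the schedule programmatically rather than through the console, the following is a minimal boto3 sketch for the stop rule; the Lambda function ARN (account ID and Region) is a placeholder:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

rule_arn = events.put_rule(
    Name='Stop-Notebooks-Event',
    ScheduleExpression='cron(0 21 ? * MON-FRI *)',  # 9:00 PM UTC, Monday through Friday
    State='ENABLED',
    Description='Stop dev notebook instances every weekday evening',
)['RuleArn']

events.put_targets(
    Rule='Stop-Notebooks-Event',
    Targets=[{
        'Id': 'stop-dev-notebooks',
        'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:stop-dev-notebooks',
    }],
)

# Allow CloudWatch Events to invoke the Lambda function
lambda_client.add_permission(
    FunctionName='stop-dev-notebooks',
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)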

Hosting environment: Automatically detecting idle Amazon SageMaker endpoints

You can deploy your ML models as endpoints to test them for real-time inference. Sometimes these endpoints are accidentally left in service, leading to ongoing charges on the account. You can automatically detect these endpoints and take corrective actions (such as deleting them) by using CloudWatch Events and Lambda functions. For example, you can detect endpoints that have been idle for a certain number of hours (with no invocations over a period such as 24 hours). The function script we provide in this section detects idle endpoints and publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic with the list of idle endpoints. You can subscribe the account admins to this topic, and they receive emails with the list of idle endpoints when detected. To create this scheduled event, follow these steps:

  1. Create an SNS topic and subscribe your email or phone number to it.
  2. Create a Lambda function with the following script.
    1. Your Lambda function should have the following policies attached to its IAM execution role: CloudWatchReadOnlyAccess, AmazonSNSFullAccess, and AmazonSageMakerReadOnly.
import boto3
from datetime import datetime
from datetime import timedelta

def lambda_handler(event, context):
    idle_threshold_hr = 24  # Change this to your threshold in hours
    cw = boto3.client('cloudwatch')
    sm = boto3.client('sagemaker')
    sns = boto3.client('sns')
    try:
        inservice_endpoints = sm.list_endpoints(
            SortBy='CreationTime',
            SortOrder='Ascending',
            MaxResults=100,
            # NameContains='string', # for example 'dev-'
            StatusEquals='InService'
        )
        idle_endpoints = []
        for ep in inservice_endpoints['Endpoints']:
            ep_describe = sm.describe_endpoint(
                EndpointName=ep['EndpointName']
            )
            metric_response = cw.get_metric_statistics(
                Namespace='AWS/SageMaker',
                MetricName='Invocations',
                Dimensions=[
                    {'Name': 'EndpointName', 'Value': ep['EndpointName']},
                    {'Name': 'VariantName', 'Value': ep_describe['ProductionVariants'][0]['VariantName']}
                ],
                StartTime=datetime.utcnow() - timedelta(hours=idle_threshold_hr),
                EndTime=datetime.utcnow(),
                Period=int(idle_threshold_hr * 60 * 60),
                Statistics=['Sum'],
                Unit='None'
            )
            if len(metric_response['Datapoints']) == 0:
                idle_endpoints.append(ep['EndpointName'])

        if len(idle_endpoints) > 0:
            response_sns = sns.publish(
                TopicArn='YOUR SNS TOPIC ARN HERE',
                Message="The following endpoints have been idle for over {} hrs. "
                        "Log on to the Amazon SageMaker console to take actions.\n\n{}".format(
                            idle_threshold_hr, '\n'.join(idle_endpoints)),
                Subject='Automated Notification: Idle Endpoints Detected',
                MessageStructure='string'
            )
        return {'Status': 'Success'}
    except:
        return {'Status': 'Fail'}

You can also revise this code to filter the endpoints based on resource tags. For more information, see AWS Python SDK Boto3 documentation.

Investigating endpoints

This script sends an email (or text message, depending on how the SNS topic is configured) with the list of detected idle endpoints. You can then sign in to the Amazon SageMaker console and investigate those endpoints, and delete them if you find them to be unused stray endpoints. To do so, complete the following steps:

  1. On the Amazon SageMaker console, under Inference, choose Endpoints.

You can see the list of all endpoints on your account in that Region.

  2. Select the endpoint that you want to investigate, and under Monitor, choose View invocation metrics.
  3. Under All metrics, select Invocations.

You can see the invocation activities on the endpoint. If you notice no invocation event (or activity) for the duration of your interest, it means the endpoint isn’t in use and you can delete it.

  4. When you’re confident you want to delete the endpoint, go back to the list of endpoints, select the endpoint you want to delete, and on the Actions menu, choose Delete. Alternatively, you can delete the endpoint programmatically, as shown in the following sketch.
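
A minimal boto3 sketch for deleting a confirmed-idle endpoint and its endpoint configuration; the endpoint name is a placeholder:

import boto3

sm = boto3.client('sagemaker')
endpoint_name = 'dev-idle-endpoint'

endpoint_config_name = sm.describe_endpoint(EndpointName=endpoint_name)['EndpointConfigName']
sm.delete_endpoint(EndpointName=endpoint_name)                        # stops the hosting charges
sm.delete_endpoint_config(EndpointConfigName=endpoint_config_name)   # optional cleanup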

Conclusion

This post walked you through how Amazon SageMaker pricing works, best practices for right-sizing Amazon SageMaker compute resources for different stages of an ML project, and best practices for avoiding unnecessary costs of unused resources by either automatically stopping idle on-demand notebook instances or automatically detecting idle Amazon SageMaker endpoints so you can take corrective actions.

By understanding how Amazon SageMaker works and the pricing model for Amazon SageMaker resources, you can take steps in optimizing your total cost of ML projects even further.


About the authors

Nick Minaie is an Artificial Intelligence and Machine Learning (AI/ML) Specialist Solution Architect, helping customers on their journey to well-architected machine learning solutions at scale. In his spare time, Nick enjoys family time, abstract painting, and exploring nature.

Michael Hsieh is a Senior AI/ML Specialist Solutions Architect. He works with customers to advance their ML journey with a combination of AWS ML offerings and his ML domain knowledge. As a Seattle transplant, he loves exploring the great mother nature the city has to offer such as the hiking trails, scenic kayaking in the SLU, and the sunset at the Shilshole Bay.

Source: https://aws.amazon.com/blogs/machine-learning/right-sizing-resources-and-avoiding-unnecessary-costs-in-amazon-sagemaker/

10 Ways Machine Learning Practitioners Can Build Fairer Systems

Skyler Wharton (@skylerwharton)

Software Engineer (ML & Backend) @ Airbnb. My opinions are my own. [they/them]

An introduction to the harm that ML systems cause and to the power imbalance that exists between ML system developers and ML system participants …and 10 concrete ways for machine learning practitioners to help build fairer ML systems.

Image caption: Photo by Koshu Kunii on Unsplash. Image description: Photo of Black Lives Matter protesters in Washington, D.C. — 2 signs say “Black Lives Matter” and “White Silence is Violence.”

Machine learning systems are increasingly used as tools of oppression. All too often, they’re used in high-stakes processes without participants’ consent and with no reasonable opportunity for participants to contest the system’s decisions — like when risk assessment systems are used by child welfare services to identify at-risk children; when a machine learning (or “ML”) model decides who sees which online ads for employment, housing, or credit opportunities; or when facial recognition systems are used to surveil neighborhoods where Black and Brown people live.

ML systems are deployed widely because they are viewed as “neutral” and “objective.”

In reality though, machine learning systems reflect the beliefs and biases of those who design and develop them.

As a result, ML systems mirror and amplify the beliefs and biases of their designers, and are at least as susceptible to making mistakes as human arbiters.

When ML systems are deployed at scale, they cause harm — especially when their decisions are wrong. This harm is disproportionately felt by members of marginalized communities [1]. This is especially evident in this moment, when people protesting as part of the global movement for Black Lives are being tracked by police departments using facial recognition systems [2] and when an ML system was recently used to determine students’ A-level grades in the U.K. after the tests were cancelled due to the pandemic, jeopardizing the futures of poorer students, many of whom are people of color and immigrants [3].

In this post, I’ll describe some examples of harm caused by machine learning systems. Then I’ll offer some concrete recommendations and resources that machine learning practitioners can use to develop fairer machine learning systems. I hope this post encourages other machine learning practitioners to start using and educating their peers about practices for developing fairer ML systems within their teams and companies.

How machine learning systems cause harm

In June 2020, Robert Williams, a Black man, was arrested by the Detroit Police Department because a facial recognition system identified him as the person who had committed a recent shoplifting offense; however, a visual comparison of his face to the face in the photo clearly revealed that they weren’t the same person [4].

Nevertheless, Mr. Williams was arrested, interrogated, kept in custody for more than 24 hours, released on bail paid with his own money, and had to appear in court before his case was dismissed.

This “accident” significantly harmed Mr. Williams and his family:

  • He felt humiliated and embarrassed. When interviewed by the New York Times about this incident, he said, “My mother doesn’t know about it. It’s not something I’m proud of … It’s humiliating.”
  • It caused lasting trauma to him and his family. Had Mr. Williams resisted arrest — which would have been reasonable given that it was unjust — he could have been killed. As it was, the experience was harrowing. He and his wife now wonder whether they need to put their two young daughters into therapy.
  • It put his job — and thus his ability to support himself and his family — at risk. He could have lost his job, even though his case was ultimately dismissed; companies have fired employees with impunity for far less. Fortunately, his boss was understanding of the situation, but his boss still advised him not to tell others at work.
  • It nearly resulted in him having a permanent criminal record. When Mr. Williams went to court, his case was initially dismissed “without prejudice,” which meant that he could still be charged later. Only after the false positive received widespread media attention did the prosecutor apologize and offer to expunge his record and fingerprints.

The harms caused here by a facial recognition system used by a local police department are unacceptable.

Facebook’s ad delivery system is another example of a harmful machine learning system. In 2019, Dr. Piotr Sapieżyński, a research scientist at Northeastern University, and his collaborators conducted an experiment using Facebook’s own marketing tools to discover how employment ads are distributed on Facebook [5, 6]. Through this experiment, they discovered that Facebook’s ad delivery system, despite neutral targeting preferences, shows significantly different job ads to each user depending upon their gender and race. In other words, even if an advertiser specifies that they want their ad to be seen uniformly by all genders and all races, Facebook’s ad delivery system will, depending on the content of the ad, show the ad to a race- and/or gender-skewed audience.

Specifically, Dr. Sapieżyński and collaborators discovered that women are more likely to receive ads for supermarket, janitor, and preschool jobs, whereas men are more likely to receive ads for taxi, artificial intelligence, and lumber jobs. (The researchers acknowledge that the study was limited to binary genders due to restrictions in Facebook’s advertising tools.) They similarly discovered that Black people are more likely to receive ads for taxi, janitor, and restaurant jobs, whereas white people are more likely to receive ads for secretary, artificial intelligence, and lumber jobs.

Facebook’s ad delivery system is an example of a consumer-facing ML system that causes harm to those who participate in it:

  • It perpetuates and amplifies gender- and race-based employment stereotypes for people who use Facebook. For example, women are shown ads for jobs that have historically been associated with “womanhood” (e.g., caregiving or cleaning jobs); seeing such ads reinforces their own — and also other genders’ — perceptions of jobs that women can or “should” do. This is also the case for the ads shown to Black people.
  • It restricts Black users’ and woman users’ access to economic opportunity. The advertisements that Facebook shows to Black people and women are for noticeably lower-paying jobs. If Black people and women do not even know about available higher-paying jobs, then they are unable to apply for and be hired for them.

The harms caused by Facebook’s ad delivery system are also unacceptable.

Broader context

In the case of both aforementioned algorithmic systems, the harm they cause goes deeper: they amplify existing systems of oppression, often in the name of “neutrality” and “objectivity.” In other words, the examples above are not isolated incidents; they contribute to long-standing patterns of harm.

For example, Black people, especially Black men and Black masculine people, have been systematically overpoliced, targeted, and murdered for the last four hundred years. This is undoubtedly still true, as evidenced by the recent murders by the police of George Floyd, Breonna Taylor, Tony McDade, and Ahmaud Arbery and recent shooting by the police of Jacob Blake.

Commercial facial recognition systems allow police departments to more easily and subtly target Black men and masculine people, including to target them at scale. A facial recognition system can identify more “criminals” in an hour than a hundred police officers could in a month, and it can do so less expensively. Thus, commercial facial recognition systems allow police departments to “mass produce” their practice of overpolicing, targeting, and murdering Black people.

Moreover, in 2018, computer science researchers Joy Buolamwini and Dr. Timnit Gebru showed that commercial facial recognition systems are significantly less accurate for darker-skinned people than they are for lighter-skinned people [7]. Indeed, when used for surveillance, facial recognition systems identify the wrong person up to 98% of the time [8]. As a result, when allowed to be used by police departments, commercial facial recognition systems cause harm not only by “scaling” police forces’ discriminatory practices but also by identifying the wrong person the majority of the time.

Facebook’s ad delivery system also amplifies a well-documented system of oppression: wealth inequality by race. In the United States, the median adjusted household income of white and Asian households is 1.6x greater than that of Black and Hispanic households (~$71K vs. $43K), and the median net worth of white households is 13x greater than that of Black households (~$144K vs. $11K) [9]. Thus, by consistently showing ads for only lower-paying jobs to the millions of Black people who use Facebook, Facebook is entrenching and widening the wealth gap between Black people and more affluent demographic groups (especially white people) in the United States. Facebook’s ad delivery system is likely similarly amplifying wealth inequities in other countries around the world.

How collecting labels for machine learning systems causes harm

Harm is not only caused by machine learning systems that have been deployed; harm is also caused while machine learning systems are being developed. That is, harm is often caused while labels are being collected for the purpose of training machine learning models.

For example, in February 2019, The Verge’s Casey Newton released a piece about the working conditions inside Cognizant, a vendor that Facebook hires to label and moderate Facebook content [10]. His findings were shocking: Facebook was essentially running a digital sweatshop.

What he discovered:

  • Employees were underpaid: In Phoenix, AZ, a moderator made $28,800/year (versus the $240,000/year total compensation of a full-time Facebook employee).
  • Working conditions at Cognizant were abysmal: Employees were often fired after making just a few mistakes a week. Since a “mistake” occurred when two employees disagreed about how a piece of content should be moderated, resentment grew between employees. Fired employees often threatened to return to work and harm their old colleagues. Additionally, employees were micromanaged: they got two 15-minute breaks and one 30-minute lunch per day. Much of their break time was spent waiting in line for the bathroom, as often >500 people had to share six bathroom stalls.
  • Employees’ mental health was damaged: Moderators spent most of their time reviewing graphically violent or hateful content, including animal abuse, child abuse, and murders. As a result of watching six hours per day of violent or hateful content, employees developed severe anxiety, often while still in training. After leaving the company, employees developed symptoms of PTSD. While employed, employees had access to only nine minutes of mental health support per day; after they left the company, they had no mental health support from Facebook or Cognizant.

Similar harms are caused by crowdsourcing platforms like Amazon Mechanical Turk, through which individuals, academic labs, or companies submit tasks for “crowdworkers” to complete:

  • Employees are underpaid. Mechanical Turk and other similar platforms are premised on a large amount of unpaid labor: workers are not paid to find tasks, for tasks they start but can’t complete due to vague instructions, for tasks rejected by task authors for often arbitrary reasons, or for breaks. As a result, the median wage for a crowdworker on Mechanical Turk is approximately $2/hour [11]. Workers who do not live in the United States, are women, and/or are disabled are likely to earn much less per hour [12].
  • Working conditions are abysmal. Workers’ income fluctuates over time, so they can’t plan for themselves or their families for the long-term; workers don’t get healthcare or any other benefits; and workers have no legal protections.
  • Employees’ mental health is damaged. Crowdworkers often struggle to find enough well-paying tasks, which causes stress and anxiety. For example, workers report waking up at 2 or 3am in order to get tasks that pay better [11].

Contrary to popular belief, many people who complete tasks on crowdsourcing platforms do so in order to earn the bulk of their income. Thus, people who work for private labeling companies like Cognizant and people who work for crowdsourcing platforms like Mechanical Turk have a similar goal: to complete labeling tasks in a safe and healthy work environment in exchange for fair wages.

Why these harms are happening

At this point, you might be asking yourself, “Why are these harms happening?” The answer is multifaceted: there are many reasons why deployed machine learning systems cause harm to their participants.

When ML systems are used

A big reason that machine learning systems cause harm is due to the contexts in which they’re used. That is, because machine learning systems are considered “neutral” and “objective,” they’re often used in high-stakes decision processes as a way to save money. High-stakes decision processes are inherently more likely to cause harm, since a mistake made during the decision process could have a significant negative impact on someone’s life.

At best, introducing a machine learning system into a high-stakes decision process does not affect the probability that the system causes harm; at worst, it increases the probability of harm, due to machine learning models’ tendency to amplify biases against marginalized groups, human complacency around auditing the models’ decisions (since they’re “neutral” and “objective”), and the fact that machine learning models’ decisions are often uninterpretable.

How ML systems are designed

Machine learning systems also cause harm because of how they’re designed. For example, when designing a system, engineers often do not account for the possibility that the system could make an incorrect decision; thus, machine learning systems often do not include a mechanism for participants to feasibly contest the decision or seek recourse.

Whose perspectives are centered when ML systems are designed

Another reason that ML systems cause harm is that the perspectives of people who are most likely to be harmed by them are not centered when the system is being designed.

Systems designed by people will reflect the beliefs and biases — both conscious and unconscious — of those people. Machine learning systems are overwhelmingly built by a very homogenous group of people: white, Asian-American, or Asian heterosexual cisgender men who are between 20 and 50 years old, who are able-bodied and neurotypical, who are American and/or who live in the United States, and who have a traditional educational background, including a degree in computer science from one of ~50 elite universities. As a result, machine learning systems are biased towards the experiences of this narrow group of people.

Additionally, machine learning systems are often used in contexts that disproportionately involve historically marginalized groups (like predicting recidivism or surveilling “high crime” neighborhoods) or to determine access to resources that have long been unfairly denied to marginalized groups (like housing, employment opportunities, credit and loans, and healthcare). For example, since Black people have historically been denied fair access to healthcare, machine learning systems used in such contexts display similar patterns of discrimination, because they hinge on historical assumptions and data [13]. As a result, unless deliberate action is taken to center the experiences of the groups that ML systems are arbitrating, machine learning systems lead to history repeating itself.

At the intersection of the aforementioned two points is a chilling realization: the people who design machine learning systems are rarely the people who are affected by machine learning systems. This rings eerily similar to the fact that most police do not live in the cities where they work [14].

Lack of transparency around when ML systems are used

Harm is also caused by machine learning systems because it’s often unclear when an algorithm has been used to make a decision. This is because companies are not required to disclose when and how machine learning systems are used (much less get participants’ consent), even when the outcomes of those decisions affect human lives. If someone is unaware that they’ve been affected by an ML system, then they can’t attribute harm they may have experienced to it.

Additionally, even if a person knows or suspects that they’ve been harmed by a machine learning system, proving that they’ve been discriminated against is difficult or impossible, since the complete set of decisions made by the ML system is private and thus cannot be audited for discrimination. As a result, harm that machine learning systems cause often cannot be “proven.”

Lack of legal protection for ML system participants

Finally, machine learning systems cause harm because there is currently very little regulatory or legal oversight around when and how machine learning systems are used, so companies, governments, and other organizations can use them to discriminate against participants with impunity.

With respect to facial recognition, this is slowly changing: in 2019, San Francisco became the first major city to ban the use of facial recognition by local government agencies [15]. Since then, several other cities have done the same, including Oakland, CA; Somerville, MA; and Boston, MA [16, 17].

Nevertheless, there are still hundreds of known instances of local government agencies using facial recognition, including at points of entry into the United States like borders and airports, and by local police for unspecified purposes [18]. Use of facial recognition systems in these contexts, especially given that the majority of their decisions are likely wrong [8], has real-world impacts, including harassment, unjustified imprisonment, and deportation.

With respect to other types of machine learning systems, there have been few legal advances.

Call to action

Given the contexts in which ML systems are used, the current lack of legal and regulatory oversight for such contexts, and the lack of societal power that people harmed by ML systems tend to have (due to their, e.g., race, gender, disability, citizenship, and/or wealth), ML system developers have massively more power than participants.

Image caption: There are huge power imbalances in machine learning system development: ML system developers have more power than ML system participants, and labeling task requesters have more power than labeling agents. [Image source: http://www.clker.com/clipart-scales-uneven.html] Image description: Imbalanced scale image — ML system developer & labeling task requester weigh more than ML system participant & labeling agent

There’s a similar power dynamic between people who design labeling tasks and people who complete labeling tasks: labeling task requesters have more power than labeling agents.

Here, ML system developer is defined as anyone who is involved in the design, development, and deployment of machine learning systems, including machine learning engineers and data scientists, as well as software engineers of other technical disciplines, product managers, engineering managers, UX researchers, UX writers, lawyers, mid-level managers, and C-suite executives. All of these roles are included in order to emphasize that even if you don’t work directly on a machine learning system, if you work at a company or organization that uses machine learning systems, then you have power to affect change in when and how machine learning is used at your company.

Let me be clear: individual action is not enough — we desperately need well-designed legislation to guide when and how ML systems can be used. Importantly, there should be some contexts in which ML systems cannot be used, no matter how “accurate” they are, because the probability of misuse and mistakes are too great — like police departments using facial recognition systems [19].

Unfortunately, we do not have the necessary legislation and regulation in place yet. In the meantime, as ML system developers, we should intentionally consider the ML systems that we, our teams, or our companies own and use.

How to build fairer machine learning systems

If you are a machine learning system developer — especially if you are a machine learning practitioner, like an ML engineer or data scientist — here are 10 ways you can help build machine learning systems that are fairer:

#1

When designing a new ML system or evaluating an existing ML system, ask yourself and your team the following questions about the context in which the system is being deployed/is deployed [20]:

  • What could go wrong when this ML system is deployed?
  • When something goes wrong, who is harmed?
  • How likely is it that something will go wrong?
  • Does the harm disproportionately fall on marginalized groups?

Use your answers to these questions to evaluate how to proceed. For example, if possible, proactively engineer solutions that prevent harms from occurring (e.g., add safeguards such as human review and mechanisms for participants to contest system decisions, and inform participants that a machine learning algorithm is being used). Alternatively, if the likelihood and scale of harm are too high, do not deploy the system. Instead, consider pursuing a solution that does not depend on machine learning or that uses machine learning in a less risky way. Deploying a biased machine learning system can cause real-world harm to system participants as well as reputational damage to your company [21, 22, 23].

#2

Utilize best practices for developing fairer ML systems. Machine learning fairness researchers have been designing and testing best practices for several years now. For example, one best practice is to release a datasheet alongside any dataset you release for public or internal use: a short document that shares the information consumers of the dataset need in order to make informed decisions about using it (e.g., the mechanisms or procedures used to collect the data, whether an ethical review process was conducted, and whether or not the dataset relates to people) [24].

Similarly, when releasing a trained model for public or internal use, simultaneously release a model card, a short document that shares information about the model (e.g., evaluation results (ideally disaggregated across different demographic groups and communities), intended usage(s), usages to avoid, insight into model training processes) [25].
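
For instance, the disaggregated evaluation results that belong in a model card can be produced with a few lines of code. The sketch below is minimal and hypothetical: the DataFrame columns `group`, `label`, and `prediction`, and the choice of metrics, are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch: per-group ("disaggregated") evaluation results for a model card.
# Assumes a pandas DataFrame with "group", "label", and "prediction" columns;
# the column names and metrics here are illustrative, not a required schema.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score


def disaggregated_metrics(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Accuracy and false-negative rate broken out by demographic group."""
    rows = []
    for group, subset in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(subset),
            "accuracy": accuracy_score(subset["label"], subset["prediction"]),
            # FNR = 1 - recall; a much higher FNR for one group is worth reporting.
            "false_negative_rate": 1 - recall_score(
                subset["label"], subset["prediction"], zero_division=0
            ),
        })
    return pd.DataFrame(rows)


# Toy example: replace with your model's real evaluation set.
eval_df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 1, 0],
    "prediction": [1, 0, 0, 1, 0, 1],
})
print(disaggregated_metrics(eval_df))
```

A per-group table like the one this prints is exactly the kind of evaluation detail that makes a model card useful to downstream consumers, because an aggregate accuracy number can hide large disparities between groups.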

Finally, consider implementing a company-wide process for internal algorithmic auditing, like the one Deb Raji, Andrew Smart, and their collaborators proposed in their 2020 paper Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing.

#3

Work with your company or organization to develop partnerships with advocacy organizations that represent groups of people that machine learning systems tend to marginalize, in order to responsibly engage marginalized communities as stakeholders. Examples of such organizations include Color Of Change and the NAACP. Then, while developing new machine learning systems or evaluating existing machine learning systems, seek and incorporate their feedback.

#4

Hire machine learning engineers and data scientists from underrepresented backgrounds, especially Black people, Indigenous people, Latinx people, disabled people, transgender and nonbinary people, formerly incarcerated people, and people from countries that are underrepresented in technology (e.g., countries in Africa, Southeast Asia, and South America). Note that this will require rethinking how talent is discovered and trained [26] — consider recruiting from historically Black colleges and universities (HBCUs) in the U.S. and from coding and data science bootcamps, or starting an internal program like Slack’s Next Chapter.

On a related note, work with your company to support organizations that foster talent from underrepresented backgrounds, like AI4ALL, Black Girls Code, Code2040, NCWIT, TECHNOLOchicas, TransTech, and Out for Undergrad. Organizations like these are critical for increasing the number of people from underrepresented backgrounds in technology jobs, including in ML/AI jobs, and all of them have a proven track record of success. Additionally, consider supporting organizations like these with your own money and time.

#5

Work with your company or organization to sign the Safe Face Pledge, an opportunity for organizations to make public commitments toward mitigating the abuse of facial analysis technology. The pledge was jointly drafted by the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown Law, and has already been signed by many leading ethics and privacy experts.

#6

Learn more about the ways in which machine learning systems cause harm. For example, here are seven recommended resources to continue learning:

  1. [Book] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil (2016)
  2. [Book] Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Noble (2018)
  3. [Book] Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard (2018)
  4. [Book] Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks (2019)
  5. [Book] Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin (2019)
  6. [Book] Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L. Gray and Siddharth Suri (2019)
  7. [Film] Coded Bias (2020)

Additionally, you can learn more about harms caused by ML systems by reading the work of journalists and researchers who are uncovering biases in machine learning systems. In addition to the researchers and journalists I’ve already named in this essay (e.g., Dr. Piotr Sapieżyński, Casey Newton, Joy Buolamwini, Dr. Timnit Gebru, Deb Raji, and Andrew Smart), some examples include Julia Angwin (and anything written by The Markup), Khari Johnson, Moira Weigel, Lauren Kirchner, and anything written by Upturn. This work serves as an important set of case studies in how not to design machine learning systems, which is valuable for ML practitioners aiming to develop fair and equitable ML systems.

#7

Learn about ways in which existing machine learning systems have been improved in order to cause less harm. For example, IBM has worked to improve the performance of its commercial facial recognition system with respect to racial and gender bias, Google has worked to reduce gender bias in Google Translate, and Jigsaw (within Google) has worked to change the Perspective API (a public API for hate speech detection) so that it less often classifies phrases containing frequently targeted groups (e.g., Muslims, women, queer people) as hate speech.

#8

Conduct an audit of a machine learning system for disparate impact. Disparate impact occurs when, even though a policy or system is ostensibly neutral, one group of people is adversely affected more than another; Facebook’s ad delivery system is an example of a system causing disparate impact. (A simple way to quantify this is sketched at the end of this section.)

For example, use Project Lighthouse, a methodology that Airbnb released earlier this year that uses anonymized demographic data to measure user experience discrepancies that may be due to discrimination or bias, or ArthurAI, an ML monitoring framework that also lets you monitor model bias. (Full disclosure: I work at Airbnb.)

Alternatively, hire an algorithmic consulting firm to conduct an audit of a machine learning system that your team or company owns, like O’Neil Risk Consulting & Algorithmic Auditing or the Algorithmic Justice League.
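
To make the idea concrete, here is a minimal sketch of one common disparate impact measure, the adverse impact ratio behind the “four-fifths rule”: compare each group’s rate of favorable outcomes to a reference group’s rate. The column names, the toy data, and the 0.8 threshold are illustrative assumptions; a full audit like Project Lighthouse involves far more than this calculation.

```python
# Minimal sketch of a disparate impact check via the adverse impact ratio
# (the "four-fifths rule"). Column names, toy data, and the 0.8 threshold
# are illustrative assumptions, not a complete audit methodology.
import pandas as pd


def adverse_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str,
                          reference_group: str) -> pd.Series:
    """Each group's favorable-outcome rate divided by the reference group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]


# Toy example: outcome 1 = favorable decision (e.g., ad shown, loan approved).
decisions = pd.DataFrame({
    "group":   ["A"] * 5 + ["B"] * 5,
    "outcome": [1, 1, 1, 1, 0,  1, 0, 0, 1, 0],
})
ratios = adverse_impact_ratios(decisions, "group", "outcome", reference_group="A")
print(ratios)                  # group A: 1.0, group B: 0.5
print(ratios[ratios < 0.8])    # groups falling below the four-fifths threshold
```

A ratio below roughly 0.8 for any group is a common signal to investigate further; it is a starting point for an audit, not a verdict.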

#9

When hiring third-party vendors or using crowdsourcing platforms for machine learning labeling tasks, be critical of who you choose to support. Inquire about the working conditions of the people who will be labeling for you. Additionally, if possible, make an onsite visit to the vendor to gauge working conditions for yourself. What is their hourly pay? Do they have healthcare and other benefits? Are they full-time employees or contractors? Do they expose their workforce to graphically violent or hateful content? Are there opportunities for career growth and advancement within the company?

#10

Give a presentation to your team or company about the harms that machine learning systems cause and how to mitigate them. The more people who understand the harms that machine learning systems cause and the power imbalance that currently exists between ML system developers and ML system participants, the more likely it is that we can effect change on our teams and in our companies.

#11

Finally, the bonus #11 in this list is, if you are eligible to do so in the United States, VOTE. There is so much at stake in this upcoming election, including the rights of BIPOC people, immigrants, women, LGBTQ people, and disabled people as well as — quite literally — the future of our democracy. If you are not registered to vote, please do so now: Register to vote. If you are registered to vote but have not requested your absentee or mail-in ballot, please do so now: Request your absentee ballot. Even though Joe Biden is far from the perfect candidate, we need to elect him and Kamala Harris; this country, the people in it, and so many people around the world cannot survive another four years of a Trump presidency.

Conclusion

Machine learning systems are incredibly powerful tools; unfortunately, they can be either agents of empowerment or agents of harm. As machine learning practitioners, we have a responsibility to recognize the harm that the systems we build can cause and to act accordingly. Together, we can work toward a world in which machine learning systems are used responsibly, do not reinforce existing systemic biases, and uplift and empower people from marginalized communities.

This piece was inspired in part by Participatory Approaches to Machine Learning, a workshop at the 2020 International Conference on Machine Learning (ICML) that I had the opportunity to attend in July. I would like to deeply thank the organizers of this event for calling attention to the power imbalance between ML system developers and ML system participants and for creating a space to discuss it: Angela Zhou, David Madras, Inioluwa Deborah Raji, Bogdan Kulynych, Smitha Milli, and Richard Zemel. Also published here.

References

[1] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Cathy O’Neil. 2016.

[2] NYPD used facial recognition to track down Black Lives Matter activist. The Verge. August 18, 2020.

[3] An Algorithm Determined UK Students’ Grades. Chaos Ensued. Wired. August 15, 2020.

[4] Wrongfully Accused by an Algorithm. The New York Times. June 24, 2020.

[5] Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes. Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. CSCW 2019.

[6] Turning the tables on Facebook: How we audit Facebook using their own marketing tools. Piotr Sapiezynski, Muhammad Ali, Aleksandra Korolova, Alan Mislove, Aaron Rieke, Miranda Bogen, and Avijit Ghosh. Talk given at the PAML Workshop at ICML 2020.

[7] Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Joy Buolamwini and Timnit Gebru. ACM FAT* 2018.

[8] Facial-recognition software inaccurate in 98% of cases, report finds. CNET. May 13, 2018.

[9] On Views of Race and Inequality, Blacks and Whites Are Worlds Apart: Demographic trends and economic well-being. Pew Research Center. June 27, 2016.

[10] The Trauma Floor: The secret lives of Facebook moderators in America. The Verge. February 25, 2019.

[11] The Internet Is Enabling a New Kind of Poorly Paid Hell. The Atlantic. January 23, 2018.

[12] Worker Demographics and Earnings on Amazon Mechanical Turk: An Exploratory Analysis. Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Benjamin V. Hanrahan, Jeffrey P. Bigham, and Chris Callison-Burch. CHI Late Breaking Work 2019.

[13] Millions of black people affected by racial bias in health-care algorithms. Nature. October 24, 2019.

[14] Most Police Don’t Live In The Cities They Serve. FiveThirtyEight. August 20, 2014.

[15] San Francisco’s facial recognition technology ban, explained. Vox. May 14, 2019.

[16] Beyond San Francisco, more cities are saying no to facial recognition. CNN. July 17, 2019.

[17] Boston is second-largest US city to ban facial recognition. Smart Cities Dive. July 6, 2020.

[18] Ban Facial Recognition: Map. Accessed August 30, 2020.

[19] Defending Black Lives Means Banning Facial Recognition. Wired. July 10, 2020.

[20] Credit for this framing goes to Dr. Cathy O’Neil of O’Neil Risk Consulting & Algorithmic Auditing.

[21] Amazon reportedly scraps internal AI recruiting tool that was biased against women. The Verge. October 10, 2018.

[22] Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech. The Verge. January 12, 2018.

[23] Facebook’s ad-serving algorithm discriminates by gender and race. MIT Technology Review. April 5, 2019.

[24] Datasheets for Datasets. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. arXiv preprint 2018.

[25] Model Cards for Model Reporting. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. ACM FAT* 2019.

[26] Combating Anti-Blackness in the AI Community.


Source: https://hackernoon.com/10-ways-machine-learning-practitioners-can-build-fairer-systems-9p213t7l?source=rss

AI

Singapore Organizations Adopt AI, ML Amid COVID-19 Induced Uncertainties

Amid the COVID-19 pandemic, Singapore businesses are turning to artificial intelligence (AI) and machine learning (ML) to manage consumer credit risk and deal with economic uncertainties, according to new research by information services company Experian.

Experian, which surveyed 3,000 consumers and 900 executives working in retail banking, e-commerce, consumer technology and telecommunications, found that COVID-19 has accelerated adoption of digital solutions.

Singapore organizations in particular are embracing AI and ML at a much faster pace than their international peers, with 78% of organizations already using AI to cope with today’s marketplace unpredictability and 79% leveraging ML, both higher than the global figure of 69%.

S&P Global Ratings estimates that Asia Pacific (APAC) financial institutions will be hit with US$1.4 trillion in additional nonperforming assets and additional credit costs of about US$440 million as risks associated with COVID-19 and market volatility take hold.

Against this backdrop, 25% of Singapore-based respondents are planning to use on-demand cloud-based decisioning applications, policy rules (25%) and automated decision management (24%) to help them effectively determine which consumers can be safely given extended credit. Over the next 12 months, 69% will be allocating resources towards building their analytics capabilities to assess customer creditworthiness, the survey found.

Online shopping and e-commerce on the rise

Singaporean businesses’ willingness to invest in and adopt digital solutions comes at a time when consumers are demanding better digital-first experiences. Research conducted in June by market research consultancy Blackbox and survey firm Toluna found that while consumers spent more online during the pandemic, about four in ten Singaporeans said they were not satisfied with their e-commerce experience, noting that delivery costs, product prices and delivery times could be improved.

That being said, global marketing research firm Nielsen expects e-commerce penetration to continue rising. Nielsen’s March 2020 COVID-19 dipstick survey found that 69% of Singaporeans surveyed who bought household goods online for the first time during COVID-19 said they will do so again in the next 12 months.

Similarly, Standard Chartered, which polled 12,000 consumers across 12 markets in August 2020, found that, amid COVID-19, the share of Singaporean consumers who prefer online purchases to in-person card or cash payments increased to 50%, up from 35% before the pandemic.

Changing spending habits

Globally, the COVID-19 crisis and its ramifications have disrupted markets and damaged the health and economic welfare of consumers. In Singapore, 23% of respondents still face challenges in paying credit card bills, while 20% are encountering difficulties paying their utility bills, the Experian research found. This has prompted many consumers to rethink their spending habits, shifting to essentials and cutting back on most discretionary categories.

In Singapore, consumers are taking steps to manage these financial challenges by reducing their expenditure on non-essentials (22%), saving more (22%), and starting a personal budget (17%), the study found.

According to the Standard Chartered survey, consumers in the city-state are spending about 15-52% more on groceries, digital devices and healthcare, but spend less on clothes, experiences and travel or holidays.

Almost eight in ten respondents in Singapore said they would like to be better at managing their finances, and six in ten said the pandemic has made them more likely to track their spending. Most respondents either already use or are interested in using budgeting and finance-tracking tools.

Jeremy Soo, head of consumer banking at DBS Bank, told Fintech News Singapore in September that, amid COVID-19, people were starting financial planning earlier. Since the bank launched its new digital financial planning tool, NAV Planner, in April, over one million customers have used it, Soo said.

Source: https://fintechnews.sg/44597/ai/singapore-organizations-adopt-ai-ml-amid-covid-19-induced-uncertainties/

AI

How to Get the Best Start at Sports Betting

If you are looking to get into sports betting, you might be hesitant about how to start, and the whole idea can seem quite daunting. There are many ways to get the best possible start at sports betting, and in this article we will look at some of the best tips.

Mental preparation

This sounds a bit pretentious, but it is very important to understand some things about betting before starting so you can not only avoid nasty surprises but also avoid losing too much money. Firstly, you need to know that, in the beginning, you will not be good at betting. It is through experience and learning from your mistakes that you will get better. It is imperative that you do not convince yourself that you are good at betting, especially if you win some early bets, because I can guarantee it will have been luck – and false confidence is not your friend. 

It is likely that you will lose some money at first, but this is to be expected. Almost any hobby that you are interested in will cost you some money so, instead, look at it as an investment. However, do not invest ridiculous amounts; rather, wait until you are confident in your betting ability to start placing larger stakes. 

Set up different accounts

This is the best way to start with sports betting, as the welcome offers will offset a lot of the risk. These offers are designed to be profitable to entice you into betting with the bookie, but it is completely legal to just profit from the welcome offer and not bet with the bookie again. 

If you do this with as many bookies as you can, you are minimising the risk involved in your betting and maximising your possible returns, so it really is a no-brainer.

As well as this clear advantage, different betting companies offer different promotions. Ladbrokes, for example, offer a daily boost where you can choose a bet and boost its odds slightly, while the Parimatch betting website picks a bet for big events and doubles the odds.

If you are making sure you stay aware of the best offers across these platforms, then you will be able to use the most lucrative ones and, as such, you will be giving yourself the best chance of making money. The house always wins, as they say, but if you use this tip, you are skewing the odds back in your favour. 

Remember, the house wins because of gamblers that do not put in the effort and do not bet smart. Avoid those mistakes and you will massively increase your chances of making money.

Tipsters

On Twitter especially, but also on other social media platforms, there are tipsters who offer their bets for free. It is not so much the bets themselves that you are interested in, but rather the reasoning behind them. It is important that you find tipsters who know what they are doing, though, because there are a lot of tipsters who are essentially scamming their customers. It is quite easy to find legitimate tipsters because they are not afraid to show their mistakes.

Once you have found good tipsters, then you need to understand the reasoning behind their bets. When you have done that, you can start placing these bets yourself, and they will likely be of better value since some tipsters influence the betting markets considerably. You can also follow their bets as they are likely to be sensible bets, although this does not necessarily translate to success.

Source: https://1reddrop.com/2020/10/20/how-to-get-the-best-start-at-sports-betting/?utm_source=rss&utm_medium=rss&utm_campaign=how-to-get-the-best-start-at-sports-betting
