Amazon SageMaker Studio is a web-based fully integrated development environment (IDE) where you can perform end-to-end machine learning (ML) development to prepare data and build, train, and deploy models.
Like other AWS services, Studio supports a rich set of security-related features that allow you to build highly secure and compliant environments.
One of these fundamental security features allows you to launch Studio in your own Amazon Virtual Private Cloud (Amazon VPC). This allows you to control, monitor, and inspect network traffic within and outside your VPC using standard AWS networking and security capabilities. For more information, see Securing Amazon SageMaker Studio connectivity using a private VPC.
Customers in regulated industries, such as financial services, often don’t allow any internet access in ML environments. They typically rely on VPC endpoints for AWS services and connect only to private source code repositories in which all libraries have been vetted for both security and licensing. Other customers want to provide internet access, but with controls such as domain name or URL filtering, access limited to specific public repositories and websites, packet inspection, or other network traffic security controls. For these cases, a deployment based on AWS Network Firewall and a NAT gateway is a suitable solution.
In this post, we show how you can use Network Firewall to build a secure and compliant environment by restricting and monitoring internet access, inspecting traffic, and using stateless and stateful firewall engine rules to control the network flow between Studio notebooks and the internet.
Depending on your security, compliance, and governance rules, you may not need to or cannot completely block internet access from Studio and your AI and ML workloads. You may have requirements beyond the scope of network security controls implemented by security groups and network access control lists (ACLs), such as application protocol protection, deep packet inspection, domain name filtering, and intrusion prevention system (IPS). Your network traffic controls may also require many more rules compared to what is currently supported in security groups and network ACLs. In these scenarios, you can use Network Firewall—a managed network firewall and IPS for your VPC.
When you deploy Studio in your VPC, you control how Studio accesses the internet with the parameter
AppNetworkAccessType (via the Amazon SageMaker API) or by selecting your preference on the console when you create a Studio domain.
If you select Public internet only (
PublicInternetOnly), all the ingress and egress internet traffic from Amazon SageMaker notebooks flows through an AWS managed internet gateway attached to a VPC in your SageMaker account. The following diagram shows this network configuration.
Studio provides public internet egress through a platform-managed VPC for data scientists to download notebooks, packages, and datasets. Traffic to the attached Amazon Elastic File System (Amazon EFS) volume always goes through the customer VPC and never through the public internet egress.
To use your own control flow for the internet traffic, like a NAT or internet gateway, you must set the
AppNetworkAccessType parameter to
VpcOnly (or select VPC Only on the console). When you launch your app, this creates an elastic network interface in the specified subnets in your VPC. You can apply all available layers of security control—security groups, network ACLs, VPC endpoints, AWS PrivateLink, or Network Firewall endpoints—to the internal network and internet traffic to exercise fine-grained control of network access in Studio. The following diagram shows the
VpcOnly network configuration.
In this mode, direct internet access to or from notebooks is completely disabled, and all traffic is routed through an elastic network interface in your private VPC. This also includes traffic from Studio UI widgets and interfaces, such as Experiments, Autopilot, and Model Monitor, to their respective backend SageMaker APIs.
For more information about network access parameters when creating a domain, see CreateDomain.
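As an illustrative sketch, the VpcOnly mode can be requested when creating a domain programmatically. The parameters below follow the shape of the SageMaker CreateDomain API; the domain name, role ARN, VPC ID, and subnet ID are hypothetical placeholders.

```python
# Request parameters shaped like the SageMaker CreateDomain API input.
# The domain name, role ARN, VPC ID, and subnet ID are placeholders.
create_domain_params = {
    "DomainName": "studio-vpconly-demo",
    "AuthMode": "IAM",
    "AppNetworkAccessType": "VpcOnly",  # disable the platform-managed internet egress
    "VpcId": "vpc-0123456789abcdef0",
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "DefaultUserSettings": {
        "ExecutionRole": "arn:aws:iam::111122223333:role/StudioExecutionRole",
    },
}

# With the AWS SDK for Python (Boto3), the call would look like:
# boto3.client("sagemaker").create_domain(**create_domain_params)
```

Switching AppNetworkAccessType to PublicInternetOnly would restore the platform-managed egress path described earlier.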
The solution in this post uses the
VpcOnly option and deploys the Studio domain into a VPC with three subnets:
- SageMaker subnet – Hosts all Studio workloads. All ingress and egress network flow is controlled by a security group.
- NAT subnet – Contains a NAT gateway. We use the NAT gateway to access the internet without exposing any private IP addresses from our private network.
- Network Firewall subnet – Contains a Network Firewall endpoint. The route tables are configured so that all inbound and outbound external network traffic is routed via Network Firewall. You can configure stateful and stateless Network Firewall policies to inspect, monitor, and control the traffic.
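The routing intent behind the three subnets can be summarized as data. This is a simplified sketch: all resource IDs are hypothetical, and the last entry represents the ingress route table associated with the internet gateway, which steers return traffic for the NAT subnet (10.2.2.0/24, per the defaults later in this post) back through the firewall endpoint.

```python
# Illustrative route targets per route table in the inspection design.
# All IDs are placeholders.
default_routes = {
    "rtb-sagemaker-subnet": {"0.0.0.0/0": "nat-0123456789abcdef0"},   # egress via NAT gateway
    "rtb-nat-subnet": {"0.0.0.0/0": "vpce-0fw1234567890abcd"},        # egress via firewall endpoint
    "rtb-firewall-subnet": {"0.0.0.0/0": "igw-0123456789abcdef0"},    # egress via internet gateway
    "rtb-igw-ingress": {"10.2.2.0/24": "vpce-0fw1234567890abcd"},     # return traffic back through the firewall
}

for table, routes in default_routes.items():
    for destination, target in routes.items():
        print(f"{table}: {destination} -> {target}")
```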
The following diagram shows the overview of the solution architecture and the deployed components.
The solution deploys the following resources in your account:
- A VPC with a specified Classless Inter-Domain Routing (CIDR) block
- Three private subnets with specified CIDRs
- Internet gateway, NAT gateway, Network Firewall, and a Network Firewall endpoint in the Network Firewall subnet
- A Network Firewall policy and stateful domain list group with an allow domain list
- Elastic IP allocated to the NAT gateway
- Two security groups for SageMaker workloads and VPC endpoints, respectively
- Four route tables with configured routes
- An Amazon S3 VPC endpoint (type Gateway)
- AWS service access VPC endpoints (type Interface) for various AWS services that need to be accessed from Studio
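The stateful domain list group from the resource list above can be sketched as a request shaped like the Network Firewall CreateRuleGroup API; the group name and capacity here are illustrative, not the values the template uses.

```python
import json

# A stateful rule group with an allow domain list, shaped like the
# Network Firewall CreateRuleGroup API input. Name and capacity are illustrative.
rule_group_request = {
    "RuleGroupName": "domain-allow-sagemaker-demo",
    "Type": "STATEFUL",
    "Capacity": 100,
    "RuleGroup": {
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".kaggle.com"],          # allow kaggle.com and its subdomains
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "ALLOWLIST",   # traffic to all other domains is dropped
            }
        }
    },
}

# boto3.client("network-firewall").create_rule_group(**rule_group_request)
print(json.dumps(rule_group_request["RuleGroup"], indent=2))
```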
Network routing for targets outside the VPC is configured in such a way that all ingress and egress internet traffic goes via the Network Firewall and NAT gateway. For details and reference network architectures with Network Firewall and NAT gateway, see Architecture with an internet gateway and a NAT gateway, Deployment models for AWS Network Firewall, and Enforce your AWS Network Firewall protections at scale with AWS Firewall Manager. The AWS re:Invent 2020 video Which inspection architecture is right for you? discusses which inspection architecture is right for your use case.
The solution creates a SageMaker domain and user profile.
The solution uses only one Availability Zone and is not highly available. A best practice is to use a Multi-AZ configuration for any production deployment. You can make the solution highly available by duplicating the single-AZ setup—subnets, NAT gateway, and Network Firewall endpoints—across additional Availability Zones.
You use Network Firewall and its policies to control entry and exit of the internet traffic in your VPC. You create an allow domain list rule to allow internet access to the specified network domains only and block traffic to any domain not on the allow list.
AWS CloudFormation resources
Network Firewall is a Regional service; for more information on Region availability, see the AWS Region Table.
Your CloudFormation stack doesn’t have any required parameters. You may want to change the
*CIDR parameters to avoid conflicts with your existing resources and VPC CIDR allocations. Otherwise, use the following default values:
- ProjectName –
- DomainName –
- UserProfileName –
- VPCCIDR – 10.2.0.0/16
- FirewallSubnetCIDR – 10.2.1.0/24
- NATGatewaySubnetCIDR – 10.2.2.0/24
- SageMakerStudioSubnetCIDR – 10.2.3.0/24
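If the default CIDRs collide with your existing allocations, they can be overridden at deploy time. As a sketch, a request shaped like the CloudFormation CreateStack API might look like the following; the stack name, template URL, and alternative CIDRs are all illustrative.

```python
# Parameter overrides shaped like the CloudFormation CreateStack API input.
# The stack name, template URL, and alternative CIDRs are placeholders.
stack_request = {
    "StackName": "sagemaker-studio-firewall-demo",
    "TemplateURL": "https://example-bucket.s3.amazonaws.com/packaged-template.yaml",
    "Parameters": [
        {"ParameterKey": "VPCCIDR", "ParameterValue": "10.8.0.0/16"},
        {"ParameterKey": "FirewallSubnetCIDR", "ParameterValue": "10.8.1.0/24"},
        {"ParameterKey": "NATGatewaySubnetCIDR", "ParameterValue": "10.8.2.0/24"},
        {"ParameterKey": "SageMakerStudioSubnetCIDR", "ParameterValue": "10.8.3.0/24"},
    ],
    "Capabilities": ["CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles
}

# boto3.client("cloudformation").create_stack(**stack_request)
print(len(stack_request["Parameters"]), "parameters overridden")
```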
Deploy the CloudFormation template
To start experimenting with Network Firewall and stateful rules, you first need to deploy the provided CloudFormation template to your AWS account.
- Clone the GitHub repository:
- Create an S3 bucket in the Region where you deploy the solution:
You can skip this step if you already have an S3 bucket.
- Deploy the CloudFormation stack:
The deployment procedure packages the CloudFormation template and copies it to the S3 bucket you provided. Then the CloudFormation template is deployed from the S3 bucket to your AWS account.
The stack deploys all the needed resources, such as the VPC, network devices, route tables, security groups, S3 buckets, IAM policies and roles, and VPC endpoints, and also creates a new Studio domain and user profile.
When the deployment is complete, you can see the full list of stack output values by running the following command in the terminal:
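The exact CLI command isn’t reproduced here; as a sketch, the same listing can be produced from a DescribeStacks-shaped response with the AWS SDK for Python. The sample response below uses placeholder values.

```python
def format_stack_outputs(describe_stacks_response):
    """Flatten the Outputs section of a DescribeStacks-shaped response into a dict."""
    outputs = describe_stacks_response["Stacks"][0].get("Outputs", [])
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}

# A response-shaped sample; the stack name and values are placeholders.
sample = {
    "Stacks": [{
        "StackName": "sagemaker-studio-firewall-demo",
        "Outputs": [
            {"OutputKey": "DomainId", "OutputValue": "d-example123"},
            {"OutputKey": "VPCId", "OutputValue": "vpc-0123456789abcdef0"},
        ],
    }]
}

print(format_stack_outputs(sample))
# In practice the response would come from:
# boto3.client("cloudformation").describe_stacks(StackName="<your-stack-name>")
```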
- Launch Studio via the SageMaker console.
Experiment with Network Firewall
Now you can learn how to control the internet inbound and outbound access with Network Firewall. In this section, we discuss the initial setup, accessing resources not on the allow list, adding domains to the allow list, configuring logging, and additional firewall rules.
The solution deploys a Network Firewall policy with a stateful rule group with an allow domain list. This policy is attached to the Network Firewall. All inbound and outbound internet traffic is blocked now, except for the
.kaggle.com domain, which is on the allow list.
Let’s try to access
https://kaggle.com by opening a new notebook in Studio and attempting to download the front page.
The following screenshot shows that the request succeeds because the domain is allowed by the firewall policy. Users can connect to this domain, and only this domain, from any Studio notebook.
Access resources not on the allowed domain list
In the Studio notebook, try to clone any public GitHub repository, such as the following:
This operation times out after 5 minutes because any internet traffic except to and from the .kaggle.com domain isn’t allowed and is dropped by Network Firewall.
Add a domain to the allowed domain list
To be able to run the git clone command, you must allow internet traffic to the .github.com domain.
- On the Amazon VPC console, choose Firewall policies.
- Choose the policy network-firewall-policy-<ProjectName>.
- In the Stateful rule groups section, select the group rule domain-allow-sagemaker-<ProjectName>.
You can see the domain
.kaggle.com on the allow list.
- Choose Add domain and enter .github.com.
- Choose Save.
You now have two names on the allow domain list.
The firewall policy is propagated in real time to Network Firewall, and your changes take effect immediately. Any inbound or outbound traffic from or to these domains is now allowed by the firewall, and all other traffic is dropped.
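The matching behavior of the allow domain list can be illustrated with a small sketch: a target that begins with a dot (such as .kaggle.com) matches the bare domain and any subdomain, while a target without a leading dot matches only that exact host. This is a simplified emulation for illustration, not the firewall’s actual engine.

```python
def domain_allowed(host, allow_list):
    """Simplified emulation of domain-list matching.

    A target like '.kaggle.com' matches 'kaggle.com' and any subdomain;
    a target without a leading dot matches only that exact host.
    """
    for target in allow_list:
        if target.startswith("."):
            if host == target[1:] or host.endswith(target):
                return True
        elif host == target:
            return True
    return False

allow_list = [".kaggle.com", ".github.com"]
print(domain_allowed("www.kaggle.com", allow_list))  # True
print(domain_allowed("github.com", allow_list))      # True
print(domain_allowed("pypi.org", allow_list))        # False
```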
To validate the new configuration, go to your Studio notebook and try to clone the same GitHub repository again:
The operation succeeds this time because Network Firewall allows access to the .github.com domain.
Network Firewall logging
In this section, you configure Network Firewall logging for your firewall’s stateful engine. Logging gives you detailed information about network traffic, including the time that the stateful engine received a packet, detailed information about the packet, and any stateful rule action taken against the packet. The logs are published to the log destination that you configured, where you can retrieve and view them.
- On the Amazon VPC console, choose Firewalls.
- Choose your firewall.
- Choose the Firewall details tab.
- In the Logging section, choose Edit.
- Configure your firewall logging by selecting what log types you want to capture and providing the log destination.
For this post, select Alert log type, set Log destination for alerts to CloudWatch Log group, and provide an existing or a new log group where the firewall logs are delivered.
- Choose Save.
To check your settings, go back to Studio and try to access
pypi.org to install a Python package:
This command fails with a
ReadTimeoutError because Network Firewall drops any traffic to any domain not on the allow list (which contains only the .kaggle.com and .github.com domains).
On the Amazon CloudWatch console, navigate to the log group and browse through the recent log streams.
The log event for the blocked request to the pypi.org domain shows the
blocked action. The log event also provides additional details such as timestamps, protocol, port and IP details, event type, Availability Zone, and the firewall name.
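Alert logs delivered to CloudWatch are JSON events. As a sketch, the action and server name can be pulled out of an event shaped like the firewall’s Suricata-style alert records; the record below is abridged and its field values are illustrative.

```python
import json

# A raw log line shaped like a Network Firewall alert event delivered to
# CloudWatch Logs (fields abridged; values illustrative).
raw = json.dumps({
    "firewall_name": "network-firewall-demo",
    "availability_zone": "us-east-1a",
    "event": {
        "event_type": "alert",
        "alert": {"action": "blocked"},
        "src_ip": "10.2.3.15",
        "dest_port": 443,
        "tls": {"sni": "pypi.org"},
    },
})

# Parse the line and extract the rule action and the TLS server name.
event = json.loads(raw)["event"]
summary = (event["alert"]["action"], event.get("tls", {}).get("sni"))
print(summary)  # ('blocked', 'pypi.org')
```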
You can continue experimenting with Network Firewall by adding the .pypi.org and .pythonhosted.org domains to the allowed domain list.
Then validate your access to them via your Studio notebook.
Additional firewall rules
You can create any other stateless or stateful firewall rules and implement traffic filtering based on a standard stateful 5-tuple rule for network traffic inspection (protocol, source IP, source port, destination IP, destination port). Network Firewall also supports industry standard stateful Suricata compatible IPS rule groups. You can implement protocol-based rules to detect and block any non-standard or promiscuous usage or activity. For more information about creating and managing Network Firewall rule groups, see Rule groups in AWS Network Firewall.
Additional security controls with Network Firewall
In the previous section, we looked at one feature of Network Firewall: filtering network traffic based on the domain name. In addition to stateless or stateful firewall rules, Network Firewall provides several other tools and features for security controls and monitoring.
Build secure ML environments
A robust security design normally includes multi-layer security controls for the system. For SageMaker environments and workloads, you can use the following AWS security services and concepts to secure, control, and monitor your environment:
- VPC and private subnets to perform secure API calls to other AWS services and restrict internet access for downloading packages.
- S3 bucket policies that restrict access to specific VPC endpoints.
- Encryption of ML model artifacts and other system artifacts that are either in transit or at rest. Requests to the SageMaker API and console are made over a Secure Sockets Layer (SSL) connection.
- Restricted IAM roles and policies for SageMaker runs and notebook access based on resource tags and project ID.
- Access to AWS public services, such as Amazon Elastic Container Registry (Amazon ECR), restricted to VPC endpoints only.
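The S3 bucket policy item above can be sketched as a policy document that denies all access unless the request arrives through a specific VPC endpoint; the bucket name and endpoint ID below are hypothetical placeholders.

```python
import json

# Sketch of an S3 bucket policy restricting access to one VPC endpoint.
# The bucket name and endpoint ID are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessOutsideVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-ml-bucket",
            "arn:aws:s3:::example-ml-bucket/*",
        ],
        "Condition": {
            # Deny any request that did not come through this VPC endpoint.
            "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}

print(json.dumps(bucket_policy, indent=2))
```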
For a reference deployment architecture and ready-to-use deployable constructs for your environment, see Amazon SageMaker with Guardrails on AWS.
In this post, we showed how you can secure, log, and monitor internet ingress and egress traffic in Studio notebooks for your sensitive ML workloads using managed Network Firewall. You can use the provided CloudFormation templates to automate SageMaker deployment as part of your Infrastructure as Code (IaC) strategy.
For more information about other possibilities to secure your SageMaker deployments and ML workloads, see Building secure machine learning environments with Amazon SageMaker.
About the Author
Yevgeniy Ilyin is a Solutions Architect at AWS. He has over 20 years of experience working at all levels of software development and solutions architecture and has used programming languages from COBOL and Assembler to .NET, Java, and Python. He develops and codes cloud native solutions with a focus on big data, analytics, and data engineering.
Predictive Maintenance is a Killer AI App
By John P. Desmond, AI Trends Editor
Predictive maintenance (PdM) has emerged as a killer AI app.
In the past five years, predictive maintenance has moved from a niche use case to a fast-growing, high return on investment (ROI) application that is delivering true value to users. These developments are an indication of the power of the Internet of Things (IoT) and AI together, a market considered in its infancy today.
These observations are from research conducted by IoT Analytics, consultants who supply market intelligence, which recently estimated that the $6.9 billion predictive maintenance market will reach $28.2 billion by 2026.
The company began its research coverage of the IoT-driven predictive maintenance market in 2016, at an industry maintenance conference in Dortmund, Germany. Not much was happening. “We were bitterly disappointed,” stated Knud Lasse Lueth, CEO at IoT Analytics, in an account in IoT Business News. “Not a single exhibitor was talking about predictive maintenance.”
Things have changed. IoT Analytics analyst Fernando Alberto Brügge stated, “Our research in 2021 shows that predictive maintenance has clearly evolved from the rather static condition-monitoring approach. It has become a viable IoT application that is delivering overwhelmingly positive ROI.”
Technical developments that have contributed to the market expansion include: a simplified process for connecting IoT assets, major advances in cloud services, and improvements in the accessibility of machine learning/data science frameworks, the analysts state.
Along with the technical developments, the predictive maintenance market has seen a steady increase in the number of software and service providers offering solutions. IoT Analytics identified about 100 companies in the space in 2016; today the company identifies 280 related solution providers worldwide. Many of them are startups that recently entered the field. Established providers, including GE, PTC, Cisco, ABB, and Siemens, have entered the market in the past five years, many through acquisitions.
The market still has room; the analysts predict 500 companies will be in the business in the next five years.
In 2016, the ROI from predictive maintenance was unclear. In 2021, a survey of about 100 senior IT executives from the industrial sector found that predictive maintenance projects have delivered a positive ROI in 83% of cases. Some 45% of those reported amortizing their investments in less than a year. “This data demonstrated how attractive the investment has become in recent years,” the analysts stated.
More IoT Sensors Means More Precision
Implemented projects that the analysts studied in 2016 relied on a limited number of data sources, typically a single sensor value, such as vibration or temperature. Projects in the 2021 report drew on 11 classes of data sources, such as data from existing sensors or from controllers. As more sources are tapped, the precision of the predictions increases, the analysts state.
Many projects today are using hybrid modeling approaches that rely on domain expertise, virtual sensors and augmented data. AspenTech and PARC are two suppliers identified in the report as embracing hybrid modeling approaches. AspenTech has worked with over 60 companies to develop and test hybrid models that combine physics with ML/data science knowledge, enhancing prediction accuracy.
The move to edge computing is expected to further benefit predictive modeling projects, by enabling algorithms to run at the point where data is collected, reducing response latency. The supplier STMicroelectronics recently introduced some smart sensor nodes that can gather data and do some analytic processing.
More predictive maintenance apps are being integrated with enterprise software systems, such as enterprise resource planning (ERP) or computerized maintenance management systems (CMMS). Litmus Automation offers an integration service to link to any industrial asset, such as a programmable logic controller, a distributed control system, or a supervisory control and data acquisition system.
Reduced Downtime Results in Savings
Gains come from preventing downtime. “Predictive maintenance is the result of monitoring operational equipment and taking action to prevent potential downtime or an unexpected or negative outcome,” stated Mike Leone, an analyst at IT strategy firm Enterprise Strategy Group, in an account from TechTarget.
Advances that have made predictive maintenance more practical today include sensor technology becoming more widespread and the ability to monitor industrial machines in real time, stated Felipe Parages, senior data scientist at Valkyrie, data science consultants. With more sensors, the volume of data has grown exponentially, and data analytics via cloud services has become available.
It used to be that an expert had to perform an analysis to determine if a machine was not operating in an optimal way. “Nowadays, with the amount of data you can leverage and the new techniques based on machine learning and AI, it is possible to find patterns in all that data, things that are very subtle and would have escaped notice by a human being,” stated Parages.
As a result, one person can now monitor hundreds of machines, and companies are accumulating historical data, which enables deeper trend analysis. Predictive maintenance “is a very powerful weapon,” he stated.
In an example project, Italy’s primary rail operator, Trenitalia, adopted predictive maintenance for its high-speed trains. The system is expected to save 8 to 10% of an annual maintenance budget of 1.3 billion euros, stated Paul Miller, an analyst with research firm Forrester, which recently issued a report on the project.
“They can eliminate unplanned failures which often provide direct savings in maintenance but just as importantly, by taking a train out of service before it breaks—that means better customer service and happier customers,” Miller stated. He recommended organizations start out with predictive maintenance by fielding a pilot project.
In an example of the types of cooperation predictive maintenance projects are expected to engender, the CEOs of several European auto and electronics firms recently announced plans to join forces to form the “Software République,” a new ecosystem for innovation in intelligent mobility. Atos, Dassault Systèmes, Groupe Renault, STMicroelectronics, and Thales announced their decision to pool their expertise to accelerate the market.
Luca de Meo, Chief Executive Officer of Groupe Renault, stated in a press release from STMicroelectronics, “In the new mobility value chain, on-board intelligence systems are the new driving force, where all research and investment are now concentrated. Faced with this technological challenge, we are choosing to play collectively and openly. There will be no center of gravity, the value of each will be multiplied by others. The combined expertise in cybersecurity, microelectronics, energy and data management will enable us to develop unique, cutting-edge solutions for low-carbon, shared, and responsible mobility, made in Europe.”
The Software République will be based in Guyancourt, a commune in north-central France, at the Renault Technocentre in a building called Odyssée, an eco-responsible 12,000-square-meter space. For example, its interior and exterior structure is 100 percent wood, and the building is covered with photovoltaic panels.
Post Office Looks to Gain an Edge With Edge Computing
By AI Trends Editor John P. Desmond
NVIDIA on May 6 detailed a partnership with the US Postal Service, underway for over a year, to speed up mail service using AI, with a goal of reducing processing tasks that currently take days to hours.
The project fields edge servers at 195 Postal Service sites across the nation, which review 20 terabytes of images a day from 1,000 mail processing machines, according to a post on the NVIDIA blog.
“The federal government has been for the last several years talking about the importance of artificial intelligence as a strategic imperative to our nation, and as an important funding priority. It’s been talked about in the White House, on Capitol Hill, in the Pentagon. It’s been funded by billions of dollars, and it’s full of proof of concepts and pilots,” stated Anthony Robbins, Vice President of Federal for NVIDIA, in an interview with Nextgov. “And this is one of the few enterprise-wide examples of an artificial intelligence deployment that I think can serve to inspire the whole of the federal government.”
The project started with Ryan Simpson, the USPS AI architect at the time, who had the idea of expanding an image analysis system a postal team was developing into something much bigger, according to the blog post. (Simpson worked for USPS for over 12 years and moved to NVIDIA as a senior data scientist eight months ago.) He believed that a system could analyze the billions of images each center generated and gain insights expressed in a few data points that could be shared quickly over the network.
In a three-week sprint, Simpson worked with half a dozen architects at NVIDIA and others to design the needed deep-learning models. The work was done within the Edge Computing Infrastructure Program (ECIP), a distributed edge AI system up and running on NVIDIA’s EGX platform at USPS. The EGX platform enables existing and modern, data-intensive applications to be accelerated and secured on the same infrastructure, from data center to edge.
“It used to take eight or 10 people several days to track down items, now it takes one or two people a couple of hours,” stated Todd Schimmel, Manager, Letter Mail Technology, USPS. He oversees USPS systems including ECIP, which uses NVIDIA-Certified edge servers from Hewlett-Packard Enterprise.
In another analysis, a computer vision task that would have required two weeks on a network of servers with 800 CPUs can now get done in 20 minutes on the four NVIDIA V100 Tensor Core GPUs in one of the HPE Apollo 6500 servers.
Contract Awarded in 2019 for System Using OCR
USPS had put out a request for proposals for a system using optical character recognition (OCR) to streamline its imaging workflow. “In the past, we would have bought new hardware, software—a whole infrastructure for OCR; or if we used a public cloud service, we’d have to get images to the cloud, which takes a lot of bandwidth and has significant costs when you’re talking about approximately a billion images,” stated Schimmel.
Today, the new OCR application relies on a deep learning model in a container on ECIP managed by Kubernetes, the open source container orchestration system, and served by NVIDIA Triton, the company’s open-source inference-serving software. Triton allows teams to deploy trained AI models from any framework, such as TensorFlow or PyTorch.
“The deployment was very streamlined,” Schimmel stated. “We awarded the contract in September 2019, started deploying systems in February 2020, and finished most of the hardware by August—the USPS was very happy with that,” he added.
Multiple models need to communicate for the USPS OCR application to work. The app that checks mail items alone requires coordinating the work of more than a half dozen deep-learning models, each checking for specific features. And operators expect to enhance the app with more models, enabling more features, in the future.
“The models we have deployed so far help manage the mail and the Postal Service—they help us maintain our mission,” Schimmel stated.
One model, for example, automatically checks to see if a package carries the right postage for its size, weight, and destination. Another one that will automatically decipher a damaged barcode could be online this summer.
“We’re at the very beginning of our journey with edge AI. Every day, people in our organization are thinking of new ways to apply machine learning to new facets of robotics, data processing and image handling,” he stated.
Accenture Federal Services, Dell Technologies, and Hewlett-Packard Enterprise contributed to the USPS OCR system incorporating AI, Robbins of NVIDIA stated. Specialized computing cabinets, or nodes, containing hardware and software specifically tuned for creating and training ML models were installed at two data centers.
“The AI work that has to happen across the federal government is a giant team sport,” Robbins stated to Nextgov. “And the Postal Service’s deployment of AI across their enterprise exhibited just that.”
The new solutions could help the Postal Service improve delivery standards, which have fallen over the past year. In mid-December, during the last holiday season, the agency delivered as little as 62% of first-class mail on time, the lowest level in years, according to an account in VentureBeat. The rate rebounded to 84% by the week of March 6 but remained below the agency’s target of about 96%.
The Postal Service has blamed the pandemic and record peak periods for much of the poor service performance.
Here Come the AI Regulations
By AI Trends Staff
New laws will soon shape how companies use AI.
The five largest federal financial regulators in the US recently released a request for information on how banks use AI, signaling that new guidance is coming for the finance business. Soon after that, the US Federal Trade Commission released a set of guidelines on “truth, fairness and equity” in AI, defining the illegal use of AI as any act that “causes more harm than good,” according to a recent account in Harvard Business Review.
And on April 21, the European Commission issued its own proposal for the regulation of AI (see AI Trends, April 22, 2021).
While we don’t know what these regulations will allow, “Three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don’t run afoul of any existing and future laws and regulations,” stated article author Andrew Burt, the managing partner of bnh.ai, a boutique law firm focused on AI and analytics.
First, conduct assessments of AI risks. As part of the effort, document how the risks have been minimized or resolved. Regulatory frameworks that refer to these “algorithmic impact assessments,” or “IA for AI,” are available.
For example, Virginia’s recently passed Consumer Data Protection Act requires assessments for certain types of high-risk algorithms.
The EU’s new proposal requires an eight-part technical document to be completed for high-risk AI systems that outlines “the foreseeable unintended outcomes and sources of risks” of each AI system, Burt states. The EU proposal is similar to the Algorithmic Accountability Act filed in the US Congress in 2019. The bill did not go anywhere but is expected to be reintroduced.
Second, accountability and independence. This suggestion is that the data scientists, lawyers and others evaluating the AI system have different incentives than those of the frontline data scientists. This could mean that the AI is tested and validated by different technical personnel than those who originally developed it, or organizations may choose to hire outside experts to assess the AI system.
“Ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI,” Burt states.
Third, continuous review. AI systems are “brittle and subject to high rates of failure,” with risks that grow and change over time, making it difficult to mitigate risk at a single point in time. “Lawmakers and regulators alike are sending the message that risk management is a continual process,” Burt stated.
Approaches in US, Europe and China Differ
The US, Europe, and China differ in their approaches to AI regulation, according to a recent account in The Verdict, based on analysis by GlobalData, the data analytics and consulting company based in London.
“Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of over regulation,” the account states. Meanwhile, “China continues to follow a government-first approach” and has been widely criticized for using AI technology to monitor citizens. The account noted the rollout by Tencent last year of an AI-based credit scoring system to determine the “trust value” of people, and the installation of surveillance cameras outside people’s homes to monitor the quarantine imposed after the outbreak of COVID-19.
“Whether the US’ tech industry-led efforts, China’s government-first approach, or Europe’s privacy and regulation-driven approach is the best way forward remains to be seen,” the account stated.
In the US, many companies are aware of the risk that new AI regulation could stifle innovation and their ability to grow in the digital economy, suggested a recent report from PwC, the multinational professional services firm.
“It’s in a company’s interests to tackle risks related to data, governance, outputs, reporting, machine learning and AI models ahead of regulation,” the PwC analysts state. They recommend that business leaders assemble people from across the organization to oversee accountability and governance of technology, with oversight from a diverse team that includes members with business, IT, and specialized AI skills.
Critics of European AI Act Cite Too Much Gray Area
While some argue that the proposed AI Act leaves too much gray area, the European Commission’s hope is that it will provide guidance for businesses wanting to pursue AI, as well as a degree of legal certainty.
“Trust… we think is vitally important to allow the development we want of artificial intelligence,” stated Thierry Breton, European Commissioner for the Internal Market, in an account in TechCrunch. AI applications “need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”
“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines—we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also to the continent where you will have the largest amount of industrial data created on the planet for the next ten years.”
“So come here—because artificial intelligence is about data—we’ll give you the guidelines. We will also have the tools to do it and the infrastructure,” Breton suggested.
Another reaction was that the Commission’s proposal has overly broad exemptions, such as for law enforcement to use remote biometric surveillance including facial recognition technology, and it does not go far enough to address the risk of discrimination.
“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice,” stated Griff Ferris, legal and policy officer for Fair Trials, the global criminal justice watchdog based in London. “The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.”
To accomplish this, he suggested, “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.”