

New Uses For AI




AI is being embedded into an increasing number of technologies that are commonly found inside most chips, and initial results show dramatic improvements in both power and performance.

Unlike high-profile AI implementations, such as self-driving cars or natural language processing, much of this work flies well under the radar for most people. It generally takes the path of least disruption, building on or improving technology that already exists. But in addition to having a significant impact, these developments provide design teams with a baseline for understanding what AI can and cannot do well, how it behaves over time and under different environmental and operating conditions, and how it interacts with other systems.

Until recently, the bulk of AI/machine learning has been confined to the data center or specialized mil/aero applications. It has since begun migrating to the edge, which itself is just beginning to take form, driven by a rising volume of data and the need to process that data closer to the source.

Memory improvements
Optimizing the movement of data is an obvious target across all of these markets. So much data is being generated that it is overwhelming traditional von Neumann approaches. Rather than scrap proven architectures, companies are looking at ways to reduce the flow of data back and forth between memories and processors. In-memory and near-memory compute are two such solutions that have gained attention, but adding AI into those approaches can have a significant incremental impact.

Samsung’s announcement that it is adding machine learning into the high-bandwidth memory (HBM) stack is a case in point.

“The most difficult part was how to make this as a drop-in replacement for existing DRAM without impacting any of the computing ecosystem,” said Nam Sung Kim, senior vice president of Samsung’s Memory Business Unit. “We still use existing machine learning algorithms, but this technology is about running them more efficiently. Sometimes it wasn’t feasible to run the machine learning model in the past because it required too much memory bandwidth. But with the computing unit inside the memory, now we can explore a lot more bandwidth.”

Kim said this approach allowed a 70% reduction in total system energy without any additional optimization. What makes this so valuable is that it adds a level of “intelligence” into how data is moved. That, in turn, can be paired with other technology improvements to achieve even greater power/performance efficiency. Kim estimates this can be an order of magnitude, but other technologies could push this even higher.
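As a rough illustration of how such gains compound, independent efficiency improvements multiply rather than add. The 70% figure comes from Kim; the two additional 40% reductions below are purely hypothetical:

```python
# Illustrative arithmetic only: stacking independent energy reductions.
# The 70% cut is the quoted Samsung figure; the 40% figures are invented.

def combined_energy_fraction(reductions):
    """Fraction of baseline energy left after stacking independent reductions."""
    remaining = 1.0
    for r in reductions:
        remaining *= (1.0 - r)
    return remaining

# The 70% reduction alone leaves 30% of baseline energy (a ~3.3x gain).
print(1.0 / combined_energy_fraction([0.70]))

# Stacked with two hypothetical 40% gains elsewhere in the system,
# the combined gain approaches an order of magnitude (~9.3x).
print(1.0 / combined_energy_fraction([0.70, 0.40, 0.40]))
```

This is why Kim's order-of-magnitude estimate is plausible: each technology only needs to shave a modest fraction, because the survivors multiply.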

Fig. 1: Processing in memory software stack. Source: Samsung

“As an industry, we have to look in a few different places,” said Steven Woo, fellow and distinguished inventor at Rambus. “One of them is architectures. We have to think about what are the right ways to construct chips so they’re really targeted more toward the actual algorithms. We’ve been seeing that happen for the last four or five years. People have implemented some really neat architectures — things like systolic arrays and more targeted implementations. There are some other ones, too. We certainly know that memory systems are very, very important in the overall energy consumption. One of the things that has to happen is we have to work on making memory accesses more energy-efficient. Utilizing the PHY more effectively is an important piece. SoCs themselves are spending 25% to 40% of their power budget just on PHYs, and then in the act of moving data back and forth between an SoC and a PHY, about two thirds of the power being used is really just in the movement of the data. And that’s just for HBM2. For GDDR, even more of the power is spent in moving the data because it’s a higher data rate. For an equivalent bandwidth, it’s taking more power just because it’s a much higher speed signal.”
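One reading of Woo's figures can be turned into a back-of-envelope estimate. The 25% to 40% PHY share and the two-thirds movement fraction come from the quote above; the 10 W SoC power budget is an invented example value:

```python
# Back-of-envelope sketch of the data-movement power breakdown Woo describes.
# The PHY share (25-40%) and two-thirds movement fraction are from the text;
# the 10 W SoC budget is a made-up example value.

def data_movement_power(soc_power_w, phy_share, movement_fraction=2.0 / 3.0):
    """Watts attributable purely to moving data, under this reading."""
    phy_power = soc_power_w * phy_share
    return phy_power * movement_fraction

# For a hypothetical 10 W SoC at the low and high ends of the quoted range:
print(data_movement_power(10.0, 0.25))  # ~1.67 W spent just moving data
print(data_movement_power(10.0, 0.40))  # ~2.67 W
```

Even at the low end, a watt or two per chip going purely to data movement explains why in-memory and near-memory approaches are attractive.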

Fig. 2: Breakdown of data movement costs. Source: Rambus


Network optimization
Another place where this kind of approach is being utilized is network configuration and optimization. Unlike in the past, when a computer or smart phone could tap into any of a number of standards-based protocols and networks, the edge is focused on application-specific optimizations and unique implementations. Every component in the data flow needs to be optimized, sometimes across different systems that are connected together.

This is causing headaches for users, who have to integrate edge systems, as well as for vendors looking to sell a horizontal technology that can work across many vertical markets. And it is opening the door for more intelligent devices and components that can configure themselves on a network or in a package — as well as for configurable devices that can adapt to changes in algorithms used for those markets.

“It’s going to start out as software-defined hardware, but it’s going to evolve into a self-healing, self-orchestrating device that can be AI-enabled,” said Kartik Srinivasan, director of data center marketing at Xilinx. “It can say, ‘I’m going to do this level of processing for specific traffic flows,’ and do a multitude of offloads depending upon what AI is needed.”

AI/ML is proving to be very good at understanding how to prioritize and partition data based upon patterns of behavior and probabilities for where it can be best utilized. Not all data needs to be acted upon immediately, and much of it can be trashed locally.
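That triage idea can be sketched as a small routing function. The thresholds and scoring rule below are invented for illustration; a real system would learn them from patterns of behavior:

```python
# Minimal sketch of edge data triage: score each record and route it to the
# cloud, to local processing, or to the trash. Thresholds are illustrative.

def triage(record, keep_threshold=0.8, local_threshold=0.3):
    score = record["anomaly_score"]  # e.g. from a small on-device model
    if score >= keep_threshold:
        return "send_to_cloud"       # rare, high-value data
    if score >= local_threshold:
        return "process_locally"     # act on it at the edge
    return "discard"                 # most data never leaves the device

readings = [{"anomaly_score": s} for s in (0.05, 0.45, 0.92)]
print([triage(r) for r in readings])
# ['discard', 'process_locally', 'send_to_cloud']
```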

“We’re starting to view machine learning as an optimization problem,” said Anoop Saha, senior manager for strategy and business development at Siemens EDA. “Machine learning historically has been used for pattern recognition, whether it’s supervised or unsupervised learning or reinforcement learning. The idea is that you recognize some pattern from the data that you have, and then use that to classify things to make predictions or do a cat-versus-dog identification. There are other use cases, though, such as a smart NIC card, where you need to find the network topology that maximizes your SDN (software-defined networking) network. These are not pure pattern-recognition problems, and they are very interesting for the broader industry. People are starting to use this for a variety of tasks.”

While the implementations are highly specific, general concepts are starting to come into focus across multiple markets. “It differs somewhat depending on the market segment that you’re in,” said Geoff Tate, CEO of Flex Logix. “We’re working at what we’re calling the enterprise edge for medical imaging and things like that. Our customers need high throughput, high accuracy, low cost, and low power. So you really have to have an architecture that’s better than GPUs, and we benchmarked ours at 3 to 10 times better. We do that with finer granularity, and rather than having a big matrix multiplier, we have our one-dimensional tensor processors. Those are modular, so we can combine them in different ways to do different convolution and matrix applications. That also requires a programmable interconnect, which we’ve developed. And the last thing we do is have our compute very close to memory, which minimizes latency and power. All of the computation takes place in SRAM, and then the DRAM is used for storing weights.”
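The general technique of building convolutions out of smaller one-dimensional operations can be sketched with the classic im2col lowering, in which a 1-D convolution becomes a single matrix multiply that modular tensor units can execute. This illustrates the concept only, not Flex Logix's actual hardware:

```python
import numpy as np

# Sketch: lower a 1-D convolution (cross-correlation form) to one matmul by
# gathering sliding windows into rows. Modular matrix units can then run it.

def conv1d_as_matmul(x, w):
    k = len(w)
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ w  # one matrix-vector product does the whole conv

x = np.arange(8, dtype=float)
w = np.array([1.0, 0.0, -1.0])
ref = np.convolve(x, w[::-1], mode="valid")  # same "valid" correlation
assert np.allclose(conv1d_as_matmul(x, w), ref)
print(conv1d_as_matmul(x, w))  # [-2. -2. -2. -2. -2. -2.]
```

The same lowering generalizes to 2-D convolutions, which is why programmable interconnect between small matrix engines is enough to cover many convolution and matrix workloads.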

AI on the edge
This modular and programmable kind of approach is often hidden in many of these designs, but the emphasis on flexibility in design and implementation is critical. More sensors, a flood of data, and a slowdown in the benefits of scaling have forced chipmakers to pivot to more complex architectures that can drive down latency and power while boosting performance.

This is particularly true on the edge, where some of the devices are based on batteries, and in on-premises and near-premises data centers where speed is the critical factor. Solutions tend to be highly customized, heterogeneous, and often involve multiple chips in a package. So instead of a hyperscale cloud, where everything is located in one or more giant data centers, there are layers of processing based upon how quickly data needs to be acted upon and how much data needs to be processed.

The result is a massively complex data partitioning problem, because now that data has to be intelligently parsed between different servers and even between different systems. “We definitely see that trend, especially with more edge nodes on the way,” said Sandeep Krishnegowda, senior director of marketing and applications for memory solutions at Infineon. “When there’s more data coming in, you have to partition what you’re trying to accelerate. You don’t want to just send raw bits of information all the way to the cloud. It needs to be meaningful data. At the same time, you want a real-time controller on the edge to actually make the inference decisions right there. All of this definitely has highlighted changes to architecture, making it more efficient at managing your traffic. But most importantly, a lot of this comes back to data and how you manage the data. And invariably a lot of that goes back to your memory and the subsystem of memory architectures.”
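The "meaningful data, not raw bits" point can be sketched as an edge node that collapses a window of raw samples into one compact event before anything goes upstream. The threshold and message shape are illustrative assumptions:

```python
# Sketch: summarize a window of raw edge samples into one compact event.
# Only events that matter are sent upstream; the rest never leave the device.

def summarize_window(samples, alert_threshold=0.9):
    peak = max(samples)
    if peak < alert_threshold:
        return None                  # nothing worth sending
    return {"peak": peak, "n": len(samples)}

window = [0.1] * 998 + [0.95, 0.2]   # 1,000 raw samples, one spike
event = summarize_window(window)
print(event)  # {'peak': 0.95, 'n': 1000}
# 1,000 raw values reduced to a 2-field event at the edge.
```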

In addition, this becomes a routing problem because everything is connected and data is flowing back and forth.

“If you’re doing a data center chip, you’re designing at the reticle limit,” said Frank Schirrmeister, senior group director for solution marketing at Cadence. “You have an accelerator in there, different thermal aspects, and 3D-IC issues. When you move down to the wearable, you’re still dealing with equally relevant thermal power levels, and in a car you have an AI component. So this is going in all directions, and it needs a holistic approach. You need to optimize the low-power/thermal/energy activities regardless of where you are at the edge, and people will need to adapt systems for their workloads. Then it comes down to how you put these things together.”

That adds yet another level of complexity. “Initially it was, ‘I need the highest density SRAM I can get so that I can fit as many activations and weights on chip as possible,’” said Ron Lowman, strategic marketing manager for IP at Synopsys. “Other companies were saying they needed it to be as low power as possible. We had those types of solutions before, but we saw a lot of new requests specifically around AI. And then they moved to the next step where they’d say, ‘I need some customizations beyond the highest density or lowest leakage,’ because they’re combining them with specialized processing components such as memory and compute-type technologies. So there are building blocks, like primitive math blocks, DSP processors, RISC processors, and then a special neural network engine. All of those components make up the processing solution, which includes scalar, vector, and matrix multiplication, and memory architectures that are connected to it. When we first did these processors, it was assumed that you would have some sort of external memory interface, most likely LPDDR or DDR, and so a lot of systems were built that way around those assumptions. But there are unique architectures out there with high-bandwidth memories, and that changes how loads and stores are taken from those external memory interfaces and the sizes of those. Then the customer adds their special sauce. That will continue to grow as more niches are found.”
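The scalar/vector/matrix split Lowman mentions shows up in even a toy neural-network layer: a matrix multiply for the weights, a vector add for the bias, and elementwise scalar operations for the activation. The shapes and values below are invented:

```python
import numpy as np

# Toy single layer showing the three compute classes a neural engine combines.

def layer(x, W, b):
    y = W @ x                   # matrix engine: weight multiplication
    y = y + b                   # vector engine: bias add
    return np.maximum(y, 0.0)   # scalar/elementwise engine: ReLU activation

x = np.array([1.0, -2.0])
W = np.array([[0.5, 1.0], [2.0, 0.0]])
b = np.array([0.5, -1.0])
print(layer(x, W, b))  # [0. 1.]
```

Each of these compute classes also has a different memory-access pattern, which is why the surrounding memory architecture (SRAM for activations, external DRAM or HBM for weights) matters as much as the math units.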

Those niches will increase the demand for more types of hardware, but they also will drive demand for continued expansion of these base-level technologies that can be form-fitted to a particular use case.

“Our FPGAs are littered with memory across the entire device, so you can localize memory directly to the accelerator, which can be a deep learning processing unit,” said Jayson Bethurem, product line manager at Xilinx. “And because the architecture is not fixed, it can be adapted to different characterizations and classification topologies, with CNNs and other things like that. That’s where most of the application growth is, and we see people wanting to classify something before they react to it.”

AI’s limits in end devices
AI itself is not a fixed technology. Different pieces of an AI solution are in motion as the technology adapts and optimizes, so processing results typically come in the form of distributions and probabilities of acceptability.

That makes it particularly difficult to define the precision and reliability of AI, because the metrics for each implementation and use case are different, and it’s one reason why the chip industry is treading carefully with this technology. For example, consider AI/ML in a car with assisted driving. The data inputs and decisions need to be made in real time, but the AI system needs to be able to weight the value of that data, which may be different from how another vehicle weights that data. Assuming the two vehicles don’t ever interact, that’s not a problem. But if they’re sharing information, the result can be very different.

“That’s somewhat of an open problem,” said Rob Aitken, fellow and director of technology for Arm’s Research and Development Group. “If you have a system with a given accuracy and another with a different accuracy, then cumulatively their accuracy depends on how independent they are from each other. But it also depends on what mechanism you use to combine the two. This seems to be reasonably well understood in things like image recognition, but it’s harder when you’re looking at an automotive application where you’ve got some radar data and some camera data. They’re effectively independent of one another, but their accuracies are dependent on external factors that you would have to know, in addition to everything else. So the radar may say, ‘This is a cat,’ but the camera says there’s nothing there. If it’s dark, then the radar is probably right. If it’s raining, maybe the radar is wrong, too. These external bits can come into play very, very quickly and start to overwhelm any rule of thumb.”
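Under the independence assumption Aitken describes, the textbook way to combine two detectors is to multiply their likelihood ratios (odds). The probabilities and the 50/50 prior below are invented example values, and this deliberately ignores the external factors he warns about:

```python
# Sketch of sensor fusion under an independence assumption: combine two
# detection probabilities by multiplying odds. Example values are invented.

def fuse(p1, p2, prior=0.5):
    """Posterior that the object is real, treating the detectors as independent."""
    odds = (prior / (1 - prior)) * (p1 / (1 - p1)) * (p2 / (1 - p2))
    return odds / (1 + odds)

# Radar fairly confident, camera mildly agrees:
print(round(fuse(0.7, 0.6), 3))   # 0.778 -- fused belief exceeds either alone
# Camera says "nothing there": the fused belief collapses, as in the cat example.
print(round(fuse(0.7, 0.1), 3))   # 0.206
```

Aitken's point is that rain or darkness breaks exactly the independence this formula assumes, which is why such rules of thumb get overwhelmed in practice.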

All of those interactions need to be understood in detail. “A lot of designs in automotive are highly configurable, and they’re configurable even on the fly based on the data they’re getting from sensors,” said Simon Rance, head of marketing at ClioSoft. “The data is going from those sensors back to processors. The sheer amount of data that’s running from the vehicle to the data center and back to the vehicle, all of that has to be traced. If something goes wrong, they’ve got to trace it and figure out what the root cause is. That’s where there’s a need to be filled.”

Another problem is knowing what is relevant data and what is not. “When you’re shifting AI to the edge, you shift something like a model, which means that you already know what is the relevant part of the information and what is not,” said Dirk Mayer, department head for distributed data processing and control in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “Even if you just do something like a low-pass filtering or high-pass filtering or averaging, you have something in mind that tells you, ‘Okay, this is relevant if you apply a low-pass filter, or you just need data up to 100 Hz or so.’”

The challenge is being able to leverage that across multiple implementations of AI. “Even if you look at something basic, like a milling machine, the process is the same but the machines may be totally different,” Mayer said. “The process materials are different, the materials being milled are different, the process speed is different, and so on. It’s quite hard to invent artificial intelligence that adapts itself from one machine to another. You always need a retraining stage and time to collect new data. This will be a very interesting research area to invent something like building blocks for AI, where the algorithm is widely accepted in the industry and you can move it from this machine to that machine and it’s pre-trained. So you add domain expertise, some basic process parameters, and you can parameterize your algorithm so that it learns faster.”
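A toy version of that pre-trained-building-block flow, with synthetic data standing in for the two machines: a model fitted on machine A becomes the starting point for machine B, so a few gradient steps on a small new dataset suffice.

```python
import numpy as np

# Sketch of "pre-train on machine A, fine-tune on machine B" with a linear
# model and plain gradient descent. All data here is synthetic.

def sgd(w, X, y, lr=0.01, steps=100):
    """Gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)

# "Pre-training" on plentiful data from machine A.
X_a = rng.normal(size=(200, 3))
w_true_a = np.array([1.0, -2.0, 0.5])
w_pre = sgd(np.zeros(3), X_a, X_a @ w_true_a, steps=500)

# Machine B is similar but not identical; fine-tune from w_pre on 20 samples.
X_b = rng.normal(size=(20, 3))
w_true_b = w_true_a + np.array([0.1, 0.0, -0.1])
w_ft = sgd(w_pre, X_b, X_b @ w_true_b, steps=300)

print(np.round(w_ft - w_true_b, 2))  # near zero: little new data, fast adaptation
```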

That is not where the chip industry is today, however. AI and its sub-groups, machine learning and deep learning, add unique capabilities to an industry that was built on volume and mass reproducibility. While AI has been proven to be effective for certain things, such as optimizing data traffic and partitioning based upon use patterns, it has a long way to go before it can make much bigger decisions with predictable outcomes.

The early results of power reduction and performance improvements are encouraging. But they also need to be set in the context of a much broader set of systems, the rapid evolution of multiple market segments, and different approaches such as heterogeneous integration, domain-specific designs, and the limitations of data sharing across the supply chain.



Predictive Maintenance is a Killer AI App 




Predictive maintenance resulting from IoT and AI working together has been identified as a killer app, with a track record of ROI. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

Predictive maintenance (PdM) has emerged as a killer AI app. 

In the past five years, predictive maintenance has moved from a niche use case to a fast-growing, high return on investment (ROI) application that is delivering true value to users. These developments are an indication of the power of the Internet of Things (IoT) and AI together, a market considered in its infancy today. 

These observations are from research conducted by IoT Analytics, consultants who supply market intelligence, which recently estimated that the $6.9 billion predictive maintenance market will reach $28.2 billion by 2026.  

The company began its research coverage of the IoT-driven predictive maintenance market in 2016, at an industry maintenance conference in Dortmund, Germany. Not much was happening. “We were bitterly disappointed,” stated Knud Lasse Lueth, CEO at IoT Analytics, in an account in IoT Business News. “Not a single exhibitor was talking about predictive maintenance.”  

Things have changed. IoT Analytics analyst Fernando Alberto Brügge stated, “Our research in 2021 shows that predictive maintenance has clearly evolved from the rather static condition-monitoring approach. It has become a viable IoT application that is delivering overwhelmingly positive ROI.” 

Technical developments that have contributed to the market expansion include: a simplified process for connecting IoT assets, major advances in cloud services, and improvements in the accessibility of machine learning/data science frameworks, the analysts state.  

Along with the technical developments, the predictive maintenance market has seen a steady increase in the number of software and service providers offering solutions. IoT Analytics identified about 100 companies in the space in 2016; today the company identifies 280 related solution providers worldwide. Many of them are startups that recently entered the field. Established providers, including GE, PTC, Cisco, ABB, and Siemens, have entered the market in the past five years, many through acquisitions.  

The market still has room; the analysts predict 500 companies will be in the business in the next five years.  

In 2016, the ROI from predictive maintenance was unclear. In 2021, a survey of about 100 senior IT executives from the industrial sector found that predictive maintenance projects have delivered a positive ROI in 83% of cases. Some 45% of those reported amortizing their investments in less than a year. “This data demonstrated how attractive the investment has become in recent years,” the analysts stated.   

More IoT Sensors Means More Precision 

Implemented projects that the analysts studied in 2016 relied on a limited number of data sources, typically one sensor value, such as vibration or temperature. Projects described in the 2021 report drew on 11 classes of data sources, such as data from existing sensors or data from the controllers. As more sources are tapped, the precision of the predictions increases, the analysts state.  
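The statistical intuition behind that claim is easy to demonstrate: averaging n independent, equally noisy readings cuts the noise by roughly the square root of n. The 11 sources echo the report's count; the noise level is invented:

```python
import numpy as np

# Sketch: fusing 11 independent noisy readings per sample reduces scatter
# by about sqrt(11). Noise level and sensor count are illustrative.

rng = np.random.default_rng(42)
true_value = 1.0
noise = 0.5
readings = true_value + noise * rng.normal(size=(10_000, 11))

single_err = readings[:, 0].std()        # scatter of one sensor
fused_err = readings.mean(axis=1).std()  # scatter after fusing all 11
print(round(float(single_err / fused_err), 1))  # ~3.3, i.e. about sqrt(11)
```

Real sensor classes are neither identical nor fully independent, so actual gains are smaller, but the direction is the same: more sources, sharper predictions.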

Many projects today are using hybrid modeling approaches that rely on domain expertise, virtual sensors and augmented data. AspenTech and PARC are two suppliers identified in the report as embracing hybrid modeling approaches. AspenTech has worked with over 60 companies to develop and test hybrid models that combine physics with ML/data science knowledge, enhancing prediction accuracy. 

The move to edge computing is expected to further benefit predictive modeling projects, by enabling algorithms to run at the point where data is collected, reducing response latency. The supplier STMicroelectronics recently introduced some smart sensor nodes that can gather data and do some analytic processing. 

More predictive maintenance apps are being integrated with enterprise software systems, such as enterprise resource planning (ERP) or computerized maintenance management systems (CMMS). Litmus Automation offers an integration service to link to any industrial asset, such as a programmable logic controller, a distributed control system, or a supervisory control and data acquisition system.   

Reduced Downtime Results in Savings 

Gains come from preventing downtime. “Predictive maintenance is the result of monitoring operational equipment and taking action to prevent potential downtime or an unexpected or negative outcome,” stated Mike Leone, an analyst at IT strategy firm Enterprise Strategy Group, in an account from TechTarget.  

Felipe Parages, Senior Data Scientist, Valkyrie

Advances that have made predictive maintenance more practical today include sensor technology becoming more widespread, and the ability to monitor industrial machines in real time, stated Felipe Parages, senior data scientist at Valkyrie, a data science consultancy. With more sensors, the volume of data has grown exponentially, and data analytics via cloud services has become available. 

It used to be that an expert had to perform an analysis to determine if a machine was not operating in an optimal way. “Nowadays, with the amount of data you can leverage and the new techniques based on machine learning and AI, it is possible to find patterns in all that data, things that are very subtle and would have escaped notice by a human being,” stated Parages. 
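The simplest version of that pattern-finding is a z-score check against a healthy baseline: flag live samples that deviate far from what the machine's history says is normal. The synthetic data and 5-sigma threshold below are illustrative:

```python
import numpy as np

# Sketch: flag vibration samples whose z-score against a healthy baseline
# is extreme. Data is synthetic; one fault is injected at index 42.

rng = np.random.default_rng(7)
baseline = rng.normal(0.0, 1.0, size=5000)   # healthy-machine history
mu, sigma = baseline.mean(), baseline.std()

live = rng.normal(0.0, 1.0, size=100)        # new readings...
live[42] += 10.0                             # ...with one injected fault

z = np.abs((live - mu) / sigma)
alarms = np.flatnonzero(z > 5.0)
print(alarms)
```

Production systems replace this single-channel threshold with learned models over many channels, which is exactly what lets them catch the subtle patterns a human would miss.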

As a result, one person can now monitor hundreds of machines, and companies are accumulating historical data, which enables deeper trend analysis. “Predictive maintenance is a very powerful weapon,” he stated.  

In an example project, Italy’s primary rail operator, Trenitalia, adopted predictive maintenance for its high-speed trains. The system is expected to save 8% to 10% of an annual maintenance budget of 1.3 billion euros, stated Paul Miller, an analyst with research firm Forrester, which recently issued a report on the project.  

“They can eliminate unplanned failures, which often provides direct savings in maintenance. But just as importantly, taking a train out of service before it breaks means better customer service and happier customers,” Miller stated. He recommended organizations start out with predictive maintenance by fielding a pilot project. 

In an example of the type of cooperation predictive maintenance projects are expected to engender, the CEOs of several European auto and electronics firms recently announced plans to join forces to form the “Software République,” a new ecosystem for innovation in intelligent mobility. Atos, Dassault Systèmes, Groupe Renault, STMicroelectronics, and Thales announced their decision to pool their expertise to accelerate the market.   

Luca de Meo, Chief Executive Officer, Groupe Renault

Luca de Meo, Chief Executive Officer of Groupe Renault, stated in a press release from STMicroelectronics, “In the new mobility value chain, on-board intelligence systems are the new driving force, where all research and investment are now concentrated. Faced with this technological challenge, we are choosing to play collectively and openly. There will be no center of gravity; the value of each will be multiplied by the others. The combined expertise in cybersecurity, microelectronics, energy and data management will enable us to develop unique, cutting-edge solutions for low-carbon, shared, and responsible mobility, made in Europe.”    

The Software République will be based at the Renault Technocentre in Guyancourt, a commune in north-central France, in a building called Odyssée, a 12,000-square-meter eco-responsible space. For example, its interior and exterior structure is 100 percent wood, and the building is covered with photovoltaic panels. 

Read the source articles in IoT Business News, TechTarget, and in a press release from STMicroelectronics.



Post Office Looks to Gain an Edge With Edge Computing 




By AI Trends Editor John P. Desmond  

NVIDIA on May 6 detailed a partnership with the US Postal Service, underway for over a year, to speed up mail service using AI, with a goal of reducing processing tasks that currently take days down to hours.   

The project fields edge servers at 195 Postal Service sites across the nation, which review 20 terabytes of images a day from 1,000 mail processing machines, according to a post on the NVIDIA blog.  

Anthony Robbins, Vice President of Federal, Nvidia

“The federal government has been for the last several years talking about the importance of artificial intelligence as a strategic imperative to our nation, and as an important funding priority. It’s been talked about in the White House, on Capitol Hill, in the Pentagon. It’s been funded by billions of dollars, and it’s full of proof of concepts and pilots,” stated Anthony Robbins, Vice President of Federal for NVIDIA, in an interview with Nextgov. “And this is one of the few enterprisewide examples of an artificial intelligence deployment that I think can serve to inspire the whole of the federal government.”  

The project started with Ryan Simpson, the USPS AI architect at the time, who had the idea to expand an image analysis system a postal team was developing into something much bigger, according to the blog post. (Simpson worked for USPS for over 12 years, and moved to NVIDIA as a senior data scientist eight months ago.) He believed that a system could analyze the billions of images each center generated, and gain insights expressed in a few data points that could be shared quickly over the network.  

In a three-week sprint, Simpson worked with half a dozen architects at NVIDIA and others to design the needed deep-learning models. The work was done within the Edge Computing Infrastructure Program (ECIP), a distributed edge AI system up and running on NVIDIA’s EGX platform at USPS. The EGX platform enables existing and modern data-intensive applications to run accelerated and secured on the same infrastructure, from data center to edge. 

“It used to take eight or 10 people several days to track down items, now it takes one or two people a couple of hours,” stated Todd Schimmel, Manager, Letter Mail Technology, USPS. He oversees USPS systems including ECIP, which uses NVIDIA-Certified edge servers from Hewlett-Packard Enterprise.  

In another analysis, a computer vision task that would have required two weeks on a network of servers with 800 CPUs can now get done in 20 minutes on the four NVIDIA V100 Tensor Core GPUs in one of the HPE Apollo 6500 servers.  
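That comparison implies a wall-clock speedup that is easy to check:

```python
# Back-of-envelope check of the quoted comparison: two weeks on a network of
# servers with 800 CPUs versus 20 minutes on four V100 GPUs.

cpu_minutes = 2 * 7 * 24 * 60      # two weeks of wall-clock time in minutes
gpu_minutes = 20
print(cpu_minutes // gpu_minutes)  # 1008x faster wall-clock
```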

Contract Awarded in 2019 for System Using OCR  

USPS had put out a request for proposals for a system using optical character recognition (OCR) to streamline its imaging workflow. “In the past, we would have bought new hardware, software—a whole infrastructure for OCR; or if we used a public cloud service, we’d have to get images to the cloud, which takes a lot of bandwidth and has significant costs when you’re talking about approximately a billion images,” stated Schimmel. 

AI algorithms were developed on these NVIDIA DGX servers at a US Postal Service Engineering facility. (Credit: Nvidia)

Today, the new OCR application will rely on a deep learning model in a container on ECIP managed by Kubernetes, the open source container orchestration system, and served by NVIDIA Triton, the company’s open-source inference-serving software. Triton allows teams to deploy trained AI models from any framework, such as TensorFlow or PyTorch. 

“The deployment was very streamlined,” Schimmel stated. “We awarded the contract in September 2019, started deploying systems in February 2020, and finished most of the hardware by August—the USPS was very happy with that,” he added. 

Multiple models need to work together for the USPS OCR application to function. The app that checks for mail items alone requires coordinating the work of more than a half dozen deep-learning models, each checking for specific features. And operators expect to enhance the app with more models enabling more features in the future. 

“The models we have deployed so far help manage the mail and the Postal Service—they help us maintain our mission,” Schimmel stated.  

One model, for example, automatically checks to see if a package carries the right postage for its size, weight, and destination. Another one that will automatically decipher a damaged barcode could be online this summer.  
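The postage check can be sketched as plain lookup logic. The rate table below is entirely made up, and the deployed system infers size, weight, and destination from package images with a learned model rather than hand-written rules:

```python
# Sketch of the postage-verification idea: look up the required rate for a
# package's size class and zone, then compare with the postage applied.
# The rate table is invented for illustration.

RATE_TABLE = {  # (size_class, zone) -> (max_weight_oz, required_postage_usd)
    ("letter", 1): (3.5, 0.68),
    ("flat",   1): (13.0, 1.39),
    ("parcel", 1): (16.0, 4.75),
}

def postage_ok(size_class, zone, weight_oz, paid_usd):
    max_wt, required = RATE_TABLE[(size_class, zone)]
    if weight_oz > max_wt:
        return False                 # wrong size class for this weight
    return paid_usd + 1e-9 >= required

print(postage_ok("letter", 1, 1.0, 0.68))   # True: correct letter rate
print(postage_ok("parcel", 1, 12.0, 1.39))  # False: letter rate on a parcel
```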

“We’re at the very beginning of our journey with edge AI. Every day, people in our organization are thinking of new ways to apply machine learning to new facets of robotics, data processing and image handling,” he stated. 

Accenture Federal Services, Dell Technologies, and Hewlett-Packard Enterprise contributed to the USPS OCR system incorporating AI, Robbins of NVIDIA stated. Specialized computing cabinets—or nodes—that contain hardware and software specifically tuned for creating and training ML models were installed at two data centers.   

“The AI work that has to happen across the federal government is a giant team sport,” Robbins stated to Nextgov. “And the Postal Service’s deployment of AI across their enterprise exhibited just that.” 

The new solutions could help the Postal Service improve delivery standards, which have fallen over the past year. In mid-December, during the last holiday season, the agency delivered as little as 62% of first-class mail on time—the lowest level in years, according to an account in VentureBeat. The rate rebounded to 84% by the week of March 6 but remained below the agency’s target of about 96%. 

The Postal Service has blamed the pandemic and record peak periods for much of the poor service performance. 

Read the source articles and information on the Nvidia blog, in Nextgov and in VentureBeat.



Here Come the AI Regulations  




New proposed laws to govern AI are being entertained in the US and Europe, with China following a government-first approach. (Credit: Getty Images)  

By AI Trends Staff 

New laws will soon shape how companies use AI.   

The five largest federal financial regulators in the US recently released a request for information on how banks use AI, signaling that new guidance is coming for the finance business. Soon after that, the US Federal Trade Commission released a set of guidelines on “truth, fairness and equity” in AI, defining the illegal use of AI as any act that “causes more harm than good,” according to a recent account in Harvard Business Review.  

And on April 21, the European Commission issued its own proposal for the regulation of AI (see AI Trends, April 22, 2021).

Andrew Burt, Managing Partner

While we don’t know what these regulations will allow, “Three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don’t run afoul of any existing and future laws and regulations,” stated article author Andrew Burt, the managing partner of a boutique law firm focused on AI and analytics.

First, conduct assessments of AI risks, and as part of the effort, document how the risks have been minimized or resolved. Regulatory frameworks refer to these as “algorithmic impact assessments,” or “IA for AI.”

For example, Virginia’s recently passed Consumer Data Protection Act requires assessments for certain types of high-risk algorithms.

The EU’s new proposal requires an eight-part technical document to be completed for high-risk AI systems that outlines “the foreseeable unintended outcomes and sources of risks” of each AI system, Burt states. The EU proposal is similar to the Algorithmic Accountability Act filed in the US Congress in 2019. The bill did not go anywhere but is expected to be reintroduced.  
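In practice, an impact assessment is a structured document tying each identified risk to its mitigation. Below is a minimal sketch of what such a record might look like in code; the field names and the example system are hypothetical illustrations, not drawn from the EU proposal or any specific framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImpactAssessment:
    """Minimal record of an algorithmic impact assessment ("IA for AI").
    Fields are illustrative, not mandated by any particular regulation."""
    system_name: str
    intended_use: str
    identified_risks: List[str]
    mitigations: List[str]       # one entry per addressed risk, in order
    reviewed_by: str             # ideally independent of the dev team
    review_date: str

    def unmitigated(self) -> List[str]:
        # Risks documented without a corresponding mitigation entry
        return self.identified_risks[len(self.mitigations):]

# Hypothetical example system
ia = ImpactAssessment(
    system_name="loan-scoring-v2",
    intended_use="consumer credit pre-screening",
    identified_risks=["disparate impact by zip code", "stale training data"],
    mitigations=["fairness audit on geographic proxies"],
    reviewed_by="model-risk-office",
    review_date="2021-05-01",
)
print(ia.unmitigated())  # → ['stale training data']
```

Keeping the record machine-readable makes it straightforward to flag systems whose documented risks outnumber their documented mitigations before an audit does.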

Second, ensure accountability and independence. The suggestion is that the data scientists, lawyers, and others evaluating the AI system have incentives different from those of the frontline data scientists who built it. This could mean that the AI is tested and validated by technical personnel other than those who originally developed it, or organizations may choose to hire outside experts to assess the AI system.

“Ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI,” Burt states.  

Third, review continuously. AI systems are “brittle and subject to high rates of failure,” with risks that grow and change over time, making it difficult to mitigate risk at a single point in time. “Lawmakers and regulators alike are sending the message that risk management is a continual process,” Burt stated.
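One common form of continuous review is monitoring whether live inputs have drifted away from the data the model was trained on, and flagging the system for human re-validation when they have. The sketch below uses a crude mean-shift metric for illustration; production systems typically use statistics such as the population stability index or Kolmogorov-Smirnov tests, and all names and thresholds here are assumptions.

```python
import statistics

def drift_score(baseline, current):
    """Absolute shift in the mean of `current`, measured in units of the
    baseline's standard deviation (a crude z-style drift metric)."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    if sigma == 0:
        return 0.0 if statistics.mean(current) == mu else float("inf")
    return abs(statistics.mean(current) - mu) / sigma

def needs_review(baseline, current, threshold=2.0):
    # Flag the model for human re-validation when inputs drift too far;
    # the threshold is an illustrative choice, not a regulatory value.
    return drift_score(baseline, current) > threshold

# Hypothetical model scores at training time vs. in production
train_scores = [0.20, 0.25, 0.22, 0.24, 0.21, 0.23]
live_scores = [0.40, 0.45, 0.42, 0.44]
print(needs_review(train_scores, live_scores))  # → True
```

Running a check like this on a schedule, and logging the result alongside the impact assessment, is one way to show regulators that risk management is an ongoing process rather than a one-time sign-off.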

Approaches in US, Europe and China Differ  

The US, Europe, and China differ in their approaches to AI regulation, according to a recent account in The Verdict, based on analysis by GlobalData, the data analytics and consulting company based in London.

“Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of overregulation,” the account states. Meanwhile, “China continues to follow a government-first approach” and has been widely criticized for the use of AI technology to monitor citizens. The account noted examples in the rollout by Tencent last year of an AI-based credit scoring system to determine the “trust value” of people, and the installation of surveillance cameras outside people’s homes to monitor the quarantine imposed after the outbreak of COVID-19.

“Whether the US’ tech industry-led efforts, China’s government-first approach, or Europe’s privacy and regulation-driven approach is the best way forward remains to be seen,” the account stated.

In the US, many companies are aware of the risk that new AI regulation could stifle innovation and their ability to grow in the digital economy, suggested a recent report from PwC, the multinational professional services firm.

“It’s in a company’s interests to tackle risks related to data, governance, outputs, reporting, machine learning and AI models, ahead of regulation,” the PwC analysts state. They recommended that business leaders assemble people from across the organization to oversee accountability and governance of technology, with oversight from a diverse team that includes members with business, IT, and specialized AI skills.

Critics of European AI Act Cite Too Much Gray Area 

While some argue that the proposed AI Act leaves too much gray area, the European Commission hopes the Act will provide guidance, and a degree of legal certainty, to businesses wanting to pursue AI.

Thierry Breton, European Commissioner for the Internal Market

“Trust… we think is vitally important to allow the development we want of artificial intelligence,” stated Thierry Breton, European Commissioner for the Internal Market, in an account in TechCrunch. AI applications “need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.” 

“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines—we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also to the continent where you will have the largest amount of industrial data created on the planet for the next ten years.” 

“So come here—because artificial intelligence is about data—we’ll give you the guidelines. We will also have the tools to do it and the infrastructure,” Breton suggested. 

Reactions to the Commission’s proposal included plenty of criticism of overly broad exemptions, such as allowing law enforcement to use remote biometric surveillance including facial recognition technology, as well as concerns that the regulation’s measures to address the risk of AI systems discriminating do not go nearly far enough.

“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice,” stated Griff Ferris, legal and policy officer for Fair Trials, the global criminal justice watchdog based in London. “The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.”  

To accomplish this, he suggested, “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.”

Read the source articles and information in Harvard Business Review, in The Verdict and in TechCrunch. 
