Making Use Of AI Ethics Tuning Knobs In AI Autonomous Cars 

Ethical tuning knobs would be a handy addition to self-driving car controls, the author suggests, if for example the operator was late for work and needed to exceed the speed limit. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

There is increasing awareness of the importance of AI Ethics, which consists of being mindful of the ethical ramifications of AI systems.

AI developers are being asked to carefully design and build their AI mechanizations by ensuring that ethical considerations are at the forefront of the AI systems development process. When fielding AI, those responsible for its operational use also need to consider the crucial ethical facets of in-production AI systems. Meanwhile, the public and those using or reliant upon AI systems are starting to clamor for heightened attention to the ethical and unethical practices and capacities of AI.

Consider a simple example. Suppose an AI application is developed to assess car loan applicants. Using Machine Learning (ML) and Deep Learning (DL), the AI system is trained on a trove of data and arrives at some means of choosing among those that it deems are loan worthy and those that are not. 

The underlying Artificial Neural Network (ANN) is so computationally complex that there are no apparent means to interpret how it arrives at the decisions being rendered. Also, there is no built-in explainability capability and thus the AI is unable to articulate why it is making the choices that it is undertaking (note: there is a movement toward including XAI, explainable AI components to try and overcome this inscrutability hurdle).   

Soon after the AI-based loan assessment application was fielded, protests arose from some who asserted that they were turned down for their car loans due to an improper inclusion of race or gender as a key factor in rendering the negative decision.

At first, the maker of the AI application insists that they did not utilize such factors and professes complete innocence in the matter. Turns out though that a third-party audit of the AI application reveals that the ML/DL is indeed using race and gender as core characteristics in the car loan assessment process. Deep within the mathematically arcane elements of the neural network, data related to race and gender were intricately woven into the calculations, having been dug out of the initial training dataset provided when the ANN was crafted. 

That is an example of how biases can be hidden within an AI system. And it also showcases that such biases can go otherwise undetected, including that the developers of the AI did not realize that the biases existed and were seemingly confident that they had not done anything to warrant such biases being included. 
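
To make the audit idea concrete, here is a minimal, purely illustrative sketch of one way a third party might probe a trained model for hidden reliance on a protected attribute. The synthetic data, column layout, and model choice are all hypothetical; real audits are far more involved.

```python
# Illustrative only: probe a trained loan model for hidden reliance on a
# protected attribute by shuffling that column and counting how many
# decisions flip. All data and column names here are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
protected = rng.integers(0, 2, n)  # stand-in for a sensitive attribute

# Historical labels that (for this demo) secretly leaked the attribute.
approved = ((income > 45_000) & (debt_ratio < 0.5) & (protected == 1)).astype(int)

X = np.column_stack([income, debt_ratio, protected])
model = RandomForestClassifier(random_state=0).fit(X, approved)

# Permutation audit: break the link between the protected column and the
# rest of the row, then measure how often the model's decision changes.
X_perm = X.copy()
X_perm[:, 2] = rng.permutation(X_perm[:, 2])
flip_rate = (model.predict(X) != model.predict(X_perm)).mean()
print(f"Decisions that flip when the protected column is shuffled: {flip_rate:.1%}")
```

A large flip rate suggests the attribute is a core factor in the decisions, even if the developers never intended it to be.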

People affected by the AI application might not realize they are being subjected to such biases. In this example, those being adversely impacted perchance noticed and voiced their concerns, but we are apt to witness a lot of AI for which no one will realize they are being subjected to biases, and therefore no one will be able to ring the bell of dismay.

Various AI Ethics principles are being proffered by a wide range of groups and associations, hoping that those crafting AI will take seriously the need to consider embracing AI ethical considerations throughout the life cycle of designing, building, testing, and fielding AI.   

AI Ethics typically consists of these key principles: 

1) Inclusive growth, sustainable development, and well-being

2) Human-centered values and fairness

3) Transparency and explainability

4) Robustness, security, and safety

5) Accountability

We certainly expect humans to exhibit ethical behavior, and thus it seems fitting that we would expect ethical behavior from AI too.   

Since the aspirational goal of AI is to provide machines that are the equivalent of human intelligence, being able to presumably embody the same range of cognitive capabilities that humans do, this perhaps suggests that we will only be able to achieve the vaunted goal of AI by including some form of ethics-related component or capacity. 

What this means is that if humans encapsulate ethics, which they seem to do, and if AI is trying to achieve what humans are and do, then the AI ought to have an infused ethics capability, or else it would be something less than the desired goal of achieving human intelligence.

You could claim that anyone crafting AI that does not include an ethics facility is undercutting what should be a crucial and integral aspect of any AI system worth its salt. 

Of course, trying to achieve the goals of AI is one matter, meanwhile, since we are going to be mired in a world with AI, for our safety and well-being as humans we would rightfully be arguing that AI had better darned abide by ethical behavior, however that might be so achieved.   

Now that we’ve covered that aspect, let’s take a moment to ponder the nature of ethics and ethical behavior.  

Considering Whether Humans Always Behave Ethically   

Do humans always behave ethically? I think we can all readily agree that humans do not necessarily always behave in a strictly ethical manner.   

Is ethical behavior by humans able to be characterized solely by whether someone is in an ethically binary state of being, namely either purely ethical versus wholly unethical? I would dare say that we cannot always pin down human behavior into two binary-based and mutually exclusive buckets of being ethical or being unethical. The real world is often much grayer than that, and we at times are more likely to assess that someone is doing something ethically questionable, but it is not purely unethical, nor fully ethical.

In a sense, you could assert that human behavior ranges on a spectrum of ethics, at times being fully ethical and ranging toward the bottom of the scale as being wholly and inarguably unethical. In-between there is a lot of room for how someone ethically behaves. 

If you agree that the world is not a binary ethical choice of behaviors that fit only into truly ethical versus solely unethical, you would therefore also presumably be amenable to the notion that there is a potential scale upon which we might be able to rate ethical behavior. 

This scale might be from the scores of 1 to 10, or maybe 1 to 100, or whatever numbering we might wish to try and assign, maybe even including negative numbers too. 

Let’s assume for the moment that we will use the positive numbers of a 1 to 10 scale for increasingly being ethical (the topmost is 10), and the scores of -1 to -10 for being unethical (the -10 is the least ethical or in other words most unethical potential rating), and zero will be the midpoint of the scale. 

Please do not get hung up on the scale numbering, which can be anything else that you might like. We could even use letters of the alphabet or any kind of sliding scale. The point being made is that there is a scale, and we could devise some means to establish a suitable scale for use in these matters.   

The twist is about to come, so hold onto your hat.   

We could observe a human and rate their ethical behavior on particular aspects of what they do. Maybe at work, a person gets an 8 for being ethically observant, while perhaps at home they are a more devious person, and they get a -5 score. 

Okay, so we can rate human behavior. Could we drive or guide human behavior by the use of the scale? 

Suppose we tell someone that at work they are being observed and their target goal is to hit an ethics score of 9 for their first year with the company. Presumably, they will undertake their work activities in such a way that it helps them to achieve that score.   

In that sense, yes, we can potentially guide or prod human behavior by providing targets related to ethical expectations. I told you a twist was going to arise, and now here it is. For AI, we could use an ethical rating or score to try and assess how ethically proficient the AI is.   

In that manner, we might be more comfortable using that particular AI if we knew that it had a reputable ethical score. And we could also presumably seek to guide or drive the AI toward an ethical score too, similar to how this can be done with humans, and perhaps indicate that the AI should be striving towards some upper bound on the ethics scale. 

Some pundits immediately recoil at this notion. They argue that AI should always be a +10 (using the scale that I’ve laid out herein). Anything less than a top ten is an abomination and the AI ought to not exist. Well, this takes us back into the earlier discussion about whether ethical behavior is in a binary state.   

Are we going to hold AI to a “higher bar” than humans by insisting that AI always be “perfectly” ethical and nothing less so?   

This is somewhat of a quandary due to the point that AI overall is presumably aiming to be the equivalent of human intelligence, and yet we do not hold humans to that same standard. 

Some fervently believe that AI must be held to a higher standard than humans, and that we must not accept or allow any AI that cannot meet that standard.

Others indicate that this seems to fly in the face of what is known about human behavior, and raises the question of whether AI can be attained at all if it must do something that humans cannot.

Furthermore, they might argue that forcing AI to do something that humans do not undertake is now veering away from the assumed goal of arriving at the equivalent of human intelligence, which might bump us away from being able to do so as a result of this insistence about ethics.   

Round and round these debates continue to go. 

Those in the must-be-topnotch-ethical-AI camp are often quick to point out that by allowing AI to be anything less than a top ten, you are opening Pandora’s box. For example, it could be that the AI dips down into the negative numbers and sits at a -4, or, worse, degrades to become miserably and fully unethical at a dismal -10.

Anyway, this is a debate that is going to continue and not be readily resolved, so let’s move on. 

If you are still of the notion that ethics exists on a scale and that AI might also be measured by such a scale, and if you also are willing to accept that behavior can be driven or guided by offering where to reside on the scale, the time is ripe to bring up tuning knobs. Ethics tuning knobs. 

Here’s how that works. You come in contact with an AI system and are interacting with it. The AI presents you with an ethics tuning knob, showcasing a scale akin to our ethics scale earlier proposed. Suppose the knob is currently at a 6, but you want the AI to be acting more aligned with an 8, so you turn the knob upward to the 8. At that juncture, the AI adjusts its behavior so that ethically it is exhibiting an 8-score level of ethical compliance rather than the earlier setting of a 6. 
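
Concretely, here is a bare-bones sketch of what such a knob might look like as a software interface. Everything here is hypothetical; this is a thought experiment, not the API of any real AI system.

```python
# Hypothetical sketch of an ethics tuning knob on the -10..+10 scale
# discussed above; not an interface offered by any actual AI system.
class EthicsTuningKnob:
    MIN, MAX = -10, 10  # -10 = wholly unethical, +10 = fully ethical

    def __init__(self, setting: int = 10):
        self.setting = self._clamp(setting)

    @classmethod
    def _clamp(cls, value: int) -> int:
        return max(cls.MIN, min(cls.MAX, value))

    def turn_to(self, value: int) -> int:
        """Set the knob, clamped to the scale, and return the new setting."""
        self.setting = self._clamp(value)
        return self.setting

knob = EthicsTuningKnob(setting=6)  # the AI is currently behaving at a 6
knob.turn_to(8)                     # the user asks for stricter compliance
print(knob.setting)                 # -> 8
```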

What do you think of that? 

Some would bellow out balderdash, hogwash, and just plain unadulterated nonsense. A preposterous idea, or is it genius? You’ll find experts on both sides of that coin. Perhaps it might be helpful to place the ethics tuning knob within a contextual exemplar to highlight how it might come into play.

Here’s a handy contextual indication for you: Will AI-based true self-driving cars potentially contain an ethics tuning knob for use by their riders or passengers?

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).   

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Ethics Tuning Knobs 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.   

This seems rather straightforward. You might be wondering where any semblance of ethical behavior enters the picture. Here’s how. Some believe that a self-driving car should always strictly obey the speed limit.

Imagine that you have just gotten into a self-driving car in the morning and it turns out that you are possibly going to be late getting to work. Your boss is a stickler and has told you that coming in late is a surefire way to get fired.   

You tell the AI via its Natural Language Processing (NLP) that the destination is your work address. 

And, you ask the AI to hit the gas, push the pedal to the metal, screech those tires, and get you to work on time.

But it is clear-cut that if the AI obeys the speed limit, there is absolutely no chance of arriving at work on time, and since the AI is only and always going to go at or below the speed limit, your goose is cooked.

Better luck at your next job.   

Whoa, suppose the AI driving system had an ethics tuning knob. 

Abiding strictly by the speed limit occurs when the knob is cranked up to the top numbers, say 9 or 10.

You turn the knob down to a 5 and tell the AI that you need to rush to work, even if it means going over the speed limit. At a setting of 5, the AI driving system will mildly exceed the speed limit, though not in places like school zones, and only when the traffic situation seems to allow for safely going faster than the speed limit by a smidgen.

The AI self-driving car gets you to work on-time!   

Later that night, when heading home, you are not in as much of a rush, so you put the knob back to the 9 or 10 that it earlier was set at. 

Also, you have a child-lock on the knob, such that when your kids use the self-driving car, which they can do on their own since there isn’t a human driver needed, the knob is always set at the topmost of the scale and the children cannot alter it.   
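
Pulling the scenario together, here is a toy sketch of how a knob setting might translate into a speed policy, including the child lock. Every threshold and percentage is invented purely for illustration; a real AI driving system would involve vastly more than this.

```python
# Toy illustration of the scenario above; all thresholds are hypothetical.
def max_speed(knob: int, speed_limit: float, school_zone: bool,
              traffic_allows: bool, child_lock: bool = False) -> float:
    """Return the top speed (mph) the AI driving system will adopt."""
    if child_lock:
        knob = 10  # kids riding alone: knob pinned to the top of the scale
    if knob >= 9 or school_zone or not traffic_allows:
        return speed_limit           # strict compliance
    if knob >= 5:
        return speed_limit * 1.05    # a "smidgen" over, e.g., 5 percent
    return speed_limit * 1.10        # lower settings tolerate more overage

print(max_speed(knob=5, speed_limit=65, school_zone=False, traffic_allows=True))
# -> 68.25: mildly exceeding the limit to get you to work on time
print(max_speed(knob=5, speed_limit=25, school_zone=True, traffic_allows=True))
# -> 25: school zones stay strict regardless of the knob setting
```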

How does that seem to you? 

Some self-driving car pundits find the concept of such a tuning knob to be repugnant. 

They point out that everyone will “cheat” and put the knob on the lower scores that will allow the AI to do the same kind of shoddy and dangerous driving that humans do today. Whatever we might have otherwise gained by having self-driving cars, such as the hoped-for reduction in car crashes, along with the reduction in associated injuries and fatalities, will be lost due to the tuning knob capability.   

Others, though, point out that it is ridiculous to think that people will put up with self-driving cars that are restricted drivers, never bending or breaking the law.

You’ll end up with people opting to rarely use self-driving cars, instead driving their human-driven cars, because they know that they can drive more fluidly and won’t be stuck inside a self-driving car that drives like some scaredy-cat.

As you might imagine, the ethical ramifications of an ethics tuning knob are immense. 

In this use case, there is a kind of obviousness about the impacts of what an ethics tuning knob foretells.   

Other kinds of AI systems will have their semblance of what an ethics tuning knob might portend, and though it might not be as readily apparent as the case of self-driving cars, there is potentially as much at stake in some of those other AI systems too (which, like a self-driving car, might entail life-or-death repercussions).   


Conclusion   

If you really want to get someone going about the ethics tuning knob topic, bring up the allied matter of the Trolley Problem.   

The Trolley Problem is a famous thought experiment involving having to make choices about saving lives and which path you might choose. This has been repeatedly brought up in the context of self-driving cars and garnered acrimonious attention along with rather diametrically opposing views on whether it is relevant or not. 

In any case, the big overarching questions are whether we will expect AI to have an ethics tuning knob, and if so, what it will do and how it will be used.

Those that insist there is no cause to have any such device are apt to equally insist that we must have AI that is only and always practicing the utmost of ethical behavior. 

Is that a Utopian perspective or can it be achieved in the real world as we know it?   

Only my crystal ball can say for sure.  

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website 

Source: https://www.aitrends.com/ai-insider/making-use-of-ai-ethics-tuning-knobs-in-ai-autonomous-cars/

Facial recognition tech: risks, regulations and future startup opportunities in the EU

Facial recognition differs from conventional camera surveillance: it is not a mere passive recording; rather, it entails identifying an individual by comparing newly captured images with those saved in a database.
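
To illustrate the distinction, here is a minimal sketch of the identification step: compare a newly captured face embedding against a database of enrolled embeddings and report the closest match above a threshold. The random vectors, names, and threshold are hypothetical stand-ins for what a real face-embedding system would produce.

```python
# Illustrative sketch: identification compares a new capture against a
# database, unlike passive recording. Embeddings/threshold are hypothetical.
import numpy as np

def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return the best-matching enrolled identity, or None if below threshold."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    name, score = max(((n, cosine(probe, emb)) for n, emb in database.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

rng = np.random.default_rng(0)
enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
new_capture = enrolled["alice"] + rng.normal(scale=0.1, size=128)  # noisy image
print(identify(new_capture, enrolled))  # -> "alice"
```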

The status in Europe

Although facial recognition is not yet specifically regulated in Europe, it is covered by the General Data Protection Regulation (GDPR) as a means of collecting and processing personal biometric data, including facial data and fingerprints. Therefore, facial recognition is only possible under the criteria of the GDPR.

Biometric data provides a high level of accuracy when identifying an individual, due to the uniqueness of the identifiers (facial image or fingerprint), and has great potential to improve business security.

The processing of biometric data, which is considered sensitive data, is in principle prohibited, with some exceptions: for reasons of substantial public interest, to protect the vital interests of the data subject or another person, or if the data subject has given explicit consent, to name a few.

Moreover, other factors such as proportionality or power imbalance are considered when determining whether an exception is valid. For instance, facial recognition can be considered disproportionate for tracking attendance in a school, since less intrusive options are available. Also, even when the data subject has explicitly consented to the processing of biometric data, consideration should be given to the potential imbalance of power between the individual and the institution processing the data. In a student-and-school scenario, for instance, there could be doubts as to whether the consent of a student's parents to the use of facial recognition techniques is freely given in the manner intended by the GDPR, and is therefore a valid exception to the prohibition on processing.

One of the challenges in this field is that the underlying technology used for facial recognition, such as AI, can present serious risks of bias and discrimination, affecting and discriminating against many people without the social control mechanisms that govern human behaviour. Bias and discrimination are inherent risks of any societal or economic activity, and human decision-making is not immune to mistakes and biases. However, the same bias, when present in AI, could have a much larger effect.

Authentication vs. identification

Biometrics for authentication (described as a security mechanism) is not the same as remote biometric identification (used, for instance, in airports or public spaces to identify multiple persons at a distance and in a continuous manner by checking them against data stored in a database).

The collection and use of biometric information used for facial recognition and identification in public spaces carries specific risks for fundamental rights. In fact, the European Commission (EC) has warned that remote biometric identification is the most intrusive form of facial recognition and it is in principle prohibited in Europe.

So where is all this going?

What should prevail: the protection of fundamental rights, or the advancement that comes with invasive and overpowering new technologies?

New technologies, like AI, bring some benefits, such as technological advancement, more efficiency, and economic growth, but at what cost?

Using a risk-based approach the EC has considered the use of AI for remote biometric identification and other intrusive surveillance technologies to be high-risk, since it could compromise fundamental rights such as human dignity, non-discrimination and privacy protection.

The EU Commission is currently investigating whether additional safeguards are needed or whether facial recognition should not be allowed in certain cases, or certain areas, opening the door for a debate regarding the scenarios that could justify the use of facial recognition for remote biometric identification.

Artificial intelligence entails great benefits but also several potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.

To address these challenges, the Commission, in its white paper on AI issued in February this year, has proposed a new regulatory framework for high-risk AI and a prior conformity assessment, including testing and certification of high-risk AI facial recognition systems, to ensure that they abide by EU standards and requirements.

The regulatory framework will include additional mandatory legal requirements related to training data, record-keeping, transparency, accuracy, oversight and application-based use, and specific requirements for some AI applications, specifically those designed for remote biometric facial recognition.

We should therefore expect new regulation to come, aimed at an AI system framework that is compliant with current legislation and does not compromise fundamental rights.

Opportunities for startups?

Facial recognition technologies are here to stay, so if you are thinking about changing your hair colour, watch out: your phone might not recognize you! With the speed at which facial recognition is growing, we should not have to wait too long for new forms of ‘selfie payment’.

Facial recognition is already being used quite successfully in several areas, among them:

  1. Health: thanks to face analysis, it is already possible to track patients’ use of medication more accurately;
  2. Market and retail: this is where facial recognition promises the most, as ‘knowing your customer’ is a hot topic; it means placing cameras in retail outlets to analyze shopper behavior and improve the customer experience, subject of course to the corresponding privacy checks; and,
  3. Security and law enforcement: that is, to find missing children, identify and track criminals, or accelerate investigations.

With lots of choices on the horizon for facial recognition, it remains to be seen whether European startups will lead new innovations in this area.

Source: https://www.eu-startups.com/2020/12/facial-recognition-tech-risks-regulations-and-future-startup-opportunities-in-the-eu/

KDnuggets™ News 20:n45, Dec 2: TabPy: Combining Python and Tableau; Learn Deep Learning with this Free Course from Yann LeCun




This week on KDnuggets: Combine Python and Tableau with TabPy; Learn Deep Learning with this Free Course from Yann LeCun; Find 15 Exciting AI Project Ideas for Beginners; Read about the Rise of the Machine Learning Engineer; See How to Incorporate Tabular Data with HuggingFace Transformers; and much, much more.

Image of the week: The Rise of the Machine Learning Engineer

Source: https://www.kdnuggets.com/2020/n45.html

Remembering Pluribus: The Techniques that Facebook Used to Master World’s Most Difficult Poker Game


Tags: AI, Facebook, Poker

Pluribus used incredibly simple AI methods to set new records in six-player no-limit Texas Hold’em poker. How did it do it?


I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

I had a long conversation with one of my colleagues about imperfect-information games and deep learning this weekend, and it reminded me of an article I wrote last year, so I decided to republish it.

Poker has remained one of the most challenging games to master in the fields of artificial intelligence (AI) and game theory. From game theory creator John von Neumann writing about poker in his 1928 essay “Theory of Parlor Games,” to Edward Thorp’s masterful book “Beat the Dealer,” to the MIT Blackjack Team, poker strategy has been an obsession for mathematicians for decades. In recent years, AI has made some progress in poker environments with systems such as Libratus, which defeated human pros in two-player no-limit Hold’em in 2017. Last year, a team of AI researchers from Facebook, in collaboration with Carnegie Mellon University, achieved a major milestone in the conquest of poker by creating Pluribus, an AI agent that beat elite human professional players in the most popular and widely played poker format in the world: six-player no-limit Texas Hold’em poker.

The reasons why Pluribus represents a major breakthrough in AI systems might seem confusing to many readers. After all, in recent years AI researchers have made tremendous progress across different complex games such as checkers, chess, Go, two-player poker, StarCraft 2, and Dota 2. All those games are constrained to only two players and are zero-sum games (meaning that whatever one player wins, the other player loses). Other AI strategies based on reinforcement learning have been able to master multi-player games such as Dota 2 Five and Quake III. However, six-player no-limit Texas Hold’em still remains one of the most elusive challenges for AI systems.

Mastering the Most Difficult Poker Game in the World

 
The challenge with six-player, no-limit Texas Hold’em poker can be summarized in three main aspects:

  1. Dealing with incomplete information.
  2. The difficulty of achieving a Nash equilibrium.
  3. Success requires psychological skills like bluffing.

In AI theory, poker is classified as an imperfect-information environment, which means that players never have a complete picture of the game. No other game embodies the challenge of hidden information quite like poker, where each player has information (his or her cards) that the others lack. Additionally, an action in poker is highly dependent on the chosen strategy. In perfect-information games like chess, it is possible to solve a state of the game (e.g., an endgame) without knowing about the previous strategy (e.g., the opening). In poker, it is impossible to disentangle the optimal strategy for a specific situation from the overall strategy of poker.

The second challenge of poker lies in the difficulty of achieving a Nash equilibrium. Named after legendary mathematician John Nash, a Nash equilibrium describes a strategy profile from which no player can gain by unilaterally deviating; in a two-player zero-sum game, playing an equilibrium strategy guarantees a player at least the value of the game regardless of the moves chosen by the opponent. In the classic rock-paper-scissors game, the Nash equilibrium strategy is to randomly pick rock, paper, or scissors with equal probability. The challenge with the Nash equilibrium is that its complexity increases with the number of players in the game, to a level at which it is not feasible to pursue that strategy. In the case of six-player poker, achieving a Nash equilibrium is often computationally infeasible.
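
The rock-paper-scissors claim can be checked directly: under the uniform random strategy, the expected payoff is zero no matter what the opponent does, so the opponent gains nothing by deviating. A quick sketch:

```python
# Verify the rock-paper-scissors equilibrium claim above: uniform random
# play has expected payoff 0 against every opponent strategy.
import itertools
import numpy as np

# Row player's payoff; rows/columns ordered rock, paper, scissors.
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])
uniform = np.ones(3) / 3

for p in itertools.product(np.linspace(0, 1, 5), repeat=3):
    opponent = np.array(p)
    if not np.isclose(opponent.sum(), 1):
        continue  # only consider valid probability distributions
    assert np.isclose(uniform @ payoff @ opponent, 0)

print("Uniform play earns exactly 0 against every sampled opponent strategy.")
```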

The third challenge of six-player no-limit Texas Hold’em is related to its dependence on human psychology. Success in poker relies on effectively reasoning about hidden information, picking good actions, and ensuring that a strategy remains unpredictable. A successful poker player should know how to bluff, but bluffing too often reveals a strategy that can be beaten. This type of skill has remained challenging for AI systems to master throughout history.

Pluribus

 
Like many other recent AI-game breakthroughs, Pluribus relied on reinforcement learning models to master the game of poker. The core of Pluribus’s strategy was computed via self-play, in which the AI plays against copies of itself, without any data of human or prior AI play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy.

Unlike in other multi-player games, any given position in six-player no-limit Texas Hold’em can have too many decision points to reason about individually. Pluribus uses a technique called abstraction to group similar actions together and eliminate others, reducing the scope of the decision (see the sketch after the list below). The current version of Pluribus uses two types of abstractions:

  • Action Abstraction: This type of abstraction reduces the number of different actions the AI needs to consider. For instance, betting $150 or $151 might not make a difference from the strategy standpoint. To balance that, Pluribus only considers a handful of bet sizes at any decision point.
  • Information Abstraction: This type of abstraction groups decision points based on the information that has been revealed. For instance, a ten-high straight and a nine-high straight are distinct hands, but are nevertheless strategically similar. Pluribus uses information abstraction only to reason about situations on future betting rounds, never the betting round it is actually in.
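
As promised, here is a minimal sketch of the action-abstraction idea: collapse an arbitrary requested bet onto a handful of sizes the AI actually reasons about, so that $150 and $151 become the same decision. The candidate bet sizes are hypothetical; Pluribus's actual abstraction is far richer.

```python
# Minimal sketch (bet sizes hypothetical) of action abstraction: snap a
# requested bet to the nearest of a handful of sizes the AI considers.
def abstract_bet(requested: int, pot: int) -> int:
    candidates = [pot // 2, pot, 2 * pot]  # e.g., half-pot, pot, over-bet
    return min(candidates, key=lambda c: abs(c - requested))

print(abstract_bet(requested=151, pot=300))  # -> 150: same decision as $150
print(abstract_bet(requested=450, pot=300))  # -> 300: snapped to a pot-size bet
```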

To automate self-play training, the Pluribus team used a version of the iterative Monte Carlo counterfactual regret minimization (MCCFR) algorithm. On each iteration of the algorithm, MCCFR designates one player as the “traverser,” whose current strategy is updated on that iteration. At the start of the iteration, MCCFR simulates a hand of poker based on the current strategy of all players (which is initially completely random). Once the simulated hand is completed, the algorithm reviews each decision the traverser made and investigates how much better or worse it would have done by choosing the other available actions instead. Next, the AI assesses the merits of each hypothetical decision that would have been made following those other available actions, and so on. The difference between what the traverser would have received for choosing an action versus what the traverser actually achieved (in expectation) on the iteration is added to the counterfactual regret for the action. At the end of the iteration, the traverser’s strategy is updated so that actions with higher counterfactual regret are chosen with higher probability.
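
The regret-matching rule at the heart of CFR-style training can be sketched compactly. This is not Facebook's code, just the core update for a single decision point, with hypothetical counterfactual action values: accumulate regret per action, then play actions in proportion to their positive regret.

```python
# Core regret-matching update used by CFR-style algorithms (illustrative,
# single decision point; the counterfactual action values are hypothetical).
import numpy as np

def strategy_from_regret(regret: np.ndarray) -> np.ndarray:
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    # No positive regret yet: fall back to uniform (i.e., random) play.
    return positive / total if total > 0 else np.full(len(regret), 1 / len(regret))

regret = np.zeros(3)                        # actions: fold, call, raise
action_values = np.array([-1.0, 0.5, 2.0])  # hypothetical counterfactual values

for _ in range(100):
    sigma = strategy_from_regret(regret)
    expected = sigma @ action_values
    regret += action_values - expected      # regret for not taking each action

print(strategy_from_regret(regret))         # mass shifts toward the best action
```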

The outputs of the MCCFR training are known as the blueprint strategy. Using that strategy, Pluribus was able to master poker in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used.

The blueprint strategy is too expensive to use in real time during a poker game. During actual play, Pluribus improves upon the blueprint strategy by conducting a real-time search to determine a better, finer-grained strategy for its particular situation. Traditional search strategies are very challenging to implement in imperfect-information games, in which players can change strategies at any time. Pluribus instead uses an approach in which the searcher explicitly considers that any or all players may shift to different strategies beyond the leaf nodes of a subgame. Specifically, rather than assuming all players play according to a single fixed strategy beyond the leaf nodes, Pluribus assumes that each player may choose among four different strategies to play for the remainder of the game when a leaf node is reached. This technique results in the searcher finding a more balanced strategy that produces stronger overall performance.
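
A toy sketch of that leaf-node idea: instead of assuming opponents stay fixed beyond the search horizon, evaluate each root action under several continuation strategies and let the opponent pick the one worst for us. The payoff numbers below are random stand-ins, not real poker values.

```python
# Toy sketch of depth-limited search with multiple continuation strategies
# at the leaves (values are random stand-ins, not real poker payoffs).
import numpy as np

rng = np.random.default_rng(1)

# value[a, s] = searcher's payoff if it takes root action a and the opponent
# then plays continuation strategy s for the remainder of the game.
value = rng.normal(size=(3, 4))  # 3 root actions x 4 continuation strategies

# Zero-sum worst case: the opponent picks the continuation that hurts us most,
# so we choose the action with the best guaranteed (min over s) payoff.
robust = value.min(axis=1)
best = int(robust.argmax())
print(f"Robust root action: {best} (guaranteed value {robust[best]:.2f})")
```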

Pluribus in Action

 
Facebook evaluated Pluribus by playing against an elite group of players that included several World Series of Poker and World Poker Tour champions. In one experiment, Pluribus played 10,000 hands of poker against five human players selected randomly from the pool. Pluribus’s win rate was estimated to be about 5 big blinds per 100 hands (5 bb/100), which is considered a very strong victory over its elite human opponents (profitable with a p-value of 0.021). If each chip was worth a dollar, Pluribus would have won an average of about $5 per hand and would have made about $1,000/hour.

The following figure illustrates Pluribus’ performance. On the top chart, the solid lines show the win rate plus or minus the standard error. The bottom chart shows the number of chips won over the course of the games.

Pluribus represents one of the major breakthroughs in modern AI systems. Even though Pluribus was initially implemented for poker, the general techniques can be applied to many other multi-agent systems that require both AI and human skills. Just as AlphaZero is helping to improve professional chess, it’s interesting to see how poker players can improve their strategies based on the lessons learned from Pluribus.

 
Original. Reposted with permission.


Source: https://www.kdnuggets.com/2020/12/remembering-pluribus-facebook-master-difficult-poker-game.html

Learning Environment Tips For Your Kids During The Pandemic

The pandemic has changed almost everything about our everyday lives. While you may still be adjusting to a new remote work environment and battling productivity and overpowering procrastination, your children are likely enduring the same waves of uncertainty.

As a result, parents must help young students overcome new challenges as the new medium of online learning sets the pace for youth worldwide. 

Suppose you are wondering how to keep your kids motivated to learn during the lockdown. In that case, these top tips will help you define a suitable learning environment that complements homeschooling and virtual classrooms.

Create A Space For Learning

Your child will need their own space to learn, and while you can create a learning environment similar to a home office space, your child’s learning area must not be shared with any adults who may be working from home.

You will need to stock up on specific supplies to ensure the at-home learning environment is optimally functional. Besides schooling supplies, suitable furnishings, and a PC or laptop, it is also a great idea to decorate the background to complement learning; calming colors are best.

Define Learning Times

Overcoming procrastination and lack of motivation are challenges the world’s workforce currently faces. What’s more, students worldwide are facing the same issues. To encourage your child, you will need to define learning times and help them craft a strict daily schedule. In addition to this, you should also lead by example by sticking to your work schedule. 

Explain The New Learning Dynamic

Regardless of your child’s age, the specifics of the pandemic and how it has altered our lives are undeniably overwhelming for the average person. That said, these changes are even more frightening and confusing for younger minds who were previously accustomed to entirely different dynamics of learning and socializing.

To best assist your child with the adjustment, parents must explain what is happening in the world. Your child should understand why their learning environment is different. Furthermore, you should also discuss ongoing changes regarding the pandemic with your child to keep them up to date. This effort will help prevent your child from feeling uncertain during this challenging time.

Be Present And Involved

Parents are required to support online learning, as the parent-teacher collaboration dynamic has suddenly become vitally important. Previously, you may not have been as involved in your child’s schooling, since education took place in controlled learning environments. Now, you should make an effort to be present during your child’s online classes. Being involved is crucial to ensure your child maintains the discipline that learning requires. You will now need to participate in your child’s schooling in a manner that assists educators.

Education standards may have changed dramatically, although not all the changes should be seen as unfavourable. Several studies suggest that even social distancing requirements are unlikely to affect youngsters negatively. The best way to keep your child motivated is to be involved in their education and provide them with an ideal environment that supports learning.


Source: https://www.aiiottalk.com/education/tips-for-learning-during-pandemic/
