
Parable About ‘The Upstream’ Provides Key Lessons For AI Autonomous Cars 


Balancing the upstream with the downstream might be the best approach to dealing with problems introduced by the advent of self-driving cars on the road.

By Lance Eliot, the AI Trends Insider  

There is a famous allegory called the Upstream Parable that provides numerous valuable lessons and can be gainfully applied to the advent of AI autonomous self-driving cars. 

The Upstream Parable, sometimes referred to as the Rivers Story, has been attributed to various originating sources. Some suggest it was initially brought up in the 1930s by Saul Alinsky, the political activist, and later by Irving Zola, the medical sociologist, though it was perhaps given its greatest impetus via a 1975 paper by John McKinlay that applied the parable to the domain of healthcare.

I’ll start with a slimmed-down version of the story. 

You are walking along the bank of a rushing river when you spy a person in the water who seems to be drowning. Heroically, you leap into the water and save the person. A few minutes later, another person who seems to be drowning floats by. Once again, you jump into the river and save the person.

This keeps happening, again and again. 

In each case, you dive in, and though you manage to save the person each time, doing so denies you the chance to go upstream and ascertain why all these people are getting into the water to begin with, which might allow you to bring the matter to an overall halt and prevent anyone else from getting into the dangerous waters.

And that’s the end of the story. 

You might be thinking, what gives with this?   

Why is it such a catchy parable? 

By most interpretations, the story offers a metaphor for how we are oftentimes so busy trying to fix things that we don’t pay attention to how they originated. Our efforts and focus go toward that which we immediately see, especially when something is incessantly demanding our rapt attention right away. If you can take a breather and mull things over in such a situation, you might ultimately be able to solve the matter entirely by going upstream and making a fix there, rather than being battered over and over downstream.

In fact, it could be that one fix at the upstream would prevent all the rest of the downstream efforts, meaning that economically it is potentially a lot more sound to deal with the upstream rather than the frenetic and costly downstream activities. 

This can be applied to healthcare in a myriad of ways. For example, suppose that a populace has improper hygiene habits and lives in a manner that encourages disease to take hold. Upon arriving at such a locale, your first thought might be to build a hospital to care for the sick. After a while, the hospital may fill up, so you need to build another hospital. On and on, this merry-go-round goes, devoting more and more resources to building hospitals to aid the ill.   

It would be easy to fall into the mental trap of putting all your attention toward those hospitals. 

You might chew up your energy dealing with:

  • Are the hospitals running efficiently? 
  • Do hospitals have sufficient medical equipment? 
  • Can you keep enough nurses and doctors on-staff to handle the workloads? 
  • Etc. 

Recalling the lesson of the Upstream Parable, maybe there ought to be attention given to how the populace is living, and to finding ways to cut down on the outbreaks of disease. That’s upstream, and it is the point at which the production of ill people is taking place. Imagine, if you did change the upstream to clean things up and prevent, or at least greatly reduce, the rampant disease, you’d no longer need such a large volume of hospitals, nor all that equipment, nor have the issues of staffing the medical teams on a large scale.

Notice too that everyone involved in the matter is doing what they believe best to do. 

In other words, those building all those hospitals perceive a need to heal the sick, and so they are sincerely and genuinely “doing the right thing.” Unfortunately, they are consumed mightily by that task, akin to pulling drowning people out of the rushing river, and thus they fail to consider what’s upstream and potentially better ways to “cure” the people of their ills. 

Okay, that’s the overarching gist of the upstream and downstream related fable. 

There are numerous variants of how the story is told.   

Some like to say that the persons falling into the water are children and that you are therefore saving essentially helpless children (and, as though to go even further, sometimes the indication is that they are babies). 

I guess that might make the parable more engaging, but it doesn’t especially change the overall tenor of the lessons involved. 

Here’s one reason that some like to use children or babies in place of referring to adults.   

A bizarrely distorted reaction by some is that if it is adults falling into the water, why aren’t they astute enough to stop doing so, and why should anyone else be worried about saving adults who presumably should know better? Substituting children or babies makes that objection less arguable, but I must say that this somewhat cynical and bitter portrayal of adults is a bit alarming, since it could be that something beyond their power is tossing them into the drink, and in any case it fights against the spirit of the parable overall.

Another variation of the story has a second individual who comes to aid in saving the drowning victims.

At the end of the story, this second individual, after having helped to pull person after person out of the river, suddenly stops doing so and walks upstream. 

The first individual, still steeped in pulling people out of the water, yells frantically to the second individual, asking with grave concern where they are going.

I’m going upstream to find out what’s going on and aim to stop whoever is tossing people into the river, says the second individual. 

End of story.   

That’s a nifty variant. 

Why? 

Well, in the first version, the person saving the lives has no chance to do anything but continue to save lives (we can reasonably conclude that if the saving were to be curtailed, person after person would drown).   

In the second version, we hope or assume that the first individual can sufficiently continue to save lives, while the second person scoots upstream to try and do something about the predicament. 

Of course, life is never that clear cut. 

It could be that the second person’s leaving will lamentably lead to a serious, life-costing result at the downstream life-saving position.

In which case, we need to ponder whether it is better to keep saving lives in the immediate term rather than trying to solve the problem overall, or whether you must make a death-sentence decision to essentially abandon some to their deaths in order to deal with the problem by sorting out its root.

On a related topic, nearly all seasoned software developers and AI builders tend to know that whenever you have a budding system that is exhibiting problems, you seek to find the so-called root cause. 

If you spend all your time trying to fix the errors being generated by the root cause, you’ll perpetually be in a bind of just fixing those errors and never stopping the flow.
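
To make the software analogy a bit more tangible, here is a minimal, purely hypothetical sketch in Python; the records, prices, and function names are invented for illustration and are not drawn from any particular system. The downstream version keeps patching each bad record as it floats by, while the upstream version fixes the producer so the patching is no longer needed.

```python
# Hypothetical illustration of symptom-patching versus root-cause fixing.
# The records, prices, and function names are invented for this sketch.

def produce_records():
    """Upstream producer: the 'root cause' is that it emits prices as strings."""
    return [{"item": "widget", "price": "19.99"},
            {"item": "gadget", "price": "5.00"}]

def total_downstream_patch(records):
    """Downstream 'rescue': convert each bad value as it floats by, forever."""
    total = 0.0
    for rec in records:
        try:
            total += float(rec["price"])   # patch every record as it arrives
        except (TypeError, ValueError):
            pass                           # and silently lose the ones that can't be saved
    return total

def produce_records_fixed():
    """Upstream fix: emit prices as numbers so no downstream rescue is needed."""
    return [{"item": "widget", "price": 19.99},
            {"item": "gadget", "price": 5.00}]

def total_clean(records):
    """With the root cause fixed, the downstream code stays simple."""
    return sum(rec["price"] for rec in records)

if __name__ == "__main__":
    print(total_downstream_patch(produce_records()))  # roughly 24.99, but fragile forever
    print(total_clean(produce_records_fixed()))       # roughly 24.99, and robust
```

The point of the sketch is the shape of the two approaches, not the arithmetic: the downstream function has to carry error handling forever, while the upstream fix makes that machinery unnecessary.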

Anyway, the variant to the parable is quite handy since it brings up a devilish dilemma. 

While in the midst of dealing with a crisis, can you spare time and effort toward the root cause, or would doing so meanwhile generate such adverse consequences that you risk greater injury by not coping with the immediate and direct issues at hand?

Keep in mind too that just because the second person opts to walk upstream, we have no way of knowing whether the upstream exploration will even be successful. 

It could be that the upstream problem is so distant that the second individual never gets there, in which case, if people were drowning in the meantime, it was quite a hefty price to pay for not having solved the root problem.

Or, maybe the second individual finds the root but is unable to fix it quickly (maybe it’s a troll that is too large to battle, and instead the second individual has to try to prevent people from wandering into its trap, but this only cuts down the pace of people getting tossed into the river by, say, one-third).

This means that for some time, those drowning are going to keep drowning.   

Here’s an even sadder possibility. 

The second individual reaches the upstream root and tries to fix the problem, yet somehow, regrettably, makes it worse (maybe it was a bridge that people were falling off, and while attempting to fix the bridge, the second individual messed up and the bridge is even more precarious than it was before!).

It could be that up until then, the first individual was able to keep up with saving those drowning, and now, ironically, after the second individual tried to fix the problem and meanwhile wasn’t around to help save the drowning victims, a slew more people are falling into the water, completely overwhelming the first individual.

Yikes! 

As you can see, I like this latter version that includes the second individual, allowing us to extend the lessons that can be readily gleaned from the parable. 

Some though prefer using the simpler version. 

It all depends upon the point that you are trying to drive home by using the tale. 

For those of you that are smarmy, I’m sure that you’ve already come up with other variations.   

Why not make a net that is stretched across the river and catches all those people? 

There, problem solved, you proudly proclaim.   

Well, which problem? 

The problem of the people drowning at the downstream position, or the problem of the people being tossed into the river and possibly leading to being drowned (hopefully, they don’t drown before they reach your net). 

In any case, yes, it might be sensible to come up with a more effective or efficient way to save the drowning persons.   

That doesn’t necessarily negate the premise that it is the root that deserves attention, but I appreciate that you’ve tried to find a means to reduce the effort at the downstream, which maybe frees up those that are aiming to go upstream to find and fix the root cause. 

Bravo. 

One other facet to mention, which somewhat dovetails with the notion of creating and putting in place the net: sometimes there is such a massive setup of infrastructure at the downstream position that it becomes unwieldy and takes on a life of its own.

Furthermore, and the twist upon a twist, suppose that the net gets nearly all, but a few happen to go underwater and aren’t saved by the net.   

Imagine someone standing downstream of the (already) downstream net. 

They might end up in the same parable, and upon heading upstream and finding you and your net, believe they have found the root cause.

It could be that the root cause is further upstream and that there are lots of other intervening downstream solutions, all of which are (hopefully) mitigating the upstream, yet it might be difficult to figure out what’s the root versus what’s not the root. 

There could be a nearly infinite series of downstream solutions, all well-meaning, each of which makes the whole affair incredibly complex and confounding, while there might be an elegant end to the monstrosity by somehow getting to the real root.   

Well, that was quite an instructive look at the fable. 

You might be wondering, can the fable be used in other contexts, such as something AI-related (that’s why I’m here)?

Yes, indeed, here’s an interesting question to ponder: “Will the advent of AI-based true self-driving cars potentially find itself getting mired in downstream matters akin to the Upstream Parable?” 

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

  

The Levels Of Self-Driving Cars 

  

It is important to clarify what I mean when referring to AI-based true self-driving cars. 

  

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

  

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
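
For readers who want that taxonomy spelled out, here is a small illustrative sketch summarizing the levels; the descriptions are paraphrased and the helper function is hypothetical, not any official SAE or regulatory definition.

```python
# Illustrative summary of the driving-automation levels discussed above.
# Descriptions are paraphrased for this sketch, not official definitions.

SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: automation helps with steering or speed, not both.",
    2: "Partial automation (ADAS): automation steers and controls speed, but the human must supervise at all times.",
    3: "Conditional automation: the system drives in limited conditions, with a human ready to take over.",
    4: "High automation: no human driver needed within a bounded operating domain.",
    5: "Full automation: the AI drives everywhere a human driver could.",
}

def is_true_self_driving(level: int) -> bool:
    """Levels 4 and 5 are what this article calls 'true' self-driving cars."""
    return level >= 4

if __name__ == "__main__":
    for level, description in SAE_LEVELS.items():
        tag = "true self-driving" if is_true_self_driving(level) else "human-involved"
        print(f"Level {level} ({tag}): {description}")
```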

  

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

  

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out). 

  

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). 

  

For semi-autonomous cars, the public must be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

  

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

  

Self-Driving Cars And The Parable 

  

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. 

  

All occupants will be passengers. 

  

The AI is doing the driving. 

  

Sounds pretty good. 

  

No need for any arcane fables or tall tales. 

  

But, wait, give the Upstream Parable a chance. 

  

Some today are arguing that more regulation is needed at the federal level to guide how self-driving cars will be designed, built, and fielded. 

  

Those proponents tend to say that having the states or local authorities in cities and counties come up with guidelines for the use of self-driving cars is counterproductive.

  

You might be surprised to know that many of the automakers and self-driving tech firms seem to generally agree with the notion that the guidelines ought to be at the federal level. 

  

Why? 

  

One reason would be the presumed simplicity of having an across-the-board set of rules, rather than having to adjust or craft the AI system and driverless car to accommodate a potential morass of thousands upon thousands of varying rules across the entire country. 

  

On the other hand, a cogent argument is made that having a singular federal level approach might not allow for sufficient flexibility and tailoring that befits the needs of local municipalities. 

  

Let’s suppose that the local approach prevails (I’m not making such a proclamation, it’s just a what-if). 

  

If self-driving cars have trouble coping at the local levels, we might become focused on the downstream matters. 

  

Meanwhile, one might contend that it was the upstream that needed to provide an overarching approach that was sufficient to abate the downstream issues. 

  

Back to the parable we go. 

  

Suppose a fleet of self-driving cars is owned by a particular automaker. 

  

The self-driving cars communicate with a cloud-based system, via OTA (Over-The-Air) electronic capabilities, and pull down patches and updates to the AI system that’s on-board, and also the on-board system uploads collected sensory data and other info from the self-driving car. 
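
As a rough sketch of what such an OTA cycle might look like, here is a hedged, toy example; the cloud is simulated in memory, and every name, field, and version string is invented rather than any automaker’s actual protocol.

```python
# Hypothetical sketch of a fleet-wide OTA (Over-The-Air) cycle. The "cloud" is
# simulated with in-memory objects; a real system would use an authenticated,
# signed update service. All names, fields, and version strings are invented.

CLOUD_TELEMETRY = []                                          # uploads from vehicles
CLOUD_LATEST = {"version": "2.4.1", "notes": "fixes a lane-merge edge case"}

def upload_telemetry(vehicle_id, sensor_summary):
    """Vehicle pushes collected sensory data and status info to the cloud."""
    CLOUD_TELEMETRY.append({"vehicle": vehicle_id, "telemetry": sensor_summary})

def check_for_update(current_version):
    """Vehicle asks whether a newer on-board AI patch is available."""
    # A real client would compare parsed version numbers, not raw strings.
    if CLOUD_LATEST["version"] > current_version:
        return CLOUD_LATEST
    return None

def ota_cycle(vehicle_id, current_version, sensor_summary):
    """One upload-then-update cycle; returns the software version now running."""
    upload_telemetry(vehicle_id, sensor_summary)
    patch = check_for_update(current_version)
    if patch is not None:
        # A real system would verify signatures and stage the patch before activating it.
        return patch["version"]
    return current_version

if __name__ == "__main__":
    running = ota_cycle("car-017", "2.3.9", {"miles": 120, "disengagements": 0})
    print("Now running version:", running)   # picks up 2.4.1 from the simulated cloud
```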

  

Pretend that something goes awry in the self-driving cars of that fleet. 

  

Do you try to quickly deal with each individual self-driving car, which might be on the roadway and endangering passengers, pedestrians, or other human-driven cars, or do you try to ferret out the root cause and then see if you can get that patch shoved out to the fleet in time?

  

Some assert that this very kind of issue is why there ought to be a kill button or kill switch inside all self-driving cars, allowing presumably for a human passenger to make a decision right there in the driverless car to stop it from processing. 
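
To picture the idea, here is a hedged, toy sketch of a kill switch as a shared flag checked by a driving loop; the threading approach and names are hypothetical and say nothing about how a production vehicle would actually implement such a control.

```python
# Hypothetical sketch of a passenger-facing "kill switch": a shared flag that,
# once set, causes the driving loop to hand control to a safe-stop routine.
# Illustrative only; a real vehicle would rely on redundant, certified hardware.

import threading
import time

kill_switch = threading.Event()

def safe_stop():
    """Bring the vehicle to a controlled halt once the switch is pressed."""
    print("Kill switch engaged: pulling over and stopping.")

def driving_loop():
    """Simplified stand-in for the AI driving task."""
    while not kill_switch.is_set():
        # ... perceive, plan, actuate ...
        time.sleep(0.1)
    safe_stop()

if __name__ == "__main__":
    driver = threading.Thread(target=driving_loop)
    driver.start()
    time.sleep(0.5)        # the passenger presses the button after half a second
    kill_switch.set()
    driver.join()
```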

  

In any case, you could liken this to the upstream versus downstream fable. 

  

Pleasingly, once again, lessons are revealed due to a handy underlying schema or template. 

  

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Conclusion 

  

Generally, the Upstream Parable is pretty handy for lots of circumstances. 

  

Part of the reason it is so memorable is that it innately captures what we see every day and helps bring to light the otherwise hidden or unrealized elements of the systems we are immersed in.

  

While standing at the DMV and waiting endlessly to get your driver’s license renewed, you have to let your mind wander to keep your sanity and wonder whether you’ve found yourself floating in the downstream waters. 

  

Drowning in paperwork! 

  

If the DMV had its act together, there’d be a solution at the root that would make your desire to renew your driver’s license a bit less arduous and frustrating. 

  

For sanity’s sake, go ahead and use the fable to your heart’s content and keep finding ways to balance the downstream with the upstream, aiming to prevent problems before they arise and make the world a better place.

  

That’s a good lesson no matter how you cut it.  

 

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends. 

 

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website 

 

 

Source: https://www.aitrends.com/ai-insider/parable-about-the-upstream-provides-key-lessons-for-ai-autonomous-cars/


How 5G Will Impact Customer Experience?


5G is a breakthrough technology that promises to bring new innovations and change the way people use the Internet, with faster connection speeds, lower latency, higher bandwidth, and the ability to connect one million devices per square kilometre. Telcos are deploying 5G to enhance our day-to-day lives.

“When clubbed with other technologies like Artificial Intelligence, Internet of Things (IoT), it could mean a lot to a proliferation of other technologies like AR/VR, data analytics.” 

5G can be a boon for businesses, delivering increased reliability, efficiency, and performance, provided it is used to drive more value to customers and business stakeholders and to meet their expectations with the help of the digital technologies discussed below:

Consumer Expectations are on the Rise

Today, customer service teams provide and manage customer support via call centres and digital platforms. The rollout of 5G is expected to unleash further benefits and have a positive impact on customer service, as businesses improve their existing personalized service offerings and create new solutions that deepen customer engagement and win better deals.

For instance, salespeople in a retail store can be equipped with layers of information about customers’ behaviour and preferences, helping them build a rich, tailored experience for the customers walking into the store.

Video Conferencing/streaming is Just a Few Clicks Away

Video support is considered a critical part of the Customer Experience (CX) and will open new avenues for consumer-led enterprises.

“As per a survey conducted by Oracle with 5k people, 75% of people understand the efficiency and value of video chat and voice calls.” 

CX representatives use video support to troubleshoot highly technical situations through video chat and screen sharing in a few clicks, potentially reducing the number of in-house technician visits during critical situations like the coronavirus pandemic.

Also, video conferencing nowadays offers the option to record a quick video describing a process or solution, doing away with the long process of sending step-by-step emails. Enterprises can develop advanced user guides for troubleshooting, featuring short videos that resolve common problems.

However, high-definition video quality is preferable for video conferencing and chat, and it demands an uninterrupted network with smooth video streaming. This means operators need to carry out network maintenance at regular intervals to check whether any 5G PIM (passive intermodulation) is forming on cell towers, since PIM can reduce receive sensitivity and performance, degrading network speed, video resolution, and so on.

Thus, PIM testing becomes critical for delivering enhanced network services free of interference, which is necessary for high-resolution online video conferencing, chat, and more.

Increased Smart Devices and the Ability to Troubleshoot via Self-Service

The inception of 5G will give a boost to the IoT and smart device market which is already growing.

IoT connections from these smart devices are expected to roughly double between 2019 and 2025, to more than 25 billion, according to the GSM Association, an industry organization representing telecom operators across the globe.

With lower latency and improved reliability, 5G has a lot more to offer as it connects a large number of devices. This will ultimately curb the manpower needed for customer support, thereby reducing labour costs for the enterprise. Moreover, these connected IoT devices and 5G’s high-speed network permit consumers to troubleshoot devices themselves in their own homes.

To deliver these high-performance networks, telecom operators need to perform 5G network testing, identify issues, and take corrective actions that improve the network and integrate advanced capabilities, making it more efficient than previous generations while offering wider coverage.

Enhanced Augmented Reality (AR) / Virtual Reality (VR) Capabilities

As these tools become widely used, customers are offered virtual stores and immersive experiences, using AR to preview products in their own homes in real time.

“‘Augmented Retail: The New Consumer Reality’ study by Nielsen in 2019 suggested that AR/VR has created a lot of interest in people and they are willing to use these technologies to check out products.” 

Analysis of Bulk Data With Big Data Analytics

Enterprises deal with a huge volume of data daily. 5G helps collect this data and, with its advanced network connectivity across a large number of devices, delivers faster data analytics too.

Companies will be able to process these vast unstructured data sets, combined with Artificial Intelligence (AI), to extract meaningful insights and use them to draft business strategies, such as studying customer buying behaviour and targeting segments with customized service offerings that fit their requirements.
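
As a toy illustration of that kind of behavioural segmentation, the sketch below clusters a handful of made-up customers with scikit-learn’s KMeans; a real pipeline would of course operate on far larger, continuously updated datasets.

```python
# Toy sketch: cluster customers by purchase behaviour to target segments.
# The data is made up; production pipelines would process far larger datasets.

import numpy as np
from sklearn.cluster import KMeans

# Each row: [monthly orders, average basket value in dollars]
customers = np.array([
    [2, 15], [3, 20], [1, 10],       # low-frequency, low-spend shoppers
    [12, 90], [10, 120], [15, 80],   # frequent, high-spend shoppers
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

for row, segment in zip(customers, model.labels_):
    print(f"orders={row[0]:>2}, basket=${row[1]:>3} -> segment {segment}")
# Offers can then be tailored per segment (e.g., loyalty perks for the high-spend group).
```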

As per Ericsson’s AI in networks report, 68% of Communications Service Providers (CSPs) believe improving CX is a business objective, while more than half of them already believe AI will be a key technology for improving overall CX. Thus, big data analytics will be crucial for harnessing all this new data and enhancing the customer experience.

Conclusion

From a CX point of view, the benefits of 5G will extend far beyond the experience of an individual citizen. Real-time decisions will accelerate with the prevalence of 5G and the application of other new-age technologies like AI, ML, and IoT. As 5G deployment continues to grow, so will each of the trends mentioned above, ultimately improving your business’s productivity, growing its customer base, and bringing in more revenue.

Source: https://www.aiiottalk.com/technology/5g-impact-on-customer-experience/


Resiliency And Security: Future-Proofing Our AI Future


Deploying AI in the enterprise means thinking forward for resiliency and security (GETTY IMAGES)

By Allison Proffitt, AI Trends

On the first day of the Second Annual AI World Government conference and expo held virtually October 28-30, a panel moderated by Robert Gourley, cofounder & CTO of OODA, raised the issue of AI resiliency. Future-proofing AI solutions requires keeping your eyes open to upcoming likely legal and regulatory roadblocks, said Antigone Peyton, General Counsel & Innovation Strategist at Cloudigy Law. She takes a “use as little as possible” approach to data, raising questions such as: How long do you really need to keep training data? Can you abstract training data to the population level, removing some risk while still keeping enough data to find dangerous biases?

Stephen Dennis, Director of Advanced Computing Technology Centers at the U.S. Department of Homeland Security, also recommended a forward-looking posture, but in terms of the AI workforce. In particular, Dennis challenged the audience to consider the maturity level of the users of new AI technology. Full automation is not likely a first AI step, he said. Instead, he recommends automating slowly, bringing the team along. Take them a technology that works in the context they are used to, he said. They shouldn’t need a lot of training. Mature your team with the technology. Remove the human from the loop slowly.

Of course, some things will never be fully automated. Brian Drake, U.S. Department of Defense, pointed out that some tasks are inherently human-to-human interactions—such as gathering human intelligence. But AI can help humans do even those tasks better, he said.

He also cautioned enterprises to consider their contingency plan as they automate certain tasks. For example, we rarely remember phone numbers anymore. We’ve outsourced that data to our phones while accepting a certain level of risk. If you deploy a tool that replaces a human analytic activity, that’s fine, Drake said. But be prepared with a contingency plan, a solution for failure.   

Organizing for Resiliency

All of these changes will certainly require some organizational rethinking, the panel agreed. While government is organized in a top down fashion, Dennis said, the most AI-forward companies—Uber, Netflix—organize around the data. That makes more sense, he proposed, if we are carefully using the data.

Data models—like the new car trope—begin degrading the first day they are used. Perhaps the source data becomes outdated. Maybe an edge use case was not fully considered. The deployment of the model itself may prompt a completely unanticipated behavior. We must capture and institutionalize those assessments, Dennis said. He proposed an AI quality control team—different from the team building and deploying algorithms—to understand degradation and evaluate the health of models in an ongoing way. His group is working on this with sister organizations in cyber security, and he hopes the best practices they develop can be shared to the rest of the department and across the government.
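
As a minimal illustration of the kind of ongoing health check such a quality-control team might automate, the sketch below compares recent accuracy against a deployment-time baseline; the baseline figure, threshold, and window size are arbitrary choices for the example, not anything the panel specified.

```python
# Minimal sketch of ongoing model health monitoring: compare recent accuracy
# against the accuracy measured at deployment and flag degradation.
# Baseline, threshold, and window size are arbitrary values for illustration.

from collections import deque

BASELINE_ACCURACY = 0.92        # accuracy measured when the model was deployed
DEGRADATION_TOLERANCE = 0.05    # flag if accuracy drops more than 5 points
WINDOW = 100                    # evaluate over the last 100 labelled predictions

recent_outcomes = deque(maxlen=WINDOW)   # True if a prediction matched ground truth

def record_outcome(correct: bool) -> None:
    recent_outcomes.append(correct)

def model_health() -> str:
    if len(recent_outcomes) < WINDOW:
        return "insufficient data"
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    if accuracy < BASELINE_ACCURACY - DEGRADATION_TOLERANCE:
        return f"DEGRADED (accuracy {accuracy:.2f} vs baseline {BASELINE_ACCURACY:.2f})"
    return f"healthy (accuracy {accuracy:.2f})"

if __name__ == "__main__":
    import random
    for _ in range(150):
        record_outcome(random.random() < 0.85)   # simulate a model that slipped to ~85%
    print(model_health())                        # most likely reports DEGRADED
```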

Peyton called for education—and reeducation—across organizations. She called the AI systems we use today a “living and breathing animal”. This is not, she emphasized, an enterprise-level system that you buy once and drop into the organization. AI systems require maintenance, and someone must be assigned to that caretaking.

But at least at the Department of Defense, Drake pointed out, all employees are not expected to become data scientists. We’re a knowledge organization, he said, but even if reskilling and retraining are offered, a federal workforce does not have to universally accept those opportunities. However, surveys across DoD have revealed an “appetite to learn and change”, Drake said. The Department is hoping to feed that curiosity with a three-tiered training program offering executive-level overviews, practitioner-level training on the tools currently in place, and formal data science training. He encouraged a similar structure to AI and data science training across other organizations.

Bad AI Actors

Gourley turned the conversation to bad actors. The very first telegraph message between Washington DC and Baltimore in 1844 was an historic achievement. The second and third messages—Gourley said—were spam and fraud. Cybercrime is not new and it is absolutely guaranteed in AI. What is the way forward, Gourley asked the panel.

“Our adversaries have been quite clear about their ambitions in this space,” Drake said. “The Chinese have published a national artificial intelligence strategy; the Russians have done the same thing. They are resourcing those plans and executing them.”

In response, Drake argued for the vital importance of ethics frameworks and for the United States to embrace and use these technologies in an “ethically up front and moral way.” He predicted a formal codification around AI ethics standards in the next couple of years similar to international nuclear weapons agreements now.

Source: https://www.aitrends.com/ai-world-government/deploying-ai-in-the-enterprise-means-thinking-forward-for-resiliency-and-security/


AI Projects Progressing Across Federal Government Agencies


The AI World Government Conference kicked off virtually on Oct. 28 and continues on Oct. 29 and 30. Tune in to learn about AI strategies and plans of federal agencies. (Credit: Getty Images)

By AI Trends Staff

Government agencies are gaining experience with AI on projects, with practitioners focusing on defining the project’s benefit and making sure the data quality is good enough to ensure success. That was a takeaway from talks on the opening day of the Second Annual AI World Government conference and expo, held virtually on October 28.

Wendy Martinez, PhD, director of the Mathematical Statistics Research Center, US Bureau of Labor Statistics

Wendy Martinez, PhD, director of the Mathematical Statistics Research Center, with the Office of Survey Methods Research in the US Bureau of Labor Statistics, described a project to use natural language understanding AI to parse text fields of databases and automatically correlate them to job occupations in the federal system. One lesson learned was that, despite interest in sharing experience with other agencies, “You can’t build a model based on a certain dataset and use the model somewhere else,” she stated. Instead, each project needs its own source of data and a model tuned to it.
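
As a heavily simplified sketch of that kind of text-to-occupation mapping, the example below trains a generic bag-of-words classifier on invented job-description snippets; the labels and data are hypothetical, and the Bureau’s actual models and taxonomy are not described in the talk.

```python
# Simplified sketch: classify free-text job descriptions into occupation labels.
# Training snippets and labels are invented; real systems need far more data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "installs and repairs residential wiring and electrical panels",
    "maintains electrical systems in commercial buildings",
    "prepares financial statements and reconciles ledgers",
    "audits accounts and files quarterly tax reports",
]
labels = ["electrician", "electrician", "accountant", "accountant"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["rewires breaker boxes and installs lighting fixtures"]))
# -> ['electrician'] on this toy data; a real taxonomy would have many more classes
```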

Renata Miskell, Chief Data Officer in the Office of the Inspector General for the US Department of Health and Human Services, fights fraud and abuse for an agency that oversees over $1 trillion in annual spending, including on Medicare and Medicaid. She emphasized the importance of ensuring that data is not biased and that models generate ethical recommendations. For example, to track fraud in its grant programs awarding over $700 billion annually, “It’s important to understand the data source and context,” she stated. The unit studied five years of data from “single audits” of individual grant recipients, which included a lot of unstructured text data. The goal was to pass relevant info to the audit team. “It took a lot of training,” she stated. “Initially we had many false positives.” The team tuned for data quality and ethical use, steering away from blind assumptions. “If we took for granted that the grant recipients were high risk, we would be unfairly targeting certain populations,” Miskell stated.

Dave Cook, senior director of AI/ML Engineering Services, Figure Eight Federal

In the big picture, many government agencies are engaged in AI projects and a lot of collaboration is going on. Dave Cook is senior director of AI/ML Engineering Services for Figure Eight Federal, which works on AI projects for federal clients. He has years of experience working in private industry and government agencies, mostly now the Department of Defense and intelligence agencies. “In AI in the government right now, groups are talking to one another and trying to identify best practices around whether to pilot, prototype, or scale up,” he said. “The government has made some leaps over the past few years, and a lot of sorting out is still going on.”

Ritu Jyoti, Program VP, AI Research and Global AI Research lead for IDC consultants and a program contributor to the event, has over 20 years of experience working with companies including EMC, IBM Global Services, and PwC Consulting. “AI has progressed rapidly,” she said. In a global survey IDC conducted in March, the business drivers for AI adoption were found to be better customer experience, improved employee productivity, accelerated innovation, and improved risk management. A fair number of AI projects failed. The main reasons were unrealistic expectations, the AI not performing as expected, the project lacking access to the needed data, and the team lacking the necessary skills. “The results indicate a lack of strategy,” Jyoti stated.

David Bray, PhD, Inaugural Director of the nonprofit Atlantic Council GeoTech Center, and a contributor to the event program, posted questions on how data governance challenges the future of AI. He asked what questions practitioners and policymakers around AI should be asking, and how the public can participate more in deciding what can be done with data. “You choose not to be a data nerd at your own peril,” he said.

Anthony Scriffignano, PhD, senior VP & Chief Data Scientist with Dun & Bradstreet, said in the pandemic era with many segments of the economy shut down, companies are thinking through and practicing different ways of doing things. “We sit at the point of inflection. We have enough data and computer power to use the AI techniques invented generations ago in some cases,” he said. This opportunity poses challenges related to what to try and what not to try, and “sometimes our actions in one area cause a disruption in another area.”

AI World Government continues tomorrow and Friday.

(Ed. Note: Dr. Eric Schmidt, former CEO of Google and now chair of the National Security Commission on AI, was involved today in a discussion, Transatlantic Cooperation Around the Future of AI, with Ambassador Mircea Geoana, Deputy Secretary General of the North Atlantic Treaty Organization, and Secretary Robert O. Work, vice chair of the National Security Commission. Convened by the Atlantic Council, the event can be viewed here.)

Source: https://www.aitrends.com/ai-world-government/ai-projects-progressing-across-federal-government-agencies/
