

Create a multi-region Amazon Lex bot with Amazon Connect for high availability




AWS customers rely on Amazon Lex bots to power their Amazon Connect self-service conversational experiences over the telephone and other channels. With Amazon Lex, callers (or customers, in Amazon Connect terminology) can conveniently get their questions answered regardless of agent availability. What architecture patterns can you use to make a bot resilient to service availability issues? In this post, we describe a cross-Region approach that yields higher availability by deploying Amazon Lex bots in multiple Regions.

Architecture overview

In this solution, Amazon Connect flows can achieve business continuity with minimal disruptions in the event of service availability issues with Amazon Lex. The architecture pattern uses the following components:

  • Two Amazon Lex bots, each in a different Region.
  • An Amazon Connect flow that invokes one of the two bots based on the result of the region check AWS Lambda function.
  • A Lambda function to check the health of the bot.
  • A Lambda function to read the Amazon DynamoDB table for the primary bot’s Region for a given Amazon Connect Region.
  • A DynamoDB table to store a Region mapping between Amazon Connect and Amazon Lex. The health check function updates this table. The region check function reads this table for the most up-to-date primary Region mapping for Amazon Connect and Amazon Lex.

The goal of having identical Amazon Lex bots in two Regions is to bring up the bot in the secondary Region and make it the primary in the event of an outage in the primary Region.

Multi-region pattern for Amazon Lex in Amazon Connect

Multi-region pattern for Amazon Lex

The next two sections describe how an Amazon Connect flow integrated with an Amazon Lex bot can recover quickly in case of a service failure or outage in the primary Region and start servicing calls using Amazon Lex in the secondary Region.

The health check function calls one of two Amazon Lex Runtime API operations, PutSession or PostText, depending on the TEST_METHOD Lambda environment variable. You can choose either one based on your preference and use case. The PutSession API call doesn’t have any extra costs associated with Amazon Lex, but it doesn’t test any natural language understanding (NLU) features of Amazon Lex. The PostText API allows you to check the NLU functionality of Amazon Lex, but incurs a minor cost.

The health check function updates the lexRegion column of the DynamoDB table (lexDR) with the Region name in which the test passed. If the health check passes the test in the primary Region, lexRegion gets updated with the name of the primary Region. If the health check fails, the function issues a call to the corresponding Runtime API based on the TEST_METHOD environment variable in the secondary Region. If the test succeeds, the lexRegion column in the DynamoDB table gets updated to the secondary Region; otherwise, it gets updated with err, which indicates both Regions have an outage.

On every call that Amazon Connect receives, it issues a region check function call to get the active Amazon Lex Region for that particular Amazon Connect Region. The primary Region returned by the region check function is the last entry written to the DynamoDB table by the health check function. Amazon Connect invokes the respective Get Customer Input Block configured with the Amazon Lex bot in the Region returned by the region check function. If the function returns the same Region as the Amazon Connect Region, it indicates that the health check has passed, and Amazon Connect calls the Amazon Lex bot in its same Region. If the function returns the secondary Region, Amazon Connect invokes the bot in the secondary Region.
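The interplay between the health check and region check functions can be sketched with an in-memory stand-in for the lexDR table. The function and variable names below are illustrative only, not part of the deployed solution:

```python
# In-memory stand-in for the lexDR DynamoDB table: one row per Amazon
# Connect Region, mapping it to the currently healthy Amazon Lex Region.
lex_dr_table = {"us-east-1": "us-east-1"}

def health_check(connect_region, primary_ok, secondary_ok,
                 primary="us-east-1", secondary="us-west-2"):
    """Mimic the health check function: record the first healthy Region,
    or 'err' when both Regions fail the test."""
    if primary_ok:
        lex_dr_table[connect_region] = primary
    elif secondary_ok:
        lex_dr_table[connect_region] = secondary
    else:
        lex_dr_table[connect_region] = "err"

def region_check(connect_region):
    """Mimic the region check function: return the active Lex Region that
    Amazon Connect should use for its Get Customer Input block."""
    return lex_dr_table[connect_region]

# Primary Region healthy: Connect keeps calling the bot in its own Region.
health_check("us-east-1", primary_ok=True, secondary_ok=True)
print(region_check("us-east-1"))   # us-east-1

# Primary Region fails: the table now points Connect at the secondary bot.
health_check("us-east-1", primary_ok=False, secondary_ok=True)
print(region_check("us-east-1"))   # us-west-2
```

In the real solution the table lookup and update go through DynamoDB, but the decision logic is the same.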

Deploying Amazon Lex bots

You need to create an identical bot in both your primary and secondary Regions. In this blog post, we selected us-east-1 as the primary Region and us-west-2 as the secondary Region. Begin by creating the bot in your primary Region, us-east-1.

  1. On the Amazon Lex console, click Create.
  2. In the Try a Sample section, select OrderFlowers. Set COPPA to No.
  3. Leave all other settings at their default value and click Create.
  4. The bot is created and will start to build automatically.
  5. After your bot is built (in 1–2 minutes), choose Publish.
  6. Create an alias with the name ver_one.

Repeat the above steps for us-west-2.  You should now have a working Amazon Lex bot in both us-east-1 and us-west-2.

Creating a DynamoDB table

Make sure your AWS Region is us-east-1.

  1. On the DynamoDB console, choose Create.
  2. For Table name, enter lexDR.
  3. For Primary key, enter connectRegion with type String.
  4. Leave everything else at their default and choose Create.
  5. On the Items tab, choose Create item.
  6. Set the connectRegion value to us-east-1. Append a new column of type String called lexRegion and set its value to us-east-1.
    Appending additional column to the Dynamo Table
  7. Click Save.
    Dynamo DB Table Entry showing Connect and Lex mapping
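The resulting item, expressed in DynamoDB's attribute-value format (the same shape the Lambda functions in this post read and write through the low-level boto3 client), looks like this sketch:

```python
# The lexDR item as the low-level DynamoDB client sees it: every value is
# wrapped in a type descriptor ('S' for string).
item = {
    "connectRegion": {"S": "us-east-1"},  # partition key: the Connect Region
    "lexRegion": {"S": "us-east-1"},      # currently healthy Lex Region
}

# Reading the active Lex Region, as the region check function does:
active_region = item["lexRegion"]["S"]
print(active_region)  # us-east-1
```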

Creating IAM roles for Lambda functions

In this step, you create an AWS Identity and Access Management (IAM) role for both Lambda functions to use.

  1. On the IAM console, click on Access management and select Policies.
  2. Click on Create Policy.
  3. Click on JSON.
  4. Paste the following custom IAM policy, which allows read and write access to the DynamoDB table lexDR. Replace the “xxxxxxxxxxxx” in the policy definition with your AWS account number.
    {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:UpdateItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:xxxxxxxxxxxx:table/lexDR"
        }]
    }
  5. Click on Review Policy.
  6. Give it a name DynamoDBReadWrite and click on Create Policy.
  7. On the IAM console, click on Roles under Access management and then click on Create Role.
  8. Select Lambda for the service and click Next.
  9. Attach the following permissions policies:
    1. AWSLambdaBasicExecutionRole
    2. AmazonLexRunBotsOnly
    3. DynamoDBReadWrite
  10. Click Next: Tags, then skip the Tags page by clicking Next: Review.
  11. Name the role lexDRRole and click Create role.

Deploying the region check function

You first create a Lambda function to read from the DynamoDB table to decide which Amazon Lex bot is in the same Region as the Amazon Connect instance. This function is later called by Amazon Connect or your application that’s using the bot.

  1. On the Lambda console, choose Create function.
  2. For Function name, enter lexDRGetRegion.
  3. For Runtime, choose Python 3.8.
  4. Under Permissions, choose Use an existing role.
  5. Choose the role lexDRRole.
  6. Choose Create function.
  7. In the Lambda code editor, enter the following code:
    import json
    import boto3
    import os
    import logging

    logger = logging.getLogger()
    dynamo_client = boto3.client('dynamodb')
    tableName = os.environ['TABLE_NAME']

    def getCurrentPrimaryRegion(key):
        result = dynamo_client.get_item(
            TableName=tableName,
            Key={
                'connectRegion': {'S': key}
            }
        )
        logger.debug(result['Item']['lexRegion']['S'])
        return result['Item']['lexRegion']['S']

    def lambda_handler(event, context):
        region = event["Details"]["Parameters"]["region"]
        return {
            'statusCode': 200,
            'primaryCode': getCurrentPrimaryRegion(region)
        }

  8. In the Environment variables section, choose Edit.
  9. Add an environment variable with Key as TABLE_NAME and Value as lexDR.
  10. Click Save to save the environment variable.
  11. Click Save to save the Lambda function.

Environment Variables section in Lambda Console
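When Amazon Connect invokes a Lambda function, it passes the flow's parameters under Details.Parameters in the event payload. A quick local sanity check of the event parsing, with the DynamoDB read left out (the helper name here is illustrative):

```python
# The shape of the event Amazon Connect sends to an invoked Lambda
# function: flow parameters arrive under Details -> Parameters.
sample_event = {
    "Details": {
        "Parameters": {
            "region": "us-east-1"
        }
    }
}

def extract_region(event):
    """The same lookup lexDRGetRegion performs before querying DynamoDB."""
    return event["Details"]["Parameters"]["region"]

print(extract_region(sample_event))  # us-east-1
```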

Deploying the health check function

Create another Lambda function in us-east-1 to implement the health check functionality.

  1. On the Lambda console, choose Create function.
  2. For Function name, enter lexDRTest.
  3. For Runtime, choose Python 3.8.
  4. Under Permissions, choose Use an existing role.
  5. Choose lexDRRole.
  6. Choose Create function.
  7. In the Lambda code editor, enter the following code:
    import json
    import boto3
    import sys
    import os

    dynamo_client = boto3.client('dynamodb')
    primaryRegion = os.environ['PRIMARY_REGION']
    secondaryRegion = os.environ['SECONDARY_REGION']
    tableName = os.environ['TABLE_NAME']
    primaryRegion_client = boto3.client('lex-runtime', region_name=primaryRegion)
    secondaryRegion_client = boto3.client('lex-runtime', region_name=secondaryRegion)

    def getCurrentPrimaryRegion():
        result = dynamo_client.get_item(
            TableName=tableName,
            Key={'connectRegion': {'S': primaryRegion}}
        )
        return result['Item']['lexRegion']['S']

    def updateTable(region):
        dynamo_client.update_item(
            TableName=tableName,
            Key={'connectRegion': {'S': primaryRegion}},
            UpdateExpression='set lexRegion = :region',
            ExpressionAttributeValues={':region': {'S': region}}
        )

    def put_session(botname, botalias, user, region):
        client = primaryRegion_client
        if region == secondaryRegion:
            client = secondaryRegion_client
        try:
            response = client.put_session(botName=botname, botAlias=botalias, userId=user)
            if response['ResponseMetadata']['HTTPStatusCode'] != 200 or not response['sessionId']:
                return 501
            if getCurrentPrimaryRegion() != region:
                updateTable(region)
            return 200
        except Exception:
            print('ERROR: {}'.format(sys.exc_info()[0]))
            return 501

    def send_message(botname, botalias, user, region):
        client = primaryRegion_client
        if region == secondaryRegion:
            client = secondaryRegion_client
        try:
            message = os.environ['SAMPLE_UTTERANCE']
            expectedOutput = os.environ['EXPECTED_RESPONSE']
            response = client.post_text(botName=botname, botAlias=botalias,
                                        userId=user, inputText=message)
            if response['message'] != expectedOutput:
                print('ERROR: Expected_Response=' + expectedOutput +
                      ', Response_Received=' + response['message'])
                return 500
            if getCurrentPrimaryRegion() != region:
                updateTable(region)
            return 200
        except Exception:
            print('ERROR: {}'.format(sys.exc_info()[0]))
            return 501

    def lambda_handler(event, context):
        botName = os.environ['BOTNAME']
        botAlias = os.environ['BOT_ALIAS']
        testUser = os.environ['TEST_USER']
        testMethod = os.environ['TEST_METHOD']
        if testMethod == 'send_message':
            primaryRegion_response = send_message(botName, botAlias, testUser, primaryRegion)
        else:
            primaryRegion_response = put_session(botName, botAlias, testUser, primaryRegion)
        if primaryRegion_response != 501:
            primaryRegion_client.delete_session(botName=botName, botAlias=botAlias, userId=testUser)
        if primaryRegion_response != 200:
            if testMethod == 'send_message':
                secondaryRegion_response = send_message(botName, botAlias, testUser, secondaryRegion)
            else:
                secondaryRegion_response = put_session(botName, botAlias, testUser, secondaryRegion)
            if secondaryRegion_response != 501:
                secondaryRegion_client.delete_session(botName=botName, botAlias=botAlias, userId=testUser)
            if secondaryRegion_response != 200:
                updateTable('err')
        return {'statusCode': 200, 'body': 'Success'}

  8. In the Environment variables section, choose Edit, and add the following environment variables:
    • BOTNAME: OrderFlowers
    • BOT_ALIAS: ver_one
    • SAMPLE_UTTERANCE: I would like to order some flowers.
      (The example utterance you want to use to send a message to the bot.)
    • EXPECTED_RESPONSE: What type of flowers would you like to order?
      (The expected response from the bot when it receives the above sample utterance.)
    • PRIMARY_REGION: us-east-1
    • SECONDARY_REGION: us-west-2
    • TEST_METHOD: put_session or send_message
      • send_message: This method calls the Lex Runtime postText operation, which takes an utterance and maps it to one of the trained intents. postText tests the natural language understanding (NLU) capability of Amazon Lex, and you will incur a small charge of $0.00075 per request.
      • put_session: This method calls the Lex Runtime putSession operation, which creates a new session for the user. putSession does NOT test the NLU capability of Amazon Lex.
    • TEST_USER: test
  9. Click Save to save the environment variables.
  10. In the Basic Settings section, update the Timeout value to 15 seconds.
  11. Click Save to save the Lambda function.

Environment Variables section in Lambda Console
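For reference, the full set of environment variables configured above can be written as a Python dict, using the values from this walkthrough:

```python
# Environment variables for the lexDRTest health check function,
# matching the values used in this walkthrough.
lexdrtest_env = {
    "BOTNAME": "OrderFlowers",
    "BOT_ALIAS": "ver_one",
    "SAMPLE_UTTERANCE": "I would like to order some flowers.",
    "EXPECTED_RESPONSE": "What type of flowers would you like to order?",
    "PRIMARY_REGION": "us-east-1",
    "SECONDARY_REGION": "us-west-2",
    "TEST_METHOD": "put_session",  # or "send_message" to exercise NLU
    "TEST_USER": "test",
}

print(len(lexdrtest_env))  # 8
```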

Creating an Amazon CloudWatch rule

To trigger the health check function to run every 5 minutes, you create an Amazon CloudWatch rule.

  1. On the CloudWatch console, under Events, choose Rules.
  2. Choose Create rule.
  3. Under Event Source, change the option to Schedule.
  4. For Fixed rate of, enter 5 minutes.
  5. Under Targets, choose Add target.
  6. Choose Lambda function as the target.
  7. For Function, choose lexDRTest.
  8. Under Configure input, choose Constant(JSON text), and enter {}
  9. Choose Configure details.
  10. Under Rule definition, for Name, enter lexHealthCheckRule.
  11. Choose Create rule.

You should now have a lexHealthCheckRule CloudWatch rule scheduled to invoke your lexDRTest function every 5 minutes. This checks if your primary bot is healthy and updates the DynamoDB table accordingly.

Creating your Amazon Connect instance

You now create an Amazon Connect instance to test the multi-region pattern for the bots in the same Region where you created the lexDRTest function.

  1. Create an Amazon Connect instance if you don’t already have one.
  2. On the Amazon Connect console, choose the instance alias where you want the Amazon Connect flow to be.
  3. Choose Contact flows.
  4. Under Amazon Lex, select the OrderFlowers bot from us-east-1 and click Add Lex Bot.
  5. Select the OrderFlowers bot from us-west-2 and click Add Lex Bot.
    Adding Lex Bots in Connect Contact Flows
  6. Under AWS Lambda, select lexDRGetRegion and click Add Lambda Function.
  7. Log in to your Amazon Connect instance by clicking Overview in the left panel and clicking the login link.
  8. Click Routing in the left panel, and then click Contact Flows in the drop down menu.
  9. Click the Create Contact Flow button.
  10. Click the down arrow button next to the Save button, and click on Import Flow.
  11. Download the contact flow Flower DR Flow. Upload this file in the Import Flow dialog.
    Amazon Connect Contact Flow
  12. In the contact flow, click on the Invoke AWS Lambda Function block to open its properties panel on the right.
  13. Select the lexDRGetRegion function and click Save.
  14. Click on the Publish button to publish the contact flow.

Associating a phone number with the contact flow

Next, you will associate a phone number with your contact flow, so you can call in and test the OrderFlowers bot.

  1. Click on the Routing option in the left navigation panel.
  2. Click on Phone Numbers.
  3. Click on Claim Number.
  4. Select your country code and select a Phone Number.
  5. In the Contact flow/IVR select box, select the contact flow Flower DR Flow imported in the earlier step.
  6. Wait for a few minutes, and then call into that number to interact with the OrderFlowers bot.

Testing your integration

To test this solution, you can simulate a failure in the us-east-1 Region by implementing the following:

  1. Open the Amazon Lex console in the us-east-1 Region.
  2. Select the OrderFlowers bot.
  3. Click on Settings.
  4. Delete the bot alias ver_one.

When the health check runs the next time, it tries to communicate with the Lex bot in the us-east-1 Region. It fails to get a successful response because the bot alias no longer exists, so it then makes the call to the secondary Region, us-west-2. Upon receiving a successful response from us-west-2, it updates the lexRegion column in the lexDR DynamoDB table with us-west-2.

After this, all subsequent calls to Amazon Connect in us-east-1 will start interacting with the Lex bot in us-west-2. This automatic switchover demonstrates how this architectural pattern can help achieve business continuity in the event of a service failure.
Between the time you delete the bot alias and the next health check run, calls to Amazon Connect will fail. However, after the health check runs, business operations resume automatically. The shorter the interval between health check runs, the shorter the outage you will have. You can change this interval by editing the Amazon CloudWatch rule lexHealthCheckRule.
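Back-of-the-envelope arithmetic for the worst-case outage window (a sketch: 15 seconds is the Lambda timeout configured earlier, and the real window also depends on CloudWatch scheduling jitter):

```python
def worst_case_outage_seconds(check_interval_minutes, check_timeout_seconds=15):
    # A failure occurring just after a health check passes goes undetected
    # until the next scheduled run completes, so the window is roughly one
    # full interval plus the check's own runtime (bounded by its timeout).
    return check_interval_minutes * 60 + check_timeout_seconds

print(worst_case_outage_seconds(5))  # 315
print(worst_case_outage_seconds(1))  # 75
```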

To make the health check pass in us-east-1 again, recreate the ver_one alias of the OrderFlowers bot in us-east-1.


Cleaning up

To avoid incurring charges in the future, delete all the resources created above:

  1. The Amazon Lex bot OrderFlowers created in us-east-1 and us-west-2
  2. The CloudWatch rule lexHealthCheckRule
  3. The DynamoDB table lexDR
  4. The Lambda functions lexDRTest and lexDRGetRegion
  5. The IAM role lexDRRole
  6. The contact flow Flower DR Flow


Conclusion

Coupled with Amazon Lex for self-service, Amazon Connect allows you to easily create intuitive customer service experiences. This post offered a multi-Region approach to high availability so that, if a bot or the supporting fulfillment APIs are under pressure in one Region, resources from a different Region can continue to serve customer demand.

About the Authors

Shanthan Kesharaju is a Senior Architect in the AWS ProServe team. He helps our customers with their Conversational AI strategy, architecture, and development. Shanthan has an MBA in Marketing from Duke University, an MS in Management Information Systems from Oklahoma State University, and a Bachelors in Technology from Kakatiya University in India. He is also currently pursuing his third Masters, in Analytics, from Georgia Tech.

Soyoung Yoon is a Conversation A.I. Architect at AWS Professional Services, where she works with customers across multiple industries to develop specialized conversational assistants that help these customers provide their users with faster and more accurate information through natural language. Soyoung has an M.S. and B.S. in Electrical and Computer Engineering from Carnegie Mellon University.



How Will 5G Impact Customer Experience?




5G is a breakthrough technology that promises to bring new innovations and change the way people traverse the Internet, with faster connection speeds, lower latency, high bandwidth, and the ability to connect one million devices per square kilometre. Telcos are deploying 5G to enhance our day-to-day lives.

“When clubbed with other technologies like Artificial Intelligence, Internet of Things (IoT), it could mean a lot to a proliferation of other technologies like AR/VR, data analytics.” 

5G can be a boon for businesses, delivering increased reliability, efficiency, and performance, if it is used to drive more value for customers and business stakeholders and meet their expectations with the help of the digital technologies discussed below:

Consumer Expectations are on the Rise

Today, customer service teams provide and manage customer support via call centres and digital platforms. The rollout of 5G is expected to bring further benefits, with a positive impact on customer service: companies can improve their existing personalized service offerings and create new solutions that deepen customer engagement.

For instance, salespeople in a retail store can be equipped with layers of information about customers’ behaviour and choices, helping them build a rich, tailored experience for the customers walking through the store.

Video Conferencing/streaming is Just a Few Clicks Away

Video support is considered to be a critical part of Consumer Experience (CX) and will open new avenues for consumer-led enterprises.

“As per a survey conducted by Oracle of 5,000 people, 75% of people understand the efficiency and value of video chat and voice calls.”

CX representatives have used video support to troubleshoot highly technical situations through video chat and screen sharing with a few clicks, potentially reducing the number of in-home technician visits during critical situations like the coronavirus pandemic.

Also, video conferencing now often includes an option to record a quick instant video describing a process or solution, doing away with long step-by-step emails. Enterprises can develop advanced user guides for troubleshooting issues, featuring video teasers for resolving common problems.

However, video conferencing and chat demand high-definition video quality and an uninterrupted network with smooth streaming. This means operators need to carry out network maintenance at regular intervals to check for any 5G PIM (passive intermodulation) formation on network cell towers, which can reduce receive sensitivity and performance, degrading network speed, video resolution, and so on.

Thus, PIM testing becomes critical for delivering enhanced network services without interference, which is necessary for high-resolution online video conferencing, chat, and more.

Increased Smart Devices and the Ability to Troubleshoot via Self-Service

The inception of 5G will give a boost to the already-growing IoT and smart device market.

According to the GSM Association, an industry organization representing telecom operators across the globe, the number of these smart device IoT connections is expected to double between 2019 and 2025, to more than 25 billion.

With lower latency and improved reliability, 5G has a lot more to offer as it connects a large number of devices. This will ultimately curb the manpower needed for customer support, reducing labour costs for the enterprise. Moreover, IoT-connected devices and 5G’s high-speed network permit consumers to self-troubleshoot these devices in their own homes.

To facilitate these high-resolution networks, telecom operators need to perform 5G network testing to identify issues and take corrective actions that improve the network and integrate advanced capabilities, making it more efficient than previous generations while offering wider network coverage.

Enhanced Augmented Reality (AR) / Virtual Reality (VR) Capabilities

As these tools become widely used, customers are offered virtual stores and immersive experiences, using AR to preview products in their own homes in real time.

“‘Augmented Retail: The New Consumer Reality’ study by Nielsen in 2019 suggested that AR/VR has created a lot of interest in people and they are willing to use these technologies to check out products.” 

Analysis of Bulk Data With Big Data Analytics

Enterprises deal with a huge volume of data daily. 5G’s advanced network connectivity across a large number of devices makes it possible to collect this data and deliver faster analytics.

Companies will be able to process this vast amount of unstructured data, combined with Artificial Intelligence (AI), to extract meaningful insights and use them to draft business strategies, such as studying customer buying behaviour and targeting those segments with customized service offerings.

As per Ericsson’s AI in networks report, 68% of Communications Service Providers (CSPs) consider improving CX a business objective, while more than half already believe AI will be a key technology for improving overall CX. Thus, big data analytics will be crucial for harnessing all this new data and enhancing the customer experience.


From a CX point of view, the benefits of 5G will extend far beyond the individual experience. Real-time decisions will accelerate as 5G and other new-age technologies like AI, ML, and IoT become prevalent. As 5G deployment continues to grow, so will the trends described above, ultimately improving business productivity, growing the customer base, and bringing more revenue.




Resiliency And Security: Future-Proofing Our AI Future




Deploying AI in the enterprise means thinking forward for resiliency and security (GETTY IMAGES)

By Allison Proffitt, AI Trends

On the first day of the Second Annual AI World Government conference and expo held virtually October 28-30, a panel moderated by Robert Gourley, cofounder & CTO of OODA, raised the issue of AI resiliency. Future-proofing AI solutions requires keeping your eyes open to upcoming likely legal and regulatory roadblocks, said Antigone Peyton, General Counsel & Innovation Strategist at Cloudigy Law. She takes a “use as little as possible” approach to data, raising questions such as: How long do you really need to keep training data? Can you abstract training data to the population level, removing some risk while still keeping enough data to find dangerous biases?

Stephen Dennis, Director of Advanced Computing Technology Centers at the U.S. Department of Homeland Security, also recommended a forward-looking posture, but in terms of the AI workforce. In particular, Dennis challenged the audience to consider the maturity level of the users of new AI technology. Full automation is not likely a first AI step, he said. Instead, he recommends automating slowly, bringing the team along. Take them a technology that works in the context they are used to, he said. They shouldn’t need a lot of training. Mature your team with the technology. Remove the human from the loop slowly.

Of course, some things will never be fully automated. Brian Drake, U.S. Department of Defense, pointed out that some tasks are inherently human-to-human interactions—such as gathering human intelligence. But AI can help humans do even those tasks better, he said.

He also cautioned enterprises to consider their contingency plan as they automate certain tasks. For example, we rarely remember phone numbers anymore. We’ve outsourced that data to our phones while accepting a certain level of risk. If you deploy a tool that replaces a human analytic activity, that’s fine, Drake said. But be prepared with a contingency plan, a solution for failure.   

Organizing for Resiliency

All of these changes will certainly require some organizational rethinking, the panel agreed. While government is organized in a top down fashion, Dennis said, the most AI-forward companies—Uber, Netflix—organize around the data. That makes more sense, he proposed, if we are carefully using the data.

Data models—like the new car trope—begin degrading the first day they are used. Perhaps the source data becomes outdated. Maybe an edge use case was not fully considered. The deployment of the model itself may prompt a completely unanticipated behavior. We must capture and institutionalize those assessments, Dennis said. He proposed an AI quality control team—different from the team building and deploying algorithms—to understand degradation and evaluate the health of models in an ongoing way. His group is working on this with sister organizations in cyber security, and he hopes the best practices they develop can be shared to the rest of the department and across the government.

Peyton called for education—and reeducation—across organizations. She called the AI systems we use today a “living and breathing animal”. This is not, she emphasized, an enterprise-level system that you buy once and drop into the organization. AI systems require maintenance, and someone must be assigned to that caretaking.

But at least at the Department of Defense, Drake pointed out, all employees are not expected to become data scientists. We’re a knowledge organization, he said, but even if reskilling and retraining are offered, a federal workforce does not have to universally accept those opportunities. However, surveys across DoD have revealed an “appetite to learn and change”, Drake said. The Department is hoping to feed that curiosity with a three-tiered training program offering executive-level overviews, practitioner-level training on the tools currently in place, and formal data science training. He encouraged a similar structure to AI and data science training across other organizations.

Bad AI Actors

Gourley turned the conversation to bad actors. The very first telegraph message between Washington DC and Baltimore in 1844 was an historic achievement. The second and third messages, Gourley said, were spam and fraud. Cybercrime is not new, and it is absolutely guaranteed in AI. What is the way forward? Gourley asked the panel.

“Our adversaries have been quite clear about their ambitions in this space,” Drake said. “The Chinese have published a national artificial intelligence strategy; the Russians have done the same thing. They are resourcing those plans and executing them.”

In response, Drake argued for the vital importance of ethics frameworks and for the United States to embrace and use these technologies in an “ethically up front and moral way.” He predicted a formal codification around AI ethics standards in the next couple of years similar to international nuclear weapons agreements now.




AI Projects Progressing Across Federal Government Agencies




The AI World Government Conference kicked off virtually on Oct. 28 and continues on Oct. 29 and 30. Tune in to learn about AI strategies and plans of federal agencies. (Credit: Getty Images)

By AI Trends Staff

Government agencies are gaining experience with AI on projects, with practitioners focusing on defining the project benefit and ensuring the data quality is good enough for success. That was a takeaway from talks on the opening day of the Second Annual AI World Government conference and expo, held virtually on October 28.

Wendy Martinez, PhD, director of the Mathematical Statistics Research Center, US Bureau of Labor Statistics

Wendy Martinez, PhD, director of the Mathematical Statistics Research Center, with the Office of Survey Methods Research in the US Bureau of Labor Statistics, described a project to use natural language understanding AI to parse text fields of databases and automatically correlate them to job occupations in the federal system. One lesson learned was that despite interest in sharing experience with other agencies, “You can’t build a model based on a certain dataset and use the model somewhere else,” she stated. Instead, each project needs its own source of data and a model tuned to it.

Renata Miskell, Chief Data Officer in the Office of the Inspector General for the US Department of Health and Human Services, fights fraud and abuse for an agency that oversees over $1 trillion in annual spending, including on Medicare and Medicaid. She emphasized the importance of ensuring that data is not biased and that models generate ethical recommendations. For example, to track fraud in its grant programs awarding over $700 billion annually, “It’s important to understand the data source and context,” she stated. The unit studied five years of data from “single audits” of individual grant recipients, which included a lot of unstructured text data. The goal was to pass relevant info to the audit team. “It took a lot of training,” she stated. “Initially we had many false positives.” The team tuned for data quality and ethical use, steering away from blind assumptions. “If we took for granted that the grant recipients were high risk, we would be unfairly targeting certain populations,” Miskell stated.

Dave Cook, senior director of AI/ML Engineering Services, Figure Eight Federal

In the big picture, many government agencies are engaged in AI projects and a lot of collaboration is going on. Dave Cook is senior director of AI/ML Engineering Services for Figure Eight Federal, which works on AI projects for federal clients. He has years of experience working in private industry and government agencies, mostly now the Department of Defense and intelligence agencies. “In AI in the government right now, groups are talking to one another and trying to identify best practices around whether to pilot, prototype, or scale up,” he said. “The government has made some leaps over the past few years, and a lot of sorting out is still going on.”

Ritu Jyoti, Program VP, AI Research and Global AI Research lead for IDC consultants and a program contributor to the event, has over 20 years of experience working with companies including EMC, IBM Global Services, and PwC Consulting. “AI has progressed rapidly,” she said. A global survey IDC conducted in March found the main business drivers for AI adoption to be better customer experience, improved employee productivity, accelerated innovation, and improved risk management. A fair number of AI projects failed; the main reasons were unrealistic expectations, AI that did not perform as expected, lack of access to the needed data, and teams lacking the necessary skills. “The results indicate a lack of strategy,” Jyoti stated.

David Bray, PhD, Inaugural Director of the nonprofit Atlantic Council GeoTech Center, and a contributor to the event program, posted questions on how data governance challenges the future of AI. He asked what questions practitioners and policymakers around AI should be asking, and how the public can participate more in deciding what can be done with data. “You choose not to be a data nerd at your own peril,” he said.

Anthony Scriffignano, PhD, senior VP & Chief Data Scientist with Dun & Bradstreet, said in the pandemic era with many segments of the economy shut down, companies are thinking through and practicing different ways of doing things. “We sit at the point of inflection. We have enough data and computer power to use the AI techniques invented generations ago in some cases,” he said. This opportunity poses challenges related to what to try and what not to try, and “sometimes our actions in one area cause a disruption in another area.”

AI World Government continues tomorrow and Friday.

(Ed. Note: Dr. Eric Schmidt, former CEO of Google and now chair of the National Security Commission on AI, was involved today in a discussion, Transatlantic Cooperation Around the Future of AI, with Ambassador Mircea Geoana, Deputy Secretary General, North Atlantic Treaty Organization, and Secretary Robert O. Work, vice chair of the National Security Commission. Convened by the Atlantic Council, the event can be viewed here.)

