Detecting and redacting PII using Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to find insights and relationships, such as people, places, sentiments, and topics, in unstructured text. You can now use Amazon Comprehend ML capabilities to detect and redact personally identifiable information (PII) in customer emails, support tickets, product reviews, social media, and more; no ML experience is required. For example, you can analyze support tickets and knowledge articles to detect PII entities and redact the text before you index the documents in your search solution, so that the indexed documents are free of PII. Redacting PII entities helps you protect privacy and comply with local laws and regulations.

Customer use case: TeraDact Solutions

TeraDact Solutions has already put this new feature to work. TeraDact Solutions’ software offers a robust alternative for secure information sharing in a world of ever-increasing compliance and privacy concerns. With its signature Information Identification & Presentation (IIaP™) capabilities, TeraDact’s tools provide the user with a safe information sharing environment. “Using Amazon Comprehend for PII redaction with our tokenization system not only helps us reach a larger set of our customers but also helps us overcome the shortcomings of rules-based PII detection which can result in false alarms or missed details. PII detection is critical for businesses and with the power of context-aware NLP models from Comprehend we can uphold the trust customers place in us with their information. Amazon is innovating in ways to help push our business forward by adding new features which are critical to our business thereby providing enhanced service to 100% of customers able to access Comprehend in AWS,” said Chris Schrichte, CEO, TeraDact Solutions, Inc.

In this post, I cover how to use Amazon Comprehend to detect PII and redact the PII entities via the AWS Management Console and the AWS Command Line Interface (AWS CLI).

Detecting PII in Amazon Comprehend

When you analyze text using Amazon Comprehend real-time analysis, Amazon Comprehend automatically identifies PII entities in the following categories:

  • Financial: BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_NUMBER, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, PIN
  • Personal: NAME, ADDRESS, PHONE, EMAIL, AGE
  • Technical security: USERNAME, PASSWORD, URL, AWS_ACCESS_KEY, AWS_SECRET_KEY, IP_ADDRESS, MAC_ADDRESS
  • National: SSN, PASSPORT_NUMBER, DRIVER_ID
  • Other: DATE_TIME

For each detected PII entity, you get the entity type, a confidence score, and begin and end offsets. These offsets let you locate each PII entity in your documents so you can redact it before the text reaches secure storage or downstream solutions.
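As an illustration, the offsets can be applied directly. The following helper is a hypothetical sketch, not part of the Comprehend API, showing how the Entities list from a response could drive redaction in either style discussed later in this post:

```python
def redact(text, entities, mode="type", mask_char="*"):
    """Replace each detected PII span with its entity type or a mask character.

    `entities` is the Entities list from a detect-pii-entities response,
    i.e. dicts carrying Type, BeginOffset, and EndOffset.
    """
    # Work right to left so earlier offsets stay valid as the text changes.
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        begin, end = ent["BeginOffset"], ent["EndOffset"]
        if mode == "type":
            replacement = "[" + ent["Type"] + "]"
        else:
            # Preserve the span length, one mask character per original character.
            replacement = mask_char * (end - begin)
        text = text[:begin] + replacement + text[end:]
    return text
```

For example, `redact("My name is Jane Doe.", [{"Type": "NAME", "BeginOffset": 11, "EndOffset": 19}])` returns `My name is [NAME].`; passing `mode="mask"` returns `My name is ********.` instead.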

Analyzing text on the Amazon Comprehend console

To get started with Amazon Comprehend, all you need is an AWS account. To use the console, complete the following steps:

  1. On the Amazon Comprehend console, in the Input text section, select Built-in.
  2. For Input text, enter your text.
  3. Choose Analyze.
  4. On the Insights page, choose the PII tab.

The PII tab shows color-coded text to indicate different PII entity types, such as name, email, address, phone, and others. The Results section shows more information about the text. Each entry shows the PII entity, its type, and the level of confidence Amazon Comprehend has in this analysis.

Analyzing text via the AWS CLI

To perform real-time analysis using the AWS CLI, enter the following code:

aws comprehend detect-pii-entities --language-code en --text " Good morning, everybody. My name is Van Bokhorst Serdar, and today I feel like sharing a whole lot of personal information with you. Let's start with my Email address SerdarvanBokhorst@dayrep.com. My address is 2657 Koontz Lane, Los Angeles, CA. My phone number is 818-828-6231. My Social security number is 548-95-6370. My Bank account number is 940517528812 and routing number 195991012. My credit card number is 5534816011668430, Expiration Date 6/1/2022, my C V V code is 121, and my pin 123456. Well, I think that's it. You know a whole lot about me. And I hope that Amazon comprehend is doing a good job at identifying PII entities so you can redact my personal information away from this document. Let's check."

To view the output, open the JSON response object and look at the detected PII entities. For each entity, the service returns the type of PII, confidence score metric, BeginOffset, and EndOffset. See the following code:

{
  "Entities": [
    { "Score": 0.9996334314346313, "Type": "NAME", "BeginOffset": 36, "EndOffset": 55 },
    { "Score": 0.9999902248382568, "Type": "EMAIL", "BeginOffset": 167, "EndOffset": 195 },
    { "Score": 0.9999983310699463, "Type": "ADDRESS", "BeginOffset": 211, "EndOffset": 245 },
    { "Score": 0.9999997615814209, "Type": "PHONE", "BeginOffset": 265, "EndOffset": 277 },
    { "Score": 0.9999996423721313, "Type": "SSN", "BeginOffset": 308, "EndOffset": 319 },
    { "Score": 0.9999984502792358, "Type": "BANK_ACCOUNT_NUMBER", "BeginOffset": 347, "EndOffset": 359 },
    { "Score": 0.9999974966049194, "Type": "BANK_ROUTING", "BeginOffset": 379, "EndOffset": 388 },
    { "Score": 0.9999991655349731, "Type": "CREDIT_DEBIT_NUMBER", "BeginOffset": 415, "EndOffset": 431 },
    { "Score": 0.9923601746559143, "Type": "CREDIT_DEBIT_EXPIRY", "BeginOffset": 449, "EndOffset": 457 },
    { "Score": 0.9999997615814209, "Type": "CREDIT_DEBIT_CVV", "BeginOffset": 476, "EndOffset": 479 },
    { "Score": 0.9998345375061035, "Type": "PIN", "BeginOffset": 492, "EndOffset": 498 }
  ]
}
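The same real-time analysis is available programmatically. Here is a minimal sketch assuming the boto3 SDK and valid AWS credentials; the region, sample text, and the summarize_entities helper are illustrative, not part of the service:

```python
# Assumes boto3 is installed and AWS credentials are configured:
# import boto3
# comprehend = boto3.client("comprehend", region_name="us-east-1")

def summarize_entities(response):
    # Condense a detect_pii_entities response into (type, begin, end)
    # tuples for downstream redaction or logging.
    return [(e["Type"], e["BeginOffset"], e["EndOffset"])
            for e in response["Entities"]]

# response = comprehend.detect_pii_entities(
#     Text="My name is Van Bokhorst Serdar.", LanguageCode="en")
# print(summarize_entities(response))
```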

Asynchronous PII redaction batch processing on the Amazon Comprehend console

You can redact documents by using Amazon Comprehend asynchronous operations. Choose the redaction mode Replace with PII entity type to replace each PII entity with its type, or choose the redaction mode Replace with character to mask each PII entity, replacing its characters with a character of your choice (!, #, $, %, &, *, or @).

To analyze and redact large documents and large collections of documents, ensure that the documents are stored in an Amazon Simple Storage Service (Amazon S3) bucket and start an asynchronous operation to detect and redact PII in the documents. The results of the analysis are returned in an S3 bucket.

  1. On the Amazon Comprehend console, choose Analysis jobs.
  2. Choose Create job.
  3. On the Create analysis job page, for Name, enter a name (for this post, we enter comprehend-blog-redact-01).
  4. For Analysis type, choose Personally identifiable information (PII).
  5. For Language, choose English.
  6. In the PII detection settings section, for Output mode, select Redactions.
  7. Expand PII entity types and select the entity types to redact.
  8. For Redaction mode, choose Replace with PII entity type.

Alternatively, you can choose Replace with character to replace PII entities with a character of your choice (!, #, $, %, &, *, or @).

  9. In the Input data section, for Data source, select My documents.
  10. For S3 location, enter the S3 path for pii-s3-input.txt.

This text file has the same example content we used earlier for real-time analysis.

  11. In the Output data section, for S3 location, enter the path to the output folder in Amazon S3.

Make sure you choose the correct input and output paths based on how you organized the documents.

  12. In the Access permissions section, for IAM role, select Create an IAM role.

You need an AWS Identity and Access Management (IAM) role with the permissions required to access the input and output S3 buckets; the role is created and propagated for the job.

  13. For Permissions to access, choose Input and Output S3 buckets.
  14. For Name suffix, enter a suffix for your role (for this post, we enter ComprehendPIIRole).
  15. Choose Create job.

You can see the job comprehend-blog-redact-01 with the job status In progress.

When the job status changes to Completed, you can access the output file. The pii-s3-input.txt file has the same example content we used earlier, and the Replace with PII entity type redaction mode replaces each PII entity with its type. Your output looks like the following text:

Good morning, everybody. My name is [NAME], and today I feel like sharing a whole lot of personal information with you. Let's start with my Email address [EMAIL]. My address is [ADDRESS] My phone number is [PHONE]. My Social security number is [SSN]. My Bank account number is [BANK-ACCOUNT-NUMBER] and routing number [BANK-ROUTING]. My credit card number is [CREDIT-DEBIT-NUMBER], Expiration Date [CREDIT-DEBIT-EXPIRY], my C V V code is [CREDIT-DEBIT-CVV], and my pin [PIN]. Well, I think that's it. You know a whole lot about me. And I hope that Amazon comprehend is doing a good job at identifying PII entities so you can redact my personal information away from this document. Let's check.

If you have very long entity types, you may prefer to mask PII with a character instead. If you choose to replace PII with the character *, your output looks like the following text:

Good morning, everybody. My name is *******************, and today I feel like sharing a whole lot of personal information with you. Let's start with my Email address ****************************. My address is ********************************** My phone number is ************. My Social security number is ***********. My Bank account number is ************ and routing number *********. My credit card number is ****************, Expiration Date ********, my C V V code is ***, and my pin ******. Well, I think that's it. You know a whole lot about me. And I hope that Amazon comprehend is doing a good job at identifying PII entities so you can redact my personal information away from this document. Let's check.

Asynchronous PII redaction batch processing via the AWS CLI

To perform the PII redaction job using the AWS CLI, enter the following code:

aws comprehend start-pii-entities-detection-job --input-data-config S3Uri="s3://ai-ml-services-lab/public/labs/comprehend/pii/input/redact/pii-s3-input.txt" --output-data-config S3Uri="s3://ai-ml-services-lab/public/labs/comprehend/pii/output/redact/" --mode "ONLY_REDACTION" --redaction-config PiiEntityTypes="BANK_ACCOUNT_NUMBER","BANK_ROUTING","CREDIT_DEBIT_NUMBER","CREDIT_DEBIT_CVV","CREDIT_DEBIT_EXPIRY","PIN","EMAIL","ADDRESS","NAME","PHONE","SSN",MaskMode="REPLACE_WITH_PII_ENTITY_TYPE" --data-access-role-arn "arn:aws:iam::<ACCOUNTID>:role/service-role/AmazonComprehendServiceRole-ComprehendPIIRole" --job-name "comprehend-blog-redact-001" --language-code "en"

The request yields the following output:

{
  "JobId": "e41101e2f0919a320bc0583a50f86b5f",
  "JobStatus": "SUBMITTED"
}

To monitor the job request, enter the following code:

aws comprehend describe-pii-entities-detection-job --job-id "e41101e2f0919a320bc0583a50f86b5f"

The following output shows that the job is complete:

{
  "PiiEntitiesDetectionJobProperties": {
    "JobId": "e41101e2f0919a320bc0583a50f86b5f",
    "JobName": "comprehend-blog-redact-001",
    "JobStatus": "COMPLETED",
    "SubmitTime": <SubmitTime>,
    "EndTime": <EndTime>,
    "InputDataConfig": {
      "S3Uri": "s3://ai-ml-services-lab/public/labs/comprehend/pii/input/redact/pii-s3-input.txt",
      "InputFormat": "ONE_DOC_PER_LINE"
    },
    "OutputDataConfig": {
      "S3Uri": "s3://ai-ml-services-lab/public/labs/comprehend/pii/output/redact/<AccountID>-PII-e41101e2f0919a320bc0583a50f86b5f/output/"
    },
    "RedactionConfig": {
      "PiiEntityTypes": [ "BANK_ACCOUNT_NUMBER", "BANK_ROUTING", "CREDIT_DEBIT_NUMBER", "CREDIT_DEBIT_CVV", "CREDIT_DEBIT_EXPIRY", "PIN", "EMAIL", "ADDRESS", "NAME", "PHONE", "SSN" ],
      "MaskMode": "REPLACE_WITH_PII_ENTITY_TYPE"
    },
    "LanguageCode": "en",
    "DataAccessRoleArn": "arn:aws:iam::<AccountID>:role/ComprehendBucketAccessRole",
    "Mode": "ONLY_REDACTION"
  }
}
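The monitoring call above can be wrapped in a simple polling loop. The following is a hypothetical sketch assuming boto3 and valid AWS credentials; the wait_for_job helper and the delay value are illustrative, not part of the SDK:

```python
import time

# Statuses after which the job will not change again.
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "STOPPED"}

def wait_for_job(comprehend, job_id, delay=30):
    """Poll describe_pii_entities_detection_job until the job reaches a
    terminal status, then return its properties."""
    while True:
        props = comprehend.describe_pii_entities_detection_job(
            JobId=job_id)["PiiEntitiesDetectionJobProperties"]
        if props["JobStatus"] in TERMINAL_STATUSES:
            return props
        time.sleep(delay)

# Usage (requires AWS credentials):
# import boto3
# comprehend = boto3.client("comprehend")
# props = wait_for_job(comprehend, "e41101e2f0919a320bc0583a50f86b5f")
```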

After the job is complete, the output is plain text, the same format as the input file. Other Amazon Comprehend asynchronous jobs (such as start-entities-detection-job) produce an output file called output.tar.gz, a compressed archive that contains the output of the operation, but start-pii-entities-detection-job preserves the input folder and file structure. For our comprehend-blog-redact-001 job, the input file pii-s3-input.txt has a corresponding pii-s3-input.txt.out file in the job's output folder, containing the redacted text. You can find the Amazon S3 location in the monitoring output: the JSON element PiiEntitiesDetectionJobProperties.OutputDataConfig.S3Uri points to the folder that holds pii-s3-input.txt.out with the content redacted by PII entity type.
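Because the job mirrors the input layout, the redacted file's location can be derived from the job description. This small helper is a hypothetical illustration; the URI in the usage comment comes from the example job above:

```python
def redacted_output_uri(output_s3_uri, input_filename):
    # The PII redaction job writes <input filename>.out under the
    # OutputDataConfig S3 prefix, preserving the input file name.
    return output_s3_uri.rstrip("/") + "/" + input_filename + ".out"

# e.g. redacted_output_uri(
#     "s3://ai-ml-services-lab/public/labs/comprehend/pii/output/redact/"
#     "<AccountID>-PII-e41101e2f0919a320bc0583a50f86b5f/output/",
#     "pii-s3-input.txt")
```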

Conclusion

As of this writing, the PII detection feature in Amazon Comprehend is available for US English in the following Regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (Oregon)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Seoul)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • EU (Frankfurt)
  • EU (Ireland)
  • EU (London)
  • AWS GovCloud (US-West)

Take a look at the pricing page, give the feature a try, and please send us feedback either via the AWS forum for Amazon Comprehend or through your usual AWS support contacts.


About the Author

Sriharsha M S is an AI/ML specialist solutions architect in the Strategic Specialist team at Amazon Web Services. He works with strategic AWS customers who are taking advantage of AI/ML to solve complex business problems. He provides technical guidance and design advice to implement AI/ML applications at scale. His expertise spans application architecture, big data, analytics, and machine learning.

Source: https://aws.amazon.com/blogs/machine-learning/detecting-and-redacting-pii-using-amazon-comprehend/

How 5G Will Impact Customer Experience?


5G is a breakthrough technology that promises to bring new innovations and change the way people traverse the internet, with faster connection speeds, lower latency, high bandwidth, and the ability to connect one million devices per square kilometre. Telcos are deploying 5G to enhance our day-to-day lives.

“When combined with other technologies like Artificial Intelligence and the Internet of Things (IoT), it could drive a proliferation of other technologies like AR/VR and data analytics.”

5G can be a boon for businesses, delivering increased reliability, efficiency, and performance, if it is used to drive more value for customers and business stakeholders and to meet their expectations through the digital capabilities discussed below:

Consumer Expectations are on the Rise

Today, customer service teams provide and manage customer support via call centres and digital platforms. The rollout of 5G is expected to have a positive impact on customer service, letting teams improve their existing personalized service offerings and create new solutions that deepen customer engagement.

For instance, salespeople in a retail store can be equipped with layers of information about customers’ behaviour and preferences, helping them build a rich, tailored experience for the customers walking through the store.

Video Conferencing/streaming is Just a Few Clicks Away

Video support is considered a critical part of the customer experience (CX) and will open new avenues for consumer-led enterprises.

“As per a survey conducted by Oracle with 5k people, 75% of people understand the efficiency and value of video chat and voice calls.” 

CX representatives use video support to troubleshoot highly technical situations through video chat and screen sharing with a few clicks, potentially reducing the number of in-person technician visits during critical situations like the coronavirus pandemic.

Also, video conferencing now offers the option to record a quick instant video describing a process or solution, doing away with long step-by-step emails. Enterprises can develop advanced user guides for troubleshooting, featuring short videos that resolve common problems.

However, video conferencing and chat call for high-definition video quality and demand an uninterrupted network with smooth streaming. This means operators need to carry out network maintenance at regular intervals to check for 5G PIM (passive intermodulation) formation on cell towers, which can reduce receive sensitivity and performance, degrading network speed, video resolution, and more.

Thus, PIM testing becomes critical for delivering enhanced, interference-free network service, which is necessary for high-resolution video conferencing, chat, and more.

Increased Smart Devices and the Ability to Troubleshoot via Self-Service

The inception of 5G will give a boost to the already growing IoT and smart-device market.

The number of IoT connections is expected to double between 2019 and 2025, to more than 25 billion, according to the GSM Association, an industry organization representing telecom operators across the globe.

With lower latency and improved reliability, 5G has a lot more to offer as it connects a large number of devices. This can reduce the manpower needed for customer support, lowering labour costs for the enterprise. Moreover, IoT-connected devices on a high-speed 5G network let consumers troubleshoot these devices themselves at home.

To deliver these high-quality networks, telecom operators need to perform 5G network testing, identify issues, and take corrective actions that improve the network and integrate advanced capabilities, making it more efficient than previous generations while offering wider coverage.

Enhanced Augmented Reality (AR) / Virtual Reality (VR) Capabilities

As these tools become widely used, customers are offered virtual stores or immersive experiences, using AR to preview products in their own homes in real time.

“‘Augmented Retail: The New Consumer Reality’ study by Nielsen in 2019 suggested that AR/VR has created a lot of interest in people and they are willing to use these technologies to check out products.” 

Analysis of Bulk Data With Big Data Analytics

Enterprises deal with huge volumes of data daily. 5G's advanced network connectivity across a large number of devices helps collect this data and deliver faster data analytics.

Combined with Artificial Intelligence (AI), companies will be able to process these vast unstructured data sets to extract meaningful insights and use them to draft business strategies, for example, studying customer buying behaviour and targeting those segments with customized service offerings.

As per Ericsson’s AI in networks report, 68% of Communications Service Providers (CSPs) consider improving CX a business objective, and more than half already believe AI will be a key technology for improving overall CX. Thus, big data analytics will be crucial for harnessing all this new data and enhancing the customer experience.

Conclusion

From a CX point of view, the benefits of 5G will extend far beyond the experience of an individual citizen. Real-time decision-making will accelerate as 5G and other new-age technologies like AI, ML, and IoT become prevalent. As 5G deployment continues to grow, so will the trends described above, ultimately improving your business's productivity, growing your customer base, and bringing in more revenue.

Source: https://www.aiiottalk.com/technology/5g-impact-on-customer-experience/

Resiliency And Security: Future-Proofing Our AI Future


Deploying AI in the enterprise means thinking forward for resiliency and security (GETTY IMAGES)

By Allison Proffitt, AI Trends

On the first day of the Second Annual AI World Government conference and expo held virtually October 28-30, a panel moderated by Robert Gourley, cofounder & CTO of OODA, raised the issue of AI resiliency. Future-proofing AI solutions requires keeping your eyes open to upcoming likely legal and regulatory roadblocks, said Antigone Peyton, General Counsel & Innovation Strategist at Cloudigy Law. She takes a “use as little as possible” approach to data, raising questions such as: How long do you really need to keep training data? Can you abstract training data to the population level, removing some risk while still keeping enough data to find dangerous biases?

Stephen Dennis, Director of Advanced Computing Technology Centers at the U.S. Department of Homeland Security, also recommended a forward-looking posture, but in terms of the AI workforce. In particular, Dennis challenged the audience to consider the maturity level of the users of new AI technology. Full automation is not likely a first AI step, he said. Instead, he recommends automating slowly, bringing the team along. Take them a technology that works in the context they are used to, he said. They shouldn’t need a lot of training. Mature your team with the technology. Remove the human from the loop slowly.

Of course, some things will never be fully automated. Brian Drake, U.S. Department of Defense, pointed out that some tasks are inherently human-to-human interactions—such as gathering human intelligence. But AI can help humans do even those tasks better, he said.

He also cautioned enterprises to consider their contingency plan as they automate certain tasks. For example, we rarely remember phone numbers anymore. We’ve outsourced that data to our phones while accepting a certain level of risk. If you deploy a tool that replaces a human analytic activity, that’s fine, Drake said. But be prepared with a contingency plan, a solution for failure.   

Organizing for Resiliency

All of these changes will certainly require some organizational rethinking, the panel agreed. While government is organized in a top down fashion, Dennis said, the most AI-forward companies—Uber, Netflix—organize around the data. That makes more sense, he proposed, if we are carefully using the data.

Data models, like the new car trope, begin degrading the first day they are used. Perhaps the source data becomes outdated. Maybe an edge use case was not fully considered. The deployment of the model itself may prompt a completely unanticipated behavior. We must capture and institutionalize those assessments, Dennis said. He proposed an AI quality control team, different from the team building and deploying algorithms, to understand degradation and evaluate the health of models in an ongoing way. His group is working on this with sister organizations in cyber security, and he hopes the best practices they develop can be shared with the rest of the department and across the government.

Peyton called for education—and reeducation—across organizations. She called the AI systems we use today a “living and breathing animal”. This is not, she emphasized, an enterprise-level system that you buy once and drop into the organization. AI systems require maintenance, and someone must be assigned to that caretaking.

But at least at the Department of Defense, Drake pointed out, all employees are not expected to become data scientists. We’re a knowledge organization, he said, but even if reskilling and retraining are offered, a federal workforce does not have to universally accept those opportunities. However, surveys across DoD have revealed an “appetite to learn and change”, Drake said. The Department is hoping to feed that curiosity with a three-tiered training program offering executive-level overviews, practitioner-level training on the tools currently in place, and formal data science training. He encouraged a similar structure to AI and data science training across other organizations.

Bad AI Actors

Gourley turned the conversation to bad actors. The very first telegraph message between Washington DC and Baltimore in 1844 was an historic achievement. The second and third messages—Gourley said—were spam and fraud. Cybercrime is not new and it is absolutely guaranteed in AI. What is the way forward, Gourley asked the panel.

“Our adversaries have been quite clear about their ambitions in this space,” Drake said. “The Chinese have published a national artificial intelligence strategy; the Russians have done the same thing. They are resourcing those plans and executing them.”

In response, Drake argued for the vital importance of ethics frameworks and for the United States to embrace and use these technologies in an “ethically up front and moral way.” He predicted a formal codification around AI ethics standards in the next couple of years similar to international nuclear weapons agreements now.

Source: https://www.aitrends.com/ai-world-government/deploying-ai-in-the-enterprise-means-thinking-forward-for-resiliency-and-security/

AI Projects Progressing Across Federal Government Agencies


The AI World Government Conference kicked off virtually on Oct. 28 and continues on Oct. 29 and 30. Tune in to learn about AI strategies and plans of federal agencies. (Credit: Getty Images)

By AI Trends Staff

Government agencies are gaining experience with AI on projects, with practitioners focusing on defining the project's benefit and ensuring the data quality is good enough for success. That was a takeaway from talks on the opening day of the Second Annual AI World Government conference and expo, held virtually on October 28.

Wendy Martinez, PhD, director of the Mathematical Statistics Research Center, with the Office of Survey Methods Research in the US Bureau of Labor Statistics, described a project to use natural language understanding AI to parse text fields of databases, and automatically correlate them to job occupations in the federal system. One lesson learned was despite interest in sharing experience with other agencies, “You can’t build a model based on a certain dataset and use the model somewhere else,”  she stated. Instead, each project needs its own source of data and model tuned to it.

Renata Miskell, Chief Data Officer in the Office of the Inspector General for the US Department of Health and Human Services, fights fraud and abuse for an agency that oversees over $1 trillion in annual spending, including on Medicare and Medicaid. She emphasized the importance of ensuring that data is not biased and that models generate ethical recommendations. For example, to track fraud in its grant programs awarding over $700 billion annually, “It’s important to understand the data source and context,” she stated. The unit studied five years of data from “single audits” of individual grant recipients, which included a lot of unstructured text data. The goal was to pass relevant info to the audit team. “It took a lot of training,” she stated. “Initially we had many false positives.” The team tuned for data quality and ethical use, steering away from blind assumptions. “If we took for granted that the grant recipients were high risk, we would be unfairly targeting certain populations,” Miskell stated.

In the big picture, many government agencies are engaged in AI projects and a lot of collaboration is going on. Dave Cook is senior director of AI/ML Engineering Services for Figure Eight Federal, which works on AI projects for federal clients. He has years of experience working in private industry and government agencies, mostly now the Department of Defense and intelligence agencies. “In AI in the government right now, groups are talking to one another and trying to identify best practices around whether to pilot, prototype, or scale up,” he said. “The government has made some leaps over the past few years, and a lot of sorting out is still going on.”

Ritu Jyoti, Program VP, AI Research and Global AI Research lead for IDC consultants and a program contributor to the event, has over 20 years of experience working with companies including EMC, IBM Global Services, and PwC Consulting. “AI has progressed rapidly,” she said. In a global survey IDC conducted in March, the business drivers for AI adoption were found to be better customer experience, improved employee productivity, accelerated innovation, and improved risk management. A fair number of AI projects failed; the main reasons were unrealistic expectations, AI not performing as expected, lack of access to the needed data, and teams lacking the necessary skills. “The results indicate a lack of strategy,” Jyoti stated.

David Bray, PhD, Inaugural Director of the nonprofit Atlantic Council GeoTech Center, and a contributor to the event program, posted questions on how data governance challenges the future of AI. He asked what questions practitioners and policymakers around AI should be asking, and how the public can participate more in deciding what can be done with data. “You choose not to be a data nerd at your own peril,” he said.

Anthony Scriffignano, PhD, senior VP & Chief Data Scientist with Dun & Bradstreet, said in the pandemic era with many segments of the economy shut down, companies are thinking through and practicing different ways of doing things. “We sit at the point of inflection. We have enough data and computer power to use the AI techniques invented generations ago in some cases,” he said. This opportunity poses challenges related to what to try and what not to try, and “sometimes our actions in one area cause a disruption in another area.”

AI World Government continues tomorrow and Friday.

(Ed. Note: Dr. Eric Schmidt, former CEO of Google and now chair of the National Security Commission on AI, was today involved in a discussion, Transatlantic Cooperation Around the Future of AI, with Ambassador Mircea Geoana, Deputy Secretary General, North Atlantic Treaty Organization, and Secretary Robert O. Work, vice chair of the National Security Commission. Convened by the Atlantic Council, the event can be viewed here.)

Source: https://www.aitrends.com/ai-world-government/ai-projects-progressing-across-federal-government-agencies/
