
AI

Google launches suite of AI-powered solutions for retailers


Google today announced the launch of Product Discovery Solutions for Retail, a suite of services designed to enhance retailers’ ecommerce capabilities and help them deliver personalized customer experiences. Product Discovery Solutions for Retail brings together AI algorithms and a search service, Cloud Search for Retail, that leverages Google Search technology to power retailers’ product-finding tools.

The pandemic and corresponding rise in online shopping threaten to push supply chains to the breaking point. Early in the COVID-19 crisis, Amazon was forced to restrict the amount of inventory suppliers could send to its warehouses. Ecommerce order volume has increased by 50% compared with 2019, and shipment times for products like furniture more than doubled in March. Moreover, overall U.S. digital sales have jumped by 30%, expediting the online shopping transition by as much as two years.

Product Discovery Solutions for Retail, which is generally available to all companies as of today, aims to address the challenges with AI and machine learning. To that end, it includes access to Google’s Recommendations AI, which uses machine learning to dynamically adapt to customer behavior and changes in variables like assortment, pricing, and special offers.

Recommendations AI, which launched in beta in July and is now generally available, ostensibly excels at handling recommendations in scenarios with long-tail products and cold-start users and items. Thanks to “context-hungry” deep learning models developed in partnership with Google Brain and Google Research, it’s able to draw insights across tens of millions of items and constantly iterate on those insights in real time.

From a graphical interface, businesses using Recommendations AI can integrate, configure, monitor, and launch recommendations while connecting data by using existing integrations with Merchant Center, Google Tag Manager, Google Analytics 360, Cloud Storage, and BigQuery. Recommendations AI can incorporate unstructured metadata like product name, description, category, images, product longevity, and more while customizing recommendations to deliver desired outcomes, such as engagement, revenue, or conversions. And it lets Google Cloud customers apply rules to fine-tune what shoppers see and diversify which products are shown, filtering by product availability and custom tags.

Product Discovery Solutions for Retail also includes access to Google’s Vision API Product Search, which allows shoppers to search for products with an image and receive a ranked list of visually and semantically similar items. Google says Vision Product Search taps machine learning-powered object recognition and lookup to provide real-time results of similar, or complementary, items from retailers’ product catalog.

Beyond Recommendations AI and Vision API Product Search, Product Discovery Solutions for Retail ships with Cloud Search for Retail. Cloud Search for Retail, which is currently in private preview, pulls from Google’s understanding of user intent and context to provide retail product search functionality that can be embedded into websites and mobile apps.

“As the shift to online continues, smarter and more personalized shopping experiences will be even more critical for retailers to rise above their competition,” Google Cloud retail and consumer VP Carrie Tharp said in a statement. “Retailers are in dire need of agile operating models powered by cloud infrastructure and technologies like artificial intelligence and machine learning (AI/ML) to meet today’s industry demands. We’re proud to partner with retailers around the world and bring forward our Product Discovery offerings to help them succeed.”


Source: https://venturebeat.com/2021/01/19/google-launches-suite-of-ai-powered-solutions-for-retailers/

AI

Using container images to run TensorFlow models in AWS Lambda


TensorFlow is an open-source machine learning (ML) library widely used to develop neural networks and ML models. Those models are usually trained on multiple GPU instances to speed up training, resulting in expensive training time and model sizes of up to a few gigabytes. After they’re trained, these models are deployed in production to produce inferences, as synchronous, asynchronous, or batch-based workloads. Those endpoints need to be highly scalable and resilient in order to process anywhere from zero to millions of requests. This is where AWS Lambda can be a compelling compute service for scalable, cost-effective, and reliable synchronous and asynchronous ML inferencing. Lambda offers benefits such as automatic scaling, reduced operational overhead, and pay-per-inference billing.

This post shows you how to use any TensorFlow model with Lambda for scalable inferences in production with up to 10 GB of memory. This allows us to use ML models in Lambda functions up to a few gigabytes. For this post, we use TensorFlow-Keras pre-trained ResNet50 for image classification.

Overview of solution

Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda automatically scales your application by running code in response to every event, allowing event-driven architectures and solutions. The code runs in parallel and processes each event individually, scaling with the size of the workload, from a few requests per day to hundreds of thousands of requests. The following diagram illustrates the architecture of our solution.

You can package your code and dependencies as a container image using tools such as the Docker CLI. The maximum container image size is 10 GB. After the model for inference is Dockerized, you can upload the image to Amazon Elastic Container Registry (Amazon ECR). You can then create the Lambda function from the container image stored in Amazon ECR.

Prerequisites

For this walkthrough, you should have the following prerequisites. At a minimum, the steps below require an AWS account, the AWS Command Line Interface (AWS CLI) configured, and Docker installed on your local machine.

Implementing the solution

We use a pre-trained model from the TensorFlow Hub for image classification. When an image is uploaded to an Amazon Simple Storage Service (Amazon S3) bucket, a Lambda function is invoked to detect the image and print it to the Amazon CloudWatch logs. The following diagram illustrates this workflow.

To implement the solution, complete the following steps:

  1. On your local machine, create a folder with the name lambda-tensorflow-example.
  2. Create a requirements.txt file in that directory.
  3. Add all the needed libraries for your ML model. For this post, we use TensorFlow 2.4.
  4. Create an app.py script that contains the code for the Lambda function.
  5. Create a Dockerfile in the same directory.

The following text is an example of the requirements.txt file to run TensorFlow code for our use case:

# List all python libraries for the lambda
tensorflow==2.4.0
tensorflow_hub==0.11
Pillow==8.0.1

We’re using the TensorFlow 2.4 version with CPU support only because, as of this writing, Lambda only offers CPU support. For more information about CPU-only versions of TensorFlow, see Package location.

The Python code is placed in app.py. The inference function in app.py needs to follow a specific structure to be invoked by the Lambda runtime. For more information about handlers for Lambda, see AWS Lambda function handler in Python. See the following code:

import json
import boto3
import numpy as np
import PIL.Image as Image
import tensorflow as tf
import tensorflow_hub as hub

IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_SHAPE = (IMAGE_WIDTH, IMAGE_HEIGHT)

model = tf.keras.Sequential([hub.KerasLayer("model/")])
model.build([None, IMAGE_WIDTH, IMAGE_HEIGHT, 3])

imagenet_labels = np.array(open('model/ImageNetLabels.txt').read().splitlines())
s3 = boto3.resource('s3')

def lambda_handler(event, context):
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    img = readImageFromBucket(key, bucket_name).resize(IMAGE_SHAPE)
    img = np.array(img) / 255.0

    prediction = model.predict(img[np.newaxis, ...])
    predicted_class = imagenet_labels[np.argmax(prediction[0], axis=-1)]

    print('ImageName: {0}, Prediction: {1}'.format(key, predicted_class))

def readImageFromBucket(key, bucket_name):
    bucket = s3.Bucket(bucket_name)
    object = bucket.Object(key)
    response = object.get()
    return Image.open(response['Body'])
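
For reference, the handler above reads only a few fields from the S3 event notification. The following is a minimal sketch of that event shape; the bucket name and key are placeholder values, and a real notification contains additional metadata:

# Minimal sketch of the S3 event fields the handler reads.
# Bucket and key values are placeholders; a real event carries more fields.
sample_event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'tensorflow-images-for-inference-example'},
                'object': {'key': 'parrot.jpg'}
            }
        }
    ]
}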

The following Dockerfile for Python 3.8 uses one of the AWS-provided open-source base images for Lambda. The base images are preloaded with a language runtime and the other components required to run a container image on Lambda.

# Pull the base image with python 3.8 as a runtime for your Lambda
FROM public.ecr.aws/lambda/python:3.8

# Install tar and gzip
RUN yum -y install tar gzip zlib

# Copy the earlier created requirements.txt file to the container
COPY requirements.txt ./

# Install the python requirements from requirements.txt
RUN python3.8 -m pip install -r requirements.txt

# Copy the earlier created app.py file to the container
COPY app.py ./

# Download ResNet50 and store it in a directory
RUN mkdir model
RUN curl -L https://tfhub.dev/google/imagenet/resnet_v1_50/classification/4?tf-hub-format=compressed -o ./model/resnet.tar.gz
RUN tar -xf model/resnet.tar.gz -C model/
RUN rm -r model/resnet.tar.gz

# Download ImageNet labels
RUN curl https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt -o ./model/ImageNetLabels.txt

# Set the CMD to your handler
CMD ["app.lambda_handler"]

Based on the steps above, your folder structure should look like the following:
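
lambda-tensorflow-example/
├── app.py
├── requirements.txt
└── Dockerfile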

You can build and push the container image to Amazon ECR with the following bash commands. Replace the <AWS_ACCOUNT_ID> with your own AWS account ID and also specify a <REGION>.

# Build the docker image
docker build -t lambda-tensorflow-example .

# Create an ECR repository
aws ecr create-repository --repository-name lambda-tensorflow-example --image-scanning-configuration scanOnPush=true --region <REGION>

# Tag the image to match the repository name
docker tag lambda-tensorflow-example:latest <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/lambda-tensorflow-example:latest

# Register docker to ECR
aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com

# Push the image to ECR
docker push <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/lambda-tensorflow-example:latest

If you want to test your model inference locally, the base images for Lambda include a Runtime Interface Emulator (RIE) that lets you test your Lambda function packaged as a container image on your own machine, speeding up development cycles.
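
As a rough sketch of that workflow, you could start the container locally with docker run -p 9000:8080 lambda-tensorflow-example:latest and post a test event to the emulator's invocation endpoint using Python. The bucket name and key below are placeholders; the S3 read inside the handler only succeeds if the container has AWS credentials and the object actually exists:

# Post a sample S3 event to the local Runtime Interface Emulator.
# Assumes the container was started with:
#   docker run -p 9000:8080 lambda-tensorflow-example:latest
import json
import requests

url = 'http://localhost:9000/2015-03-31/functions/function/invocations'
event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'tensorflow-images-for-inference-example'},  # placeholder bucket
                'object': {'key': 'parrot.jpg'}  # placeholder key
            }
        }
    ]
}

response = requests.post(url, data=json.dumps(event))
print(response.status_code, response.text)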

Creating an S3 bucket

As a next step, we create an S3 bucket to store the images used to predict the image class.

  1. On the Amazon S3 console, choose Create bucket.
  2. Give the S3 bucket a name, such as tensorflow-images-for-inference-<Random_String> and replace the <Random_String> with a random value.
  3. Choose Create bucket.
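
If you prefer to script this step instead of using the console, the following boto3 sketch creates the bucket; the bucket name and Region are placeholder examples:

import boto3

s3 = boto3.client('s3', region_name='eu-west-1')  # example Region
s3.create_bucket(
    Bucket='tensorflow-images-for-inference-example123',       # example bucket name
    CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'}  # omit for us-east-1
)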

Creating the Lambda function with the TensorFlow code

To create your Lambda function, complete the following steps:

  1. On the Lambda console, choose Functions.
  2. Choose Create function.
  3. Select Container image.
  4. For Function name, enter a name, such as tensorflow-endpoint.
  5. For Container image URI, enter the earlier created lambda-tensorflow-example repository.

  6. Choose Browse images to choose the latest image.
  7. Choose Create function to create it.
  8. To improve the Lambda runtime, increase the function memory to at least 6 GB and the timeout to 5 minutes in the Basic settings.

For more information about function memory and timeout settings, see New for AWS Lambda – Functions with Up to 10 GB of Memory and 6 vCPUs.
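
The console steps above can also be approximated with boto3. The following is a sketch, not an exact equivalent of the console flow; the image URI and execution role ARN are placeholders you would replace with your own values:

import boto3

lambda_client = boto3.client('lambda')
lambda_client.create_function(
    FunctionName='tensorflow-endpoint',
    PackageType='Image',
    Code={'ImageUri': '<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/lambda-tensorflow-example:latest'},
    Role='arn:aws:iam::<AWS_ACCOUNT_ID>:role/<LAMBDA_EXECUTION_ROLE>',  # placeholder execution role
    MemorySize=6144,  # at least 6 GB, as recommended above
    Timeout=300       # 5 minutes
)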

Connecting the S3 bucket to your Lambda function

After the successful creation of the Lambda function, we need to add a trigger to it so that whenever a file is uploaded to the S3 bucket, the function is invoked.

  1. On the Lambda console, choose your function.
  2. Choose Add trigger.

  3. Choose S3.
  4. For Bucket, choose the bucket you created earlier.
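
The console also creates the resource-based permission that lets Amazon S3 invoke the function. If you prefer to script the trigger, the following boto3 sketch shows the equivalent calls; the bucket name, function name, and ARNs are placeholders:

import boto3

lambda_client = boto3.client('lambda')
s3 = boto3.client('s3')

bucket = 'tensorflow-images-for-inference-example123'  # placeholder bucket name
function_arn = 'arn:aws:lambda:<REGION>:<AWS_ACCOUNT_ID>:function:tensorflow-endpoint'

# Allow S3 to invoke the function
lambda_client.add_permission(
    FunctionName='tensorflow-endpoint',
    StatementId='s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::' + bucket
)

# Invoke the function whenever an object is created in the bucket
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {'LambdaFunctionArn': function_arn, 'Events': ['s3:ObjectCreated:*']}
        ]
    }
)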

After the trigger is added, you need to allow the Lambda function to connect to the S3 bucket by setting the appropriate AWS Identity and Access Management (IAM) rights for its execution role.

  1. On the Permissions tab for your function, choose the IAM role.
  2. Choose Attach policies.
  3. Search for AmazonS3ReadOnlyAccess and attach it to the IAM role.
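
Scripted, the same policy attachment is a single boto3 call; the role name is a placeholder for your function's execution role:

import boto3

iam = boto3.client('iam')
iam.attach_role_policy(
    RoleName='<LAMBDA_EXECUTION_ROLE>',  # placeholder: the function's execution role
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
)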

Now you have configured all the necessary services to test your function. Upload a JPG image to the created S3 bucket by opening the bucket in the AWS Management Console and choosing Upload. After a few seconds, you can see the result of the prediction in the CloudWatch logs. As a follow-up step, you could store the predictions in an Amazon DynamoDB table.

After uploading a JPG picture to the S3 bucket, we get the predicted image class printed to CloudWatch. The S3 trigger invokes the Lambda function, which pulls the image from the bucket. As an example, we use a picture of a parrot as input to our inference endpoint.
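
If you want to script the test, the following boto3 sketch uploads a local image; the file name and bucket name are placeholders, and the upload itself fires the S3 trigger:

import boto3

s3 = boto3.client('s3')
# Uploading the object fires the S3 trigger, which invokes the Lambda function
s3.upload_file('parrot.jpg', 'tensorflow-images-for-inference-example123', 'parrot.jpg')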

The predicted class is printed in the CloudWatch logs, and the model predicts the correct class for the picture (macaw).

Performance

To achieve optimal performance, you can try various memory settings (which linearly change the assigned vCPUs; to learn more, read this AWS News Blog). In the case of our deployed model, we see most of the performance gains at about 3 GB – 4 GB (~2 vCPUs), and gains beyond that are relatively small. Different models see different levels of improvement from additional CPU, so it is best to determine this experimentally for your own model. Additionally, it is highly recommended that you compile your source code to take advantage of Advanced Vector Extensions 2 (AVX2) on Lambda, which further increases performance by allowing vCPUs to run a higher number of integer and floating-point operations per clock cycle.
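
To experiment with memory settings as described above, you can update the function configuration between test runs and compare the durations reported in CloudWatch. The following boto3 sketch assumes the function name used earlier:

import boto3

lambda_client = boto3.client('lambda')

# Try a few memory sizes (MB) and compare reported durations in CloudWatch
for memory_mb in (3072, 4096, 6144, 10240):
    lambda_client.update_function_configuration(
        FunctionName='tensorflow-endpoint',
        MemorySize=memory_mb
    )
    # Wait until the configuration update has been applied
    lambda_client.get_waiter('function_updated').wait(FunctionName='tensorflow-endpoint')
    # ...invoke the function here and note the duration from the CloudWatch logs...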

Conclusion

Container image support for Lambda allows you to customize your function even more, opening up a lot of new use cases for serverless ML. You can bring your custom models and deploy them on Lambda using up to 10 GB for the container image size. For smaller models that don’t need much computing power, you can perform online training and inference purely in Lambda. When the model size increases, cold start issues become more and more important and need to be mitigated. There is also no restriction on the framework or language with container images; other ML frameworks such as PyTorch, Apache MXNet, XGBoost, or Scikit-learn can be used as well!

If you do require a GPU for your inference, you can consider container services such as Amazon Elastic Container Service (Amazon ECS) or Kubernetes, or deploying the model to an Amazon SageMaker endpoint.


About the Author

Jan Bauer is a Cloud Application Developer at AWS Professional Services. His interests are serverless computing, machine learning, and everything that involves cloud computing.

Source: https://aws.amazon.com/blogs/machine-learning/using-container-images-to-run-tensorflow-models-in-aws-lambda/


AI

IBM Reportedly Retreating from Healthcare with Watson 


IBM is reported to be considering a sale of Watson Health, an indication of the challenges of applying AI in healthcare. IBM continues to invest in cloud services using Watson. (Photo by Carson Masterson on Unsplash.)  

By John P. Desmond, AI Trends Editor  

Reports surfaced last week that IBM is contemplating a sale of Watson Health, representing a retreat from the market of AI applied to healthcare that IBM had pursued under the direction of its previous CEO. 

The Wall Street Journal last week reported IBM was exploring the sale of Watson Health; IBM did not confirm the report. Ten years ago, when IBM Watson won on the Jeopardy! game show against two of the game’s record winners, the Watson brand in AI was established. 

As reported in AI Trends last February, the day after Watson defeated the two human champions on Jeopardy!, IBM announced Watson was heading into the medical field. IBM would take its ability to understand natural language that it showed off on television, and apply it to medicine. The first commercial offerings would be available in 18 to 24 months, the company promised, according to an account in IEEE Spectrum from April 2019.  

It was a tough road. IBM was the first company to make a major push to bring AI to medicine, and the Watson win on Jeopardy! gave the IBM AI salesforce a launching pad. A cautionary note was sounded by Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of the 2015 book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age (McGraw-Hill).  

“They came in with marketing first, product second, and got everybody excited,” stated Wachter. “Then the rubber hit the road. This is an incredibly hard set of problems, and IBM, by being first out, has demonstrated that for everyone else.”  

Then-IBM CEO Ginni Rometty Used Watson Victory to Launch AI in Healthcare  

Ginni Rometty, IBM’s CEO at the time, told an audience of health IT professionals at a 2017 conference that “AI is mainstream, it’s here, and it can change almost everything about health care.” She, like many, saw the potential for AI to help transform the healthcare industry. 

Watson had used advances in natural language processing to win at Jeopardy. The Watson team used machine learning on a training dataset of Jeopardy clues and responses. To enter the healthcare market, IBM tried using text recognition on medical records to build its knowledge base. Unstructured data such as doctors’ notes full of jargon and shorthand may account for 80% of a patient’s record. It was challenging.   

The effort was to build a diagnostic tool. IBM formed the Watson Health division in 2015. The unit made $4 billion of acquisitions. The search continued for the medical business case to justify the investments. Many projects were launched around decision support using large medical data sets. A focus on oncology to personalize cancer treatment for patients looked promising.  

Physicians at the University of Texas MD Anderson Cancer Center in Houston worked with IBM to create a tool called Oncology Expert Advisor. MD Anderson got the tool to the test stage in the leukemia department; it never became a commercial product.   

The project did not end well; it was cancelled in 2016. An audit by the University of Texas found the cancer center had spent $62 million on the project. The IEEE Spectrum authors said the project revealed “a fundamental mismatch between the promise of machine learning and the reality of medical care” and the difficulty of delivering something that would be useful to today’s doctors.  

IBM made a round of layoffs in the IBM Watson Health unit in 2018, according to another report by IEEE Spectrum in June 2018. Engineers from Phytel, one of the companies IBM had acquired, reported that the client base for its patient analytics solution had shrunk from 150 to 80 since the acquisition. “Smaller companies are eating us alive,” stated one engineer. “They’re better, faster, cheaper. They’re winning our contracts, taking our customers, doing better at AI.”  

Mismatch Seen Between Realities of Healthcare and Promise of AI  

Dr. Thomas J. Fuchs, Dean of AI and Human Health, Mount Sinai Health System

This notion of a mismatch between the promise of AI and realities of healthcare was seconded in last week’s Wall Street Journal report that tech companies may lack the deep expertise in how healthcare works in patient settings. “You truly have to understand the clinical workflow in the trenches,” stated Thomas J. Fuchs, Mount Sinai Health System’s dean of artificial intelligence and human health. “You have to understand where you can insert AI and where it can be helpful” without slowing things down in the clinic. 

Packaging AI advances in computer science into a viable software product or service has always been a fundamental challenge in the software business. “Watson may be very emblematic of a broader issue at IBM of taking good science and finding a way to make it commercially relevant,” stated Toni Sacconaghi, an analyst at Bernstein Research.  

Toni Sacconaghi, analyst, Bernstein Research

New IBM CEO Arvind Krishna has said AI, along with hybrid cloud computing, will be pivotal for IBM going forward. (See AI Trends, November 2020.) Krishna is moving to exit struggling business units and concentrate on those that can deliver consistent growth. As part of this effort, IBM is in the process of spinning its managed IT services division out into a new public company; IT services is seen by analysts as a declining-margin business. IBM had $100 billion in sales in 2010 and $73.6 billion last year. 

Another challenge for AI in healthcare is the lack of data-collection standards, which makes it difficult to apply models developed in one healthcare setting to others. “The customization problem is severe in healthcare,” stated Andrew Ng, an AI expert and CEO of startup Landing AI, based in Palo Alto, Calif., to The Wall Street Journal. 

Healthcare markets where AI has shown promise and achieved results include radiology and pathology, where image recognition techniques can be used to answer specific questions. Also, AI has made inroads in streamlining business processes such as billing and charting, which can help save money and free up staff to focus on more challenging areas. Administrative costs are said to be 30 percent of healthcare costs. 

Meanwhile, investment in AI for healthcare continues, with spending projected to grow at an annualized rate of 48% through 2023, according to a recent report from Business Insider. New players include giants such as Google, which has defined a Cloud Healthcare application programming interface (API) that can take data from users’ electronic health records and apply machine learning, with the aim of helping physicians make more informed clinical decisions. Google is also working with the University of California, Stanford University, and the University of Chicago on an AI system to predict the outcomes of hospital visits. 

AI is also being applied to the move to personalized healthcare, for example with wearable technology such as FitBits and smartwatches, which can alert users and healthcare professionals to potential health issues and risks.  

While retreating from applying Watson in healthcare, IBM is expanding the role of Watson in its cloud service offerings. These include natural language processing, sentiment analysis, and virtual assistants, according to entries on the IBM Watson blog.  

Read the source articles and information in The Wall Street Journal, in IEEE Spectrum from April 2019, in AI Trends from February 2020, in IEEE Spectrum from June 2018, in AI Trends from November 2020, from Business Insider, and on the IBM Watson blog.  

Source: https://www.aitrends.com/healthcare/ibm-reportedly-retreating-from-healthcare-with-watson/


AI

SolarWinds Hackers Targeted Cloud Services as a Key Objective 


The SolarWinds attackers appear to have had as a primary objective the compromise of the authentication method for cloud services, with far-reaching implications. (Credit: Getty Images)   

By John P. Desmond, AI Trends Editor 

The SolarWinds hackers appeared to have targeted cloud services as a key objective, potentially giving them access to many, if not all, of an organization’s cloud-based services.  

Christopher Budd, independent security expert

This is from an account in GeekWire written by Christopher Budd, an independent security consultant who worked previously in Microsoft’s Security Response Center for 10 years.  

“If we decode the various reports and connect the dots we can see that the SolarWinds attackers have targeted authentication systems on the compromised networks, so they can log in to cloud-based services like Microsoft Office 365 without raising alarms,” wrote Budd. “Worse, the way they’re carrying this out can potentially be used to gain access to many, if not all, of an organization’s cloud-based services.”  

The implication is that those assessing the impact of the attacks need to look not just at their own systems and networks, but also at their cloud-based services for evidence of compromise. And it means that defending against attacks means increasing the security and monitoring of cloud services authentication systems, “from now on.”  

Budd cited these key takeaways: 

  • After establishing a foothold in a network, the SolarWinds attackers target the systems that issue proof of identity used by cloud-based services; and they steal the means used to issue IDs; 
  • Once they have this ability, they are able to create fake IDs that allow them to impersonate legitimate users, or create malicious accounts that seem legitimate, including accounts with administrative access;  
  • Because the IDs are used to provide access to data and service by cloud-based accounts, the attackers are able to access data and email as if they were legitimate users.

SAML Authentication Method for Cloud Services Seen Targeted 

Cloud-based services use an authentication method called Security Assertion Markup Language (SAML), which issues a token that is “proof” of the identity of a legitimate user to the services. Budd ascertained, based on a series of posts on the Microsoft blog, that the SAML service was targeted. While this type of attack was first seen in 2017, “This is the first major attack with this kind of broad visibility that targets cloud-based authentication mechanisms,” Budd stated. 

In response to a question Budd asked Microsoft, on whether the company learned of any vulnerabilities that led to this attack, he got this response: “We have not identified any Microsoft product or cloud service vulnerabilities in these investigations. Once in a network, the intruder then uses the foothold to gain privilege and use that privilege to gain access.” 

A response from the National Security Agency was similar, saying the attackers, by “abusing the federated authentication,” were not exploiting any vulnerability in the Microsoft authentication system, “but rather abusing the trust established across the integrated components.” 

Also, although the SolarWinds attack came through a Microsoft cloud-based service, it involved the SAML open standard that is widely used by vendors of cloud-based services, not just Microsoft. “The SolarWinds attacks and these kinds of SAML-based attacks against cloud services in the future can involve non-Microsoft SAML-providers and cloud service providers,” Budd stated. 

American Intelligence Sees Attack Originating with Russia’s Cozy Bear 

American intelligence officials believe the attack originated from Russia. Specifically, according to a report from The Economist, the group of attackers known as Cozy Bear, thought to be part of Russia’s intelligence service, were responsible. “It appears to be one of the largest-ever acts of digital espionage against America,” the account stated.  

The attack demonstrated “top-tier operational tradecraft,” according to FireEye, a cyber-security firm that also was itself a victim.  

America has tended to categorize and respond to cyber-attacks happening over the last decade according to the aims of the attackers. It has regarded intrusions intended to steal secrets (old-fashioned espionage) as fair game that the US National Security Agency is also engaged in. But attacks intended to cause harm, such as North Korea’s assault on Sony Pictures in 2014, or China’s theft of industrial secrets, are viewed as crossing a line, the account suggested. Thus, sanctions have been imposed on many Russian, Chinese, North Korean, and Iranian hackers.   

The SolarWinds attack seems to have created its own category. “This effort to stamp norms onto a covert and chaotic arena of competition has been unsuccessful,” the Economist account stated. “The line between espionage and subversion is blurred.”  

One observer sees that America has grown less tolerant of “what’s allowed in cyberspace” since the hack of the Office of Personnel Management (OPM) in 2015. That hack breached OPM networks and exposed the records of 22.1 million people, including government employees, others who had undergone background checks, and their friends and family. State-sponsored hackers working on behalf of the Chinese government were believed responsible.   

Such large-scale espionage “would now be at the top of the list of operations that they would deem as unacceptable,” stated Max Smeets of the Centre for Security Studies in Zurich. 

“On-Prem” Software Seen as More Risky 

The SolarWinds Orion product is installed “on-prem,” meaning it is installed and run on computers on the premises of the organization using the software. Such products carry security risks that IT leadership needs to carefully evaluate, suggested a recent account in eWeek. 

William White, security and IT director, BigPanda

The SolarWinds attackers apparently used a compromised software patch to gain entry, suggested William White, security and IT director of BigPanda, which offers AI software to detect and analyze problems in IT systems. “With on-prem software, you often have to grant elevated permissions or highly privileged accounts for the software to run, which creates risk,” he stated.    

Because the SolarWinds attack was apparently executed through a software patch, “Ironically, the most exposed SolarWinds customers were the ones that were actually diligent about installing Orion patches,” stated White.  

Read the source articles in GeekWire, from The Economist, and in eWeek.

Source: https://www.aitrends.com/security/solarwinds-hackers-targeted-cloud-services-as-a-key-objective/


AI

RAND Corp. Finds DoD “Significantly Challenged” in AI Posture 


A new report from RAND Corp. finds the US DoD’s AI posture faces challenges around data and testing that ensures performance and safety. (Credit: Getty Images)  

By AI Trends Staff  

In a recently released, updated evaluation of the posture of the US Department of Defense (DoD) on artificial intelligence, researchers at RAND Corp. found that “despite some positive signs, the DoD’s posture is significantly challenged across all dimensions” of the assessment. 

The RAND researchers were asked by Congress, within the 2019 National Defense Authorization Act (NDAA), and the director of DoD’s Joint Artificial Intelligence Center (JAIC), to help answer the question: “Is DoD ready to leverage AI technologies and take advantage of the potential associated with them, or does it need to take major steps to position itself to use those technologies effectively and safely and scale up their use?” 

The term artificial intelligence was first coined in 1956 at a conference at Dartmouth College that showcased a program designed to mimic human thinking skills. Almost immediately thereafter, the Defense Advanced Research Projects Agency (DARPA) (then known as the Advanced Research Projects Agency [ARPA]), the research arm of the military, initiated several lines of research aimed at applying AI principles to defense challenges.   

Danielle Tarraf, Senior Information Scientist, RAND Corp.

Since the 1950s, AI—and its subdiscipline of machine learning (ML)—has come to mean many different things to different people, stated the report, whose lead author is Danielle C. Tarraf, a senior information scientist at RAND and a professor at the RAND Graduate School. (RAND Corp. is a US nonprofit think tank created in 1948 to offer research and analysis to the US Armed Forces.)    

For example, the 2019 NDAA cited as many as five definitions of AI. “No consensus emerged on a common definition from the dozens of interviews conducted by the RAND team for its report to Congress,” the RAND report stated.  

The RAND researchers decided to remain flexible and not be bound by precise definitions. Instead, they tried to answer the question of whether DoD is positioned to build or acquire, test, transition, and sustain—at scale—a set of technologies broadly falling under the AI umbrella, and if not, what DoD would need to do to get there. Considering the implications of AI for DoD strategic decision makers, the researchers concentrated on three elements and how they interact:  

  • the technology and capabilities space 
  • the spectrum of DoD AI applications 
  • the investment space and time horizon.

While algorithms underpin most AI solutions, interest and hype are fueled by advances such as deep learning, which requires large data sets that tend to be highly specific to the applications for which they were designed, most of which are commercial. Referring to AI verification, validation, test, and evaluation (VVT&E) procedures critical to the function of software in the DoD, the researchers stated, “VVT&E remains very challenging across the board for all AI applications, including safety-critical military applications.”  

The researchers divided AI applications for DoD into three groups:  

  • Enterprise AI, including applications such as the management of health records at military hospitals in well-controlled environments;  
  • Mission-Support AI, including applications such as the Algorithmic Warfare Cross-Functional Team (also known as Project Maven), which aims to use machine learning to assist humans in analyzing large volumes of imagery from video data collected in the battle theater by drones, and;  
  • Operational AI, including applications of AI integrated into weapon systems that must contend with dynamic, adversarial environments, and that have significant implications in the case of failure for casualties. 

Realistic goals need to be set for how long it will take AI to progress from demonstrations of what is possible to full-scale implementations in the field. The RAND team’s analysis suggests at-scale deployments in the:   

  • near term (up to five years) for enterprise AI 
  • middle term (five to ten years) for most mission-support AI, and  
  • far term (longer than ten years) for most operational AI applications. 

The RAND team sees the following challenges for AI at the DoD:  

  • Organizationally, the current DoD AI strategy lacks both baselines and metrics for assessing progress. And the JAIC has not been given the authority, resources, and visibility needed to scale AI and its impact DoD-wide. 
  • Data are often lacking, and when they exist, they often lack traceability, understandability, accessibility, and interoperability. 
  • The current state of VVT&E for AI technologies cannot ensure the performance and safety of AI systems, especially those that are safety-critical. 
  • DoD lacks clear mechanisms for growing, tracking, and cultivating AI talent, a challenge that is only going to grow with the increasingly tight competition with academia, the commercial world, and other kinds of workspaces for individuals with the needed skills and training. 
  • Communications channels among the builders and users of AI within DoD are sparse. 

The researchers made a number of recommendations to address these issues. 

Two Challenge Areas Addressed  

Two of these challenge areas have been recently addressed at a meeting hosted by the AFCEA, the professional association that links people in military, government, industry and academia, reported in an account in FCW. The organization engages in the “ethical exchange of information” and has roots in the US Civil War, according to its website.   

Jacqueline Tame is Acting Deputy Director at the JAIC, whose years of experience include positions with the House Permanent Select Committee on Intelligence, work with an AI analytics platform for the Office of the Secretary of Defense and then positions in the JAIC. She has graduate degrees from the Naval War College and the LBJ School of Public Affairs.  

She addressed how AI at DoD is running into culture and policy norms in conflict with its capability. For example, “We still have over… several thousand security classification guidance documents in the Department of Defense alone.” The result is a proliferation of “data owners.” She commented, “That is antithetical to the idea that data is a strategic asset for the department.” 

She used the example of predictive maintenance, which requires analysis of data from a range of sources to be effective, as an infrastructure challenge for the DoD currently. “This is a warfighting issue,” Tame stated. “To make AI effective for warfighting applications, we have to stop thinking about it in these limited stovepiped ways.” 

Jane Pinelis, chief of testing and evaluation, JAIC

Data standards need to be set and unified, suggested speaker Jane Pinelis, the chief of testing and evaluation for the JAIC. Her background includes time at the Johns Hopkins University Applied Physics Laboratory, where she was involved in “algorithmic warfare.” She is also a veteran of the Marine Corps, where her assignments included a position in the Warfighting Lab. She holds a PhD in Statistics from the University of Michigan. 

“Standards are elevated best practices and we don’t necessarily have best practices yet,” Pinelis stated. JAIC is working on it, by collecting and documenting best practices and leading a working group in the intelligence community on data collection and tagging. 

Weak data readiness has been an impediment to AI for the DoD, she stated. In response, the JAIC is preparing multiple award contracts for test and evaluation and data readiness, expected soon.  

Read the source articles and information from RAND Corp. and FCW. 

Source: https://www.aitrends.com/ai-in-government/rand-corp-finds-dod-significantly-challenged-in-ai-posture/
