Security Threats and Security Testing for Chatbots

Florian Treml

This article points out security threats and attack vectors of typical chatbot architectures, based on the OWASP Top 10 and adversarial attacks.

The well-known OWASP Top 10 is a list of the top security threats for web applications. Most chatbots are available through a public web frontend, so all of the OWASP security risks apply to those chatbot frontends as well. Two of these risks are especially important to defend against because, unlike the others, they are nearly always a serious threat for chatbots: XSS and SQL injection.

Recently, another kind of security threat has emerged, specifically targeting NLP models: so-called “adversarial attacks”.

A typical chatbot frontend implementation first accepts the user's text input and then renders the conversation in the browser. The XSS vulnerability lies in that second step: when a user enters text containing malicious JavaScript code, the attack is fulfilled as soon as the chatbot frontend runs the injected code:

<script>alert(document.cookie)</script>

This vulnerability is easy to defend against by validating and sanitizing user input, but even companies like IBM have published vulnerable code on GitHub that is still available now or was only recently fixed.
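A minimal sketch of the sanitizing step in Python (the function name and markup are illustrative; the same principle applies in any server-side or frontend rendering framework):

```python
import html

def render_user_message(user_input: str) -> str:
    # Escape HTML special characters so injected markup is displayed
    # as plain text instead of being executed by the browser.
    return f'<div class="user-message">{html.escape(user_input)}</div>'

malicious = "<script>alert(document.cookie)</script>"
print(render_user_message(malicious))
# The <script> tags come out neutralized as &lt;script&gt; entities.
```

Escaping on output (rather than trying to blocklist dangerous strings on input) is the robust default, since it cannot be bypassed by creative encodings of the payload.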

Possible Chatbot Attack Vector

To exploit an XSS vulnerability, the attacker has to trick the victim into sending malicious input text.


A typical task-oriented chatbot backend extracts information items (entities) from the user input and uses them in backend queries.

With SQL injection, the attacker may trick the chatbot backend into treating malicious content as part of an information item:

my order number is "1234; DELETE FROM ORDERS"

Developers typically trust their tokenizers and entity extractors to defend against injection attacks.
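Regardless of what the entity extractor returns, the database layer should never interpolate it into SQL. A minimal sketch using Python's sqlite3 (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_number TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('1234', 'shipped')")

def lookup_order(order_number: str):
    # A parameterized query treats the extracted entity strictly as data:
    # an injected "1234; DELETE FROM ORDERS" cannot execute as SQL.
    cur = conn.execute(
        "SELECT status FROM orders WHERE order_number = ?", (order_number,)
    )
    return cur.fetchone()

print(lookup_order("1234"))                      # ('shipped',)
print(lookup_order('1234; DELETE FROM ORDERS'))  # None, and nothing deleted
```

The same rule applies to NoSQL stores: pass extracted entities as query parameters or typed values, never as string fragments of the query itself.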

Possible Chatbot Attack Vector

When the attacker has direct access to the chatbot frontend, an SQL injection is exploitable directly by the attacker (see the example above), allowing all kinds of SQL (or NoSQL) queries.

Adversarial attacks are a new type of attack specifically targeting classifiers, and the NLP model backing a chatbot is basically a text classifier.

An adversarial attack tries to identify blind spots in the classifier by applying tiny, in the worst case invisible, changes (noise) to the classifier input data. A famous example is tricking an image classifier into a wrong classification by adding some tiny noise not visible to the human eye.

A more dangerous real-life attack is tricking an autonomous car into ignoring a stop sign by adding some stickers to it.

An adversarial example for an image classifier: adding a tiny amount of noise causes the model to classify this pig as an airliner. Image from this article.

The same concept can be applied to voice apps: some background noise not noticed by human listeners could trigger IoT devices in the same room to unlock the front door or place online shop orders.

For text-based chatbots, the only difference is that it is not possible to totally hide the added noise from the human eye, since noise in this case means changing single characters or whole words.
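A toy sketch of such a character-level perturbation (a hypothetical helper for illustration, not taken from any attack library): a single swapped character is still readable to a human, but may push a brittle classifier onto an out-of-vocabulary token and into a wrong intent.

```python
def perturb(text: str, position: int, replacement: str) -> str:
    # Replace a single character at the given position.
    # A human still reads the same word; a brittle NLP model may not.
    return text[:position] + replacement + text[position + 1:]

original = "cancel my subscription"
adversarial = perturb(original, 2, "m")
print(adversarial)  # "camcel my subscription"
```

Real adversarial NLP tooling such as TextAttack searches systematically for the perturbations (character swaps, typos, synonym substitutions) that actually flip the classifier's decision.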

There is an awesome article, “What are adversarial examples in NLP?”, from the TextAttack makers, available here.

Possible Chatbot Attack Vector

For voice-based chatbots, one possible risk is handing over control to the attacker via manipulated audio streams, exploiting weaknesses in the speech recognition and classification engine.

To be honest, it is hard to imagine a real-life security threat for text-based chatbots.

Botium Box includes several tools for improving the robustness of your chatbot and your NLP model against the attacks above.

Penetration Testing with OWASP ZAP (Zed Attack Proxy)

Botium Box provides a unique way of running continuous security tests based on OWASP ZAP (Zed Attack Proxy); read more in the Botium Wiki. It helps identify security vulnerabilities in the infrastructure, such as SSL issues and outdated third-party components.

E2E Test Sets for SQL Injection and XSS

Botium Box includes test sets for running end-to-end security tests on device clouds and browser farms, based on OWASP recommendations.

Humanification Testing

The Botium Humanification Layer checks your NLP model for robustness against adversarial attacks and common human typing behaviour.

Read more in the Botium Wiki.

Paraphrasing

With the paraphraser it is possible to increase test coverage with a single mouse click; read on here.

Load Testing

With Botium Box Load Testing and Stress Testing you can simulate user load on your chatbot and see how it behaves under production load — read on in the Botium Wiki.

Source: https://chatbotslife.com/security-threats-and-security-testing-for-chatbots-325d704da9af?source=rss—-a49517e4c30b—4
