
A true friend who betrays all your secrets: Korean AI chatbot turned into a data protection failure

Zfort Group

Most Americans are unsure how responsibly companies behave when using and protecting personal information: nearly 81% report feeling insecure about the potential risks of corporate data collection, and 66% say the same about data collection by the government.

It is really difficult to weigh these potential risks and to anticipate the harm that irresponsible handling of personal data can cause. How will this affect our future actions, and what restrictions and changes will it bring?

Let’s analyze what happens when personal data is disclosed, and how to protect yourself properly, using the recent situation in South Korea as an example.

The Korean company ScatterLab launched Science of Love, a “scientific and data-driven” app that was supposed to predict the degree of attachment in a relationship.

It relies on KakaoTalk, the most popular messaging app in South Korea, used by about 90 percent of the population.

The analysis of your romantic feelings (or their absence) cost just around $4.50 per conversation processed.
“Science of Love” worked like this: it studied a conversation and, based on certain factors (such as average response time, how often the other person texts first, and the presence of certain trigger phrases and emotional cues), concluded whether there was a romantic connection between the two participants in the dialogue.
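
ScatterLab has never published how its scoring model actually works, so purely as an illustration, here is a minimal Python sketch of the kind of feature-based heuristic such an app could apply to an exported chat log. Every feature name, weight, and threshold below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ChatFeatures:
    """Hypothetical features extracted from an exported KakaoTalk log."""
    avg_reply_seconds: float   # how quickly the other person replies
    initiation_ratio: float    # share of conversations they started (0..1)
    trigger_phrase_hits: int   # affectionate "trigger phrases" detected
    emoji_per_message: float   # rough proxy for emotional expressiveness

def romance_score(f: ChatFeatures) -> float:
    """Toy linear scorer; the real model and weights are not public."""
    score = 0.0
    # Faster replies raise the score; each term is capped so no feature dominates.
    score += max(0.0, 1.0 - f.avg_reply_seconds / 3600.0) * 0.35
    score += f.initiation_ratio * 0.30
    score += min(f.trigger_phrase_hits, 10) / 10.0 * 0.20
    score += min(f.emoji_per_message, 2.0) / 2.0 * 0.15
    return round(score * 100.0, 1)  # 0..100 "romantic interest" score

print(romance_score(ChatFeatures(420.0, 0.6, 4, 0.8)))  # -> 62.9
```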

You might say: “Come on! Who knows that better than your own inner sense? How can an app be aware of what is going on in a person’s head or heart while they are texting you?” Well, there is some common sense in that.
But the fact is that by June 2020, Science of Love had received about 2.5 million downloads in South Korea and 5 million in Japan, and was preparing to expand its business to the United States.

So why did it become so popular among young Koreans?
“Because I felt like the app understood me, I felt safe and sympathized. It felt good because it felt like having a love doctor by my side,” one user says in a review.

In December 2020, the company introduced an AI chatbot, Lee-Luda.

The bot was positioned as a well-trained AI companion, taught on more than 10 billion conversation logs from the app. A “20-year-old female,” Lee-Luda was ready to strike up a true friendship with anyone.
As the company’s CEO put it, the purpose of Lee-Luda was to become “an A.I. chatbot that people prefer as a conversation partner over a person.”

Just a couple of weeks after the bot’s launch, users could not help noticing its harsh treatment of, and statements about, certain social groups and minorities (LGBTQ+ people, people with disabilities, feminists, etc.).

The developer, ScatterLab, explained this by saying that the bot took such information from the base training dataset, not from personal user discussions.
It is clear, then, that the company did not properly filter phrases and profanity out of the dataset before the bot’s training began.

The developers simply “failed to remove some personal information depending on the context” (well, it is what it is).
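
The article does not say what filtering ScatterLab actually applied. As a purely hypothetical sketch, even a basic regex scrubbing pass over the logs would catch the most obvious identifiers, while the personal information that leaks “depending on the context” (a name dropped mid-sentence, a neighborhood nickname) is exactly what such a pass misses:

```python
import re

# Hypothetical pre-training scrubbing pass. Real-world PII removal,
# especially for Korean names and addresses, needs far more than regexes;
# context-dependent mentions are exactly what this approach misses.
PII_PATTERNS = {
    "phone": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),  # KR mobile numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "account": re.compile(r"\b\d{10,14}\b"),  # bank-account-like digit runs
}

def scrub(message: str) -> str:
    """Replace matched PII with a typed placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"<{label.upper()}>", message)
    return message

print(scrub("Call me at 010-1234-5678 or write to kim@example.com"))
# -> Call me at <PHONE> or write to <EMAIL>
```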

Lee-Luda could not have learned to include such personal information in its responses unless it existed in the training dataset.
And there is some “good news” as well: it is possible to recover training data from an AI chatbot. So, if personal information existed in the training dataset, it can be extracted by querying the chatbot.
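
This is a documented class of attack on language models, not hand-waving. The sketch below illustrates the idea using the Hugging Face transformers library, with GPT-2 standing in for the chatbot (Lee-Luda itself was only reachable through its chat interface, where the same prefix-probing idea applies); the prefix string is an invented example, not a known leak.

```python
# Sketch of a training-data extraction probe against a generative model.
# GPT-2 stands in for the chatbot; the idea is to feed a prefix that
# likely preceded private data in the logs and inspect the completions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# An invented prefix; in chat logs it might have been followed by PII.
prefix = "Sure, my home address is"
inputs = tokenizer(prefix, return_tensors="pt")

# Sample several completions: strings memorized from training data tend
# to reappear verbatim across samples, especially at low temperature.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```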

Still not going so badly, huh?
To make things worse, ScatterLab had uploaded a training set of 1,700 sentences, part of the larger dataset it had collected, to GitHub.
It exposed the names of more than 20 people, along with locations they had been to, their relationship statuses, and some of their medical information.

ScatterLab issued statements clarifying the incident, intended to soothe the public’s concerns, but they ended up infuriating people even more. The statements said that “Lee-Luda is a childlike A.I. that just started conversing with people,” that it “has a lot to learn,” and that it “will learn what is a better answer and a more appropriate answer through trial and error.” But is it ethical to violate individuals’ privacy and safety for a chatbot’s “trial and error” learning process? No.

Although this situation became a high-profile event in Korea, it has not received attention on a global scale (quite unfairly, we think).
This is not just about the negligence or dishonesty of the creators; the incident reflects a general trend in the development of the AI industry: users of software built on this technology have little control over the collection and use of their personal data.
Situations like this should make us think about more careful and conscientious data management.

The pace of technological development is far ahead of the adoption of regulatory standards governing its use. It is hard to foresee where the technology will lead us in a couple of years.

So the global question is: “Are AI and tech companies able to independently control the ethical dimension of the innovations they use and develop?”
Is it worth going back to the concept of “corporate social responsibility”? And where is the golden mean between innovation and humanity?

Also available in audio format here.

Source: https://chatbotslife.com/a-true-friend-who-betrays-all-your-secrets-korean-ai-%D1%81hatbot-turned-a-data-protection-failure-997f3f9dd366?source=rss—-a49517e4c30b—4
