AI

Voice is Creating the New Normal

By Mark Persaud

How big will voice-based interfaces become in 2020, and how will we use voice differently? This question is on the minds of behavioralists, technologists, and businesses everywhere as we all adapt to and create the new normal.

At the outset of 2020, Voice was growing at a predictably robust rate. A consumer survey conducted by Voicebot showed that 87.7 million Americans had adopted a smart speaker by January 2020, up 32 percent year over year. Also, the number of in-car voice assistant users rose 13.7 percent to 129.7 million.

And then COVID-19 hit, upending everything. Weeks after President Trump issued a national state of emergency on March 13, a picture is emerging of how people are relying on voice. On April 27, The Wall Street Journal, citing research from Euromonitor International, published a wide-ranging analysis of how consumer behaviors are changing during the pandemic. One of the major trends: people becoming more receptive to the person-machine interface, including voice.

This finding aligns with a blog post I wrote in April, “Mindful Innovation in a New World.” I wrote, “ . . . using smart voice assistants to manage our lives — to do everything from ordering products to getting information — looks to be a less risky behavior from a health standpoint than an interface that requires touching a screen or clicking on a keyboard.”

Are people using voice during this global pandemic? Yes. That’s what research by Edison Research and NPR reveals:

Voice assistant use frequency is indeed on the rise.

What are people using voice to do during the pandemic? The Euromonitor research didn’t say. But a little digging reveals a few interesting uses:

Cooking: Allrecipes Alexa skill sessions are up 67 percent over the past 30 days. Unique users have risen 80 percent during the same period with first-time users up 45 percent. This data makes perfect sense as people shelter in place and look for more things to do in the home.

Companionship: as CNet reports, seniors in a California senior living center living in isolation are using Alexa to stay connected with each other.

According to Edison Research Senior Vice President Tom Webster, overall, smart speakers are also becoming a lifeline for information. “With tens of millions of Americans no longer commuting, smart speakers are becoming even more important as a conduit for news and information,” he said in an article, “and this increased usage and facility with voice assistants will likely increase demand for this technology in vehicles once our commutes resume.”

How might the uses of voice assistants change throughout the year? Possibilities include:

Checking up on our own health, especially now that the major voice assistants have made it possible for people to self-check for COVID-19 symptoms.

Managing mental health. Newsweek reported that the majority of Americans will hit a mental breaking point if stay-at-home orders continue into June. Meanwhile, The Wall Street Journal, citing Datamonitor, reported that people are going to seek out more mindfulness and meditation apps. I expect more people will turn to voice interfaces to find wellness information and mindfulness tools.

Shopping as part of a broader contact-free experience: ordering and having products delivered end to end, completely no-touch.

Learning, as people living in lockdown have more time on their hands. Layoffs and job furloughs are unfortunately also on the rise, and many people at home will use this time for self-improvement.

On the other hand, voice usage in the car will decrease (and presumably already has) even as people gradually start leaving their homes in limited numbers with states easing shelter-in-place mandates. People are not driving as much, and therefore they simply don’t need to use voice assistants for wayfinding and other uses. (But randomized walking paths might become more common because they allow for routing of foot traffic through non-crowded locales.)

Another area to watch: on the job. Using voice to manage in-office tasks such as booking a conference room will plummet. But with “on the job” now meaning working at home, it will be interesting to see how people use voice to manage remote work beyond video conferencing, perhaps using dictation for note-taking and action items.

Now is the time for businesses to watch changing consumer behavior closely, and to invest in voice. Don’t wait until “we get through this.” The dawn of a new day is already here.

Source: https://chatbotslife.com/voice-is-creating-the-new-normal-e4677457a1b7?source=rss—-a49517e4c30b—4

Clearview AI sued by ACLU for scraping billions of selfies from social media to power its facial-recog-for-cops system

The American Civil Liberties Union has sued Clearview AI for scraping billions of photos from public social media profiles, without people’s explicit consent, to train its facial-recognition system.

The lawsuit [PDF], filed on Thursday at the Circuit Court of Cook County, Illinois, claims Clearview violated the state’s stringent Biometric Information Privacy Act (BIPA). Companies operating in Illinois must obtain explicit consent from individuals if they collect their biometric data, whether it’s in the form of fingerprints or photographs.

“Clearview has violated and continues to violate the BIPA rights of Plaintiffs’ members, clients, and program participants and other Illinois residents at staggering scale,” the lawsuit, brought by a group led by the ACLU, claimed.

“Using face recognition technology, Clearview has captured more than three billion faceprints from images available online, all without the knowledge – much less the consent – of those pictured.”

The startup, based in New York, made headlines in January when it was revealed to have amassed a database of three billion images by downloading people’s pictures from public pages on sites like Facebook, YouTube, Venmo, Instagram, and Twitter.

The dataset was used to train facial recognition algorithms, so that when images, say from a CCTV camera, are fed into Clearview’s system, the code looks for a match, and if one is found, it spits out everything it knows about that person: their harvested photos, and the URLs to the source pages that typically contain more personal information, such as names and contact details. This allows Clearview’s customers to turn faces in security camera footage stills into complete personal profiles, for example.
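The matching step described here is essentially a nearest-neighbor search over face embeddings. A minimal sketch of that idea follows; the 128-dimension vectors, names, URLs, and threshold are all made-up illustrations, and a real system would derive embeddings from a trained face-encoder model rather than random numbers.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(query_embedding, database, threshold=0.9):
    """Return the profile whose stored faceprint best matches the query, if any."""
    best_profile, best_score = None, threshold
    for profile in database:
        score = cosine_similarity(query_embedding, profile["faceprint"])
        if score > best_score:
            best_profile, best_score = profile, score
    return best_profile

# Hypothetical database of harvested profiles keyed by faceprint.
rng = np.random.default_rng(0)
known = rng.normal(size=128)
database = [
    {"name": "person_a", "source_urls": ["https://example.com/a"], "faceprint": known},
    {"name": "person_b", "source_urls": ["https://example.com/b"], "faceprint": rng.normal(size=128)},
]

# A query embedding close to a stored faceprint (e.g. from a CCTV still) matches
# and surfaces everything linked to that profile.
query = known + rng.normal(scale=0.05, size=128)
match = find_match(query, database)
print(match["name"])  # person_a
```

The returned profile carries the source URLs, which is how a single face becomes a doorway to the fuller personal profile the article describes.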

Initially, CEO Hoan Ton-That said his upstart’s software was only intended for cops and government agents. But a hacker broke into Clearview’s systems and revealed its customer list, which contained US household staples such as Macy’s, Walmart, Wells Fargo, and Bank of America, as well as some universities.

The unregulated use of the technology has prompted many other groups to file the lawsuit against Clearview alongside the ACLU, including other non-profits and social justice organizations that support sex workers and the Latino population in Illinois.

“Given the immutability of our biometric information and the difficulty of completely hiding our faces in public, face recognition poses severe risks to our security and privacy,” the ACLU said in its lawsuit.

“The capture and storage of faceprints leaves people vulnerable to data breaches and identity theft. It can also lead to unwanted tracking and invasive surveillance by making it possible to instantaneously identify everyone at a protest or political rally, a house of worship, a domestic violence shelter, an Alcoholics Anonymous meeting, and more.

“And, because the common link is an individual’s face, a faceprint can also be used to aggregate countless additional facts about them, gathered from social media and professional profiles, photos posted by others, and government IDs.”

Tech companies have also tried to thwart Clearview’s slurping of photos. In February, Google, YouTube, Twitter, and Facebook all served the startup cease-and-desist letters ordering it to stop stealing images from their platforms, and to delete existing pics in its massive database.

“For far too long tech companies have misused our most sensitive data while facing too little consequence,” said Abraham Scarr, director at the Illinois Public Interest Research Group, a nonprofit organization that’s also suing Clearview alongside the ACLU.

“The BIPA is unique in that it allows Illinois residents to control not only their biometric information, but also the laws governing its use, putting the power back into the hands of the people.”

Clearview’s lawyer Tor Ekeland told The Register: “Clearview AI is a search engine that uses only publicly available images accessible on the internet. It is absurd that the ACLU wants to censor which search engines people can use to access public information on the internet. The First Amendment forbids this.” ®

Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/05/28/clearview_aclu_lawsuit/

AI Autonomous Cars And The Problem Of Where To Drop Off Riders

Having the AI self-driving car know where to drop off human passengers is a puzzling problem not high on the priority list of developers. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Determining where best to drop off a passenger can be a problematic issue.

It seems relatively common, and downright unnerving, that a ridesharing service or taxi oftentimes unceremoniously opts to drop you off at a spot that is poorly chosen and rife with complications.

I remember one time in New York City, a cab driver was taking me to my hotel after I had arrived past midnight at the airport, and for reasons I’ll never know he opted to drop me off about a block away from the hotel, at a darkened corner marked with graffiti and looking quite like a warzone.

I walked nearly a city block at nighttime, in an area that I later discovered was infamous for being dangerous, including muggings and other unsavory acts.

In one sense, when we are dropped off from a ridesharing service or its equivalent, we often tend to assume that the driver has identified a suitable place to do the drop-off.

Presumably, we expect as a minimum:

·         The drop-off is near the desired destination

·         It is relatively easy to get out of the vehicle at the drop-off spot

·         The spot allows passengers to exit the vehicle safely, without harm

·         The drop-off is treated as a vital part of the journey, counting as much as the initial pick-up and the drive itself

In my experience, the drop-off often seems to be the moment for the driver to get rid of a passenger. In fact, the driver’s mindset is often already on where the next fare will be, since the value of the existing passenger is exhausted and the next passenger means more revenue.

Of course, you can even undermine yourself when it comes to doing a drop-off.

The other day, it was reported in the news that a woman got out of her car on the 405 freeway in Los Angeles when her car had stalled, and regrettably, horrifically, another car rammed into her and her stalled vehicle. A cascading series of car crashes then occurred, closing down much of the freeway in that area and backing up traffic for miles.

In some cases, when driving a car ourselves, we make judgements about when to get out of the vehicle, and in other cases such as ridesharing or taking a taxi, we are having someone else make a judgement for us.

In the case of a ridesharing or taxi driver, I eventually figured out that as the customer I need to double-check the drop-off, along with requesting an alternative spot to be dropped off if the circumstances seem to warrant it. You usually assume that the local driver you are relying on has a better sense as to what is suitable for a drop-off, but the driver might not be thinking about the conditions you face and instead could be concentrating on other matters entirely.

Here’s a question for you: how will AI-based true self-driving cars know where to drop off human passengers?

This is actually a quite puzzling problem. Though it does not yet seem high on the priority list of AI developers for autonomous cars, the drop-off matter will ultimately rear its problematic head as something needing to be solved.

For my overall framework about autonomous cars, see this link: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/

For why achieving a true self-driving car is like a moonshot, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For my indication about edge or corner cases in AI autonomous cars, see this link: https://aitrends.com/ai-insider/edge-problems-core-true-self-driving-cars-achieving-last-mile/

For dangers that await pedestrians and how AI self-driving car should respond, see my discussion here: https://aitrends.com/ai-insider/avoiding-pedestrian-roadkill-self-driving-cars/

AI Issues Of Choosing Drop-off Points

The simplistic view of how the AI should drop you off consists of the system stopping at the exact location you’ve requested, as though it were merely a mathematically specified latitude and longitude, and then it is up to you to get out of the self-driving car.

This might mean that the autonomous car is double-parked, though if this is an illegal traffic act then it goes against the belief that self-driving cars should not be breaking the law.

I’ve spoken and written extensively that it is a falsehood to think that autonomous cars will always strictly obey all traffic laws, since there are many situations in which we as humans bend or at times violate the strict letter of the traffic laws, doing so because of the necessity of the moment or even at times are allowed to do so.

In any case, my point is that the AI system in this simplistic perspective is not doing what we would overall hope or expect a human driver to do when identifying a drop-off spot, which as I mentioned earlier should have these kinds of characteristics:

·         Close to the desired destination

·         Stopping at a spot that allows for getting out of the car

·         Ensuring the safety of the disembarking passengers

·         Ensuring the safety of the car in its stopped posture

·         Not marring the traffic during its stop

·         Etc.

Imagine for a moment what the AI would need to do to derive a drop-off spot based on those kinds of salient criteria.

The sensors of the self-driving car, such as the cameras, radar, ultrasonic, LIDAR, and other devices would need to be able to collect data in real-time about the surroundings of the destination, once the self-driving car has gotten near to that point, and then the AI needs to figure out where to bring the car to a halt and allow for the disembarking of the passengers. The AI needs to assess what is close to the destination, what might be an unsafe spot to stop, what is the status of traffic that’s behind the driverless car, and so on.
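To make the derivation concrete, here is a toy sketch of ranking candidate drop-off spots against criteria like those listed above. The features, weights, and walking-distance cap are entirely illustrative assumptions, not anything from a production driving system; a real AI would compute these features from live sensor data.

```python
from dataclasses import dataclass

@dataclass
class Spot:
    name: str
    distance_m: float    # walking distance to the desired destination
    safe_to_exit: bool   # e.g., no live traffic lane on the curb side
    blocks_traffic: bool # would stopping here obstruct other cars
    legal_to_stop: bool  # no red curb, hydrant, or no-stopping zone

def score(spot, max_walk_m=150.0):
    """Higher is better; unsafe or illegal spots are rejected outright."""
    if not (spot.safe_to_exit and spot.legal_to_stop):
        return float("-inf")
    s = 1.0 - min(spot.distance_m / max_walk_m, 1.0)  # prefer closer spots
    if spot.blocks_traffic:
        s -= 0.5                                      # penalize double-parking
    return s

def choose_drop_off(candidates):
    best = max(candidates, key=score)
    return best if score(best) > float("-inf") else None

candidates = [
    Spot("curb at entrance", 10, safe_to_exit=True, blocks_traffic=True, legal_to_stop=True),
    Spot("side street", 80, safe_to_exit=True, blocks_traffic=False, legal_to_stop=True),
    Spot("bus lane", 5, safe_to_exit=True, blocks_traffic=True, legal_to_stop=False),
]
print(choose_drop_off(candidates).name)  # side street
```

Note how the closest spot does not win: the curb at the entrance is penalized for blocking traffic, and the bus lane is rejected as illegal, so the slightly longer walk from the side street scores best, mirroring the trade-offs a human driver weighs.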

Let’s also toss other variables into the mix.

Suppose it is nighttime, does the drop-off selection change versus when dropping off in daylight (often, the answer is yes). Is it raining or snowing, and if so, does that impact the drop-off choice (usually, yes)? Is there any road repair taking place near to the destination and does that impact the options for doing the drop-off (yes)?

If you are saying to yourself that the passenger ought to take fate into their own hands and tell the AI system where to drop them off, yes, some AI developers are incorporating Natural Language Processing (NLP) that can interact with the passengers for such situations, though this does not entirely solve this drop-off problem.

Why?

Because the passenger might not know where a good drop-off spot is.

I’ve had situations whereby I argued with a ridesharing driver or cabbie about where I thought I should be dropped off, yet it turned out their local knowledge was more attuned to what was a prudent and safer place to do so.

Plus, in the case of autonomous cars, keep in mind that the passengers in the driverless car might be all children and no adults. This means that you are potentially going to have a child trying to decide what is the right place to be dropped off.

I shudder to think that we might have an AI system lacking any semblance of common sense taking strict orders from a young child, whereas an adult human driver would be able to counteract any naïve and dangerous choice of drop-off (presumably, hopefully).

For the use of Natural Language Processing in socio-conversations, see my discussion here: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For my explanation about why it is that AI self-driving cars will need to drive illegally, see this link: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/

For the role of children as riders in AI autonomous cars, see my indication here: https://www.aitrends.com/ai-insider/children-communicating-with-an-ai-autonomous-car/

For my insights about how nighttime use of AI self-driving cars can be difficult, see this link: https://www.aitrends.com/ai-insider/nighttime-driving-and-ai-autonomous-cars/

For the role of ODD’s in autonomous cars, here’s my discussion: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/

More On The Drop-off Conundrum

The drop-off topic will especially come into play for self-driving cars at Level 4, the level at which an autonomous car will seek to pull over or find a “minimal risk condition” setting once the AI has exhausted its allowed Operational Design Domain (ODD). We are going to have passengers inside Level 4 self-driving cars who might get stranded in places that are not prudent for them, including young children or perhaps someone elderly having difficulty caring for their own well-being.

It has been reported that some of the initial tryouts of self-driving cars revealed that the autonomous cars got flummoxed somewhat when approaching a drop-off at a busy schoolground, which makes sense in that even as a human driver the chaotic situation of young kids running in and around cars at a school can be unnerving.

I remember when my children were youngsters how challenging it was to wade into the morass of cars coming and going at the start of school day and at the end of the school day.

One solution for the reported case apparently involved re-programming the self-driving cars to drop off their elementary-school-aged passengers at a corner down the street from the school, thus staying out of the traffic fray.

In the case of my own children, I had considered doing something similar, but subsequently realized it meant they had a longer distance to walk to school, carrying its own potential downsides, and that it made more sense to dig into the traffic and drop them as close to the school entrance as I could get.

Some hope that Machine Learning and Deep Learning will gradually improve the AI driving systems as to where to drop off people, potentially learning over time where to do so, though I caution that this is not a slam-dunk notion (partially due to the lack of common-sense reasoning for AI today).

Others say that we’ll just all have to adjust to the primitive AI systems and have restaurants, stores, and other locales stipulate a designated drop-off zone.

This seems like an arduous logistical undertaking, unlikely to cover all possible drop-off situations. Another akin approach involves using V2V (vehicle-to-vehicle) electronic communications, allowing a car that has found a drop-off spot to inform other nearing cars where it is. Once again, this has various trade-offs and is not a cure-all.

Conclusion

To some, this might seem like a ridiculous topic; the idea of worrying about dropping off people from autonomous cars just smacks of overkill.

Just get to the desired destination via whatever coordinates are available, and make sure the autonomous car doesn’t hit anything or anyone while getting there.

The thing is, the last step, getting out of an autonomous car, might ruin your day, or worse, cost a life. We need to consider the entire passenger journey holistically, from start to finish, including where to drop off the humans riding in self-driving driverless cars.

It will be one small step for mankind, and one giant leap for AI autonomous cars.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/ai-autonomous-cars-and-the-problem-of-where-to-drop-off-riders/

AI Careers: Kesha Williams, Software Engineer, Continues Her Exploration

Helping information technology diversify, and especially helping women of color achieve in technology and business, has been a personal goal for Kesha Williams, software engineer, author and speaker. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

We recently had a chance to catch up on the career of Kesha Williams, software engineer, author, speaker and instructor. AI Trends published an Executive Interview with Kesha in June 2018. At the time she was in the Information Technology department at Chick-fil-A, the restaurant chain, with responsibility to lead and mentor junior software engineers, and deliver on innovative technology.

She decided to move on from Chick-fil-A after 15 years in June 2019. Now she works at A Cloud Guru, an online education platform for people interested in cloud computing. Most of the courses prepare students for certification exams. The company was established in Melbourne, Australia in 2015.

“I wanted a role that allowed me to be more hands on with the latest, greatest technology,” she said in a recent interview. “And I wanted to be able to help people on a broader scale, on a more global level. I always felt my part of being here on the planet is to help others, and more specifically to help those in tech.”

Kesha Williams, software engineer, author, speaker and instructor

A Cloud Guru offers certifications for Amazon Web Services (AWS), Microsoft Azure and Google Cloud. It also has what Williams calls “cloud adjacent” courses including on Python programming and machine learning. “These courses will help you ‘skill up’ in the cloud and prepare for certification exams,” she said.

Kesha’s role is as a training architect, focusing on online content around AWS, specifically in the AI space. “Many people have taken this time being at home, to work on skills or learn something new. It’s a great way to spend time during the lockdown,” she advised. A true techie.

AWS DeepComposer Helps Teach About Generative AI and GANs

Most recently, she has been using AWS DeepComposer, an educational training service through AWS that allows the user to compose music using generative AI and GANs (generative adversarial networks, a class of machine learning frameworks). “I have been learning about that, so I can teach others about machine learning and music composition,” she said.

Using music samples, the user trains a music genre model, which learns how to create new music by studying the music files uploaded to it. The user plays a melody on a keyboard and gives it to the model, and the model composes a new song by adding instruments. She is working on a web series to teach students about that process.
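DeepComposer itself uses GANs, but the train-on-samples, then-generate workflow described here can be illustrated with a much simpler toy: a first-order Markov chain over note names. This is only a stand-in for the concept, not DeepComposer’s API or method, and the melodies below are made-up examples.

```python
import random
from collections import defaultdict

def train(melodies):
    """Learn note-to-note transition patterns from sample melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Compose a new melody by walking the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = transitions.get(out[-1])
        if not nxt:  # no learned continuation from this note
            break
        out.append(rng.choice(nxt))
    return out

samples = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C"]]
model = train(samples)
print(generate(model, "C", 6))
```

The generated melody only ever uses transitions seen in the training samples, which captures, in miniature, the idea of a model composing new music in the style of what it was trained on.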

“It’s a fun way to teach some of the more complex topics of GANs and machine learning,” she said. Fortunately she can fall back on youth choir days playing the piano. “I’m remembering things,” she said.

Amazon makes it easy to start out, not charging anything for up to 500 songs. A student can buy the keyboard for $99, or use a virtual keyboard available on the site. Behind the scenes, Amazon SageMaker is working. That will cost some money if the student continues. (SageMaker is a cloud machine-learning platform, launched in November 2017. It enables developers to create, train and deploy machine-learning models in the cloud, or on edge devices.)

So far, Williams has done about 30 songs. “I have used my machine learning skills to train my own genre model. I trained a reggae model; I love reggae.”

Kesha’s Korner is a blog on A Cloud Guru where Williams introduces people to machine learning, offering four- to six-minute videos on specific topics. The videos are free to watch; the full A Cloud Guru courses come with membership, priced from $32/mo to $49/mo depending on the plan. “It’s been a fun series to demystify machine learning,” she said. “It generates a lot of conversations. I often receive feedback from students on which topics to talk about.”

Women Who Code Planning Virtual Conference

Women Who Code is another interest. The organization works to help women be represented as technical leaders, executives, founders, venture capitalists, board members and software engineers.

Connect Digital 2020 is the organization’s first entirely virtual conference, to be held on three successive Fridays in June, with Williams scheduled for Friday, June 19. She will deliver a talk about using machine learning for social good, then kick off a “hackathon” starting the following week. The hackathon opens with three technical workshops: an introduction to machine learning tools, preparing data, and building models. “Their challenge is to take everything they have learned and use machine learning to build a model to help battle the spread of the Covid-19 virus,” she said. “They will have a month to go off and build it, then present it to a panel of judges.” The winner receives a year of free access to the A Cloud Guru platform.

“There are a lot of software engineers that want to make a transition to data science and machine learning,” she said.

Asked what advice she would have for young people or early-career people interested in exploiting AI, Williams said, “Whenever I try to demystify machine learning for people, I tell them it’s complex, but not as complex as most people make it out to be. I thought at first you needed a PhD and to work in a research lab to grasp it. But there are many tools and services out there, especially from AWS, that make these complex technologies approachable and affordable to play around with.

“When you are first learning, you will make a lot of mistakes,” she said. “Don’t beat yourself up. Just stay at it.”

Williams has concerns about AI going forward. “I have always been concerned about the lack of diversity in AI, about the bias issues and the horror stories we have seen when it comes to certain bad-performing models that are used to make decisions about people. It’s still an issue; we need to continue to talk about it and solve it.”

Being in information technology for 25 years has been and continues to be a good career. “It’s still exciting for me. Every day there is something new to learn.”

Learn more at Kesha’s Korner and Women Who Code.

Source: https://www.aitrends.com/ethics-and-social-issues/ai-careers-kesha-williams-software-engineer-continues-her-exploration/
