The AI stack that's changing retail personalization

Consumer expectations are higher than ever as a new generation of shoppers looks to buy experiences rather than commodities. They expect instant, highly tailored (pun intended) customer service and recommendations across every retail channel.

To stay ahead, brands and retailers are turning to startups in image recognition and machine learning to understand, at a deep level, each consumer's current context and personal preferences and how they evolve. But while brands and retailers are sitting on enormous amounts of data, only a handful are actually leveraging it to its full potential.

To provide hyper-personalization in real time, a brand needs a deep understanding of its products and customer data. Imagine a case where a shopper is browsing the website for an edgy dress and the brand can recognize the shopper's context and preferences across other features like style, fit, occasion and color, then use this information implicitly while fetching similar dresses for the user.
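One common way to implement this kind of "fetch similar items" step is nearest-neighbor search over item attribute vectors. The sketch below is illustrative only: the catalog, the attribute dimensions (style, fit, occasion, color) and their numeric values are all hypothetical stand-ins for what a real recommendation pipeline would learn from product and behavioral data.

```python
from math import sqrt

# Hypothetical catalog: each dress encoded as a feature vector over
# attributes such as style, fit, occasion and color (values illustrative).
CATALOG = {
    "edgy_slip_dress": [0.9, 0.4, 0.7, 0.2],
    "classic_shift":   [0.1, 0.8, 0.3, 0.5],
    "studded_mini":    [0.8, 0.5, 0.6, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def similar_items(query_vec, catalog, top_k=2):
    """Rank catalog items by similarity to the shopper's inferred preferences."""
    ranked = sorted(catalog, key=lambda name: cosine(query_vec, catalog[name]),
                    reverse=True)
    return ranked[:top_k]

# Shopper context inferred from browsing: leaning edgy.
print(similar_items([0.85, 0.45, 0.65, 0.25], CATALOG))
```

In production the vectors would typically come from a trained embedding model and the search from an approximate nearest-neighbor index, but the ranking logic is the same.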

Another situation is where the shopper searches for clothes inspired by their favorite fashion bloggers or Instagram influencers using images in place of text search. This would shorten product discovery time and help the brand build a hyper-personalized experience which the customer then rewards with loyalty.

With the sheer number of products sold online, shoppers primarily discover products through category- or search-based navigation. However, inconsistencies in product metadata created by vendors or merchandisers lead to poor recall and broken search experiences. This is where image recognition and machine learning can analyze enormous data sets and the vast assortment of visual features in a product to automatically extract labels from product images and improve the accuracy of search results.
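The enrichment step described above can be sketched as merging model-predicted labels with the vendor-supplied tags, so search can match attributes the vendor omitted or misspelled. Everything here is a hypothetical illustration: `predicted_labels` stands in for a real vision model, and the SKU, tags and filename are invented.

```python
def predicted_labels(image_path):
    """Stand-in for an image classifier; a real system would run a trained
    vision model over the product photo. Output is hard-coded for illustration."""
    return {"dress", "floral", "midi", "short-sleeve"}

def enrich_metadata(product):
    """Union vendor tags with labels extracted from the product image."""
    vendor_tags = set(product.get("tags", []))
    product["tags"] = sorted(vendor_tags | predicted_labels(product["image"]))
    return product

item = {"sku": "A123", "image": "a123.jpg", "tags": ["dress", "flroal"]}  # vendor typo
print(enrich_metadata(item)["tags"])
```

A fuller pipeline would also reconcile conflicts (e.g. drop the misspelled vendor tag when the model's label supersedes it), but the union alone already improves recall.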

Why is image recognition better than ever before?

While computer vision has been around for decades, it has recently become more powerful, thanks to the rise of deep neural networks. Traditional vision techniques laid the foundation for detecting edges, corners, colors and objects in input images, but they required humans to engineer the features to look for. These traditional algorithms also struggled to cope with changes in illumination, viewpoint, scale, image quality and so on.

Deep learning, on the other hand, takes in massive training data and more computation power and delivers the horsepower to extract features from unstructured data sets and learn without human intervention. Inspired by the biological structure of the human brain, deep learning uses neural networks to analyze patterns and find correlations in unstructured data such as images, audio, video and text. DNNs are at the heart of today’s AI resurgence as they allow more complex problems to be tackled and solved with higher accuracy and less cumbersome fine-tuning.
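The contrast between the two approaches can be made concrete with a single filter. Below, a hand-engineered Sobel kernel (the traditional approach) detects vertical edges in a tiny grayscale image; a convolutional network applies the same sliding-window operation, but learns its kernel weights from data instead of having them specified by an engineer. The image values are illustrative, and the kernel is applied without flipping, as CNN layers do.

```python
# Hand-engineered vertical-edge kernel (Sobel), as used in traditional vision.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Valid (no-padding) sliding-window application of a 3x3 kernel
    to a grayscale image given as a list of rows."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            row.append(sum(kernel[ki][kj] * image[i + ki][j + kj]
                           for ki in range(3) for kj in range(3)))
        out.append(row)
    return out

# A 4x4 image with a sharp vertical edge: dark left half, bright right half.
image = [[0, 0, 9, 9]] * 4
print(convolve(image, SOBEL_X))  # strong responses where the edge sits
```

A deep network stacks many such filters in layers and adjusts every weight by gradient descent, which is why it can cope with the lighting, viewpoint and scale variation that defeats fixed, hand-designed filters.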

How much training data do you need?

Read more: https://techcrunch.com/2019/11/12/the-ai-stack-thats-changing-retail-personalization/

Save over $200 with discounted student tickets to Robotics + AI 2020

If you’re a current student and you love robots — and the AI that drives them — you do not want to miss out on TC Sessions: Robotics + AI 2020. Our day-long deep dive into these two life-altering technologies takes place on March 3 at UC Berkeley and features the best and brightest minds, makers and influencers.

We’ve set aside a limited number of deeply discounted tickets for students because, let’s face it, the future of robotics and AI can’t happen without cultivating the next generation. Tickets cost $50, which means you save more than $200. Reserve your student ticket now.

Not a student? No problem, we have a savings deal for you, too. If you register now, you’ll save $150 when you book an early-bird ticket by February 14.

More than 1,000 robotics and AI enthusiasts, experts and visionaries attended last year’s event, and we expect even more this year. Talk about a targeted audience and the perfect place for students to network for an internship, employment or even a future co-founder.

What can you expect this year? For starters, we have an outstanding lineup of speakers and demos — more than 20 presentations — on tap. Let's take a quick look at just some of the offerings you don't want to miss:

  • Saving Humanity from AI: Stuart Russell, UC Berkeley professor and AI authority, argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
  • Opening the Black Box with Explainable AI: Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI International will discuss what we’re doing about it and what still needs to be done.
  • Engineering for the Red Planet: Maxar Technologies has been involved with U.S. space efforts for decades and is about to send its fifth robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian, general manager of robotics at Maxar, will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.

That’s just a sample — take a gander at the event agenda to help you plan your time accordingly. We’ll add even more speakers in the coming weeks, so keep checking back.

TC Sessions: Robotics + AI 2020 takes place on March 3 at UC Berkeley. It’s a full day focused on exploring the future of robotics and a great opportunity for students to connect with leading technologists, founders, researchers and investors. Join us in Berkeley. Buy your student ticket today and get ready to build the future.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics + AI 2020? Contact our sponsorship sales team by filling out this form.

Read more: https://techcrunch.com/2020/01/15/save-over-200-with-discounted-student-tickets-to-robotics-ai-2020/

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euroactiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: that is, mandatory risk-based requirements on developers (of whatever subset of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies — it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Read more: https://techcrunch.com/2020/01/17/eu-lawmakers-are-eyeing-risk-based-rules-for-ai-per-leaked-white-paper/

Europe mulls five year ban on facial recognition in public… with loopholes for security and research

Euro Commission also wants to loosen purse strings for AI investment while tightening reins

The European Commission is weighing whether to ban facial recognition systems in public areas for up to five years, according to a draft report on artificial intelligence policy in the European Union.

A copy of the unreleased report [PDF] was published on Thursday by EURACTIV, a Belgian non-profit media think tank.

The European Commission wants time to explore how artificial intelligence and facial recognition technology can be reconciled with Europe’s General Data Protection Regulation (GDPR). European lawmakers intend to allow EU citizens to benefit from AI-oriented systems and to encourage European investment in such technology while also seeking to limit the potential risks.

“As we are committed to making Europe fit for the digital age, we have to fully reap the benefits of Artificial Intelligence: to enable scientific breakthrough; to preserve the leadership of EU businesses; to improve the life of every EU citizen by enhancing diagnosis and healthcare or increasing the efficiency of farming,” a European Commission spokesperson said in an email to The Register.

“To maximize the benefits and address the challenges of Artificial Intelligence, Europe has to act as one and will define its own way, a human way. Technology has to serve a purpose, and the people. Trust and security of EU citizens will therefore be at the centre of the EU’s strategy.”

The EC’s spokesperson said the Commission intends to present a plan for a coordinated European approach on artificial intelligence, as described by EC President Ursula Gertrud von der Leyen in her Political Guidelines.

In keeping with those goals, the draft paper describes a series of questions that need to be answered to develop a functional regulatory framework.

“In spite of the opportunities that artificial intelligence can provide, it can also lead to harm,” the report says, pointing to material concerns like loss of life – e.g. the fatal crash involving an Uber self-driving car in 2018 – and intangible concerns like loss of privacy and unfair treatment – such as discriminatory Facebook job ads.

The document describes five potential regulatory options, one of which focuses on the use of artificial intelligence by public authorities and singles out facial recognition as a particular application that should be addressed.

The proposed ban “would mean that the use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. 3-5 years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed,” the report says.

But there would be exceptions, for research and development and for security purposes. And the paper doesn’t specifically address scenarios like images captured in Europe that might be run through facial recognition algorithms at a later date by private sector entities operating outside Europe.

Other regulatory options explored in the paper include: voluntary labeling, mandatory risk-based requirements for high-risk applications, AI-tailored legislation addressing safety and liability, and an AI-specific regulatory framework.

The paper, however, stresses that these contemplated rules would address process rather than results. In other words, a maker of a self-driving car would be able to declare compliance with EU requirements and could then slap an “ethical/trustworthy artificial intelligence” label on the vehicle without undergoing testing that might prove that claim.

Facial recognition remains a fraught topic in Europe, as it is pretty much everywhere else people have a say in their own government.

In a blog post published last October, EU data protection supervisor Wojciech Wiewiórowski warned that facial recognition may not be ethically compatible with democracy.

“We need to assess not only the technology on its own merits, but also the likely direction of travel if it continues to be deployed more and more widely,” said Wiewiórowski. “The next stage will be pressure to adopt other forms of objectification of the human being, gait, emotions, brainwaves.”

“Now is the moment for the EU, as it discusses the ethics of AI and the need for regulation, to determine whether – if ever – facial recognition technology can be permitted in a democratic society. If the answer is yes, only then do we turn questions of how and safeguards and accountability to be put in place.”

Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/17/eu_ban_facial_recognition/
