Determining the best place to drop off a passenger can be a surprisingly problematic issue.
It is relatively common, and downright unnerving, for a ridesharing service or taxi to unceremoniously drop you off at a spot that is poorly chosen and rife with complications.
I remember one time in New York City, a cab driver was taking me to my hotel after I had arrived past midnight at the airport, and for reasons I'll never know he opted to drop me off about a block from the hotel, at a darkened corner that was marked with graffiti and looked quite like a warzone.
I walked nearly a city block at nighttime, in an area that I later discovered was infamous for being dangerous, including muggings and other unsavory acts.
In one sense, when we are dropped off from a ridesharing service or its equivalent, we often tend to assume that the driver has identified a suitable place to do the drop-off.
Presumably, we expect as a minimum:
· The drop-off spot is near the desired destination
· It is relatively easy to get out of the vehicle at the spot
· It is safe to exit the vehicle without harm
· The drop-off is treated as a vital part of the journey, counting as much as the initial pick-up and the drive itself
In my experience, the drop-off often seems to be a moment for the driver to get rid of a passenger; the driver's mindset is frequently on where the next fare will be, since the value of the existing passenger has been exhausted and the driver is seeking more revenue from the next one.
Of course, you can even undermine yourself when it comes to doing a drop-off.
The other day, it was reported in the news that a woman got out of her car on the 405 freeway in Los Angeles when her car had stalled, and regrettably, horrifically, another car rammed into her and her stalled vehicle. A cascading series of car crashes then occurred, closing down much of the freeway in that area and backing up traffic for miles.
In some cases, when driving a car ourselves, we make judgements about when to get out of the vehicle, and in other cases such as ridesharing or taking a taxi, we are having someone else make a judgement for us.
In the case of a ridesharing or taxi driver, I eventually figured out that as the customer I need to double-check the drop-off, along with requesting an alternative spot to be dropped off if the circumstances seem to warrant it. You usually assume that the local driver you are relying on has a better sense as to what is suitable for a drop-off, but the driver might not be thinking about the conditions you face and instead could be concentrating on other matters entirely.
Here’s a question for you: how will AI-based true self-driving cars know where to drop off human passengers?
This is actually quite a puzzling problem. Though it does not yet seem high on the priority list of AI developers for autonomous cars, the drop-off matter will ultimately rear its problematic head as something needing to be solved.
The simplistic view of the drop-off is that the AI system merely stops at the exact location you’ve requested, as though the destination were nothing more than a mathematically specified latitude and longitude, and then it is up to you to get out of the self-driving car.
This might mean that the autonomous car is double-parked, though if this is an illegal traffic act then it goes against the belief that self-driving cars should not be breaking the law.
I’ve spoken and written extensively about the falsehood of assuming that autonomous cars will always strictly obey all traffic laws; there are many situations in which we as humans bend or outright violate the strict letter of the traffic laws, doing so out of the necessity of the moment or because we are at times allowed to do so.
In any case, my point is that the AI system in this simplistic perspective is not doing what we would overall hope or expect a human driver to do when identifying a drop-off spot, which as I mentioned earlier should have these kinds of characteristics:
· Close to the desired destination
· Stopping at a spot that allows for getting out of the car
· Ensuring the safety of the disembarking passengers
· Ensuring the safety of the car in its stopped posture
· Not unduly impeding traffic during the stop
Imagine for a moment what the AI would need to do to derive a drop-off spot based on those kinds of salient criteria.
The sensors of the self-driving car, such as the cameras, radar, ultrasonic, LIDAR, and other devices would need to be able to collect data in real-time about the surroundings of the destination, once the self-driving car has gotten near to that point, and then the AI needs to figure out where to bring the car to a halt and allow for the disembarking of the passengers. The AI needs to assess what is close to the destination, what might be an unsafe spot to stop, what is the status of traffic that’s behind the driverless car, and so on.
Let’s also toss other variables into the mix.
Suppose it is nighttime: does the drop-off selection change versus dropping off in daylight? (Often, the answer is yes.) Is it raining or snowing, and if so, does that impact the drop-off choice? (Usually, yes.) Is there any road repair taking place near the destination, and does that impact the options for doing the drop-off? (Yes.)
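Weighing criteria like distance, ease of exit, safety, and traffic impact, then adjusting for conditions such as nighttime or rain, can be sketched as a simple scoring function over candidate stopping spots. To be clear, the criteria names, weights, and adjustments below are all my own illustrative assumptions, not the logic of any actual autonomous driving system.

```python
from dataclasses import dataclass

@dataclass
class Spot:
    name: str
    distance_m: float      # walking distance to the destination, in meters
    exit_clearance: float  # 0..1, ease of getting out (curb space, doors clear)
    safety: float          # 0..1, lighting, sidewalk presence, etc.
    traffic_impact: float  # 0..1, how badly stopping here blocks traffic

def score(spot: Spot, night: bool = False, raining: bool = False) -> float:
    """Higher is better; all weights are illustrative guesses."""
    s = (-0.01 * spot.distance_m
         + 2.0 * spot.exit_clearance
         + 3.0 * spot.safety
         - 2.0 * spot.traffic_impact)
    if night:      # weight safety more heavily after dark
        s += 2.0 * (spot.safety - 0.5)
    if raining:    # penalize long walks in bad weather
        s -= 0.01 * spot.distance_m
    return s

candidates = [
    Spot("curb at entrance", 5, 0.9, 0.8, 0.6),
    Spot("corner down the block", 120, 0.95, 0.4, 0.1),
]
best = max(candidates, key=lambda sp: score(sp, night=True))
print(best.name)  # the nighttime safety bonus favors the entrance curb
```

A real system would of course derive these inputs from live sensor data rather than hand-set numbers, and the weights themselves would be a contested design choice.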
If you are saying to yourself that the passenger ought to take fate into their own hands and tell the AI system where to drop them off, yes, some AI developers are incorporating Natural Language Processing (NLP) that can interact with the passengers for such situations, though this does not entirely solve this drop-off problem.
The reason is that the passenger might not know what a good drop-off spot is.
I’ve had situations in which I argued with a ridesharing driver or cabbie about where I thought I should be dropped off, yet it turned out their local knowledge was more attuned to what was a prudent and safer place to do so.
Plus, in the case of autonomous cars, keep in mind that the passengers in the driverless car might be all children and no adults. This means that you are potentially going to have a child trying to decide what is the right place to be dropped off.
I shudder to think that we might really have an AI system that lacks any semblance of common sense taking strict orders from a young child, whereas an adult human driver would presumably (hopefully) be able to counteract any naïve and dangerous choice of drop-off.
The drop-off topic will especially come into play for self-driving cars at Level 4, the level at which an autonomous car will seek to pull over or find a “minimal risk condition” setting when the AI has exhausted its allowed Operational Design Domain (ODD). We are going to have passengers inside Level 4 self-driving cars who might get stranded in imprudent places, including, say, young children or an elderly person having difficulty caring for their own well-being.
It has been reported that some of the initial tryouts of self-driving cars revealed that the autonomous cars got flummoxed somewhat when approaching a drop-off at a busy schoolground, which makes sense in that even as a human driver the chaotic situation of young kids running in and around cars at a school can be unnerving.
I remember when my children were youngsters how challenging it was to wade into the morass of cars coming and going at the start of school day and at the end of the school day.
One reported solution involved re-programming the self-driving cars to drop off their elementary-school-aged passengers at a corner down the street from the school, thus apparently staying out of the traffic fray.
In the case of my own children, I had considered doing something similar, but subsequently realized that it meant they had a longer distance to walk to school, introducing other potential risks, and that it made more sense to dig into the traffic and drop them as close to the school entrance as I could get.
Some hope that Machine Learning and Deep Learning will gradually improve the AI driving systems as to where to drop off people, potentially learning over time where to do so, though I caution that this is not a slam-dunk notion (partially due to the lack of common-sense reasoning for AI today).
Others say that we’ll all just have to adjust to the primitive AI systems by having restaurants, stores, and other locales stipulate designated drop-off zones.
This seems like an arduous logistical undertaking that is unlikely to cover all possible drop-off situations. A kindred approach involves using V2V (vehicle-to-vehicle) electronic communications, allowing a car that has found a drop-off spot to inform other nearing cars of where it is. Once again, this has various trade-offs and is not a cure-all.
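The V2V idea of one car informing nearby cars about a discovered drop-off spot can be sketched as a tiny publish-and-receive exchange. The message fields, the in-memory bus, and the class names here are all invented for illustration; a real deployment would ride on an actual V2V channel such as DSRC or C-V2X, with vetting and expiry of advisories.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DropOffAdvisory:
    destination_id: str  # e.g., a geocoded address or place identifier
    lat: float
    lon: float
    reported_by: str     # identifier of the reporting vehicle

class V2VBus:
    """Toy stand-in for a real vehicle-to-vehicle channel."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, car):
        self.subscribers.append(car)
    def broadcast(self, advisory, sender):
        for car in self.subscribers:
            if car is not sender:
                car.receive(advisory)

class Car:
    def __init__(self, vin, bus):
        self.vin = vin
        self.bus = bus
        self.known_spots = {}  # destination_id -> DropOffAdvisory
        bus.subscribe(self)
    def found_drop_off(self, advisory):
        # remember the spot we found, then tell nearby cars
        self.known_spots[advisory.destination_id] = advisory
        self.bus.broadcast(advisory, sender=self)
    def receive(self, advisory):
        # keep the first advisory heard; a real system would vet and expire these
        self.known_spots.setdefault(advisory.destination_id, advisory)

bus = V2VBus()
car_a, car_b = Car("CAR-A", bus), Car("CAR-B", bus)
car_a.found_drop_off(DropOffAdvisory("school-main-gate", 34.05, -118.24, "CAR-A"))
print(car_b.known_spots["school-main-gate"].reported_by)  # CAR-A
```

The trade-offs mentioned above show up even in this toy: stale or malicious advisories, and the question of whether to trust another car's judgment of "safe."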
To some, this might seem like a ridiculous topic; worrying about dropping off people from autonomous cars smacks of overkill.
Just get to the desired destination via whatever coordinates are available, and make sure the autonomous car doesn’t hit anything or anyone while getting there.
The thing is, that last step, getting out of an autonomous car, might ruin your day or, worse, cost a life. We need to consider the entire passenger journey holistically, from start to finish, including where to drop off the humans riding in self-driving driverless cars.
It will be one small step for mankind, and one giant leap for AI autonomous cars.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
The Federal Deposit Insurance Corp. is moving to boost the way it monitors for risks at thousands of U.S. banks, potentially scrapping quarterly reports that have been a fixture of oversight for more than 150 years yet often contain stale data.
The FDIC has long been one of the cheerleaders and case studies for the efficiency-increasing impact of XBRL-based reporting. Therefore it will be fascinating to observe this competition and its outcome.
Amongst several fascinating presentations at the Eurofiling Innovation Day this week was an interesting demonstration on how XBRL reports can be used as the basis of explainable AI for bankruptcy prediction.
The black-box nature of many AI models is one of the biggest issues in applying AI in regulated environments, where causal linkages are the bedrock of litigation and the like. Making models explainable would remove a major headache for many use cases.
Standardized financials from earnings press releases and 8-Ks are now available via the Calcbench API minutes after publication. Calcbench is leveraging its expertise in XBRL to extract many of the numbers from the Income Statement, Balance Sheet, and Statement of Cash Flows in the earnings press release or 8-K.
The time lag between the publication of earnings information and its availability in XBRL format will remain a roadblock to wholesale adoption of XBRL by financial markets until regulators require immediate publication in XBRL in real time. The Calcbench API is a welcome stop-gap measure.
Christian Dreyer CFA is well known in Swiss Fintech circles as an expert in XBRL and financial reporting for investors.
We have a self-imposed constraint of 3 news stories each week because we serve busy senior leaders in Fintech who need just enough information to get on with their job.
New readers can read 3 free articles. To become a member with full access to all that Daily Fintech offers, the cost is just USD 143 a year (= USD 0.39 per day or USD 2.75 per week). For less than one cup of coffee you get a week full of caffeine for the mind.
The parable of the frog in the boiling water is well known: put a frog into boiling water and it will immediately jump out, but put the frog into tepid water and gradually raise the temperature and it will slowly boil to death. It’s not true, but it is a clever lede into the artificial intelligence evolution within insurance. Are there insurance ‘frogs’ in danger of tepid water turning hot, and are there frogs suffering from FOHW (fear of hot water)?
Patrick Kelahan is a CX, engineering & insurance consultant, working with Insurers, Attorneys & Owners in his day job. He also serves the insurance and Fintech world as the ‘Insurance Elephant’.
The frog and boiling water example is intuitive: stark change is noticed, gradual change not so much. It’s like the exchange in Ernest Hemingway’s “The Sun Also Rises”: “How did you go bankrupt?” “Gradually, and then suddenly.” In each example the message is similar: adverse change is not always abrupt, but failure to notice or react to changing conditions can lead to a worst-case scenario. So it is with insurance innovation.
A recent interview in The Telegraph by Michael Dwyer of Peter Cullum, non-executive Director of Global Risk Partners (and certainly one with a CV that qualifies him as a knowing authority), provided this view:
“Insurance is one business that is all about data. It’s about numbers. It’s about the algorithms. Quite frankly, in 10 years’ time, I predict that 70pc or 80pc of all underwriters will be redundant because it will be machine driven.
“We don’t need smart people to make what I’d regard as judgmental decisions because the data will make the decision for you.”
A clever insurance innovation colleague, Craig Polley, recently posed Peter’s insurance scenario for discussion, and the topic generated lively debate: will underwriting become machine driven, or is there an overarching need for human intuition? I’m not brave enough to serve as arbiter of the discussion, but the chord Craig’s question struck leads to the broader point: is the insurance industry sitting in that tepid water now, and are the flames of AI potentially leading to parboiling?
I offered a thought recently to an AI advocate looking for insight into how the concept is embraced by insurance organizations. Considering the fundamentals, I recounted that insurance as a product thrives best in environments where risk can be understood, predicted, and priced across populations with widely varied individual risk exposures, as best determined by risk experience within the population or by application of risk indicators. Blah, blah, blah. Insurance rests on a long-standing principle of sharing the ultimate cost of risk such that no one participant is unduly disadvantaged and no one party gains a financial advantage; it is a balance of cost and probability.
Underwriting has been built on a model of proxy information, on the law of large numbers, of historical performance, of significant populations and statistical sampling. There is not much new in that description, but what if the dynamic is changed, to an environment where the understanding of risk factors is not retrospective, but prospective?
Take commercial motor insurance for example. Reasonably expensive, plenty of human involvement in underwriting, high maximum loss outcomes for occurrences. Internal data are the primary source for rating the book of business. There are, however, new approaches being made in the industry that supplant traditional internal or proxy data with robust analysis of external data. Luminant Analytics is an example of a firm that leverages AI to provide not only predictive models for motor line loss frequency and severity trends, but also analytics that help companies expand into new markets where historical loss data is unavailable. Traditional underwriting has remained a solid approach, but is it now akin to turning up the heat on the industry frog?
The COVID-19 environment has by default prompted a dramatic increase in virtual claim handling techniques, changing what was not too long ago verboten: waiver of inspection on higher-value claims, or acceptance of third-party estimates in lieu of measure-by-the-inch adjuster work. Yes, there will be severity hangovers and spikes in supplements, but carriers will find expediency trumps detail, as long as the customer accepts the change in methods. If we consider the recent announcement by US P&C carrier Allstate of significant staff layoffs as an indicator of the inroads of virtual efforts, then there seemingly is hope for that figurative frog.
Elsewhere it was announced that the All England Club has not had its Wimbledon event cancellation cover renewed for 2021 (please recall that the Club was prescient in having cancellation cover in force that included pandemic benefits). The prior policy’s underwriters are apparently reluctant to shell out another potential $140 million with a recurrence of a pandemic, but are there other approaches to pandemic cover? The consortium of underwriting firms devised the cover seventeen years ago; can the cover for a marquee event benefit from AI methodology that simply didn’t exist in 2003? It’s apparent the ask for cover for the 2021 event attracted knowledgeable frogs that knew to jump out of hot water, but what if the exposure burner is turned down through better understanding of the breadth of data affecting the risk, that there is involvement of capital markets in diversifying the risk perhaps across many unique events’ outcomes and alternative risk financing, and leveraging of underwriting tools that are supported by AI and machine learning? Will it be found in due time that the written rule that pandemics cannot be underwritten as a peril will have less validity because well placed application of data analysis has wrangled the risk exposure to a reasonable bet by an ILS fund?
There are more examples of AI’s promise, but let us not forget that AI is not the magic solution to all insurance tasks. Companies that invest in AI without a fitting use case are simply moving their frog to a different but just as threatening a pot. Companies that invest in innovation that cannot bridge their legacy system to meaningful outcomes because there is no API functionality are turning up the heat themselves. Large-scale innovation options that are coming to a twenty-year anniversary (think post-Y2K) may have compounding legacy issues: old legacy and new legacy.
The insurance industry needs to consider not just individual instances of the gradual heat of change being applied.
What prevents the capital markets from applying AI methods (through design or purchase) in predicting or betting on risk outcomes? The more comprehensive and accurate risk prediction methods become the more direct the path between customer and risk financing partner also becomes. Insurance frogs need not fear the heat if there are fewer pots to work from, but no pots, no business.
The risk sharing/risk financing industry has evolved through application of available technology and tools, what’s to say AI does not become a double-edged sword for the insurance industry- a clever tool in the hands of insurers, or a clever tool in the hands of alternative financing that serves to cut away some of the insurers’ business? If asked, Peter Cullum might opine that it’s not just underwriting that AI will affect, but any other aspect of insurance that AI can effectively influence. Frogs beware.
You get three free articles on Daily Fintech; after that you will need to become a member for just US $143 per year ($0.39 per day) and get all our fresh content and archives and participate in our forum
Creators of the 80 Million Tiny Images data set from MIT and NYU took the collection offline this week, apologized, and asked other researchers to refrain from using the data set and delete any existing copies. The news was shared Monday in a letter by MIT professors Bill Freeman and Antonio Torralba and NYU professor Rob Fergus published on the MIT CSAIL website.
Introduced in 2006 and containing photos scraped from internet search engines, 80 Million Tiny Images was recently found to contain a range of racist, sexist, and otherwise offensive labels, such as nearly 2,000 images labeled with the N-word, and labels like “rape suspect” and “child molester.” The data set also contained pornographic content like non-consensual photos taken up women’s skirts. Creators of the 79.3 million-image data set said it was too large and its 32 x 32 images too small, making visual inspection of the data set’s complete contents difficult. According to Google Scholar, 80 Million Tiny Images has been cited more than 1,700 times.
Above: Offensive labels found in the 80 Million Tiny Images data set
“Biases, offensive and prejudicial images, and derogatory terminology alienates an important part of our community — precisely those that we are making efforts to include,” the professors wrote in a joint letter. “It also contributes to harmful biases in AI systems trained on such data. Additionally, the presence of such prejudicial images hurts efforts to foster a culture of inclusivity in the computer vision community. This is extremely unfortunate and runs counter to the values that we strive to uphold.”
The trio of professors say the data set’s shortcomings were brought to their attention by an analysis and audit published late last month (PDF) by University of Dublin Ph.D. student Abeba Birhane and Carnegie Mellon University Ph.D. student Vinay Prabhu. The authors say their assessment is the first known critique of 80 Million Tiny Images.
Both the paper authors and the 80 Million Tiny Images creators say part of the problem comes from automated data collection and nouns from the WordNet data set for semantic hierarchy. Before the data set was taken offline, the coauthors suggested the creators of 80 Million Tiny Images do like ImageNet creators did and assess labels used in the people category of the data set. The paper finds that large-scale image data sets erode privacy and can have a disproportionately negative impact on women, racial and ethnic minorities, and communities at the margin of society.
Birhane and Prabhu assert that the computer vision community must begin having more conversations about the ethical use of large-scale image data sets now in part due to the growing availability of image-scraping tools and reverse image search technology. Citing previous work like the Excavating AI analysis of ImageNet, the analysis of large-scale image data sets shows that it’s not just a matter of data, but a matter of a culture in academia and industry that finds it acceptable to create large-scale data sets without the consent of participants “under the guise of anonymization.”
“We posit that the deeper problems are rooted in the wider structural traditions, incentives, and discourse of a field that treats ethical issues as an afterthought. A field where in the wild is often a euphemism for without consent. We are up against a system that has veritably mastered ethics shopping, ethics bluewashing, ethics lobbying, ethics dumping, and ethics shirking,” the paper states.
To create more ethical large-scale image data sets, Birhane and Prabhu suggest:
Blur the faces of people in data sets
Do not use Creative Commons licensed material
Collect imagery with clear consent from data set participants
Include a data set audit card with large-scale image data sets, akin to the model cards Google AI uses and the datasheets for data sets Microsoft Research proposed
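The audit card suggestion above lends itself to a machine-readable artifact shipped alongside the data. The sketch below is loosely modeled on the spirit of Google's model cards and Microsoft Research's datasheets; the exact field names and schema are my own assumptions, not a published standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetAuditCard:
    """A minimal, machine-readable audit record for an image data set."""
    name: str
    num_items: int
    collection_method: str
    consent_obtained: bool      # was clear consent collected from participants?
    faces_blurred: bool         # were identifiable faces blurred?
    known_issues: list = field(default_factory=list)

# Hypothetical example card for an imagined scraped subset
card = DatasetAuditCard(
    name="example-tiny-images-subset",
    num_items=1000,
    collection_method="web scrape via search-engine queries",
    consent_obtained=False,
    faces_blurred=True,
    known_issues=["labels inherited from WordNet may contain slurs"],
)

# Serialize so the card can travel with the data set release
print(json.dumps(asdict(card), indent=2))
```

Publishing such a card with each release would make an audit like Birhane and Prabhu's a routine artifact rather than an after-the-fact exposé.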
The work incorporates Birhane’s previous work on relational ethics, which suggests that the creators of machine learning systems should begin their work by speaking with the people most affected by machine learning systems, and that concepts of bias, fairness, and justice are moving targets.
“We indeed celebrate ImageNet’s achievement and recognize the creators’ efforts to grapple with some ethical questions. Nonetheless, ImageNet as well as other large image datasets remain troublesome,” the Birhane and Prabhu paper reads.