I’m writing to you from my apartment, drinking a cup of Jing’s finest Ceylon breakfast tea, looking out the window at a sunny yet empty Lisbon. About a week ago, my coworkers and I were advised to take our computers and chargers home with us. Soon followed the official email: our offices would remain closed until further notice to help mitigate the spread of COVID-19.
As someone who’s always had the perk of working remotely once or twice a week (something my orange fox of a dog deeply enjoys), this didn’t disrupt my work routine that much. Sure, there’s a shaky wifi connection every so often, or an over-excited dog to appease, but it’s been pretty much business as usual. Of course, not everyone can say the same. Overnight, countless startups, multinational corporations and small businesses are facing an unprecedented situation, as they’ve unwillingly signed up for the world’s biggest remote working experience — and most of them were not ready.
Over the past couple of days, I’ve been gathering a few resources and insights that could be useful for companies trying to figure out how to deal with this abrupt work-from-home scenario. I’ve also reached out to remote work consultants and leaders working in partial or fully remote companies to share advice on the tools, strategies and mindset managers need to be successful in this transition. Here’s what they had to say:
Step one: Communication
About a week ago, on LinkedIn, people started sharing documents with tips and tools to help companies who were forced into remote working. Alexandre Mendes, former Executive Director of Startup Braga, was one of them. “It was one of those rare moments when I thought about doing something, and actually did it right away,” he told me. He’s been studying remote working for a few years now, advising companies on how to transition towards remote working.
Consultant, Former Executive Director of Startup Braga
Most people think it’s about the tools, but it’s not. The tools are there, but the secret sauce is how you use them: how you’re using Slack to communicate, how you’re managing the teams, setting expectations, defining goals. Working remotely has a big influence on how we communicate — writing on Slack is not the same as writing an email, for example.
So first things first — let’s start with communication.
Establishing clear communication lines is a big thing at GitLab, the world’s biggest all-remote company, with over 1,200 team members located in more than 65 countries. The company recently hosted a webinar with CEO Sid Sijbrandij and Head of Remote Darren Murph to share advice on embracing this new remote reality.
Working remotely came naturally to Darren Murph, Head of Remote at GitLab. “It was at the office where I felt out of place,” he says.
Head of Remote at GitLab
I think it all starts and ends with communication because if communication isn’t in place, nothing else is going to work out, right? I would actually advise getting kind of a go team. Ask around at your company: has anyone ever worked remotely before? Does anybody have expertise in the space?
Get a go team together and start communicating with people through as few channels as possible. You want to avoid silos and fragmentations, especially during the early days of working out the work-from-home kinks, and be open to feedback. So whatever documentation and communication channel you choose, listen to people.
They’re going to have different issues — not all homes are ideal and amenable to work from home right out of the gate. And use that go team to kind of prioritize the feedback that you’re getting and try to find solutions to them as quickly as possible. Putting a plan in place so that people know they’re being listened to, the feedback is being heard and that solutions are being researched, it will help stabilize what could be quite a chaotic environment.
Schedule regular check-ins
As you start shifting towards remote work, consider scheduling a few more check-ins with your team than you normally would. Try to understand how they’re adapting to this abrupt situation, what they’ve been up to, and how their loved ones are. Managers should give employees who are struggling a bit more direction to help with the transition, supporting them every step of the way.
According to Harvard Business Review, research on emotional intelligence shows that “employees look to their managers for cues about how to react to sudden changes or crisis situations.” So if your employee is feeling anxious, try to avoid going down a doomsday spiral with them and reassure them instead. Thanking employees for their effort and offering simple reassurances like “we’ll get through this” can boost morale right up.
Step two: Forget business-as-usual
A regular work-from-home situation already presents some challenges, but this is a whole other ball game. It’s not just that most people don’t have home offices set up, which is true (I’ve heard of people rushing to the office to get their monitors, and of several people spotted on the bus home carrying monitors under their arms); there’s also just so much chaos in this scenario. Daycares and schools are closed, and people are juggling work with taking care of kids, loved ones, and cats who insist on sitting on the keyboard during important work calls. You should expect a disruption of workflows and normal working hours, especially in the beginning.
“You’ve got to assume people aren’t always checking their messages. I think it’s a bit unrealistic to think that, under these circumstances, people can keep a full-time, 9-to-5 job,” Alexandre said.
Laurel Farrer is a remote work expert who’s been helping companies leverage a virtual workforce for the past ten years. As she recently posted on LinkedIn, for over a decade she’s had to “hide her children from her roles for the sake of ‘professionalism’.”
Remote work expert and strategist
Whether or not you have kids of your own, try to avoid frustration if any children interrupt a virtual team meeting or if working hours need to be adjusted. Consolidating school, work, home, church, and childcare all into the same rooms for several weeks is a tough challenge. The world is scared and stressed right now, and the last thing we should be doing is taking it out on our families (or the family of a coworker) by making an innocent child feel like an irritant.
Business leaders should be extra flexible, supporting their employees and team members with these challenges. Let people change their work hours when they need to. Be understanding if someone has to leave in the middle of a meeting. That’s just what Mixpanel, an analytics startup, did. A couple of days ago, Anca Croitoru, one of the company’s Senior Customer Success Managers, woke up to a company-wide COVID-19 update.
“If your children or loved ones make noise during calls or walk into online meetings, feel free to introduce them to your coworkers,” one of the points read. The email acknowledged that having children or dependants full-time at home would impact their employees’ productivity, and it reassured them that it was ok, and that deadlines and expectations would be set accordingly.
Step three: Set up your tool stack
Before diving into the tools that can help you make the most out of this, a word of advice. Unless the tool you’re using is really not up to the challenge, stick to the tools your team already knows. It’s chaotic enough without introducing more unknown elements to the process. Keep consistency as much as possible.
💬 Communication: Slack
Despite all the articles claiming Slack is disrupting our work, it’s still the tool on everyone’s minds. It’s super intuitive to use, and most importantly, most companies already have it (I even created some workspaces for just a few friends).
P.S. For the sake of accountability and record tracking, keep all major communication in public channels instead of private ones.
📋 Project management
From simple task lists to major cross-functional projects, these tools help teams stay on top of project deliverables and status, so it’s clear for everyone who’s doing what, when, and why.
📹 Video calls
Set up calls with your team, record them for future use, and even keep rooms open for anyone to join.
🤝 Real-time collaboration: GSuite
Docs, Sheets, and Slides are editable by anyone at your company (or even by external guests) in real time, allowing your teams to work on a document together while keeping a record of what changes were made, and by whom.
Perhaps more importantly, communicate very clearly how each tool is to be used — email could be used for more formal announcements, Slack for project discussions, and Asana for requests, for example.
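One lightweight way to make such a convention explicit, for example in an onboarding doc or a small internal helper script, is to encode the mapping directly. This is a hypothetical sketch; the message types and the mapping itself are purely illustrative, not a prescription:

```python
# Example convention mapping message types to communication channels.
# The categories and channel assignments are made up for illustration;
# each company should agree on its own mapping.
CHANNEL_FOR = {
    "formal_announcement": "Email",
    "project_discussion": "Slack",
    "work_request": "Asana",
}

def where_to_post(message_type: str) -> str:
    """Return the agreed channel for a message type, defaulting to Slack."""
    return CHANNEL_FOR.get(message_type, "Slack")

print(where_to_post("work_request"))   # Asana
print(where_to_post("quick_question")) # Slack (fallback)
```

The point is not the code itself but the discipline it represents: when the mapping is written down in one agreed place, nobody has to guess where a request belongs.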
Of course, these are just a few of the most commonly mentioned ones. If you’re curious about what else is out there, you can check out Alexandre’s doc.
Step four: Think in outputs, not hours
I met John Riordan, Director of Support, Ireland at Shopify, for a brief virtual chat on Monday morning. He’s been working remotely since 2002, most of it managing teams, and while he believes the tools are easy enough to get around, the mindset isn’t. “So the biggest problem with this change is that it’s been forced upon people, and the mindset hasn’t changed. I worry about the mindset of people in the very senior leadership roles when the only thing they’ve ever known is office based. There’s a nervousness and a fear factor amongst the leadership. And that is the fear of the unknown.”
When everyone is working remotely, there’s one thing that needs to change immediately — the focus needs to move from time spent to output. Managing remote teams means that instead of worrying about where your employees are at all times of the day, constantly checking whether they’re online and whether they get back to you within seconds, you should focus on assigning clear deliverables and outputs.
While office work is largely dictated by schedules, calendars, and synchronous communication (in meetings, Slack channels, or in person), remote working, especially in these circumstances, will inevitably drive asynchronous communication and workflows. That is, communicating and moving projects along without needing your peers to be available online.
In GitLab’s Guide to All-Remote, they mention that the easiest way to enter into an asynchronous mindset is to ask this question: “How would I deliver this message, present this work, or move this project forward right now if no one else on my team (or in my company) were awake?”
According to them, asking this question removes the temptation to take shortcuts, “or to call a meeting to simply gather input” — alas, the dreaded meeting that could have been an email.
In order for asynchronous communication to work, you need a tool that gathers all available documentation and context in one place: a single source of truth. It doesn’t matter if you pick GitLab, Asana, Trello, or even a Slack channel; everyone just needs to agree on what the tool is, to avoid splintering communication across multiple channels. When you’re quickly shifting all your operations towards a remote setting, a gentle tap on the shoulder asking for a bug to be fixed no longer works. Every change and every request needs to be documented and visible across the entire organization.
Ideally, you’ve hired hard-working, talented people who thrive on autonomy and empowerment, so they don’t need to be managed as much as being in the loop — again, communication is key here. Be very clear about what’s expected of your employees, keep communication lines open, and make sure they have all the information they need to do their jobs.
Step five: Keep up with the culture
For a whole lot of us, the office inevitably becomes a sort of second home. We make friends, have lunch together, share harmless gossip during happy hours, and enjoy those silly collective moments that rise out of nowhere and vanish as mysteriously as they appeared. How can you replicate this? How can teams still feel connected when they’re not sharing the same physical space?
To John Riordan, it’s all about starting the day together:
We have the teams broken out into 10 or 11 people who are working on the same period of time together in a day, and they all start together. They have what’s called a jumpstart meeting, so that’s a touch point every morning. And there’s a touch point at the end of the day. We’ve been doing this forever and one of the reasons for that is that it’s actually like being welcomed into the office and it’s like being told, okay, we’re done now. And the important thing is that these syncs don’t have to be work-focused.
At Unbabel, we’ve been doing something similar. Every day at 9.30am, our team gathers for a half hour Zoom chat where we share what we’ve been up to. You would think there’s not much to share given that we’re all cooped up in our apartments all day, but you would be surprised. Indoor workout advice, cat competitions, bored dogs, bread making, trying to guess who’s rocking some sweatpants — the possibilities are endless. We’re also hosting 5.00pm Margaritaville hangouts (margarita optional), so our team can unwind after a hard day’s (remote) work.
In a sense, we’re mimicking what happens daily at the office. If that involves a lot of spontaneous banter or water-cooler chatter, maybe keep a room or a Google Hangout open all day. “You can turn your camera off if you want to and work away, but there’s always what I call a lifeline to humanity,” John told me. If it involves bemoaning the weather together as you bump into one another in the kitchen, grabbing lunch, or doing happy hour on Friday afternoons, try to arrange online replacements.
Investing all this time and effort into creating virtual social interactions that have nothing to do with work itself can sound counterproductive, but it’s absolutely worth it: first, for the sake of team building, which can suffer greatly, and second, for our own mental health. These quick chats can help reduce feelings of isolation and promote a sense of belonging. The sheer number of pictures and videos being shared right now in our #tower-pets Slack channel proves just how much we need silly cat videos at the moment. We’re quarantined, overwhelmed, stressed, and anxious about all the unknowns around us; connecting with a coworker, even if it’s just for some laughs, is really helpful.
Wade Foster, CEO at Zapier, recently shared in a blog post:
Co-founder and CEO at Zapier
One time, things were slowing down in our support channel, and one of our employees just said “let’s have a dance party.” Everyone picked a song on Spotify, recorded themselves doing a dance, then put the gif in Slack. We created a montage of everyone dancing, and it was awesome—people pulled their kids into it, pulled their dogs into it. This kind of thing helps people feel engaged and prevents that loneliness and isolation that everyone worries about with remote work.
Step six: Automate, automate, automate
Right about now, a lot of companies are in survival mode, especially in industries such as travel and hospitality. Support agents are working around the clock to help customers with flight cancellations, hotel reservations, and so on. Non-stop calls, full inboxes, tickets piling up: it’s one thing to handle a 20 or 30% increase in customer interactions; it’s a whole other ball game to deal with an increase of 300%.
If there’s one thing our Director of Customer Support, Luís Pinto, knows, it’s that in times of crisis you should automate as much as possible:
Director of Customer Support at Unbabel
In my experience, the only way to redirect ticket volumes is through self-service. All the information the customer needs should be on support portals, FAQs, the company website, email newsletters, even social media. If you can remove 20 or 30% of that volume, it’s such a big help. In times of crisis, support teams need to understand what the ten or so most-asked questions are — did my flight get canceled? what are the next steps? what’s your refund policy? — all of these need to be readily available.
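Finding the most-asked questions can be sketched as a simple frequency count over ticket subjects. This is a minimal, hypothetical illustration; in practice the tickets would come from a help-desk export, and the sample data below is made up:

```python
from collections import Counter

# Hypothetical ticket subjects, as might be exported from a help desk.
tickets = [
    "Did my flight get canceled?",
    "What is your refund policy?",
    "Did my flight get canceled?",
    "What are the next steps?",
    "What is your refund policy?",
    "Did my flight get canceled?",
]

# Count how often each question appears; the most common ones are the
# best candidates for FAQ entries and support-portal articles.
top_questions = Counter(tickets).most_common(3)
for question, count in top_questions:
    print(f"{count:>3}  {question}")
```

Real ticket subjects are rarely identical strings, so a production version would first normalize or cluster similar questions, but the prioritization logic stays the same.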
Step seven: Hang in there
This is not a normal work-from-home scenario. We’re all still adjusting, taking care of ourselves and loved ones, trying to figure this out and learn as we go. As Riordan said, “You’re going to make mistakes.”
Director of Support, Ireland at Shopify
If you go into this thinking that you’re not going to make mistakes, you’re wrong. You’re going to make mistakes. Find peers outside the company who have done or who are doing remote work and lean in and listen. If you try to take an office space culture, which I’m going to refer to as being quite square, and you put it into a remote culture, which is quite circular, it’s not going to work. We all know you can’t bang a square into a circle. It just doesn’t fit. So you’re going to have to understand that it’s going to take a while to mold into the right shape.
If there’s any upside to this, it’s how kind strangers from the internet can be. Over the last few days, social media has been flooded with messages offering services, advice, or just a friendly virtual shoulder. Companies with all-remote or almost-all-remote cultures are sharing guides and organizing webinars so that this transition can be as smooth as possible, not just for business leaders but also for teachers and students struggling to keep up with the school year.
We weren’t ready for this, but we’ve been through worse and bounced back. With the right tools, mindset, and just a little guidance, you might just pull it off.
There’s plenty of information online to help you with this transition. I personally recommend:
That’s it, folks. Stay home and stay healthy!
Here Come the AI Regulations
By AI Trends Staff
New laws will soon shape how companies use AI.
The five largest federal financial regulators in the US recently released a request for information on how banks use AI, signaling that new guidance is coming for the finance industry. Soon after, the US Federal Trade Commission released a set of guidelines on “truth, fairness and equity” in AI, defining the illegal use of AI as any act that “causes more harm than good,” according to a recent account in Harvard Business Review.
And on April 21, the European Commission issued its own proposal for the regulation of AI (see AI Trends, April 22, 2021).
While we don’t know what these regulations will allow, “Three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don’t run afoul of any existing and future laws and regulations,” stated article author Andrew Burt, the managing partner of bnh.ai, a boutique law firm focused on AI and analytics.
First, conduct assessments of AI risks. As part of the effort, document how the risks have been minimized or resolved. Regulatory frameworks that refer to these “algorithmic impact assessments,” or “IA for AI,” are available.
For example, Virginia’s recently passed Consumer Data Protection Act requires assessments for certain types of high-risk algorithms.
The EU’s new proposal requires an eight-part technical document to be completed for high-risk AI systems that outlines “the foreseeable unintended outcomes and sources of risks” of each AI system, Burt states. The EU proposal is similar to the Algorithmic Accountability Act filed in the US Congress in 2019. The bill did not go anywhere but is expected to be reintroduced.
Second, accountability and independence. The idea is that the data scientists, lawyers, and others evaluating the AI system should have different incentives than the frontline data scientists who built it. This could mean having the AI tested and validated by technical personnel other than those who originally developed it, or hiring outside experts to assess the AI system.
“Ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI,” Burt states.
Third, continuous review. AI systems are “brittle and subject to high rates of failure,” with risks that grow and change over time, making it difficult to mitigate risk at a single point in time. “Lawmakers and regulators alike are sending the message that risk management is a continual process,” Burt stated.
Approaches in US, Europe and China Differ
The US, Europe, and China differ in their approaches to AI regulation, according to a recent account in The Verdict, based on analysis by GlobalData, the data analytics and consulting company based in London.
“Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of over-regulation,” the account states. Meanwhile, “China continues to follow a government-first approach” and has been widely criticized for using AI technology to monitor citizens. The account cited Tencent’s rollout last year of an AI-based credit scoring system to determine the “trust value” of people, and the installation of surveillance cameras outside people’s homes to monitor quarantines imposed after the outbreak of COVID-19.
“Whether the US’ tech industry-led efforts, China’s government-first approach, or Europe’s privacy and regulation-driven approach is the best way forward remains to be seen,” the account stated.
In the US, many companies are aware of the risk that new AI regulation could stifle innovation and their ability to grow in the digital economy, suggested a recent report from PwC, the multinational professional services firm.
“It’s in a company’s interests to tackle risks related to data, governance, outputs, reporting, machine learning and AI models, ahead of regulation,” the PwC analysts state. They recommended that business leaders assemble people from across the organization to oversee accountability and governance of technology, with oversight from a diverse team that includes members with business, IT, and specialized AI skills.
Critics of European AI Act Cite Too Much Gray Area
While critics argue that the European Commission’s proposed AI Act leaves too much gray area, the Commission hopes it will provide guidance for businesses wanting to pursue AI, as well as a degree of legal certainty.
“Trust… we think is vitally important to allow the development we want of artificial intelligence,” stated Thierry Breton, European Commissioner for the Internal Market, in an account in TechCrunch. AI applications “need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”
“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines—we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also to the continent where you will have the largest amount of industrial data created on the planet for the next ten years.”
“So come here—because artificial intelligence is about data—we’ll give you the guidelines. We will also have the tools to do it and the infrastructure,” Breton suggested.
Reactions to the Commission’s proposal also included plenty of criticism: the exemptions are seen as overly broad, particularly those allowing law enforcement to use remote biometric surveillance such as facial recognition technology, and the measures meant to address the risk of AI systems discriminating are seen as not going nearly far enough.
“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice,” stated Griff Ferris, legal and policy officer for Fair Trials, the global criminal justice watchdog based in London. “The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.”
To accomplish this, he suggested, “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.”
Pandemic Spurred Identity Fraud; AI and Biometrics Are Responding
By AI Trends Staff
Cyberattacks and identity fraud losses increased dramatically in 2020 as the pandemic made remote work the norm, setting the stage for AI and biometrics to combine in efforts to attain a higher level of protection.
One study found banks worldwide saw a 238% jump in cyberattacks between February and April 2020; a study from Javelin Strategy & Research found that identity fraud losses grew to $56 billion last year as fraudsters used stolen personal information to create synthetic identities, according to a recent account from Pymnts.com. In addition, automated bot attacks shot upward by 100 million between July and December, targeting companies in a range of industries.
Companies striving for better protection risk making life more difficult for their customers; another study found that 40% of financial institutions frequently mistake the online actions of legitimate customers for those of fraudsters.
“As we look toward the post-pandemic—or, more accurately, inter-pandemic—era, we see just how good fraudsters were at using synthetic identities to defeat manual and semi-manual onboarding processes,” stated Caleb Callahan, Vice President of Fraud at Stash Financial of New York, offering a personal finance app, in an interview with Pymnts.
SIM Swap Can Create a Synthetic Identity
One technique for achieving a synthetic identity is a SIM swap, in which someone contacts your wireless carrier and is able to convince the call center employee that they are you, using personal data that may have been exposed in hacks, data breaches or information publicly shared on social networks, according to an account on CNET.
Once your phone number is assigned to a new card, all of your incoming calls and text messages will be routed to whatever phone the new SIM card is in.
Identity theft losses were more than $712.4 billion in 2020, up 42% from 2019, Callahan stated. “To be frank, our defenses are fragmented and too dependent on technologies such as SMS [texting] that were never designed to provide secure services. Banks and all businesses should be looking at how to unify data signals and layer checkpoints in order to keep up with today’s sophisticated fraudsters,” he stated.
Asked what tools and technologies would help differentiate between fraudsters and legitimate customers, Callahan stated, “in an ideal world, we would have a digital identity infrastructure that banks and others could depend on, but I think that we are some ways away from that right now.”
Going forward, “The needs of the travel and hospitality, health, education and other sectors might accelerate the evolution of infrastructure for safety and security,” Callahan foresees.
AI and Biometrics Seen as Offering Security Advantages
AI can be employed to protect against digital identity fraud, for example by offering greater accuracy and speed when verifying a person’s identity, or by incorporating biometric data so that a cybercriminal cannot gain access by providing credentials alone, according to an account in Forbes.
“AI has the power to save the world from digital identity fraud,” stated Deepak Gupta, author of the Forbes article and cofounder and CTO of LoginRadius, a cloud-based consumer identity platform. “In the fight against ID theft, it is already a strong weapon. AI systems are entirely likely to end the reign of the individual hacker.”
While he sees AI authentication as being in an early phase, Gupta recommended that companies examine the following: intelligent adaptive authentication, such as location and device fingerprinting; biometric authentication, based on the face or fingerprints; and smart data filters. “A well-developed AI protection system will have the ability to respond in nanoseconds to close a leak,” he stated.
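As a loose illustration of what “unifying data signals and layering checkpoints” might look like, here is a minimal risk-scoring sketch. The signal names, weights, and threshold are all hypothetical and not part of any real product or of Gupta’s recommendations; real adaptive-authentication systems use far richer models:

```python
# Hypothetical adaptive-authentication sketch: combine independent fraud
# signals into one risk score and require a step-up (e.g. a biometric
# check) when the score crosses a threshold. Weights are illustrative.
WEIGHTS = {
    "new_device": 0.4,            # device fingerprint not seen before
    "unusual_location": 0.3,      # login from an atypical geography
    "sim_recently_changed": 0.3,  # possible SIM-swap indicator
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every signal that is present and recognized."""
    return sum(WEIGHTS[name] for name, present in signals.items()
               if present and name in WEIGHTS)

def requires_step_up(signals: dict, threshold: float = 0.5) -> bool:
    """Ask for a second factor when the combined risk is high enough."""
    return risk_score(signals) >= threshold

# A new device plus a recent SIM change scores 0.7, above the threshold.
print(requires_step_up({"new_device": True,
                        "unusual_location": False,
                        "sim_recently_changed": True}))
```

The design point matches the article’s argument: no single signal is decisive, but layered together they let legitimate customers pass with low friction while suspicious sessions get an extra checkpoint.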
Pandemic Altered Consumer Financial Behavior, Spurred Identity Fraud
The global pandemic has had a dramatic impact on consumer financial behavior. Consumers spent more time at home in 2020, transacted less than in previous years, and relied heavily on streaming services, digital commerce, and payments. They also corresponded more via email and text, for both work and personal life.
“The pandemic inspired a major shift in how criminals approach fraud,” stated John Buzzard, Lead Analyst, Fraud & Security, with Javelin Strategy & Research in a press release. “Identity fraud has evolved and now reflects the lengths criminals will take to directly target consumers in order to steal their personally identifiable information.”
Companies made quick adjustments to their business models, such as by increasing remote interactions with borrowers for loan originations and closings, and criminals pounced on new vulnerabilities they discovered. Nearly one-third of identity fraud victims say their financial services providers did not satisfactorily resolve their problems, and 38% of victims closed their accounts because of lack of resolution, the Javelin researchers found.
“It is clear that financial institutions must continue to proactively and transparently manage fraud as a means to deepen their customer relationships,” stated Eric Kraus, Vice President and General Manager of Fraud, Risk and Compliance, FIS. The company offers technology solutions for merchants, banks, and capital markets firms globally. “Through our continuing business relationships with financial institutions, we know firsthand that consumers are looking to their banks to resolve instances of fraud, regardless of how the fraud occurred,” he added.
This push from consumers who are becoming increasingly savvy online will lay a foundation for safer digital transactions.
“Static forms of consumer authentication must be replaced with a modern, standards-based approach that utilizes biometrics,” stated David Henstock, Vice President of Identity Products at Visa, the world’s leader in digital payments. “Businesses benefit from reduced customer friction, lower abandonment rates and fewer chargebacks, while consumers benefit from better fraud prevention and faster payment during checkout.”
The 2021 Identity Fraud Study from Javelin is now in its 18th year.
The Rocky Road Toward Explainable AI (XAI) For AI Autonomous Cars
By Lance Eliot, the AI Trends Insider
Our lives are filled with explanations. You go to see your primary physician due to a sore shoulder. The doctor tells you to rest your arm and avoid any heavy lifting. In addition, a prescription is given. You immediately wonder why you would need to take medication and also are undoubtedly interested in knowing what the medical diagnosis and overall prognosis are.
So, you ask for an explanation.
In a sense, you have just opened a bit of Pandora’s box, at least in regard to the nature of the explanation that you might get. For example, the medical doctor could rattle off a lengthy, jargon-filled account of shoulder anatomy and dive deeply into the chemical properties of the prescribed medication. That’s probably not the explanation you were seeking.
It used to be that physicians did not expect patients to ask for explanations. Whatever was said by the doctor was considered sacrosanct. The very nerve of asking for an explanation was tantamount to questioning the veracity of a revered medical opinion. Some doctors would gruffly tell you to simply do as they have instructed (no questions permitted) or might utter something rather insipid like your shoulder needs help and this is the best course of action. Period, end of story.
Nowadays, medical doctors are aware of the need for viable explanations. There is specialized “bedside” training that takes place in medical schools. Hospitals have their own in-house courses. Upcoming medical doctors are graded on how they interact with patients. And so on.
Though that certainly has opened the door toward improved interaction with patients, it does not necessarily completely solve the explanations issue.
Knowing how best to provide an explanation is both art and science. You need to consider that there is the explainer who provides the explanation, and there is the person who receives it.
Explanations come in all shapes and sizes.
A person seeking an explanation might have in mind that they want a fully elaborated explanation, containing all available bells and whistles. The person giving the explanation might in their mind be thinking that the appropriate explanation is short and sweet. There you have it, an explanation mismatch brewing right before our eyes.
The explainer might do a crisp explanation and be happily satisfied with their explanation. Meanwhile, the person receiving the explanation is entirely dissatisfied. At this point, the person that received the explanation could potentially grit their teeth and just figure that this is all they are going to get. They might silently walk away and be darned upset, opting to not try and fight city hall, as it were, and merely accede to the minimal explanation proffered.
Perhaps the person receiving the explanation decides they would like to get a more elaborated version. They might stand their ground and ask for a more in-depth explanation. Now we need to consider what the explainer is going to do. The explainer might believe that the explanation was more than sufficient, and see no need to provide any additional articulation.
The explainer might be confused about why the initial explanation was not acceptable. Maybe the person receiving the explanation wasn’t listening or had failed to grasp the meaning of the words spoken. At this juncture, the explainer might therefore decide to repeat the same explanation that was just given and do so to ensure that the person receiving the original explanation really understood what was said.
You can likely anticipate that this is about to spiral out of control.
The person receiving this “elaborate” explanation is bound to notice that it is the same explanation repeated, nearly verbatim. That’s insulting! The person receiving the explanation now believes they are being belittled by the explainer. Either this person will hold their tongue and give up trying to get an explanation, or start hurling insults about how absurd the explanation was.
It can devolve into a messy affair, that’s for sure.
There is a delicate dance between the explainer and the explanation being provided, and likewise between the receiver and the nature of the explanation they desire.
We usually take these differences for granted. You rarely see an explainer ask what kind of explanation someone wants to have. Instead, the explainer launches into whatever semblance of an explanation that they assume the person would find useful. Rushing into providing an explanation can have its benefits, though it can also start an unsightly verbal avalanche that is going to take down both the explainer and the person receiving the explanation.
Some suggest that the explainer ought to start by inquiring about the type of explanation the other person is seeking. This might include asking what kind of background the other person has; in the case of a medical diagnosis, whether the other person is familiar with medical terminology and the field of medicine. There might also be a gentle inquiry as to whether the explanation should be done in one fell swoop or divided into bite-sized pieces. Etc.
The difficulty with that kind of pre-game ritual is that sometimes the receiver doesn’t want to go through that gauntlet. They just want an explanation (or so they say). Trying to do a preamble is likely to irritate the receiver, who will feel as though the explanation is being purposely delayed. This could even smack of hiding from the facts or some other nefarious basis for delaying the explanation.
All told, we expect to get an explanation when we ask for one, and not have to go through a vast checklist beforehand.
Another twist to all of this entails the interactive dialogue that can occur during explanations.
Explanations are not necessarily delivered in a one-breath fashion from start to end. Instead, it is more likely that during the explanation, the receiver will interrupt and ask for clarification or pose questions that arise. This is certainly sensible. If the explanation is going awry, why let it go on and on, when instead the receiver can tailor or reshape the direction and style of the explanation.
For example, suppose that you are a medical professional and have gone to see a medical doctor about your sore shoulder. Imagine that the doctor doing the diagnosis does not realize that the patient is a fellow medical specialist. In that case, the explanation offered is likely to be aimed at a presumed non-medical knowledge base and proceed in potentially simplistic ways (with respect to medical advice). The person receiving the explanation would undoubtedly interrupt and clarify that they know about medicine and the explanation should be readjusted accordingly.
You might be tempted to believe that explanations can be rated as being either good or bad. Though you could take such a perspective, the general notion is that explanations and their beauty are in the eye of the beholder. One person’s favored explanation might be a disastrous or terrible one for someone else. That being said, there is still a modicum of a basis for assessing explanations and comparing them to each other.
We can add a twist on that twist. Suppose you receive an explanation and believe it to be a good one. Later on, you learn something else regarding the matter and realize that the explanation was perhaps incomplete. Worse still, it could be that the explanation was intentionally warped to give you a false impression of a given situation. In short, an explanation can be used to purposely create falsehoods.
That’s why getting an explanation is replete with problems. We often assume that if we ask for an explanation, and if it seems plausible, this attests that the matter is well-settled and above board. The thing is, an explanation can be distorted, either by design or by happenstance, and lead us into a false sense of veracity or truthfulness at hand.
Another angle to explanations deals with asking for an explanation versus being given an explanation when it has not been requested. An explainer might give you an explanation outright because they assume you want one, whereas you are satisfied to just continue on. At that point, if you disrupt the explanation, the explainer might be taken aback.
Why all this talk about explanations? Because of AI.
The increasing use of Artificial Intelligence (AI) in everyday computer systems is taking us down a path whereby the computer makes choices and we the humans have to live with those decisions. If you apply for a home loan, and an AI-based algorithm turns you down, the odds are that all you’ll know is that you did not get the loan. You won’t have any idea about why you were denied the loan.
Presumably, had you consulted with a human that was doing the loan granting, you might have been able to ask them to explain why you got turned down.
Note that this is not always the case, and it could be that the human would not be willing or able to explain the matter. The loan granting person might shrug their shoulders and say they have no idea why you were turned down, or they might tell you that company policy precludes them from giving you an explanation.
Ergo, I am not suggesting that just because a human is in the loop you will necessarily get an explanation. Plus, as repeatedly emphasized earlier, the explanation might be rather feeble and altogether useless.
In any case, there is a big hullabaloo these days that AI systems ought to be programmed to provide explanations for whatever they are undertaking.
This is known as Explainable AI (XAI).
XAI is growing quickly as an area of keen interest. People using AI systems are likely to expect, and even demand, that they get an explanation. Since the number of AI systems is rapidly growing, there is going to be a huge appetite for machine-produced explanations about what the AI has done or is doing.
The rub is that oftentimes the AI is arcane and not readily amenable to generating an explanation.
Take as an example the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching algorithms that examine data and try to ferret out mathematical patterns. Sometimes the inner computational aspects are complex and do not lend themselves to being explained in any everyday human-comprehensible and logic-based way.
This means that the AI is not intrinsically set up for providing explanations. In that case, there are usually attempts to add on an XAI component. This XAI either probes into the AI and tries to ferret out what took place, or it sits aside from the AI and has been preprogrammed to provide explanations based on what is assumed has occurred within the mathematically enigmatic mechanisms.
Some assert that you ought to build the XAI into the core of whatever AI is being devised. Thus, rather than bolting onto the AI some afterthought about producing explanations, the design of the AI from the ground-up should encompass a proclivity to produce explanations.
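To make the bolt-on approach concrete, here is a minimal sketch in Python of a perturbation-style probe that sits outside an opaque model and asks which inputs the decision is sensitive to. The model, its weights, and the feature names are all illustrative assumptions, not any real lending system.

```python
# A stand-in for an opaque, trained model: outsiders see only predict().
# The weights and feature names are hypothetical, not any real lending model.
WEIGHTS = {"income": 0.6, "debt": -0.9, "years_employed": 0.3}

def opaque_predict(applicant):
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score > 0  # True means approve

def perturbation_probe(applicant, feature, delta=1.0):
    """Bolt-on XAI: nudge one feature and check whether the decision flips."""
    nudged = dict(applicant)
    nudged[feature] += delta
    return opaque_predict(applicant) != opaque_predict(nudged)

applicant = {"income": 2.0, "debt": 2.0, "years_employed": 1.0}
for feature in WEIGHTS:
    if perturbation_probe(applicant, feature):
        print(f"The decision is sensitive to {feature}")
```

Building the XAI into the core, by contrast, would mean the model records its reasoning as it decides, rather than having a probe reverse-engineer it afterward.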
Amidst all of that technological pondering, there are the other aspects of what constitutes an explanation. If you revisit my earlier comments about how explanations tend to work, and the variability depending upon the explainer and the person receiving the explanation, you can readily see how difficult it might be to programmatically produce explanations.
The cheapest way to go involves merely having pre-canned explanations. A loan granting system might have been set up with five explanations for why a loan was denied. Upon getting turned down for the loan, you are shown one of those five explanations. There is no interaction. There is no assurance that the explanation fits your particular situation.
Those are the pittance explanations.
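Such pittance explanations amount to little more than a lookup table. A minimal sketch, with hypothetical reason codes and wording:

```python
# Hypothetical reason codes and fixed texts for a loan-denial system.
CANNED_EXPLANATIONS = {
    "R1": "Your income was below the required threshold.",
    "R2": "Your debt-to-income ratio was too high.",
    "R3": "Your credit history is too short.",
    "R4": "Recent delinquencies were found on your credit report.",
    "R5": "The requested amount exceeds our lending limit.",
}

def explain_denial(reason_code):
    # No interaction, no tailoring: the same fixed text for everyone.
    return CANNED_EXPLANATIONS.get(
        reason_code, "Your application did not meet our criteria.")

print(explain_denial("R2"))
```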
A more robust and respectable XAI capability would consist of generating explanations on the fly, in real-time, doing so based on the particular situation at hand. In addition, the XAI would try to ascertain what flavor or style of explanation would be suitable for the person receiving it.
And this explainer feature ought to allow for fluent interaction with the person getting the explanation. The receiver should be able to interrupt the explanation, getting the explainer or XAI to shift to other aspects or reshape the explanation based on what the person indicates.
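One simple way to sketch that interactive reshaping is a tiered explainer that starts terse and escalates detail each time the receiver asks again. The tiers below are illustrative, assuming a route-detour scenario.

```python
# Illustrative tiers for one driving event, from terse to detailed.
TIERS = [
    "We took a detour.",
    "We took a detour because the main highway is under construction.",
    ("We took a detour because the main highway is under construction; "
     "the alternate route avoids an estimated 20-minute delay."),
]

class TieredExplainer:
    """Start terse; each follow-up request escalates the level of detail."""
    def __init__(self, tiers):
        self.tiers = tiers
        self.level = 0

    def explain(self):
        text = self.tiers[self.level]
        # Escalate for next time, but never go past the deepest tier.
        self.level = min(self.level + 1, len(self.tiers) - 1)
        return text

xai = TieredExplainer(TIERS)
print(xai.explain())  # terse first answer
print(xai.explain())  # more detail when asked again
```

Note that, unlike the explainer who merely repeats the same explanation verbatim, this sketch never replays a tier until it has run out of deeper ones.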
Of course, those are the same types of considerations that human explainers should also take into account. This brings up the fact that doing excellent XAI is harder than it might seem. In a manner of speaking, you are likely to need to use AI within the XAI in order to be able to simulate or mimic what a human explainer is supposed to be able to do (though, as we know, not all humans are adept at giving explanations).
Shifting gears, you might be wondering what areas or applications could especially make use of XAI.
One such field of endeavor entails Autonomous Vehicles (AVs). We are gradually going to have autonomous forms of mobility, striving toward a mobility-for-all mantra. There will be self-driving cars, self-driving trucks, self-driving motorcycles, self-driving submersibles, self-driving drones, self-driving planes, and the rest.
You might at first be puzzled as to why AVs would need XAI. We can use self-driving cars to showcase how XAI is going to be a vital element for AVs.
The question is this: In what way will Explainable AI (XAI) be important to the advent of AVs and as showcased via the emergence of self-driving cars?
Let’s clarify what I mean by self-driving cars, and then we can jump further into the XAI AV discussion.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/
For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/
To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/
The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/
Self-Driving Cars And XAI
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
Now that we’ve set the stage appropriately, it’s time to dive into the myriad of aspects that come into play on this topic of XAI.
First, be aware that many of the existing self-driving car tryouts have very little if any semblance of XAI in them. The initial belief was that people would get into a self-driving car, provide their destination, and be silently whisked to that locale. There would be no need for interaction with the AI driving system. There would be no need for an explanation or XAI capability.
We can revisit that assumption by considering what happens when you use ridesharing and have a human driver at the wheel.
There are certainly instances wherein you get into an Uber or Lyft vehicle and there is stony silence for the entirety of the trip. You’ve likely already provided the destination via the ride-request app. The person driving is intently doing the driving and ostensibly going to that destination. No need to chat. You can play video games on your smartphone and act as though there isn’t another human in the vehicle.
That’s perfectly fine.
Imagine though that during the driving journey, all of a sudden, the driver decides to take a route that you find unexpected or unusual. You might ask the driver why there is a change in the otherwise normal path to the destination. That question would hopefully prompt an explanation from the human driver.
It could be that the human driver gives you no explanation or provides a flimsy one. Humans do that. Or maybe the driver tells you that there is construction taking place on the main highway, and an alternative course is being taken to avoid a lengthy delay. In theory, a properly done XAI would provide a similarly on-target explanation, though doing so can be challenging.
You might be satisfied with that explanation. On the other hand, perhaps you live in the area and are curious about the nature of the construction taking place. Thus, you ask the driver for further details about the construction. In a sense, you are interacting with an explainer and seeking additional nuances or facets about the explanation that was being provided.
Okay, put on your self-driving car thinking-cap and consider what a passenger might want from an XAI. A self-driving car is taking you to your home. The AI driving system unexpectedly diverts from the normal path. You are likely to want to ask the AI why the journey is deviating from your expected route. Many of the existing tryouts of self-driving cars would not have any direct means of having the AI explain this matter; instead, you would need to connect with a remote agent of the fleet operator that oversees the self-driving cars.
In essence, rather than building the XAI, the matter is shunted over to a remote human to explain what is going on. This is something that won’t be especially scalable. In other words, once there are hundreds of thousands of self-driving cars on our roadways, the idea of having the riders always needing to contact a remote agent for the simplest of questions is going to be a huge labor cost and a logistics nightmare.
There ought to be a frontline XAI that exists with the AI driving system.
Assume that a Natural Language Processing (NLP) interface is coupled with the AI driving system, akin to the likes of Alexa or Siri. The passenger interacts with the NLP and can discuss common actions such as asking to change the destination midstream, or asking to swing through a fast-food eatery drive-thru, and so on.
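A rough sketch of how such an interface might route passenger utterances, using naive keyword matching in place of a real NLP stack; the intents and trigger phrases are illustrative assumptions.

```python
# Naive keyword matching stands in for a full NLP pipeline; the intents
# and trigger phrases are illustrative assumptions.
INTENT_KEYWORDS = {
    "change_destination": ["take me to", "new destination", "go to"],
    "drive_thru": ["drive-thru", "drive thru", "fast food"],
    "explain": ["why", "what happened", "explain"],
}

def route_intent(utterance):
    text = utterance.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

print(route_intent("Why did we just change lanes?"))   # routed to "explain"
print(route_intent("Take me to the office instead."))  # "change_destination"
```

A production system would use a trained language model rather than substring matching, but the routing idea is the same: requests for explanations become one intent among the passenger's possible commands.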
In addition, the passenger can ask for explanations.
Suppose the AI driving system has to suddenly hit the brakes. The rider in the self-driving car might have been watching an especially fascinating cat video and not be aware of the roadway circumstances. After getting bounced around due to the harsh braking action, the passenger might anxiously ask why the AI driving system made such a sudden and abrasive driving action.
You would want the AI to immediately provide such an explanation. If the only possible way to get an explanation involved seeking a remote agent, envision what that might be like. There you are, inside the self-driving car, and it has just taken radical action, but you have no idea why it did so. You have to press a button or somehow activate a call to a remote agent. This might take a few moments to engage.
Once the remote agent is available (assuming that one is readily available), they might begin the dialogue with a usual canned speech, such as welcome to the greatest of all self-driving cars. You, meanwhile, have been sitting inside this self-driving car, which is still merrily driving along, and yet you have no clue why it out-of-the-blue hit the brakes.
The point here is that by the time you engage in a discussion with the human remote operator, a lot of time and driving aspects could have occurred. During that delay, you are puzzled, concerned, and worried about what the AI driving system might crazily do next.
If there was an XAI, perhaps you would have been able to ask the XAI what just happened. The XAI might instantly explain that there was a dog on the sidewalk that was running toward the self-driving car and appeared to be getting within striking distance. The AI driving system opted to do a fast braking action. The dog got the idea and safely scampered away.
A timely explanation, and one that then gives the passenger solace and relief, allowing them to settle back into their seat and watch more of those videos about frisky kittens and adorable puppies.
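One plausible way to sketch such a frontline XAI is to have the AI driving system log notable events and render the latest one through an explanation template. The event types and wording below are assumptions for illustration, not any deployed system.

```python
import time

# The AI driving system appends notable events here as it drives.
event_log = []

def record(event_type, detail):
    event_log.append({"t": time.time(), "type": event_type, "detail": detail})

# Templates that turn logged events into passenger-facing explanations.
TEMPLATES = {
    "hard_brake": "I braked suddenly because {detail}.",
    "reroute": "I changed the route because {detail}.",
}

def explain_latest():
    """Frontline XAI: explain the most recent notable driving event."""
    if not event_log:
        return "Nothing notable has happened on this trip."
    event = event_log[-1]
    template = TEMPLATES.get(event["type"], "Something happened: {detail}.")
    return template.format(detail=event["detail"])

record("hard_brake", "a dog ran toward the roadway")
print(explain_latest())
```

Because the log is local to the vehicle, the answer arrives immediately, with no remote agent in the loop.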
For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/
On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/
I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/
Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/
There are lots and lots of situations that can arise when riding in a car and for which you might desire an explanation. The car is suddenly brought to a halt. The car takes a curve rather strongly. The car veers into an adjacent lane without a comfortable margin of error. The car takes a road that you weren’t expecting to be on. Seemingly endless possibilities exist.
In that case, if indeed XAI is notably handy for self-driving cars, you might be wondering why it isn’t especially in place already.
Well, admittedly, for those AI developers under intense pressure to devise AI that can drive a car from point A to point B, and do so safely, the aspect of providing machine-generated explanations is pretty low on the priority list. They would fervently argue that it is a so-called edge or corner case, one that can be gotten to once sufficiently capable self-driving cars have been attained.
Humans that are riding in AVs of all kinds are going to want to have explanations. A cost-effective and immediately available means of providing explanations entails the embodiment of XAI into the AI systems that are doing the autonomous piloting.
One supposes that if you are inside a self-driving car and it is urgently doing some acrobatic driving maneuver, you might be hesitant to ask what is going on, in the same manner that you might worry about distracting a human driver doing something wild at the wheel.
Presumably, a well-devised XAI won’t be taxing on the AI driving system, and thus you are free to engage in a lengthy dialogue with the XAI. In fact, the likeliest question that self-driving cars are going to get is how the AI driving system functions. The XAI ought to be readied to cope with that kind of question.
The one thing we probably should not expect XAI to handle will be those questions that are afield of the driving chore. For example, asking the XAI to explain the meaning of life is something that could be argued as out-of-bounds and above the pay grade of the AI.
At least until the day that AI does become sentient, then you can certainly ask away.
Copyright 2021 Dr. Lance Eliot http://ai-selfdriving-cars.libsyn.com/website
Emerging Technologies Achievable Through The Cloud: 4 Practical Examples
Cloud computing is the foundation beneath some of the fastest-growing industries in the world, so it’s not difficult to get lost in all the buzzwords thrown around cloud computing and lose sight of the actual technological advances and benefits that are achievable with smart and efficient use of the cloud.
So what’s behind the hype? Some extremely powerful technologies and workflows. And that’s exactly what we’re going to take a look at in this article — the top 4 practical examples of technologies achievable through the cloud in 2020.
Contrary to popular belief, information alone won’t give companies a competitive advantage; executives also need to be able to base their decisions on data before the opportunities pass. Yet most companies generate terabytes of data every week and are unable to capitalize on any of it. Big data analytics is a solution to this problem.
Thanks to the advanced evolution of the cloud, companies are able to gather and analyze data at a nearly instantaneous rate. Leveraging big data analytics empowers organizations to run more efficiently in terms of cost and decision making. Companies can make data-driven decisions brought to them by data analysis tools that are provided through the cloud.
BigQuery from Google Cloud has many powerful features that let users view their data in real-time, providing continually up-to-date information to help guide business decisions. BigQuery is a serverless NoOps (no operations) platform that separates compute and storage, meaning better autoscaling is offered since each can be independently scaled as required. BigQuery’s machine learning and BI Engine analysis of various data models is quite powerful, and it integrates seamlessly with the Google Cloud AI Platform and other tools like Data Studio.
Cloud service providers like Google Cloud Platform (GCP) use shared computing to process large datasets extremely quickly. Also known as cluster computing, this approach interconnects hundreds of computers for quick data analysis and for completing complex computing tasks. Businesses like yours can make use of similar services from cloud providers to improve insights and decision-making.
Automating mundane and repetitive tasks is, and should be, a top priority for businesses in this age. By automating even the simplest tasks, most businesses can free up to 30% of employees’ time, allowing them to focus on more important matters.
Cloud service providers have made it extremely easy for businesses of all sizes to dabble with business process automation, from automating how they receive and sort documents through document management, to automating entire workflows, including delivery pipelines and testing updates in a controlled cloud environment. Tools such as Google’s Document Understanding AI can help ensure your data is accurate and compliant, which is especially helpful in highly regulated industries where accuracy and precision are crucial to operations. It is also quick and easy to obtain more compute for deep learning and complex ML training by requesting GPUs or by using a managed service like Kubeflow.
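As a toy illustration of document-sorting automation (not how Document Understanding AI actually works), here is a keyword-based router in Python; the categories and rules are made up for illustration.

```python
# Toy routing rules standing in for a document-understanding service;
# the categories and keywords are made up for illustration.
ROUTING_RULES = {
    "invoice": ["invoice", "amount due", "payment terms"],
    "contract": ["agreement", "party", "hereby"],
    "resume": ["experience", "education", "skills"],
}

def classify_document(text):
    lowered = text.lower()
    scores = {
        label: sum(word in lowered for word in words)
        for label, words in ROUTING_RULES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unsorted"

inbox = [
    "Invoice #1042: amount due $300, payment terms net 30",
    "This Agreement is made between the undersigned party...",
]
for doc in inbox:
    print(classify_document(doc))
```

A real document-understanding service replaces the hand-written rules with trained models, but the workflow (ingest, classify, route) is the same shape.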
Another emerging technology that is now accessible to small and medium enterprises is machine learning. Put simply, machine learning refers to training computer algorithms to interpret and interact with data without human intervention. With increasing accuracy, ML (a subset of AI) is becoming incredibly valuable to businesses, as it has virtually unlimited use cases.
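For a feel of what training an algorithm on data means at its simplest, here is a self-contained sketch that fits a line to a handful of points by gradient descent; the data and learning rate are arbitrary.

```python
# Fit y = w * x to a few points by gradient descent on mean squared error.
# The data and learning rate are arbitrary; the point is that w is learned
# from the data rather than supplied by a human.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0
learning_rate = 0.05
for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(round(w, 2))  # close to 2.0
```

Cloud ML platforms do the same thing at scale: many parameters instead of one, and managed hardware instead of a for-loop on a laptop.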
You can read more about how cloud solutions using AI and ML can help save time, cut costs, and reduce human error.
Although lesser-known among legacy businesses, the Internet of Things is one of the fastest-growing industries in the world and was valued at $190 billion in 2018. Alexa and Google Home are two of the most popular examples of IoT devices, with which you’re most likely familiar. Apart from those, smart TVs, smart refrigerators, smart LEDs, security systems, thermostats, and even cars (think Tesla) that operate over WiFi are all part of the Internet of Things.
Think of IoT devices as part of a much larger network, all of which has a backbone in the cloud. Aside from pure convenience, IoT is making significant breakthroughs in other spaces such as health tech. Fitbit, for example, has partnered with Google to transform how its products bridge fitness and the cloud. The devices use Google’s Cloud Healthcare API, a service that “helps facilitate the exchange of data among healthcare applications and services that run on Google’s Cloud.” Even more interesting is that the API also integrates analytics tools like BigQuery, AI tools like AI Platform, and data processing tools like Dataflow.
Similar tools and APIs are available for businesses in different industries so they, too, can connect their devices to an online network and introduce security patches, fix bugs, add features, and more.
Though they have become significantly more popular in the last few years, augmented and virtual reality are not new technologies. Leftronic reports that the number of augmented reality users will reach 3.5 billion by 2023. Furthermore, they estimate that the AR and VR device market will hit $198 billion by 2025. In fact, large institutions like Boeing and NASA have been developing their own AR and VR technologies for training purposes for quite some time now. However, thanks to cloud proliferation, technologies like virtual reality are finally becoming accessible and, more importantly, affordable for the average business to experiment with.
So how does it work?
When applications superimpose a computer-generated image onto the real world, they create an augmented reality experience. Augmented reality places computer-generated objects in the human world, whereas virtual reality places you into a computer-generated world. Businesses can use this technology in a number of ways, including giving consumers a virtual reality tour of their product or using it for training in a safe environment.
It’s also quite easy to get started with. Google’s Cloud Anchor feature allows developers to create experiences within their app for users to add virtual objects into an augmented reality environment. Thanks to Google’s ARCore Cloud Anchor service, these experiences can be hosted and shared between users. Virtual reality transports you to distant places and immerses you in foreign environments. Devices such as the Oculus Rift or Quest and the HTC Vive provide outstanding experiences that can run independently of a computer. When used to its full capacity, virtual reality can be transformative for gaming, education, and immersive experiences.
These emerging technologies unlock a completely new frontier in which businesses can compete without exorbitant investment or deep technical knowledge. With the right tools already at their disposal, most businesses only need a helping hand to get started. If your organization is considering using the cloud to leverage an emerging technology but is unsure about the intricacies, reach out to D3V and set up a free strategic consultation with our certified cloud experts. Our team can help determine the best options for your company based on your business needs and aspirations.