

IBM releases AI model toolkit to help developers measure uncertainty




At its Digital Developer Conference today, IBM open-sourced Uncertainty Quantification 360 (UQ360), a new toolkit focused on enabling AI to understand and communicate its uncertainty. Following in the footsteps of IBM’s AI Fairness 360 and AI Explainability 360, the goal of UQ360 is to foster community practices across researchers, data scientists, developers, and others that might lead to better understanding and communication around the limitations of AI.

It’s commonly understood that deep learning models are overconfident — even when they make mistakes. Epistemic uncertainty describes what a model doesn’t know because the training data wasn’t appropriate. On the other hand, aleatoric uncertainty is the uncertainty arising from the natural randomness of observations. Given enough training samples, epistemic uncertainty will decrease, but aleatoric uncertainty can’t be reduced even when more data is provided.
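The two kinds of uncertainty can be made concrete with a small sketch (plain Python on a hypothetical toy dataset, not the UQ360 API): refitting a model on bootstrap resamples exposes epistemic uncertainty as the spread of its predictions, while the injected observation noise is the aleatoric part that more data never removes.

```python
import random
import statistics

random.seed(0)

# Toy data: y = 2x + Gaussian noise. The noise is the aleatoric part.
xs = [i / 10 for i in range(50)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]

def fit_slope(xs, ys):
    # Least-squares slope through the origin: sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Epistemic uncertainty sketch: refit on bootstrap resamples and look
# at the spread of the resulting predictions at a query point.
slopes = []
for _ in range(200):
    idx = [random.randrange(len(xs)) for _ in xs]
    slopes.append(fit_slope([xs[i] for i in idx], [ys[i] for i in idx]))

pred_at_3 = [s * 3.0 for s in slopes]
print("mean prediction at x=3:", round(statistics.mean(pred_at_3), 2))
print("epistemic spread (std):", round(statistics.stdev(pred_at_3), 3))
```

Adding more training points shrinks the bootstrap spread (epistemic), but the noise term baked into the observations (aleatoric) stays no matter how much data is collected.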

UQ360 offers a set of algorithms and a taxonomy to quantify uncertainty, as well as capabilities to measure and improve uncertainty quantification (UQ). For every UQ algorithm provided in the UQ360 Python package, a user can choose an appropriate style of communication by following IBM’s guidance on communicating UQ estimates, from descriptions to visualizations. UQ360 also includes an interactive experience that introduces producing UQ and ways to use UQ in a house price prediction application, along with a number of in-depth tutorials demonstrating how to use UQ across the AI lifecycle.

The importance of uncertainty

Uncertainty is a major barrier standing in the way of self-supervised learning’s success, Facebook chief AI scientist Yann LeCun said at the International Conference on Learning Representations (ICLR) last year. Distributions are tables of values that link every possible value of a variable to the probability that the value occurs. They represent uncertainty perfectly well when the variables are discrete, which is why architectures like Google’s BERT are so successful. But researchers haven’t yet discovered a way to usefully represent distributions where the variables are continuous, i.e., where they can be obtained only by measuring.

As IBM research staff members Prasanna Sattigeri and Q. Vera Liao note in a blog post, the choice of UQ method depends on a number of factors, including the underlying model, the type of machine learning task, characteristics of the data, and the user’s goal. Sometimes a chosen UQ method might not produce high-quality uncertainty estimates and could mislead users, so it’s crucial for developers to evaluate the quality of UQ and improve the quantification quality if necessary before deploying an AI system.

In a recent study conducted by Himabindu Lakkaraju, an assistant professor at Harvard University, showing uncertainty metrics to both people with a background in machine learning and non-experts had an equalizing effect on their reliance on AI predictions. While fostering trust in AI may never be as simple as providing metrics, awareness of the pitfalls could go some way toward protecting people from machine learning’s limitations.

“Common explainability techniques shed light on how AI works, but UQ exposes limits and potential failure points,” Sattigeri and Liao wrote. “Users of a house price prediction model would like to know the margin of error of the model predictions to estimate their gains or losses. Similarly, a product manager may notice that an AI model predicts a new feature A will perform better than a new feature B on average, but to see its worst-case effects on KPIs, the manager would also need to know the margin of error in the predictions.”





Bank of America uses AI to predict business volatility, acquisitions



Bank of America is using artificial intelligence (AI) to reliably predict which companies are likely to be acquired close to a year in advance — just one way the $2.2 trillion bank is employing AI in its trading business. At a North America Fintech Connect virtual conference today, Rajesh Krishnamachari, global head of data science […]



An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM)




Understanding why your AI-based models make the decisions they do is crucial for deploying practical solutions in the real world. Here, we review some techniques in the field of Explainable AI (XAI), discuss why explainability is important, walk through example explanations using LIME and SHAP, and demonstrate how Explainable Boosting Machines (EBMs) can make explainability even easier.

By Chaitanya Krishna Kasaraneni, Data Science Intern at Predmatic AI.



In recent times, machine learning has become the core of developments in many fields, such as sports, medicine, science, and technology. Machines (computers) have become so intelligent that they have even defeated professionals in games like Go. Such developments raise the question of whether machines would also make better drivers (autonomous vehicles) or even better doctors.

In many machine learning applications, users rely on the model to make decisions. But a doctor certainly cannot operate on a patient simply because “the model said so.” Even in low-risk situations, such as choosing a movie to watch on a streaming platform, a certain measure of trust is required before we surrender hours of our time based on a model.

Because many machine learning models are black boxes, understanding the rationale behind a model’s predictions would certainly help users decide when to trust them and when not to. This “understanding the rationale” leads to the concept called Explainable AI (XAI).

What is Explainable AI (XAI)?

Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. [Wikipedia]

How is Explainable AI different from Artificial Intelligence?

Difference Between AI and XAI.

In general, AI arrives at a result using an ML algorithm, but the architects of the AI systems do not fully understand how the algorithm reached that result.

On the other hand, XAI is a set of processes and methods that allows users to understand and trust the results/output created by a machine learning model/algorithm. XAI is used to describe an AI model, its expected impact, and potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision-making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.

Famous examples of such explainers are Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP).

  • LIME explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
  • SHAP is a game theoretic approach to explain the output of any machine learning model.

Explaining Predictions using SHAP

SHAP is a novel approach to XAI developed by Scott Lundberg at Microsoft and eventually open-sourced.

SHAP has a strong mathematical foundation. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details).

Shapley values

With Shapley values, each prediction can be broken down into individual contributions for every feature.

For example, suppose your input data has 4 features (x1, x2, x3, x4) and the output of the model is 75. Using Shapley values, you might find that feature x1 contributed 30, feature x2 contributed 20, feature x3 contributed -15, and feature x4 contributed 40. The sum of these 4 Shapley values is 30 + 20 - 15 + 40 = 75, i.e., the output of your model. This is great, but sadly, these values are extremely hard to calculate.

For a general model, the time taken to compute Shapley values is exponential in the number of features. With 10 features, this might still be okay, but with more, say 20, it may already be infeasible depending on your hardware. To be fair, if your model consists of trees, there are faster approximations for computing Shapley values, but they can still be slow.
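To see where the exponential cost comes from, here is a brute-force computation of exact Shapley values for a hypothetical 3-feature toy model (a plain-Python sketch, not the SHAP library): every subset of the remaining features must be enumerated for each feature, yet the resulting per-feature contributions sum back to the model output, as promised.

```python
from itertools import combinations
from math import factorial

def model(x1, x2, x3):
    # Toy model with an interaction term (purely illustrative).
    return 2 * x1 + 3 * x2 * x3

def value(subset, x, baseline=(0, 0, 0)):
    # v(S): evaluate the model with features outside S held at a baseline.
    args = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return model(*args)

def shapley(i, x):
    # Exact Shapley value of feature i: loops over all subsets of the
    # other features, hence exponential in the number of features.
    n = len(x)
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for s in combinations(others, size):
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value(set(s) | {i}, x) - value(set(s), x))
    return total

x = (1, 2, 1)
phis = [shapley(i, x) for i in range(3)]
print(phis)                   # per-feature contributions
print(sum(phis), model(*x))   # contributions sum (up to float error) to the output
```

Note how the interaction term 3·x2·x3 is split evenly between x2 and x3, while the purely linear contribution of x1 is attributed to x1 alone.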

SHAP using Python

In this article, we’ll be using red wine quality data to understand SHAP. The target value of this dataset is the quality rating from low to high (0–10). The input variables are the content of each wine sample, including fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulfates, and alcohol. There are 1,599 wine samples. The code can be found via this GitHub link.

In this post, we will build a random forest regression model and use the TreeExplainer in SHAP. SHAP provides an explainer for any ML algorithm, tree-based or not. If your model is a tree-based machine learning model, you should use TreeExplainer(), which has been optimized to render fast results. If your model is a deep learning model, use DeepExplainer(). For all other types of algorithms (such as KNNs), use the model-agnostic KernelExplainer().

SHAP values work for both continuous and binary target variables.

Variable Importance Plot — Global Interpretability

A variable importance plot lists the most significant variables in descending order. The top variables contribute more to the model than the bottom ones and thus have high predictive power. Please refer to this notebook for code.

Variable Importance Plot using SHAP.

Summary Plot

Although SHAP does not have a built-in function for this, you can output the plot using the matplotlib library.

The SHAP value plot can further show the positive and negative relationships of the predictors with the target variable.

Summary Plot.

This plot is made of all the dots in the train data. It demonstrates the following information:

  • Variables are ranked in descending order.
  • The horizontal location shows whether the effect of that value is associated with a higher or lower prediction.
  • The color shows whether that variable is high (in red) or low (in blue) for that observation.

Using SHAP, we can generate partial dependence plots. The partial dependence plot shows the marginal effect one or two features have on the predicted outcome of a machine learning model (J. H. Friedman 2001). It tells whether the relationship between the target and a feature is linear, monotonic, or more complex.
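The computation behind a partial dependence plot is simple enough to sketch directly (plain Python on a hypothetical toy model, rather than SHAP’s built-in plotting): the partial dependence of feature i at value v is the average prediction over the dataset when xi is forced to v everywhere.

```python
import random

random.seed(2)

def model(x1, x2):
    # Toy model; purely illustrative.
    return x1 ** 2 + 0.5 * x2

X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]

def partial_dependence(feature, grid):
    # PD(v): average prediction with one feature forced to v (Friedman 2001).
    curve = []
    for v in grid:
        if feature == 0:
            preds = [model(v, x2) for _, x2 in X]
        else:
            preds = [model(x1, v) for x1, _ in X]
        curve.append(sum(preds) / len(preds))
    return curve

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
pd_x1 = partial_dependence(0, grid)
print([round(p, 2) for p in pd_x1])  # roughly v**2 plus a constant offset
```

The recovered curve is U-shaped for x1 (its effect is quadratic, so clearly non-monotonic), which is exactly the kind of relationship the plot is meant to reveal.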

Black-box explanations are much better than no explanation at all. However, as we have seen, both LIME and SHAP have some shortcomings. It would be better if a model performed well and was interpretable at the same time. The Explainable Boosting Machine (EBM) is a representative of such methods.

Explainable Boosting Machine (EBM)

EBM is a glassbox model designed to have accuracy comparable to state-of-the-art machine learning methods like Random Forest and BoostedTrees, while being highly intelligible and explainable.

The EBM Algorithm is a fast implementation of the GA²M algorithm. In turn, the GA²M algorithm is an extension of the GAM algorithm. Therefore, let’s start with what the GAM algorithm is.

GAM Algorithm

GAM stands for Generalized Additive Model. It is more flexible than logistic regression but still interpretable. The hypothesis function for GAM is as follows:
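In standard notation, with g a link function (such as the identity or logit) and one shape function per feature, this is:

```latex
g\big(\mathbb{E}[y]\big) = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_n(x_n)
```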

The key part to notice is that instead of a linear term βi·xi for a feature, we now have a function fi(xi). We will come back later to how this function is computed in EBM.

One limitation of GAM is that each feature function is learned independently. This prevents the model from capturing interactions between features and pushes the accuracy down.

GA²M algorithm

GA²M seeks to improve on this by also considering some pairwise interaction terms in addition to the function learned for each feature. This is not an easy problem to solve: there is a large number of interaction pairs to consider, which increases compute time drastically. GA²M uses the FAST algorithm to pick useful interactions efficiently. This is the hypothesis function for GA²M; note the extra pairwise interaction terms.
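In the same notation as the GAM hypothesis, GA²M adds selected pairwise terms fij:

```latex
g\big(\mathbb{E}[y]\big) = \beta_0 + \sum_i f_i(x_i) + \sum_{(i,j)} f_{ij}(x_i, x_j)
```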

By adding pairwise interaction terms, we get a stronger model while still being interpretable. This is because one can use a heatmap and visualize two features in 2D and their effect on the output clearly.

EBM Algorithm

Finally, let us talk about the EBM algorithm. In EBM, we learn each feature function fi(xi) using methods such as bagging and gradient boosting. To make the learning independent of the order of features, the authors use a very low learning rate and cycle through the features in a round robin fashion. The feature function fi for each feature represents how much each feature contributes to the model’s prediction for the problem and is hence directly interpretable. One can plot the individual function for each feature to visualize how it affects the prediction. The pairwise interaction terms can also be visualized on a heatmap as described earlier.

This implementation of EBM is also parallelizable, which is invaluable in large-scale systems. It also has the added advantage of having an extremely fast inference time.

Training the EBM

The EBM training part uses a combination of boosted trees and bagging. A good definition would probably be bagged boosted bagged shallow trees.

Shallow trees are trained in a boosted way. These are tiny trees (with a maximum of 3 leaves by default). Also, the boosting process is specific: Each tree is trained on only one feature. During each boosting round, trees are trained for each feature one after another. It ensures that:

  • The model is additive.
  • Each shape function uses only one feature.

This is the base of the algorithm, but other techniques further improve the performance:

  • Bagging, on top of this base model.
  • Optional bagging, for each boosting step. This step is disabled by default because it increases the training time.
  • Pairwise interactions.

Depending on the task, the third technique can dramatically boost performance. Once a model is trained with individual features, a second pass is done (using the same training procedure) but with pairs of features. The pair selection uses a dedicated algorithm that avoids trying all possible combinations (which would be infeasible when there are many features).

Finally, after all these steps, we have a tree ensemble. These trees are discretized simply by running them with all the possible values of the input features. This is easy since all features are discretized. So the maximum number of values to predict is the number of bins for each feature. In the end, these thousands of trees are simplified to binning and scoring vectors for each feature.
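The whole recipe can be caricatured in a few lines (a plain-Python sketch on hypothetical toy data, with bin-mean fits standing in for tiny trees, and no bagging or pairwise pass): round-robin boosting with a small learning rate builds one binned score vector per feature, which is exactly the final binning-and-scoring representation described above.

```python
import random
import statistics

random.seed(1)

# Toy data: additive ground truth y = f1(x1) + f2(x2) + noise.
X = [(random.random(), random.random()) for _ in range(300)]
y = [4 * x1 + (1 if x2 > 0.5 else -1) + random.gauss(0, 0.1) for x1, x2 in X]

N_BINS, LR, ROUNDS = 8, 0.1, 200

def bin_of(v):
    return min(int(v * N_BINS), N_BINS - 1)

# One score vector per feature: these ARE the shape functions f_i.
shape = [[0.0] * N_BINS for _ in range(2)]

def predict(x):
    return sum(shape[i][bin_of(x[i])] for i in range(2))

# Round-robin cyclic boosting: each round, for each feature in turn,
# fit the current residuals with a one-feature binned model and add it
# with a small learning rate (the EBM recipe, heavily simplified).
for _ in range(ROUNDS):
    for i in range(2):
        resid = [yi - predict(xi) for xi, yi in zip(X, y)]
        sums, counts = [0.0] * N_BINS, [0] * N_BINS
        for xi, r in zip(X, resid):
            b = bin_of(xi[i])
            sums[b] += r
            counts[b] += 1
        for b in range(N_BINS):
            if counts[b]:
                shape[i][b] += LR * sums[b] / counts[b]

mse = statistics.mean((yi - predict(xi)) ** 2 for xi, yi in zip(X, y))
print("train MSE:", round(mse, 3))
print("f1 score vector:", [round(s, 2) for s in shape[0]])
```

Each score vector can be plotted directly against its feature’s bins, which is what makes the final model interpretable: prediction is just a sum of per-feature lookups.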

EBM using Python

We will use the same red wine quality data to understand InterpretML. The code can be found via this GitHub Link.

Exploring Data

The “summary” of training data displays a histogram of the target variable.

Summary displaying a histogram of Target.

When an individual feature (here, fixed acidity) is selected, the graph shows the Pearson correlation of that feature with the target. A histogram of the selected feature is also shown in blue against the histogram of the target variable in red.

Individual Feature against Target.

Training the Explainable Boosting Machine (EBM)

The ExplainableBoostingRegressor() model of the InterpretML library with the default hyper-parameters is used here. RegressionTree() and LinearRegression() are also trained for comparison.

ExplainableBoostingRegressor Model.

Explaining EBM Performance

RegressionPerf() is used to assess the performance of each model on the test data. The R-squared value of EBM is 0.37, which outperforms the R-squared values of the linear regression and regression tree models.

Performance of (a) EBM, (b) Linear Regression, and (c) Regression Tree.

The global and local interpretability of each model can also be generated using the model.explain_global() and model.explain_local() methods, respectively.

InterpretML also provides a feature to combine everything and generate an interactive dashboard. Please refer to the notebook for graphs and a dashboard.


With the growing demand for explainability and the shortcomings of existing XAI approaches, the times when one had to choose between accuracy and explainability are long gone. EBMs can be as accurate as boosted trees while being as easily explainable as logistic regression.

Original. Reposted with permission.



Digitizing Retail with New IoT Chip Adoption




Qualcomm has announced seven new chips designed to support new IoT devices in the retail sector. The line spans high-end and entry-level parts and includes chips with AI and image processing technology that will help make camera-equipped IoT devices more effective.

The launch is part of a broader trend in the IoT industry towards new applications in the retail sector — where business as usual has been significantly disrupted by the COVID-19 pandemic. These innovations could support major changes in the retail industry — like smart stores, interactive displays, and streamlined payment options.

Qualcomm Expands Chip Options to Support Retail IoT

The new chips, which may help accelerate the adoption of “smart retail,” are also designed to support new IoT applications in the warehousing and manufacturing sectors.

The line includes both entry-level chips, designed to support simpler IoT options for retailers and other businesses, as well as high-end chips that support a new range of devices and IoT features, including some powered by AI.

According to Qualcomm senior director of product management Nagaraju Naik, the high-end chips will support high-resolution video cameras and enable features like electronic pan, tilt, and zoom (or ePTZ).

The highest-end of the new chips accomplishes this with a range of features not present in many existing IoT chips — including reduced latency, triple-image signal processor (ISP) architecture, and an AI engine that supports up to seven concurrent cameras with 4K resolution each.

For several retail IoT applications — like interactive displays or security cameras that assist in smart store operations — these chips could help significantly improve device performance. Naik also said the chips would support new checkout and payment processing options — like “touchless [payment], smart carts, self-checkout, and mobile payments.”

In addition to these retail applications, the high-end chips will enable devices like autonomous picking robots in the manufacturing and warehousing industries.

IoT May Help Retailers Respond to a Changing Market

COVID-19 accelerated several existing trends in retail, and it’s likely that the pandemic significantly altered how consumers shop. According to research from WSL Strategic Retail, 48% of the population say they are now shopping for others they weren’t shopping for before the pandemic.

At the same time, the number of consumers shopping primarily or exclusively online has grown rapidly, and some industry observers believe these consumers will continue to shop online long after it is safe to return to stores.

New practices like Omni-shopping — the practice of consumers shopping in-store and using a retailer’s online storefront — will likely inform the tactics retailers will need to adopt if they want to succeed post-COVID-19.

The potential IoT offers, both in terms of data gathering and streamlining the in-store shopping experience, could be critical for retailers.

IoT devices enable touchless and smart payment options, such as allowing consumers to check out without needing to touch a credit card reader or similar device. In some cases, the new tech may enable checkout processes that do not require interacting with a cashier at all.

This new checkout experience is both streamlined and potentially more hygienic than the conventional experience. As a result, it could be appealing to customers who have left physical stores for convenient online shopping.

Novel IoT applications enabled by hardware like Qualcomm’s new chip line could help accelerate the digitization of retail over the next few years.

As data-gathering store sensors and interactive advertisements become more powerful and cost-effective, they will likely help businesses personalize advertising, optimize store layouts, and improve supply chain management.

These shifts could make in-store shopping a better proposition for customers who can just as easily shop online.

How New IoT Tech May Shape Retail’s Digital Future

The IoT industry has begun to invest in retail technology seriously. New hardware like Qualcomm’s IoT chips will likely help enable more powerful and cost-effective smart retail devices.

As the retail industry digitizes and adapts to the post-COVID-19 world, these devices could prove invaluable. Customers are turning away from in-store retail in favor of online shopping. Still, process changes and personalization made possible by new IoT technology could convince consumers to return to physical stores.


How Health Tech is Shaping the Future of Healthcare



By Khunshan Ahmad (@khunshan). Writes about tech. Software engineer and digital marketer by profession.

Technologies like Artificial Intelligence, Big Data, Machine Learning, Telemedicine, Virtual Reality, Augmented Reality, and the Internet of Things play a vital role in shaping the future of Health Tech. The goal is to make it easy for humans to take care of themselves and their overall health. 

In this article, we’ll discuss some of the ways AI, Telemedicine, AR, VR, IoT, and 3D technologies are improving healthcare and have become the driving forces of some medical technologies.

Artificial Intelligence in Health Tech

One of the top technologies causing a radical change in health tech is Artificial Intelligence. AI is the backbone of all modern emerging technologies. For the healthcare industry, AI-enabled solutions can assist medical research and help with new product development.

With Machine Learning, the most common form of AI, researchers can now reach conclusions more easily and with better precision. Big Data, which goes hand in hand with ML, is used to analyze enormous amounts of patient data and detect patterns of disease. Applications range from diagnosing diseases to discovering links in genetic code to robots assisting in surgeries. Altogether, this can lead to better outcomes and patient engagement, with immediate returns through cost reduction.

One example is LSAN, a deep neural network developed by researchers at Penn State University. The ML model predicts a patient’s future health conditions by scanning and analyzing the patient’s electronic health records.


AI in Cancer Care

The integration of AI technology in cancer care is one area that can make a breakthrough impact on humanity. Cancer screening today is inconvenient and invasive. The detection of two common cancers, colon and breast, still relies on screening technologies developed 50 years ago.

Cancer patients have a 90% chance of survival if cancer is detected at stage I versus only a 5% chance at stage IV, so early detection is a critical means of improving patient outcomes.

Helio Health is an AI-driven healthcare startup focused on developing and commercializing early cancer detection tests based on a simple blood draw. The company’s mission is to simplify cancer screening so lives can be saved by detecting cancer earlier. Helio Health has secured $86 million in venture funding and is currently in clinical trials for its lead liver cancer detection test, the HelioLiver Test. Helio’s development program currently focuses on liver, colon, breast, and lung cancer, and the company is actively collaborating with top national cancer centers.

Telemedicine in Health Tech

Telemedicine technologies have been making a huge impact. Telemedicine boomed during the COVID-19 pandemic, and I believe the trend is going to stay. There are plenty of reasons for that, but the biggest is that more and more gadgets, gear, and wearable devices – like Ring, FitBit, or Embr Wave – are becoming part of health tech. Apple announced a breakthrough ECG app – approved by the FDA – that empowers patients to maintain a log of their electrocardiogram anytime.

Our smartphones can now also pair with third-party health devices like glucometers, heart monitors, body scales, toothbrushes, and spirometers for other important and vital metrics. 

Wearable Devices

Wearable and mobile devices are becoming popular as they provide more accurate results than before. One of the leading causes of death worldwide is hypertension, but your smartphone can now measure your blood pressure as well. The Biospectal OptiBP app, funded by Bill and Melinda Gates, is a mobile-only app that measures your blood pressure at any time. The app is very accurate and, aided by telemedicine, can make a real difference in fighting the global hypertension crisis, even in low-income countries.


Health tech devices can even transmit data automatically to telemedicine service providers. The growing number and convenience of health devices is not only helping researchers with day-to-day data but also opening a new era of at-home telehealth.

Eye Exams Can Now be Done Online

Another example of telemedicine is Stanton Optical, a leading eye health provider. During the pandemic, it started to offer all eye care as part of its telemedicine initiative. The company offered patients a customized eye care treatment plan and prescription through a virtual video session with a local ophthalmologist (MD) or optometrist. This allowed patients to receive eye care from the safety of their homes during a pandemic in which many eye care providers were turning patients away.

Many other startups, like DoctorSpring and Second Opinions, provide telemedicine services. DoctorSpring lets you consult board-certified doctors 24×7. Second Opinions lets you submit a medical questionnaire before scheduling an online meeting with a board-certified doctor.

Health tech is making it possible for the healthcare industry to diagnose and treat major diseases like cancer, diabetes, and hypertension, and to help patients suffering from mental health issues.

Neural Interfaces Can Also Improve Health Tech

Elon Musk’s neural technology company, Neuralink, is working on a brain chip that would be implanted into the human skull and connected directly to the brain. The goal of the Neuralink chip is to provide an interface to communicate with the brain. It could help improve mental health and treat brain disorders like Alzheimer’s, Parkinson’s disease, and spinal cord injuries. It would initially focus on treating major traumas and brain injuries, and could also be used to restore eyesight and hearing. The chip’s electrodes can both read signals from the brain and write signals to it, which may also help in treating paralysis.


The brain chip designed by Neuralink is still in trials. It was first implanted in the brain of a pig, and more recently in a monkey; a video showed the monkey playing a video game using only its brain. Elon Musk claimed in a tweet that the chip will be ready for human trials later this year. However, he made a similar claim in 2019, saying the chip would be tested on humans by the end of 2020.

Facebook made a formal entry into the neural world when it acquired CTRL Labs, a startup co-founded by Internet Explorer creator and neuroscientist Thomas Reardon, in 2019. CTRL Labs is working on a similar brain-machine interface, but unlike Neuralink’s implanted chips, its main product is a wristband, which it has demonstrated translating the body’s electrical nerve signals into computer input.

The CTRL Labs wristband is now part of Facebook’s AR/VR research group. Facebook plans to use CTRL Labs’ neural interface technology to let users interact with its AR/VR devices more naturally and intuitively. This investment in neural technology is a clear indication that we will see breakthroughs in health tech very soon.

Medical VR/AR Solutions

Virtual Reality (VR) and Augmented Reality (AR) have proved to be significant and useful visual technologies. They have already advanced so much in the healthcare sector that medical practitioners can now render 3D images of human anatomy and CT scans for better examination and to precisely locate blood vessels, bones, and muscles.

Osso VR, a startup based in Palo Alto, has raised $14 million in September to build a virtual reality surgical training and assessment platform. This can help surgeons in training to repeat steps many times virtually. Orthopedic residency programs using Osso VR include Columbia University, David Geffen School of Medicine at UCLA, Harvard Medical School, and more.


Organovo has already printed human liver cells and tissues in 3D. Their ExVive3D Liver Tissue is helping the pharmaceutical and the healthcare industry in testing the conditions of the human liver. 

The technology is advancing quickly enough that we could soon see surgeons and medical staff frequently using VR or AR glasses during critical surgeries; practitioners aided by this health tech have been found to perform surgeries more quickly and with better precision.

Final Thoughts on the Current State of Health Tech

Healthcare has always been of immense importance to human beings. Health tech is constantly improving the healthcare sector, and the ways of providing basic healthcare to humans have become easier and more effective. 
