Labelling Data Using Snorkel


By Alister D’Costa, Stefan Denkovski, Michal Malyska, Sally Moon, Brandon Rufino, NLP4H


In this tutorial, we will walk through the process of using Snorkel to generate labels for an unlabelled dataset. We will provide you examples of basic Snorkel components by guiding you through a real clinical application of Snorkel. Specifically, we will use Snorkel to try to boost our results in predicting Multiple Sclerosis (MS) severity scores. Enjoy!

Check out the Snorkel Intro Tutorial for a walkthrough of spam labelling. For more examples of high-performance, real-world uses of Snorkel, see Snorkel’s publication list.

Check out our other work focused on NLP for MS severity classification here.

What is Snorkel?

Snorkel is a system that facilitates the process of building and managing training datasets without manual labelling. The first component of a Snorkel pipeline is a set of labelling functions, which are designed to be weak heuristic functions that predict a label given unlabelled data. The labelling functions we developed for MS severity score labelling were the following:

  • Multiple key-word searches (using regular expressions) within the text. For example, to find a severity score we searched for the phrase in both numeric and Roman-numeral format.
  • Common baselines such as logistic regression, linear discriminant analysis, and support vector machines which were trained using term frequency-inverse document frequency (or tf-idf for short) features.
  • Word2Vec Convolutional Neural Network (CNN).
  • Our MS-BERT classifier described in this blog post.

The second component of the Snorkel pipeline is a generative model that outputs a single confidence weighted training label per data point given predictions from all the labelling functions. It does this by learning to estimate the accuracy and correlations of the labelling functions based on their agreements and disagreements.

Snorkel Tutorial

To reiterate, in this article we demonstrate label generation for MS severity scores. A common measure of MS severity is EDSS, or the Expanded Disability Status Scale: a scale that ranges from 0 to 10 depending on the severity of MS symptoms. We will refer to EDSS simply as the MS severity score, but we thought our keen readers would appreciate this detail. The score is further described here.

Step 0: Acquire a Dataset

In our task, we worked with a dataset compiled by a leading MS research hospital, containing over 70,000 MS consult notes for about 5,000 patients. Of the 70,000 notes, only 16,000 were manually labelled by an expert for MS severity, which means there are approximately 54,000 unlabelled notes. As you may or may not be aware, training models on a larger dataset generally leads to better performance. Hence, we used Snorkel to generate what we call ‘silver’ labels for our 54,000 unlabelled notes. The 16,000 ‘gold’ labelled notes were used to train our classifiers before creating their respective labelling functions.

Step 1: Installing Snorkel

To install Snorkel in your project, you can run the following:
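Snorkel is distributed on PyPI, so in most environments the install is a single pip command (add it to your requirements file if you use one):

```shell
pip install snorkel
```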

Step 2: Adding the Labelling Functions

Setting up

Labelling functions allow you to define weak heuristics and rules that predict a label given unlabelled data. These heuristics can be derived from expert knowledge or from other labelling models. In the case of MS severity score prediction, our labelling functions included: key-word search functions derived from clinicians, baseline models trained to predict MS severity scores (tf-idf, word2vec CNN, etc.), and our MS-BERT classifier.

As you will see below, you mark labelling functions by adding “@labeling_function()” above the function. For each labelling function, a single row of a dataframe containing unlabelled data (i.e. one observation/sample) is passed in. Each labelling function applies heuristics or models to obtain a prediction for that row. If no prediction is found, the function abstains (i.e. returns -1).

When all labelling functions have been defined, you can make use of the “PandasLFApplier” to obtain a matrix of predictions given all labelling functions.

Upon running the following code, you will obtain an (N × num_lfs) L_predictions matrix, where N is the number of observations in ‘df_unlabelled’ and ‘num_lfs’ is the number of labelling functions defined in ‘lfs’.

 

Labelling Function Example #1: Key-Word Search

Below is an example of a key-word search (using regular expressions) used to extract MS severity scores recorded in decimal form. If the score is found, the function returns it in the appropriate output format. Otherwise, the function abstains (i.e. returns -1) to indicate that the score was not found.

Labelling Function Example #2: Trained Classifier

Above we saw an example using a key-word search. To integrate a trained classifier, you must perform one extra step: train and export your model before creating your labelling function. Here is an example of how we trained a logistic regression model built on top of tf-idf features.
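A sketch of that training step, using scikit-learn with toy stand-in notes (the real model was trained on the 16,000 ‘gold’ labelled notes; the file name is our own choice):

```python
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-ins for gold-labelled consult notes and their class ids.
train_texts = ["mild symptoms, normal gait", "severe symptoms, assistance required"] * 10
train_labels = [2, 12] * 10

tfidf_lr = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("lr", LogisticRegression(max_iter=1000)),
])
tfidf_lr.fit(train_texts, train_labels)

# Export the fitted pipeline so the labelling function can load it later.
joblib.dump(tfidf_lr, "tfidf_lr.joblib")
```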

With the model trained, implementing a labelling function is as simple as this:

Step 3(a): Using Snorkel’s Majority Vote

Arguably the simplest function Snorkel uses to generate a label is ‘Majority Vote’. Majority Vote, as the name implies, makes a prediction based on the class with the most votes.

To implement Majority Vote you must specify the ‘cardinality’ (i.e. number of classes).

 

Step 3(b): Using Snorkel’s Label Model

To take advantage of Snorkel’s full functionality, we used the ‘Label Model’ to generate a single confidence-weighted label given the matrix of predictions obtained from all the labelling functions (i.e. L_unlabelled). The Label Model predicts by learning to estimate the accuracy and correlations of the labelling functions based on their agreements and disagreements.

You can define a Label Model and specify ‘cardinality’. After you fit the Label Model with L_unlabelled, it will generate single predictions for the unlabelled data.

 

Step 4: Evaluation Tools

LF Analysis — Coverage, Overlaps, Conflicts

To better understand how your labelling functions are performing, you can make use of Snorkel’s LFAnalysis. The LF analysis reports the polarity, coverage, overlap, and conflicts of each labelling function.

The definitions of these terms are as follows (you can refer to the Snorkel documentation for more information):

  • Polarity: the set of unique labels each LF outputs (excluding abstains), inferred from the label matrix.
  • Coverage: the fraction of data points with at least one (non-abstain) label.
  • Overlap: the fraction of data points with at least two non-abstain labels.
  • Conflicts: the fraction of data points for which a labelling function disagrees with at least one other labelling function.

LFAnalysis will provide an analysis of how your labelling functions performed relative to each other.

 

‘get_label_buckets’

Snorkel provides further evaluation tools to help you understand the quality of your labelling functions. In particular, ‘get_label_buckets’ is a handy way to combine labels and make comparisons. For more information, read the Snorkel documentation.

The following code allows you to compare the true labels (y_gold) and predicted labels (y_preds) to view data points that were correctly or incorrectly labelled by Snorkel. This lets you pinpoint which data points are difficult to label correctly, so that you can fine-tune your labelling functions to cover those edge cases.

Note that for this analysis, we went back and created an L_train matrix containing our labelling function predictions for our ‘gold’ labelled dataset.

 

Alternatively, you can use ‘get_label_buckets’ to make comparisons between labelling functions.

The following code allows you to compare the label predictions in L_unlabelled to observe how different labelling functions label datapoints differently.

 

Step 5: Deployment

Choosing the Best Labelling Model to Label Unlabelled Data

Following the procedure outlined above, we developed various labelling functions based on key-word searches, baseline models, and our MS-BERT classifier. We experimented with various ensembles of labelling functions and used Snorkel’s Label Model to obtain predictions for a held-out labelled dataset. This allowed us to determine which ensemble of labelling functions would best label our unlabelled dataset.

As shown in the table below, we observed that the MS-BERT classifier (MSBC) alone outperformed every ensemble containing it by at least 0.02 Macro-F1. The addition of weaker heuristics and classifiers consistently decreased an ensemble’s performance. Furthermore, we observed that the amount of conflict for the MS-BERT classifier increased as weaker classifiers and heuristics were added to the ensemble.


Note: Rule Based (RB) refers to our key-word searches; LDA refers to linear discriminant analysis; TFIDF refers to all models built on top of tf-idf features (i.e. logistic regression, linear discriminant analysis, and support vector machines).

To understand our findings, we must remind ourselves that Snorkel’s Label Model learns to estimate the accuracy and correlations of the labelling functions based on their agreements and disagreements with one another. Therefore, in the presence of a strong labelling function, such as our MS-BERT classifier, the addition of weaker labelling functions introduces more disagreements with the strong labelling function and thereby decreases performance. From these findings, we learned that Snorkel may be better suited to situations where you only have weak heuristics and rules. If you already have a strong labelling function, building a Snorkel ensemble with weaker heuristics may compromise performance.

Therefore, the MS-BERT classifier alone was chosen to label our unlabelled dataset.

Semi-Supervised Labelling Results

 
The MS-BERT classifier was used to obtain ‘silver’ labels for our unlabelled dataset. These ‘silver’ labels were combined with our ‘gold’ labels to obtain a silver+gold dataset. To infer the quality of the silver labels, new MS-BERT classifiers were developed: 1) MS-BERT+ (trained on silver+gold labelled data); and 2) MS-BERT-silver (trained on silver labelled data only). These classifiers were evaluated on a held-out test dataset that was previously used to evaluate our original MS-BERT classifier (trained on gold labelled data). MS-BERT+ achieved a Macro-F1 of 0.86238 and a Micro-F1 of 0.92569, and MS-BERT-silver achieved a Macro-F1 of 0.82922 and a Micro-F1 of 0.91442. Although their performance was slightly lower than that of our original MS-BERT classifier (Macro-F1 of 0.88296, Micro-F1 of 0.94177), they still outperformed the previous best baseline models for MS severity prediction. The strong results of MS-BERT-silver help show the effectiveness of using our MS-BERT classifier as a labelling function: they demonstrate the potential to reduce the tedious hours a professional must spend reading through a patient’s consult notes to manually generate MS severity scores.

Thank You!

 
Thanks for reading everyone! If you have any questions please do not hesitate to contact us at nlp4health (at gmail dot) com. 🙂

Acknowledgements

 
We would like to thank the researchers and staff at the Data Science and Advanced Analytics (DSAA) department, St. Michael’s Hospital, for providing consistent support and guidance throughout this project. We would also like to thank Dr. Marzyeh Ghassemi and Taylor Killan for providing us the opportunity to work on this exciting project. Lastly, we would like to thank Dr. Tony Antoniou and Dr. Jiwon Oh from the MS clinic at St. Michael’s Hospital for their support on the neurological examination notes.

Originally published at https://nlp4h.com.

 
Bio: The authors are a group of graduate students at University of Toronto working on NLP for healthcare.

Original. Reposted with permission.


Source: https://www.kdnuggets.com/2020/07/labelling-data-using-snorkel.html

How Can Technology Help Fight the COVID-19 Pandemic?


As the COVID-19 pandemic continues to unfold, technology solutions and government initiatives are multiplying to help monitor and control the virus’s journey. Their aid includes reducing the load on the health system and reinforcing the efforts of overworked, burned-out healthcare workers.

While smart technologies cannot replace or compensate for public-institution measures, they do play a crucial role in emergency responses. Let’s take a look at promising use cases of how technology can help fight the novel coronavirus outbreak.

Technologies Used for Good

People tend to think of technology as heartless machinery, which may be true, but only until it is put to good use. Just look at all the wonderful things we’ve managed to do with its help.

Telemedicine is gaining traction by offering remote patient monitoring and interactive remote doctor’s visits. At the same time, 3D printing and open-source solutions are facilitating the production of more affordable face masks, ventilators, and breathing filters, as well as optimizing the supply of medical equipment. Even more, the pandemic has driven scientists to desperate measures. They are now experimenting with gene editing, synthetic biology, and nanotechnology to develop and test vaccines faster than ever in the history of humanity.

Smart technologies like the Internet of Things (IoT), big data, and artificial intelligence (AI) are being massively adopted to help track the disease’s spread and contagion, manage insurance payments, uphold medical supply chains, and enforce restrictive measures. Let’s go step by step to see how IoT, AI, big data, and mobile solutions are actually enhancing medical care.

IoT for Smart Patient Care Management and Home Automation

IoT has already found its use among healthcare providers. Today, connected patient imaging, health devices and applications, worker solutions, and ambulance programs are being adopted globally. But COVID-19 has made the technology take on new applications to help the world combat the epidemic. Tracking quarantine, pre-screening and diagnosing, cleaning and disinfecting, innovative use of drones, and reducing in-home infections are all “new normals” thanks to IoT.


For example, the American health technology company Kinsa makes smart thermometers that screen and aggregate people’s temperature and symptom data in real time. Having gathered data from over one million connected thermometers, Kinsa rolled out its US HealthWeather™ Map.

The map is updated daily, highlighting how severely the population is being affected by influenza-like illness (ILI). This real-time information helps health authorities see increases in fevers as early indicators of the community spread of COVID-19 and streamline the allocation of health resources. These areas are marked in the “Atypical” mode of the map.

To slow down the spread of COVID-19, a team of Seattle engineers created Immutouch, a smart wristband vibrating every time a person wearing it tries to touch their face.


Smart speakers, lights, and security systems are being used to open doors and switch on lights to reduce in-home infections. These gadgets allow people to avoid touching the surfaces of doorknobs, switches, mail, packages, or anything that could easily spread germs.

The Role of Big Data in Fighting Coronavirus

Tapping into big data is a must to develop real-time forecasts and arm healthcare professionals with a profound database to help with decision-making.


IBM Clinical Development system is an advanced Electronic Data Capture (EDC) platform that allows an accelerated delivery of medications to market and reduces the time and cost of clinical trials thanks to cognitive computing, patient data assets, and IoT. Additionally, the U.S. government had been in active talks with Facebook, Google, and others to determine how to use location data to glean insights for combating the COVID-19 pandemic.

Could Mobile Apps be Used to Control the Pandemic?

The COVID-19 pandemic has become a game-changer for the healthcare continuum. Today’s mobile apps are on guard to help patients receive online therapy, at-home testing, conclude self-checks, and improve mental well-being. Thanks to smartphone apps, it is now possible to trace the virus’s journey and help limit its spread.

Apple COVID-19, for instance, was created in partnership with the Centers for Disease Control and Prevention (CDC), the White House, and the Federal Emergency Management Agency (FEMA). The application contains vital and relevant information from trusted sources on the coronavirus pandemic: hand hygiene practices, social distancing FAQs, quarantine guidelines, self-checking tutorials, tips on cleaning, and disinfecting surfaces. On top of that, it has a screening tool that advises people on what to do when a person has COVID-19 symptoms, has just returned from abroad, or has come in close contact with someone who might be infected with the disease.

Meanwhile, health authorities in Abu Dhabi have created the TraceCovid app for Bluetooth-enabled smartphones to minimize the spread of the disease. The service allows tracing individuals who have come into proximity with a person who tested positive for COVID-19. Thanks to it, medical professionals can react faster and render the necessary healthcare. Germany, in turn, is going to roll out a smartphone app that will use Bluetooth to alert people if they are close to someone with a confirmed viral infection.


Telemedicine has also proved to be an efficient tool for flattening the curve. The Sheba Medical Centre, the largest Israeli hospital, launched a telehealth program for remote patient-monitoring to control the pandemic spread. Doctolib, a Franco-German company, Qare (France), Livi (Sweden), Push Doctor (the UK), Compugroup Medical (Germany) are offering virtual doctors too.

Using AI to Identify, Track and Forecast Outbreaks

Artificial intelligence, powered by natural language processing (NLP) and location monitoring, is crucial for identifying, tracking, and forecasting outbreaks, predicting hotspots, and helping make better decisions.

For example, Microsoft collaborated with the U.S. Centers for Disease Control and Prevention (CDC) to create an AI-based COVID-19 Assessment bot to treat patients more effectively and allocate limited resources. The bot, nicknamed Clara, can evaluate symptoms, advise on the next steps to take, and track users who need urgent care the most.

The Canadian startup BlueDot has applied AI to spot and track the spread of COVID-19 and predict outbreaks, and the Japanese company Bespoke rolled out Bebot, an AI-powered chatbot that was developed specifically for travelers. This mobile app informs and assists them with coronavirus-related questions as they move about.

Conclusion

There’s no doubt that the coronavirus pandemic has become a real-life test for everyone. It has caused tremendous damage, but at the same time, it has forced tech innovators to roll out advanced solutions, and it seems that they don’t plan on slowing down anytime soon.

Healthcare providers across the globe are continually switching to smart technologies. So if you are in the smart technology niche, consider the current trends to steer your business in the right direction.

Source: https://www.iotforall.com/how-can-technology-help-fight-covid19/


Chatbots and Intelligent Automation Solutions Paving the Way towards Seamless Business Continuity


Frequent business disruptions in the form of storms, pandemics, lockdowns, etc., pose a risk to seamless operations and revenue generation in service industries. One day of operational disruption can lead to losses worth millions, and semi-automation cannot stop the cascading effects of an unprecedented business disruption. Services such as banking, financial services, insurance, healthcare, and information technology cannot afford the risk of downtime. Chatbots powered by Intelligent Automation are the indispensable solution in the omni-channel customer interface that keeps a business moving 24×7 even in the face of a major business disruption such as a long-prevailing pandemic.

How do Intelligent Automation-powered chatbots offer seamless business continuity?

Chatbots engage diverse skill sets such as Robotic Process Automation (RPA) and Artificial Intelligence (AI) / Machine Learning (ML), in short Intelligent Automation, and offer a lifeline to service-industry businesses. Chatbots are located on the key pages of a business website or its social media pages and can be accessed by customers and prospects round the clock in different international languages. They augment the services of the regular service desk and help tide over most emergency situations.

Chatbots can handle complex queries, and their functioning depends on the training dataset and the streamlined data in the CRM database. All chatbot interactions can be further cleaned, stored in the CRM, and analysed. Based on these interactions at different stages of the customer journey, chatbots can make intelligent suggestions during subsequent customer interactions.

Chatbots offer tremendous business benefits. Their responses are highly accurate and relevant, with minuscule turnaround times. On-time responses, from order booking to bill payment, while taking care of customer preferences, ensure high productivity and thereby generate high revenue even when a business executive is not able to interact directly with a customer.

In conclusion:

A chatbot solution powered by Intelligent Automation is an indispensable tool in the omni-channel customer service desk of a service-industry business. It helps keep the business up and running even when customer executives are unable to interact directly with customers due to unprecedented business disruptions. Chatbot solutions thereby enable businesses to stay up and functioning at all times, 24x7x365.

Image Credit: https://www.freepik.com/yanalya

Source: https://datafloq.com/read/chatbots-intelligent-automation-solutions-paving-way-towards-seamless-business-continuity/8850


How Hazelcast hopes to make digital transformation mainstream


Commentary: Even as the coronavirus pandemic has hastened digital transformation efforts, success remains elusive for many companies. This one-stop shop to digital transformation might help.


It’s no secret that, as CircleCI CEO Jim Rose put it, “The pandemic has compressed the time[line]” for digital transformation. What is perhaps surprising is just how broad and deep that transformation is spreading. In an interview with Hazelcast chief product officer David Brimley, he stressed that while Fortune 500 e-commerce and finance companies have historically paid the bills for Hazelcast, provider of an open source in-memory data grid (IMDG), mid-sized enterprises “are coming to us and saying, we want to start digitizing and [adding digital] channels for our business.”

How they get there, and how fast, is the question. 


A one-stop shop to digital transformation 

As keen as companies are to move workloads to the cloud to facilitate digital transformation, not all companies are alike in their readiness, Brimley said. In particular, these mid-sized enterprises may lack the personnel or other resources to push aggressively into the cloud, whatever their intentions. As such, he said, many companies are trying to figure out “the quickest way I can get the applications and hardware I’ve got today in my own data centers and add a digital channel on the top of it as quickly as I can.” 



By pairing Hazelcast IMDG for distributed coordination and in-memory data storage with Hazelcast Jet for building streaming data pipelines, Brimley said, organizations can build digital integration hubs without having the technical chops of a Netflix or Facebook. “There are a lot of companies that can’t make head nor tail of this plethora of Cloud Native Computing Foundation products [Kubernetes, Envoy, Fluentd, etc.], and they just want to stand up a Java process, have it clustered together, have some way of running their ‘microservices’ on this Java cluster, and off they go.”

Once, a company (and open source project) like Hazelcast would have had to pitch themselves to banks and credit card companies for low-latency, high-performance distributed systems; these were the types of organizations that valued IMDGs. Today, however, such concerns span a much broader range of companies, particularly with this crushing need to achieve digital transformation.  

For Brimley and Hazelcast, it’s not about pitching themselves as a database or any particular technology. Even the IMDG label might not fit particularly well. After all, the company isn’t positioning itself as being about technology, per se, but rather about solving business problems; about how developers can use Hazelcast to capture “interesting new architectural patterns,” in Brimley’s words. It’s taking on the “I need to embrace an event-driven architecture” crowd, and not selling a data cache or, yes, even an in-memory data grid.

Disclosure: I work for AWS, but these are my views and don’t reflect those of my employer.


Source: https://www.techrepublic.com/article/how-hazelcast-hopes-to-make-digital-transformation-mainstream/#ftag=RSS56d97e7
