AWS Textract Teardown – Pros and Cons of using Amazon’s Textract in 2021

Text Recognition in 2021

In many companies and organizations, plenty of valuable business data is stored in documents, and this data is at the heart of digital transformation. Unfortunately, an estimated 80% of this data is embedded in unstructured formats like business invoices, emails, receipts, and PDF documents. To extract and make the most of the information in these documents, companies have increasingly turned to Artificial Intelligence (AI) based services. Among the providers of such services, Amazon has long been one of the most prominent players, with offerings spread across document processing, speech recognition, text analytics, and much more.

In this blog, we’ll be looking at Amazon’s AWS Textract, a fully managed machine learning service that automatically extracts printed text, handwriting, tables, and other data from scanned documents. Let’s get started!

In simple terms, AWS Textract is a deep learning-based service that converts different types of documents into an editable format. Suppose we have hard copies of invoices from different companies and want to store all the vital information from them in Excel spreadsheets. Usually, we rely on data entry operators to enter the data manually, which is hectic, time-consuming, and error-prone. With Textract, all we need to do is upload our invoices, and it returns all the text, forms, key-value pairs, and tables in the documents in a more structured way. Below is a screenshot of how AWS Textract performs intelligent information extraction:

Information Extraction on AWS Textract

Beyond typed text, AWS Textract also identifies handwritten text in documents. This makes information extraction even more useful, as handwritten text is often more complicated to extract than typed text. Now let’s look at some common use cases for Textract:

Robust and Normalised Data Capture: Amazon Textract enables text and tabular data extraction from a wide variety of documents, such as financial documents, research reports, and medical notes. These are not custom-built APIs; they learn from vast amounts of data every day, and this continuous learning makes extracting both structured and unstructured data from your documents much easier.

Key-Value Pair Extraction: Key-value pair extraction is a common problem in document processing, and Amazon Textract solves it out of the box. Using Textract, we can build pipelines that automate document processing end to end, from scanning documents to pushing the extracted data into spreadsheets.

Creating an intelligent search index: Amazon Textract enables you to create libraries of text detected in image and PDF files.

Using intelligent text extraction for Natural Language Processing (NLP): Amazon Textract enables you to extract text into words and lines. It also groups text by table cells if Amazon Textract document table analysis is enabled, giving you control over how text is grouped as input for NLP.


Looking for an intelligent Text Recognition solution? Head over to Nanonets and use a solution with accuracy above 95%.


In this section, we’ll discuss how AWS Textract works. We know that powerful AI and ML algorithms sit behind it; however, there are no open-source models to dive into for specifics. Still, I’ll try to decode how it works by summarising the documentation, which can be found here. Let’s get started!

How Amazon (AWS) Textract Works
How Amazon (AWS) Textract works (Source: Dev.to)

Firstly, whenever a new or scanned document is sent to Textract, it creates a list of block objects for all the detected text. For example, if an invoice consists of a hundred words, Textract creates a hundred block objects, one per word. Each block contains information about the detected item, where it’s located, and the confidence Amazon Textract has in the accuracy of the processing.

Most documents are made up of the following block types:

  • Page
  • Lines and words of text
  • Form data (Key-value pairs)
  • Tables and Cells
  • Selection elements

Below is an example of the block data structure returned by AWS Textract:

{ "Blocks":[ { "Geometry": { "BoundingBox": { "Width": 1.0, "Top": 0.0, "Left": 0.0, "Height": 1.0 }, "Polygon": [ { "Y": 0.0, "X": 0.0 }, { "Y": 0.0, "X": 1.0 }, { "Y": 1.0, "X": 1.0 }, { "Y": 1.0, "X": 0.0 } ] }, "Relationships": [ { "Type": "CHILD", "Ids": [ "2602b0a6-20e3-4e6e-9e46-3be57fd0844b", "82aedd57-187f-43dd-9eb1-4f312ca30042", "52be1777-53f7-42f6-a7cf-6d09bdc15a30", "7ca7caa6-00ef-4cda-b1aa-5571dfed1a7c" ] } ], "BlockType": "PAGE", "Id": "8136b2dc-37c1-4300-a9da-6ed8b276ea97" }..... ], "DocumentMetadata": { "Pages": 1 }
}

However, the content inside the blocks changes based on the operation we call. For the text detection operation, the blocks contain the pages, lines, and words of detected text. For the document analysis operations, the blocks contain the detected pages, key-value pairs, tables, selection elements, and text. Still, this only explains the higher-level working of Textract; in the next section, let’s dive into the OCR behind it.
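
To make the difference concrete, here is a minimal sketch of the two operations using boto3, the AWS SDK for Python; the file name is hypothetical, and valid AWS credentials are assumed:

import boto3

client = boto3.client('textract')

# Hypothetical sample document
with open('invoice.png', 'rb') as f:
    document = {'Bytes': f.read()}

# Text detection: returns only PAGE, LINE, and WORD blocks
detection = client.detect_document_text(Document=document)
lines = [b['Text'] for b in detection['Blocks'] if b['BlockType'] == 'LINE']

# Document analysis: additionally returns KEY_VALUE_SET, TABLE, CELL,
# and SELECTION_ELEMENT blocks for the requested feature types
analysis = client.analyze_document(Document=document,
                                   FeatureTypes=['FORMS', 'TABLES'])
print({b['BlockType'] for b in analysis['Blocks']})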

Amazon doesn’t publish specifics about the OCR engine Textract uses, as it’s a commercial product. However, we can compare it to one of the most popular open-source OCR engines, Tesseract, to understand its accuracy and its ability to handle various types of documents.

AWS Textract OCR
AWS Textract OCR (Source: AWS Website)

Tesseract OCR is based on LSTMs, a deep learning-based neural network architecture that performs exceptionally well on text data. Tesseract supports the following output formats: plain text, hOCR (HTML), PDF, invisible-text-only PDF, and TSV. It has Unicode (UTF-8) support and handles more than 100 languages out of the box. As all the code is open source, it can also be trained to recognize other languages, though this requires deep learning and computer vision expertise. When it comes to table and key-value pair extraction, Tesseract fails, although we can build custom pipelines around it to solve this problem.

Textract’s OCR is also a deep learning-based neural network architecture, but it cannot be fully customized or trained on a custom dataset. Its job is to parse and extract all the data inside a document. However, Textract tunes itself to your data and achieves higher accuracy over time if a human verifies the extracted information (human in the loop). For tasks like table and key-value pair extraction, Textract does a fair job, achieving higher accuracy than Tesseract. But it’s limited to only a few languages and document formats.

Below are some of the document types that can be processed using AWS Textract:

  • Regular Invoices / Bills
  • Financial Documents
  • Medical Documents
  • Handwritten Documents
  • Payslips or Employee Documents

In the next section, let’s look at the Textract Python API.


Looking for an intelligent OCR solution to extract information from your documents? Head over to Nanonets to extract text from documents in any format in any language.


The Amazon Textract API can be used from various programming languages. In this section, we’ll look at a code example for key-value pair extraction using Textract with Python. For more information on language and API support, check out the docs here.

This code snippet shows how we can perform key-value pair extraction on documents using Textract’s Python API. To get it working, we’ll also have to configure API keys on the AWS dashboard. Now let’s dive into the code snippet.

Firstly, we import all the necessary packages for pushing documents to AWS and processing the extracted text.

import boto3
import sys
import re
import json

Next, we have a function named get_kv_map, in which we use boto3 to communicate with the Amazon Textract API, upload the document, and fetch the block response. We then collect all the key-value pairs by checking each block’s ‘BlockType’ and return them as dictionaries.

def get_kv_map(file_name):
    with open(file_name, 'rb') as file:
        img_test = file.read()
    bytes_test = bytearray(img_test)
    print('Image loaded', file_name)

    # Process using image bytes
    client = boto3.client('textract')
    response = client.analyze_document(Document={'Bytes': bytes_test},
                                       FeatureTypes=['FORMS'])

    # Get the text blocks
    blocks = response['Blocks']

    # Build key and value maps
    key_map = {}
    value_map = {}
    block_map = {}
    for block in blocks:
        block_id = block['Id']
        block_map[block_id] = block
        if block['BlockType'] == "KEY_VALUE_SET":
            if 'KEY' in block['EntityTypes']:
                key_map[block_id] = block
            else:
                value_map[block_id] = block

    return key_map, value_map, block_map

After that, we have a function that recovers the relationship between the extracted keys and values using the block items. Using the relationship information present in the blocks (JSON), this function associates each key in the document with its value.

def get_kv_relationship(key_map, value_map, block_map):
    kvs = {}
    for block_id, key_block in key_map.items():
        value_block = find_value_block(key_block, value_map)
        key = get_text(key_block, block_map)
        val = get_text(value_block, block_map)
        kvs[key] = val
    return kvs


def find_value_block(key_block, value_map):
    for relationship in key_block['Relationships']:
        if relationship['Type'] == 'VALUE':
            for value_id in relationship['Ids']:
                value_block = value_map[value_id]
    return value_block

Lastly, we extract the text behind the saved key-value pairs, print them, and add a small interactive search loop.

def get_text(result, blocks_map):
    text = ''
    if 'Relationships' in result:
        for relationship in result['Relationships']:
            if relationship['Type'] == 'CHILD':
                for child_id in relationship['Ids']:
                    word = blocks_map[child_id]
                    if word['BlockType'] == 'WORD':
                        text += word['Text'] + ' '
                    if word['BlockType'] == 'SELECTION_ELEMENT':
                        if word['SelectionStatus'] == 'SELECTED':
                            text += 'X'
    return text


def print_kvs(kvs):
    for key, value in kvs.items():
        print(key, ":", value)


def search_value(kvs, search_key):
    for key, value in kvs.items():
        if re.search(search_key, key, re.IGNORECASE):
            return value


def main(file_name):
    key_map, value_map, block_map = get_kv_map(file_name)

    # Get key-value relationships
    kvs = get_kv_relationship(key_map, value_map, block_map)
    print("\n\n== FOUND KEY : VALUE pairs ===\n")
    print_kvs(kvs)

    # Start searching for a key's value
    while input('\n Do you want to search a value for a key? (enter "n" for exit) ') != 'n':
        search_key = input('\n Enter a search key:')
        print('The value is:', search_value(kvs, search_key))


if __name__ == "__main__":
    file_name = sys.argv[1]
    main(file_name)

In this way, we can use the AWS Textract API to perform different information extraction tasks. The approach is similar across most of the supported programming languages, and we can customize it to suit our use case.
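
For instance, assuming the snippets above are saved together as a script named textract_kv.py (a hypothetical file name), it can be run from the command line with the document to process as its only argument:

python textract_kv.py invoice.png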


Want to automate data entry from documents? Nanonets’ AI-based OCR solution can help extract key information from structured and unstructured documents and put the process on auto-pilot!


Pros and Cons of using AWS Textract

Pros:

Easy Setup with AWS Services: Setting up Textract alongside other AWS services is easy compared to other providers. For example, storing extracted document information in Amazon DynamoDB or S3 can be done by configuring an add-on (see the sketch after this list).

Secure: Amazon Textract conforms to the AWS shared responsibility model, which includes regulations and guidelines for data protection. AWS is responsible for protecting the global infrastructure that runs all AWS services, so we need not worry about our data being leaked or misused.
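
To illustrate the first point, here is a minimal sketch that persists the extracted key-value pairs to S3 with boto3; the bucket name and object key are hypothetical, and an existing bucket with write permissions is assumed:

import json
import boto3

def save_kvs_to_s3(kvs, bucket='my-textract-output', key='invoice-kvs.json'):
    # Store the extracted key-value pairs as a JSON object in S3
    s3 = boto3.client('s3')
    s3.put_object(Bucket=bucket, Key=key,
                  Body=json.dumps(kvs).encode('utf-8'))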

Cons:

Inability to Extract Custom Fields: An invoice can contain multiple data fields, say Invoice ID, Due Date, Transaction Date, etc. These fields are common to most invoices. But if we want to extract a custom field, say a GST number or bank information, Textract does a poor job.

Integrations with upstream and downstream providers: Textract doesn’t let you integrate easily with other providers. Say, for example, we have to build an RPA pipeline with a third-party service; it would be difficult to find appropriate plugins that work with Textract.

Ability to define table headers: For table extraction tasks, Textract doesn’t allow you to define table headers. Therefore, it is not easy to search for or find a particular column or table in a given document.

No Fraud Checks: Modern OCR systems can now detect whether a given document is original or fake by validating dates and finding pixelated regions. AWS Textract doesn’t come with this; its only job is to pick out all the text from an uploaded document.

No Vertical Text Extraction: In some documents, invoice numbers or addresses appear in vertical alignment. At present, AWS only supports horizontal text extraction, with tolerance for a slight in-plane rotation.

Language Limit: Amazon Textract supports text detection only in English, Spanish, German, French, Italian, and Portuguese, and it will not return the detected language in its output.

Everything’s Cloud: Any document processed with Textract goes to the cloud, and only a few regions are supported (more information here). Some companies may not want their documents in the cloud for reasons like confidentiality or legal requirements, but unfortunately, AWS Textract does not support any on-premise deployment for document processing.

Retraining: If accuracy is low on an information extraction task for a set of documents, Textract doesn’t allow us to retrain the model. To work around this, we have to invest in a human review workflow, where an operator manually verifies and annotates wrongly extracted values, which is again time-consuming.

Conclusion

We hope this review of AWS Textract has been useful as you consider different solutions for data extraction and text recognition from your documents. We’ll keep updating this post periodically to cover the latest changes.

Please add your thoughts and questions about using Amazon’s Textract solution in the comments section.

Source: Hero image from AWS website.

Start using Nanonets for Automation

Try out the model or request a demo today!


Source: https://nanonets.com/blog/aws-textract-teardown-pros-cons-review/


Understanding dimensionality reduction in machine learning models



Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature you add to your problem adds to its complexity, making it harder to solve with machine learning algorithms. To counter this, data scientists use dimensionality reduction, a set of techniques that remove excessive and irrelevant features from machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

The curse of dimensionality

Machine learning models map features to outcomes. For instance, say you want to create a model that predicts the amount of rainfall in one month. You have a dataset of information collected from different cities in separate months. The data points include temperature, humidity, city population, traffic, number of concerts held in the city, wind speed, wind direction, air pressure, number of bus tickets purchased, and the amount of rainfall. Obviously, not all of this information is relevant to rainfall prediction.

Some of the features might have nothing to do with the target variable. Evidently, population and number of bus tickets purchased do not affect rainfall. Other features might be correlated to the target variable, but not have a causal relation to it. For instance, the number of outdoor concerts might be correlated to the volume of rainfall, but it is not a good predictor for rain. In other cases, such as carbon emission, there might be a link between the feature and the target variable, but the effect will be negligible.

In this example, it is evident which features are valuable and which are useless. In other problems, the excessive features might not be obvious and would need further data analysis.

But why bother to remove the extra dimensions? When you have too many features, you’ll also need a more complex model. A more complex model means you’ll need a lot more training data and more compute power to train your model to an acceptable level.

And since machine learning has no understanding of causality, models try to map any feature included in their dataset to the target variable, even if there’s no causal relation. This can lead to models that are imprecise and erroneous.

On the other hand, reducing the number of features can make your machine learning model simpler, more efficient, and less data-hungry.

The problems caused by too many features are often referred to as the “curse of dimensionality,” and they’re not limited to tabular data. Consider a machine learning model that classifies images. If your dataset is composed of 100×100-pixel images, then your problem space has 10,000 features, one per pixel. However, even in image classification problems, some of the features are excessive and can be removed.

Dimensionality reduction identifies and removes the features that are hurting the machine learning model’s performance or aren’t contributing to its accuracy. There are several dimensionality reduction techniques, each of which is useful for certain situations.

Feature selection


A basic and very efficient dimensionality reduction method is to identify and select a subset of the features that are most relevant to the target variable. This technique is called “feature selection.” Feature selection is especially effective when you’re dealing with tabular data in which each column represents a specific kind of information.

When doing feature selection, data scientists do two things: keep features that are highly correlated with the target variable and contribute the most to the dataset’s variance. Libraries such as Python’s Scikit-learn have plenty of good functions to analyze, visualize, and select the right features for machine learning models.

For instance, a data scientist can use scatter plots and heatmaps to visualize the covariance of different features. If two features are highly correlated to each other, then they will have a similar effect on the target variable, and including both in the machine learning model will be unnecessary. Therefore, you can remove one of them without causing a negative impact on the model’s performance.

Heatmap

Above: Heatmaps illustrate the covariance between different features. They are a good guide to finding and culling features that are excessive.

The same tools can help visualize the correlations between the features and the target variable. This helps remove variables that do not affect the target. For instance, you might find out that out of 25 features in your dataset, seven of them account for 95 percent of the effect on the target variable. This will enable you to shave off 18 features and make your machine learning model a lot simpler without suffering a significant penalty to your model’s accuracy.
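
As a rough sketch of what this could look like in practice with pandas and seaborn (the dataset file and the ‘rainfall’ target column follow the earlier example and are hypothetical; all columns are assumed to be numeric):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical weather dataset with a 'rainfall' target column
df = pd.read_csv('weather.csv')

# Visualize the pairwise correlations between features as a heatmap
corr = df.corr()
sns.heatmap(corr, annot=True, cmap='coolwarm')
plt.show()

# Rank features by the strength of their correlation with the target
target_corr = corr['rainfall'].drop('rainfall').abs().sort_values(ascending=False)

# Keep the handful of features that matter most and drop the rest
selected = target_corr.head(7).index.tolist()
X = df[selected]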

Projection techniques

Sometimes, you don’t have the option to remove individual features. But this doesn’t mean that you can’t simplify your machine learning model. Projection techniques, also known as “feature extraction,” simplify a model by compressing several features into a lower-dimensional space.

A common example used to represent projection techniques is the “swiss roll” (pictured below), a set of data points that swirl around a focal point in three dimensions. This dataset has three features. The value of each point (the target variable) is measured based on how close it is along the convoluted path to the center of the swiss roll. In the picture below, red points are closer to the center and the yellow points are farther along the roll.

Swiss roll

In its current state, creating a machine learning model that maps the features of the swiss roll points to their value is a difficult task and would require a complex model with many parameters. But with the help of dimensionality reduction techniques, the points can be projected to a lower-dimension space that can be learned with a simple machine learning model.

There are various projection techniques. In the case of the above example, we used “locally linear embedding” (LLE), an algorithm that reduces the dimension of the problem space while preserving the key elements that separate the values of data points. When our data is processed with LLE, the result looks like the following image, which is like an unrolled version of the swiss roll. As you can see, points of each color remain together. In fact, this problem can still be simplified into a single feature and modeled with linear regression, the simplest machine learning algorithm.

Swiss roll, projected
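
Scikit-learn ships with both a swiss roll generator and an LLE implementation, so the unrolling described above can be reproduced in a few lines (a minimal sketch; the parameter values are illustrative):

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# X has three features; t measures each point's position along the roll
X, t = make_swiss_roll(n_samples=1000, noise=0.1, random_state=42)

# Project to two dimensions while preserving local neighborhoods
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10, random_state=42)
X_unrolled = lle.fit_transform(X)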

While this example is hypothetical, you’ll often face problems that can be simplified if you project the features to a lower-dimensional space. For instance, “principal component analysis” (PCA), a popular dimensionality reduction algorithm, has found many useful applications to simplify machine learning problems.

In the excellent book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, data scientist Aurélien Géron shows how you can use PCA to reduce the MNIST dataset from 784 features (28×28 pixels) to 150 features while preserving 95 percent of the variance. This level of dimensionality reduction has a huge impact on the costs of training and running artificial neural networks.
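
Here is a minimal sketch of that reduction with scikit-learn; passing a fraction as n_components tells PCA to keep just enough components to preserve that share of the variance:

from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

# MNIST: 70,000 images, each flattened to 28x28 = 784 pixel features
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)

# Keep enough principal components to preserve 95 percent of the variance
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(pca.n_components_)  # around 150 components

# New data points must be projected the same way before inference;
# inverse_transform only approximately reconstructs the original pixels
X_restored = pca.inverse_transform(X_reduced)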


There are a few caveats to consider about projection techniques. Once you develop a projection technique, you must transform new data points to the lower-dimensional space before running them through your machine learning model. However, the cost of this preprocessing step is not comparable to the gains of having a lighter model. A second consideration is that transformed data points are not directly representative of their original features, and transforming them back to the original space can be tricky and in some cases impossible. This might make it difficult to interpret the inferences made by your model.

Dimensionality reduction in the machine learning toolbox

Having too many features will make your model inefficient. But removing too many features will not help either. Dimensionality reduction is one among many tools data scientists can use to make better machine learning models. And as with every tool, it must be used with caution and care.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Source: https://venturebeat.com/2021/05/16/understanding-dimensionality-reduction-in-machine-learning-models/

Bitcoin Mining Company Vows to be Carbon Neutral Following Tesla’s Recent Statement

Last week, Elon Musk and Tesla shocked the entire crypto industry following an announcement that the electric car company will no longer accept bitcoin payments for “environmental reasons.”

A Hard Pill For Bitcoin Maximalists

Giving its reasons, Tesla argued that Bitcoin mining requires massive energy consumption, generated largely from fossil fuels, especially coal, and as such causes environmental pollution.

The announcement caused a market dip in which over $4 billion of both short and long positions were liquidated as the entire market capitalization lost almost $400 billion in a day.

For Bitcoin maximalists and proponents, Tesla’s decision was a hard pill to swallow, and that was evident in their responses to the electric car company and its CEO.

While the likes of Max Keiser lambasted Musk for his company’s move, noting that it was due to political pressure, others like popular YouTuber Chris Dunn were seen canceling their Tesla Cybertruck orders.



Adding more fuel to the fire, Musk also responded to a long Twitter thread by Peter McCormack, implying that Bitcoin is not actually decentralized.

Musk Working With Dogecoin Devs

Elon Musk, who named himself the “Dogefather” on SNL, created a Twitter poll, asking his nearly 55 million followers if they want Tesla to integrate DOGE as a payment option.

The poll, which had almost 4 million votes, was favorable for Dogecoin, as more than 75% of the community voted “Yes.”

Following Tesla’s announcement, the billionaire tweeted that he is working closely with Dogecoin developers to improve transaction efficiency, saying that it is “potentially promising.”

Tesla dropping bitcoin as a payment instrument over energy concerns, with the possibility of integrating dogecoin payments, comes as a surprise to bitcoiners since the two cryptocurrencies use a Proof-of-Work (PoW) consensus algorithm and, as such, face the same underlying energy problem.

Elon Musk: Dogecoin Wins Bitcoin

Despite using a PoW algorithm, Elon Musk continues to favor Dogecoin over Bitcoin. Responding to a tweet that covered some of the reasons why Musk easily chose DOGE over BTC, the billionaire CEO agreed that Dogecoin beats Bitcoin in many ways.

Comparing DOGE to BTC, Musk noted that “DOGE speeds up block time 10X, increases block size 10X & drops fee 100X. Then it wins hands down.”

Max Keiser: Who’s The Bigger Idiot?

As Elon Musk continues his lovey-dovey affair with Dogecoin, Bitcoin proponents continue to criticize the Dogefather.

Following Musk’s comments on Dogecoin today, popular Bitcoin advocate Max Keiser took to his Twitter page to ridicule the Tesla boss while recalling when gold bug Peter Schiff described Bitcoin as “intrinsically worthless” after he lost access to his BTC wallet.

“Who’s the bigger idiot?” Keiser asked.

Aside from Keiser, other Bitcoin proponents, such as Michael Saylor, also replied to Tesla’s CEO.


Source: https://coingenius.news/bitcoin-mining-company-vows-to-be-carbon-neutral-following-teslas-recent-statement-6/?utm_source=rss&utm_medium=rss&utm_campaign=bitcoin-mining-company-vows-to-be-carbon-neutral-following-teslas-recent-statement-6


PlotX v2 Mainnet Launch: DeFi Prediction Markets

In early Sunday trading, BTC prices had fallen to their lowest levels for over 11 weeks, hitting $46,700 before a minor recovery.

The last time Bitcoin dropped to these levels was at the end of February during the second major correction of this ongoing rally. A rebound off that bottom sent prices above $60K for the first time in the two weeks that followed.

Later today, Bitcoin will close another weekly candle. If the candle closes at these levels, this will become the worst weekly close since February 22nd, when BTC ended the week at $45,240, according to Bitstamp. Two weeks ago, the weekly candle closed at $49,200, which is currently the lowest weekly close since February.

Second ‘Lower Low’ For Bitcoin

This time around, things feel slightly different and the bearish sentiment is returning to crypto-asset markets. Since its all-time high of $65K on April 14, Bitcoin has made a lower high and has now formed a second lower low on the daily chart, which is indicative of a larger downtrend developing.

Analyst ‘CryptoFibonacci’ has been eyeing the weekly chart which also suggests the bulls could be running out of steam.



The move appears to have been driven by Elon Musk again, with a tweet about Bitcoin’s energy consumption on May 13. Bitcoin’s fear and greed index has dropped to 20 (“extreme fear”), its lowest level since the March 2020 market crash. At the time of press, BTC was trading at just under $48,000, down 4% over the past 24 hours.

Market Cap Shrinks by $150B

As usual, the move has initiated a selloff for the majority of other cryptocurrencies resulting in around $150 billion exiting the markets over the past day or so.

The total market cap has declined to $2.3 trillion after an all-time high of $2.5 trillion on May 12. Values are still high on the long-term view, but losses could accelerate rapidly if the bearish sentiment increases.

Not all crypto assets are correcting this weekend, and some have been building on recent gains to push even higher – although they are few in number.

Those weekend warriors include Cardano, which has added 4.8% on the day to trade at $2.27, according to Coingecko. ADA hit an all-time high of $2.36 on Saturday, May 15, a gain of 54% over the past 30 days.

Ripple’s XRP is also seeing a resurgence, with a 13% pump on the day to flip Cardano for the fourth spot. XRP is currently trading at $1.58 with a market cap of $73 billion. The only other two cryptocurrencies in the green at the time of writing are Stellar and Solana, gaining 3.7% and 12%, respectively.


Source: https://coingenius.news/plotx-v2-mainnet-launch-defi-prediction-markets-58/?utm_source=rss&utm_medium=rss&utm_campaign=plotx-v2-mainnet-launch-defi-prediction-markets-58
