Building a visual search application with Amazon SageMaker and Amazon ES

Sometimes it’s hard to find the right words to describe what you’re looking for. As the adage goes, “A picture is worth a thousand words.” Often, it’s easier to show a physical example or image than to try to describe an item with words, especially when using a search engine to find what you’re looking for.

In this post, you build a visual image search application from scratch in under an hour, including a full-stack web application for serving the visual search results.

Visual search can improve customer engagement in retail businesses and e-commerce, particularly for fashion and home decoration retailers. Visual search allows retailers to suggest thematically or stylistically related items to shoppers, which retailers would struggle to achieve by using a text query alone. According to Gartner, “By 2021, early adopter brands that redesign their websites to support visual and voice search will increase digital commerce revenue by 30%.”

High-level example of visual searching

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon Elasticsearch Service (Amazon ES) is a fully managed service that makes it easy for you to deploy, secure, and run Elasticsearch cost-effectively at scale. Amazon ES offers k-Nearest Neighbor (KNN) search, which can enhance search in use cases such as product recommendations, fraud detection, and image, video, and semantic document retrieval. Built using the lightweight and efficient Non-Metric Space Library (NMSLIB), KNN enables high-scale, low-latency, nearest neighbor search on billions of documents across thousands of dimensions with the same ease as running any regular Elasticsearch query.

Overview of solution

The following diagram illustrates the visual search architecture.

Implementing the visual search architecture consists of two phases:

  1. Building a reference KNN index on Amazon ES from a sample image dataset.
  2. Submitting a new image to the Amazon SageMaker endpoint and Amazon ES to return similar images.

KNN reference index creation

In this step, you extract a 2,048-dimension feature vector for each image from a pre-trained ResNet50 model hosted in Amazon SageMaker. Each vector is stored in a KNN index in an Amazon ES domain. For this use case, you use images from FEIDEGGER, a Zalando research dataset consisting of 8,732 high-resolution fashion images. The following diagram illustrates the workflow for creating the KNN index.

The process includes the following steps:

  1. Users interact with a Jupyter notebook on an Amazon SageMaker notebook instance.
  2. A pre-trained ResNet50 deep neural network from Keras is downloaded, the last classifier layer is removed, and the new model artifact is serialized and stored in Amazon Simple Storage Service (Amazon S3). The model is used to start a TensorFlow Serving API on an Amazon SageMaker real-time endpoint.
  3. The fashion images are pushed through the endpoint, which runs the images through the neural network to extract the image features, or embeddings.
  4. The notebook code writes the image embeddings to the KNN index in an Amazon ES domain.

Visual search from a query image

In this step, you present a query image from the application, which passes through the Amazon SageMaker-hosted model to extract a 2,048-dimension feature vector. You use this vector to query the KNN index in Amazon ES. KNN for Amazon ES lets you search for points in a vector space and find the “nearest neighbors” of those points by Euclidean distance or cosine similarity (the default is Euclidean distance). When it finds the nearest neighbor vectors (for example, k = 3 nearest neighbors) for a given image, it returns the associated Amazon S3 images to the application. The following diagram illustrates the visual search full-stack application architecture.
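A KNN query is expressed as a regular Elasticsearch search body. The following is a minimal sketch, assuming the 2,048-dimension field name zalando_img_vector that this walkthrough defines later:

# Sketch of an Amazon ES KNN search body; 'features' stands in for the
# query image's 2,048-dimension embedding
features = [0.0] * 2048   # placeholder vector

knn_query = {
    "size": 3,                       # return the k = 3 nearest neighbors
    "query": {
        "knn": {
            "zalando_img_vector": {
                "vector": features,
                "k": 3
            }
        }
    }
}
# The body is posted to the index's _search API, e.g. GET idx_zalando/_search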

The process includes the following steps:

  1. The end-user accesses the web application from their browser or mobile device.
  2. A user-uploaded image is sent to Amazon API Gateway and AWS Lambda as a base64-encoded string and is re-encoded as bytes in the Lambda function.
    1. Alternatively, a publicly readable image URL is passed as a string and downloaded as bytes in the function.
  3. The bytes are sent as the payload for inference to an Amazon SageMaker real-time endpoint, and the model returns a vector of the image embeddings.
  4. The function passes the image embedding vector in a k-nearest neighbor search query to the KNN index in the Amazon ES domain. A list of k similar images and their respective Amazon S3 URIs is returned.
  5. The function generates pre-signed Amazon S3 URLs to return to the client web application, which are used to display similar images in the browser.
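A condensed sketch of how a Lambda handler might tie steps 2 through 5 together; the endpoint name, Elasticsearch host, and request field base64img are illustrative placeholders, not the repo's actual code:

import base64
import json

import boto3
from elasticsearch import Elasticsearch

# Placeholder configuration; real values come from the deployed stack
ENDPOINT_NAME = 'visual-search-endpoint'
es = Elasticsearch(hosts=[{'host': 'my-es-domain-endpoint', 'port': 443}],
                   use_ssl=True)

sm_client = boto3.client('sagemaker-runtime')
s3_client = boto3.client('s3')

def handler(event, context):
    # Step 2: decode the user-uploaded image from its base64 string
    image_bytes = base64.b64decode(json.loads(event['body'])['base64img'])

    # Step 3: send the bytes to the SageMaker endpoint and read the embedding
    response = sm_client.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                         ContentType='application/x-image',
                                         Body=image_bytes)
    features = json.loads(response['Body'].read())['predictions'][0]

    # Step 4: query the KNN index for the k = 3 most similar images
    hits = es.search(index='idx_zalando',
                     body={"size": 3,
                           "query": {"knn": {"zalando_img_vector":
                                             {"vector": features, "k": 3}}}})

    # Step 5: convert each hit's S3 URI into a short-lived pre-signed URL
    urls = []
    for hit in hits['hits']['hits']:
        s3_uri = hit['_source']['image']                 # e.g. s3://bucket/key
        bucket, key = s3_uri.replace('s3://', '').split('/', 1)
        urls.append(s3_client.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket, 'Key': key},
            ExpiresIn=300))

    return {'statusCode': 200, 'body': json.dumps({'images': urls})}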

AWS services

To build the end-to-end application, you use the following AWS services:

  • AWS Amplify – A JavaScript library for front-end and mobile developers building cloud-enabled applications. For more information, see the GitHub repo.
  • Amazon API Gateway – A fully managed service to create, publish, maintain, monitor, and secure APIs at any scale.
  • AWS CloudFormation – Gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion.
  • Amazon ES – A managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters.
  • AWS Identity and Access Management (IAM) – Enables you to manage access to AWS services and resources securely.
  • AWS Lambda – An event-driven, serverless computing platform that runs code in response to events and automatically manages the computing resources the code requires.
  • Amazon SageMaker – A fully managed end-to-end ML platform to build, train, tune, and deploy ML models at scale.
  • AWS Serverless Application Model (AWS SAM) – An open-source framework for building serverless applications.
  • Amazon S3 – An object storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low cost.

Prerequisites

For this walkthrough, you should have an AWS account with appropriate IAM permissions to launch the CloudFormation template.

Deploying your solution

You use a CloudFormation stack to deploy the solution. The stack creates all the necessary resources, including the following:

  • An Amazon SageMaker notebook instance to run Python code in a Jupyter notebook
  • An IAM role associated with the notebook instance
  • An Amazon ES domain to store and retrieve image embedding vectors into a KNN index
  • Two S3 buckets: one for storing the source fashion images and another for hosting a static website

From the Jupyter notebook, you also deploy the following:

  • An Amazon SageMaker endpoint for getting image feature vectors and embeddings in real time.
  • An AWS SAM template for a serverless back end using API Gateway and Lambda.
  • A static front-end website hosted on an S3 bucket to demonstrate a real-world, end-to-end ML application. The front-end code uses ReactJS and the Amplify JavaScript library.

To get started, complete the following steps:

  1. Sign in to the AWS Management Console with your IAM user name and password.
  2. Choose Launch Stack and open it in a new tab.
  3. On the Quick create stack page, select the check box to acknowledge the creation of IAM resources.
  4. Choose Create stack.
  5. Wait for the stack creation to complete.

You can examine various events from the stack creation process on the Events tab. When the stack creation is complete, you see the status CREATE_COMPLETE.

You can look on the Resources tab to see all the resources the CloudFormation template created.

  6. On the Outputs tab, choose the SageMakerNotebookURL value.

This hyperlink opens the Jupyter notebook on your Amazon SageMaker notebook instance that you use to complete the rest of the lab.

You should be on the Jupyter notebook landing page.

  7. Choose visual-image-search.ipynb.

Building a KNN index on Amazon ES

For this step, you should be at the beginning of the notebook with the title Visual image search. Follow the steps in the notebook and run each cell in order.

You use a pre-trained Resnet50 model hosted on an Amazon SageMaker endpoint to generate the image feature vectors (embeddings). The embeddings are saved to the Amazon ES domain created in the CloudFormation stack. For more information, see the markdown cells in the notebook.

Continue when you reach the cell Deploying a full-stack visual search application in your notebook.

The notebook contains several important cells.

To load a pre-trained ResNet50 model without the final CNN classifier layer, see the following code (the model is used only as an image feature extractor):

#Import the ResNet50 model without its classifier layer
model = tf.keras.applications.ResNet50(weights='imagenet',
                                       include_top=False,
                                       input_shape=(224, 224, 3),  # channels-last: height, width, channels
                                       pooling='avg')

You save the model as a TensorFlow SavedModel format, which contains a complete TensorFlow program, including weights and computation. See the following code:

#Save the model in SavedModel format
model.save('./export/Servo/1/', save_format='tf')
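
Between saving and uploading, the SavedModel directory is packaged as model.tar.gz. A minimal sketch of that packaging step, assuming the ./export/Servo/1/ layout created above:

# Package the SavedModel directory into model.tar.gz for Amazon SageMaker;
# the TensorFlow Serving container expects the export/Servo/<version>/ layout
import tarfile

with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('export', recursive=True)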

Upload the model artifact (model.tar.gz) to Amazon S3 with the following code:

#Upload the model to S3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')
inputs

You deploy the model into an Amazon SageMaker TensorFlow Serving-based server using the Amazon SageMaker Python SDK. The server provides a superset of the TensorFlow Serving REST API. See the following code:

#Deploy the model to a SageMaker endpoint. This process will take ~10 min.
from sagemaker.tensorflow.serving import Model

sagemaker_model = Model(entry_point='inference.py',
                        model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                        role=role,
                        framework_version='2.1.0',
                        source_dir='./src')
predictor = sagemaker_model.deploy(initial_instance_count=3,
                                   instance_type='ml.m5.xlarge')

Extract the features of the reference images from the Amazon SageMaker endpoint with the following code:

# define functions to extract image features
from time import sleep

sm_client = boto3.client('sagemaker-runtime')
ENDPOINT_NAME = predictor.endpoint

def get_predictions(payload):
    return sm_client.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                     ContentType='application/x-image',
                                     Body=payload)

def extract_features(s3_uri):
    key = s3_uri.replace(f's3://{bucket}/', '')
    payload = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    try:
        response = get_predictions(payload)
    except:
        # Retry once after a short pause if the first call fails
        sleep(0.1)
        response = get_predictions(payload)
    del payload
    response_body = json.loads(response['Body'].read())
    feature_lst = response_body['predictions'][0]
    return s3_uri, feature_lst

You define the Amazon ES KNN index mapping with the following code:

#Define KNN Elasticsearch index mapping
knn_index = {
    "settings": {
        "index.knn": True
    },
    "mappings": {
        "properties": {
            "zalando_img_vector": {
                "type": "knn_vector",
                "dimension": 2048
            }
        }
    }
}
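
The mapping takes effect when the index is created. Assuming the same Elasticsearch client es used by the import code below, index creation might look like this sketch:

# Create the KNN index with the mapping defined above; 'idx_zalando' matches
# the index name used by the import function below
es.indices.create(index='idx_zalando', body=knn_index, ignore=400)  # ignore 400 if it already exists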

Import the image feature vectors and associated Amazon S3 image URIs into the Amazon ES KNN index with the following code:

# define a function to import the feature vector corresponding to each S3 URI
# into the Elasticsearch KNN index. This process will take around ~3 min.
def es_import(i):
    es.index(index='idx_zalando',
             body={"zalando_img_vector": i[1], "image": i[0]})

# 'result' holds the (s3_uri, feature_lst) tuples returned by extract_features
process_map(es_import, result, max_workers=workers)

Building a full-stack visual search application

Now that you have a working Amazon SageMaker endpoint for extracting image features and a KNN index on Amazon ES, you’re ready to build a real-world full-stack ML-powered web app. You use an AWS SAM template to deploy a serverless REST API with API Gateway and Lambda. The REST API accepts new images, generates the embeddings, and returns similar images to the client. Then you upload a front-end website that interacts with your new REST API to Amazon S3. The front-end code uses Amplify to integrate with your REST API.

  1. In the following cell, prepopulate a CloudFormation template that creates the necessary resources, such as the Lambda function and API Gateway, for the full-stack application:
    s3_resource.Object(bucket, 'backend/template.yaml').upload_file('./backend/template.yaml',
                                                                    ExtraArgs={'ACL': 'public-read'})
    sam_template_url = f'https://{bucket}.s3.amazonaws.com/backend/template.yaml'

    # Generate the CloudFormation Quick Create Link
    print("Click the URL below to create the backend API for visual search:\n")
    print((
        'https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/review'
        f'?templateURL={sam_template_url}'
        '&stackName=vis-search-api'
        f'&param_BucketName={outputs["s3BucketTraining"]}'
        f'&param_DomainName={outputs["esDomainName"]}'
        f'&param_ElasticSearchURL={outputs["esHostName"]}'
        f'&param_SagemakerEndpoint={predictor.endpoint}'
    ))
    

    The following screenshot shows the output: a pre-generated CloudFormation template link.

  2. Choose the link.

You are sent to the Quick create stack page.

  3. Select the check boxes to acknowledge the creation of IAM resources, IAM resources with custom names, and CAPABILITY_AUTO_EXPAND.
  4. Choose Create stack.

After the stack creation is complete, you see the status CREATE_COMPLETE. You can look on the Resources tab to see all the resources the CloudFormation template created.

  5. After the stack is created, proceed through the cells.

The following cell indicates that your full-stack application, including front-end and back-end code, is successfully deployed:

print('Click the URL below:\n')
print(outputs['S3BucketSecureURL'] + '/index.html')

The following screenshot shows the URL output.

  6. Choose the link.

You are sent to the application page, where you can upload an image of a dress or provide the URL link of a dress and get similar dresses.

  7. When you’re done testing and experimenting with your visual search application, run the last two cells at the bottom of the notebook:
    # Delete the endpoint
    predictor.delete_endpoint()

    # Empty S3 Contents
    training_bucket_resource = s3_resource.Bucket(bucket)
    training_bucket_resource.objects.all().delete()
    hosting_bucket_resource = s3_resource.Bucket(outputs['s3BucketHostingBucketName'])
    hosting_bucket_resource.objects.all().delete()
    

    These cells terminate your Amazon SageMaker endpoint and empty your S3 buckets to prepare you for cleaning up your resources.

Cleaning up

To delete the rest of your AWS resources, go to the AWS CloudFormation console and delete the vis-search-api and vis-search stacks.

Conclusion

In this post, we showed you how to create an ML-based visual search application using Amazon SageMaker and the Amazon ES KNN index. You used a pre-trained ResNet50 model trained on the ImageNet dataset. However, you can also use other pre-trained models, such as VGG, Inception, and MobileNet, and fine-tune them with your own dataset.

A GPU instance is recommended for most deep learning purposes. Training new models is faster on a GPU instance than on a CPU instance, and you can scale sub-linearly with multi-GPU instances or with distributed training across many GPU instances. However, we used CPU instances for this use case so that you can complete the walkthrough under the AWS Free Tier.

For more information about the code sample in the post, see the GitHub repo. For more information about Amazon ES, see the Amazon Elasticsearch Service documentation.


About the Authors

Amit Mukherjee is a Sr. Partner Solutions Architect with AWS. He provides architectural guidance to help partners achieve success in the cloud. He has a special interest in AI and machine learning. In his spare time, he enjoys spending quality time with his family.

Laith Al-Saadoon is a Sr. Solutions Architect with a focus on data analytics at AWS. He spends his days obsessing over designing customer architectures to process enormous amounts of data at scale. In his free time, he follows the latest in machine learning and artificial intelligence.

Source: https://aws.amazon.com/blogs/machine-learning/building-a-visual-search-application-with-amazon-sagemaker-and-amazon-es/

Using embedded analytics in software applications can drive your business forward

Analytics in your tools can help users gain insights that move your clients and the organization to the next level.

Image: Mykyta Dolmatov, Getty Images/iStockphoto

More than two years ago, Edsby, which provides a learning management system for educational institutions, began embedding analytics into its software that enabled teachers and administrators to detect student learning trends, assess test scores across student populations, and more, all in the spirit of improving education results. 

The Edsby example is not an isolated event. Increasingly, commercial and company in-house software developers are being asked to deliver more value with their applications. In other words, don’t just write applications that process transactions; tell us about the trends and insights transactions reveal by embedding analytics as part of the application.

“Software teams are responsible for building applications with embedded analytics that help their end users make better decisions,” said Steve Schneider, CEO of Logi Analytics, which provides embedded analytics tools for software developers. “This is the idea of providing high-level analytics in the context of an application that people use every day.”

Schneider said what users want is transactional apps with built-in analytics capabilities that can provide insights to a variety of users with different interests and skill sets. “These are highly sophisticated analytics that must be accessible right from the application,” he said. 

With the help of pick-and-click tools, transaction application developers are spared the time of having to learn how to embed analytics from the ground up in their apps. Instead, they can choose to embed an analytics dashboard into their application, or they can quickly orchestrate an API call to another application without a need to custom develop all of the code.

“You can just click on the Embed command, and the tool will give you a JavaScript snippet,” Schneider said. “In some cases, you have to do a little configuration for security, but it makes it much easier to get analytics-enriched apps to your user market faster.”

Getting apps to market faster

Here’s how an embedded analytics tool can speed apps to market.

A marketing person is tasked with buying ads and organizing campaigns. He or she gathers information and feeds it to IT, which periodically issues reports that show the results of ad placements and campaigns.

Now with an application that contains embedded analytics, the marketing person can directly drill down into the reporting information embedded in the app without having to contact IT. This can be done through a self-service interface in real time.

“In one case, a manufacturer was trying to improve operational performance through the use of an application and set of stated metrics,” Schneider said. “Everyone had to log in to the application to record their metrics, but the overall goal of improving performance remained elusive. The manufacturer decided to augment the original application with an embedded analytics dashboard that displayed the key metrics and each team’s performance. This provided visibility to everyone. This quickly evolved into a friendly competition between different groups of employees to see who could achieve the best scores, and the overall corporate metrics performance improved.” 

For most developers, embedding analytics in applications is still in its early stages, but it is an area poised to expand, one that at some point will be able to incorporate both structured and unstructured data in in-app visualizations.

Best practices for embedded analytics

Companies and commercial enterprises interested in using embedded analytics in transactional applications should consider these two best practices:

  1. Think about the users of your application and the problems that they’re trying to solve

This begins with asking users what information they need in order to be successful. “Application developers can also benefit if they think more like product managers,” Schneider said. In other words, what can I do with embedded analytics in my application to truly delight my customer—even if it is the user next door in accounting who I see every day?

  2. Start simple

If you haven’t used embedded analytics in applications before, choose a relatively easy-to-achieve objective for your first app and work with a cooperative user. By building a series of successful and highly usable apps from the start, you instill confidence in this new style of application. At the same time, you can define and standardize your embedded app development methodology in IT.

Source: https://www.techrepublic.com/article/using-embedded-analytics-in-software-applications-can-drive-your-business-forward/#ftag=RSS56d97e7

China and AI: What the World Can Learn and What It Should Be Wary of


China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led, at least in the West, to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.

The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex and it’s important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China: the good, the bad, and the unexpected.

The Good

China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.

Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try and combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have been also offered to the NHS in the UK.

The Bad

But there are also elements of China’s use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.

The Unexpected

Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilization is a top-down, centrally planned strategy.

But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town,” clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand, and learn from, the nuances of what’s really happening.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dominik Vanyi on Unsplash

Source: https://singularityhub.com/2020/07/03/china-and-ai-what-the-world-can-learn-and-what-it-should-be-wary-of/

Building a Discord Bot for ChatOps, Pentesting or Server Automation (Part 5)

Coding and debugging with Visual Studio Code

Open Visual Studio Code and press CTRL+Shift+P to enter the input window. Write “ssh” and select “Remote-SSH: Add New SSH Host…” to add our server. It will ask you for the IP address and the user of our DigitalOcean server.

The app will show us a success message, allowing us to connect directly.

Once again, press CTRL+Shift+P, enter “Remote-SSH: Connect to Host…”, and select the connection.

Now we will use the knowledge from the previous steps. Create the “.env” file with your secret constants, the “requirements.txt” file with the dependencies, and the “bot.py” file with your existing bot’s code.

To test it quickly, we need a “.env” file with the “DISCORD_TOKEN” constant:
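The file contents appear as an image in the original; a minimal sketch (substitute your real bot token):

DISCORD_TOKEN=your-bot-token-here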

A “requirements.txt” file like this one:
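The image is likewise missing here; a plausible minimal version, assuming the bot from earlier parts of this series uses discord.py and python-dotenv:

discord.py
python-dotenv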

And for the simplest bot code, write this in the “bot.py” file:
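The code is shown as a screenshot in the original; a minimal sketch that matches the “is connected” message described below, assuming discord.py with python-dotenv (newer discord.py releases also require an intents argument to discord.Client):

# bot.py - the simplest possible bot: log in and report the connection
import os

import discord
from dotenv import load_dotenv

load_dotenv()                        # reads DISCORD_TOKEN from the .env file
TOKEN = os.getenv('DISCORD_TOKEN')

client = discord.Client()

@client.event
async def on_ready():
    # Printed in the terminal once the bot has connected to Discord
    print(f'{client.user} is connected')

client.run(TOKEN)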

In summary

Go back to the terminal, or use the integrated terminal in Visual Studio Code, and install the requirements with the command:
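The command itself is not reproduced in the text; it is presumably the standard pip invocation:

pip3 install -r requirements.txt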

To test the bot, write the command:
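Again, presumably:

python3 bot.py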

You should see the “<Your bot’s name and id> is connected” message in the terminal, and in Discord you should see the bot’s status as online.

If you’d like to debug in Visual Studio Code to fix some bugs or to understand the logic, press the F5 key in the IDE and select “Python File”.

The IDE will enter debug mode, allowing you to set breakpoints in the code and see the contents of the variables.

We are all set for this step.

If you encounter typos or something no longer works, write me a comment and I will keep this guide updated. Last updated June 28, 2020.

Source: https://chatbotslife.com/building-a-discord-bot-for-chatops-pentesting-or-server-automation-part-5-feea1c09b2de?source=rss—-a49517e4c30b—4
