Can artificial intelligence give elephants a winning edge?

Images of elephants roaming the African plains are imprinted on all of our minds, easily recognized as a symbol of Africa. But the future of elephants today is uncertain. An elephant is currently killed by poachers every 15 minutes, and humans, who so love watching them, have declared war on their species. Most people are not poachers or ivory collectors and do not intentionally harm wildlife, but silence or indifference to the battle at hand is just as deadly.

You can choose to read this article, feel bad for a moment and then move on to your next email and start your day.

Or, perhaps you will pause and think: Our opportunities to help save wildlife, especially elephants, are right in front of us and grow every day. And some of these opportunities are rooted in machine learning (ML) and the magical outcome we fondly call AI.

Open-source developers are giving elephants a neural edge

Six months ago, amid a COVID-infused world, Hackster.io, a large open-source community owned by Avnet, and Smart Parks, a Dutch organization focused on wildlife conservation, reached out to tech industry leaders, including Microsoft, u-blox, Taoglas, Nordic Semiconductor, Western Digital and Edge Impulse, with an idea: fund the R&D, manufacturing and shipping of 10 of the most advanced elephant-tracking collars ever built.

These modern tracking collars are designed to run advanced machine-learning (ML) algorithms with the longest battery life ever delivered for similar devices and a networking range more expansive than ever seen before. To make the vision even more audacious, the partners committed to fully open-source and freely share the outcome of this effort via OpenCollar.io, a conservation organization championing open-source tracking-collar hardware and software for environmental and wildlife monitoring projects.

Our opportunities to help save wildlife — especially elephants — are right in front of us and grow every day.

The tracker, ElephantEdge, would be built by specialist engineering firm Irnas, with the Hackster community coming together to create fully deployable ML models using Edge Impulse and telemetry dashboards using Avnet tooling, all to run on the newly built hardware. Nothing this ambitious had been attempted before, and many doubted that such a collaborative, innovative project could be pulled off.

Creating the world’s best elephant-tracking device

But they pulled it off, brilliantly. The new ElephantEdge tracker is considered the most advanced of its kind, with eight years of battery life and a LoRaWAN networking range of hundreds of miles via repeaters, running TinyML models that give park rangers a better understanding of elephant acoustics, motion, location, environmental anomalies and more. The tracker can communicate with an array of sensors, connected by LoRaWAN technology to park rangers’ phones and laptops.

This gives rangers a more accurate picture and location to track than earlier systems, which captured and reported images of all wildlife and ran down the trackers’ battery life. The advanced ML software that runs on these trackers is built explicitly for elephants and was developed by the Hackster.io community in a public design challenge.

“Elephants are the gardeners of the ecosystems as their roaming in itself creates space for other species to thrive. Our ElephantEdge project brings in people from all over the world to create the best technology vital for the survival of these gentle giants. Every day they are threatened by habitat destruction and poaching. This innovation and partnerships allow us to gain more insight into their behavior so we can improve protection,” said Smart Parks co-founder Tim van Dam.

Open-source, community-powered, conservation-AI at work

With hardware built by Irnas and Smart Parks, the community set about building the algorithms to make it sing. Software developers and data scientists Swapnil Verma and Mausam Jain, working from the U.K. and Japan, created Elephant AI. Using Edge Impulse, the team developed two ML models that tap the tracker’s onboard sensors and provide critical information for park rangers.

The first community-led project, called Human Presence Detection, will alert park rangers to poaching risk by using audio sampling to detect human presence in areas where humans are not supposed to be. The algorithm uses audio sensors to record sound and sends it over the LoRaWAN network directly to a ranger’s phone, creating an immediate alert.

The second model, named Elephant Activity Monitoring, detects general elephant activity, taking time-series input from the tracker’s accelerometer to recognize running, sleeping and grazing, providing conservation specialists with the critical information they need to protect the elephants.
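To make the idea concrete, a model like this consumes fixed-length windows of accelerometer readings and emits an activity label. The toy Java sketch below uses an invented variance heuristic purely for illustration; the team's actual classifier is a learned TinyML model built in Edge Impulse, not hand-written rules like these:

public class ActivityMonitor {
    enum Activity { SLEEPING, GRAZING, RUNNING }

    // Toy heuristic: label a window of accelerometer magnitudes by its variance.
    // The thresholds are invented for illustration; the real collar runs a
    // learned TinyML model, not hand-tuned rules.
    static Activity classify(double[] window) {
        double mean = 0;
        for (double v : window) mean += v;
        mean /= window.length;
        double variance = 0;
        for (double v : window) variance += (v - mean) * (v - mean);
        variance /= window.length;
        if (variance < 0.01) return Activity.SLEEPING;
        if (variance < 0.5)  return Activity.GRAZING;
        return Activity.RUNNING;
    }

    public static void main(String[] args) {
        double[] calmWindow = {1.0, 1.01, 0.99, 1.0, 1.02}; // barely any motion
        System.out.println(classify(calmWindow)); // prints SLEEPING
    }
}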

Another brilliant community development came from the other side of the world. Sara Olsson, a Swedish software engineer with a passion for the natural world, created a TinyML and IoT monitoring dashboard to help park rangers with conservation efforts.

With few resources and little support, Sara built a full telemetry dashboard combined with ML algorithms to monitor camera traps and watering holes, reducing network traffic and considerably extending battery life by processing data on the collar itself. To validate her hypothesis, she used 1,155 data models and 311 tests!

Sara Olsson’s TinyML and IoT monitoring dashboard. Image Credits: Sara Olsson

She completed her work in the Edge Impulse studio, creating the models and testing them against camera-trap streams from Africam using an OpenMV camera, all from the comfort of her home.

Technology for good works, but human behavior must change

Project ElephantEdge is an example of how commercial and public interests can converge in a collaborative sustainability effort to advance wildlife conservation. The new collar generates critical data that equips park rangers to make urgent, life-saving decisions about protecting their territories. By the end of 2021, at least ten elephants will be sporting the new collars in selected parks across Africa, in partnership with the World Wildlife Fund and Vulcan’s EarthRanger, unleashing a new wave of conservation, learning and defending.

Naturally, this is great: the technology works, and it’s helping elephants like never before. But the root cause of the problem runs much deeper. Humans must change their relationship to the natural world for real elephant habitat and population revival to occur.

“The threat to elephants is greater than it’s ever been,” said Richard Leakey, a leading palaeoanthropologist and conservationist. The main argument for allowing trophy or ivory hunting is that it raises money for conservation and local communities. However, a recent report revealed that only 3% of Africa’s hunting revenue trickles down to communities in hunting areas. Animals don’t need to die to generate income for the communities that live around them.

With great technology, collaboration and a commitment to address the underlying cultural conditions and the ivory trade that leads to most elephant deaths, there’s a real chance to save these singular creatures.

Source: https://techcrunch.com/2020/11/20/can-artificial-intelligence-give-elephants-a-winning-edge/

AI clocks first-known ‘binary sextuply-eclipsing sextuple star system’. Another AI will be along shortly to tell us how to pronounce that properly

Astronomers have discovered the first-known “sextuply-eclipsing sextuple star system,” after a neural network flagged it up in data collected by NASA’s Transiting Exoplanet Survey Satellite (TESS).

The star system, codenamed TIC 168789840, is an oddball compared to its peers. Not only does it contain six suns, but they’re also split into three pairs of eclipsing binary stars. That means the suns in each pair, to an observer, pass directly in front of one another as they orbit. The stars in each pair are gravitationally bound to each other and to every other sun in the system, meaning each pair circles around each other and around a common center of mass.

To get an idea of how this is structured, consider each star pair to be labeled A, B, and C. The A pair circle one another every 1.6 days, and the C pair every 1.3 days. Together, the A and C pairs complete a full orbit around their common center of mass in a little under four years.

The remaining pair, B, is much further away, and is described as an outer binary. The B suns revolve around one another every 8.22 days, and it takes them about two thousand years to run a lap around the system’s common center of mass, according to a paper due to appear in The Astrophysical Journal detailing these findings. If that’s all a little mind-boggling, here’s a rough sketch of the system’s structure taken from the paper:

The system’s orbital mechanics, as sketched in the paper.

An alien living on a hypothetical planet orbiting one of the inner quadruple stars would see four very bright suns in the sky and another two dimmer ones further away. These stars would periodically disappear, as they eclipsed one another. The chances of anyone observing this, however, are pretty slim to none as it doesn’t look like there are any exoplanets in TIC 168789840.

Finding the first sextuply-eclipsing sextuple star system with machine learning

NASA’s TESS telescope gathers a massive amount of data. Instead of manually poring over tens of millions of objects, scientists feed the data into machine-learning algorithms designed to highlight the most interesting examples for further examination.

Brian Powell, first author of the study and a data scientist at NASA’s High Energy Astrophysics Science Archive Research Center, trained a classifier to spot eclipsing binary systems.

The neural network looks for the characteristic dip in an object’s light curve, caused when one star passes in front of the other. It assigns a score for the likelihood that it has identified an eclipsing binary system: anything rated above 0.9 on a scale of up to 1.0 is considered a strong candidate.
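In code, that triage step reduces to filtering the network’s outputs by the threshold. A minimal Java sketch, with a hypothetical Candidate record standing in for the pipeline’s actual data structures (the names here are illustrative, not from the TESS codebase):

public class CandidateFilter {
    // Hypothetical pairing of a TESS object ID with the network's eclipse score.
    record Candidate(String ticId, double score) {}

    public static void main(String[] args) {
        java.util.List<Candidate> scored = java.util.List.of(
                new Candidate("TIC 168789840", 0.98),
                new Candidate("TIC 000000001", 0.42));
        // Scores above 0.9 on the 0.0-1.0 scale count as strong candidates.
        scored.stream()
              .filter(c -> c.score() > 0.9)
              .forEach(c -> System.out.println(c.ticId() + " -> " + c.score()));
    }
}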

The computer-vision model that performs all this comprises layers totaling approximately 5.5 million parameters, and was trained on more than 40,000 examples using a cluster of eight Nvidia V100 GPUs for approximately two days.

At first, TIC 168789840 didn’t seem so odd. “The neural network was trained to look for the feature of the eclipse in the light curve with no concern as to periodicity,” Powell told The Register.

“Therefore, to the neural net, an eclipsing binary is no different than an eclipsing sextuple, both of them would likely have an output near 1.0.”

Upon closer inspection, however, the scientists were shocked to realize they had discovered the first-known system of three eclipsing binaries. The stars locked in each pair are very similar to one another in terms of mass, radius, and temperature.

“The fact that all three binaries show eclipses allows us to determine the radii and relative temperatures of each star. This, together with measurement of the radial velocities, allows us to determine the masses of the stars. Having this much information on a multiple star system of this order is quite rare,” Powell added.

There are 17 or so known sextuple star systems, though TIC 168789840 is the first in which the suns also form eclipsing binary pairs. Scientists hope that studying its structural and physical properties will unlock mysteries of how multiple star systems are born. ®

Source: https://go.theregister.com/feed/www.theregister.com/2021/01/26/sextuple_star_system/

Governance: Companies mature in their use of AI know that it needs guardrails

Quality governance ensures responsible data models and AI execution, and helps the data models stay true to business objectives.

The fundamentals of traditional IT governance have focused on service-level agreements like uptime and response time, and also on oversight of areas such as security and data privacy. The beauty of these goals is that they are concrete and easy to understand. This makes them attainable with minimal confusion if an organization is committed to getting the job done.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

Unfortunately, governance becomes a much less-definable task in the world of artificial intelligence (AI), and a premature one for many organizations.

“This can come down to the level of AI maturity that a company is at,” said Scott Zoldi, chief analytics officer at FICO. “Companies are in a variety of stages of the AI lifecycle, from exploring use cases and hiring staff, to building the models, and having a couple of instances deployed but not widely across the organization. Model governance comes into play when companies are mature in their use of AI technology, are invested in it, and realize that AI’s predictive and business value should be accompanied by guardrails.”

“Because AI is more opaque than traditional enterprise IT environments, it requires a governance strategy that asks questions of architectures and requires those architectures to be more transparent,” Zoldi said.

SEE: 3 steps for better data modeling with IT and data science (TechRepublic)

Achieving transparency in AI governance begins with being able to explain, in plain language, the technology behind AI and how it operates to board members, senior management, end users, and non-AI IT staff. The questions AI practitioners should be able to answer include, but are not limited to: how data is prepared and taken into AI systems, which data is being taken in and why, and how the AI operates on the data to return answers to the questions the business is asking. AI practitioners should also explain how both the data and what you ask of it change continuously over time as business and other conditions change.

This is a pathway to ensuring responsible data models and AI execution, and also a way to ensure that the data models that a company develops for its AI stay true to its business objectives.

One central AI governance challenge is ensuring that the data and the AI operating on it are as bias-free as possible.

“AI governance is a board-level responsibility to mitigate pressures from regulators and advocacy groups,” Zoldi said. “Boards of directors should care about AI governance because AI technology makes decisions that profoundly affect everyone. Will a borrower be invisibly discriminated against and denied a loan? Will a patient’s disease be incorrectly diagnosed, or a citizen unjustly arrested for a crime he did not commit?”

How to achieve AI fairness

“The increasing magnitude of AI’s life-altering decisions underscores the urgency with which AI fairness and bias should be ushered onto boards’ agendas,” he added.

SEE: Equitable tech: AI-enabled platform to reduce bias in datasets released  (TechRepublic)

Zoldi said that to eliminate bias, boards must understand and enforce auditable, immutable AI model governance based on four classic tenets of corporate governance: accountability, fairness, transparency, and responsibility. He believes this can be achieved if organizations focus their AI governance on ethical, efficient, and explainable AI.

Ethical AI ensures that models operate without bias against protected groups and are used only in areas where we have confidence in the decisions the models generate. These issues have strong business implications; models that make biased decisions against protected groups aren’t just wrong, they are illegal.

Efficient AI helps AI make the leap from the development lab to making decisions in production that can be accepted with confidence. Otherwise, an inordinate amount of time and resources are invested in models that don’t deliver real-world business value. 

Explainable AI makes sure that companies using AI models can meet a growing list of regulations, starting with GDPR, by being able to explain how a model made its decision, and why.

SEE: Encourage AI adoption by moving shadow AI into the daylight (TechRepublic)

Some organizations are already tackling these AI governance challenges, while others are just beginning to think about them.

This is why, when putting together an internal team to address governance, a best practice approach is a three-tiered structure that begins with an executive sponsor at the top to champion AI at a corporate level.

“One tier down, executives such as the CAO, CTO, CFO, and head of legal should lead the oversight of AI governance from a policy and process perspective,” Zoldi said. “Finally, at the blocking-and-tackling level, senior practitioners from the various model development and model delivery areas, who work together with AI technology on a daily basis, should hash out how to meet those corporate governance standards.”

Source: https://www.techrepublic.com/article/governance-companies-mature-in-their-use-of-ai-know-that-it-needs-guardrails/#ftag=RSS56d97e7

Gartner: The future of AI is not as rosy as some might think

A Gartner report predicts that the second-order consequences of widespread AI will have massive societal impacts, to the point of making us unsure if and when we can trust our own eyes.

Gartner has released a series of Predicts 2021 research reports, including one that outlines the serious, wide-reaching ethical and social problems it predicts artificial intelligence (AI) will cause in the next several years. In Predicts 2021: Artificial Intelligence and Its Impact on People and Society, five Gartner analysts report on different predictions they believe will come to fruition by 2025. The report calls particular attention to what it terms second-order consequences of artificial intelligence: unintended results of the new technologies.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

Generative AI, for example, is now able to create amazingly realistic photographs of people and objects that don’t actually exist; Gartner predicts that by 2023, 20% of account takeovers will use deepfakes generated by this type of AI. “AI capabilities that can create and generate hyper-realistic content will have a transformational effect on the extent to which people can trust their own eyes,” the report said.

The report tackles five different predictions for the AI market, and gives recommendations for how businesses can address those challenges and adapt to the future: 

  • By 2025, pretrained AI models will be largely concentrated among 1% of vendors, making responsible use of AI a societal concern
  • In 2023, 20% of successful account takeover attacks will use deepfakes as part of social engineering attacks
  • By 2024, 60% of AI providers will include harm/misuse mitigation as a part of their software
  • By 2025, 10% of governments will avoid privacy and security concerns by using synthetic populations to train AI 
  • By 2025, 75% of workplace conversations will be recorded and analyzed for use in adding organizational value and assessing risk

Each of those analyses is enough to make AI-watchers sit up and take notice, but combined they paint a picture of a grim future rife with ethical concerns, potential misuse of AI, and loss of privacy in the workplace.

How businesses can respond 

Concerns over AI’s effect on privacy and truth are sure to be major topics in the coming years if Gartner’s analysts are accurate in their predictions, and successful businesses will need to be ready to adapt quickly to those concerns.

A recurring theme in the report is the establishment of ethics boards at companies that rely on AI, whether as a service or a product. This is mentioned particularly for businesses that plan to record and analyze workplace conversations: boards with employee representation should be established to ensure fair use of conversation data, Gartner said.

SEE: Natural language processing: A cheat sheet (TechRepublic)

Gartner also recommends that businesses establish criteria for responsible AI consumption and prioritize vendors that “can demonstrate responsible development of AI and clarity in addressing related societal concerns.”

As for security concerns surrounding deepfakes and generative AI, Gartner recommends that organizations should schedule training about deepfakes. “We are now entering a zero-trust world. Nothing can be trusted unless it is certified as authenticated using cryptographic digital signatures,” the report said. 

There’s a lot to digest in this report, from figures saying that the best deepfake detection software will top out at a 50% identification rate in the long term, to the prediction that in 2023 a major US corporation will adopt conversation analysis to determine employee compensation. There’s much to be worried about in these analyses, but potential antidotes are included as well. The full report is available at Gartner, but interested parties will need to pay for access.

Source: https://www.techrepublic.com/article/gartner-the-future-of-ai-is-not-as-rosy-as-some-might-think/#ftag=RSS56d97e7

Model serving in Java with AWS Elastic Beanstalk made easy with Deep Java Library

Deploying your machine learning (ML) models to run on a REST endpoint has never been easier. Using AWS Elastic Beanstalk and Amazon Elastic Compute Cloud (Amazon EC2) to host your endpoint and Deep Java Library (DJL) to load your deep learning models for inference makes the model deployment process extremely easy to set up. Setting up a model on Elastic Beanstalk is great if you require fast response times on all your inference calls. In this post, we cover deploying a model on Elastic Beanstalk using DJL and sending an image through a post call to get inference results on what the image contains.

About DJL

DJL is a deep learning framework written in Java that supports training and inference. DJL is built on top of modern deep learning engines (such as TensorFlow, PyTorch, and MXNet). You can easily use DJL to train your model or deploy your favorite models from a variety of engines without any additional conversion. It contains a powerful model zoo design that allows you to manage trained models and load them in a single line. The built-in model zoo currently supports more than 70 pre-trained and ready-to-use models from GluonCV, HuggingFace, TorchHub, and Keras.
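To give a flavor of how compact that loading step is, here is a minimal sketch against DJL’s model zoo API; the criteria simply ask for any image-classification model, and the image URL is a placeholder, not a real asset:

import ai.djl.Application;
import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.ImageFactory;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ModelZoo;
import ai.djl.repository.zoo.ZooModel;

public class ZooQuickstart {
    public static void main(String[] args) throws Exception {
        // Describe the model we want; the zoo resolves and downloads a match.
        Criteria<Image, Classifications> criteria = Criteria.builder()
                .optApplication(Application.CV.IMAGE_CLASSIFICATION)
                .setTypes(Image.class, Classifications.class)
                .build();
        try (ZooModel<Image, Classifications> model = ModelZoo.loadModel(criteria);
             Predictor<Image, Classifications> predictor = model.newPredictor()) {
            Image img = ImageFactory.getInstance()
                    .fromUrl("https://example.com/cat.jpg"); // placeholder image
            System.out.println(predictor.predict(img));
        }
    }
}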

Benefits

The primary benefit of hosting your model using Elastic Beanstalk and DJL is that it’s very easy to set up and provides consistent sub-second responses to a post request. With DJL, you don’t need to download any other libraries or worry about importing dependencies for your chosen deep learning framework. Using Elastic Beanstalk has two advantages:

  • No cold startup – Compared to an AWS Lambda solution, the EC2 instance is running all the time, so any call to your endpoint runs instantly and there isn’t any overhead when starting up new containers.
  • Scalable – Compared to a server-based solution, you can allow Elastic Beanstalk to scale horizontally.

Configurations

You need to have the following gradle dependencies set up to run our PyTorch model:

plugins {
    id 'org.springframework.boot' version '2.3.0.RELEASE'
    id 'io.spring.dependency-management' version '1.0.9.RELEASE'
    id 'java'
}

dependencies {
    implementation platform("ai.djl:bom:0.8.0")
    implementation "ai.djl.pytorch:pytorch-model-zoo"
    implementation "ai.djl.pytorch:pytorch-native-auto"
    implementation "org.springframework.boot:spring-boot-starter"
    implementation "org.springframework.boot:spring-boot-starter-web"
}

The code

We first create a RESTful endpoint using Java Spring Boot and have it accept an image request. We decode the image and turn it into an Image object to pass into our model. The model is autowired by the Spring framework by calling the model() method. For simplicity, we create the predictor object on each request where we pass our image for inference (you can optimize this by using an object pool). When inference is complete, we return the results to the requester. See the following code:

@Autowired
ZooModel<Image, Classifications> model;

/**
 * This method is the REST endpoint where the user can post their images
 * to run inference against a model of their choice using DJL.
 *
 * @param input the request body containing the image
 * @return the top 3 probable items from the model output
 * @throws IOException if reading the HTTP request failed
 */
@PostMapping(value = "/doodle")
public String handleRequest(InputStream input) throws IOException {
    Image img = ImageFactory.getInstance().fromInputStream(input);
    try (Predictor<Image, Classifications> predictor = model.newPredictor()) {
        Classifications classifications = predictor.predict(img);
        return GSON.toJson(classifications.topK(3)) + System.lineSeparator();
    } catch (RuntimeException | TranslateException e) {
        logger.error("", e);
        Map<String, String> error = new ConcurrentHashMap<>();
        error.put("status", "Invoke failed: " + e.toString());
        return GSON.toJson(error) + System.lineSeparator();
    }
}

@Bean
public ZooModel<Image, Classifications> model() throws ModelException, IOException {
    Translator<Image, Classifications> translator =
        ImageClassificationTranslator.builder()
            .optFlag(Image.Flag.GRAYSCALE)
            .setPipeline(new Pipeline(new ToTensor()))
            .optApplySoftmax(true)
            .build();
    Criteria<Image, Classifications> criteria =
        Criteria.builder()
            .setTypes(Image.class, Classifications.class)
            .optModelUrls(MODEL_URL)
            .optTranslator(translator)
            .build();
    return ModelZoo.loadModel(criteria);
}

A full copy of the code is available on the GitHub repo.

Building your JAR file

Go into the beanstalk-model-serving directory and enter the following code:

cd beanstalk-model-serving
./gradlew build

This creates a JAR file found in build/libs/beanstalk-model-serving-0.0.1-SNAPSHOT.jar

Deploying to Elastic Beanstalk

To deploy this model, complete the following steps:

  1. On the Elastic Beanstalk console, create a new environment.
  2. For our use case, we name the environment DJL-Demo.
  3. For Platform, select Managed platform.
  4. For Platform settings, choose Java 8 and the appropriate branch and version.
  5. When selecting your application code, choose Choose file and upload the beanstalk-model-serving-0.0.1-SNAPSHOT.jar that was created in your build.
  6. Choose Create environment.

After Elastic Beanstalk creates the environment, we need to update the Software and Capacity boxes in our configuration, located on the Configuration overview page.

  1. For the Software configuration, we add an additional setting in the Environment Properties section with the name SERVER_PORT and value 5000.
  2. For the Capacity configuration, we change the instance type to t2.small to give our endpoint a little more compute and memory.
  3. Choose Apply configuration and wait for your endpoint to update.

Calling your endpoint

Now we can call our Elastic Beanstalk endpoint with our image of a smiley face.

See the following code:

curl -X POST -T smiley.png <endpoint>.elasticbeanstalk.com/inference

We get the following response:

[ { "className": "smiley_face", "probability": 0.9874626994132996 }, { "className": "face", "probability": 0.004804758355021477 }, { "className": "mouth", "probability": 0.0015588520327582955 }
]

The output predicts that a smiley face is the most probable item in our image. Success!

Limitations

If your model isn’t called often and fast inference isn’t a requirement, we recommend deploying your models on a serverless service such as Lambda. However, that adds overhead due to the cold-startup nature of the service. Hosting your models through Elastic Beanstalk may be slightly more expensive because the EC2 instance runs 24 hours a day, so you pay for the service even when you’re not using it. However, if you expect many inference requests a month, we have found that the cost of model serving on Lambda equals the cost of Elastic Beanstalk on a t3.small at about 2.57 million inference requests to the endpoint per month.
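That break-even figure is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, the EC2 hourly rate and per-request Lambda cost are illustrative assumptions, not quoted AWS pricing:

public class BreakEven {
    public static void main(String[] args) {
        // Illustrative assumptions, not quoted AWS prices.
        double ec2HourlyUsd = 0.0208;                  // assumed t3.small on-demand rate
        double ec2MonthlyUsd = ec2HourlyUsd * 24 * 30; // instance runs around the clock
        double lambdaPerRequestUsd = 0.0000058;        // assumed invocation + compute cost
        double breakEvenRequests = ec2MonthlyUsd / lambdaPerRequestUsd;
        // With these assumptions the curves cross near 2.6 million requests/month,
        // the same ballpark as the ~2.57 million figure above.
        System.out.printf("Break-even: ~%.2fM requests/month%n", breakEvenRequests / 1e6);
    }
}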

Conclusion

In this post, we demonstrated how to start deploying and serving your deep learning models using Elastic Beanstalk and DJL. You just need to set up your endpoint with Java Spring, build your JAR file, upload that file to Elastic Beanstalk, update some configurations, and it’s deployed!

We also discussed some of the pros and cons of this deployment process, namely that it’s ideal if you need fast inference calls, but the cost is higher when compared to hosting it on a serverless endpoint with lower utilization.

This demo is available in full in the DJL demo GitHub repo. You can also find other examples of serving models with DJL across different JVM tools like Spark and AWS products like Lambda. Whatever your requirements, there is an option for you.

Follow our GitHub, demo repository, Slack channel, and Twitter for more documentation and examples of DJL!


About the Author

Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family.

Source: https://aws.amazon.com/blogs/machine-learning/model-serving-in-java-with-aws-elastic-beanstalk-made-easy-with-deep-java-library/
