

React Native vs. Flutter: Which should you choose for developing a mobile app for your startup?




With people using many different devices (smartphones, tablets, iPhones, etc.), developing cross-platform applications has become a popular business agenda for startups and enterprises. A cross-platform app is compatible with multiple operating systems and can therefore be accessed from almost any device. To develop such applications, developers around the world favor two frameworks: Flutter and React Native.

Now, if you want a cross-platform mobile application for your business, you are probably wondering how these two frameworks differ and which would suit your app better. The points below compare React Native and Flutter.


  • React Native- Officially released in 2015, React Native is an open-source mobile application framework that lets developers use JavaScript and React together with native platform capabilities. This highly reliable framework ships with many ready-made components that make app development effortless.
  • Flutter- Initially released in 2017, Flutter is a well-recognized open-source framework used to develop cross-platform mobile apps. This free-to-use framework offers hot-reload functionality and is highly customizable.

Technical Architecture

  • React Native- The architecture of this framework depends on the JavaScript runtime environment, also known as the JavaScript bridge. React Native uses the Flux architecture provided by Facebook.
  • Flutter- The Dart-based framework uses the Skia C++ graphics engine and bundles most of what it needs, including the required compositing, protocols, and channels.

Winner- Flutter has many built-in components and doesn't require a bridge to communicate with native components, whereas React Native relies on the JavaScript bridge for the same, which can hurt performance. The former is therefore the winner.

Corporate Backing

  • React Native- React Native is developed and backed by Facebook; in fact, the company has also coded many of its internal products using this framework.
  • Flutter- Flutter was released by Google in 2017. It’s an open-source SDK (Software Development Kit) that uses a single codebase for developing mobile apps for Android and iOS platforms.

Winner- Both React Native and Flutter have strong corporate backing.

Installation

  • React Native- To install React Native, you need the Node Package Manager (NPM). Developers with a JavaScript background can install the framework without any hassle, but others may first have to learn NPM, since you need to know where exactly the binaries are installed.
  • Flutter- To install this framework, a developer downloads the binary for a specific platform from GitHub. On macOS, this means downloading the .zip file and adding the flutter tool to the PATH variable.

Winner- React Native (as it can be installed just by NPM)
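As an illustration, the two install paths look roughly like this (a sketch only; archive names are illustrative, and exact commands depend on your OS and the framework versions current at the time of writing):

```shell
# React Native: installed through NPM (assumes Node.js is present)
npm install -g react-native-cli
react-native init MyApp

# Flutter: download the SDK archive for your platform, then expose it on PATH
unzip flutter_macos_stable.zip
export PATH="$PATH:$(pwd)/flutter/bin"
flutter doctor   # checks that the toolchain is set up correctly
```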

Programming Language

  • React Native- JavaScript is the language most commonly used with React Native to build cross-platform apps. JS is a dynamically typed language, and there is no shortage of developers with expertise in it. In other words, any developer with hands-on JavaScript experience can pick up React Native easily.
  • Flutter- Flutter uses Dart to build applications. A developer, typically from a C++ or Java background, needs to learn Dart to develop apps with this framework.

Winner- React Native (because of the learning curve involved in Flutter)

Performance

  • React Native- React Native uses the JavaScript bridge to communicate with native components, which affects its performance.
  • Flutter- Flutter doesn't need such a bridge because Dart code is compiled into native code, which improves performance.

Winner- Flutter

Testing and Community Support

  • React Native- React Native builds on the JavaScript ecosystem, which already offers a few unit-level testing tools such as Jest. On the other hand, it includes no official tool for UI-level testing. Released in 2015, this framework has great community support.
  • Flutter- Flutter ships with a rich set of testing features for testing an app at three major levels: unit, widget, and integration. In terms of community support, Flutter is a bit behind React Native.

Winner- Flutter is the clear winner for testing support. For community support, React Native has the edge over Flutter.

Release Automation Support

  • React Native- React Native depends on third-party libraries for build and release automation support.
  • Flutter- Flutter ships with strong automation tooling for deploying apps from the command line.

Winner- Flutter is an undisputed winner in the race of release automation support.
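For instance, Flutter's bundled CLI can produce release builds directly (a sketch; code signing and keystore configuration are assumed to already be in place):

```shell
flutter build apk --release   # Android release binary
flutter build ios --release   # iOS release build (requires macOS with Xcode)
```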

Final Words

Both React Native and Flutter have their advantages and disadvantages. Some developers favor the former, while others vote for the latter. Choosing between them depends on your specific app requirements and a few other factors. If you are new to mobile app development and to these two frameworks, connect with a mobile app development company or hire mobile app developers to build your application with either React Native or Flutter. A professional, reliable app development company will also suggest which framework best fits your project.





Is Recogni Critical To Autonomous Vehicles From Legacy Automakers?




July 5th, 2020 by Zachary Shahan 

In CleanTechnica‘s exclusive, in-depth interview with Peter Mertens, former board member of Audi, Volkswagen Group, Volvo, & Jaguar Land Rover and former R&D head of Audi (published yesterday), Dr. Mertens briefly discussed a few startups he’s now on the board of. One of those was Recogni, and no offense to the others (which I’ll come back to in another article), but Recogni seemed to be the one that got him most lit up.

Image courtesy Recogni

As Mertens stated at one point, his initial thought when being introduced to the company was, “Okay, wow, these guys are completely crazy! I mean, what they [are trying] to achieve is almost impossible. But if they could achieve it, [it’s] really gonna be a breakthrough — in terms of vision processing, enabling autonomous drive in a way that no one has ever thought it was possible. And then I met the guys and I said, ‘they can do it,’ and guess what — I mean, with proof of concept right now — they will deliver.”

That is all he said about Recogni, but it was perhaps the most excited he was in the whole one-hour interview (which I highly recommend watching) and his statements were quite grand, unlike many other more cautious or measured statements on other topics.

With such excitement, I wanted to look into this. First, though, a couple of notes on why Recogni now has a leg up without even looking at any of its tech. As noted previously by Alex Voigt and basically confirmed in his interview with Dr. Mertens, following recovery from a health matter, Dr. Mertens was being offered a CEO position somewhere under the Volkswagen Group umbrella. He declined to return to corporate life, seemingly strictly for personal reasons (to not overwork himself in such an environment again), but his opinion on these matters is surely still valued very highly within the many walls of Volkswagen Group. If he recommends a startup like Recogni to the automotive giant at some point for acquisition or investment, without even glancing at anything else, they would take the recommendation seriously and look with an open mind (or even eager mind) into the suggestion. Naturally, I skipped over the part of simply opening the door to Volkswagen Group and several other automotive giants — which many startups struggle to do — because this would go beyond opening a door; it would be a recommendation (biased, of course) from someone who probably would have been the final decision maker on the topic if he had stayed at the company.

With that corporate consideration out of the way, let’s see what we can find on the tech itself. We can start with a few more comments (biased, naturally) that the company highlights on its website from lead investors, including from the AI investment arm of the other largest automaker on the planet:

“Autonomous systems are becoming smarter, driven by more powerful edge processing. The next opportunity is to achieve this higher machine intelligence at much lower power. We are excited by Recogni’s inference architecture for high-performance, low-power AI computing at the edge, and look forward to working with the team to build a world of safe and efficient autonomous systems.” — Jim Adler, Founding Managing Director of Toyota AI Ventures

“The ability to process sensor data on the edge efficiently and in real-time is essential in the development of autonomous vehicles. We believe that Recogni has the right approach and an experienced team to help solve these critical issues as the automotive industry continues on its path towards semi-autonomous and fully autonomous vehicles.” — Marcus Behrendt, BMW i Ventures

“We truly believe in the sensor fusion based on camera, radar, and lidar, but computational requirements for those algorithms remains one of the critical bottlenecks in autonomous driving today. Recogni solves this problem with a unique and disruptive approach — we are proud to back this team of world-class IC and system developers, as well as automotive AI experts.” — Sebastian Stamm, Fluxunit — Osram Ventures

So, as you can glean just from these company-highlighted comments of praise, this tech is based on improving the efficiency of computational processing of sensor data (“edge processing”). Recogni’s claim is that it processes an enormous amount of data using very little power. Recogni CEO RK Anand summarizes the challenge and solution in his own words:

“The issues with the Level 2+, 3, 4 and 5 autonomy ecosystem range from capturing/generating training data to inferring in real-time. These vehicles need datacenter class performance while consuming minuscule amounts of power. Leveraging our background in machine learning, computer vision, silicon, and system design, we are engineering a fundamentally new system that benefits the auto industry with very high efficiency at the lowest power consumption.”

In three quick notes, the company explains in more detail what it is processing, how it is unique, and where in the AI-vehicle system it is operating:

  • “It’s the only multi-ocular camera system architecture purpose-built for object recognition that extracts passive stereoscopic depth at the pixel level.”
  • “Recogni achieves greater processing efficiency & speed by storing weights (parameters) of the object library on-chip, where the computational analysis is performed.”
  • “Recogni’s module is pipelined and operates at greater than 8Mpixel images at 60 frames per second, where it is able to recognize (detect, segment & classify) objects, fuse depth-sensor information into the objects, and provide the intelligence to the central system within a few milliseconds.”

Tesla’s software and autonomous vehicle hardware leadership is often mentioned superficially, with basic language any mere mortal like me would understand, but when you dive deeper, a few things have become very clear:

  • Tesla excels in vehicle efficiency — because its leadership understands how critical this is for both electric vehicle range and for achieving Full Self Driving. You need to preserve as much energy for autonomous driving processing as possible.
  • Tesla's whole vehicle is built around the computer hardware and software inside. It is the only major automaker that essentially builds computers and puts a vehicle architecture around them rather than tacking small computers onto an old-school automobile design here or there.
  • Tesla is extremely vertically integrated when it comes to all these matters, whereas conventional automakers outsource almost all of their computing needs (hardware and software). Volkswagen is trying to change in this regard, but seems to be having problems. Inside Tesla, as a core part of the company, is an autonomous-driving startup like no other automaker has.
  • Tesla is constantly, obsessively collecting as much data as it can from vehicles on the road in order to improve its self-driving systems (hardware and software).

The next three (last three) quotes from Recogni further highlight the importance of those above points:

  • “True driverless vehicles must analyze the environment, recognize objects at a distance, and make a decision in less than 50 milliseconds for urban driving and less than 30 milliseconds for highway driving.”
  • “Latency constraints require all image processing to be done within the car’s systems.”
  • “Cars have limited energy and power they can devote to the computational tasks without affecting range.”
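Putting the quoted numbers together (my own back-of-envelope arithmetic, using only the specs cited above) shows how tight those budgets are:

```python
# Back-of-envelope figures from the quoted Recogni specs.
pixels_per_frame = 8_000_000          # ">8Mpixel images"
frames_per_second = 60

pixel_rate = pixels_per_frame * frames_per_second   # pixels processed per second
frame_period_ms = 1000 / frames_per_second          # time between frames

urban_budget_ms = 50    # quoted decision budget for urban driving
highway_budget_ms = 30  # quoted decision budget for highway driving

print(f"pixel rate: {pixel_rate:,} px/s")           # 480,000,000 px/s
print(f"frame period: {frame_period_ms:.1f} ms")    # ~16.7 ms
# Even the tighter 30 ms highway budget spans fewer than two 60 fps frame
# intervals, so recognition has to finish within one to two frames.
print(f"frames within highway budget: {highway_budget_ms / frame_period_ms:.1f}")
```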

George Hotz, another player in this space with his own startup, made similar points to me in 2017 when explaining Tesla’s leadership, laughing about what other automakers were doing (some harsh criticism in words and in tone), and sort of pitching his own efforts (if you watch the video below, note that he was supposed to be at the Paris conference in person but missed his flight and was thus broadcast live onto the projector screen — I was not filming my computer screen).

Tesla fanboy, Tesla fanboy … I thought this was about Recogni? Yes, this is still about Recogni and trying to put it in context, which makes it important to explain what Tesla is doing that’s so different from other automakers. Don’t believe me? Then read some thoughts from Chief Business Officer and cofounder of Recogni, Ashwini Choudhary, who had a section of his latest article in Forbes titled “The Tesla Way.”

“Tesla has taken a multipronged approach to mete out its assault on the auto industry. First, the company built an electric-powered car that is fun to drive, and then it gave the driver a user experience that almost feels like a smartphone.

“Second, Tesla is essentially a software company. While the old auto industry is playing catch-up to the new user interface, Tesla is taking the fight to them with autonomous vehicles. The company is miles ahead of the rest of the industry (no pun intended), both from a technology and a marketing perspective. It is ironic to look at this giant industry from this perspective, but it’s necessary.

“An electronic, technology-centric approach to making a car is the disruption the auto industry needs to survive. Tesla is a technology company that makes cars. The company is driving innovation at an exponential rate with periodic remote software updates that change the personality of these cars. Other companies are manufacturing firms that make cars but don’t understand technology as Tesla does.”

Perhaps not the way one would invite themselves into the home of someone they want to marry, but Choudhary’s point for automakers is that Tesla is solving the automotive problem of the day for itself and they better find someone to help solve that problem ASAP as well.

While it must be 100% clear at this point that you can't add bolt-on autonomous-driving solutions to a vehicle not designed at its roots for autonomous driving, there is still debate about whether vertical integration or horizontal integration is best for this evolution. Alex Voigt has argued very strongly on CleanTechnica that vertical integration is the way to go. Recogni's case is the opposite:

“[E]lectronics technology is not [automakers’] core competency, and just throwing a large team at the issue will not solve the problem. These companies need to make a bet on the right partnerships. Either the companies go completely vertical, similar to Tesla, and develop the entire technology stack themselves — which will be difficult for the incumbents given their manufacturing DNA — or they go horizontal and source ‘best of breed’ technology from various entities and integrate them intelligently. The second approach will enable automotive OEMs to leapfrog Tesla altogether.”

The ability to beat Tesla is a bold promise, but it’s one that Choudhary repeated in his late-January article in Forbes: “What I saw recently at CES 2020 gives me hope. I met several CTOs of automotive OEMs and tier-one parts suppliers who have the vision and want to beat Tesla by going horizontal.”

I’m excited to see how this plays out! I’m a fan of creative and innovation-inducing competition, not monopolies. Whether horizontal integration can beat vertical integration is yet to be proven, but even if it can, it is going to depend on a stellar team leading the integration and in primary control of vehicle development. Let’s see who can put together the best teams and the best tech.

Recogni has just two press releases in its docket, an unstealthing press release on July 31, 2019, and an announcement in early November 2019 of Peter Mertens (see top of article) joining the board. However, I expect we’ll be hearing much more about the company in the coming years. I certainly look forward to reaching out for more insight into what they’re offering. 



About the Author

Zachary Shahan is tryin’ to help society help itself one word at a time. He spends most of his time here on CleanTechnica as its director, chief editor, and CEO. Zach is recognized globally as an electric vehicle, solar energy, and energy storage expert. He has presented about cleantech at conferences in India, the UAE, Ukraine, Poland, Germany, the Netherlands, the USA, Canada, and Curaçao. Zach has long-term investments in Tesla [TSLA] — after years of covering solar and EVs, he simply has a lot of faith in this company and feels like it is a good cleantech company to invest in. But he does not offer (explicitly or implicitly) investment advice of any sort on Tesla or any other company.




Social media must add a do-not-track option for images of our faces




Facial recognition systems are a powerful AI innovation that perfectly showcase The First Law of Technology: “technology is neither good nor bad, nor is it neutral.” On one hand, law-enforcement agencies claim that facial recognition helps to effectively fight crime and identify suspects. On the other hand, civil rights groups such as the American Civil Liberties Union have long maintained that unchecked facial recognition capability in the hands of law-enforcement agencies enables mass surveillance and presents a unique threat to privacy.

Research has also shown that even mature facial recognition systems have significant racial and gender biases; that is, they tend to perform poorly when identifying women and people of color. In 2018, a researcher at MIT showed that many top image classifiers misclassify lighter-skinned male faces with error rates of only 0.8% but misclassify darker-skinned females with error rates as high as 34.7%. More recently, the ACLU of Michigan filed a complaint in what is believed to be the first known case in the United States of a wrongful arrest caused by a false facial recognition match. These biases can make facial recognition technology particularly harmful in the context of law enforcement.

One example that has received attention recently is “Depixelizer.”


The project uses a powerful AI technique called a Generative Adversarial Network (GAN) to reconstruct blurred or pixelated images; however, machine learning researchers on Twitter found that when Depixelizer is given pixelated images of non-white faces, it reconstructs those faces to look white. For example, researchers found it reconstructed former President Barack Obama as a white man and Representative Alexandria Ocasio-Cortez as a white woman.

While the creator of the project probably didn’t intend to achieve this outcome, it likely occurred because the model was trained on a skewed dataset that lacked diversity of images, or perhaps for other reasons specific to GANs. Whatever the cause, this case illustrates how tricky it can be to create an accurate, unbiased facial recognition classifier without specifically trying.

Preventing the abuse of facial recognition systems

Currently, there are three main ways to safeguard the public interest from abusive use of facial recognition systems.

First, at a legal level, governments can implement legislation to regulate how facial recognition technology is used. Currently, there is no US federal law or regulation governing the use of facial recognition by law enforcement. Many local governments are passing laws that either completely ban or heavily regulate the use of facial recognition systems by law enforcement; however, this progress is slow and may result in a patchwork of differing regulations.

Second, at a corporate level, companies can take a stand. Tech giants are currently evaluating the implications of their facial recognition technology. In response to the recent momentum of the Black Lives Matter movement, IBM has stopped development of new facial recognition technology, and Amazon and Microsoft have temporarily paused their collaborations with law enforcement agencies. However, facial recognition is no longer a domain limited to large tech firms. Many facial recognition systems are available in the open-source domain, and a number of smaller tech startups are eager to fill any gap in the market. For now, newly enacted privacy laws like the California Consumer Privacy Act (CCPA) do not appear to provide adequate defense against such companies. It remains to be seen whether future interpretations of the CCPA (and other new state laws) will ramp up legal protections against questionable collection and use of such facial data.

Lastly, at an individual level, people can attempt to take matters into their own hands and take steps to evade or confuse video surveillance systems. A number of accessories, including glasses, makeup, and t-shirts, are being created and marketed as defenses against facial recognition software. Some of these accessories, however, make the person wearing them more conspicuous. They may also not be reliable or practical. Even if they worked perfectly, people cannot wear them constantly, and law-enforcement officers can still ask individuals to remove them.

What is needed is a solution that allows people to block AI from acting on their own faces. Since privacy-encroaching facial recognition companies rely on social media platforms to scrape and collect user facial data, we envision adding a “DO NOT TRACK ME” (DNT-ME) flag to images uploaded to social networking and image-hosting platforms. When platforms see an image uploaded with this flag, they respect it by adding adversarial perturbations to the image before making it available to the public for download or scraping.

Facial recognition, like many AI systems, is vulnerable to small-but-targeted perturbations which, when added to an image, force a misclassification. Adding adversarial perturbations to images can stop facial recognition systems from linking two different images of the same person. Unlike physical accessories, these digital perturbations are nearly invisible to the human eye and maintain an image's original visual appearance.

(Above: Adversarial perturbations from the original paper by Goodfellow et al.)
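To make the idea concrete, here is a minimal toy sketch of the fast-gradient-sign approach from that paper, using a hypothetical linear classifier rather than a real face model (plain Python, purely illustrative; the weights and inputs are invented for the example):

```python
# Toy fast-gradient-sign perturbation against a linear classifier.
# For score(x) = w . x, the gradient with respect to x is just w,
# so nudging each feature a small step against sign(w) lowers the score.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [1.0, -1.0, 0.5]   # hypothetical classifier weights
x = [0.4, 0.1, 0.2]    # original input, classified as a "match"
eps = 0.3              # perturbation budget per feature

assert score(w, x) > 0  # original prediction: match

# Adversarial example: move every feature by eps against the gradient.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

assert score(w, x_adv) < 0  # visually similar input, prediction flipped
print(x_adv)                # each coordinate moved by at most eps
```

Real image-space perturbations work the same way, just over millions of pixel values and a deep network's gradients instead of three features and a dot product.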

This approach of DO NOT TRACK ME for images is analogous to the DO NOT TRACK (DNT) approach in the context of web-browsing, which relies on websites to honor requests. Much like browser DNT, the success and effectiveness of this measure would rely on the willingness of participating platforms to endorse and implement the method – thus demonstrating their commitment to protecting user privacy. DO NOT TRACK ME would achieve the following:

Prevent abuse: Some facial recognition companies scrape social networks in order to collect large quantities of facial data, link them to individuals, and provide unvetted tracking services to law enforcement. Social networking platforms that adopt DNT-ME will be able to block such companies from abusing the platform and defend user privacy.

Integrate seamlessly: Platforms that adopt DNT-ME will still receive clean user images for their own AI-related tasks. Given the special properties of adversarial perturbations, they will not be noticeable to users and will not affect user experience of the platform negatively.

Encourage long-term adoption: In theory, users could introduce their own adversarial perturbations rather than relying on social networking platforms to do it for them. However, perturbations created in a “black-box” manner are noticeable and are likely to break the functionality of the image for the platform itself. In the long run, a black-box approach is likely to either be dropped by the user or antagonize the platforms. DNT-ME adoption by social networking platforms makes it easier to create perturbations that serve both the user and the platform.

Set precedent for other use cases: As has been the case with other privacy abuses, inaction by tech firms to contain abuses on their platforms has led to strong, and perhaps over-reaching, government regulation. Recently, many tech companies have taken proactive steps to prevent their platforms from being used for mass-surveillance. For example, Signal recently added a filter to blur any face shared using its messaging platform, and Zoom now provides end-to-end encryption on video calls. We believe DNT-ME presents another opportunity for tech companies to ensure the technology they develop respects user choice and is not used to harm people.

It’s important to note, however, that although DNT-ME would be a great start, it only addresses part of the problem. While independent researchers can audit facial recognition systems developed by companies, there is no mechanism for publicly auditing systems developed within the government. This is concerning considering these systems are used in such important cases as immigration, customs enforcement, court and bail systems, and law enforcement. It is therefore absolutely vital that mechanisms be put in place to allow outside researchers to check these systems for racial and gender bias, as well as other problems that have yet to be discovered.

It is the tech community’s responsibility to avoid harm through technology, but we should also actively create systems that repair harm caused by technology. We should be thinking outside the box about ways we can improve user privacy and security, and meet today’s challenges.

Saurabh Shintre and Daniel Kats are Senior Researchers at NortonLifeLock Labs.




Detroit cops employed facial recognition algos that only misidentify suspects 96 per cent of the time




In brief Cops in Detroit have admitted using facial-recognition technology that fails to accurately identify potential suspects a whopping 96 per cent of the time.

The revelation was made by the American police force's chief James Craig during a public hearing this week. Craig was grilled over the wrongful arrest of Robert Williams, who was mistaken for a shoplifter by facial-recognition software used by officers.

“If we would use the software only [to identify subjects], we would not solve the case 95-97 per cent of the time,” Craig said, Vice first reported. “That’s if we relied totally on the software, which would be against our current policy … If we were just to use the technology by itself, to identify someone, I would say 96 per cent of the time it would misidentify.”

The software was developed by DataWorks Plus, a biometric technology biz based in South Carolina. Multiple studies have demonstrated facial-recognition algorithms often struggle with identifying women and people with darker skin compared to Caucasian men.

US national AI cloud mulled

A bipartisan law bill calling for the US government to set up a cloud platform allowing researchers to access computational resources for public AI research was submitted to Congress for consideration, this week.

The National AI Research Resource Task Force Act was drawn up by politicians across the House of Representatives and the Senate. Several universities and research institutions, as well as companies including Google, OpenAI, and Nvidia, announced their support for the legislation.

“It is an essential first step towards establishment of a national resource that would accelerate and strengthen AI research across the U.S. by removing the high-cost barrier to entry of compute and data resources,” said Eric Schmidt, ex-CEO of Google and chairman of the National Security Commission on Artificial Intelligence.

“If realized, this infrastructure would democratize AI R&D outside of elite universities and big technology companies and further enable the application of AI approaches across scientific fields and disciplines, unlocking breakthroughs that will drive growth in our economy and strengthen national security.”

Transformer models just keep getting bigger

Folks over at Google have built a set of new API tools to scale up machine-learning models that contain more than 600 billion parameters.

The paper [PDF] is pretty technical, describing various techniques to split up and train such large models more easily. The Googlers demonstrated their ideas by testing them on a massive transformer-based machine-translation model: they trained a system containing over 600 billion parameters on 2,048 TPU v3 math accelerators for four days.

The giant model was able to translate 100 different languages into English and “achieved far superior translation quality compared to prior art,” the researchers claimed. The system is a beefed up version of Sparsely-Gated Mixture-of-Experts, a system introduced in 2017 that initially had 137 billion parameters. ®
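For a sense of scale, a quick back-of-envelope calculation (my own arithmetic; it assumes 32-bit weights, which the paper's actual training precision may not match) shows why such a model has to be split across accelerators at all:

```python
# Rough sizing of a 600-billion-parameter model across 2,048 accelerators.
params = 600_000_000_000   # ~600 billion parameters
bytes_per_param = 4        # assuming 32-bit floats (illustrative)
accelerators = 2048        # TPU v3 chips used in the run

total_bytes = params * bytes_per_param
per_chip_params = params / accelerators

print(f"raw weight storage: {total_bytes / 1e12:.1f} TB")            # 2.4 TB
print(f"parameters per accelerator: {per_chip_params / 1e6:.0f} M")  # ~293 M
# Weights alone far exceed any single accelerator's memory, which is
# why partitioning/sharding techniques like these are needed.
```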

