The delays in getting autonomous cars on the road are mentioned often on this site. A “breakthrough” LiDAR technology developed by Argo AI could change all that, finally putting commercial autonomous vehicles on the road to make deliveries and offer ride-sharing.
Argo LiDAR Technology
Argo AI introduced Argo LiDAR, which the company says overcomes the shortcomings that have kept autonomous delivery and ride-sharing parked until now.
Argo’s Self-Driving System (SDS) allows driverless cars to be aware of their surroundings – 360 degrees, day or night. Cars with Argo LiDAR are safe on city streets, suburban neighborhoods, and highways, according to the company’s blog post.
The breakthrough in the Argo AI LiDAR came about when Argo acquired a company developing long-range LiDAR. The resulting sensor has a range of 400 meters, letting it detect dark, low-reflectivity objects, while its ultra-high-resolution, photorealistic perception lets it identify small objects.
“Argo Lidar takes us to a whole new level of self-driving technology, unlocking our ability to power both delivery and ride-hail services,” said Argo AI CEO and founder Bryan Salesky. “The Argo Self-Driving System delivers the safety, scale, and service experience that businesses want and their customers demand, especially coming out of the pandemic.”
Applications of the LiDAR Technology
The Argo AI blog post shared the advantages the new LiDAR technology could bring to commercial autonomous driving.
- Safe for use in cities, suburbs, and highways, allowing an easy connection to a warehouse.
- Scaled use in six U.S. cities and Europe will take place this year.
- Service between addresses in urban and suburban areas would help with deliveries.
Argo AI has partnered with Ford Motor Company and Volkswagen Group to develop commercially-available autonomous vehicles.
“We have unparalleled autonomous driving technology and operations capabilities,” added Salesky. “Proving out these abilities every day, across six cities, from our nation’s capital to Miami to Silicon Valley, we are ready to enable the next phase of growth for delivery, retail, and ride-sharing partners.”
The LiDAR sensor, as part of the Argo SDS, joins custom sensors to allow commercial autonomous cars to:
- See dark vehicles that reflect less than 1 percent of light, even at long range and at night.
- Navigate left turns into oncoming traffic with a 360° view.
- Transition instantly from darkness to bright light.
- Distinguish between small, moving animals and vegetation.
The Argo AI Hardware Development team is working with a manufacturer to produce the LiDAR sensor. The first units produced are being road-tested, and there are plans with Ford and Volkswagen for widespread commercialization.
To show how long this technology has been in the works, consider that Apple was working on it two years ago. And yet, it has no autonomous car on the roads.
Image Credit: Argo AI Newsroom
The Doctor Will See You Now: Standard of Care for Patients
Remote working, remote learning, and even remote happy hour: the pandemic has forced us to adopt new ways of work and play to adapt to a new reality. And so, it was fitting that the pan-EU research project IntellIoT gathered (remote) health tech experts to discuss the future of (remote) healthcare against this very particular backdrop.
Expert commentary revolved around the promise and potential pitfalls of integrating artificial intelligence, machine learning, and wearable technology into the future of healthcare practice. Representatives from cardiovascular medicine, technology providers, pioneering startups, and medical research shared their insights on building tomorrow’s Medtech solutions for better patient outcomes, increased efficiency, accuracy, and safety, as well as lower costs of care.
From launching an “Uber for at-home medical tests” to envisioning a not-so-distant future where personalized, real-time remote treatment is the standard of care, read on to learn how leveraging the power of AI will determine the future of medicine.
Remote Monitoring Outperforms In-Person Consultations
Though at first glance it might seem contrary to common sense (or downright implausible), doctors can gain a more accurate picture of patients’ health through remote monitoring than during an in-person examination. One need only compare the breadth and depth of historical data points collected by a wearable device over a series of hours, days, or weeks to those few select moments during an on-site consultation: the sheer data volume—for example, heart rate measured regularly over the course of days, rather than for the few seconds during a doctor’s visit—paints a much clearer picture of a patient’s overall cardiovascular health.
“When I go to the doctor,” explains Jörn Watzke, “[they] can make very detailed analyses with all the medical equipment they have. But this is a one-shot measurement at that moment. The big advantage [of a wearable] is that you get the data 24/7; you get it historically…You get it in high resolution; you can get it in real-time, you get a lot of details and also measure multiple sensors…these are huge benefits.” Watzke is Senior Director Global Business Development & Sales at Garmin, whose wearable devices collect millions of health data points from each user, from familiar metrics like daily steps to more subtle measurements such as heart rate variability or blood oxygen levels.
Propelled by ever-increasing precision sensor technology, Garmin’s wearables use both acceleration and PPG (photoplethysmogram, to detect microvascular blood volume changes) sensors to measure heart rate and variability, as well as an additional sensor for oxygen saturation. This combination of health parameters, collected by a wearable device, can already identify symptoms such as fatigue, drowsiness, and stress and point the way to risk identification of atrial fibrillation or sleep apnea, among other conditions. While Garmin’s technology can be used for said risk assessment (as well as rehabilitation and disease prevention), these wearables are not medical devices but consumer goods and do not offer direct diagnoses or perform any treatments.
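Heart rate variability, one of the subtler metrics mentioned above, is typically derived from the intervals between successive heartbeats. Below is a minimal sketch of RMSSD, a standard HRV statistic; the sample values are illustrative and this is not Garmin's actual algorithm:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences between heartbeat
    (RR) intervals in milliseconds -- a common HRV metric."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# A single office reading yields one heart-rate number; a wearable
# streams every beat, so HRV can be computed continuously.
rr = [812, 790, 845, 830, 798, 815, 821]  # illustrative RR intervals (ms)
print(round(rmssd(rr), 1))  # → 29.1
```

This is exactly the kind of statistic that only makes sense with continuous data: a one-shot clinic measurement has no successive intervals to difference.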
Although wearable technology has made strides in recent years, there is still far to go. Watzke is convinced that it’s only a matter of time until ever-more reliable and precise sensors unlock as yet imaginary capabilities, such as noninvasive glucose measurement: “[I remember] when people said a PPG sensor would never work. Meanwhile, a PPG sensor brings really reliable data…We’ll see higher accuracy, and we’ll also see new sensors…it will be an exciting future.”
Show Me the Studies: Doctors Demand Proof
So how have wearable tech and its resulting datasets been integrated into actual medical practice? “As a cardiologist,” explains Maria Marketou, “I can say that cardiovascular medicine is at the forefront of many machine learning applications. We need AI because we need to bring big data information together and incorporate it into mainstream clinical practice.” Marketou is a Senior Consultant at the University Hospital Cardiology Clinic in Heraklion, Greece. From diagnosis to post-operative care, doctors already rely on AI to parse immense sums of data from wearable devices and interpret imaging: AI is used in radiology to more accurately detect and analyze details invisible to the human eye.
From a practitioner’s perspective, the theoretically positive implications for AI integration are fairly clear: speeding up administrative tasks means less time spent arranging appointments and follow-up care, resulting in more one-on-one time with patients. Increasingly complex data sets can be more accurately interpreted, reducing human error. Quickly identifying patterns could lead to automatically diagnosing diseases, and remote monitoring of patients via wearables that transfer personalized, real-time data improves the quality of care, increases compliance, and reduces costs, not to mention making healthcare more easily accessible for the patient.
But there are significant hurdles to implementation, not least a wariness among both doctors and patients about mixing computers and medicine. Doctors won’t incorporate new tools into their practice without the gold standard of proof: effectiveness demonstrated in clinical trials. Patients are wary of having their medical data tracked and shared. Plus, why should they trust a computer algorithm to make decisions about their health? And society views increasing automation with skepticism, raising fears of disappearing jobs.
While AI has the power to transform medical care in many respects, we shouldn’t jump too quickly to any conclusions, warns Marketou: “[Medicine] has several limitations…medical guidelines cannot cover all the clinical cases, and the variability of experience among physicians is high [and] the patient’s outcome is unpredictable…I’m convinced that AI networks are very powerful, but to succeed, we need to keep in mind that they will never be powerful enough to understand the complexity of medicine.” According to Marketou, it is essential to build solutions for healthcare complementary to doctors’ experience and expertise.
Keeping Humans In the Loop
The IntellIoT research project aims to tackle this in its medical use case. With additional workstreams in both manufacturing and agriculture, the project fosters the development of humanized IoT and AI devices and systems, championing end-user trust, adequate security, and privacy by design. The medical use case works with heart failure patients to apply AI, 5G, and IoT to improve the management of chronic disease in an outpatient setting.
In addition to remote monitoring and rehabilitation of patients and supporting clinicians as they care for them, the project will also identify predictors for positive or negative patient outcomes: in short, what types of environmental variables might predict whether a patient would fare better or worse?
One of the larger topics that the project has incorporated since its inception is the often thorny topic of winning end-user trust when merging AI, IoT, and healthcare. On the one hand, with the success of doctors and the health of patients at the center of the project, humans are kept in the technical development loop by default. On the other hand, how can you win patients’ trust and medical practitioners’ buy-in, considering the technological complexity at hand? The stakes for willing adoption on the part of both doctor and patient are high: in the case of a heart attack, between life and death. And in the case of adopting new technology, between tools with the power to heal and those that might hurt, both in terms of health outcomes and data privacy concerns.
While data privacy and security questions are front-of-mind in the development of AI and IoT health tech solutions, there are other data concerns afoot. “Any AI system is only as good as the quantity and quality of its data,” says Marketou from her office in Heraklion. This sentiment was echoed by Fadi Haddad, Head of Global Business Development at Medicus AI, a Vienna-based health tech startup leveraging AI to help people better understand and care for their health. “This is one of the bigger challenges currently in health care…that health data is not harmonized or structured in one place. You have different data structures, namings, units, ranges, and in some cases, different file types.” Medicus developed its own AI-based interoperability engine to address this problem, one that merges different information systems by leveraging natural language processing, optical character recognition, and machine learning.
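A toy illustration of the harmonization problem Haddad describes: mapping differently named, differently united lab results onto one canonical structure. The field names, aliases, and canonical schema below are hypothetical, not the Medicus engine's actual design (only the mmol/L-to-mg/dL conversion factor for glucose is standard):

```python
# Unit conversion factors to a canonical unit (mg/dL for glucose).
UNIT_FACTORS = {("glucose", "mg/dL"): 1.0, ("glucose", "mmol/L"): 18.016}

# Different lab systems name the same analyte differently.
ALIASES = {"glu": "glucose", "blood_glucose": "glucose", "glucose": "glucose"}

def harmonize(record):
    """Normalize one lab result onto a single canonical structure."""
    name = ALIASES[record["name"].lower()]
    value = record["value"] * UNIT_FACTORS[(name, record["unit"])]
    return {"analyte": name, "value": round(value, 1), "unit": "mg/dL"}

print(harmonize({"name": "GLU", "value": 5.5, "unit": "mmol/L"}))
# → {'analyte': 'glucose', 'value': 99.1, 'unit': 'mg/dL'}
```

The real engine uses NLP, OCR, and machine learning rather than hand-written lookup tables, but the goal is the same: one record format regardless of the source system.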
The result? With the additional support of doctors, Medicus uses this clean data to build its medical reasoning engine, which delivers personalized information to the end-user by interpreting various data points. The company is working to build products that support people in reaching optimal health outcomes and is on a mission to bridge the gaps in remote medical practice, providing complete and comprehensive remote care. For example, Medicus has piloted a project supporting pregnant people in Hong Kong and recently launched a door-to-door phlebotomy testing pilot, enabling remote patients to book an appointment to have a test sample collected from the comfort of their home. “It works like an Uber ordering experience. You can track when the phlebotomist is coming to you and when the results are ready,” explains Haddad. Sent to the Medicus app, the results are in turn explained and interpreted for the patient.
The Future of Medicine: Early Detection, Prevention, and Personalization
So how soon will we all be signing up for at-home phlebotomy tests? Perhaps not too far from now. The COVID-19 pandemic has accelerated telemedicine adoption, priming the populace to embrace remote care: “We now see a huge push into telemedicine because people were really forced during the pandemic…to offer the service to patients,” explains Watzke, the Senior Director at Garmin. “Doctors who were a little bit reserved are now much more open [to telemedicine].”
But the revolution in medical care doesn’t stop at new modes of consultation and testing: with increased ability to collect and assess patients’ medical data, doctors and insurers (and patients themselves) can anticipate a new push for early detection of diseases, resulting in prevention measures, and the personalization of care: imagine a future where based upon data collected from a wearable device, cross-referenced to medical history and clinical study data, medical providers can catch pre-diabetes or even the beginnings of depression, spurring earlier intervention. In another twist, this variety of data—measuring and merging various systems including cardiovascular, pulmonary, and/or endocrine, etc.—can provide a more holistic view of illness and better understand the interrelation of the various parts of the human body.
The future of MedTech is bright, with thrilling opportunities for leveraging big data, AI, sensor technology, and wearables to improve patient outcomes and support medical professionals.
Hyperscalers, the Edge, and Cloud: What Does It All Mean?
With the proliferation of IoT, connectivity and computing technologies are becoming more diverse. It can seem there are so many ways to power an IoT ecosystem, and it can be hard to cut through the hype of buzzwords to understand what it means for your unique business case.
What are Hyperscalers?
This term stems from hyper-scale computing, which is an agile method of processing data. Depending on data traffic, scale can quickly go up or down. Hyperscalers have taken this computing method and applied it to data centers and the cloud to accommodate fluctuating demand.
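The elasticity described above can be reduced to a toy scaling rule: provision just enough identical instances to absorb current traffic, within fixed bounds. The capacities and limits here are made-up illustrative numbers, not any provider's actual policy:

```python
import math

def target_instances(requests_per_sec, capacity_per_instance=500,
                     min_instances=2, max_instances=100):
    """Toy hyperscale autoscaling rule: scale instance count up or
    down with traffic, clamped between fixed floor and ceiling."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

for load in (100, 5_000, 120_000):
    print(load, "->", target_instances(load))
# 100 -> 2   (floor keeps minimum redundancy)
# 5000 -> 10
# 120000 -> 100  (ceiling caps spend)
```

Real autoscalers add smoothing and cooldown periods so the fleet doesn't thrash on brief traffic spikes, but the core up-and-down motion is this simple.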
Major hyperscalers offer Infrastructure as a Service (IaaS) to meet the needs of enterprises seeking digital platforms. Essentially, hyperscalers manage the physical infrastructure while the end-user customizes a virtualized computing infrastructure.
The infrastructure layer in a technology stack is where the computing power lies. IoT is gaining traction amongst hyperscalers and telecoms, who are beginning to invest in building IoT platforms. An IoT platform helps bring IoT solutions to market faster and streamlines the process to deployment.
This is a significant nod toward the importance of IoT in the technology realm, but hyperscalers certainly aren’t the first to develop IoT platforms. It is important to research the benefits of using an IoT platform from a cloud services provider versus an IoT expert.
IoT Infrastructure Using the Cloud or Edge
IoT sensors and devices are responsible for collecting data, and connectivity is responsible for communicating the data. The part of the infrastructure that computes the data can either be the cloud or the edge.
The cloud is a centralized approach to process data and works well for power and capacity. It allows scalability for enterprises, and the pay-as-you-go model makes it an affordable approach to smaller organizations that do not want to build out an entire computing infrastructure in-house.
Edge computing is rising in popularity due to the speed at which data can be computed. Instead of sending data from the edge to the cloud, the data is computed right at the edge. The edge can mean several things, though.
- Telco edge: Computing on the telco edge is situated near mobile cell sites and/or nearby data centers and in combination with the cloud. This marries the low latency and reduced backhaul benefits of data centers to the scalability and mobility of the cloud.
- Network edge: The network edge provides scalability and agility to meet demand through lower latency and greater throughput and reliability. The network edge sits outside the network core and comprises data centers, routers, and fixed wireless access.
- Device edge: Edge devices will collect data via a sensor, and the device itself will compute, making this the fastest edge computing option of the three.
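The device-edge pattern above can be sketched as: compute on the device, then transmit only a compact summary upstream instead of every raw sample. This is a hypothetical example; the field names and thresholds are invented:

```python
def summarize_on_device(samples, alarm_threshold=90.0):
    """Reduce a window of raw sensor readings to the few numbers
    worth sending over the network -- the essence of device-edge
    computing: bandwidth and latency savings via local processing."""
    return {
        "count": len(samples),
        "mean": sum(samples) / len(samples),
        "max": max(samples),
        "alarm": max(samples) > alarm_threshold,  # decide locally, instantly
    }

window = [71.2, 70.8, 72.5, 95.3, 71.9]  # e.g. temperature readings
print(summarize_on_device(window))
```

The alarm decision happens on the device with no round trip to the cloud, which is why the device edge is the fastest option of the three.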
Cloud vs. Edge: Which to Choose?
When it comes to choosing between the edge and the cloud, it boils down to speed and cost. Artificial intelligence and machine learning in robotics are use cases where edge computing makes the most sense. Processing close to the device level makes sense when there is less tolerance for latency. Also, in use cases such as autonomous vehicles, a slower reaction time from a machine in automated processes can spell disaster.
But not all IoT use cases are mission-critical, and latency isn’t always a primary factor. A smart agriculture deployment, for instance, wouldn’t sink costs into edge devices or into developing a network or telco edge, since low-power devices with slower processors handle its data aggregation.
The Solution to Overcoming Cyber Threats in a 5G World and Beyond
Click to learn more about author Michael Abad-Santos.
As 5G networks continue to roll out worldwide, the looming question of how we can overcome cyber threats continues to elude even the experts. In fact, it is for this very reason the National Security Agency (NSA) released unclassified 5G security guidance, which includes Potential Threat Vectors to 5G Infrastructure and other documents that examine ways to mitigate such threats in order to aid government and industry in integrating security into every aspect of the 5G ecosystem.
Current NSA 5G security recommendations emphasize Zero Trust as the foundation of a multilayered approach. Forrester first introduced the concept in 2010. It is based on the assumption that everything – and everyone with an Internet of Things (IoT) device – is already compromised, and it calls for validating devices, apps, individual users, and networks before access is granted, along with processes for detecting and remediating threats.
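The validate-before-access flow can be sketched as a gate that assumes nothing by default. This is an illustrative toy, not the NSA's or Forrester's reference design; every field name here is hypothetical:

```python
def grant_access(device, user, app, revoked):
    """Zero-trust gate: every party must independently prove itself
    on every request; a single failed check denies access."""
    checks = [
        device.get("attested") is True,      # device identity verified
        user.get("mfa_passed") is True,      # user authenticated with MFA
        app.get("signature_valid") is True,  # application integrity intact
        device.get("id") not in revoked,     # not a known-compromised device
    ]
    return all(checks)

req = {"id": "sensor-17", "attested": True}
print(grant_access(req, {"mfa_passed": True},
                   {"signature_valid": True}, revoked=set()))  # → True
```

The key property is the default: an empty or malformed credential fails every check, so unknown parties are denied rather than admitted.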
Although it is being widely embraced, particularly by organizations challenged with a dispersed workforce that has made the network perimeter all but disappear, it has not provided the panacea for overcoming cyber threats today, nor likely into the 5G future. Here’s why: 5G architecture is based on a virtualized, highly distributed, software-defined infrastructure and relies heavily on application programming interfaces (APIs) to support service functions, as well as radio frequency (RF) signals for transmitting data, which are inherently insecure and easy to intercept. Given that billions of IoT devices relying on RF for Wi-Fi, Bluetooth, Bluetooth Low Energy, Zigbee, and cellular connectivity are in use every day, it should come as no surprise that thousands of hacks occur on a daily basis.
To overcome – not just mitigate – cyber threats in a 5G world and beyond, it may require the use of another technology, Optical Wireless Communication (OWC), also known as Free-Space Optical (FSO) communication. Transmitting data via narrow beams of light, OWC has been in use by NASA for decades to support critical exploration activities such as its Laser Communications Relay Demonstration and the Orion Exploration Mission 2 Optical Communications project. Because this point-to-point, line-of-sight technology uses lasers focused on the intended recipient, it is extremely difficult not only to intercept but even to detect, which is why government agencies such as the U.S. Department of Defense and many commercial users like Starlink are using OWC for satellite communications. Because OWC operates in unregulated free-space optical spectrum, there is also no cost or licensing required for use. That’s a massive advantage when considering that the recent acquisition cost of 5G RF spectrum at the FCC Auction 107 topped out at $81.11 billion (becoming the most expensive mid-band 5G spectrum auction ever worldwide).
Faster, more secure, and more reliable than RF – even for the most bandwidth-intensive applications – OWC has become a safe, well-established technology used to augment existing RF and fiber capabilities to close the last-mile gap in internet access. Looking to the future and perhaps the sixth generation of connectivity, OWC may become the leading technology holding the key to securing the datasphere. For this to occur, several issues must be addressed and solved, including potential interference caused by rain, dense clouds, fog, snow, heavy pollution, and high winds. In the meantime, we will likely hear more about this amazing technology, not just in the space and defense industries but in the commercial sector, where the stakes, as well as risks, have never been higher.
Detecting Humans in Smart Homes with Computer Vision
Co-founder and CTO at Integra Sources; PhD in Physics and Mathematics
Computer vision (CV) is intended to detect, process, and distinguish objects in digital images and videos. Completing such tasks requires different technologies, libraries, and frameworks. OpenCV provides a big choice of tools used for object detection, face recognition, image restoration, and many other applications. Here, you’ll learn how to use OpenCV for real-time human detection in the Internet of Things (IoT) home automation.
The OpenCV Library Overview
OpenCV is a set of libraries with over 2500 solutions that vary from classical machine learning (ML) algorithms, such as linear regression, support vector machines (SVMs), and decision trees, to deep learning and neural networks. OpenCV is open-source—it can be freely used, modified, and distributed under the Apache license.
The library can run on Windows, Linux, macOS, Android, and iOS, and supports software written in C/C++, Python, and Java. It has strong cross-platform capability and compatibility with other frameworks. For example, you can easily port and run TensorFlow, Caffe, PyTorch, and other models in OpenCV with almost no adjustments.
The OpenCV library is equipped with the GPU module that provides high computational power to capture videos, process images, and handle other operations in real-time. Leveraging the OpenCV GPU module, developers can create advanced algorithms for high-performance computer vision applications.
OpenCV in IoT Home Automation
The OpenCV library has found wide applications in smart homes—the Internet of Things systems that assist people in running household functions. The networks of IoT devices can control lights, regulate indoor temperature, water plants, and turn on the TV.
Providing security is an integral part of IoT home automation. Deploying computer vision applications for people detection improves safety in many alarm and video intercom systems. Implementing OpenCV face recognition can prevent strangers from entering a house or apartment.
Apart from protecting homes from intruders, it is necessary to ensure the safety of people who live alone and cannot always take care of themselves. Computer vision people detection systems based on OpenCV algorithms and neural networks can remotely monitor elderly people and people with health problems and disabilities. In case of emergency, they can alert relatives or caregivers. Here, we’ll share our personal experience in building a remote monitoring system for real-time human detection with OpenCV.
How to Use Computer Vision to Detect People in Smart Homes with the OpenCV Library
The primary task of the Algodroid project was to integrate a CV system into an IoT solution to recognize life-threatening situations and provide safety for the elderly in their homes. We used OpenCV to implement computer vision for people detection and skeleton visualization.
To segment a human skeleton, we used TensorFlow-based BodyPix. This is an open-source ML model that segments a human body into 24 parts and visualizes each part as a set of pixels of the same color.
After segmenting a body, our human detection system determined its biomechanical data, such as body geometry and movements. These parameters were calculated and classified by OpenCV motion tracking algorithms.
By using simulation libraries, we created a physical model of a human body based on real proportions, biometric and biomechanical data. We placed the model in a virtual environment and generated probable scenarios of human actions. Based on these scenarios, the algorithms learned to estimate the posture.
Implementing people detection using computer vision becomes challenging in indoor spaces, such as homes. A person can be hidden behind a piece of furniture, so the camera will not capture the entire body, and the system will not get the complete biometric data. After trying different approaches, we combined neural networks with decision trees—classical machine learning algorithms available in the OpenCV library. These algorithms are fast and easy to understand. They can learn from small amounts of data or when some data is missing.
The biggest advantage of using OpenCV for this project was the opportunity to combine different methods and approaches within one platform. Our fall detection system comprised both machine learning and deep learning solutions. As a result, the system could detect 10 target states out of 10 activities. The DNN module allowed us to train neural networks on other frameworks and then successfully run them on OpenCV.
We developed a communication system that collected data from all cameras installed in the house. After identifying a fall, it could send the picture and notify an emergency medical service for further help.
Modern smart homes and other IoT systems often employ machine learning and artificial intelligence technologies and solutions. For example, remote monitoring systems based on computer vision can secure homes and look after elderly people who live independently. Cameras track people’s activities, and algorithms analyze their behaviors, identifying emergencies.
OpenCV is an open-source library rich in tools for building real-time CV applications. Its algorithms can process images, detect and track objects and people, describe their features, and fulfill many other tasks. To get more information about this library, you can visit our blog and learn how to implement OpenCV for people detection in IoT home automation and what challenges you can face while working on similar projects.