
IOT

Keep an OpenMV Mind


Five new products to support the OpenMV H7 Camera, a FLIR Lepton Module, and a new version of the :MOVE mini buggy.


Welcome, welcome, welcome! We have a load of new products to showcase today and it all starts with five new supporting products for the popular OpenMV H7 Camera, including WiFi and LCD shields, and three unique lens and module options. One of those options includes the ability to add a FLIR Lepton module, so we are now offering the sensor on its own! Rounding out the day we also have a new version of the :MOVE mini buggy for micro:bit. Now, let’s take a closer look!


Customize your OpenMV H7 Cam!

The WiFi Shield gives your OpenMV Cam the ability to connect to the Internet wirelessly. This shield features an ATWINC1500 FCC Certified WiFi module that can transmit data at up to 48 Mbps, making it perfect for streaming video from the OpenMV Camera. Your OpenMV Cam’s firmware already has built-in support for controlling the WiFi Shield using the network module.
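For reference, connecting takes only a few lines of MicroPython in the OpenMV IDE. The sketch below is a minimal example assuming firmware with the WINC1500 driver; the SSID and key are placeholders for your own network.

```python
# Minimal OpenMV WiFi Shield sketch (MicroPython). Assumes OpenMV firmware
# with the WINC1500 driver; SSID and KEY are placeholders.
import network

SSID = "your-network"
KEY = "your-password"

wlan = network.WINC()                                   # ATWINC1500 on the WiFi Shield
wlan.connect(SSID, key=KEY, security=wlan.WPA_PSK)      # join the access point
print("connected:", wlan.ifconfig())                    # IP address, netmask, gateway, DNS
```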


The LCD Shield gives your OpenMV Camera the ability to display what it sees on-the-go while not connected to your computer. This shield features a 1.8″ 128×160 16-bpp (RGB565) TFT LCD display with a controllable backlight. Your OpenMV Cam’s firmware already has built-in support for controlling the LCD Shield using the LCD module.
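Mirroring the camera to the shield is similarly short; the sketch below follows the stock OpenMV lcd example and assumes standard firmware with the lcd module.

```python
# Minimal OpenMV LCD Shield sketch (MicroPython), adapted from the stock
# lcd example; assumes standard OpenMV firmware with the lcd module.
import sensor, lcd

sensor.reset()
sensor.set_pixformat(sensor.RGB565)   # 16-bpp RGB565, matching the shield's panel
sensor.set_framesize(sensor.QQVGA2)   # 128x160, the LCD's native resolution
lcd.init()

while True:
    lcd.display(sensor.snapshot())    # show what the camera sees on the shield
```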


The Global Shutter Camera Module allows your OpenMV Cam to capture high-quality grayscale images unaffected by motion blur. It is built around the MT9V034 global shutter image sensor, which can take snapshot pictures on demand and run at 80 FPS in QVGA mode, 200 FPS in QQVGA mode, and 400 FPS in QQQVGA mode.
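Capturing grayscale frames uses the usual OpenMV snapshot loop; the sketch below is a minimal example, and the frame rate you see depends on the frame size you pick.

```python
# Minimal grayscale capture loop (MicroPython), the standard OpenMV pattern.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # the MT9V034 is a monochrome global shutter sensor
sensor.set_framesize(sensor.QVGA)        # drop to QQVGA or QQQVGA for higher frame rates
sensor.skip_frames(time=2000)            # let the sensor settle

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()              # grab a frame on demand
    print(clock.fps())
```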


The FLIR® Lepton® Adapter Module allows your OpenMV Camera to interface with the FLIR® Lepton® (version 1, 2, or 3) thermal imaging sensors for thermal vision applications. Combining machine vision with thermal imaging allows you to better pinpoint or identify objects whose temperature you wish to measure, with astounding accuracy.

In order to help support this module, we are now offering the FLIR Lepton 2.5 Thermal Imaging Module on its own as well. Please be aware that we currently have a limit of one module per order.


The OpenMV Ultra Wide Angle Lens gives your OpenMV Camera the ability to see a wider field of view (FOV). This lens can easily be screwed into your existing module and has about a 100° FOV. The standard lens that ships with your OpenMV Camera has about a 70° FOV, which is good but not ideal for security applications.


The Kitronik :MOVE mini MK 2 buggy kit for the BBC micro:bit is the latest version of the ever popular :MOVE mini that provides a fun introduction to robotics. The :MOVE mini is a two-wheeled robot suitable for autonomous operation, remote control projects via a Bluetooth application, or being controlled using a second BBC micro:bit via the micro:bit’s radio functionality.


That’s it for this week! As always, we can’t wait to see what you make! Shoot us a tweet @sparkfun, or let us know on Instagram or Facebook. We’d love to see what projects you’ve made!


Source: https://www.sparkfun.com/news/3314

IOT

RT-Thread Studio IoT IDE v2.1.0 Update: Fresh Boards, NXP, and MicroChip



RT-Thread Studio IDE v2.1.0 has been released! If you read the last article, you already know the main features of RT-Thread Studio and may have downloaded it for development. So let's see what's new in v2.1.0.

Make Your Own Board Support Packages!

RT-Thread Studio V2.1.0 offers a tool, backed by tutorials, that helps developers create a board support package (BSP) visually. Developers can now easily make a BSP and upload it online via the SDK Manager.

The BSP tool supports graphical configuration of dev board information, documentation, and projects. Each configuration item shows a prompt in the interface to explain what it does, and, as a nice touch from the Studio team, the configuration information can now be previewed. Check out this tutorial to make a BSP yourself.

40+ Fresh BSPs Online

More than 40 new board support packages are supported in RT-Thread Studio V2.1.0, bringing the total to 70 BSPs and covering vendors such as Allwinner, AlphaScale, ArteryTek, Bluetrum, GigaDevice, Microchip, MindMotion, NXP, ST, TI, and Synwit.

In particular, the RT-Thread V4.0.3 source resource pack has been added in RT-Thread Studio V2.1.0.

Support MDK Development

RT-Thread Studio v2.1.0 supports bilateral synchronous co-development with MDK projects. You can import an existing RT-Thread MDK project directly into RT-Thread Studio, and the MDK configuration will be automatically synchronized with the RT-Thread Studio project.

RT-Thread Studio provides a bilateral synchronization mechanism that allows a project to switch between MDK and RT-Thread Studio at any time. Configuration items such as C/C++, ASM, and Linker settings can be edited in RT-Thread Studio and are automatically synchronized to the MDK project when the configuration is completed. If you modify configurations in MDK, you can manually trigger synchronization in RT-Thread Studio to pull the changes into the RT-Thread Studio project.

Support CubeMX Development

RT-Thread Studio v2.1.0 also integrates with STM32CubeMX: you can open the CubeMX settings directly from RT-Thread Studio. After configuration, click GENERATE CODE and the code generated by CubeMX is automatically copied into the RT-Thread Studio project directory and added to the compilation, with no further modifications required. You then compile, download, and debug programs as usual. Check out this tutorial for more information:

Improved QEMU Support with New Simulators

QEMU in RT-Thread Studio v2.1.0 adds two simulators, for the stm32f401 and stm32f410 series. You can download the latest version of QEMU in the SDK Manager. When configuring QEMU, select the emulator from the pull-down box in the Emulator configuration bar.

The configuration interface has also been updated. First, serial port configuration was added in this version: when a different serial port is selected, the standard I/O device is redirected to the corresponding serial port.

Second, SD card memory is now optional, covering situations where an SD card is not required. More importantly, commands such as -show-cursor have moved to Extra Command, where you can customize their parameters to make QEMU more flexible to use.

Download RT-Thread Studio V2.1.0

Have ideas for RT-Thread Studio? Talk with the team.

Questions while using RT-Thread Studio? Create a post in the RT-Thread Club!

Also published at: https://club.rt-thread.io/ask/question/56.html


Source: https://hackernoon.com/rt-thread-studio-iot-ide-v210-update-fresh-boards-nxp-and-microchip-fg3n33mm?source=rss


AI

Device monitoring and management startup Memfault nabs $8.5M




Memfault, a startup developing software for consumer device firmware delivery, monitoring, and diagnostics, today closed an $8.5 million series A funding round. CEO François Baldassari says the capital will enable Memfault to scale its engineering team and make investments across product development and marketing.

Slow, inefficient, costly, and reactive processes continue to plague firmware engineering teams. Often, companies recruit customers as product testers — the first indication of a device issue comes through users contacting customer service or voicing dissatisfaction on social media. With 30 billion internet of things (IoT) devices predicted to be in use by 2025, hardware monitoring and debugging methods could struggle to keep pace. As a case in point, Palo Alto Networks’ Unit 42 estimates that 98% of all IoT device traffic is unencrypted, exposing personal and confidential data on the network.

Memfault, which was founded in 2019 by veterans of Oculus, Fitbit, and Pebble, offers a solution in a cloud-based firmware observability platform. Using the platform, customers can capture and remotely debug issues as well as continuously monitor fleets of connected devices. Memfault’s software development kit is designed to be deployed on devices to capture data and send it to the cloud for analysis. The backend identifies, classifies, and deduplicates error reports, spotlighting the issues likely to be most prevalent.

Baldassari says that he, Tyler Hoffman, and Christopher Coleman first conceived of Memfault while working on the embedded software team at smartwatch startup Pebble. Every week, thousands of customers reached out to complain about Bluetooth connectivity issues, battery life regressions, and unexpected resets. Investigating these bugs was time-consuming — teams had to either reproduce issues on their own units or ask customers to mail their watches back so that they could crack them open and wire in debug probes. To improve the process, Baldassari and his cofounders drew inspiration from web development and infrastructure to build a framework that supported the management of fleets of millions of devices, which became Memfault.

By aggregating bugs across software releases and hardware revisions, Memfault says its platform can determine which devices are impacted and what stack they’re running. Developers can inspect backtraces, variables, and registers when encountering an error, and for updates, they can split devices into cohorts to limit fleet-wide issues. Memfault also delivers real-time reports on device check-ins and notifications of unexpected connectivity inactivity. Teams can view device and fleet health data like battery life, connectivity state, and memory usage or track how many devices have installed a release — and how many have encountered problems.

“We’re building feedback mechanisms into our software which allows our users to label an error we have not caught, to merge duplicate errors together, and to split up distinct errors which have been merged by mistake,” Baldassari told VentureBeat via email. “This data is a shoo-in for machine learning, and will allow us to automatically detect errors which cannot be identified with simple heuristics.”


IDC forecasts that global IoT revenue will reach $742 billion in 2020. But despite the industry’s long and continued growth, not all organizations think they’re ready for it — in a recent Kaspersky Lab survey, 54% said the risks associated with connectivity and integration of IoT ecosystems remained a major challenge.

That’s perhaps why Memfault has competition in Amazon’s AWS IoT Device Management and Microsoft’s Azure IoT Edge, which support a full range of containerization and isolation features. Another heavyweight rival is Google’s Cloud IoT, a set of tools that connect, process, store, and analyze edge device data. Not to be outdone, startups like Balena, Zededa, Particle, and Axonius offer full-stack IoT device management and development tools.

But Baldassari believes that Memfault’s automation features in particular give the platform a leg up from the rest of the pack. “Despite the ubiquity of connected devices, hardware teams are too often bound by a lack of visibility into device health and a reactive cycle of waiting to be notified of potential issues,” he said in a press release. “Memfault has reimagined hardware diagnostics to instead operate with the similar flexibility, speed, and innovation that has proven so successful with software development. Memfault has saved our customers millions of dollars and engineering hours, and empowered teams to approach product development with the confidence that they can ship better products, faster, with the knowledge they can fix bugs, patch, and update without ever disrupting the user experience.”

Partech led Memfault’s series A raise with participation from Uncork Capital, bringing the San Francisco, California-based company’s total raised to $11 million. In addition to bolstering its existing initiatives, Memfault says it’ll use the funding to launch a self-service version of its product for “bottom-up” adoption rather than the sales-driven, top-down approach it has today.


Source: https://venturebeat.com/2021/04/01/device-monitoring-and-management-startup-memfault-nabs-8-5m/


IOT

SoC Integration Complexity: Size Doesn’t (Always) Matter


It’s common when talking about complexity in systems-on-chip (SoCs) to haul out monster examples: application processors, giant AI chips, and the like. Breaking with that tradition, consider an internet of things (IoT) design, which can still challenge engineers with plenty of complexity in architecture and integration. This complexity springs from two drivers: very low power consumption, even using harvested MEMS power instead of a battery, and quick turnaround to build out a huge family of products based on a common SoC platform while keeping tight control on development and unit costs.


Fig. 1: Block diagram of a low-power TI CC26xx processor. (Sources: The Linley Group, “Low-Power Design Using NoC Technology”; TI)

For these types of always-on IoT chips, several blocks are needed: a real-time clock to wake the system up periodically – to sense, compute, communicate, and then go back to sleep; a microcontroller (MCU) for control, processing, and security features; and local memory and flash to store software. I/O is required for provisioning, debugging, and interfacing to multiple external sensors/actuators. Also necessary is a wireless interface, such as Bluetooth Low Energy, since the design is aimed first at warehouse applications, where relatively short-range links are sufficient.

This is already a complex SoC, and the designer hasn’t even started to think about adding more features. For a product built around this chip to run for years on a coin cell battery or a solar panel, almost all of this functionality has to be powered down most of the time. Most devices will have to be in switchable power domains and quite likely switchable voltage domains for dynamic voltage and frequency scaling (DVFS) support. A power manager is needed to control this power and voltage switching, which will have to be built/generated for this SoC. That power state controller will add control and status registers (CSRs) to ultimately connect with the embedded software stack.


Fig. 2: There are ten power domains in the TI CC26xx SoC. The processor has two voltage domains in addition to always-on logic (marked with *). (Sources: The Linley Group, “Low-Power Design Using NoC Technology”; TI)

Running through this SoC is the interconnect, the on-chip communications backbone connecting all these devices, interfaces, and CSRs. Remember that interconnects consume power, too, even passively, through clock toggling and even leakage power while quiescent. Because they connect everything, conventional buses are either all on or all off, which isn’t great when trying to eke out extra years of battery life. Designers also need fine-grained power management within the interconnect, another capability lacking in old bus technology.

How can a design team achieve extremely low power consumption in IoT chips like these? By dumping the power-hungry bus and switching to a network-on-chip (NoC) interconnect!

Real-world production chip implementations have shown that switching to a NoC lowers overall power consumption by anywhere from two to nine times compared to buses and crossbars. NoCs draw less power primarily because they occupy less die area than buses and crossbars, and because they support three levels of clock gating (local, unit-level, and root), which enables sophisticated implementation of multiple power domains. For the TI IoT chips, the engineering team implemented multiple overlapping power and clock domains to meet their use cases with the least amount of power possible, limiting current draw to just 0.55mA in idle mode. Using a NoC to reduce active and standby power allowed the team to create IoT chips that can run for over a year on a standard CR2032 coin battery.

Low power is not enough to create successful IoT chips. These markets are fickle with a need for low cost while meeting constantly changing requirements for wireless connectivity standards, sensors, display, and actuator interfaces. Now engineers must think about variants, or derivatives, based on our initial IoT platform architecture. These can range from a narrowband internet of things (NB-IoT) wireless option for agricultural and logistics markets to an audio interface alarm and AI-based anomaly detection. It makes perfect strategic sense to create multiple derivative chips from a common architectural SoC platform, but how will this affect implementation if someone made the mistake of choosing a bus? Conventional bus structures have a disproportionate influence on the floorplan. Change a little functionally, and the floorplan may have to change considerably, resulting in a de facto “re-spin” of the chip architecture, defeating the purpose of having a platform strategy. Can an engineer anticipate all of this while still working on the baseline product? Is there a way to build more floorplan reusability into that first implementation?

A platform strategy for low-power SoCs isn’t just about the interconnect IP. As the engineer tweaks and enhances each design by adding, removing or reconfiguring IPs, and optimizing interconnect structure and power management, the software interface to the hardware will change, too. Getting that interface exactly right is rather critical. A mistake here might make the device non-operational, but at least someone would figure that out quickly. More damaging to the bottom line would be a small bug that leaves a power domain on when it should have shut off. An expected 1-year battery life drops to three months. A foolproof memory map can’t afford to depend on manual updates and verification. It must be generated automatically. IP-XACT based IP deployment technology provides state-of-the-art capabilities to maintain traceability and guarantee correctness of this type of design data throughout the product lifecycle.
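As a toy illustration of the idea (not the IP-XACT standard or any vendor’s tooling), the Python sketch below generates a register header from a machine-readable block description and rejects overlapping addresses; all block names, base addresses, and offsets are hypothetical.

```python
# Illustrative sketch only: auto-generating a register header and catching
# address overlaps, standing in for what IP-XACT-driven flows automate.
# Block names, base addresses, and register offsets are hypothetical.
BLOCKS = {
    "pwr_ctrl": {"base": 0x40000000, "regs": {"PD_ENABLE": 0x00, "PD_STATUS": 0x04}},
    "rtc":      {"base": 0x40001000, "regs": {"WAKE_INTERVAL": 0x00, "CTRL": 0x04}},
}

def emit_header(blocks):
    seen = {}
    lines = []
    for block, info in blocks.items():
        for reg, offset in info["regs"].items():
            addr = info["base"] + offset
            if addr in seen:  # the kind of manual-update mistake described above
                raise ValueError(f"{block}.{reg} overlaps {seen[addr]} at {addr:#010x}")
            seen[addr] = f"{block}.{reg}"
            lines.append(f"#define {block.upper()}_{reg} (0x{addr:08X}u)")
    return "\n".join(lines)

print(emit_header(BLOCKS))
```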

Even though these designs are small compared to mega-SoCs, there’s still plenty of complexity, yet plenty of opportunity to get it wrong. At Arteris IP, we’re laser-focused on maximizing automation and optimization in SoC integration to make sure our users always get it “first time right.” Give us a call!

Kurt Shuler

Kurt Shuler is vice president of marketing at ArterisIP. He is a member of the US Technical Advisory Group (TAG) to the ISO 26262/TC22/SC3/WG16 working group and helps create safety standards for semiconductors and semiconductor IP. He has extensive IP, semiconductor, and software marketing experience in the mobile, consumer, automotive, and enterprise segments, working for Intel, Texas Instruments, and four startups. Prior to his entry into technology, he flew as an air commando in the US Air Force Special Operations Forces. Shuler earned a B.S. in Aeronautical Engineering from the United States Air Force Academy and an M.B.A. from the MIT Sloan School of Management.

Source: https://semiengineering.com/soc-integration-complexity-size-doesnt-always-matter/


Artificial Intelligence

Audio Analytics: Vital Technology for Autonomous Vehicles


Illustration: © IoT For All

Artificial Intelligence (AI) and Machine Learning (ML) are projected to play a major role in the transformation of the automotive industry by enabling future-state autonomous vehicles. With advances in supply chain management, manufacturing operations, mobility services, image and video analytics, and audio analytics, next-generation autonomous vehicles are poised to transform how consumers perceive the automobile. As these technologies continue to develop, the autonomous automotive industry is positioned to reach a global market size of nearly 60 billion USD by 2030.

In driverless cars, machine learning-based audio analytics consists of audio classification, NLP, voice/speech recognition, and sound recognition. Voice recognition in particular has become an integral part of autonomous vehicle technology, providing enhanced control for the driver. Until now, speech recognition in traditional cars was a challenge because of the lack of efficient algorithms, reliable connectivity, and processing power at the edge. Further, in-cabin noise reduced the performance of audio analytics, resulting in false recognition.

Audio analytics in machines has been a subject of constant research. With technological advancement, new products are coming online, like Amazon’s Alexa and Apple’s Siri. These systems are rapidly evolving through cloud computing technology, a capability that earlier recognition systems lacked.

Recently, various Machine Learning algorithms like kNN (K Nearest Neighbour), SVM (Support Vector Machine), EBT (Ensemble Bagged Trees), Deep Neural Networks (DNN), and Natural Language Processing (NLP) have made Audio Analytics more effective and better positioned to add value to autonomous vehicles.

In audio analytics, data is first pre-processed to remove noise, and then audio features are extracted from the audio data. Features such as MFCCs (Mel-frequency cepstral coefficients) and statistical features like kurtosis and variance are used. The frequency bands of the MFCC are equally spaced on the Mel scale, which closely matches the human auditory system’s response. Finally, the trained model is used for inference: a real-time audio stream is taken from the multiple microphones installed in the car, pre-processed, and its features extracted. The extracted features are passed to the trained model to recognize the audio correctly, which helps the autonomous vehicle make the right decision.
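As a rough sketch of that pipeline (assuming librosa, scipy, and numpy are available, and using a placeholder file name), the MFCC and statistical features might be extracted like this:

```python
# Feature-extraction sketch: MFCCs plus kurtosis and variance.
# Assumes librosa/scipy/numpy are installed; "clip.wav" is a placeholder.
import numpy as np
import librosa
from scipy.stats import kurtosis

signal, sr = librosa.load("clip.wav", sr=16000, mono=True)
signal, _ = librosa.effects.trim(signal)                 # crude noise/silence handling

mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # bands equally spaced on the Mel scale

features = np.concatenate([
    mfcc.mean(axis=1),                                   # mean of each MFCC coefficient
    mfcc.var(axis=1),                                    # variance of each coefficient
    [kurtosis(signal), np.var(signal)],                  # statistical features from the article
])
print(features.shape)                                    # one fixed-length vector per clip
```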

Data Processing & ML Model Training

With new technologies, the end user’s trust is key, and NLP is a game-changer for building this trust in autonomous vehicles. NLP allows passengers to control the car with voice commands, such as asking it to stop at a restaurant, change the route, stop at the nearest mall, switch lights on or off, open and close the doors, and more. This makes the passenger experience rich and interactive.

Let’s take a look at a few use cases where audio analytics provide benefits to autonomous vehicles.

Emergency Siren Detection

The siren of any emergency vehicle, such as an ambulance, fire truck, or police car, can be detected using various deep learning and machine learning models like the SVM (support vector machine). The SVM, a supervised learning model, is used for classification and regression analysis. The SVM classification model is trained on a large dataset of emergency siren sounds and non-emergency sounds. With this model, a system can be developed that identifies the siren sound so the autonomous car can make appropriate decisions and avoid dangerous situations. With this detection system, an autonomous car can decide to pull over and give way for the emergency vehicle to pass.
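A hedged scikit-learn sketch of such a classifier is shown below; the feature matrix and labels are random placeholders standing in for real siren/non-siren clips.

```python
# SVM siren-vs-other sketch (scikit-learn). X and y are random placeholders
# for per-clip feature vectors (e.g. the MFCC statistics above) and labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))            # placeholder features
y = rng.integers(0, 2, size=200)          # 1 = siren, 0 = other traffic sound

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
# A positive prediction on the live microphone stream would trigger the
# pull-over / give-way behavior described above.
```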

Engine Sound Anomaly Detection

Automatic early detection of possible engine failure could be an essential feature for an autonomous car. The car engine makes a certain sound when it works under normal conditions and a different sound when it is exhibiting problems. Among the many machine learning algorithms available, k-means clustering can be used to detect anomalies in engine sound. In k-means clustering, each data point of sound is assigned to one of k clusters based on which cluster centroid it is nearest to. Anomalous engine sound produces data points that fall outside the normal clusters and form part of an anomalous cluster. With this model, the health of the engine can be monitored constantly. If there is an anomalous sound event, the autonomous car can warn the user and help make proper decisions to avoid dangerous situations, potentially preventing a complete breakdown of the engine.
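A minimal sketch of this distance-to-centroid approach, with synthetic data in place of real engine recordings, could look like this:

```python
# K-means engine-sound anomaly sketch (scikit-learn): cluster features from
# healthy recordings, then flag samples far from every centroid. Data and
# the 99th-percentile threshold are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal_features = rng.normal(size=(500, 13))                 # placeholder healthy-engine features

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(normal_features)
train_dist = kmeans.transform(normal_features).min(axis=1)   # distance to nearest centroid
threshold = np.percentile(train_dist, 99)

def is_anomalous(sample):
    """True if a new clip's features fall outside the normal clusters."""
    return kmeans.transform(sample.reshape(1, -1)).min() > threshold

print(is_anomalous(rng.normal(loc=6.0, size=13)))            # far from normal -> True
```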

Lane Change on Honking

For an autonomous car to behave like a human-driven car, it must work effectively in the scenario where it should change lanes because the vehicle behind needs to pass urgently, indicated by honking. Random forest, a supervised machine learning algorithm, is well suited for this type of classification problem. As its name suggests, it creates a forest of decision trees and merges their outputs to classify accurately. A system can be developed using this model that identifies a certain pattern of horn use and takes the appropriate decision.
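A short scikit-learn sketch of that classifier follows; as before, the features and labels are placeholders rather than a real honking dataset.

```python
# Random-forest honk-pattern sketch (scikit-learn); X and y are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))            # e.g. MFCC statistics per audio window
y = rng.integers(0, 2, size=300)          # 1 = urgent honking behind, 0 = other noise

forest = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(forest, X, y, cv=5).mean())

forest.fit(X, y)   # a positive live prediction would feed the lane-change planner
```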

NLP (Natural Language Processing) processes human language to extract meaning, which can help make decisions. Rather than just giving commands, the occupant can actually speak to the self-driving car. Suppose you have given your autonomous car a name like Adriana; then you can say to your car, “Adriana, take me to my favorite coffee shop.” This is still a simple sentence to understand, but we can also make the autonomous car understand more complex sentences such as “take me to my favorite coffee shop and, before reaching there, stop at Jim’s home and pick him up.” Importantly, self-driving vehicles should not obey the owner’s instructions blindly, so as to avoid dangerous, life-threatening situations. To make effective decisions in such situations, autonomous vehicles need a more powerful NLP that actually interprets what humans have said and can echo back the consequences.

Thus, machine learning-based audio analytics contributes to the increasing popularity of autonomous vehicles through safety and reliability enhancements. As machine learning continues to develop, more and more service-based offerings are becoming available, providing audio analytics, NLP, voice recognition, and more to enhance passenger experience, on-road safety, and timely engine maintenance.

Source: https://www.iotforall.com/audio-analytics-vital-technology-for-autonomous-vehicles
