
Developing a Critical Infrastructure Cybersecurity Strategy


Given the blossoming of attacks on organizations — from energy to health care firms — the need for robust critical infrastructure cybersecurity has expanded. 

Takeaways include the following:

  • Critical infrastructure protection is a long-standing priority, but many organizations lag in their response to cyberthreats. 
  • COVID-19 has broadened the definition of critical infrastructure while also providing a reminder for enterprise companies to question which systems are essential to operations. This article builds on the advice in chapter one of this series in “Addressing IoT Security Challenges From the Cloud to the Edge.” 
  • Organizations managing critical infrastructure should develop a proactive cybersecurity posture, but coronavirus-led disruptions heighten the challenge. 

By now, the need for comprehensive cybersecurity for critical infrastructure is clear. Public accounts are widespread concerning the risk of malicious actors targeting the electrical grid, dams, voting systems and other federally designated critical infrastructure. But the majority of organizations that provide essential services have taken only incremental steps in addressing cyber risk. “Many [operational technology] organizations have pretty nascent cybersecurity programs,” said Sean Peasley, a partner at Deloitte. 

The term “critical infrastructure” initially referred to public works such as transportation infrastructure and public utilities, but, since the 1990s, the definition has steadily expanded. Sectors under the rubric now include, among other things, health care, energy and utilities, and various manufacturers. “And practically speaking, we’re finding out in the era of COVID, that critical infrastructure is even broader than we thought,” said Kieran Norton, a principal at Deloitte. Makers of personal protective equipment, for instance, play a role in mitigating the crisis. “We’ve also learned that supply chain disruption during a pandemic, for instance, could potentially be catastrophic,” Norton said. Not surprisingly, logistics firms have cemented their role as essential. The U.S. government has declared that pulp and paper and meat-packing industries are essential as well. So the line between critical infrastructure and operational technology (OT) security continues to blur. No matter what the name, few of the industries in this domain have reached a high degree of cyber-effectiveness, according to research on industrial security from the Ponemon Institute underwritten by TÜV Rheinland.

Traditional critical infrastructure entities may have decades of experience with traditional risk management and safety initiatives, but for many, cybersecurity is a relatively new priority. And broadly speaking, organizations managing critical infrastructure tend to be slow moving. “My general experience is that OT security is about 10 to 15 years behind the IT security space,” said Andrew Howard, CEO of Kudelski Security.

Meanwhile, the threat landscape for critical infrastructure organizations continues to grow more precarious. The number of attackers targeting such infrastructure is surging, as is the number of connected devices in many critical infrastructure environments. According to the X-Force Threat Intelligence Index 2020 from IBM, the volume of attacks on industrial control systems in 2019 was higher than the previous three years combined. 

Such attacks have made headlines in 2020. Ransomware attackers successfully targeted Honda and Taiwan’s energy utility and a U.S. natural gas facility. Israel’s water supply was reportedly attacked. The Japanese telecommunications firm NTT has had its internal network breached. 

Assess Risk Continually

If you can’t measure something, you can’t improve it. That advice applies doubly to critical infrastructure cybersecurity, where risk and risk reduction can be challenging to quantify. Many organizations struggle to keep an accurate asset inventory, given the diversity and complexity of their environments. Meanwhile, experts specializing in OT cybersecurity are in short supply. Compounding this risk is the complicated nature of third-party risk management, including assessing potential vulnerabilities introduced via procured hardware, software or contractors.

While risk assessment should be a continual process, critical infrastructure organizations should begin with periodic in-depth risk assessments designed to quantify threats, vulnerabilities and potential consequences of cyberattacks and other causes of operational disruption. Potential vulnerabilities include shared passwords, unpatched systems, software and hardware of unknown provenance and overly permissive firewalls.  
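
To make the periodic assessment concrete, here is a minimal sketch, assuming a hypothetical asset-inventory format, of how the vulnerability categories above might be flagged automatically. The field names and thresholds are illustrative, not drawn from any real tool:

```python
# Hypothetical checks mirroring the vulnerability categories above:
# shared passwords, unpatched systems, overly permissive firewalls.
CHECKS = {
    "shared_credentials": lambda a: a.get("credential_owner") == "shared",
    "unpatched": lambda a: a.get("months_since_patch", 0) > 6,
    "open_firewall": lambda a: "0.0.0.0/0" in a.get("allowed_sources", []),
}

def assess(asset: dict) -> list[str]:
    """Return the name of every check the asset fails."""
    return [name for name, failed in CHECKS.items() if failed(asset)]

# An example inventory entry (invented for illustration)
plc = {"name": "PLC-7", "credential_owner": "shared",
       "months_since_patch": 14, "allowed_sources": ["10.0.0.0/8"]}
```

Running such checks on every inventory entry turns a one-off audit into something repeatable between in-depth assessments.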

But such security assessments can be tricky to perform. There’s an array of device types to track, ranging from pumps and valves to legacy controllers and myriad computing devices. Additionally, understanding the ramifications of an industrial system breach necessitates in-depth operational knowledge. In an environment with scores of different systems, the problem is compounded.

Traditional network scanning techniques require care. Active network and vulnerability scanning of industrial control systems can crash the systems being scanned. Active scanning generally can be done safely in a critical infrastructure environment, according to Dale Peterson, a consultant specializing in industrial control system security, but it requires working closely with operations to address the risk. While passive techniques for network monitoring are less intrusive, they are also less accurate. “This debate is often where that IT security view clashes with the OT view. The IT security person is inclined to go with active scanning, but the person in charge of monitoring a critical infrastructure system often prefers a passive approach because they don’t want to put it at risk,” Peterson said.

Especially with in-depth assessments, organizations are likely to uncover a long list of problems and face questions about which remediations to prioritize. Also compounding the problem, many cybersecurity professionals don’t have direct experience with all the equipment undergoing audit, and thus must rely on interviews with seasoned asset owners and operators to gauge cyber risk.

Organizations should weigh both severity and ease of remediation. Access control is often a theme here, Miklovic said. “Boundary interfaces always are the weakest part of any cybersecurity problem, whether it be a protocol boundary or a physical boundary,” he said. “Even in the industrial cybersecurity world, one of the biggest breach points still is USB drives.”    

While it is quick and inexpensive for a staff member to use super-glue or solder to plug unused USB ports, some organizations focus too much on addressing the “easy stuff” in their remediation, Howard said. “Yes, there are threshold mitigations you should knock out immediately. But after that, you should prioritize based on risk.”

Quantifying that risk is possible using a two-by-two matrix that weighs the likelihood of a vulnerability’s impact and potential severity, according to Joe Saunders, CEO of RunSafe. 
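
A minimal sketch of that two-by-two idea: each finding gets a likelihood and severity score in [0, 1], and the quadrant it lands in determines priority. The 0.5 thresholds, labels, and example findings are assumptions for illustration:

```python
# Two-by-two risk matrix: likelihood vs. severity, each split at an
# assumed 0.5 threshold. Quadrant labels are illustrative.
def quadrant(likelihood: float, severity: float) -> str:
    hi_l, hi_s = likelihood >= 0.5, severity >= 0.5
    if hi_l and hi_s:
        return "fix first"
    if hi_s:
        return "mitigate / monitor"
    if hi_l:
        return "cheap fixes next"
    return "accept / defer"

# Invented example findings: (name, likelihood, severity)
findings = [
    ("default PLC password", 0.9, 0.8),
    ("legacy HMI on isolated network", 0.3, 0.9),
    ("verbose SNMP banner", 0.7, 0.2),
]
ranked = {name: quadrant(l, s) for name, l, s in findings}
```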

Building a risk profile for each system is rarely straightforward. Interviews with asset owners and operators are key to understanding the impact if a given system were to crash. “You can have a machine that seems to be vulnerable and high risk,” Miklovic said. But if it goes down, it may cause only isolated problems rather than bringing everything “to a grinding halt.”

Another factor that can complicate risk assessment is the tendency for organizations to rank cyber-priorities solely based on the time or money invested. “What an organization thinks is valuable may be quite different from what a cybercriminal thinks is valuable,” said Bill Malik, vice president, infrastructure strategies at Trend Micro.

When it comes to legacy equipment, organizations can be limited in their ability to reduce risk. A device running a decades-old operating system likely can’t be updated. “The strategy that’s typically taken on these systems is to isolate and monitor,” Howard said. “My experience is that the isolation is usually pretty porous.”

New Risks in the New Normal

Risk management in critical infrastructure has become increasingly challenging as cybersecurity concerns grow. The need for these organizations to develop COVID-19 response plans while expanding remote work for some employees adds further complexity. “I think the main sort of change that we see in critical infrastructure environments is the work-from-home scenario,” said Jamil Jaffer, senior vice president for strategy, partnerships and corporate development at IronNet Cybersecurity.

The work-from-home paradigm has complicated protecting vulnerable systems, Howard said. “Now, you have employees using VPN to connect to production systems from home to make changes,” he said. “They would probably not have done that before.” 

Similarly, some organizations could be tempted to grant third parties such as vendors and technicians remote access to sensitive systems. “There’s probably less focus on cybersecurity when many people are focused on getting their work done and keeping their job,” Norton said.

Network availability is another consideration for organizations looking to scale up remote working capabilities in critical infrastructure contexts. “In the past, you had organizations with 10%–20% of their workers using traditional remote access infrastructure,” Norton said. As organizations have scaled up remote working capabilities, “many have run into problems with bandwidth, scale and deploying assets,” Norton said.

While expanding connectivity for industrial assets can potentially create more vulnerabilities, COVID-19 also underscored the risk of old-fashioned contingency plans that rely on workers’ physical presence, manual processes, and paperwork. 

Although traditionally slow to change, critical infrastructure organizations shouldn’t shy away from making wholesale changes to their technology architecture as they rethink core processes and workflows. “If this is the new normal, you probably need to redesign your infrastructure,” Norton said. 

Toward Proactive Cybersecurity 

Ultimately, critical infrastructure organizations seek to transition from entrenched, manual processes that offer incremental risk reduction toward a more-proactive cybersecurity posture. “Industrial environments tend to be complex and constantly evolving,” said Natali Tshuva, CEO of Sternum. “Security controls are needed not only to assess the current status but to also offer sustainable protection and peace of mind for years to come.” 

Traditionally, industrial and critical infrastructure security meant physical security, encompassing safety and access control within a physical perimeter. Many traditional industrial protocols are fundamentally insecure because their designers assumed only authorized personnel would have access to them. But the rise of remote working, cloud computing and IIoT have undercut the castle-and-moat security model. The influence of that legacy model, however, is one reason many critical infrastructure organizations — as well as enterprise companies — have a reactive security approach. 

The emphasis of such a redesign should be creating robust and efficient workflows based on universal security policies. “Move the security controls as close as possible to the assets,” Norton counseled. 

 The process includes creating a comprehensive and evolving security policy for the following assets:

  • Equipment and devices: Such hardware could range from legacy industrial equipment to IoT devices to corporate-issued laptops. “Understanding those devices in context relative to users is super important,” Norton said. Organizations should secure industrial controllers, advised Joe Saunders, CEO of RunSafe Security. Securing sensors and gateways, by contrast, is relatively straightforward. “But controllers are performance-sensitive and deep in the infrastructure.”      
  • Networks and users: As for users, security staff should constrain access as much as feasibly possible based on controls outlined in an organizational security policy. “You can have a policy engine that’s talking to those security controls that allows you to dynamically apply, through the context of the user and the application, logic,” Norton said. Organizations should also invest in network breach detection capabilities. 
  • Data. Data classification and discovery are valuable tools for evaluating the level of control needed to protect a given data type. 
  • Workflow, workloads and processes. The degree of protection required accounts for these processes’ intrinsic value to your organization and the likelihood of adversaries interfering with them. This task also includes fortifying the supply chain and ensuring that contractors and suppliers comply with a specified security controls level. 
  • Software development processes. Critical infrastructure organizations “should build security into software development, so the software you deploy is resilient,” Saunders said.    
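
The policy-engine idea Norton describes, dynamically applying controls based on the context of the user and the application, can be sketched roughly as follows. The rule format, roles, and asset classes are hypothetical, not any vendor’s API:

```python
# Context-aware access rules: each pattern maps matching request
# context to a decision. All names here are invented for illustration.
RULES = [
    ({"role": "operator", "asset": "controller", "network": "plant"}, "allow"),
    ({"role": "operator", "asset": "controller", "network": "remote"}, "allow_with_mfa"),
    ({"role": "contractor", "asset": "controller"}, "deny"),
]

def decide(context: dict) -> str:
    """Return the decision of the first rule whose pattern matches."""
    for pattern, decision in RULES:
        if all(context.get(k) == v for k, v in pattern.items()):
            return decision
    return "deny"  # default-deny: least privilege when nothing matches
```

The default-deny fallback reflects the advice above to constrain access as much as feasibly possible.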

While cyber-hygiene is vital, a common pitfall in security is to under-prioritize threat detection, response and recovery. “A quick rule of thumb is to spend 50% of your effort on prevention and detection, and 50% of your effort on response and recovery,” said Matt Selheimer, an executive at PAS Global. “Traditionally, the approach many organizations have taken is to put the preventive controls in place first,” Norton said. But given the complexity of examining risk in critical infrastructure environments, response and recovery sometimes take a back seat. “If something does go wrong, you want to be able to identify it quickly and shut it down,” Norton said. “That’s just as important as preventing something because you know that something’s eventually going to go wrong.”

Organizations aspiring to transition to a proactive cybersecurity posture can draw inspiration from various frameworks, ranging from the comprehensive ISO 27002 to standards specific to industrial control systems such as ISA/IEC 62443. A relative newcomer is the Cybersecurity Maturity Model Certification (CMMC) from the Department of Defense, designed to specify the security level required for organizations to bid on various government programs. The model is broken into five tiers: the first three specify basic, intermediate and good cyber-hygiene, while the two upper tiers require more sophisticated cybersecurity management. The fourth stipulates that “all cyber activities are reviewed and measured for effectiveness,” with review results shared with management. The top tier adds standardized and comprehensive documentation related to all relevant units.

  • CMMC Level 1: Basic cyber hygiene (performed). Select practices are documented where required.
  • CMMC Level 2: Intermediate cyber hygiene (documented). Each practice is documented, and a policy exists for all activities.
  • CMMC Level 3: Good cyber hygiene (managed). In addition to the practices above, a cyber plan exists and is operationalized to include all activities.
  • CMMC Level 4: Proactive (reviewed). All cyber activities are reviewed and measured for effectiveness, and results are shared with management.
  • CMMC Level 5: Advanced/progressive (optimizing). In addition to the practices above, this stage adds standardized documentation across the organization.

“It’s the first framework we’ve seen with a mapped-out maturity model specific to integrators and their subcontractors bidding on sensitive government programs,” said Tony Cole, chief technology officer at Attivo Networks. The framework could encourage critical infrastructure organizations to develop a more sophisticated understanding of internal cyber risk as well as the due diligence required from third parties. There’s a level of objectivity to the framework that could be helpful, Cole said. “According to the model, a third-party auditor has to come in and confirm the cybersecurity level of a contractor. No self-reported surveys,” he said. “Somebody has to audit it.” 

Automation is also an element to consider when designing a proactive security strategy. Techniques such as machine learning can help organizations automate routine security monitoring tasks such as network breach detection and implement controls to stop the spread of attacks.  
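
As a toy illustration of automating routine monitoring, the sketch below flags traffic samples far outside a learned baseline using a simple z-score. A production system would use richer features and models; the threshold here is an assumption:

```python
import statistics

# Flag samples (e.g., per-minute connection counts) whose z-score
# against the batch baseline exceeds an assumed cutoff.
def anomalies(counts: list[int], z_cut: float = 2.5) -> list[int]:
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts) or 1.0  # guard against zero spread
    return [i for i, c in enumerate(counts) if abs(c - mean) / sd > z_cut]

# Nine quiet minutes, then a burst that might signal a breach or scan
baseline_plus_spike = [12, 11, 13, 12, 10, 11, 12, 13, 11, 240]
```

Even this crude statistical control illustrates the payoff: a sudden change in network behavior gets surfaced without a human watching dashboards.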

Embedded security protections, which are increasingly available on diverse, resource-constrained devices, provide intrinsic threat protection. On-device protection should also “include comprehensive asset management capabilities,” Tshuva said. Such controls support network visibility and can provide automatic alerts for attacks.

Organizations that rush to find ways to automate security monitoring without a robust and contextual security policy often face an explosion of false alarms, Selheimer warned. But in the end, all organizations should plan on investing time in tuning security controls. “It’s no different in OT than in IT. People in the [security operations center] spend a lot of time tuning firewall rules and security information, event management correlation rules to reduce the noise,” Selheimer said.

Complicating matters further is the unique and varied critical infrastructure landscape, which makes deploying off-the-shelf security automation and AI tools difficult. “There are certainly some limitations. But there are also ways to address that,” Norton said. Organizations can, for instance, isolate sensitive operational systems and use automation and orchestration tools to protect the resulting enclave. “Through automation and orchestration, automate as much as you can and then orchestrate where you can’t automate to make sure that you’ve got effective capabilities and are responding and adjusting to threats,” Norton said.

In the end, critical infrastructure security threats will likely shift rapidly. “To be proactive means you’re constantly adjusting your cyber-posture to address what’s happening both in terms of direct impacts against the organization as well as what you’re seeing happen from an industry perspective,” Norton said.

Source: https://www.iotworldtoday.com/2020/06/12/developing-a-critical-infrastructure-cybersecurity-strategy/


RT-Thread Studio IoT IDE v2.1.0 Update: Fresh Boards, NXP, and MicroChip


RT-Thread Studio IDE v2.1.0 has been released! If you read our last article, you may already be familiar with RT-Thread Studio’s features and may have downloaded it for development. So let’s see what’s new in v2.1.0.

Make Your Own Board Supported Packages!

RT-Thread Studio V2.1.0 offers a tool, with accompanying tutorials, that helps developers create a board support package (BSP) visually. Developers can now easily make a BSP and upload it online via the SDK Manager.

The BSP tool supports graphically configuring information about dev boards, documentation, and projects. A prompt for every configuration item is shown in the interface to help you understand it. The Studio team also added a thoughtful touch: the configuration information is available for preview! Check out this tutorial to make a BSP yourself.

40+ Fresh BSPs Online

More than 40 new board support packages are supported in RT-Thread Studio V2.1.0, bringing the total to 70 BSPs covering eleven vendors: Allwinner, AlphaScale, ArteryTek, Bluetrum, GigaDevice, MicroChip, MindMotion, NXP, ST, TI, and Synwit.

In particular, the RT-Thread V4.0.3 source resource pack has been added in RT-Thread Studio V2.1.0.

Support MDK Development

RT-Thread Studio v2.1.0 supports bilateral synchronous co-development with MDK projects. You can import an existing RT-Thread MDK project directly into RT-Thread Studio, and the MDK project’s configuration will be automatically synchronized with the RT-Thread Studio project.

RT-Thread Studio provides a bilateral synchronization mechanism that allows a project to switch between MDK and RT-Thread Studio at any time. MDK configuration items such as C/C++, ASM, and Linker settings can be edited in RT-Thread Studio and are automatically synchronized with the MDK project when configuration is complete. If you modify configurations in MDK, you can manually trigger synchronization in RT-Thread Studio to sync those changes back to the RT-Thread Studio project.

Support CubeMX Development

RT-Thread Studio v2.1.0 also integrates with STM32CubeMX: you can open CubeMX settings directly in RT-Thread Studio. After configuration, click the GENERATE CODE button, and the code generated by CubeMX is automatically copied into the RT-Thread Studio project directory and added to the compilation; no further modifications are required. You can then compile, download, and debug programs as usual. Check out this tutorial for more information.

Two New QEMU Simulators

QEMU in RT-Thread Studio v2.1.0 adds two simulators, for the stm32f401 and the stm32f410 series respectively. You can download the latest version of QEMU in the SDK Manager. When configuring QEMU, select the emulator in the pull-down box of the Emulator configuration bar.

The configuration interface has also been updated. First, serial port configuration was added in this version; when a different serial port is selected, the standard IO device is relocated to the corresponding serial port.

Second, SD Card Memory is now optional, accommodating situations where an SD card is not required. More importantly, commands such as -show-cursor have been moved to Extra Command, where you can customize their parameters to make QEMU more flexible to use.

Download RT-Thread Studio V2.1.0

Have ideas for RT-Thread Studio? Talk with the team.

Questions while using RT-Thread Studio? Create a post in the RT-Thread Club!

Also published at: https://club.rt-thread.io/ask/question/56.html

Source: https://hackernoon.com/rt-thread-studio-iot-ide-v210-update-fresh-boards-nxp-and-microchip-fg3n33mm?source=rss


Device monitoring and management startup Memfault nabs $8.5M



Memfault, a startup developing software for consumer device firmware delivery, monitoring, and diagnostics, today closed an $8.5 million series A funding round. CEO François Baldassari says the capital will enable Memfault to scale its engineering team and make investments across product development and marketing.

Slow, inefficient, costly, and reactive processes continue to plague firmware engineering teams. Often, companies recruit customers as product testers — the first indication of a device issue comes through users contacting customer service or voicing dissatisfaction on social media. With 30 billion internet of things (IoT) devices predicted to be in use by 2025, hardware monitoring and debugging methods could struggle to keep pace. As a case in point, Palo Alto Networks’ Unit 42 estimates that 98% of all IoT device traffic is unencrypted, exposing personal and confidential data on the network.

Memfault, which was founded in 2019 by veterans of Oculus, Fitbit, and Pebble, offers a solution in a cloud-based firmware observability platform. Using the platform, customers can capture and remotely debug issues as well as continuously monitor fleets of connected devices. Memfault’s software development kit is designed to be deployed on devices to capture data and send it to the cloud for analysis. The backend identifies, classifies, and deduplicates error reports, spotlighting the issues likely to be most prevalent.
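
The classify-and-deduplicate idea can be sketched as follows. This is not Memfault’s actual API, just an illustration of fingerprinting error reports by fault type and top stack frames so the most widespread issues surface first:

```python
import hashlib

# Reports sharing a fault type and the same top stack frames are
# treated as one underlying issue, regardless of which device sent them.
def fingerprint(report: dict) -> str:
    key = report["fault"] + "|" + "|".join(report["backtrace"][:3])
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def dedupe(reports: list[dict]) -> dict:
    """Map each issue fingerprint to the number of affected devices."""
    buckets: dict = {}
    for r in reports:
        buckets.setdefault(fingerprint(r), []).append(r["device_id"])
    return {fp: len(devs) for fp, devs in buckets.items()}

# Invented example reports from three devices
reports = [
    {"device_id": "A1", "fault": "HardFault", "backtrace": ["ble_tx", "queue_pop", "main"]},
    {"device_id": "B2", "fault": "HardFault", "backtrace": ["ble_tx", "queue_pop", "main"]},
    {"device_id": "C3", "fault": "Watchdog",  "backtrace": ["sensor_poll", "i2c_read", "main"]},
]
counts = dedupe(reports)
```

Sorting the resulting counts is what lets a backend spotlight the issues likely to be most prevalent across a fleet.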

Baldassari says that he, Tyler Hoffman, and Christopher Coleman first conceived of Memfault while working on the embedded software team at smartwatch startup Pebble. Every week, thousands of customers reached out to complain about Bluetooth connectivity issues, battery life regressions, and unexpected resets. Investigating these bugs was time-consuming — teams had to either reproduce issues on their own units or ask customers to mail their watches back so that they could crack them open and wire in debug probes. To improve the process, Baldassari and his cofounders drew inspiration from web development and infrastructure to build a framework that supported the management of fleets of millions of devices, which became Memfault.

By aggregating bugs across software releases and hardware revisions, Memfault says its platform can determine which devices are impacted and what stack they’re running. Developers can inspect backtraces, variables, and registers when encountering an error, and for updates, they can split devices into cohorts to limit fleet-wide issues. Memfault also delivers real-time reports on device check-ins and notifications of unexpected connectivity inactivity. Teams can view device and fleet health data like battery life, connectivity state, and memory usage or track how many devices have installed a release — and how many have encountered problems.

“We’re building feedback mechanisms into our software which allows our users to label an error we have not caught, to merge duplicate errors together, and to split up distinct errors which have been merged by mistake,” Baldassari told VentureBeat via email. “This data is a shoo-in for machine learning, and will allow us to automatically detect errors which cannot be identified with simple heuristics.”


IDC forecasts that global IoT revenue will reach $742 billion in 2020. But despite the industry’s long and continued growth, not all organizations think they’re ready for it — in a recent Kaspersky Lab survey, 54% said the risks associated with connectivity and integration of IoT ecosystems remained a major challenge.

That’s perhaps why Memfault has competition in Amazon’s AWS IoT Device Management and Microsoft’s Azure IoT Edge, which support a full range of containerization and isolation features. Another heavyweight rival is Google’s Cloud IoT, a set of tools that connect, process, store, and analyze edge device data. Not to be outdone, startups like Balena, Zededa, Particle, and Axonius offer full-stack IoT device management and development tools.

But Baldassari believes that Memfault’s automation features in particular give the platform a leg up from the rest of the pack. “Despite the ubiquity of connected devices, hardware teams are too often bound by a lack of visibility into device health and a reactive cycle of waiting to be notified of potential issues,” he said in a press release. “Memfault has reimagined hardware diagnostics to instead operate with the similar flexibility, speed, and innovation that has proven so successful with software development. Memfault has saved our customers millions of dollars and engineering hours, and empowered teams to approach product development with the confidence that they can ship better products, faster, with the knowledge they can fix bugs, patch, and update without ever disrupting the user experience.”

Partech led Memfault’s series A raise with participation from Uncork Capital, bringing the San Francisco, California-based company’s total raised to $11 million. In addition to bolstering its existing initiatives, Memfault says it’ll use the funding to launch a self-service version of its product for “bottom-up” adoption rather than the sales-driven, top-down approach it has today.

Source: https://venturebeat.com/2021/04/01/device-monitoring-and-management-startup-memfault-nabs-8-5m/


SoC Integration Complexity: Size Doesn’t (Always) Matter


It’s common when talking about complexity in systems-on-chip (SoCs) to haul out monster examples: application processors, giant AI chips, and the like. Breaking with that tradition, consider an internet of things (IoT) design, which can still challenge engineers with plenty of complexity in architecture and integration. This complexity springs from two drivers: very low power consumption, even using harvested MEMS power instead of a battery, and quick turnaround to build out a huge family of products based on a common SoC platform while keeping tight control on development and unit costs.


Fig. 1: Block diagram of a low-power TI CC26xx processor. (Sources: The Linley Group, “Low-Power Design Using NoC Technology”; TI)

For these types of always-on IoT chips, a real-time clock is needed to wake the system up periodically to sense, compute, communicate and then go back to sleep; a microcontroller (MCU) for control, processing and security features; and local memory and flash to store software. I/O is required for provisioning, debugging, and interfacing to multiple external sensors/actuators. A wireless interface, such as Bluetooth Low Energy, is also necessary; because the design is aimed first at warehouse applications, relatively short-range links are sufficient.

This is already a complex SoC, and the designer hasn’t even started to think about adding more features. For a product built around this chip to run for years on a coin cell battery or a solar panel, almost all of this functionality has to be powered down most of the time. Most devices will have to be in switchable power domains and quite likely switchable voltage domains for dynamic voltage and frequency scaling (DVFS) support. A power manager is needed to control this power and voltage switching, which will have to be built/generated for this SoC. That power state controller will add control and status registers (CSRs) to ultimately connect with the embedded software stack.
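
A power-state controller of the kind described, with one CSR bit per switchable domain, might look roughly like this toy model. Domain names and bit positions are invented for illustration; real firmware would toggle memory-mapped registers rather than a Python attribute:

```python
# Each switchable power domain maps to one bit in a control/status
# register (CSR). Bit assignments here are illustrative only.
DOMAINS = {"mcu": 0, "radio": 1, "sensor_if": 2, "flash": 3}

class PowerController:
    def __init__(self):
        self.csr = 0  # all switchable domains off at reset

    def power_on(self, domain: str):
        self.csr |= 1 << DOMAINS[domain]

    def power_off(self, domain: str):
        self.csr &= ~(1 << DOMAINS[domain])

    def is_on(self, domain: str) -> bool:
        return bool(self.csr & (1 << DOMAINS[domain]))

# Wake cycle: power the MCU, briefly enable the radio, then drop it
pmc = PowerController()
pmc.power_on("mcu")
pmc.power_on("radio")
pmc.power_off("radio")
```

This is the register-level view the embedded software stack ultimately programs against, which is why the CSR definitions must stay in sync with the hardware.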


Fig. 2: There are ten power domains in the TI CC26xx SoC. The processor has two voltage domains in addition to always-on logic (marked with *). (Sources: The Linley Group, “Low-Power Design Using NoC Technology”; TI)

Running through this SoC is the interconnect, the on-chip communications backbone connecting all these devices, interfaces, and CSRs. Remember that interconnects consume power, too, even passively, through clock toggling and even leakage power while quiescent. Because they connect everything, conventional buses are either all on or all off, which isn’t great when trying to eke out extra years of battery life. Designers also need fine-grained power management within the interconnect, another capability lacking in old bus technology.

How can a design team achieve extremely low power consumption in IoT chips like these? By dumping the power-hungry bus and switching to a network-on-chip (NoC) interconnect!

Real-world production chip implementation has shown that switching to a NoC lowers overall power consumption by anywhere from two to nine times compared to buses and crossbars. NoCs draw less power primarily because they occupy a smaller die area than buses and crossbars and support multilevel clock gating (local, unit-level, and root), which enables sophisticated implementation of multiple power domains. For the TI IoT chips, the engineering team implemented multiple overlapping power and clock domains to meet their use cases using the least amount of power possible while limiting current draw to just 0.55mA in idle mode. Using a NoC to reduce active and standby power allowed the team to create IoT chips that can run for over a year using a standard CR2032 coin battery.

Low power alone is not enough to create successful IoT chips. These markets are fickle, demanding low cost while requirements for wireless connectivity standards, sensors, displays, and actuator interfaces constantly change. Engineers must therefore think about variants, or derivatives, based on the initial IoT platform architecture. These can range from a narrowband internet of things (NB-IoT) wireless option for agricultural and logistics markets to an audio interface alarm with AI-based anomaly detection. It makes perfect strategic sense to create multiple derivative chips from a common architectural SoC platform, but how will this affect implementation if someone made the mistake of choosing a bus? Conventional bus structures have a disproportionate influence on the floorplan. Change a little functionality, and the floorplan may have to change considerably, resulting in a de facto “re-spin” of the chip architecture and defeating the purpose of having a platform strategy. Can an engineer anticipate all of this while still working on the baseline product? Is there a way to build more floorplan reusability into that first implementation?

A platform strategy for low-power SoCs isn’t just about the interconnect IP. As the engineer tweaks and enhances each design by adding, removing, or reconfiguring IPs and optimizing the interconnect structure and power management, the software interface to the hardware changes, too. Getting that interface exactly right is critical. A mistake here might make the device non-operational, but at least someone would figure that out quickly. More damaging to the bottom line would be a small bug that leaves a power domain on when it should have shut off: an expected one-year battery life drops to three months. A foolproof memory map can’t afford to depend on manual updates and verification; it must be generated automatically. IP-XACT-based IP deployment technology provides state-of-the-art capabilities to maintain traceability and guarantee the correctness of this type of design data throughout the product lifecycle.
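The idea of generating the memory map from a machine-readable description, rather than maintaining it by hand, can be illustrated with a toy sketch. This is a simplified stand-in for what IP-XACT-based tooling does; the block names, base addresses, and register lists are invented for the example.

```python
# Toy sketch: generate a flat CSR memory map from a machine-readable
# block description, rejecting address overlaps automatically.
# Block names, base addresses, and registers are invented examples.

BLOCKS = {
    "power_mgr": {"base": 0x4000_0000, "regs": ["CTRL", "STATUS", "DOMAIN_EN"]},
    "uart0":     {"base": 0x4000_1000, "regs": ["DATA", "CTRL", "STATUS"]},
    "spi0":      {"base": 0x4000_2000, "regs": ["DATA", "CTRL"]},
}
REG_SIZE = 4  # bytes per 32-bit register

def build_memory_map(blocks):
    """Return {block.register: absolute_address}, raising on overlaps."""
    memory_map, claimed = {}, {}
    for block, desc in blocks.items():
        for i, reg in enumerate(desc["regs"]):
            addr = desc["base"] + i * REG_SIZE
            if addr in claimed:
                raise ValueError(
                    f"{block}.{reg} overlaps {claimed[addr]} at {addr:#x}")
            claimed[addr] = f"{block}.{reg}"
            memory_map[f"{block}.{reg}"] = addr
    return memory_map

mmap = build_memory_map(BLOCKS)
print(f"power_mgr.DOMAIN_EN -> {mmap['power_mgr.DOMAIN_EN']:#010x}")
```

Because the map is derived from one description, adding or removing an IP in a derivative chip regenerates every address consistently, and an accidental overlap fails loudly at build time instead of surfacing as a field bug.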

Even though these designs are small compared to mega-SoCs, there’s still plenty of complexity, and plenty of opportunity to get it wrong. At Arteris IP, we’re laser-focused on maximizing automation and optimization in SoC integration to make sure our users always get it “first time right.” Give us a call!

Kurt Shuler

Kurt Shuler is vice president of marketing at Arteris IP. He is a member of the US Technical Advisory Group (TAG) to the ISO 26262/TC22/SC3/WG16 working group and helps create safety standards for semiconductors and semiconductor IP. He has extensive IP, semiconductor, and software marketing experience in the mobile, consumer, automotive, and enterprise segments, working for Intel, Texas Instruments, and four startups. Prior to his entry into technology, he flew as an air commando in the US Air Force Special Operations Forces. Shuler earned a B.S. in Aeronautical Engineering from the United States Air Force Academy and an M.B.A. from the MIT Sloan School of Management.

Source: https://semiengineering.com/soc-integration-complexity-size-doesnt-always-matter/


Artificial Intelligence

Audio Analytics: Vital Technology for Autonomous Vehicles

Illustration: © IoT For All

Artificial intelligence (AI) and machine learning (ML) are projected to play a major role in the transformation of the automotive industry, enabling the design of future-state autonomous vehicles. With advances in supply chain management, manufacturing operations, mobility services, and image, video, and audio analytics, next-generation autonomous vehicles are poised to transform how consumers perceive the automobile. As these technologies continue to develop, the autonomous vehicle industry is positioned to reach a global market size of nearly 60 billion USD by 2030.

Audio analytics in driverless cars encompasses audio classification, NLP, voice/speech recognition, and sound recognition. Voice recognition, in particular, has become an integral part of autonomous vehicle technology, providing enhanced control for the driver. Until recently, speech recognition in traditional cars was a challenge because of the lack of efficient algorithms, reliable connectivity, and processing power at the edge. In-cabin noise further reduced the performance of audio analytics, resulting in false recognitions.

Audio analytics in machines has been a subject of constant research. With technological advancement, new products such as Amazon’s Alexa and Apple’s Siri have come online. These systems are rapidly evolving through cloud computing, a capability that earlier recognition systems lacked.

Recently, various machine learning techniques such as kNN (k-nearest neighbors), SVM (support vector machine), EBT (ensemble bagged trees), deep neural networks (DNNs), and natural language processing (NLP) have made audio analytics more effective and better positioned to add value to autonomous vehicles.

In audio analytics, data is first pre-processed to remove noise, and then audio features are extracted from the audio data. Features such as MFCCs (Mel-frequency cepstral coefficients) and statistical features such as kurtosis and variance are used. The frequency bands of MFCC are equally spaced on the Mel scale, which closely matches the response of the human auditory system. At inference time, a real-time audio stream is captured from the multiple microphones installed in the car, pre-processed, and its features extracted. The extracted features are passed to the trained model to recognize the audio, which helps the autonomous vehicle make the right decision.
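The framing and statistical-feature step of this pipeline can be sketched with only NumPy and SciPy on a synthetic signal. A real pipeline would also compute MFCCs with an audio library; here we show only the variance and kurtosis features on overlapping frames, with all signal parameters invented for illustration.

```python
# Sketch of the pre-processing/feature-extraction step: split a signal
# into overlapping frames and compute variance and kurtosis per frame.
import numpy as np
from scipy.stats import kurtosis

def frame_features(signal, frame_len=512, hop=256):
    """Return an array with one (variance, kurtosis) pair per frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([[np.var(f), kurtosis(f)] for f in frames])

# Synthetic "audio": a 440 Hz tone plus noise, 8000 samples.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)

features = frame_features(audio)
print(features.shape)  # one feature row per frame
```

Each row of `features` would then be fed to a trained classifier, exactly as the paragraph above describes for the real-time microphone stream.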

Data Processing & ML Model Training

With new technologies, end-user trust is the key point, and NLP is a game-changer in building this trust in autonomous vehicles. NLP allows passengers to control the car using voice commands, such as asking it to stop at a restaurant, change the route, stop at the nearest mall, switch lights on or off, or open and close the doors. This makes the passenger experience rich and interactive.

Let’s take a look at a few use cases where audio analytics provide benefits to autonomous vehicles.

Emergency Siren Detection

The siren of any emergency vehicle, such as an ambulance, fire truck, or police car, can be detected using various deep learning models or classical machine learning models like the SVM (support vector machine). SVM is a supervised learning model used for classification and regression analysis. The SVM classifier is trained on a large dataset of emergency siren sounds and non-emergency sounds. With this model, a system can be developed that identifies siren sounds so the autonomous car can make appropriate decisions to avoid dangerous situations, such as pulling over and giving way for the emergency vehicle to pass.
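A toy version of such an SVM classifier can be sketched with scikit-learn. The two-dimensional features here (imagine a frequency-sweep rate and a spectral-energy measure) and both clusters are entirely fabricated; a real system would train on features extracted from labeled siren recordings.

```python
# Toy SVM "siren vs. non-siren" classifier on synthetic 2-D features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Fabricated feature clusters: class 1 = "siren", class 0 = "non-siren".
siren = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
other = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(100, 2))
X = np.vstack([siren, other])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="rbf").fit(X, y)

# A new audio frame whose features resemble the siren cluster:
print(clf.predict([[4.8, 5.2]]))
```

In a deployed system the prediction would feed the planner, which decides whether to pull over.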

Engine Sound Anomaly Detection

Automatic early detection of a possible engine failure could be an essential feature for an autonomous car. The engine makes a certain sound when it works under normal conditions and a different sound when it is exhibiting problems. Among the many machine learning algorithms available, k-means clustering can be used to detect anomalies in engine sound. In k-means clustering, each data point is assigned to one of k clusters based on its distance to the nearest cluster centroid. An anomalous engine sound produces data points that fall outside the normal cluster and into an anomalous one. With this model, the health of the engine can be monitored constantly. If there is an anomalous sound event, the autonomous car can warn the user and help make proper decisions to avoid dangerous situations, potentially averting a complete breakdown of the engine.
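A simplified sketch of this clustering idea: learn the centroid of "normal" engine-sound features, then flag frames that fall far outside that cluster as anomalous. Full k-means with k > 1 would refine this; the feature values below are invented for illustration.

```python
# Centroid-based anomaly detection on synthetic engine-sound features,
# a simplified one-cluster variant of the k-means idea described above.
import numpy as np

rng = np.random.default_rng(1)
# Fabricated features (e.g. variance, kurtosis) of healthy engine frames.
normal_features = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(200, 2))

centroid = normal_features.mean(axis=0)
dists = np.linalg.norm(normal_features - centroid, axis=1)
threshold = dists.mean() + 3 * dists.std()  # tolerance around the cluster

def is_anomalous(frame_features):
    """True if the frame lies outside the learned 'normal' cluster."""
    return np.linalg.norm(frame_features - centroid) > threshold

print(is_anomalous(np.array([2.1, 1.9])))  # typical engine sound
print(is_anomalous(np.array([6.0, 6.0])))  # far outside the cluster
```

The threshold choice (mean plus three standard deviations of the training distances) trades false alarms against missed faults and would be tuned on real engine data.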

Lane Change on Honking

For an autonomous car to behave like a human-driven car, it must handle the scenario where it should change lanes because the vehicle behind needs to pass urgently, indicated by honking. Random forest, a supervised machine learning algorithm, is well suited to this type of classification problem. As its name suggests, it builds a forest of decision trees and merges their outputs to classify accurately. A system built on this model can identify certain horn patterns and take the appropriate decision.
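A hedged sketch of such a random-forest honk classifier follows. The features (burst count, mean burst duration, inter-burst gap) and both classes are invented; a real system would derive them from detected horn events in the microphone stream.

```python
# Toy random-forest classifier: "urgent" honking (yield the lane)
# vs. "casual" honking, on fabricated horn-pattern features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
# Urgent honks: several short bursts with short gaps.
urgent = np.column_stack([rng.integers(3, 6, 100),        # burst count
                          rng.uniform(0.1, 0.3, 100),     # burst duration (s)
                          rng.uniform(0.1, 0.2, 100)])    # inter-burst gap (s)
# Casual honks: a single long burst.
casual = np.column_stack([rng.integers(1, 2, 100),
                          rng.uniform(0.8, 1.5, 100),
                          rng.uniform(0.5, 1.0, 100)])
X = np.vstack([urgent, casual])
y = np.array([1] * 100 + [0] * 100)  # 1 = urgent, yield the lane

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[4, 0.2, 0.15]]))  # repeated short bursts
```

The classifier's output would be one input to the lane-change planner, alongside camera and radar evidence that the maneuver is safe.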

NLP (natural language processing) processes human language to extract meaning, which can help make decisions. Rather than just giving commands, the occupant can actually speak to the self-driving car. Suppose you have given your autonomous car a name like Adriana; then you can say, “Adriana, take me to my favorite coffee shop.” That is a simple sentence to understand, but the car can also be made to understand more complex sentences such as “take me to my favorite coffee shop, and before reaching there, stop at Jim’s home and pick him up.” It is important to note that self-driving vehicles should not obey the owner’s instructions blindly, to avoid dangerous, life-threatening situations. To make effective decisions in such situations, autonomous vehicles need a more powerful NLP that actually interprets what the human has said and can echo back the consequences.
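The safety-gating idea, interpreting a command but refusing it when unsafe, can be illustrated with a deliberately minimal keyword sketch. Real NLP stacks use trained language models; every phrase, intent name, and rule below is invented to show only the shape of the logic.

```python
# Minimal keyword-based intent parser with a safety check, illustrating
# the idea of refusing unsafe voice commands. All phrases are invented.
INTENTS = {
    "take me to": "NAVIGATE",
    "stop at": "ADD_WAYPOINT",
    "open the doors": "OPEN_DOORS",
}

def parse_command(utterance, speed_kmh):
    """Map an utterance to (intent, response), vetoing unsafe requests."""
    text = utterance.lower()
    for phrase, intent in INTENTS.items():
        if phrase in text:
            # Safety gate: never open doors while the car is moving.
            if intent == "OPEN_DOORS" and speed_kmh > 0:
                return ("REFUSED", "Cannot open doors while driving.")
            return (intent, "OK")
    return ("UNKNOWN", "Sorry, I did not understand.")

print(parse_command("Adriana, take me to my favorite coffee shop", 50))
print(parse_command("open the doors", 80))
```

Echoing the refusal reason back to the passenger is exactly the "echo back the consequences" behavior the paragraph above calls for.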

Thus, machine learning-based audio analytics contributes to the increasing adoption of autonomous vehicles through its safety and reliability enhancements. As machine learning continues to develop, more and more service-based offerings are becoming available, covering audio analytics, NLP, voice recognition, and more, enhancing passenger experience, on-road safety, and timely engine maintenance.

Source: https://www.iotforall.com/audio-analytics-vital-technology-for-autonomous-vehicles
