Co-founder @LoginRadius, developer…love learning new things
The Internet of Things (IoT) is creating an exciting world of new and improved experiences for everyone. It also necessitates managing exponentially more identities than current CIAM (customer identity and access management) systems can handle.
CIAM is no longer primarily concerned with managing consumers but also with managing the hundreds of thousands of “things” that can be connected to a network.
These devices are often linked and are expected to communicate with other things, mobile devices, and the backend infrastructure. Some have even coined the term “Identity of Things” (IDoT) to describe this modern identity ecosystem.
The IoT refers to the interactions between:
- computers and humans
- devices and devices
- devices and applications/services
- humans and applications/services
Since the industry is just getting started with IoT design and deployment, now is a good time to think about how CIAM fits in with other security services needed by an IoT-connected company.
Key Identity Management Challenges in IoT
As mentioned earlier, CIAM is responsible for identifying people and controlling access to various data types (like sensitive data, non-sensitive data, or device data). It also assists in identifying devices and controlling user access to data, thus minimizing data breaches and malicious activities.
The age of IoT is here. The issue is not that it makes connecting things to the internet easy; it is that this very ease of access puts consumer data at risk and makes protecting it essential.
This brings us to the key identity management challenges in IoT.
Credential Abuse
Credential sharing happens when you lend your username or password to another person. This is quite common among employees, who do it to help colleagues avoid the frustration of an invalid password or being locked out of email and other resources.
Credential abuse by outsiders, on the other hand, is almost always motivated by criminal intent. Without a proper IAM or CIAM solution in place, hackers may gain access to areas they can manipulate.
In the IoT space, only a handful of interconnected devices have a password management system capable of protecting corporate data. According to ABI Research analysts, this lack of proper identity management presents a prime opportunity for cybercriminals.
Default Password Risks
Many IoT devices ship with default passwords that anyone could guess. Users are expected to change these defaults, and although most eventually do, some wait a long time before changing them.
Even those who do change their default passwords often choose the names of close family and friends. That's an unacceptable security practice!
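A provisioning-time check can catch both failure modes described above: unchanged factory defaults and family-name passwords. The following is an illustrative sketch only; the default list, the minimum-length policy, and the function name are assumptions, not any product's actual API.

```python
# Hypothetical sketch: reject weak or factory-default IoT credentials at
# provisioning time. The default list and policy below are illustrative.
DEFAULT_CREDENTIALS = {"admin", "password", "12345", "root", "guest"}

def is_acceptable_password(password: str, related_names: set[str]) -> bool:
    """Return True only if the password is neither a known factory default
    nor built around a close relative's name (a common weak choice)."""
    lowered = password.lower()
    if lowered in DEFAULT_CREDENTIALS:
        return False
    if any(name.lower() in lowered for name in related_names):
        return False
    return len(password) >= 12  # minimum-length policy (an assumption)

# A device still running on "admin" must be rejected until changed.
print(is_acceptable_password("admin", set()))                   # rejected
print(is_acceptable_password("Trellis-Quartz-9!", {"alice"}))   # accepted
```

In a real CIAM deployment this check would run server-side during device enrollment, backed by a much larger breached-credential list.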
71% of Forrester Research survey respondents agree that consumer-facing business apps and services must prioritize security.
Enterprises can seize opportunities and engage consumers with personalized and secured strategies, such as by:
- Identifying the requirements of your customers and stakeholders
- Using an on-demand CIAM platform that can scale to meet the needs of your company and its customers
- Using a combination of digital skills, identity strategy, and best-of-breed CIAM technology to create frictionless, multichannel experiences
- Using a CIAM services model to align with IoT devices, accelerate time to market, and become market-adaptive
Implementing Security for Identities Right From the Beginning
While IoT security is clearly a hot topic on everyone's radar, there are a few things enterprises can do to get the most out of their IoT investments.
Deploy access control
You should determine the behaviors and activities that are acceptable for your connected objects and define rules of engagement for them within your ecosystem.
You can also create a baseline of expected behavior, which may then be tracked and monitored to spot abnormalities or activities that are outside of permitted parameters.
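The baseline idea above can be sketched very simply: summarize healthy device behavior statistically, then flag readings outside permitted parameters. The metric, the three-sigma threshold, and the function names here are illustrative assumptions, not a specific monitoring product.

```python
# A minimal sketch of baseline monitoring for connected devices: learn the
# expected range of a metric (e.g. messages per hour) and flag deviations.
from statistics import mean, stdev

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Summarize expected behavior as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomalous(value: float, baseline: tuple[float, float],
                 k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

history = [98, 102, 101, 99, 100, 97, 103]   # messages/hour while healthy
baseline = build_baseline(history)
print(is_anomalous(100, baseline))   # typical traffic
print(is_anomalous(500, baseline))   # possible compromise or malfunction
```

Real deployments track many metrics per device and update the baseline continuously, but the principle of comparing live behavior against an expected envelope is the same.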
Mandate IoT to meet security standards
Organizations routinely rely on service providers to fulfill their needs. These providers supply everything from consulting services to equipment that can be deployed on-site.
In the age of IoT, the problem is that there’s very little scope for the consumer to determine if any of the technology has been compromised.
Therefore, you should subject IoT devices to the controls described in standard security frameworks. For example:
- Include a security clause in your contracts;
- Request fresh vulnerability scans or demand your right to conduct your own vulnerability scans;
- Mandate vendors to offer timely upgrades in order to address detected flaws;
- After any firmware changes, rescan the devices to check that any previously identified issues have been resolved and no new ones have developed.
Safeguard against IoT identity spoofing
Here is the thing: hackers and their techniques, from counterfeiters to forgers, have multiplied exponentially over the years. It goes without saying that this expands the attack surface, which can severely impact IoT security.
As a countermeasure, security technologies should verify the identity of IoT devices and ensure they are tied to an appropriate identity management and access control solution.
Overall, every IoT device must have its own identity. Without it, an organization is highly vulnerable to being spoofed or hacked.
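One minimal way to give each device its own verifiable identity is a per-device secret used to sign its messages, so a spoofed device without the secret is rejected. This is a hedged sketch of the general pattern only (all names here are invented); production deployments typically use X.509 device certificates or TPM-backed keys rather than a raw shared secret.

```python
# Illustrative sketch, not a specific vendor's API: register a unique secret
# per device, then verify HMAC-signed messages so spoofed devices fail.
import hmac, hashlib, secrets

registry: dict[str, bytes] = {}  # device_id -> per-device secret

def enroll(device_id: str) -> bytes:
    key = secrets.token_bytes(32)
    registry[device_id] = key
    return key  # provisioned onto the device once, out of band

def sign(device_id: str, payload: bytes, key: bytes) -> str:
    return hmac.new(key, device_id.encode() + payload,
                    hashlib.sha256).hexdigest()

def verify(device_id: str, payload: bytes, signature: str) -> bool:
    key = registry.get(device_id)
    if key is None:
        return False  # unknown identity: reject outright
    expected = hmac.new(key, device_id.encode() + payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

key = enroll("sensor-42")
sig = sign("sensor-42", b'{"temp": 21.5}', key)
print(verify("sensor-42", b'{"temp": 21.5}', sig))       # genuine device
print(verify("sensor-42", b'{"temp": 21.5}', "f" * 64))  # spoofed signature
```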
With the growth of IoT, businesses have unprecedented opportunities to integrate technology into their everyday business operations and give consumers a more personalized experience.
Meanwhile, enterprises are busy updating privacy policies and rushing to ensure compliance. If they fail to prioritize security along the way, consumer trust may be compromised and profits lost in the long run. This is precisely what justifies the need for a consumer IAM solution.
GlobalPlatform Welcomes Ana Tavares Lattibeaudiere As New Executive Director
Succeeds retiring Executive Director Kevin Gillick, bringing extensive strategic IoT & telecommunications expertise to the organization
October 28, 2021 – GlobalPlatform, the standard for secure digital services and devices, has announced the appointment of Ana Tavares Lattibeaudiere as Executive Director. Ana brings 20+ years of industry knowledge, non-profit strategy, and business development to the role. After spells at Accenture, BCG, and Deloitte, Ana spent 15 years in various roles at GSMA. Most recently she held the position of Head of North America, responsible for driving global initiatives across eSIM, future networks, IoT, RCS, identity, and diversity.
“I am very pleased to welcome Ana as the Executive Director of GlobalPlatform,” comments Stéphanie El Rhomri, Chair of the GlobalPlatform Board of Directors. “With more than 62 billion GlobalPlatform-certified secure components deployed, GlobalPlatform’s legacy is an impressive one, helping diverse stakeholders to build, certify and manage billions of innovative digital services and devices. We’re coming out of a period of rapid change in how we communicate, do business, and interact with the world around us. As we look to the future of our industries and organization, Ana’s strategic IoT expertise and leadership skills will help us to realize our vision to provide greater end-to-end security, privacy, simplicity, and convenience for everyone.”
“I’m proud and honored to be joining GlobalPlatform at such an exciting time and when security has never been more important,” adds Ana. “As Stéphanie noted, we have made tremendous progress through the great contributions of our members across a vast number of industries in a relatively short lifetime. But we have important opportunities ahead of us as we look to deliver the foundations of security for new digital services and devices that will be part of our connected lives. I look forward to working with Stéphanie, the GlobalPlatform Board, and our members to deliver real impact for consumers and businesses around the world.”
Ana succeeds Kevin Gillick, who led the organization for 15+ years. Over that time, GlobalPlatform evolved from its origins in standardizing smartcards and secure elements to standardizing trusted execution environments and offering globally recognized certification. In recent years, the organization has moved to support the IoT ecosystem with key initiatives including its Device Trust Architecture for accessing secure services within a device; the IoTopia Framework for secure launch and management of connected devices; and the SESIP Methodology for IoT device certification.
Learn more about GlobalPlatform in its annual report and see how the organization can support your business, security, regulatory and data protection needs.
For further media information, please contact Alistair Cochrane:
[email protected] / or on +44 (0) 113 350 1922
GlobalPlatform is a technical standards organization that enables the efficient launch and management of innovative, secure-by-design digital services and devices, which deliver end-to-end security, privacy, simplicity, and convenience to users. It achieves this by providing standardized technologies and certifications that empower technology and service providers to develop, certify, deploy and manage digital services and devices in line with their business, security, regulatory, and data protection needs. Key offerings include secure component specifications; the Device Trust Architecture for accessing secure services within a device; the IoTopia Framework for secure launch and management of connected devices; and the SESIP Methodology for IoT device certification.
GlobalPlatform technologies are used in billions of smart cards, smartphones, wearables, and other connected and IoT devices to enable convenient and trusted digital services across market sectors, including healthcare, government and enterprise ID, payments, smart cities, industrial automation, smart home, telecoms, transportation, utilities, and OEMs.
GlobalPlatform standardized technologies and certifications are developed through effective industry-driven collaboration, led by multiple diverse member companies working in partnership with industry and regulatory bodies and other interested parties from around the world.
AIoT: the Perfect Union Between the Internet of Things and Artificial Intelligence
IoT Without Big Data is Nothing
Imagine Industrial IoT as the nervous system of a company: a network of sensors that collects valuable information from all corners of a production plant and stores it in a repository for data analysis and exploitation. This network is necessary to measure and obtain the data needed to make informed decisions. But what happens next? What should we do with all that data? We always talk about making good decisions based on reliable information, but although it may sound obvious, that goal is not always easy to achieve. In this article, we will go a bit beyond IoT and focus on the data and how to leverage it with AIoT and data analytics.
We’ll be discussing specifically the analysis phase, the process that turns data first into information and then into knowledge (sometimes also referred to as business logic). In the end, however, we won’t stray far from the core subject of IoT, because for us IoT without Big Data is meaningless.
Big Data and Data Analytics
In recent decades, and especially in the 2010s, we have witnessed an incredible flood of data (both structured and unstructured), mass-produced by the ubiquity of digital technologies. In the particular case of the industrial world, taking full advantage of this huge amount of information is paramount to success.
This need to process business data has given rise to the largely interchangeable terms “Big Data,” “Data Science,” and “Data Analytics,” which we could collectively define as the processes we follow to examine the data captured by our network of devices, with the goal of revealing obfuscated trends, patterns or correlations. This is done with the underlying goal of improving the business with new types of knowledge.
Because it is a recently created term, there are different definitions for Big Data. One of them provided by Gartner outlines 3 key aspects: the volume of data, its variety, and the velocity with which it is captured. These are commonly referred to as the 3 V’s, although other definitions expand on this to include 5 V’s, adding the veracity of the data and the value they bring to the business.
We believe, though, that it does not make much sense to go into theoretical disquisitions on what does and does not qualify as Big Data, because thanks to the ubiquity of data collection devices, Big Data analysis and processing is already applicable to large swaths of the industrial world.
IoT and Big Data
How do IoT and Big Data relate to each other? The main point of connection is usually a database. In general terms, we could say that the work of IoT ends at that database; put another way, the goal of IoT is to dump all the data acquired in a more or less orderly manner in a common repository. The domain of Big Data starts by accessing that repository to manipulate the acquired data and get the information needed.
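That hand-off can be pictured with a toy example: the IoT side's job ends once readings land in the shared repository, and the analytics side starts by querying it. SQLite and the table and column names below are stand-in assumptions for whatever database a real deployment uses.

```python
# Toy illustration of the IoT / Big Data hand-off through a shared database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE telemetry (device TEXT, metric TEXT, value REAL)")

# IoT side: its work ends once the acquired data is dumped in the repository.
readings = [("press-1", "temp_c", 71.2), ("press-1", "temp_c", 74.8),
            ("press-2", "temp_c", 69.9), ("press-2", "temp_c", 70.4)]
db.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", readings)

# Big Data side: starts from the repository and turns data into information.
rows = db.execute(
    "SELECT device, AVG(value) FROM telemetry GROUP BY device ORDER BY device"
).fetchall()
for device, avg_temp in rows:
    print(device, round(avg_temp, 1))
```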
In any case, it is useful to visualize IoT Big Data Analytics as a toolbox. Depending on the type of information and knowledge we want to acquire from the data, we will draw one tool or another from it. Many of these tools come in the form of traditional algorithms, as well as improvements to or adaptations of those algorithms, with very similar statistical and algebraic principles. These algorithms were not invented in this century, to the surprise of many who wonder why they are now more relevant than before.
The quick answer is that the volume of data available is now much greater than when said algorithms were first conceived, but more importantly, the computing power of today’s machines allows the use of these techniques on a larger scale, giving new uses to old methodologies.
But we don’t want to give the impression that everything has already been invented and that the current trend in data analysis has brought nothing new to the table; quite the opposite in fact. The data ecosystem is very broad and has witnessed significant innovation in recent years.
One of the fastest-growing areas is Artificial Intelligence. It could be argued that this is not a recent invention, since the phenomenon was discussed as early as 1956. However, Artificial Intelligence is so broad a concept, and its impact so widespread, that it is often considered a self-contained discipline. In reality, though, it plays an integral part in Big Data and Data Analytics: it is another of the tools already contained in our metaphorical toolbox, and one that has found a natural evolution in AIoT.
AIoT: the Artificial Intelligence of Things
The exponential growth in the volume of data requires novel ways of analyzing it. In this context, Artificial Intelligence becomes particularly relevant. According to Forbes, the two main trends that are dominating the technology industry are the Internet of Things (IoT) and Artificial Intelligence.
IoT and AI are two independent technologies that have a significant impact on each other. While IoT can be thought of as the digital nervous system, AI would likewise be an advanced brain that makes the decisions that control the overall system. According to IBM, the true potential of IoT will only be achieved through the introduction of AIoT.
But what is Artificial Intelligence, and how is it different from conventional algorithms?
We usually speak of Artificial Intelligence when a machine mimics the cognitive functions of humans; that is, when it solves problems the way a human would, or even finds new ways of understanding data. AI's strength is its ability to generate new algorithms to solve complex problems (and this is the key) independently of a programmer's input. Thus we could think of Artificial Intelligence in general, and Machine Learning in particular (the segment within AI with the greatest projected growth), as algorithms that invent algorithms.
Edge AI and Cloud AI
The combination of IoT and AI brings us the concept of AIoT (Artificial Intelligence of Things), intelligent and connected systems that are able to make decisions on their own, evaluate the results of these decisions, and improve over time.
This combination can be done in several ways, of which we would like to highlight two:
- On the one hand we could continue to conceptualize AI as a centralized system that processes all impulses and makes decisions. In this case we would be referring to a system in the cloud that centrally receives all telemetry and acts accordingly. This would be referred to as Cloud AI (Artificial Intelligence in the Cloud).
- On the other hand, we must also talk about a very important part of our metaphorical nervous system: reflexes. Reflexes are autonomous decisions that the nervous system makes without the need to send all the information to the central processor (the brain). These decisions are made in the periphery, close to the source where data was originated. This is called Edge AI (Artificial Intelligence at the Edge).
Use Cases for Edge AI and Cloud AI
Cloud AI provides a thorough analysis process that takes into account the entire system, whereas Edge AI gives us rapidity of response and autonomy. But as with the human body, these two ways of reacting are not mutually exclusive, and can in fact be complementary.
As an example, a water control system can block a valve in the field the moment it detects a leak to prevent major water losses and, in parallel, send a notification to the central system where higher-level decisions can be made, such as opening alternative valves to channel water through another circuit.
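The reflex pattern in this example can be sketched as follows. The threshold, the function names, and the queue are illustrative assumptions, with the queue standing in for an MQTT or HTTP uplink to the cloud system.

```python
# Edge AI "reflex" sketch: act locally at once, notify the cloud in parallel.
cloud_queue: list[dict] = []   # stand-in for an MQTT/HTTP uplink

def on_flow_reading(valve_state: dict, flow_lpm: float,
                    leak_threshold: float = 40.0) -> dict:
    if flow_lpm > leak_threshold:
        valve_state["open"] = False   # reflex: block the valve immediately
        cloud_queue.append({"event": "leak_suspected",
                            "flow_lpm": flow_lpm})  # for higher-level decisions
    return valve_state

valve = {"open": True}
on_flow_reading(valve, 12.0)   # normal flow: nothing happens
on_flow_reading(valve, 95.0)   # leak-like spike: valve closed, cloud notified
print(valve["open"], len(cloud_queue))
```

The Cloud AI side would consume the queued events and make the slower, system-wide decisions, such as rerouting water through alternative valves.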
The possibilities are endless and can go beyond this simplified example of reactive maintenance, with a sophisticated system able to predict possible events and thus, enabling the possibility of predictive maintenance.
Another example of AIoT data analytics can be found in smart grids, where smart devices at the edge analyze the electricity flows at each node and make load-balancing decisions locally, while in parallel they send all this data to the cloud, where analysis informs a more comprehensive, nationwide energy strategy. Analysis at the macroscopic level would allow load-balancing decisions to be made at a regional level, or even electricity production to be decreased or increased by shutting down hydroelectric plants or purchasing power from a neighbouring country.
Urban Farming Brings Feast to Famine and Reduces Environmental Impact
The demand for agricultural products has never been higher, and the smart farming market is growing rapidly. Urban farming might be a key part of reducing the impact of all that growth on environmental sustainability.
The problems with large-scale farming, in terms of sustainability, are many and varied. Monocultures threaten ecosystem biodiversity, the need for agricultural land expansion encourages deforestation (especially in developing nations), and industrial farming accounts for almost a quarter of all human-generated greenhouse gas emissions according to some estimates.
Urban Farming: Some Size-Based Options
With the population explosion in urban areas, people are living farther than ever from the source of their food, which compounds the impacts of all these problems. Indoor farms, community gardens, and rooftop urban farming make it easier to avoid creating a monoculture, obviously do not require land to be cleared, and have perhaps the most significant impact of bringing food closer to the table for large segments of the urban population, reducing emissions through transport.
Rooftop farming is pretty much what it says on the tin: farms are established on the roofs of city buildings, which benefits residents with all sorts of plant growth and reduces the carbon footprint of the building, all while using rainwater and runoff for sustainable water consumption and improving air quality via photosynthesis.
Vertical farms, also known as indoor farms, can also be extremely productive, typically outproducing similarly sized outdoor farms by orders of magnitude. They usually use either hydroponic or aeroponic growing techniques to reduce resource usage and encourage volumetric production. They do have the drawback of increased energy consumption, however. Indoor farming notoriously uses a huge amount of electricity to manage and power its operations, which has a distinct impact on the sustainability of this solution. The sheer volume of produce grown with this method of farming shouldn’t be discounted, though.
Community gardens are also very popular, good for air quality, and generate a variety of produce. They also help eliminate so-called “food deserts” in urban areas where nothing fresh is available for sale. The downside to this approach is volume: there typically is just not enough land available to produce large yields from these small collective farms.
Government Buy-In on Urban Farming
Recognizing the importance of these urban farming initiatives, the US Department of Agriculture (USDA) has recently announced more than $6.6 million in grants and cooperative agreements for Smart Urban Farming in 2021, made available through the Office of Urban Agriculture and Innovative Production. That figure is about $2.5 million more than was provided in 2020 through this program.
The USDA’s Urban Agriculture and Innovative Production (UAIP) Competitive Grants Program supports urban farming programs through two grant types: Planning Projects that help to establish community gardens and nonprofit farms, and Implementation Projects targeted at increasing food production and access in economically distressed communities and developing business plans and zoning.
Through its Community Compost and Food Waste Reduction (CCFWR) Projects, the USDA is also funding pilot projects for municipal compost and food waste reduction. The department also recently announced the new Secretary’s Advisory Committee for Urban Agriculture, the members of which should be announced later this year.
In a recent speech at Colorado State University, US Secretary of Agriculture Tom Vilsack said, “The market wants it. Consumers want to know where their food is coming from and whether it’s contributing to a changing climate.” He went on to say that the USDA hopes to use these programs to not only encourage the growth of Urban Farming but also to perform data collection on climate indicators in smart cities and communities, so they can measure the impact of the initiative. Vilsack also talked about the principal climate change threats the USDA foresees affecting American agriculture. They include climate’s impact on productivity through heat, disease, or pests; drought; and the lack of resilience in the existing agricultural ecosystem, among other threats.
“We have focused on continued efficiency and productivity and sacrificed diversity and resiliency,” Vilsack concluded.
Urban Farming as enabled by IoT technology, monitoring, and automation is unlikely to replace large outdoor farms anytime soon, but the short-term impacts these farms could have might be needed sooner rather than later.
Small Dataset-Based Object Detection
Getting started with any machine learning project often starts with the question: “How much data is enough?” The answer depends on a number of factors such as the diversity of production data, the availability of open-source datasets, and the expected performance of the system; the list can go on for quite a while. In this article, we’d like to debunk a popular myth about machines only learning from large amounts of data, and share a use case of applying ML with a small dataset.
With the rapid adoption of deep learning in computer vision, there are a growing number of diverse tasks that need to be solved with the help of machines. To understand these small dataset machine learning applications in the real world, let’s focus on the task of object detection.
What is Object Detection?
Object detection is a branch of computer vision that deals with identifying and locating objects in a photo or video. The goal of object detection is to find objects with certain characteristics in a digital image or video with the help of machine learning. Often, object detection is a preliminary step for item recognition: first we have to identify objects, and only then can we apply recognition models to identify certain elements.
Object Detection Business Use Cases
Object detection is a core task of AI-powered solutions for tasks such as visual inspection, warehouse automation, inventory management, security, and more. Below are some object detection use cases that are successfully implemented across industries.
Manufacturing
From quality assurance and inventory management to sorting and assembly, object detection plays an important role in the automation of many manufacturing processes. Machine learning algorithms allow a system to quickly detect defects, or to automatically count and locate objects, improving inventory accuracy by minimizing human error and the time spent checking and sorting.
Transportation
Machine learning is used in self-driving cars, pedestrian detection, and optimizing traffic flow in cities. Object detection is used to perceive vehicles and obstacles in the immediate vicinity of the driver. It is also used to detect and count vehicles, support traffic analysis, and spot cars that have stopped on highways or at crossroads.
Retail
Object detection helps detect SKUs (Stock Keeping Units) by analyzing and comparing shelf images with the ideal state. Computer vision techniques integrated into hardware help reduce waiting time in retail stores, track the way customers interact with products, and automate delivery.
Healthcare
Object detection is used for studying medical images such as CT scans, MRIs, and X-rays. It is also used in cancer screening to help identify high-risk patients, detect abnormalities, and even provide surgical assistance. Applying object detection and recognition to assist with medical examinations in telehealth is a new trend set to change the way healthcare is delivered to patients.
Safety and surveillance
Among the applications of object detection are video surveillance systems capable of people detection and facial recognition. Using machine learning algorithms, such systems are designed for biometric authentication and remote surveillance. This technology has even been used for suicide prevention.
Logistics and warehouse automation
Object detection models are capable of visually inspecting products for defect detection, as well as inventory management, quality control, and automation of supply chain management. AI-powered logistics solutions use object detection models instead of barcode detection, thus replacing manual scanning.
How to Develop an Object Detection System: the PoC Approach
Developing an object detection system to be used for tasks such as the ones mentioned above is no different than any other ML project. It typically starts with building a hypothesis to be checked during several rounds of experimentation.
Such a hypothesis is part of the Proof of Concept (PoC) approach in software development. The approach fits machine learning well because the deliverable is not an end product: the research results tell us either that the chosen approach can be used, or that extra experiments are needed to choose a different direction.
If the question is “how much data is enough for machine learning”, the hypothesis may be an initial statement such as “150 data samples are enough for the model to reach an optimal level of performance.”
Experienced ML practitioners such as Andrew Ng (co-founder of Google Brain and ex-chief scientist at Baidu) recommend quickly building the first iteration of the system with machine learning functionality, then deploying it and iterating from there.
This approach allows us to create a functional and scalable prototype system that can be upgraded with the data and feedback from the production team. This solution is far more efficient when compared to the prospect of trying to build the final system from the get-go. A prototype of this nature does not necessarily require large amounts of data.
As for the question of “how much data is enough”: no machine learning expert can predict exactly how much data is needed. The only way to find out is to establish a hypothesis and test it under real-world conditions. This is exactly what we've done in the following object detection example.
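The hypothesis-and-test loop can be simulated with a deliberately simple stand-in model: train on growing subsets of data and watch where the validation metric plateaus. Everything here (the synthetic 2D data, the nearest-centroid "model", the subset sizes) is an illustrative assumption, not the actual detection experiment.

```python
# Toy learning-curve experiment: how does validation accuracy change as the
# training subset grows? A nearest-centroid classifier stands in for a model.
import random

random.seed(0)

def make_point(label: int) -> tuple[float, float, int]:
    cx = 0.0 if label == 0 else 3.0          # two well-separated clusters
    return (random.gauss(cx, 1.0), random.gauss(0.0, 1.0), label)

train = [make_point(i % 2) for i in range(400)]
val = [make_point(i % 2) for i in range(200)]

def accuracy(train_subset) -> float:
    cents = {}
    for lbl in (0, 1):
        pts = [(x, y) for x, y, l in train_subset if l == lbl]
        cents[lbl] = (sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts))
    correct = 0
    for x, y, l in val:
        pred = min(cents, key=lambda c: (x - cents[c][0]) ** 2
                                        + (y - cents[c][1]) ** 2)
        correct += pred == l
    return correct / len(val)

for n in (10, 50, 150, 400):   # watch where the metric plateaus
    print(n, round(accuracy(train[:n]), 3))
```

If accuracy at 150 samples is already close to accuracy at 400, the hypothesis "150 samples are enough" survives the test; otherwise more data is needed.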
Case Study: Object Detection Using Small Dataset for Automated Items Counting in Logistics
Our goal was to create a system capable of detecting objects for logistics. Transportation of goods from production to warehouse or from warehouse to facilities often requires intermediate control and coordination of the actual quantity using invoices and a database. If performed manually, this task would require hours of human work and would involve high risk of loss, damage, or injury.
Our initial hypothesis was that a small annotated dataset would be sufficient to address the issue of automatically counting various items for logistics purposes.
The traditional approach to the problem that many would take is to use classic computer vision techniques. For instance, one might combine a Sobel filter edge detection algorithm with Hough circle transform methods to detect and count round objects. This method is simple and relatively reliable; however, it is more suitable for a controlled environment, such as a production line which produces objects that have a well-defined round or oval shape.
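For readers curious what that classical pipeline looks like, here is a self-contained toy version on a synthetic image: Sobel filters for edges, followed by a Hough vote for circle centers at a known radius. In practice one would use OpenCV's cv2.Sobel and cv2.HoughCircles; this NumPy sketch, with its invented image size, radius, and thresholds, only illustrates the principle.

```python
import numpy as np

# Synthetic test image: a filled disk of radius 10 centered at (32, 32).
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2).astype(float)

# Sobel gradients via explicit 3x3 correlation, then edge thresholding.
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
ky = kx.T
pad = np.pad(img, 1)
gx = sum(kx[i, j] * pad[i:i + H, j:j + W] for i in range(3) for j in range(3))
gy = sum(ky[i, j] * pad[i:i + H, j:j + W] for i in range(3) for j in range(3))
edges = np.hypot(gx, gy) > 1.0

# Hough vote: each edge pixel votes for possible centers at distance r.
r = 10
acc = np.zeros((H, W))
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
for ey, ex in zip(*np.nonzero(edges)):
    ys = np.round(ey - r * np.sin(angles)).astype(int)
    xs = np.round(ex - r * np.cos(angles)).astype(int)
    ok = (ys >= 0) & (ys < H) & (xs >= 0) & (xs < W)
    acc[ys[ok], xs[ok]] += 1

cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print("detected center:", (int(cy), int(cx)))   # near the true center
```

Note how tightly this depends on a known, well-defined shape; with variable log shapes, lighting, and image quality, this is exactly where the classical approach breaks down.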
In the use case we selected, the classical methods are far less reliable since the shape of the objects, quality of the images, and lighting conditions can all vary greatly. Furthermore, these classical methods cannot learn from the data collected. This makes it difficult to refine the system by collecting more data. In this case, the best option would be to instead fine-tune a neural network-based object detector.
Data collection and labeling
To perform an experiment of object detection with a small dataset, we collected and manually annotated several images available via public sources. We decided to focus on the detection of wood logs, and divided the annotated images into train and validation splits.
We additionally gathered a set of test images without labels where the logs would be in some way different from the train and validation images (orientation, size, shape, or color of logs) to see where the limits to the model’s detection capabilities lie for the given train set.
Since we are dealing with object detection, image annotations are represented as bounding boxes. To create them, we used an open-source browser-based tool, VGG Image Annotator, which has sufficient functionality for creating a small-scale dataset. Unfortunately, the tool produces annotations in its own format which we then converted to the COCO object detection standard.
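A minimal sketch of that conversion step might look as follows. It handles only rectangular VIA v2 regions and a single hypothetical "log" category, and omits image sizes and error handling; the field names follow the two public formats, but this is an illustrative reduction, not our production script.

```python
# Convert VGG Image Annotator (VIA v2) rectangles to COCO-style annotations.
def via_to_coco(via: dict, category: str = "log") -> dict:
    coco = {"images": [], "annotations": [],
            "categories": [{"id": 1, "name": category}]}
    ann_id = 1
    for img_id, entry in enumerate(via.values(), start=1):
        coco["images"].append({"id": img_id, "file_name": entry["filename"]})
        for region in entry.get("regions", []):
            s = region["shape_attributes"]
            if s.get("name") != "rect":
                continue  # this sketch converts bounding boxes only
            coco["annotations"].append({
                "id": ann_id, "image_id": img_id, "category_id": 1,
                "bbox": [s["x"], s["y"], s["width"], s["height"]],  # x,y,w,h
                "area": s["width"] * s["height"], "iscrowd": 0,
            })
            ann_id += 1
    return coco

via_sample = {"logs_001.jpg123456": {
    "filename": "logs_001.jpg",
    "regions": [{"shape_attributes":
                 {"name": "rect", "x": 10, "y": 20, "width": 30, "height": 40},
                 "region_attributes": {}}],
}}
print(via_to_coco(via_sample)["annotations"][0]["bbox"])
```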
In object detection, the quantity of data is determined not just by the number of images in the dataset, but also by the quantity of individual object instances in each image. In our case, the images were quite densely packed with objects – the number of instances reached 50-90 per image.
Detectron2 Object Detection
Let’s have a closer look at how Faster R-CNN works for object detection. First, an input image is passed through a backbone (a deep CNN pre-trained on an image classification problem) and converted into a compressed representation called a feature map. Feature maps are then processed by the Region Proposal Network (RPN), which identifies areas in the feature maps that are likely to contain an object of interest.
Next, those areas are extracted from the feature maps using the RoI pooling operation and processed by the bounding box offset head (which predicts accurate bounding box coordinates for each region) and the object classification head (which predicts the class of the object in the region).
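The RoI pooling step that connects the two stages can be illustrated with a simplified NumPy sketch (real implementations operate on proposals in image coordinates and on batched multi-channel feature maps; this toy version pools a single 2-D map):

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Minimal RoI max-pooling sketch: crop a region (x0, y0, x1, y1)
    from a 2-D feature map and max-pool it to a fixed output size, as
    done between the RPN proposals and the detection heads."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    out_h, out_w = output_size
    h_edges = np.linspace(0, region.shape[0], out_h + 1).astype(int)
    w_edges = np.linspace(0, region.shape[1], out_w + 1).astype(int)
    pooled = np.empty(output_size)
    for i in range(out_h):
        for j in range(out_w):
            pooled[i, j] = region[h_edges[i]:h_edges[i + 1],
                                  w_edges[j]:w_edges[j + 1]].max()
    return pooled

fmap = np.arange(36, dtype=float).reshape(6, 6)
pooled = roi_pool(fmap, roi=(0, 0, 4, 4))
```

The fixed output size is what lets regions of arbitrary shape feed into the fully connected detection heads.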
Faster R-CNN (Region-based Convolutional Neural Network) is the third iteration of the R-CNN architecture. It is a two-stage object detection model: the RPN sub-network samples object proposals, which the second stage then refines and classifies. However, two-stage models are not the only option for object detection on a small dataset.
There are also one-stage detector models that attempt to find the relevant objects without the region proposal screening stage. One-stage detectors have simpler architectures and are typically faster but less accurate than two-stage models. Examples include the YOLOv4 and YOLOv5 architectures: some of the lighter configurations from these families can reach 50-140 FPS (at some cost to detection quality), compared to Faster R-CNN, which runs at 15-25 FPS at most.
The original Faster R-CNN paper was published in 2016, and the architecture has received small improvements over time, which are reflected in the Detectron2 library that we used.
For example, the model configuration selected for our experiments, R50-FPN, uses a ResNet-50 backbone with a Feature Pyramid Network, a concept introduced in a CVPR 2017 paper that has since become a staple of CNN backbones for feature extraction. In simpler terms, with a Feature Pyramid Network we are not limited to the deepest feature maps extracted from the CNN but also use low- and medium-level feature maps. This enables the detection of small objects that would otherwise be lost during compression down to the deepest levels.
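The core of the FPN idea, the top-down pathway, can be sketched in NumPy. This is a deliberate simplification: real FPNs also apply 1x1 convolutions to align channel counts before merging, which is omitted here:

```python
import numpy as np

def fpn_top_down_step(deep, shallow):
    """One top-down FPN merge step: upsample the deeper, coarser feature
    map 2x with nearest-neighbour and add it to the shallower map, so
    high-level semantics enrich the high-resolution levels."""
    upsampled = deep.repeat(2, axis=0).repeat(2, axis=1)
    return shallow + upsampled

deep = np.ones((2, 2))      # coarse map with strong semantics
shallow = np.zeros((4, 4))  # fine map with spatial detail
merged = fpn_top_down_step(deep, shallow)
```

Detection heads then run on every merged level, which is why small objects visible only at high resolution are not lost.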
In our experiments, we used the following methodology:
- Take a Faster R-CNN instance pre-trained on the COCO 2017 dataset with 80 object classes.
- Replace the 320 units in the bounding box regression head and the 80 units in the classification head with 4 and 1 units respectively, in order to train the model for 1 novel class (the bounding box regression head has 4 units per class to regress the X, Y, W, H dimensions of a bounding box, where X, Y are the coordinates of the box center and W, H are its width and height).
After some preliminary runs we picked the following training parameters:
- Model config: R50-FPN
- Learning rate: 0.000125
- Batch size: 2
- Batch size for RoI heads: 128
- Max iterations: 200
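In Detectron2, this setup might look roughly like the sketch below. The dataset names `logs_train` and `logs_val` are assumptions (they would need to be registered with Detectron2's `DatasetCatalog` first); setting `NUM_CLASSES = 1` is what makes the library rebuild the heads with 4 and 1 units:

```python
# Hypothetical Detectron2 fine-tuning sketch mirroring the parameters above.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")

cfg.DATASETS.TRAIN = ("logs_train",)  # assumed registered dataset names
cfg.DATASETS.TEST = ("logs_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1   # 1 novel class replaces the 80 COCO classes
cfg.SOLVER.BASE_LR = 0.000125
cfg.SOLVER.IMS_PER_BATCH = 2          # batch size
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
cfg.SOLVER.MAX_ITER = 200

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```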
With the parameters set, we turned to the most interesting aspect of training: how many training instances were needed to obtain decent results on the validation set. Since even 1 image contained up to 90 instances, we randomly removed part of the annotations to test smaller instance counts. We found that for our validation set of 98 instances, with 10 training instances the model picked up only 1-2 validation instances, with 25 it already found approximately 40, and with 75 or more it was able to predict all of them.
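Randomly dropping annotations to control the instance count could be done along these lines (an illustrative helper, not our exact script, operating on COCO-style annotation dicts):

```python
import random

def subsample_annotations(annotations, n_keep, seed=42):
    """Randomly keep only n_keep annotation instances, so the effective
    training-set size can be varied independently of the image count."""
    rng = random.Random(seed)  # fixed seed keeps runs comparable
    if n_keep >= len(annotations):
        return list(annotations)
    return rng.sample(annotations, n_keep)

# A densely packed image: 90 instances, reduced to 25 for training.
anns = [{"id": i, "bbox": [0, 0, 10, 10]} for i in range(90)]
subset = subsample_annotations(anns, 25)
```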
Increasing the number of training instances from 75 to 100 and 200 led to the same final training results; however, the model converged faster due to the higher diversity of the training examples.
Predictions of the model trained with 237 instances on an image from the validation set can be seen in the image below; there are several false positives (indicated by red arrows), but they have low confidence and could mostly be filtered out by setting the confidence threshold at ~80%.
In the next step, we explored the performance of the trained model on test images without labels. As expected, images similar to the training set distribution received confident and high-quality predictions, whereas images where the logs had an unusual shape, color, or orientation were much tougher for the model to work with.
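Thresholding on confidence is a simple post-processing step; a minimal sketch over parallel lists of boxes and scores might look like this (the ~0.8 cutoff is the value suggested by our validation results):

```python
def filter_by_confidence(boxes, scores, threshold=0.8):
    """Keep only detections whose confidence meets the threshold,
    dropping most low-confidence false positives."""
    kept = [(b, s) for b, s in zip(boxes, scores) if s >= threshold]
    return [b for b, _ in kept], [s for _, s in kept]

# Three raw detections, one of them a low-confidence false positive.
boxes = [[0, 0, 10, 10], [5, 5, 20, 20], [30, 30, 40, 40]]
scores = [0.95, 0.42, 0.88]
boxes_kept, scores_kept = filter_by_confidence(boxes, scores)
```

The trade-off is the usual one: raising the threshold removes false positives but eventually starts discarding true detections as well.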
However, even on the challenging images from the test set we observed a positive effect from increasing the number of training instances. In the image below we show how the model learns to pick up additional instances (marked by green stars) as the number of training images grows (1 training image – 91 instances, 2-4 images – 127-237 instances).
To sum up, the results showed that the model was able to pick up ~95% of the instances in the validation dataset after fine-tuning with 75-200 object instances, provided the validation data resembled the training data. This shows that with properly selected training examples, quality object detection is possible even in a limited-data scenario.
Future of Object Detection
Object detection is one of the most widely used computer vision technologies to emerge in recent years, primarily because of its versatility. Some existing models are successfully deployed in consumer electronics or integrated into driver-assistance software, while others form the basis for robotic solutions that automate logistics and transform the healthcare and manufacturing industries.
The task of object detection is essential for digital transformation, as it serves as a basis for AI-driven software and robotics, which in the long run will enable us to free people from performing tedious jobs and mitigate multiple risks.