
Monitoring Microservices: A Step By Step Guide


Microservices are an architectural approach to developing an application as a set of distributed, loosely coupled services that can be deployed and scaled independently, helping developers deliver new functionality rapidly and reliably. The growing demand for microservices can therefore be summarized in one word: speed.

Unsurprisingly, this shift has had ripple effects across software management, including monitoring systems. So, today we will be discussing the changes required to monitor microservices effectively in production environments. We'll look at why we need to monitor systems, what the challenges are in monitoring microservices, and how to overcome them.

Why Monitor Microservices?

Monitoring is a critical part of every IT system. However, the challenges associated with microservices are comparatively new and different. A monolithic system, deployed as a single application, has dependencies and failure modes that are completely different from those found in microservices.

The fundamental reason for monitoring microservices is the same as for any other distributed system: failure. But because a microservices-based application is composed of many services, it requires different and more intensive monitoring methods.

No one wants their system to fail, but that is not the only reason to monitor microservices. Systems are not simply up or down: complex systems can operate in a degraded state that hurts performance, and such states often foreshadow complete failure. Monitoring system behavior can alert developers to a degraded state before the system fails entirely.

Additionally, system monitoring produces insightful information that can help enhance the performance of services. Performance and failure data can be used to identify the system's failure patterns and resolve them.

Challenges in Monitoring Microservices

Cloud-native architecture based on Kubernetes and containers such as Docker has become a popular way to run microservices. However, it also adds a layer of complexity to the microservice system.

In containerized workloads, you have to monitor infrastructure metrics (host memory and CPU), the Kubernetes and container runtime (node resource utilization and running pods), and application metrics (request rate and duration).

With dozens of microservices, each running its own database and programming language and each scaled, deployed, and upgraded independently, a lack of monitoring can cause unpredictable issues and poor system performance.

A Step-By-Step Guide To Monitor Microservices 

So far we have gained some insight into microservices architecture and the need to monitor it, but what is the right way to do so? Let's find out.

1. Limit the Things to Measure

Start by narrowing your focus to one to three important metrics (quantitative measures used to track and compare performance). Tools like Retrace, LightStep, or Splunk APM offer metrics, alerts, error tracking, and centralized logging, and if you haven't yet decided which monitoring tool to use for your microservices-based application, reviewing all their features can be intimidating.

To understand which metrics to concentrate on, you need to understand your business. Know where most customer or operational complaints come from: service downtime, slow requests, and so on. Simply put, the more deeply you know your business, the easier it is to choose the tools for the metrics you need.

2. Commission APM and Logging Software

With your chosen metrics in mind, you can select a monitoring tool. If a demo instance is available, explore it to understand how the tool works.

Whichever metrics drove your choice of monitoring system, you will want an easy overview of the entire system, including databases, runtimes, and other back-end components. You will also need a dashboard that correlates services, revealing relationships between them that can't easily be seen in code.

The monitoring tool should be able to break results down along different dimensions while pinpointing potential errors and problems. Ideally, it is a system for centralized logging rather than just a monitoring tool, enabling you to collect logs from multiple services in one place.

In some cases, though, your monitoring and logging tools can be separate products, as long as there's a way to correlate logs with the data in the monitor.

3. Instrument Metrics at Extension Points

A good tool will instrument your services automatically; all you need to do is add a library and configure a few properties to connect to the right server. Also make sure the chosen monitoring tool supports the framework and programming language you are using. Otherwise, you will have to find the seams in the framework's request life cycle and instrument them yourself.

Most tools with auto-instrumentation also let you add custom instrumentation in the places the automatic approach doesn't reach. After instrumenting and configuring, you can run services locally while pointing them at the monitoring tool's server.
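
To make this concrete, here is a minimal sketch of manual metric instrumentation, assuming a Node.js service built with Express and the prom-client library; a commercial APM agent would replace this hand wiring, and the metric names and route are illustrative.

import express from 'express';
import { Counter, Histogram, register } from 'prom-client';

// Illustrative metrics: request count and duration per route.
const requests = new Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests',
  labelNames: ['route', 'status'],
});
const duration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request duration in seconds',
  labelNames: ['route'],
});

const app = express();

app.get('/orders', (_req, res) => {
  const end = duration.startTimer({ route: '/orders' }); // start timing
  res.json({ orders: [] }); // placeholder handler
  requests.inc({ route: '/orders', status: res.statusCode });
  end(); // records the elapsed time in the histogram
});

// Expose metrics for the monitoring server to scrape.
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', register.contentType);
  res.send(await register.metrics());
});

app.listen(3000);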

4. Instrument Tracing Logs

With microservices, it can be hard to trace events through the system, which makes cross-service bugs harder to find. To avoid that, use trace IDs in every service of your microservices-based application. Standards like OpenTracing help here, and many frameworks have instrumentation libraries that support it. Trace IDs make querying logs across multiple services and identifying the problems hiding in your system easier and more efficient.
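
As a sketch of the idea, here is minimal trace-ID propagation in an Express service; the x-trace-id header name and the inventory.internal host are illustrative conventions, not standards, and an OpenTracing/OpenTelemetry library would normally manage this context for you.

import express from 'express';
import { randomUUID } from 'crypto';

const app = express();

// Reuse the caller's trace ID, or start a new trace at the edge.
app.use((req, res, next) => {
  const traceId = (req.headers['x-trace-id'] as string) ?? randomUUID();
  res.locals.traceId = traceId;
  res.setHeader('x-trace-id', traceId); // echo it back to the caller
  console.log(JSON.stringify({ traceId, path: req.path })); // correlated log line
  next();
});

app.get('/checkout', async (_req, res) => {
  // Forward the same ID on outbound calls so downstream logs correlate.
  await fetch('http://inventory.internal/reserve', {
    headers: { 'x-trace-id': res.locals.traceId },
  });
  res.send('ok');
});

app.listen(3000);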

Conclusion

Now that you know why monitoring microservices is essential, it's time to adopt these steps for your microservices-based application sooner rather than later.

All you need to do is adopt a tool that monitors your services side by side and add trace data to every service so you can understand how they interact with one another. That visibility will also let you make smarter decisions about architecture and scaling.

Source: https://hackernoon.com/monitoring-microservices-a-step-by-step-guide-awl36c3?source=rss


Strategic Forum – Planning for Transformative Change – Hazel Henderson, Guest Speaker


The First Strategic Forum – Planning for Transformative Change

The pandemic and other threats like climate change pose an existential challenge to organizations everywhere, and they have made it clear that the present global order is not sustainable. The World Economic Forum called for  “A Great Reset” in all spheres of society. Leaders in business, government and other institutions need to plan for transformative change – NOW.

The TechCast Project draws on its leading research to bring authoritative studies on the crucial issues of today to a broader audience. See our work on Global Consciousness, The Coming Internet, Redesigning Capitalism, Forecasting the Presidential Election, and AI versus Humans.

Complimentary Admission 

Anyone with an interest in strategy, foresight, future studies and related fields is encouraged to attend. 

The conference begins at 2000 UTC (Coordinated Universal Time) and ends at 2130 UTC.

Wednesday, June 30, 2021
1 pm PDT (Los Angeles, San Francisco)

4 pm EDT (New York, Washington, DC)
9 pm daylight time (London)
10 pm daylight time (Paris)

Thursday, July 1, 2021
6 am standard time (Seoul, Tokyo)
7 am standard time (Sydney)

PROGRAM
Conference Host

Limor will open the conference by welcoming participants, introducing speakers and their topics, and directing questions to speakers through the chat function. She is a skilled facilitator and will ensure that the proceedings are productive and transparent.

Limor Shafman
President of the Keystone Technology Group
And a Frequent Speaker

Limor is co-founder of  TIA’s Smart Building Program.  She works with PropTech startups on market strategy and business development. She leads the NIST – Global City Teams Challenge Smart Buildings Super Cluster which is releasing its Smart Buildings Blueprint. As an international corporate attorney, Limor draws on her understanding of the digital environment from her work in the theme park, video game, mobile communications infrastructure, and other technology sectors. She has led technology-oriented organizations, serving as President of the World Future Society DC Chapter and Co-founder, Chair Emeritus of the IPv6 Forum Israel Chapter. Limor is also an international speaker, moderator and has been a show host for several online media outlets.

Forecasting Global Transformation: 
Most Likely Scenario for 2030

Bill draws on his work at TechCast to provide forecasts of 50 emerging technologies, 30 social trends, and 25 wild cards.  Results are aggregated to provide a macro-forecast of the “Most Likely Scenario for 2030” —  Sustainability Arrives, Green Transportation, Infinite Knowledge and Intelligence, Mastery Over Life, Threats Across the Spectrum and Higher-Order Values.  We conclude with the theme of  Prof. Halal’s forthcoming book, Beyond Knowledge: Digital technology is now driving a shift to an “Age of Consciousness.”

William Halal
The TechCast Project
George Washington University

Bill is Professor Emeritus of Management, Technology and Innovation. He is founder and director of the TechCast Project and a thought leader in foresight, strategy, forecasting and related fields. For more, see www.BillHalal.com

State-of-the-Art in Strategy and Foresight: Constant Change from the Bottom Up And the Outside In

Jess and Bill summarize results of their recent survey of strategic foresight practices to outline how strategic foresight is changing to cope with the technology revolution. The study’s main conclusion is that organizations should develop “constant change from the bottom up and the outside in.”

Jess Garretson
CEO, The Cognis Group

As leader of this life sciences consultancy, Jess oversees a company portfolio that includes IP research, consulting and strategic partnering services, Pharmalicensing.com, and FutureinFocus.com, an online subscription service curating foresight reports on the technology and innovation trends driving the next 10-20 years. Many years of experience in both corporations and consulting give her a multi-faceted perspective for driving the solutions most critical to brand and business development.

William Halal
The TechCast Project
George Washington University
(See bio above)

Keynote Speech:
The Time For Transformation Is Now

Hazel Henderson draws on a lifetime of work in future studies to suggest what families, organizations, nations, and all of us can do to actually create transformative change.  How do futurists and strategists get their attention? What strategic “processes” do we recommend? How can this Strategic Forum provide leadership?

Hazel Henderson
Futurist, Author, Speaker, Consultant
President, Ethical Markets

Hazel Henderson is a global futurist; her eleven books and current research continue to map the worldwide transition from the fossil-fueled Industrial Era to the renewable circular economies emerging in a knowledge-rich, cleaner, greener and wiser future. Ethical Markets Media, a Certified B Corporation that Hazel founded in 2004 after 20 years advising the Calvert Group of socially-responsible mutual funds, continues the work of reforming markets and metrics to guide investors toward our long-term survival on planet Earth. In the 1960s, with the help of a volunteer ad agency and enlightened media executives, Hazel organized Citizens for Clean Air to inform New Yorkers of the polluted air they were breathing. They showed the late Robert F. Kennedy, then running for his Senate seat, all the sources of this pollution and why they were campaigning to correct the GDP to subtract, rather than add, these pollution costs. Kennedy's speech on the GDP problem at the University of Kansas became a rallying cry for reform of this obsolete indicator, still too often quoted as a measure of national "progress"! In 1975, Hazel joined Lester Brown on the founding board of the Worldwatch Institute, where she was forced to face up to the Global MegaCrisis at every board meeting as the human effects on planetary ecosystems deteriorated. For more, see Hazel's recent presentation at the Family Office Forum in Singapore, March 5th. Hazel can be reached at [email protected]

Following Executive Workshop

($195 Admission)

The Workshop begins 30 minutes after the Conference ends (2200 UTC).

This Executive Workshop follows the above Conference to assist leaders, planners and other professionals in drawing on the presentations to develop a more powerful strategic posture. In this workshop, you will review the presentations of the previous speakers and assess the impact on your current strategic posture. In a small working group of your peers, you will discuss needed adjustments to account for the anticipated changes. Each group will report their key findings to the entire group. You will come away with a comprehensive set of insights and actions that you can take back to your organization and bring your overall strategy into greater alignment with the transformative changes that lie ahead.

Art Murray
President/CEO, Applied Knowledge Sciences, Inc.
Assisted by Limor Shafman and Bill Halal

Dr. Art Murray is co-founder of Applied Knowledge Sciences, Inc. where he has served as CEO for over 27 years. Since 2005, he’s been the Director of the Enterprise of the Future Program at the International Institute for Knowledge and Innovation. He’s the author of “Deep Learning Manual: the knowledge explorer’s guide to self-discovery in education, work, and life,” and “Building the Enterprise of the Future: Co-creating and delivering extraordinary value in an eight-billion-mind world,” and KMWorld magazine’s popular column: “The Future of the Future.” He holds a B.S.E.E. degree from Lehigh University, and the M.E.A. and D.Sc. degrees from the George Washington University.

Small group breakout discussions and reporting.

Readings:

  • Updating Strategy for a High-Tech World: Constant Change from the Bottom Up and the Outside In
  • Through the MegaCrisis (Awarded “Outstanding Paper of 2013” by Emerald Publishing)

Register Here for the June 30 Conference

Offer Donations to the June 30 Conference

Register Here for the June 30 Workshop

To clarify questions about the program or other issues, email Prof. Halal at [email protected]


Second Conference of the Strategic Forum
July 28, 2021

Foresight Lessons From the Pandemic:
Implications for Strategy Formulation and Response

Ideally, foresight precedes strategy formulation, but in moments of crisis normal order must be abandoned and foresight and strategy inevitably unfold together in real-time.  We will offer a set of lessons learned from conducting a major Delphi-based scenario foresight project during the darkest days of the unfolding pandemic and reflect on the long-term implications for how foresight and strategy can more effectively blend in the face of deep uncertainty.

Jerome Glenn
CEO, The Millennium Project

Jerry is the co-founder of The Millennium Project, with 67 Nodes around the world. He is also lead author of the State of the Future reports and co-editor of Futures Research Methodology 3.0, and he designed and manages the Global Futures Intelligence System. Glenn led The Millennium Project team that created the COVID-19 scenarios for the American Red Cross and was lead author of Scenario 1: America Endures, the baseline, surprise-free scenario.


Theodore Jay Gordon
Futurist and Management Consultant


Ted is a specialist in forecasting methodology, planning, and policy analysis. He is co-founder and Board member of The Millennium Project. Ted founded The Futures Group,  was one of the founders of The Institute for the Future and consulted for the RAND Corporation. He was also Chief Engineer of the McDonnell Douglas Saturn S-IV and S-IVB space vehicles and was in charge of the launch of space vehicles from Cape Canaveral. He is a frequent lecturer, author of many technical papers and several books dealing with space, the future, life extension, scientific and technological developments and issues, and recently, co-author of books on the prospects for terrorism and counterfactual methods. He is the author of the Macmillan encyclopedia article on the future of science and technology. He is on the editorial board of Technological Forecasting and Social Change. Mr. Gordon was a member of the Millennium Project team that created scenarios for the American Red Cross. Ted was responsible for the negative scenario that depicted a bleak but plausible future; this scenario contains many assumptions about the unknowns, but in the end seems endurable and plausible.

Paul Saffo
Forecaster

Paul is a Silicon Valley-based forecaster who studies technological change.  He teaches at Stanford where he is an Adjunct Professor in the School of Engineering and is Chair of Future Studies at Singularity University.  Paul is also a non-resident Senior Fellow at the Atlantic Council, and a Fellow of the Royal Swedish Academy of Engineering Sciences. Paul holds degrees from Harvard College, Cambridge University, and Stanford University.

Readings:

  • Three Futures of the Covid-19 Pandemic in the US,  January 1, 2022.

Register Here for the July 28 Conference

Offer Donations to the July 28 Conference

Register Here for the July 28 Workshop

To clarify questions about the program or other issues, email Prof. Halal at [email protected]


Coming Speakers

The Emerging Global Consciousness

It is increasingly clear that a major shift in values, beliefs and ideology is needed to make sense of today’s turmoil and to grasp the outlines of the emerging global order. This session presents a vision of global consciousness developed by TechCast’s study to resolve the Global MegaCrisis.

William E. Halal
The TechCast Project
George Washington University
(See bio above)

Story Thinking

A strategic organizational posture that balances collaboration with competitiveness requires a deeper understanding of common ground, and that understanding is found in stories. Beyond storytelling, story thinking provides the visualization of story structure as the holistic business, learning, and communication model. The foundational shared mental model of “process” which was adopted in the Second Industrial Revolution must expand into a shared mental model of “story” to thrive in the Fourth Industrial Revolution, given it is based on intelligence, not electricity. Carl Jung said, “You are IN a story, whether you know it or not.” Operationalizing this quotation is the goal of story thinking, and is the key to thriving within transformational change.

John Lewis
Coach, Speaker, Author, Story Thinking

Dr. John Lewis, Ed.D., is a consultant, coach, and speaker on the topics of human capital and strategic change within the knowledge-driven enterprise. He is the author of Story Thinking, which addresses the major organizational challenges of the Fourth Industrial Revolution and ways for visionary leaders to begin addressing them now by rethinking traditional views of change, learning, and leadership. He is also the author of The Explanation Age, which Kirkus Reviews described as "An iconoclast's blueprint for a new era of innovation." He is the current president of EBLI (Evidence-Based Learning Institute) and holds a doctoral degree in Educational Psychology from the University of Southern California, with a dissertation focus on mental models and decision making.

Keys to Open Innovation

Many of the world’s most successful business models, companies, and products were born from the synthesis of necessity and collaboration. “Open Innovation” is not a new concept, but rather one that demands increasing attention and robust implementation in the rapidly accelerating technology innovation lifecycle. Despite the success stories, many organizations have not yet fully embraced the concept of leveraging external innovation, as internal stakeholders often mistakenly perceive threats and underestimate opportunities that may arise from partnerships. This discussion will explore the careful balance that must be achieved and maintained between legacy internal processes and the augmented capabilities of external resources.

Anthony Cascio
Director of Research & Engineering
The Cognis Group

Anthony Cascio leads the Cognis team responsible for intellectual property analytics & landscaping, technology scouting, and partnering search engagements. For over twelve years, Anthony has consulted with clients ranging from the Fortune 500 to startups in a broad array of high technology industries related to both the life and physical sciences. He provides unique insight alongside validation to help guide each client’s strategic direction and identify new technology-related business opportunities. Anthony studied electrical engineering at the University of South Florida while conducting research in electronic materials characterization and electrospray deposition of macromolecular structures.

 

Staying Safe in a Digital World

Each day the news is filled with stories about computer crime and hacking that affect our financial institutions, banks, small businesses, large corporations, hospitals, and retail stores, and threaten to steal even our own identities. Cybersecurity refers to the practice of defending computers, networks, and data from malicious attacks. We will provide an overview of aspects of cybersecurity, including viruses, phishing, social engineering, identity theft, and personal privacy, as well as threats to the Internet of Things and physical security, and provide tips on how to protect yourself and your organization from these threats.

Steven Hausman
Futurist  and Speaker
 Former Administrator, National Institutes of Health

Which data, what data, what futures: cybersecurity from the cloud to the brain cloud

We live our existence in a space we see, smell, hear, touch, and taste. For the last ten years, however, that has not been the only space in which our existence is lived. We spend an ever-growing part of our time in cyberspace, a global domain within the information environment where our digital life carries on – but one that nature did not equip us to make sense of. In this talk we will explore the strategic structure of cyberspace and its implications, then broaden our aperture to look at trends for both the near future and deep futures.

Gabriele Rizzo
Visionary Futurist and Enthusiastic Innovator
Former Advisor to the Minister of Defense for Futures

Dr. Gabriele Rizzo, Ph.D., APF, holds a PhD in String Theory and Astrophysics. He is NATO's Member at Large ("world-class expert drawn from academia, industry or government from the Nations") in Strategic Foresight and Futures Studies, and the former advisor to the Italian Minister of Defense on Futures. He is a member of the Strategy Board of the European Cyber Security Organization, a PPP worth $2B. Dr. Rizzo's works inform $1T (one trillion USD) worth of defense planning; some were evaluated as "important pillars of strategy and implementation of R&I" by the EU, and others shaped industrial investments in research, development, and innovation of more than $20B in 2020.

Source: https://www.ethicalmarkets.com/strategic-forum-planning-for-transformative-change-hazel-henderson-guest-speaker/


A Brief Intro to the GPT-3 Algorithm


Generative Pre-trained Transformer 3 (GPT-3) embraces and augments the GPT-2 model architecture, including pre-normalization, modified initialization, and reversible tokenization. It exhibits strong performance on many Natural Language Processing (NLP) tasks.

GPT-3 is an auto-regressive artificial intelligence model developed by OpenAI, an AI research laboratory located in San Francisco, California.

It is a massive artificial neural network that uses deep learning to generate human-like text and is trained on huge text datasets containing hundreds of billions of words. It is the third-generation AI language prediction model in the GPT-n series and the successor to GPT-2.

In simple terms, GPT-3 was fed examples of how billions of people write and taught to pick up on writing patterns from user input. Once a few inputs are offered, the model generates intelligent text that follows the submitted pattern and structure. It is also the largest AI language model, producing billions of words a day.

How GPT-3 works

This artificial intelligence algorithm is a program that calculates which word, or even which character, is most likely to appear in a text given the words around it. This is called the conditional probability of words. It is a generative neural network that can output a numeric score or a yes/no answer, and it can also generate long sequences of original text as its output.

The total number of weights that GPT-3 holds in memory and uses to process every query is 175 billion.
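
To make "conditional probability of words" concrete, here is a toy sketch that estimates P(next word | current word) from bigram counts. This only illustrates the concept: GPT-3 itself computes such probabilities with a 175-billion-parameter transformer over tokens, not a lookup table.

// Toy bigram model: estimate P(next | current) from word-pair counts.
function bigramProbabilities(corpus: string): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  const words = corpus.toLowerCase().split(/\s+/).filter(Boolean);

  // Count how often each word follows each other word.
  for (let i = 0; i < words.length - 1; i++) {
    const next = counts.get(words[i]) ?? new Map<string, number>();
    next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
    counts.set(words[i], next);
  }

  // Normalize counts into conditional probabilities.
  for (const next of counts.values()) {
    let total = 0;
    for (const c of next.values()) total += c;
    for (const [word, c] of next) next.set(word, c / total);
  }
  return counts;
}

const probs = bigramProbabilities('the cat sat on the mat the cat ran');
console.log(probs.get('the')); // Map { 'cat' => 0.667, 'mat' => 0.333 }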

Examples

  • noun + verb = subject + verb
  • noun + verb + adjective = subject + verb + adjective
  • verb + noun = subject + verb
  • noun + verb + noun = subject + verb + noun
  • noun + noun = subject + noun
  • noun + verb + noun + noun = subject + verb + noun + noun

The stream of algorithmic content in GPT-3

Every month over 409 million people view more than 20 billion pages, and users publish around 70 million posts on WordPress, which is the dominant content management system online.

The main specialty of GPT-3 is its capacity to respond intelligently to minimal input. It is extensively trained on billions of parameters and produces up to 50,000 characters without any supervision. This one-of-a-kind AI neural network generates text of such quality that it is quite tough for a normal human to tell whether the output was written by GPT-3 or by a person.

Training of GPT-3

The training of the GPT-3 artificial intelligence algorithm has two steps.

  • Step 1: Build the vocabulary, production rules, and categories. This is achieved by feeding the model inputs in the form of books. For each word, the model predicts the category to which the word belongs, and afterward a production rule is built.

  • Step 2: Develop the vocabulary and production rules for each category. This is achieved by feeding the model sentences. For every sentence, the model predicts the category to which each word belongs, and after that a production rule is built.

The model includes a few tricks that boost its capability to generate text. For example, it can guess the beginning of a word from the context around it, it predicts the next word based on the preceding words of a sentence, and it can even predict the length of a sentence.

Conclusion

There's a lot of hype around the GPT-3 algorithm right now. In the future it may offer more than text, including pictures, videos, and more. Many researchers have also predicted that GPT-3 will gain the capability to translate words to pictures and pictures to words.

Source: https://hackernoon.com/a-brief-intro-to-the-gpt-3-algorithm-t31f37k5?source=rss


How AI Is Catapulting Cannabis into the Future


By John Kaweske (@johnkaweske), Founder & CEO of North Star Holdings, Inc. and Tweedleaf.

We like to think we know a thing or two about artificial intelligence. We've seen the ominous technological future depicted in television shows and films, with robots slowly assimilating into society. But this imagery is all wrong. AI isn't taking over the world in the form of lifelike robots. Instead, automation has been part of our lives for quite some time now, and many of us are likely not even aware of it.

Driverless cars. Voice-activated home assistants. Smartphones. These are all made possible because of artificial intelligence. It’s not only changed our lives in unimaginable ways, but it’s also allowed us to collect, analyze, and interpret data that can help us better understand the world around us. And for business owners, that’s critical not only for our organizational efficacy and profitability, but it allows us to see our customers through an entirely new lens.

It’s no surprise that many industries are already taking advantage of artificial intelligence in their practices — and now, so is the cannabis industry. The legal cannabis market is predicted to reach over $66 billion by 2025. To prepare for this growth, leaders in the cannabis sector must exploit AI’s transformative powers or risk falling behind the competition. 

So, what exactly are some of the transformative powers artificial intelligence holds in the cannabis industry? Let’s find out.

Enhanced cultivation capabilities

Have you ever enjoyed the benefits of smart home technology? If you have, you know that smart lights and smart thermostats allow you to control your home’s lighting and temperature from anywhere in the world. Growers can now enjoy these same benefits. By using artificial intelligence, we are better able to manage our crops, which can generate higher yields at lower prices. This is what we do at Tweedleaf.

We use AI to adjust the pH level and moisture levels of our soil. We also use AI to help monitor and control lighting exposure to ensure our plants receive the appropriate level of photosynthesis. We even use it for pest control. By giving growers real-time updates, AI eliminates the ‘guessing game’ of cultivation. Is the growth rate slower than usual? Are nutrient levels low? Is there a pest infestation? Artificial intelligence alerts growers to any issues, so they know exactly what to fix and how to fix it.

All of this is pretty miraculous when you think about it. AI allows growers to create the perfect environment for plants rather than leaving it up to chance. Growers shouldn’t have to hope for the best; they can make this ‘best’ their reality.

But one of the biggest advantages of artificial intelligence is that it makes it possible for growers to create and breed new, customized strains. As the legal marijuana market continues to expand, access to a variety of strains ensures that all consumers can benefit from the remarkable and healing powers of cannabis. And once growers perfect their new strains, they can then use AI to lock in the correct watering, lighting, and temperature schedules that will aid in the cultivation and production of a diverse range of plants.

Recommendation apps

For many of us, our smartphones were our earliest introductions to artificial intelligence. We’ve become so accustomed to their convenience that we can’t imagine our lives without them. This is exactly why they were created — to make our lives better.

Think about the last time you shopped online. As you were shopping, the store probably sent you some ‘recommended’ items to view. Were they scarily accurate? This wasn’t by mistake. By using AI, brands can analyze your preferences and interests and pull items from their store they also think you’d like. While this may seem a bit eerie at first, we’ve eventually come to love these recommendations. Instead of spending hours browsing a site, the brands are doing the heavy lifting for us and pulling the products we’ll have the most interest in.

The cannabis industry can take advantage of these same benefits. Certain apps like Uppy can help medical marijuana users traverse the world of legal cannabis. What experience do you hope to get from a cannabis product? Do you use it to alleviate a physical injury? Do you need it to help you sleep better at night? Are you a creative person and want to gain some inspiration? Recommendation apps can pull information about you and use it to offer insight into new strains you might like to try.

This sort of capability is so invaluable because it enriches our experience and can transform our lives. 

Better customer experience/service

While artificial intelligence has many benefits for companies, its central mission is improving the customer experience. 

If you’ve ever browsed a marijuana company’s website and interacted with a chatbot, there’s a good chance you spoke with a digital budtender that was powered by AI. And chances are you probably didn’t even realize you weren’t talking to a real person.

This is how powerful artificial intelligence has become. And now, over half of people would rather speak with a chatbot than a human because it saves time, and these chatbots can often be more knowledgeable. 

Consumers — especially new cardholders — often have a lot of questions about different products at the onset yet are too intimidated and embarrassed to walk into a dispensary and approach an employee for answers. Digital budtenders can help walk people through these questions online while also providing them with insight into the cannabis products that will best fit their needs. As customers feel more comfortable, they will begin interacting with your company more and more, which will result in a higher number of sales. 

Additionally, brands can also take advantage of tools like QR codes to have this same impact in-store. By placing a QR code on your packaging, a dispensary can equip customers with all of the information they should know about a product: potency, expected effects, user reviews, lab testing, etc. And because they don’t have to approach a person with all their questions, this could also lead to more sales.

Artificial intelligence isn’t just a revolutionary technology; it’s the future of business. Companies that implement this transformative technology do so not only to the benefit of their organizations but also to their customers. 

Source: https://hackernoon.com/how-ai-is-catapulting-cannabis-into-the-future-zi1737kw?source=rss


How We Implemented the Face-with-Mask Detection Web App for Chrome


By Yan Tsishko (@yantsishko), a skilled front-end developer with 6+ years of experience developing web and SmartTV applications.

In the previous article, I discussed whether it is possible to use machine learning (in particular, face and mask detection) in the browser, approaches to detection, and optimization of all processes.

Today I want to give the technical details of the implementation.

Technologies


The primary language for development is TypeScript. The client application is written in React.js.

The application uses several neural networks to detect different events: face detection and mask detection. Each model/network runs in a separate thread (a Web Worker). The neural networks are launched using TensorFlow.js, with WebAssembly or WebGL as the backend, which allows code to execute at near-native speed. The choice of backend depends on the size of the model (small models work faster on WebAssembly), but you should always test and choose whichever is faster for a particular model.
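
As a minimal sketch of selecting a backend with TensorFlow.js (the 'wasm' backend is registered by the @tensorflow/tfjs-backend-wasm package; the benchmarking itself is omitted here):

import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend

async function chooseBackend(): Promise<void> {
  // Small models often run faster on WASM, larger ones on WebGL;
  // measure both on your own model before committing.
  await tf.setBackend('wasm'); // or 'webgl'
  await tf.ready();
  console.log('Active backend:', tf.getBackend());
}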

The video stream is received and displayed using WebRTC, and the OpenCV.js library is used to work with images.

The following approach was implemented:


The main thread is only orchestrating all processes. It doesn’t load the heavy OpenCV library and doesn’t use TensorFlow.js. It gets images from the video stream and sends them for processing by web workers.

A new image is not sent to the worker until it informs the main thread that the worker is free and can process the next image. Thus a queue is not created, and we process the last image each time.

Initially, the image is sent for face recognition; only if a face is recognized is the image then sent for mask recognition. Each worker's result is saved and can be displayed in the UI.

Performance

  • Receiving an image from a stream – 31 ms
  • Face detection preprocessing – 0-1 ms
  • Face detection – 51 ms
  • Face detection post-processing  – 8 ms
  • Mask detection preprocessing – 2 ms
  • Mask detection – 11 ms
  • Mask detection post-processing – 0-1 ms

Total: 

  • Face detection – 60 ms + 31 ms = 91 ms
  • Mask detection – 14 ms

In ~ 105 ms, we would know all the information from the image.

  1. Face detection preprocessing is getting an image from a stream and sending it to a web worker.
  2. Face detection post-processing – saving the result from the face detection worker and drawing it on the canvas.
  3. Mask detection preprocessing – preparing a canvas with an aligned face image and transferring it to the web worker.
  4. Mask detection post-processing – saving the results of mask detection.

Each model (face detection and mask detection) runs in a separate web worker, which loads the necessary libraries (OpenCV.js, Tensorflow.js, models).

We have 3 web workers:

  • Face detection
  • Mask detection
  • A worker-helper that transforms images using heavy methods from OpenCV and TensorFlow.js – for example, to build a calibration matrix for multiple cameras

Features and tricks that helped us in development and optimization

Web workers and how to work with them

A web worker is a way to run a script on a separate thread.

They allow heavy processes to run in parallel with the main thread without blocking the UI. The main thread executes the orchestration logic; all heavy computation runs in the web workers. Web workers are supported in almost all browsers.


Features and limitations of web workers

Features:

  • Access only to a subset of JavaScript features
  • Access to the navigator object
  • Read-only access to the location object
  • Possibility to use XMLHttpRequest
  • Possibility to use setTimeout() / clearTimeout() and setInterval() / clearInterval()
  • Application Cache
  • Importing external scripts using importScripts()
  • Creating other web workers

Limitations:

  • No access to the DOM
  • No access to the window object
  • No access to the document object
  • No access to the parent object

Communication between the main thread and the web workers is provided by postMessage and the onmessage event handler.
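
As a minimal sketch of this round trip (the file name and message shapes are illustrative):

// main.ts
const worker = new Worker('worker.js');
worker.onmessage = (event) => console.log('from worker:', event.data);
worker.postMessage({ type: 'ping' });

// worker.js
onmessage = (event) => {
  if (event.data.type === 'ping') {
    postMessage({ type: 'pong' });
  }
};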


If you look at the specification of the postMessage() method, you will notice that it accepts not only data but also a second argument – a list of transferable objects:

worker.postMessage(message, [transfer]);

Let’s see how using it will help us.

A transferable object is one that can be passed between different execution contexts, such as the main thread and web workers.

This interface is implemented in:

  • ImageBitmap
  • OffscreenCanvas
  • ArrayBuffer
  • MessagePort

If we want to transfer 500 MB of data to the worker, we can do it without the second argument, but the difference shows up in transfer time and memory usage.

Sending the data without the transfer argument takes 149 ms and 1,042 MB in Google Chrome, and even more in other browsers. With the transfer argument, it takes 1 ms and cuts memory consumption in half!

Since images are frequently transferred from the main thread to the web workers, it is important to do this as quickly and memory-efficiently as possible, and this feature helps us a lot.
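
A small sketch of the difference (the 500 MB figure mirrors the measurement above; the worker file name is illustrative):

const worker = new Worker('worker.js');
const buffer = new ArrayBuffer(500 * 1024 * 1024); // 500 MB payload

// Without the transfer list, the buffer would be structured-cloned (copied).
// With it, ownership moves to the worker instead:
worker.postMessage({ type: 'process', buffer }, [buffer]);

// The buffer is now detached in this thread:
console.log(buffer.byteLength); // 0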

Using OffscreenCanvas

The web worker does not have access to the DOM, so you cannot use canvas directly. OffscreenCanvas comes to the rescue.

Advantages:

  • Fully detached from the DOM
  • Can be used both in the main thread and in web workers
  • Has a transferable interface and does not load the main thread if rendering runs in a web worker
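
A minimal sketch of using it inside a worker, assuming an ImageBitmap has been transferred in from the main thread:

// Inside a web worker – no DOM access needed.
onmessage = (event) => {
  const bitmap = event.data.bitmap; // an ImageBitmap transferred from the main thread
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(bitmap, 0, 0);
  const pixels = ctx.getImageData(0, 0, bitmap.width, bitmap.height);
  postMessage({ type: 'processed', width: pixels.width, height: pixels.height });
};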

Advantages of using requestAnimationFrame

requestAnimationFrame allows you to receive images from the stream with maximum performance (60 FPS); it is limited only by the camera's capability, and not all cameras send video at that frequency.

The main advantages are:

  • The browser optimizes requestAnimationFrame calls together with other animations and drawing.
  • It consumes less power, which is very important for mobile devices.
  • It works without a call stack and doesn't create a call queue.
  • The minimum interval between calls is 16.67 ms (1000 ms / 60 fps = 16.67 ms).
  • The call frequency can be controlled manually.
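
Putting these pieces together, here is a sketch of a frame-grab loop in the spirit of the approach described above; the faceDetectionWorker name, file name, and busy flag are illustrative, not the app's actual code.

const video = document.querySelector('video')!;
const faceDetectionWorker = new Worker('faceDetection.js');
let workerBusy = false;

faceDetectionWorker.onmessage = () => {
  workerBusy = false; // the worker is free again; results are handled elsewhere
};

function grabFrame() {
  // Skip frames while the worker is busy so no queue builds up –
  // we always process the most recent image.
  if (!workerBusy && video.readyState >= 2) {
    createImageBitmap(video).then((bitmap) => {
      workerBusy = true;
      faceDetectionWorker.postMessage({ type: 'detectFace', bitmap }, [bitmap]);
    });
  }
  requestAnimationFrame(grabFrame); // schedule the next grab
}

requestAnimationFrame(grabFrame);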

Metrics of application

At first, using stats.js seemed like a good idea for displaying application metrics, but once the number of metrics passed 20, the main thread of the application began to slow down due to how the browser works. Each metric draws a graph on its own canvas and receives data very often, so the browser re-renders at high frequency, which negatively affects the application. As a result, the reported metrics are underestimated.

To avoid this problem, it is better to abandon the eye candy and simply display the current value and the running average for the entire session as text. Updating a value in the DOM is much faster than rendering graphics.

Memory leaks control

Quite often, during development, we encountered memory leaks on mobile devices, while on a desktop, it could work for a very long time.

In web workers, it is impossible to know how much memory they actually consume (performance.memory does not work in web workers).

With that in mind, we made the application able to launch either with web workers or entirely in the main thread. By running all our detection models on the main thread, we can collect memory consumption metrics, see where the memory leak is, and fix it.

The main code of models in web workers

We got acquainted with the main tricks that were used in the application; now we will look at the implementation.

For working with web workers, comlink-loader was used initially. It's a very handy library that lets you work with a worker as a class object, without using the onmessage and postMessage methods, and control asynchronous code using async-await. All this was convenient until the application was launched on a tablet (a Samsung Galaxy Tab S7) and suddenly crashed after 2 minutes.

After analyzing all the code, no memory leaks were found, except inside the black box of this library's worker handling. For some reason, the launched TensorFlow.js models were not being cleared and were retained somewhere inside the library.

It was decided to switch to worker-loader, which lets you work with web workers from plain JS without unnecessary layers. That solved the problem; the application now runs for days without crashes.

Face detection worker

Create web worker

this.faceDetectionWorker = workers.FaceRgbDetectionWorkerFactory.createWebWorker();

Create a message handler from a worker in the main thread

this.faceDetectionWorker.onmessage = async (event) => {
  if (event.data.type === 'load') {
    this.faceDetectionWorker.postMessage({
      type: 'init',
      backend,
      streamSettings,
      faceDetectionSettings,
      imageRatio: this.imageRatio,
    });
  } else if (event.data.type === 'init') {
    this.isFaceWorkerInit = event.data.status;
    // Only when both workers are initialized do we start grabbing and processing frames
    if (this.isFaceWorkerInit && this.isMaskWorkerInit) {
      await this.grabFrame();
    }
  } else if (event.data.type === 'faceResults') {
    this.onFaceDetected(event);
  } else {
    throw new Error(`Type=${event.data.type} is not supported by RgbVideo for FaceRgbDatectionWorker`);
  }
};

Sending an image for face processing

this.faceDetectionWorker.postMessage(
  {
    type: 'detectFace',
    originalImageToProcess: this.lastImage,
    lastIndex: lastItem!.index,
  },
  [this.lastImage], // transferable object
);

Face detection web worker code

The init method initializes all the models, libraries, and canvases needed for the work:

export const init = async (data) => {
  const { backend, streamSettings, faceDetectionSettings, imageRatio } = data;

  flipHorizontal = streamSettings.flipHorizontal;
  faceMinWidth = faceDetectionSettings.faceMinWidth;
  faceMinWidthConversionFactor = faceDetectionSettings.faceMinWidthConversionFactor;
  predictionIOU = faceDetectionSettings.predictionIOU;
  recommendedLocation = faceDetectionSettings.useRecommendedLocation
    ? faceDetectionSettings.recommendedLocation
    : null;
  detectedFaceThumbnailSize = faceDetectionSettings.detectedFaceThumbnailSize;
  srcImageRatio = imageRatio;

  await tfc.setBackend(backend);
  await tfc.ready();

  const [blazeModel] = await Promise.all([
    blazeface.load({
      // The maximum number of faces returned by the model
      maxFaces: faceDetectionSettings.maxFaces,
      // The width of the input image
      inputWidth: faceDetectionSettings.faceDetectionImageMinWidth,
      // The height of the input image
      inputHeight: faceDetectionSettings.faceDetectionImageMinHeight,
      // The threshold for deciding whether boxes overlap too much
      iouThreshold: faceDetectionSettings.iouThreshold,
      // The threshold for deciding when to remove boxes based on score
      scoreThreshold: faceDetectionSettings.scoreThreshold,
    }),
    isOpenCvLoaded(),
  ]);

  faceDetection = new FaceDetection();

  originalImageToProcessCanvas = new OffscreenCanvas(srcImageRatio.videoWidth, srcImageRatio.videoHeight);
  originalImageToProcessCanvasCtx = originalImageToProcessCanvas.getContext('2d');

  resizedImageToProcessCanvas = new OffscreenCanvas(
    srcImageRatio.faceDetectionImageWidth,
    srcImageRatio.faceDetectionImageHeight,
  );
  resizedImageToProcessCanvasCtx = resizedImageToProcessCanvas.getContext('2d');

  return blazeModel;
};

The isOpenCvLoaded method waits for OpenCV to load:

export const isOpenCvLoaded = () => {
  let timeoutId;

  const resolveOpenCvPromise = (resolve) => {
    if (timeoutId) {
      clearTimeout(timeoutId);
    }
    try {
      // eslint-disable-next-line no-undef
      if (cv && cv.Mat) {
        return resolve();
      } else {
        timeoutId = setTimeout(() => {
          resolveOpenCvPromise(resolve);
        }, OpenCvLoadedTimeoutInMs);
      }
    } catch {
      timeoutId = setTimeout(() => {
        resolveOpenCvPromise(resolve);
      }, OpenCvLoadedTimeoutInMs);
    }
  };

  return new Promise((resolve) => {
    resolveOpenCvPromise(resolve);
  });
};

Face detection method

export const detectFace = async (data, faceModel) => {
  let { originalImageToProcess, lastIndex } = data;
  const facesThumbnailsImageData = [];

  // Resize the original image to the recommended BlazeFace resolution
  resizedImageToProcessCanvasCtx.drawImage(
    originalImageToProcess,
    0,
    0,
    srcImageRatio.faceDetectionImageWidth,
    srcImageRatio.faceDetectionImageHeight,
  );

  // Get the resized image
  let resizedImageDataToProcess = resizedImageToProcessCanvasCtx.getImageData(
    0,
    0,
    srcImageRatio.faceDetectionImageWidth,
    srcImageRatio.faceDetectionImageHeight,
  );

  // Detect faces with BlazeFace
  let predictions = await faceModel.estimateFaces(
    // The image to classify. Can be a tensor, DOM element image, video, or canvas
    resizedImageDataToProcess,
    // Whether to return tensors as opposed to values
    returnTensors,
    // Whether to flip/mirror the facial keypoints horizontally. Should be true for videos that are flipped by default (e.g. webcams)
    flipHorizontal,
    // Whether to annotate bounding boxes with additional properties such as landmarks and probability. Pass in `false` for faster inference if annotations are not needed
    annotateBoxes,
  );

  // Normalize predictions
  predictions = faceDetection.normalizePredictions(
    predictions,
    returnTensors,
    annotateBoxes,
    srcImageRatio.faceDetectionImageRatio,
  );

  // Filter initial predictions by the criterion that all landmarks must be in the area of interest
  predictions = faceDetection.filterPredictionsByFullLandmarks(
    predictions,
    srcImageRatio.videoWidth,
    srcImageRatio.videoHeight,
  );

  // Filter predictions by minimum face width
  predictions = faceDetection.filterPredictionsByMinWidth(predictions, faceMinWidth, faceMinWidthConversionFactor);

  // Filter predictions by recommended location
  predictions = faceDetection.filterPredictionsByRecommendedLocation(predictions, predictionIOU, recommendedLocation);

  // If there are any predictions, face thumbnail extraction starts according to the configured size
  if (predictions && predictions.length > 0) {
    // Draw the initial original image
    originalImageToProcessCanvasCtx.drawImage(originalImageToProcess, 0, 0);
    const originalImageDataToProcess = originalImageToProcessCanvasCtx.getImageData(
      0,
      0,
      originalImageToProcess.width,
      originalImageToProcess.height,
    );

    // eslint-disable-next-line no-undef
    let srcImageData = cv.matFromImageData(originalImageDataToProcess);
    try {
      for (let i = 0; i < predictions.length; i++) {
        const prediction = predictions[i];
        const facesOriginalLandmarks = JSON.parse(JSON.stringify(prediction.originalLandmarks));

        if (flipHorizontal) {
          for (let j = 0; j < facesOriginalLandmarks.length; j++) {
            facesOriginalLandmarks[j][0] = srcImageRatio.videoWidth - facesOriginalLandmarks[j][0];
          }
        }

        // eslint-disable-next-line no-undef
        let dstImageData = new cv.Mat();
        try {
          // eslint-disable-next-line no-undef
          let thumbnailSize = new cv.Size(detectedFaceThumbnailSize, detectedFaceThumbnailSize);
          let transformation = getOneToOneFaceTransformationByTarget(detectedFaceThumbnailSize);
          let similarityTransformation = getSimilarityTransformation(facesOriginalLandmarks, transformation);
          // eslint-disable-next-line no-undef
          let similarityTransformationMatrix = cv.matFromArray(3, 3, cv.CV_64F, similarityTransformation.data);
          try {
            // eslint-disable-next-line no-undef
            cv.warpPerspective(
              srcImageData,
              dstImageData,
              similarityTransformationMatrix,
              thumbnailSize,
              cv.INTER_LINEAR,
              cv.BORDER_CONSTANT,
              new cv.Scalar(127, 127, 127, 255),
            );
            facesThumbnailsImageData.push(
              new ImageData(
                new Uint8ClampedArray(dstImageData.data, dstImageData.cols, dstImageData.rows),
                detectedFaceThumbnailSize,
                detectedFaceThumbnailSize,
              ),
            );
          } finally {
            similarityTransformationMatrix.delete();
            similarityTransformationMatrix = null;
          }
        } finally {
          dstImageData.delete();
          dstImageData = null;
        }
      }
    } finally {
      srcImageData.delete();
      srcImageData = null;
    }
  }

  return { resizedImageDataToProcess, predictions, facesThumbnailsImageData, lastIndex };
};

The input is an image and an index used later to match the face with its mask detection result.

Since blazeface accepts images with a maximum size of 128 px, the image from the camera must be reduced.

Calling the faceModel.estimateFaces method starts the image analysis using BlazeFace; the predictions, with the coordinates of the face and of the nose, ears, eyes, and mouth areas, are returned to the main thread.

Before working with them, you need to restore the coordinates for the original image because we compressed it to 128 px.

Now you can use this data to decide whether the face is in the desired area, and whether it meets the minimum face size you need for subsequent identification.


The following code cuts the face out of the image and aligns it to identify the mask using openCV methods.

Mask detection

Model initialization and the WebAssembly backend:

export const init = async (data) => {
  const { backend, streamSettings, maskDetectionsSettings, imageRatio } = data;

  flipHorizontal = streamSettings.flipHorizontal;
  detectedMaskThumbnailSize = maskDetectionsSettings.detectedMaskThumbnailSize;
  srcImageRatio = imageRatio;

  await tfc.setBackend(backend);
  await tfc.ready();

  const [maskModel] = await Promise.all([
    tfconv.loadGraphModel(
      `/rgb_mask_classification_first/MobileNetV${maskDetectionsSettings.mobileNetVersion}_${maskDetectionsSettings.mobileNetWeight}/${maskDetectionsSettings.mobileNetType}/model.json`,
    ),
  ]);

  detectedMaskThumbnailCanvas = new OffscreenCanvas(detectedMaskThumbnailSize, detectedMaskThumbnailSize);
  detectedMaskThumbnailCanvasCtx = detectedMaskThumbnailCanvas.getContext('2d');

  return maskModel;
};

The mask detection requires the coordinates of the eyes, ears, nose, and mouth, plus the aligned face image returned by the face detection worker.

this.maskDetectionWorker.postMessage({
  type: 'detectMask',
  prediction: lastItem!.data.predictions[0],
  imageDataToProcess,
  lastIndex: lastItem!.index,
});

Detection method

export const detectMask = async (data, maskModel) => {
  let { prediction, imageDataToProcess, lastIndex } = data;
  const masksScores = [];

  const maskLandmarks = JSON.parse(JSON.stringify(prediction.landmarks));
  if (flipHorizontal) {
    for (let j = 0; j < maskLandmarks.length; j++) {
      maskLandmarks[j][0] = srcImageRatio.faceDetectionImageWidth - maskLandmarks[j][0];
    }
  }

  // Draw the thumbnail with the mask
  detectedMaskThumbnailCanvasCtx.putImageData(imageDataToProcess, 0, 0);

  // Detect the mask via the neural network
  let predictionTensor = tfc.tidy(() => {
    let maskDetectionSnapshotFromPixels = tfc.browser.fromPixels(detectedMaskThumbnailCanvas);
    let maskDetectionSnapshotFromPixelsFlot32 = tfc.cast(maskDetectionSnapshotFromPixels, 'float32');
    let expandedDims = maskDetectionSnapshotFromPixelsFlot32.expandDims(0);
    return maskModel.predict(expandedDims);
  });

  // Put the mask detection result into the returned array
  try {
    masksScores.push(predictionTensor.dataSync()[0].toFixed(4));
  } finally {
    predictionTensor.dispose();
    predictionTensor = null;
  }

  return {
    masksScores,
    lastIndex,
  };
};

The result of the neural network is the probability that a mask is present, which is returned from the worker. This makes it possible to raise or lower the mask detection threshold. Using lastIndex, we can match a face with the presence of a mask and display information about a specific person on the screen.

Conclusion

I hope this article helps you learn about the possibilities of working with ML in the browser and the ways to optimize it. Most applications can be optimized using the tricks described above.


Source: https://hackernoon.com/how-we-implemented-the-face-with-mask-detection-web-app-for-chrome-co2m35q0?source=rss
