GitHub Copilot and the Rise of AI Language Models in Programming Automation


Read on to learn more about what makes Copilot different from previous autocomplete tools (including TabNine), and why this particular tool has been generating so much controversy.



Should I Use GitHub Copilot?

 
If you are a software engineer, or count any among your circle of acquaintances, then you’re probably already aware of Copilot, GitHub’s new deep learning code completion tool.

Autocomplete tools for programmers are nothing new, and Copilot is not even the first to use deep learning, nor the first to use a GPT transformer. After all, TabNine sprang out of a summer project by OpenAI alum Jacob Jackson and makes use of the GPT-2 general-purpose transformer.

Microsoft (which owns GitHub) has packaged its own IntelliSense code completion tool with programming products since at least 1996, and autocomplete and text correction have been active areas of research since the 1950s.

Read on if you’d like to learn more about what makes Copilot different from previous autocomplete tools (including TabNine), and why this particular tool has been generating so much controversy.

Copyright Controversy

 

Screenshot from Armin Ronacher demonstrating GitHub Copilot’s eagerness to recite the fast inverse square root function from Quake III.

 

Since its inception, Copilot has fueled a heated discussion about the product and its potential copyright implications. In large part this is due to the way the model was trained. GitHub Copilot is based on OpenAI’s Codex, a variant of GPT-3 fine-tuned on code. GPT-3 is OpenAI’s 175-billion-parameter general-purpose transformer (Codex is apparently based on a 12-billion-parameter version of GPT-3), and of course any giant transformer needs a giant training dataset to be effective. GitHub is just the place to find such a dataset, and Copilot’s training dataset included all public code hosted by GitHub.

This is a principal source of controversy surrounding the project, surpassing even the discussion about automating away software engineering and the impact of tools like Copilot on the future of programming. More technical information about the model and its limitations can be found in the paper on arXiv.


“Portrait of Edmond de Belamy.” The collective Obvious used open source code to generate the piece, which later sold for over $400,000 at auction, much to the chagrin of Robbie Barrat, whose code they used.

 

Some programmers are simply upset that their code contributed to what is likely to become a paid product without their explicit permission, with a few commenters on Hacker News discussing leaving the platform.

In some ways the reaction to Copilot echoes the reaction to the GAN-generated “painting” that sold for nearly half a million dollars at auction. The art piece was created on top of open source contributions with a lineage of multiple authors, none of whom, as far as we know, received any compensation for the work’s noteworthy success at auction.

The code used to produce the artwork, and potentially the pre-trained model weights as well, was made publicly available under a BSD license by the model’s author, Robbie Barrat. Barrat’s own work was based on previous open source projects, and he later modified the license to disallow commercial use of the pre-trained weights. It’s understandable for programmers to be frustrated when left out of profitable uses of their work, but there’s more to the copyright controversy surrounding Copilot than that.

“github Copilot has, by their own admission, been trained on mountains of gpl code, so i’m unclear on how it’s not a form of laundering open source code into commercial works.” (Twitter user eevee)

GitHub Copilot is demonstrably capable of reproducing extended sections of copyleft code, which many in the open source community consider a violation of the terms of licenses like GPL.

Copilot was trained on all public code, including code under permissive open source licenses like the MIT License. However, it also includes code under copyleft licenses like the Affero General Public License (AGPL), which allows use and modification but requires modified works to be made available under the same license. In some interpretations, code generated by GitHub Copilot can be considered derivative of the original training data, and, perhaps more problematically, Copilot can sometimes reproduce code from the training dataset verbatim. That makes Copilot a trickier case than, say, the Google book-scanning precedent often cited as a cornerstone of fair use for scraping copyrighted data.

The discussion on potential legal issues continues with little consensus from either side of the debate for now, and the subject may very likely become an issue for the courts to resolve.

Even if we assume that Copilot is totally in the clear legally, there may be other risks to using the product. If Copilot’s training is considered fair use and its output is not considered derivative or copyright/copyleft infringing work, it could still produce output that easily fits the criteria for plagiarism in the context of something like a PhD student writing code for their thesis research. For the time being it may be a good idea to use Copilot carefully, but there’s another reason that Copilot is a trending topic of discussion: Copilot can give surprisingly good solutions to common programming tasks, and appears to be both quantitatively and qualitatively more capable than previous autocomplete tools.

How Good is GitHub’s Copilot?

 

Video: GitHub Copilot CRUSHES Leetcode Interview Questions! (Source)

 

Even if you’re not a programmer, you’ve probably had some experience with autocomplete in the form of predictive text on a mobile phone. This automatically suggests the next word as you begin to type it, and may suggest a slightly longer continuation, such as finishing the current sentence.

For most programming autocomplete tools, the amount and complexity of the suggestions are roughly similar to what you’d find in a mobile phone keyboard, but not all code completion tools use modern (i.e., deep learning) machine learning.

The default autocomplete in vim, for example, simply offers a list of suggested completions based on words the user has entered previously. More recently developed code completion tools like TabNine or Kite are a little more sophisticated and can suggest the completion of the rest of a line or two. The Kite website suggests this is enough to make a programmer nearly twice as efficient in terms of the number of keystrokes used, but GitHub Copilot takes this one step further, albeit with a very long stride.

Copilot has completion capabilities similar to those of the general-purpose GPT-3 language model it is derived from, and working with the code completion tool looks and feels similar to the style of “prompt programming” that GPT-3 experimenters have adopted when working with the model. Copilot can interpret the contents of a docstring and write a function to match, or, given a function and the start of an appropriately named test function, it can generate unit tests. That saves a lot more than 50% of a programmer’s keystrokes.
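To make that concrete, here is a hand-written illustration of the docstring-driven workflow. This is not captured Copilot output; the function, its behavior, and the test body are illustrative assumptions about the kind of completion the tool aims to produce.

```python
# Hypothetical prompt-and-completion pair illustrating the docstring-driven style.
# A programmer writes the signature and docstring; the tool proposes the body.

def parse_expenses(expenses_string):
    """Parse a string of expenses and return a list of triples
    (date, value, currency), ignoring lines that start with //."""
    # A completion along these lines is what a Copilot-style tool would suggest:
    expenses = []
    for line in expenses_string.splitlines():
        if line.startswith("//") or not line.strip():
            continue
        date, value, currency = line.split(" ")
        expenses.append((date, float(value), currency))
    return expenses


def test_parse_expenses():
    # Given the function above and this test stub's name, the tool can often
    # fill in a plausible test body like the following.
    text = "2021-01-02 -34.01 USD\n//2021-01-03 2.59 DKK\n2021-01-04 2.72 EUR"
    assert parse_expenses(text) == [
        ("2021-01-02", -34.01, "USD"),
        ("2021-01-04", 2.72, "EUR"),
    ]
```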

Taken to its logical conclusion, when Copilot works perfectly it turns the job of a software engineer into something that looks a lot more like constant code review than writing code.

Several programmer-bloggers with early access to the technical preview version have put Copilot to the test by essentially challenging the model to solve interview-level programming problems. Copilot is pretty impressive in how well it can solve these types of challenges, but not good enough to warrant using its output without carefully reviewing it first.

Several online software engineers have put the “AI pair programmer” (as GitHub puts it) to the test. We’ll go through some of the points identified in the arXiv paper as scenarios where Codex falls short and try to find examples in the experiments conducted by programmers involved in the technical preview.

The HumanEval Dataset

 
OpenAI Codex is a ~12-billion-parameter GPT-3 model fine-tuned on code, with Codex-S being the most advanced variant. To evaluate the performance of this model, OpenAI built what they call the HumanEval dataset: a collection of 164 hand-written programming challenges with corresponding unit tests, the sort you might find on a coding practice site like CodeSignal, Codeforces, or HackerRank.

In HumanEval, the problem specifications are included in function docstrings, and the problems are all written for the Python programming language.

While an undifferentiated GPT-3 without code-specific fine-tuning was unable to solve any of the problems in the HumanEval dataset (at least on the first try), the fine-tuned Codex and Codex-S were able to solve 28.8% and 37.7% of problems, respectively. By cherry-picking from the top 100 attempts per problem, Codex-S was further able to solve 77.5% of problems.

One way to interpret this is that a programmer using Codex could expect to find a valid solution to a problem (at roughly the level of complexity encountered in technical interviews) by looking through the first 100 suggestions, or even by blindly throwing attempted solutions at a valid set of unit tests until they pass. That’s not to say a better solution can’t be found in one of the first few suggestions, especially if a programmer is willing to modify what Codex suggests.

They also used a more sophisticated estimator of the ability to solve each problem: generating around 200 samples per problem and calculating an unbiased estimate of pass@k, the probability that at least one of k samples passes the unit tests, with k = 100. This method has lower variance than computing pass@k directly from exactly k samples.
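In code, the numerically stable per-problem estimator described in the Codex paper boils down to a few lines. Here is a minimal sketch (the variable names are mine):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for a single problem.

    n: total samples generated for the problem (e.g. ~200)
    c: number of those samples that pass the unit tests
    k: evaluation budget (e.g. 100)
    Computes 1 - C(n - c, k) / C(n, k) without forming huge binomials.
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# The benchmark score is then the mean of pass_at_k over all 164 HumanEval problems.
```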

The Best Codex Model Still Under-Performs a Computer Science Student

 
The authors of Codex note that, having been trained on over 150 GB of code comprising hundreds of millions of lines from GitHub, the model has seen significantly more code than a human programmer can expect to read over the course of their career. However, the best Codex model (Codex-S with 12 billion parameters) still under-performs a novice computer science student, or someone who spends a few afternoons practicing interview-style coding challenges.

In particular, Codex performance degrades rapidly when chaining together several operations in a problem specification.

In fact, the ability of Codex to solve several chained operations drops by a factor of 2 or worse for each additional instruction in the problem specification. To quantify this effect, the authors at OpenAI built an evaluation set of string manipulations that can be applied sequentially (change to lowercase, replace every other character with a certain character, etc.). For a single string manipulation, Codex passed nearly 25% of problems, dropping to just below 10% for two string manipulations chained together, 5% for three, and so on.
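OpenAI has not published that synthetic evaluation set, but a chained specification of the kind described might look like the following illustrative sketch, where a model has to compose all three steps correctly to pass:

```python
def transform(s: str) -> str:
    """Convert the string to lowercase, then replace the characters at odd
    positions with '#', then reverse the result."""
    # Reference solution a model would need to reproduce for all three steps:
    s = s.lower()                                     # step 1: lowercase
    s = "".join(c if i % 2 == 0 else "#"              # step 2: mask odd positions
                for i, c in enumerate(s))
    return s[::-1]                                    # step 3: reverse

assert transform("AbCdE") == "e#c#a"
```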

The rapid drop-off in solving multi-step problems was also observed by early Copilot reviewer Giuliano Giacaglia on Medium. Giuliano reports giving Copilot a problem description asking it to reverse the letters in each word of an input string; instead, Copilot suggested a function that reverses the order of the words in a sentence rather than the letters within each word (“World Hello” instead of “olleH dlroW”). Copilot did, however, manage to write a test that failed for its own implementation.
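The gap between the requested behavior and the suggested one comes down to the level at which the reversal is applied. A minimal sketch of both, using the “Hello World” example above:

```python
def reverse_letters_in_each_word(sentence: str) -> str:
    """What was asked for: 'Hello World' -> 'olleH dlroW'."""
    return " ".join(word[::-1] for word in sentence.split())

def reverse_word_order(sentence: str) -> str:
    """What Copilot reportedly suggested instead: 'Hello World' -> 'World Hello'."""
    return " ".join(reversed(sentence.split()))

assert reverse_letters_in_each_word("Hello World") == "olleH dlroW"
assert reverse_word_order("Hello World") == "World Hello"
```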

Although not sticking to the multi-step string manipulation paradigm used by Giuliano and OpenAI to test Copilot, Kumar Shubham discovered an impressive result when Copilot successfully solved a multi-step problem description that involved calling system programs to take a screenshot, run optical character recognition on the image, and finally extract email addresses from the text. That does, however, raise the issue that Copilot may write code that relies on unavailable, out-of-date, or untrusted external dependencies. That’s a point raised by OpenAI alongside the model’s susceptibility to bias, ability to generate security vulnerabilities, and potential economic and energy costs in the section of their paper discussing risks.

Other reviews of Copilot by YouTubers DevOps Directive and Benjamin Carlson found impressive results when challenging Copilot with interview-style questions from leetcode.com, including some that seemed significantly more complex than chaining together a series of simple string manipulations. The difference in the complexity of code that Copilot can generate and the complexity of problem specifications that Copilot can understand is striking.

Perhaps the prevalence of code written in the style of interview practice questions in the training dataset leads to Copilot overfitting those types of problems, or perhaps it is just more difficult to chain together several steps of modular functionality than it is to churn out a big chunk of complex code that is very similar to something the model has seen before. Poorly described and poorly interpreted specifications are already a common source of complaints for engineers and their managers of the human variety, so perhaps it should not be so surprising to find an AI coding assistant fails to excel at parsing complicated problem specifications.

Copilot Alternatives

 
As of this writing, Copilot is still limited to programmers lucky enough to be enrolled in the technical preview, but fear not: a myriad of other code completion assistants (whether they use deep learning or not) are readily available to try out, and it’s not a bad time to reflect on what increasing automation might mean for software engineering.

Earlier we mentioned TabNine, a code completion tool based in part on OpenAI’s GPT-2 transformer. Originally built by Jacob Jackson and now owned by Codota, TabNine was able to solve 7.6% of the HumanEval benchmark on the pass@100 metric used by the OpenAI authors. That’s fairly impressive considering that TabNine was designed to be a more hands-on code completion solution, unlike Codex, which was explicitly inspired by the potential of GPT-3 models to produce code from problem descriptions. TabNine has been around since 2018 and has both free and paid versions.

Kite is another code completion tool in the same vein as TabNine, with free (desktop) and paid (server) versions that differ in the size of the model used by a factor of 25. According to Kite’s usage statistics, coders choose the suggested completions often enough to cut their keystrokes in half compared to manually typing out every line, and Kite cites its users’ self-reported productivity boost of 18%. Going by the animated demos on its website, Kite suggests shorter completions than both TabNine and Copilot. That is a difference of degree from TabNine, which suggests only slightly longer completions for the most part, but a qualitative difference from Copilot: Copilot can suggest extended blocks of code, which changes the experience from choosing the best completion to code-reviewing several suggested approaches to the problem.

Is Copilot Here to Take Your Job or Just Your Code?

 
GitHub Copilot has some software engineers joking that the automation they’ve been building for years is finally coming home to roost, and soon we’ll all be out of a job. In reality this is very unlikely to be the case for many years as there is more to programming than just writing code.

Besides, it’s an oft-repeated trope that even interpreting exactly what a client or manager wants in a set of software specifications is more of an art than a science.

On the other hand, Copilot and other natural language code completion tools like it (and trust us, more are coming) are indeed likely to have a big impact on the way software engineers do their jobs. Engineers will probably spend more time reviewing code and checking tests, whether the code under scrutiny was written by an AI model or a fellow engineer. We’ll probably also see another layer of meta to the art of programming as “prompt programming” of machine learning programming assistants becomes commonplace.

As cyberpunk author William Gibson put it all those years ago: “the future is already here — it’s just not evenly distributed.”

Copilot has also ignited a debate on copyright, copyleft, and all manner of open source licenses and the philosophy of building good technology, which is a discussion that needs to take place sooner rather than later. Additionally, most contemporary interpretations of intellectual property require a human author for a work to be eligible for copyright. As more code is written in larger proportions by machine learning models instead of humans, will those works legally enter the public domain upon their creation?

Who knows? Perhaps the open source community will finally win in the end, as the great-great-great successor to Copilot becomes a staunch open source advocate and insists on working only on free and open source software.

 
Bio: Kevin Vu manages Exxact Corp blog and works with many of its talented authors who write about different aspects of Deep Learning.

Original. Reposted with permission.


Source: https://www.kdnuggets.com/2021/09/github-copilot-rise-ai-language-models-programming-automation.html


When to Contact an Attorney After a Car Accident


Car accident victims are frequently forced to deal with significant injuries and mounting medical expenditures following an accident. Insurance providers often offer accident victims a lowball settlement that is only a fraction of what they require to recover.

These insurance firms do not have your best interests at heart; they care about their shareholders and about minimizing their obligations. It is best to consider hiring a lawyer as soon as possible. Columbia injury lawyers will help gather the evidence that will be crucial to your case.

Below, we will discuss when you need to contact an attorney if you are involved in a car accident.

When Should One Contact an Accident Attorney?

1. When It’s Not Clear Who Is at Fault

In the event of an accident, no one wants to take the blame. Each party involved will try to shift blame onto the others just to avoid being liable and dealing with the consequences.

An accident attorney will conduct their investigations and collect the necessary evidence to show which party is liable. The earlier you contact a lawyer, the better your chances for a successful case.

2. More Than One Party Is Involved

If there were many parties involved, it’s important to contact an attorney to help you with the litigation process. An example is a truck accident; in such a scenario, you may be dealing with the liability of the driver as well as that of the manufacturer whose goods were being transported.

3. When There Is Significant Injury and Damage

The aftermath of a car accident can be devastating, with some victims suffering severe injuries and sometimes even death. When facing a severe injury, it can be challenging to deal with your injuries and your case at the same time. Hiring a personal injury lawyer will enable you to take care of your injuries while they work on your case.

4. Dealing With Rogue Insurance Companies

In most cases, insurance companies will try to give you a lower settlement than you deserve. This is usually because you may not know how to air your complaints. Therefore, your lawyer will negotiate with the insurance companies on your behalf to ensure you get your compensation in full and on a timely basis.

5. The Police Report Doesn’t Match What Happened

When you detect inconsistencies between the police report and what actually took place, it is best to bring in a lawyer to guide you on the way forward. They will examine the evidence at hand and advise you on the appropriate action to take.

As we have seen above, lawyers play a vital role in the success of a case after an accident. Let’s take a look at some of the benefits of hiring one.

Benefits of Hiring a Lawyer

1. Great at Negotiations

Since they are aware of all the tricks insurance companies and opposing lawyers use to avoid paying full compensation, they will provide an excellent defense to ensure you are compensated fully and in a timely manner.

2. Familiar with the Court Systems

Going to court alone may be stressful, as many legal terms are used that you likely won’t be familiar with. A lawyer will explain these words and make sure everything proceeds on schedule.

3. Conduct Thorough Investigations

Lawyers have a lot of expertise in examining vehicle accidents. They frequently employ accident reconstruction teams, forensic specialists, and experts to identify all guilty parties. They will even bring in parties not included in the police report, such as the automobile manufacturer, the municipality accountable for road maintenance, or the bar that served the drunk driver.

4. Higher Compensation

Those who hire a lawyer get more compensation than those who do not. Lawyers understand how to develop a case to demonstrate to the insurance company how much money you need to recover. They are not afraid to go toe-to-toe with huge insurance companies, and they will battle to ensure that all of your medical expenses are taken into account – both now and in the future.

Do Not Settle for Just Any Lawyer

Ensure you do your research diligently before appointing a lawyer. Ask around and look for lawyers with good reputations.


Source: https://1reddrop.com/2021/10/16/when-to-contact-an-attorney-after-a-car-accident/?utm_source=rss&utm_medium=rss&utm_campaign=when-to-contact-an-attorney-after-a-car-accident


5 Ways to Attract Clients with Law Firm SEO


As a business owner, your primary goal is to succeed and make an impact on society. The dream is universal, and law firms are no different. You want a scenario where your listing appears first whenever anyone searches for what you offer. So, what can you do to ensure your listing ranks first on Google search engine results pages (SERPs)? How can you outrank your competition to win more clients? Law firm local search engine optimization is the answer. 

How to Improve Your Law Firm’s SEO

Gone are the days when advertising your business required word-of-mouth referrals or print media. These methods, though dated, are still helpful; however, they cannot beat the effectiveness of technology and digital media. This is why local SEO is important: it achieves fast results and has the capacity to reach a bigger audience. Moreover, through Google Analytics, you can track your progress and manage your marketing campaigns.

So, how do you go about optimizing your website with local SEO? This can be difficult, especially considering how demanding your job already is. Now imagine not being tech-savvy. Quite the uphill climb, isn’t it? Do not despair; read on to learn a few tips to help you get started.

1. Collect Customer Testimonials

The best and probably most straightforward way to begin optimization is to collect reviews from your existing clients. Glowing reviews can propel your law firm into the stratosphere, and you can imagine how many prospective clients will respond positively after seeing them. It will likely increase your trustworthiness, and eventually Google will pick up on it and rank you higher in search engine results.

2. Choose the Right Keywords

You need to predetermine what your prospective clients will be searching for when in need of your services. Pay attention to what keywords your competition is using to attract clients to their website. Combine these two to come up with keyword-rich, high-quality content to incorporate into your site. Keyword research will help you determine relevant legal keywords to strengthen your search results.

3. Avoid Excessive Legal Jargon

Attorneys are well known for their impeccable use of words; their careers largely depend on it. On the contrary, law firm websites need to be as simple as possible. Most clients are regular people who may not understand legal jargon.

To increase your law firm’s SEO effort, you will need to appeal to people, most of whom don’t have a law degree. This, in combination with the keywords, will definitely boost your listing in Google search results.

4. Utilize Proper Meta Descriptions

Meta descriptions are a synopsis of what your law firm is all about. They tell anyone interested what your specialties are and what to expect when they click on your website. While these descriptions can be auto-generated, you could still write a killer description to boost your rankings and click-through rates.

5. Prioritize Locality

Search engine data shows local searches are the most common. People often look for products and services that are close to them. Proximity matters a lot to many people, and your target audience is most likely in this demographic as well. Targeting your law firm SEO to the locality you are based in will most likely attract more local clients.

Equip Your Law Firm with SEO Strategies to Stand Out

All these great tips can help you gain more website traffic, which will likely increase your client base. Join the bandwagon: optimize your site and propel past your competitors.


Source: https://1reddrop.com/2021/10/16/5-ways-to-attract-clients-with-law-firm-seo/?utm_source=rss&utm_medium=rss&utm_campaign=5-ways-to-attract-clients-with-law-firm-seo


AI Visual Inspection For Defect Detection in Manufacturing


Illustration: © IoT For All

Artificial intelligence in manufacturing is a trendy term. When people describe AI-based defect detection solutions, they usually mean visual inspection technology based on deep learning and computer vision.

What Is Deep Learning in Visual Inspection?

Deep learning is an aspect of machine learning powered by artificial neural networks. The operating principle of deep learning technology is teaching machines to learn by example. By providing a neural network with labeled examples of a specific data type, we can extract common patterns between those examples and encode them in a mathematical model, which can then classify future pieces of information.

With visual inspection technology, integrating deep learning algorithms makes it possible to differentiate parts, anomalies, and characters, imitating human visual inspection while running on a computerized system.

So, what does it mean exactly? Let’s use an example:

If you were to create visual inspection software for automotive manufacturing, you would develop a deep learning-based algorithm and train it with examples of the defects it must detect. With enough data, the neural network will eventually detect defects without any additional instructions.

Deep learning-based visual inspection systems are good at detecting defects that are complex in nature. They handle complex surfaces and cosmetic flaws, and they generalize and conceptualize the parts’ surfaces.
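As a rough sketch of the automotive example above, a minimal “good vs. defective” image classifier could be trained along these lines. The folder layout, architecture, and hyperparameters are illustrative assumptions, not anything prescribed here:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical dataset layout: data/train/good/*.jpg and data/train/defective/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small convolutional classifier; a production system would likely use a larger backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two classes: good / defective
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```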

How to Integrate an AI Visual Inspection System

1. State the Problem

Visual inspection development often starts with a business and technical analysis. The goal here is to determine what kind of defects the system should detect.

Other important questions to ask include:

  • What is the visual inspection system environment?
  • Should the inspection be real-time or deferred? 
  • How thoroughly should the visual inspection system detect defects, and should it distinguish them by type?
  • Is there any existing software that integrates the visual inspection feature, or does it require development from scratch?
  • How should the system notify the user(s) about detected defects?
  • Should the visual inspection system record defect detection statistics?
  • And the key question: Does data for deep learning model development exist, including images of “good” and “bad” products and the different types of defects?

Data science engineers choose the optimal technical solution and flow to proceed based on the answers they receive.

2. Gather and Prepare Data

Data science engineers must gather and prepare the data required to train the future model before deep learning model development starts. For manufacturing processes, it’s important to implement IoT data analytics. For visual inspection models, the data is usually video records, and the images processed by the model are individual video frames. There are several options for data gathering, but the most common are:

  1. Taking an existing video record provided by a client
  2. Taking open-source video records applicable for defined purposes
  3. Gathering data from scratch according to deep learning model requirements

The most important parameter here is the quality of the video records: higher-quality data will lead to more accurate results.

Once we gather the data, we prepare it for modeling, clean it, check for anomalies, and ensure its relevance.
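Because the raw data is usually video, one common preparation step is sampling frames from each recording before cleaning and labeling. A minimal OpenCV sketch (the file paths and sampling rate are hypothetical):

```python
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every n-th frame of a video as a JPEG and return how many were saved."""
    capture = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of the recording
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.jpg", frame)
            saved += 1
        index += 1
    capture.release()
    return saved

# e.g. extract_frames("line_camera.mp4", "frames", every_n=30)
```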

3. Develop Deep Learning Model

The selection of a deep learning model development approach depends on the complexity of a task, required delivery time, and budget limitations. There are several approaches:

Using a deep learning model development service (e.g., Google Cloud ML Engine, Amazon ML, etc.)

This approach makes sense when the requirements for defect detection features are in line with the templates provided by a given service. These services can save both time and budget, as there is no need to develop models from scratch: you upload data and set the model options according to the task at hand.

What’s the catch? These types of models are not customizable. Models’ capabilities are limited to options provided by a given service.

Using Pre-trained Models

A pre-trained model is an already created deep learning model that accomplishes tasks similar to the one we want to perform. We do not have to build a model from scratch; instead, we fine-tune the already trained model on our own data.

A pre-trained model may not comply 100% with all of our tasks, but it offers significant time and cost savings. Using models previously trained on large datasets lets us customize these solutions to our problem.
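As an illustration of the pre-trained approach, using a torchvision backbone (one possible toolkit among many; nothing here prescribes it), adapting an ImageNet-trained network to a defect classification task amounts to swapping out its final layer and fine-tuning:

```python
import torch
from torch import nn
from torchvision import models

num_defect_classes = 3  # hypothetical: scratch, dent, no-defect

# Start from weights learned on ImageNet rather than training from scratch.
model = models.resnet18(pretrained=True)

# Optionally freeze the backbone and train only the new classification head.
for param in model.parameters():
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_defect_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ...then train with the usual cross-entropy loop on the defect dataset.
```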

Deep Learning Model Development from Scratch

This method is ideal for complex and secure visual inspection systems. The approach may be time and effort-intensive, but the results are worth it. 

When developing custom visual inspection models, data scientists use one or several computer vision algorithms. These include image classification, object detection, and instance segmentation.

Many factors influence the choice of a deep learning algorithm(s). These include:

  • Business goals
  • Size of objects/defects 
  • Lighting conditions
  • Number of products to inspect
  • Types of defects
  • Resolution of images

An example of defect categories:

Let’s say that we’re developing a visual inspection model for quality assessment in buildings. The main focus is to detect defects on the walls. An extensive dataset is necessary to obtain accurate visual inspection results, as the defect categories might be incredibly diverse, from peeling paint and mold to wall cracks. The optimal approach here would be to develop an instance segmentation-based model from scratch. A pre-trained model approach is also viable in some cases.
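As a sketch of the instance segmentation route (shown here via the pre-trained variant noted above as also viable, with torchvision’s Mask R-CNN as an assumed toolkit and the wall-defect categories as hypothetical classes), customizing the model looks like this:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Hypothetical classes: background, peeling paint, mold, crack
num_classes = 4

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

# Replace the box-classification head for our own defect categories.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask-prediction head as well.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# The model is then trained on images annotated with per-defect masks and boxes.
```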

Another example is visual inspection for pharmaceutical manufacturing, where you want to differentiate air bubbles from particles in products like highly viscous parenteral solutions. The presence of bubbles is the only defect category here, so the required dataset will not be as extensive as in the example above. The optimal deep learning model development approach might be to use a model development service rather than developing one from scratch.

4. Train and Evaluate

The next step after developing the visual inspection model is to train it. In this stage, data scientists validate and evaluate the performance and accuracy of the model. A test dataset is useful here; it may be a set of video records that are either outdated or similar to the ones we want to process after deployment.
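For a classification-style inspection model, evaluation typically means running the trained network over a held-out test set and measuring accuracy (ideally also per-class precision and recall). A minimal sketch, reusing the hypothetical data layout and model file from the earlier training example:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical held-out test set: data/test/good/*.jpg and data/test/defective/*.jpg
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
test_set = datasets.ImageFolder("data/test", transform=transform)
test_loader = DataLoader(test_set, batch_size=32)

model = torch.load("defect_classifier.pt")  # hypothetical path to the trained model
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()

print(f"Test accuracy: {correct / total:.2%}")
```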

5. Deploy and Improve

When deploying a visual inspection model, it’s important to consider how software and hardware system architectures correspond to a model capacity.

Software 

The structure of visual inspection-powered software is typically based on a combination of web solutions for data transmission and a Python framework for neural network processing.

The key parameter here is data storage. There are three common ways to store data: on a local server, a cloud streaming service, or serverless architecture. 

A visual inspection system involves the storage of video records. The choice of a data storage solution often depends on a deep learning model functionality. For example, if a visual inspection system uses a large dataset, the optimal selection may be a cloud streaming service.

Hardware

Depending on the industry and automation processes, the devices required to integrate a visual inspection system may include:

  • Camera: The key camera option is real-time video streaming. Some examples include IP and CCTV.
  • Gateway: Both dedicated hardware appliances and software programs work well for a visual inspection system.
  • CPU / GPU: If real-time results are necessary, a GPU is a better choice than a CPU, as it offers faster processing for image-based deep learning models. A CPU can be optimized for running the visual inspection model, but not for training it. An example of a suitable GPU is the Jetson Nano.
  • Photometer (optional): Depending on the lighting conditions of the visual inspection system environment, photometers may be required.
  • Colorimeter (optional): When detecting color and luminance in light sources, imaging colorimeters have consistently high spatial resolutions, allowing for detailed visual inspections. 
  • Thermographic camera (optional): In case of automated inspection of steam/water pipelines and facilities it is a good idea to have thermographic camera data. Thermographic camera data provides valuable information for heat/steam/water leakage detection. Thermal camera data is also useful for heat insulation inspection.
  • Drones (optional): Nowadays it is hard to imagine automated inspection of hard-to-reach areas without drones: building interiors, gas pipelines, tankers, rockets and shuttles. Drones may be equipped with high-resolution cameras that can do real-time defect detection.

Deep learning models remain open to improvement after deployment. The accuracy of the neural network can be increased through iterative gathering of new data and model re-training. The result is a “smarter” visual inspection model that learns from the data it accumulates during operation.

Visual Inspection Use Cases

Healthcare

In the fight against COVID-19, most airports and border crossings can now check passengers for signs of the disease.

Baidu, the large Chinese tech company, developed a large-scale visual inspection system based on AI. The system consists of computer vision-based cameras and infrared sensors that predict the temperatures of passengers. The technology, operational in Beijing’s Qinghe Railway Station, can screen up to 200 people per minute. The AI algorithm detects anyone who has a temperature above 37.3 degrees.

Another real-life case is the deep learning-based system developed by the Alibaba company. The system can detect the coronavirus in chest CT scans with 96% accuracy. With access to data from 5,000 COVID-19 cases, the system performs the test in 20 seconds. Moreover, it can differentiate between ordinary viral pneumonia and coronavirus.

Airlines

According to Boeing, 70% of the $2.6 trillion aerospace services market is dedicated to quality and maintenance. In 2018, Airbus introduced a new automated, drone-based aircraft inspection system that accelerates and facilitates visual inspections. This development reduces aircraft downtime while simultaneously increasing the quality of inspection reports.

Automotive

Toyota recently agreed to a $1.3 billion settlement due to a defect that caused cars to accelerate even when drivers attempted to slow down, resulting in 6 deaths in the U.S. Using the cognitive capabilities of visual inspection systems like Cognex ViDi, automotive manufacturers can analyze and identify quality issues much more accurately and resolve them before they occur.

Computer Equipment Manufacturing

The demand for smaller circuit board designs is growing. Fujitsu Laboratories has been spearheading the development of AI-enabled recognition systems for the electronics industry. They report significant progress in quality, cost, and delivery.

Textile

The implementation of automated visual inspection and a deep learning approach can now detect texture, weaving, stitching, and color matching issues.

For example, Datacolor’s AI system can consider historical data of past visual inspections to create custom tolerances that match more closely to the samples.

As one manufacturing general manager put it: “It makes no difference to me whether the suggested technology is the best, but I do care how well it’s going to solve my problems.”

Solar Panels

Solar panels are known to suffer from dust and microcracks. Automated inspection of solar panels during manufacturing, as well as before and after installation, is a good way to prevent shipment of malfunctioning panels and to quickly detect damaged panels on a solar farm. For example, DJI Enterprise uses drones for solar panel inspection.

Pipeline Inspection

Gas and oil pipelines are extremely long: the latest data, from 2014, gives a total of slightly less than 2,175,000 miles (3,500,000 km) of pipeline in 120 countries of the world. Gas and oil leakages can cause massive harm to nature through chemical pollution, explosions, and conflagrations.

Satellite and drone inspection with the help of computer vision techniques is a good tool for early detection and localization of a gas/oil leakage. Recently, DroneDeploy reported that they mapped about 180 miles of pipelines.

AI Visual Inspection: Key Takeaways

  1. Concept: AI visual inspection is based on traditional computer vision methods and human vision.
  2. Choice: The deep learning model development approach depends on the task, delivery time, and budget limits.
  3. Algorithm: Deep learning algorithms detect defects by imitating human analysis while running on a computerized system.
  4. Architecture: Software and hardware should correspond to the deep learning model's capacity.
  5. Main question: When initiating a visual inspection project, the main question is “What defects should the system detect?”
  6. Improvements: After deployment, the deep learning model becomes “smarter” through data accumulation.


Source: https://www.iotforall.com/ai-visual-inspection-for-defect-detection-in-manufacturing


Can Artificial Intelligence be Used to Win Online Casino Games?


It is fair to say that the online gambling industry has been a big beneficiary of advancements in technology. In fact, without these advancements, the industry would not even exist. The internet and mobile technology have allowed many casino games to be enjoyed in an online format. 

Not only that, but these games have advanced from basic titles with simple graphics to what they are today. Video slots at top-rated online casinos for US players, for example, now feature amazing graphics, while we can play live casino games streamed to our devices via webcam, and there is also talk of virtual reality becoming a big part of the industry in the future. However, could the advancement of technology actually threaten the existence of this industry?

If artificial intelligence continues to advance at the rate it has done, could it be used to beat online casino games? If so, the industry could be at risk of those using it to profit. 

How Has Artificial Intelligence Fared in Other Games?

We all know how AI is increasingly used in marketing, transportation, construction, hospitality, and many other industries, but this is to their benefit. However, can AI better a human when playing games? There has certainly been evidence that it can. For instance, AI has beaten the best chess players, the best poker players, and professional players of video games such as Dota 2 and League of Legends.

Artificial intelligence can learn to make the most optimal decisions at the perfect time. It also does not make the mistakes that humans are capable of. Therefore, it will always beat experienced WSOP poker players at their own game. 

The fundamental similarity between these games, however, is that they all rely on skill. Artificial intelligence is called artificial intelligence for a reason: it is intelligence that software can learn and improve upon, so it makes sense that it could learn how to out-skill somebody in games such as those mentioned. Players also compete against each other on a level playing field.

So How About Casino Games?

Casino games are not skill-based at all; instead, they are games of chance. No matter what action a player takes, it is luck that determines the result. Plus, the casino always has a competitive edge, as without one the gambling industry would not exist. Casinos design the games in such a way that while players can win, the house will always win in the long run.

As it stands right now, it seems fairly unlikely that artificial intelligence could learn to beat a casino game that is luck-based and carries a built-in house edge. The results of these games are determined by random number generators (RNGs), software that ensures results are fair and random.

However, the worrying thing is that humans designed this software. If artificial intelligence can already beat humans at many things, could it one day figure out and beat the algorithms used in RNG software? While we would all like to say no, we would not bet against AI one day getting the better of online casino games.

What Can the Casino Industry Do?

Well, as of now, artificial intelligence is still in its infancy, and we do not think there is a risk of anything being developed that could harm the casino industry just yet. A few years down the line, though, with the rate of advancement the technology is enjoying, online casinos might have a problem.

The biggest threat is the development of AI bots that could be used to consistently win at casino games. If that were to happen, the industry would have to think of ways to detect and prevent this behavior. In fact, AI could even be deployed in the battle against this kind of behavior.

Whatever the solution may or may not be, it is certainly time for the industry to put some thought into it. 

The Prize Takeaway

There is reason to be concerned about the increasing degree to which artificial intelligence is being used in every industry. While it is currently used to enhance customer support, healthcare, and marketing, people are concerned that computers and software will eventually replace humans in jobs and professions. However, the casino industry has deeper concerns: not only could AI replace humans, it could also learn how to beat the very casino games that have been designed to win for the casino.

 

Source: Plato Data Intelligence
