AI

NLP in 2020: Modern Applications

In roughly 40 years, NLP has gone from rule-based systems to generative systems approaching human-level accuracy on multiple benchmarks. This is incredible considering how far we were from talking naturally to a computer system even just ten years ago; now I can tell Google Home to turn off my sitting room lights.

In this Stanford lecture, Chris Manning introduces a computer science class to what NLP is, its complexity, and specific tooling such as word2vec, which enables learning systems to learn from natural language. Professor Manning is the Director of the Stanford Artificial Intelligence Laboratory and a leader in applying deep learning (DL) to NLP.

The goal of NLP is to allow computers to ‘understand’ natural language in order to perform tasks and support human users in making decisions. For a logic system, understanding and representing the meaning of language is a “difficult goal”. The goal is so compelling that all the major technology firms have invested heavily in the field. The lecture focuses on these areas of the NLP challenge.

Some applications in which you might encounter NLP systems are spell checking, search, recommendations, speech recognition, dialogue agents, sentiment analysis, and translation services. One key point Chris Manning makes is that human language (whether text, speech, or movement) is unique in that it is produced to communicate something; some ‘meaning’ is embedded in the action. This is not often the case with anything else that generates data. It is data with intent, and extracting and understanding that intent is part of the NLP challenge. Chris Manning also lists reasons “why NLP is hard” which I think we take for granted.

Language interpretation depends on ‘common sense’ and contextual knowledge; language is ambiguous (computers like direct, formal statements!); and language contains a complex mix of situational, visual, and linguistic knowledge drawn from various timelines. The learning systems we have now do not have a lifetime of learned weights and biases, so they can currently be applied only in narrow-AI use cases.

The Stanford lecture also dives into DL and how it differs from a human exploring and designing features or signals to feed into learning systems. The lecture discusses the first spark of DL in speech recognition, in work done by George Dahl, where the DL approach achieved a 33% improvement over traditional feature modelling. Professor Manning also describes how NLP and DL have added capabilities in three segments: what he calls Levels (speech, words, syntax, and semantics), Tools (part-of-speech tagging, entity recognition, and parsing), and Applications (machine translation, sentiment analysis, dialogue agents, and question answering). He states that NLP + DL have created a ‘few key tools’ which have wide applications.
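To make the ‘Tools’ segment concrete, here is a minimal sketch using the spaCy library (my choice for illustration; the lecture does not prescribe a specific toolkit) running part-of-speech tagging, entity recognition, and dependency parsing on one sentence:

```python
# A minimal sketch of the "Tools" layer: part-of-speech tagging,
# named-entity recognition, and dependency parsing with spaCy.
# Assumes `pip install spacy` and the small English model:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Google Home turned off the sitting room lights at 10pm.")

for token in doc:
    # token.pos_ is the part-of-speech tag, token.dep_ the dependency relation
    print(f"{token.text:>10}  {token.pos_:<6}  {token.dep_:<10}  head={token.head.text}")

for ent in doc.ents:
    # Named entities recognised in the sentence (e.g. TIME)
    print(ent.text, ent.label_)
```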

Words as vectors — https://youtu.be/8rXD5-xhemo?t=2346

Towards the end of the lecture, we explore how words are represented as numbers in vector spaces and how this applies to NLP and DL. Word-meaning vectors can then be used to represent meaning in words, sentences, and beyond.
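As a toy illustration of the idea (the vectors below are invented for the example, not learned from any corpus; real word2vec embeddings are learned from large text collections and typically have 100–300 dimensions), word vectors support the similarity and analogy arithmetic the lecture describes:

```python
# Toy sketch of word vectors and cosine similarity.
# These 4-dimensional vectors are hand-picked for illustration only.
import numpy as np

vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
for word, vec in vectors.items():
    print(f"{word:>6}: {cosine(target, vec):.3f}")  # queen scores highest
```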

Source: https://chatbotslife.com/nlp-in-2020-modern-applications-c8dab0ffdead?source=rss—-a49517e4c30b—4

AI

Google AI researchers want to teach robots tasks through self-supervised reverse engineering

A preprint paper from researchers at Stanford University and Google proposes an AI technique that predicts how goals were achieved, effectively learning to reverse-engineer tasks. They say it enables autonomous agents to learn through self-supervision, which some experts believe is a critical step toward truly intelligent systems.

Learning general policies for complex tasks often requires dealing with unfamiliar objects and scenes, and many methods rely on forms of supervision like expert demonstrations. But these forms of supervision entail significant tuning; demonstrations, for example, must be performed by experts many times over and recorded with special infrastructure.

That’s unlike the researchers’ proposed approach — time reversal as self-supervision (TRASS) — which predicts “reversed trajectories” to create sources of supervision that lead to a goal or goals. A home robot could leverage it to learn tasks like turning on a computer, turning a knob, or opening a drawer, or chores like setting a dining table, making a bed, and cleaning a room.

“Most manipulation tasks that one would want to solve require some understanding of objects and how they interact. However, understanding object relationships in a task-specific context is non-trivial,” explain the coauthors. “Consider the task [making a bed]. Starting from a made bed, random perturbations to the bed can crumple the blanket, which when reversed provides supervision on how to flatten and spread the blanket. Similarly, randomly perturbing objects in a clean [or] organized room will distribute the objects around the room. These trajectories reversed will show objects being placed back to their correct positions, strong supervision for room cleaning.”

[Image: Google TRASS robot]

TRASS works by collecting data given a set of goal states, applying random forces to disrupt the scene, and carefully recording each of the subsequent states. A TRASS-driven agent explores outwardly using no expert knowledge, collecting a trajectory that when reversed can teach the agent to return to the goal states. In this way, TRASS essentially trains to predict the trajectories in reverse so that the trained model can take the current state as input, providing supervision toward the goal in the form of a guiding trajectory of frames (but not actions).
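As a rough sketch of that data-collection loop (the environment API below is a hypothetical stand-in; the actual system records camera frames and trains a vision model on them, which this simplification omits):

```python
# Hedged sketch of TRASS-style data collection, as described above.
# `env` and its methods are hypothetical stand-ins, not the paper's code.
import random

def collect_reversed_trajectories(env, goal_states, n_episodes=100, horizon=30):
    dataset = []
    for _ in range(n_episodes):
        # 1. Start the scene in a known goal state.
        state = env.reset_to(random.choice(goal_states))
        trajectory = [state]
        # 2. Apply random perturbations, recording each subsequent state.
        for _ in range(horizon):
            state = env.step(env.sample_random_action())
            trajectory.append(state)
        # 3. Reverse the trajectory: it now shows a path *toward* the goal,
        #    yielding supervision without any expert demonstrations.
        reversed_traj = trajectory[::-1]
        # Training pairs: from each state, predict the next state on the
        # way back to the goal (frames only, no actions).
        dataset += list(zip(reversed_traj[:-1], reversed_traj[1:]))
    return dataset
```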

At test time, a TRASS-driven agent’s objective is to reach a state in the scene that satisfies certain specified goal conditions. At every step, the guiding trajectory is recomputed from the current state; because this high-level guide decouples planning from low-level control, it can be used as indirect supervision to produce a policy via model-based or model-free techniques.
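In control-loop form, that test-time behaviour might look like the following (again a sketch: `model`, `controller`, and `goal_test` are hypothetical stand-ins for the paper's components, not its actual interfaces):

```python
# Hedged sketch of the TRASS test-time loop described above.
# The model supplies a high-level guiding trajectory of frames; a
# separate low-level controller chooses actions to follow it.
def run_episode(env, model, controller, goal_test, max_steps=200):
    state = env.observe()
    for _ in range(max_steps):
        if goal_test(state):
            return True  # specified goal conditions satisfied
        # Recompute the guiding trajectory from the current state at every
        # step; it specifies frames to move through, not actions.
        guide = model.predict_guiding_trajectory(state)
        # Low-level control is decoupled: the controller picks an action
        # that makes progress toward the next frame in the guide.
        action = controller.act(state, guide[0])
        state = env.step(action)
    return False
```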

In experiments, the researchers applied TRASS to the problem of configuring physical Tetris-like blocks. With a real-world robot — the Kuka IIWA — and a TRASS vision model trained in simulation and then transferred to the robot, they found that TRASS successfully paired blocks it had seen during training 75% of the time and blocks it hadn’t seen 50% of the time over the course of 20 trials each.

TRASS has limitations: it can’t be applied in cases where changes to objects are irreversible (think cracking an egg, mixing two ingredients, or welding two parts together). But the researchers believe it can be extended by using exploration methods driven by state novelty, among other things.

“[O]ur method … is able to predict unknown goal states and the trajectory to reach them,” they write. “This method used with visual model predictive control is capable of assembling Tetris-style blocks with a physical robot using only visual inputs, while using no demonstrations or explicit supervision.”

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/3Rd18kkyUUc/
