
AI

5 Reasons why Customer Service Chatbots are the Need of the Hour

The rapidly advancing world suddenly came to a halt with the outbreak of the COVID-19 pandemic. If anything positive has come out of this crisis, it is that it has made people more comfortable with technology. Even people from non-tech-savvy older generations are readily adopting technological advancements. Especially the customer service verticals (helpdesk and support […]



SMEs are regarded as the backbone of the Indian economy. They are crucial to achieving the nation’s dream of a $5 trillion economy by 2025. But the sudden outbreak of COVID-19 and the prolonged lockdown have brought about a very distressing time for small and medium enterprises in India and across the world.

On May 14th, 2020, the Government of India announced a Rs 20 lakh crore stimulus package, which includes 6 relief measures to bring India’s vast MSME sector back to life. Banks and NBFCs are also willing to offer up to 20% of the entire outstanding credit to MSMEs. However, the root cause of disruption in small and medium enterprises, which rely heavily on personal communication, will remain unresolved unless the sector readily adopts technology to drive business amidst social distancing and a staggered workforce.

“The economic stimulus will help many SMEs resume operations by providing access to credit to help overcome near-term loss of income. This will also help businesses grow and maintain business continuity. The long-term focus on enabling SMEs with technology also provides a great opportunity for our business.”

– Saahil Goel, CEO and co-founder, Shiprocket

Here’s how simple technology solutions like conversational chatbots can help SMEs to continue their businesses remotely.

The need of the hour

While running a small business can be challenging even in favourable times, productivity suffers greatly when such unanticipated circumstances stack against the business. Because of their small size, lack of resources and constraints on investing in workforce training are the biggest challenges these employers face.

Moreover, most MSMEs rely on persuasion, for which communication is key. A communication gap may lead to losing customers, which businesses certainly cannot afford at this time. In line with the Government of India’s move towards self-reliance (Atmanirbhar Bharat), reducing dependencies of any form can help startups and SMEs sustain their business.

A feasible solution to resolve communication-related concerns is deploying technologies for customer support, scheduling and reminders. 

How can conversational chatbots help SMEs and consultants?

Chatbots are a great medium for automating customer support and helpdesk conversations, freeing human resources for more sophisticated tasks. Conversational chatbots have NLP (Natural Language Processing) capabilities that let them understand differently phrased queries and deliver more human-like responses.

At a time when social distancing is the new normal and business travel has suffered a setback, chatbots can make contactless, global customer support a reality. Key benefits:

  1. 24x7 communication support: with context-based automated replies, chatbots help in lead generation and nurturing.
  2. Multiple language support: conversational chatbots support regional languages, and many are trained on industry-specific jargon, which makes communication more realistic (human-like).
  3. Platform integration: chatbots can be integrated with WhatsApp, Facebook Messenger, Skype, and many other platforms where consumers are most active. Enterprise chatbots can also integrate with CRMs.
  4. Video conferencing: some chatbots, like Hitee, offer video conferencing alongside chat to enable face-to-face, more personalized interaction.
  5. Data collection: the chatbot platform maintains data records that can later be analyzed for consumer intent and preferences.
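To make the "context-based automated replies" idea concrete, here is a minimal, purely illustrative sketch of intent-based auto-replies. Production conversational bots use trained NLP models rather than keyword matching, and all intent names and canned answers below are invented for the example:

```python
# Illustrative intent table: keywords -> canned reply (hypothetical data).
INTENTS = {
    "pricing": (["price", "cost", "plan"], "Our plans start at Rs 999/month."),
    "hours":   (["open", "hours", "timing"], "We are available 24x7."),
}
FALLBACK = "Let me connect you to a human agent."

def reply(message: str) -> str:
    """Return the first matching canned answer, else hand off to a human."""
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(k in text for k in keywords):
            return answer
    return FALLBACK

print(reply("What are your hours?"))  # -> "We are available 24x7."
```

The fallback branch matters in practice: when the bot cannot classify a query, escalating to a human keeps the customer from hitting a dead end.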

SMEs that benefit the most from chatbots

1. Private clinics

Juniper Research suggests that worldwide, the adoption of virtual assistants in healthcare will reach $3.6 billion by 2020.

Private medical practitioners can use chatbots to schedule appointments, share diagnosis results, video chat (telehealth) to understand a patient’s condition, provide instant support, and prescribe medicines.

2. Legal consultation services

Clio reports that law practitioners spend only 2.3 of their 8 working hours in actual practice every day; the rest of their time is consumed by administration, marketing, and business development activities.

[Figure: work distribution of legal professionals]

Law practitioners are already using chatbots to generate legal documents (e.g. AILira), privacy policy or a non-disclosure agreement (e.g. Lexi) and support customers with legal FAQs (e.g. Lawdroid).

Chatbots can also help legal consultants automate due diligence procedures, schedule meetings with clients, set reminders, and answer firm-related questions.

3. Career consultation & educational institutes

Chatbots can act as virtual teaching assistants for managing student queries, lesson plans, assignments and video FAQs.

Education institutes can also automate helpdesk queries related to admissions, fees, and curriculums.

4. Insurance companies

Amid this pandemic, health insurance and claims-related queries have skyrocketed. From making claims and browsing new plans to increasing one-on-one conversational efficiency and nurturing leads into sales, chatbots can help insurance companies with customer query support.

Also read: Adoption of Chatbots across Insurance

5. Stock brokers & wealth managers

Stockbrokers can personalize the interaction and resolve queries irrespective of the client’s location. Wealth managers can continue their lending business from home using chatbots. Bots with video conferencing tools can help them understand the clients’ sentiments and improve conversation efficiency. 

If you need customer support automation solutions, we’re here to help. We’ve built Hitee, India’s leading industry-specific chatbot, to empower SMEs with AI-based chatbot solutions. For your specific requirements, please feel free to write to us at hello@mantralabsglobal.com.


Source: https://www.mantralabsglobal.com/blog/customer-service-chatbots/

AI

Google AI researchers want to teach robots tasks through self-supervised reverse engineering



A preprint paper published by Stanford University and Google researchers proposes an AI technique that predicts how goals were achieved, effectively learning to reverse-engineer tasks. They say it enables autonomous agents to learn through self-supervision, which some experts believe is a critical step toward truly intelligent systems.

Learning general policies for complex tasks often requires dealing with unfamiliar objects and scenes, and many methods rely on forms of supervision like expert demonstrations. But these entail significant tuning; demonstrations, for example, must be completed by experts many times over and recorded by special infrastructure.

That’s unlike the researchers’ proposed approach — time reversal as self-supervision (TRASS) — which predicts “reversed trajectories” to create sources of supervision that lead to a goal or goals. A home robot could leverage it to learn tasks like turning on a computer, turning a knob, or opening a drawer, or chores like setting a dining table, making a bed, and cleaning a room.

“Most manipulation tasks that one would want to solve require some understanding of objects and how they interact. However, understanding object relationships in a task-specific context is non-trivial,” explain the coauthors. “Consider the task [making a bed]. Starting from a made bed, random perturbations to the bed can crumple the blanket, which when reversed provides supervision on how to flatten and spread the blanket. Similarly, randomly perturbing objects in a clean [or] organized room will distribute the objects around the room. These trajectories reversed will show objects being placed back to their correct positions, strong supervision for room cleaning.”


[Image: Google TRASS robot]

TRASS works by collecting data given a set of goal states, applying random forces to disrupt the scene, and carefully recording each of the subsequent states. A TRASS-driven agent explores outwardly using no expert knowledge, collecting a trajectory that, when reversed, can teach the agent how to return to the goal states. In this way, TRASS trains a model to predict trajectories in reverse: given the current state as input, the trained model provides supervision toward the goal in the form of a guiding trajectory of frames (but not actions).
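The data-collection loop described above can be sketched in a few lines. This is a toy illustration, not the paper's code: a 1-D "scene" stands in for the robot environment, and all interface names (`reset_to`, `random_perturbation`) are assumptions for the example:

```python
import random

class ToyEnv:
    """1-D stand-in for a manipulation scene; the state is a single number."""
    def reset_to(self, goal):
        self.state = goal
        return self.state
    def random_perturbation(self):
        return random.choice([-1, 1])     # a random disrupting force
    def step(self, force):
        self.state += force
        return self.state

def collect_reversed_trajectories(env, goal_states, n_rollouts=5, horizon=4):
    dataset = []
    for _ in range(n_rollouts):
        # Start AT a goal state, then disrupt the scene outward.
        traj = [env.reset_to(random.choice(goal_states))]
        for _ in range(horizon):
            traj.append(env.step(env.random_perturbation()))
        # Reversed, the rollout leads back toward the goal: frames-only
        # supervision, with no expert demonstrations required.
        dataset.append(traj[::-1])
    return dataset

data = collect_reversed_trajectories(ToyEnv(), goal_states=[0])
assert all(t[-1] == 0 for t in data)  # every reversed rollout ends at a goal
```

Note that the dataset stores only states (frames), not the perturbing forces: the reversed trajectories supervise *where* to go, while a separate controller decides *how* to get there.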

At test time, a TRASS-driven agent’s objective is to reach a state that satisfies certain specified goal conditions. At every step the trajectory is recomputed to produce a high-level guiding trajectory, which decouples high-level planning from low-level control so that it can be used as indirect supervision to produce a policy via model-based and model-free techniques.
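The test-time loop can be sketched as follows, again as a 1-D toy under assumed helpers (`predict_guide`, `predict_next` are stand-ins for the learned trajectory model and a dynamics model): at each step, re-predict the guiding trajectory from the current state, then pick the action whose predicted next state best matches the next guiding frame.

```python
def act(state, predict_guide, predict_next, actions):
    guide = predict_guide(state)                       # frames toward the goal
    target = guide[1] if len(guide) > 1 else guide[0]  # next guiding frame
    # Low-level control just chases the next frame of the guide.
    return min(actions, key=lambda a: abs(predict_next(state, a) - target))

# Toy stand-ins: the goal is 0, the guide counts down, dynamics are additive.
guide_fn = lambda s: list(range(s, -1, -1)) if s >= 0 else [0]
next_fn = lambda s, a: s + a

state = 3
while state != 0:
    state = next_fn(state, act(state, guide_fn, next_fn, [-1, 0, 1]))
print(state)  # reaches the goal state 0
```

Recomputing the guide every step, rather than following one fixed plan, is what makes the scheme robust to imperfect low-level control: the agent always steers from where it actually is.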

In experiments, the researchers applied TRASS to the problem of configuring physical Tetris-like blocks. With a real-world robot — the Kuka IIWA — and a TRASS vision model trained in simulation and then transferred to the robot, they found that TRASS successfully paired blocks it had seen during training 75% of the time and blocks it hadn’t seen 50% of the time over the course of 20 trials each.

TRASS has limitations in that it can’t be applied in cases where object deformations are irreversible, for example (think cracking an egg, mixing two ingredients, or welding two parts together). But the researchers believe it can be extended by using exploration methods driven by state novelty, among other things.

“[O]ur method … is able to predict unknown goal states and the trajectory to reach them,” they write. “This method used with visual model predictive control is capable of assembling Tetris-style blocks with a physical robot using only visual inputs, while using no demonstrations or explicit supervision.”

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/3Rd18kkyUUc/

