AR/VR

Pivot Fast: next generation immersive simulation

Simulation and training are becoming more immersive as providers and businesses seek lower costs and greater feedback and analytics. This recap of the latest episode of the Pivot Fast web series is from VRWorldTech editorial board member Sophia Moshasha of Brightline Interactive.

The world is experiencing a mass shift to virtual technologies. As we migrate from physical to virtual presences, people across industries are becoming more aware of the possibilities and capabilities of immersive technologies, including the enhancement of training and simulation.

While most of the world is still exploring the incredible potential of this technology to deliver a fully contextualised experience, we as an industry are already moving quickly beyond the world’s current understanding.

Various companies have been tackling large-scale problems to enhance VR and AR. Specifically, a select few are creating immersive training and simulation solutions such as head-mounted displays that are operable in any environment, full-body suits that track the body in space, brain-computer interfaces (BCIs) that recognise patterns in the human brain, and software creation engines that tie all of this together.

For this discussion, we brought together leading technologists who are working within the art of the possible to introduce the next generation of enhanced immersive training capabilities.

Featured speakers

Sebastian Loze, Simulations Industry Manager, Epic Games/Unreal Engine

Unreal Engine, Epic Games’ software creation platform, provides the ability to create interactive and immersive experiences. Unreal allows developers to digitally recreate real-life experiences and exercises in virtual environments, and is at the forefront of the lifecycle of simulation.

Dimitri Mikhalchuk, Co-Founder and Chief Revenue Officer, Teslasuit

Teslasuit’s revolutionary full-body, sensor-integrated suits make it possible to focus on creating realism through interaction with the physical elements of simulation. Taking it a step further, the technology influences neuroplasticity, the amount of memory our brains will allocate to learning and training, which has never been done before. The pace of human life is changing with the evolution of technology, so we now expect more efficiency in the way we learn and train, and Teslasuit is actively working on providing real solutions to meet those needs.

Adam Molnar, Co-Founder and Head of Partnerships, Neurable

Neurable is one of the leading companies in neurotechnology working on BCIs, integrated with VR devices, and is creating ways to measure human cognitive performance that translate to immersive environments. Neurable is working to quantify aspects of the cognitive experience in immersive simulation to understand human cognitive states that have been, historically, based on more qualitative and subjective means. The team is making large strides in building new systems to ingest and analyse attention, cognitive load, and neural efficiencies to significantly enhance and personalise training.

Urho Kontori, Co-Founder and Chief Product Officer, Varjo Technologies

Varjo offers a variety of incredibly capable products in VR and MR. Varjo was created with the goal of revolutionising the training, simulation and design markets. Its main focus is providing high-resolution viewing experiences: matching the acuity of the human eye is key for virtual training and for VR at large. Varjo is also developing eye-tracking capabilities for gathering analytics on passive viewing performance.

MR is where Varjo continues to focus the rest of its efforts. The XR-1 headset allows for video passthrough MR and selective occlusion of the physical environment. 

The combination of all of these technologies and capabilities allows for the creation of the ultimate immersive training environment.

Value of immersive technology

There are three key pillars in the value of immersive technology: 

➨ Significant cost savings;

➨ A reduction in the time to learn, broken down into receiving, retaining and deploying data; and

➨ The ability to measure in an unprecedented capacity.

Beyond the initial, more obvious value that immersive technology brings, companies such as Varjo, Teslasuit, Neurable and Epic Games are making incredible strides to advance capabilities, providing even more value through data collection and analysis at a depth that has never been achieved before.

These capabilities will ultimately allow businesses to spend less and get more out of training. Once we collectively arrive at the conclusion that immersive technology is a holistically better way to train, we can then advance the basis of the technology with capabilities such as those represented in this discussion. We can do things like map the body’s perfect motion through space, understand cognitive load, track and measure passive focal attention, and even automate the training environment based on the combination of all of these factors.

Budget always becomes the driving force in simulation and, because of that, immersive training is the most obvious solution to overcoming budgetary constraints. In fact, many programmes anticipate being able to cut their training costs significantly by incorporating immersive training. When it comes to cost savings and revenue generation, it starts with the software creation engines. One of the things that Epic Games did to reduce costs was to make its creation platform, Unreal Engine, free to developers, incentivising creation and interaction within the community in order to spur faster innovation.

Personalised analytics are another big part of what these companies are offering to advance training. As immersive technology companies, we constantly ask ourselves how we can act as an interpretation layer that feeds into the training system in a manageable and understandable way. Handling large amounts of data has been tricky in traditional simulation systems. We have a responsibility as leaders in the space to move beyond raw data and numbers, and towards gathering, interpreting and drawing new findings and understanding from that information.

To help facilitate this, the solution providers in this discussion are working to integrate with other hardware and software providers, so that users and developers can use the immersive reality tools of their choice to create the appropriate simulations for specific use cases. Monolithic solutions in simulation no longer exist, so the industry needs to be inclusive of the full range of hardware and software on offer, allowing creators to combine the best of these various solutions.

We traditionally learn through translation methods such as reading text, watching video, viewing photos, or listening to lectures in a classroom. In doing so, the trainee naturally fills in the gaps for how the information they are learning fits the actual context of a given scenario. Immersive technology allows us to recreate that original context to achieve the greatest fidelity and realism, and to facilitate higher levels of comprehension with lower levels of distraction. The trainee is then able to recall that original context far faster and much more accurately. We recreate original context by keeping the aspects of a scenario that help users learn and leaving out the aspects that distract and limit learning. These attributes are significant enhancers to traditional means of learning and training.

Adaptive virtual environments

Immersive technology developers have achieved the ability to give users a realistic, embodied experience, the ultimate training tool. The teams in this discussion are working on ways to take these personalised experiences to new levels by automating the environment so it is objectively tailored to the user or trainee’s performance, with the ability to track, measure and respond to the trainee’s cognitive state, areas of focal attention and actions at any given point in time.

Brightline Interactive’s Performance Adaptive Virtual Engine (PAVE) enables VR scenarios to be adaptive to a user’s active or passive performance. The simulation learns both the individual user behaviour and the trends of the collective so that we can uncover aspects of training effectiveness that have not yet been addressed or answered. Cognitive human performance is a big focus within the US Department of Defense. The understanding of an individual’s passive performance, combined with automated tailoring of training, has created opportunities to perfect individualised training at scale.

In addition to enabling better training for end users, automated virtual systems allow us to create next-generation instructors by reducing manpower and logistics and removing unnecessary distraction from instructors. We now have the opportunity to bring instructors into the contextualised training experience in order to maximise their effectiveness in transferring knowledge from one human to the next. Immersive technology is helping instructors learn how to interact with these new tools and how best to use them to maximise impact.

There are significant ways that immersive technology will continue to enhance training and simulation effectiveness. Thanks to our guests on this episode of Pivot Fast, we will continue to push the boundaries of these capabilities to create robust, smart training systems.

About the author

An evangelist for immersive technology, Sophia Moshasha spends her time educating the community on applications of virtual and augmented reality (VR/AR). She is currently director of immersive platforms at Brightline Interactive, an immersive technology company that produces custom interactive technology, including VR and AR experiences, for brands, agencies and government entities. Sophia is also vice president of the VR/AR Association Washington DC chapter, co-chairs the association’s marketing and defence committees, and co-hosts the association’s podcast, Everything VR & AR.

Main image: Sebastian Loze, Dimitri Mikhalchuk, Adam Molnar and Urho Kontori

Source: https://vrworldtech.com/2020/07/28/pivot-fast-next-generation-immersive-simulation/

AR/VR

Facebook Researchers Develop Bleeding-edge Facial Reconstruction Tech So You Can Make Goofy Faces in VR


Facebook Reality Labs, the company’s R&D division, has been leading the charge on making virtual reality avatars realistic enough to cross the dreaded ‘uncanny valley’. New research from the group aims to support novel facial expressions so that your friends will accurately see your silly faces in VR.

Most avatars used in virtual reality today are more cartoon than human, largely as a way to avoid the ‘uncanny valley’ problem—where more ‘realistic’ avatars become increasingly visually off-putting as they get near, but not near enough, to how a human actually looks and moves.

The Predecessor: Codec Avatars

The ‘Codec Avatar’ project at Facebook Reality Labs aims to cross the uncanny valley by using a combination of machine learning and computer vision to create hyper-realistic representations of users. By training the system to understand what a person’s face looks like and then tasking it with recreating that look based on inputs from cameras inside of a VR headset, the project has demonstrated some truly impressive results.

Recreating typical facial poses with enough accuracy to be convincing is already a challenge, but then there’s a myriad of edge cases to deal with, any of which can throw the whole system off and send the avatar right back into the uncanny valley.

The big challenge, Facebook researchers say, is that it’s “impractical to have a uniform sample of all possible [facial] expressions” because there are simply so many different ways that one can contort their face. Ultimately this means there’s a gap in the system’s example data, leaving it confused when it sees something new.

The Successor: Modular Codec Avatars

Image courtesy Facebook Reality Labs

Researchers Hang Chu, Shugao Ma, Fernando De la Torre, Sanja Fidler, and Yaser Sheikh from the University of Toronto, Vector Institute, and Facebook Reality Labs, propose a solution in a newly published research paper titled Expressive Telepresence via Modular Codec Avatars.

While the original Codec Avatar system looks to match an entire facial expression from its dataset to the input that it sees, the Modular Codec Avatar system divides the task by individual facial features—like each eye and the mouth—allowing it to synthesize the most accurate pose by fusing the best match from several different poses in its knowledge.

In Modular Codec Avatars, a modular encoder first extracts information inside each single headset-mounted camera view. This is followed by a modular synthesizer that estimates a full face expression along with its blending weights from the information extracted within the same modular branch. Finally, multiple estimated 3D faces are aggregated from different modules and blended together to form the final face output.

The goal is to improve the range of expressions that can be accurately represented without needing to feed the system more training data. You could say that the Modular Codec Avatar system is designed to be better at making inferences about what a face should look like compared to the original Codec Avatar system which relied more on direct comparison.
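The paper describes this pipeline only at a high level and does not ship reference code, but the flow maps onto a simple structure. The sketch below is a rough, hypothetical Python/PyTorch illustration of that per-module encode, synthesise and blend idea; every class name, layer size and tensor shape is an assumption for illustration, not Facebook Reality Labs’ actual implementation.

```python
# Hypothetical sketch of the Modular Codec Avatar flow described above.
# Layer sizes, vertex counts and class names are illustrative assumptions.
import torch
import torch.nn as nn


class ModularBranch(nn.Module):
    """One branch per headset-mounted camera view (e.g. left eye, right eye, mouth)."""

    def __init__(self, latent_dim=256, n_vertices=7306):
        super().__init__()
        # Modular encoder: extracts a latent code from a single camera view.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Modular synthesizer: estimates a full-face mesh plus a blending
        # weight from the information extracted within this branch alone.
        self.face_head = nn.Linear(latent_dim, n_vertices * 3)
        self.weight_head = nn.Linear(latent_dim, 1)

    def forward(self, view):
        z = self.encoder(view)
        return self.face_head(z), self.weight_head(z)


class ModularCodecAvatar(nn.Module):
    def __init__(self, n_views=3):
        super().__init__()
        self.branches = nn.ModuleList([ModularBranch() for _ in range(n_views)])

    def forward(self, views):
        faces, weights = zip(*(b(v) for b, v in zip(self.branches, views)))
        faces = torch.stack(faces, dim=1)                      # (B, n_views, V*3)
        weights = torch.softmax(torch.cat(weights, dim=1), 1)  # (B, n_views)
        # Aggregate: blend the per-module face estimates into the final output.
        return (weights.unsqueeze(-1) * faces).sum(dim=1)


if __name__ == "__main__":
    model = ModularCodecAvatar(n_views=3)
    views = [torch.randn(2, 1, 64, 64) for _ in range(3)]  # dummy IR camera crops
    print(model(views).shape)  # torch.Size([2, 21918])
```

The design point mirrored here is that each branch sees only its own camera view, so the blending step is where information from different modules, and potentially from very different example poses, gets fused into one face.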

The Challenge of Representing Goofy Faces

One of the major benefits of this approach is improving the system’s ability to recreate novel facial expressions which it wasn’t trained against in the first place—like when people intentionally contort their faces in ways which are funny specifically because people don’t normally make such faces. The researchers called out this particular benefit in their paper, saying that “making funny expressions is part of social interaction. The Modular Codec Avatar model can naturally better facilitate this task due to stronger expressiveness.”

They tested this by constructing ‘artificial’ funny faces, randomly shuffling face features from completely different poses (i.e. left eye from pose A, right eye from pose B, and mouth from pose C), and checking whether the system could produce realistic results given the unexpectedly dissimilar feature input.
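As a rough illustration of that test setup, the hypothetical sketch below assembles a shuffled input by sampling each module’s camera view from a different randomly chosen pose and feeding the mismatched set to a modular model such as the one sketched earlier; the pose grouping and module ordering are assumptions, not the researchers’ protocol.

```python
# Hypothetical sketch of the 'artificial funny face' test described above:
# every facial module gets its camera view from a different, randomly
# chosen pose, and the model must still fuse them into one plausible face.
import random


def make_artificial_funny_face(model, views_by_pose):
    """views_by_pose: list over captured poses, each itself a list of
    per-module camera crops (e.g. [left_eye, right_eye, mouth])."""
    n_modules = len(views_by_pose[0])
    # Independently pick a random source pose for each module.
    mixed_views = [random.choice(views_by_pose)[m] for m in range(n_modules)]
    # Feature combinations like this never appear holistically in training.
    return model(mixed_views)
```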

Image courtesy Facebook Reality Labs

“It can be seen [in the figure above] that Modular Codec Avatars produce natural flexible expressions, even though such expressions have never been seen holistically in the training set,” the researchers say.

As the ultimate challenge for this aspect of the system, I’d love to see its attempt at recreating the incredible facial contortions of Jim Carrey.

Eye Amplification

Beyond making funny faces, the researchers found that the Modular Codec Avatar system can also improve facial realism by negating the difference in eye-pose that is inherent with wearing a headset.

In practical VR telepresence, we observe users often do not open their eyes to the full natural extent. This may be due to muscle pressure from wearing the headset, and display light sources near the eyes. We introduce an eye amplification control knob to address this issue.

This allows the system to subtly modify the eyes to be closer to how they would actually look if the user wasn’t wearing a headset.

Image courtesy Facebook Reality Labs
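The paper excerpt does not spell out the mechanics of the knob, but conceptually it pushes the estimated eyelid opening further from a relaxed reference by a tunable factor. Below is a minimal numpy sketch of that idea, assuming the expression is a flat coefficient vector and that we know which indices drive eyelid opening; both are illustrative assumptions rather than the actual system.

```python
# Hypothetical 'eye amplification control knob': push the eyelid-opening
# coefficients further from a neutral reference by a user-set factor.
# The coefficient layout and index choice are illustrative assumptions.
import numpy as np


def amplify_eyes(expression, eye_idx, amplification=1.2, neutral=None):
    """expression: 1-D array of expression coefficients;
    eye_idx: indices assumed to control eyelid opening."""
    if neutral is None:
        neutral = np.zeros_like(expression)
    out = expression.copy()
    out[eye_idx] = neutral[eye_idx] + amplification * (expression[eye_idx] - neutral[eye_idx])
    return out


# Example: amplify two assumed eyelid coefficients by 20%.
expr = np.array([0.1, 0.4, 0.35, 0.8])
print(amplify_eyes(expr, eye_idx=[1, 2]))  # ≈ [0.1, 0.48, 0.42, 0.8]
```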

– – – – –

While the idea of recreating faces by fusing together features from disparate pieces of example data isn’t itself entirely new, the researchers say that “instead of using linear or shallow features on the 3D mesh [like prior methods], our modules take place in latent spaces learned by deep neural networks. This enables capturing of complex non-linear effects, and producing facial animation with a new level of realism.”

The approach is also an effort to make this kind of avatar representation a bit more practical. The training data necessary to achieve good results with Codec Avatars requires first capturing the real user’s face across many complex facial poses. Modular Codec Avatars achieve similar results with greater expressiveness on less training data.

It’ll still be a while before anyone without access to a face-scanning lightstage will be able to be represented so accurately in VR, but with continued progress it seems plausible that one day users could capture their own face model quickly and easily through a smartphone app and then upload it as the basis for an avatar which crosses the uncanny valley.

Source: https://www.roadtovr.com/facebook-reality-labs-modular-codec-avatar-research-goofy-face-vr/

AR/VR

Psychedelic VR Exhibition Terminus Comes to Oculus Rift

The COVID-19 pandemic has forced many people, businesses and organisations to overhaul how they operate, either closing completely or enhancing their online presence. New Zealand artists Jess Johnson and Simon Ward were touring their Terminus exhibition when the pandemic struck, so they’ve now brought it to Oculus Rift for you to enjoy at home.

Terminus VR

Created as a five-part psychedelic virtual reality (VR) experience, Terminus was commissioned by the National Gallery of Australia. It had already been shown at the Heide Museum of Contemporary Art (Melbourne), Jack Hanley Gallery (New York) and Nanzuka Gallery (Tokyo) before the lockdowns commenced; once they did, Johnson and Ward set about compiling the various parts for home audiences.

Presented as a sort of ‘choose-your-own-adventure’, Terminus lets you journey through five trippy realms: Fleshold Crossing, Unknown, Scumm Engine, Gog & Magog and Tumblewych.

“As an artist, I’m really excited by the psychological implications of being able to position an audience essentially within my artwork. I think VR is the most effective conduit from one brain to another that’s ever existed. With VR you can seduce someone into accepting an entirely new reality,” says Johnson in a statement.

Terminus VR

“Instead of using VR to simulate reality we’ve tried to make Jess’s world a dream space where the rules of reality don’t apply,” Ward adds.

2020 has seen other examples of artists looking to connect in virtual ways, two of the most recent via The Museum of Other Realities. Fashion show The Fabric of Reality and Cannes XR Virtual both held events inside the app.

Terminus is available now via the Oculus Store for £5.99/$7.99 USD. For the latest Oculus Rift releases, keep reading VRFocus.

Source: https://www.vrfocus.com/2020/08/psychedelic-vr-exhibition-terminus-comes-to-oculus-rift/

AR/VR

Remote AR multiplayer for gaming and entertainment

Source: https://arvrjourney.com/remote-ar-multiplayer-for-gaming-972ceec3e69a?source=rss—-d01820283d6d—4
