
AR/VR

Best VR Engines for Enterprise Applications


Virtual reality (VR) is an umbrella term for a range of technologies that immerse users in a simulated 3D environment. It sits primarily at the intersection of human-computer interaction, computer graphics, computer vision, and 3D sensing.


Virtual reality engines were once associated only with gaming, but they have since gained momentum across all industries. VR in the enterprise and consumer sectors has taken the world of tech by storm, transforming from a figment of science fiction into a billion-dollar business. According to expert estimates, the virtual reality (VR) market was forecast to reach 18.8 billion U.S. dollars in 2020, representing a 78% increase in spending over the previous year.

Virtual reality app development has become a highly competitive space, with several companies offering excellent VR engines for businesses and other large enterprises. With so many VR options available in the market, it is easy for company executives to get confused about the best ones that suit their business. We will look at some of the best VR engines for enterprise applications.

Top VR engines to consider

Amazon Sumerian

Amazon Sumerian is a virtual reality engine developed by AWS. When using this VR engine, you don't need 3D graphics or VR programming skills. The engine works with popular VR platforms, including Oculus Go, HTC Vive Pro, Oculus Rift, HTC Vive, Google Daydream, and Lenovo Mirage. Amazon Sumerian also works well with Android and iOS mobile devices.

The good thing about this VR engine is that it has numerous enterprise applications. You can use it for cases such as employee education, training simulation, retail & sales, virtual concierge, and field services productivity.

Some of the powerful features of Amazon Sumerian include:

  • Sumerian editor;
  • Sumerian hosts;
  • Asset management;
  • The capability to script the logic in any scene you create.

Amazon Sumerian offers several learning resources that make it easy for you to use the VR engine. The resources have valuable information for virtual reality developers.

Maya

Maya is one of the most widely used tools for building VR enterprise applications. The 3D software development tool from Autodesk is used for various purposes, including 3D animation, motion graphics, and VFX.

It is currently one of the most powerful VR engines as it is used for various functions such as dynamics, 3D rendering, effects, 3D animation, 3D shading, 3D modelling, motion graphics, pipeline integration, and more.

Unity

Unity is a popular VR engine as it allows you to develop solutions for various sectors. With Unity, you can create VR solutions for sectors like automotive, transportation, manufacturing, media & entertainment, engineering, and construction.

The tool comes with numerous perks for developers, such as:

  • Artist and designer tools;
  • A powerful editor for creating Unity 3D VR assets;
  • CAD tools; and 
  • Collaboration tools.

Google VR for everyone

Google VR is the engine developed by the search giant Google. The development tool allows you to create an immersive VR experience for your company. The tool and other VR engines are available on the Google VR developer portal.

The Google VR engine can be used to develop VR tools on numerous platforms, including Android, iOS, Unity, Unreal, and the web. Google provides software development kits (SDKs) for each of the VR platforms it supports, and they are easy to access.

Google VR offers numerous perks, which include:

  • Low cost;
  • Easy setup and use for developing VR apps;
  • A choice of VR platforms, making it easier for developers to pick the right fit.

Final thoughts

Using VR for your business can open up a whole new market for you. The VR engines discussed in this post are some of the best for enterprise applications. They allow virtual reality app development for different purposes and on multiple platforms.

Source: https://www.vrfocus.com/2020/11/best-vr-engines-for-enterprise-applications/

AR/VR

US Army using Augmented Reality overlays in its research for the detection of roadside explosive hazards


In Augmented Reality News 

January 23, 2021 – The US Army Combat Capabilities Development Command (DEVCOM), Army Research Laboratory (ARL), has recently announced that it is employing the use of augmented reality (AR) overlays in its research for the detection of roadside explosive hazards, such as improvised explosive devices (IEDs), unexploded ordnance and landmines.

Route reconnaissance in support of convoy operations remains a critical function to keep Soldiers safe from such hazards, which continue to threaten operations abroad and continually prove to be an evolving and problematic adversarial tactic. To combat this problem, ARL and other research collaborators were funded by the Defense Threat Reduction Agency, via the ‘Blood Hound Gang Program’, which focuses on a system-of-systems approach to standoff explosive hazard detection.

Kelly Sherbondy, Program Manager at the lab, said “Logically, a system-of-systems approach to standoff explosive hazard detection research is warranted going forward,” adding, “Our collaborative methodology affords implementation of state-of-the-art technology and approaches while rapidly progressing the program with seasoned subject matter experts to meet or exceed military requirements and transition points.”

The program has seven external collaborators from across the country, which include the US Military Academy, The University of Delaware Video/Image Modeling and Synthesis Laboratory, Ideal Innovations Inc., Alion Science and Technology, The Citadel, IMSAR and AUGMNTR.

In Phase I of the program, researchers took 15 months to evaluate mostly high-technology readiness level (TRL) standoff detection technologies against a variety of explosive hazard emplacements. In addition, a lower-TRL standoff detection sensor, which was focused on the detection of explosive hazard triggering devices, was developed and assessed. According to the Army, the Phase I assessment included probability of detection, false alarm rate and other important information that will ultimately lead to a down-selection of sensors based on best performance for Phase II of the program.

Researchers use various sensors on Unmanned Aerial Systems equipped with high-definition infrared cameras and navigation to enable standoff detection of explosive hazards using machine learning techniques.

The sensors evaluated during Phase I included an airborne synthetic aperture radar, ground vehicular and small unmanned aerial vehicle LIDAR, high-definition electro-optical cameras, long-wave infrared cameras and a non-linear junction detection radar. Researchers carried out a field test in real-world representative terrain over a 7-kilometer test track, which included a total of 625 emplacements comprising a variety of explosive hazards, simulated clutter and calibration targets. They collected data before and after emplacement to simulate a real-world change between sensor passes.
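To make the two metrics cited for the Phase I assessment concrete, here is a minimal, purely illustrative sketch of how a trial like this could be scored; the function and the numbers are hypothetical, shaped only loosely after the article's 625 emplacements and 7-kilometer track:

```python
# Illustrative only: scoring a standoff-detection trial by probability of
# detection (Pd) and false alarm rate (FAR), the two Phase I metrics.

def score_trial(detections, emplacements, matched, track_km):
    """detections: total alarms raised; emplacements: number of real targets;
    matched: alarms corresponding to a real target; track_km: track length."""
    pd = matched / emplacements          # fraction of real targets detected
    false_alarms = detections - matched  # alarms with no target behind them
    far = false_alarms / track_km        # false alarms per kilometer of track
    return pd, far

# Hypothetical numbers for a 625-emplacement, 7 km trial:
pd, far = score_trial(detections=540, emplacements=625, matched=500, track_km=7)
```

Aggregating these two numbers per sensor is what would support the down-selection described above: a sensor is only useful if it pairs a high Pd with a FAR low enough for operators to act on.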

Terabytes of data were collected across the sensor sets, which was needed to adequately train artificial intelligence/machine learning (AI/ML) algorithms. The algorithms subsequently performed autonomous automatic target detection for each sensor. The Army stated that this sensor data is pixel-aligned via geo-referencing, and the AI/ML techniques can be applied to some or all of the combined sensor data for a specific area. Furthermore, the detection algorithms are able to provide ‘confidence levels’ for each suspected target, which is displayed to a user as an augmented reality overlay. The detection algorithms were executed with various sensor permutations so that performance results could be aggregated to determine the best course of action moving forward into Phase II.
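The combination described here, which involves geo-referenced, pixel-aligned detections from several sensors merged and thresholded by confidence before reaching the overlay, can be sketched in a few lines. This is not the Army's actual pipeline; the grid cells, sensor names and threshold are all invented for illustration:

```python
# Illustrative sketch: detections from several pixel-aligned sensors are merged
# per geo-referenced grid cell, and only targets whose best confidence clears a
# threshold are passed on to the AR overlay.

from collections import defaultdict

def merge_detections(per_sensor_hits, threshold=0.6):
    """per_sensor_hits: {sensor_name: [(cell, confidence), ...]} with cells
    already geo-referenced to a common grid. Returns {cell: best_confidence}
    for cells whose best confidence meets the threshold."""
    best = defaultdict(float)
    for hits in per_sensor_hits.values():
        for cell, conf in hits:
            best[cell] = max(best[cell], conf)  # keep strongest vote per cell
    return {cell: c for cell, c in best.items() if c >= threshold}

overlay = merge_detections({
    "sar":   [((12, 40), 0.55), ((13, 41), 0.91)],
    "lidar": [((12, 40), 0.72)],
})
# Cell (12, 40) clears the threshold via LIDAR; (13, 41) via SAR.
```

Running the same merge over different sensor permutations, as the article describes, would show which subsets of sensors carry most of the detection performance.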

“The accomplishments of these efforts are significant to ensuring the safety of the warfighter in the current operation environment,” said Lt. Col. Mike Fuller, US Air Force Explosive Ordnance Disposal and DTRA Program Manager.

The Army noted that future research into the technology will enable real-time automatic target detection displayed with an augmented reality engine. The three-year effort will ultimately culminate with demonstrations at multiple testing facilities to show the technology’s robustness over varying terrain.

“We have side-by-side comparisons of multiple modalities against a wide variety of realistic, relevant target threats, plus an evaluation of the fusion of those sensors’ output to determine the most effective way to maximize probability of detection and minimize false alarms,” Fuller said. “We hope that the Army and the Joint community will both benefit from the data gathered and lessons learned by all involved.”

Image credit: US Army

About the author

Sam Sprigg

Sam is the Founder and Managing Editor of Auganix. With a background in research and report writing, he covers news articles on both the AR and VR industries. He also has an interest in human augmentation technology as a whole, and does not just limit his learning specifically to the visual experience side of things.

Source: https://www.auganix.org/us-army-using-augmented-reality-overlays-in-its-research-for-the-detection-of-roadside-explosive-hazards/


AR/VR

LIV Now Supports Full-body Avatars from ReadyPlayerMe, Making it Easy to Stream VR Without a Green Screen


Many VR streamers use complicated mixed reality setups to show themselves from a third-person perspective inside the virtual world. LIV, a leading tool which makes this possible, now supports free, customizable, full-body avatars from ReadyPlayerMe, making it possible to stream your avatar inside of VR without the need for a green screen.

In addition to true mixed reality streaming, Liv has supported streaming with avatars for some time. However, actually finding a unique avatar for yourself was no simple task. Now, Liv has partnered with avatar maker ReadyPlayerMe to make it as simple as can be.

ReadyPlayerMe allows you to build a free full-body avatar—optionally based on a photo of yourself—in mere minutes. You can use the avatar as the character in select Liv-supported VR games, allowing stream viewers to see your movements in third-person.

Here’s an example of a ReadyPlayerMe avatar in Pistol Whip streamed via Liv:

What Sadie said! They have improved on them, they now are full body and support finger tracking and full body tracking! It’s pretty smooth! pic.twitter.com/J8rY5UwWOo

— AtomBombBody (@AtomBombBody) January 17, 2021

Avatars from ReadyPlayerMe are moderately customizable, and it’s easy enough to get something you’re happy with relatively quickly, though we hope to see more customization options in the future (like height, build, and more control over outfits).

Image courtesy ReadyPlayerMe

You can make your own ReadyPlayerMe avatar to import to Liv right here. If you want to download your avatar for some other use, you can make one here and download it at the end of the process as a .GLB file for use in other applications.
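A .GLB file is a glTF 2.0 binary container whose 12-byte header holds the ASCII magic "glTF", a version number, and the total file length. As a minimal sketch, here is how an application could sanity-check an exported avatar file before trying to load it (the function name and sample bytes are our own, not part of any Liv or ReadyPlayerMe API):

```python
# Validate the 12-byte GLB header defined by the glTF 2.0 spec:
# magic "glTF", version 2, and a length field equal to the file size.

import struct

def check_glb_header(data: bytes) -> bool:
    if len(data) < 12:
        return False
    magic, version, length = struct.unpack("<4sII", data[:12])
    return magic == b"glTF" and version == 2 and length == len(data)

# A fabricated 12-byte "file" consisting of only a header, for illustration:
header_only = struct.pack("<4sII", b"glTF", 2, 12)
```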

Streamer Atom Bomb Body also has a detailed walkthrough for configuring Liv with your new avatar here:

The post LIV Now Supports Full-body Avatars from ReadyPlayerMe, Making it Easy to Stream VR Without a Green Screen appeared first on Road to VR.

Source: https://vrarnews.com/details/liv-now-supports-full-body-avatars-from-readyplayerme-making-it-easy-to-stream-vr-without-a-green-screen-600b772745b9dcae3e9a590f?s=rss


AR/VR

Pinterest’s new AR feature lets you try on virtual eyeshadow


Shopping online is the primary way people get most of the items they want or need, but there are some downsides: you can’t try on clothes to make sure they’ll fit right and it’s not easy to determine whether a particular makeup color will look good on you. Pinterest has introduced another feature that addresses the latter problem, one that …

Source: https://vrarnews.com/details/pinterests-new-ar-feature-lets-you-try-on-virtual-eyeshadow-600b6e18c1c62e453a615b12?s=rss


AR/VR

Magic Leap announces partnership with Google Cloud to bring Spatial Computing to enterprise and Google Cloud customers


In Augmented Reality and Mixed Reality News

January 22, 2021 – Magic Leap has today announced that it has entered into a multi-phased, multi-year strategic partnership agreement with Google Cloud to deliver spatial computing solutions to businesses and Google Cloud customers.

Through the partnership, Magic Leap will deliver its enterprise solutions on the Google Cloud Marketplace and explore potential new cloud-based, spatial computing solutions running on Google Cloud.

Magic Leap stated that as enterprises have evolved their operations over the past year to meet the needs of the changing business environment, demand for solutions that support business continuity, agility and borderless collaboration has accelerated exponentially. The partnership is therefore designed to meet those demands.

Beginning in 2021, select Magic Leap solutions that provide tools for businesses will be available in the Google Cloud Marketplace, allowing developers who create solutions on the Magic Leap platform to reach global customers via Google’s marketplace. Magic Leap’s own solutions, such as its Communication, Collaboration and Co-presence platform, will also be made available in the Google Cloud Marketplace.

“As we continue to build momentum for spatial computing in the enterprise market, we are very excited to partner with Google Cloud to deliver unique cloud solutions to their customers and ours,” explained Walter Delph, Chief Business Officer, Magic Leap. “Google Cloud offers best in class infrastructure for leading edge solutions designed to provide efficiencies, continuity and innovation to businesses across the globe.”

In the second phase of the partnership, the two companies will jointly explore opportunities to integrate Google Cloud capabilities in artificial intelligence (AI), machine learning, and analytics into Magic Leap’s Communication, Collaboration and Co-presence platform to support co-presence in any enterprise setting globally. According to Magic Leap, potential use cases involve applying cloud capabilities to help capture data and knowledge from experienced technicians in manufacturing settings, enhancing remote-technical support and training using augmented reality (AR), or providing complex or personalized procedure support in the healthcare industry.

Magic Leap added that it is working on the development of an AR Cloud product that will help to “advance the activation of spatially-aware enterprise solutions across multiple industry verticals.” The ‘Magic Leap Augmented Reality Cloud’ will allow enterprises to build applications that are spatially-aware and collaborative. The company also stated that it will explore the optimization of its AR Cloud by working in collaboration with Google Cloud, leveraging its network, content delivery services, and evolving 5G network edge compute services.

“More than ever, organizations are looking for ways to keep teams connected and support employees with innovative solutions in the cloud,” said Joe Miles, Managing Director of Healthcare and Life Sciences at Google Cloud. “We are excited that Magic Leap has selected Google Cloud to expand the availability of its solutions for productivity in the enterprise. We look forward to working together to help Magic Leap scale its cloud-based solutions globally, and to help customers deploy next-generation collaboration and productivity solutions in the workplace.”

For more information on Magic Leap and its augmented and mixed reality solutions for enterprise, please visit the company’s website.

Image credit: Magic Leap

About the author

Sam Sprigg


Source: https://www.auganix.org/magic-leap-announces-partnership-with-google-cloud-to-spatial-computing-to-enterprise-and-google-cloud-customers/
