AR/VR

Oculus CTO Wants Android Apps on Quest, But is “not winning” the Debate Within Facebook

Oculus CTO John Carmack has said publicly that he’d love to open up Oculus Quest to Android apps to boost the headset’s usefulness, but admits he’s “not winning” the debate internally at Facebook.

Although the Quest home environment looks nothing like the home screen of your Android phone, the headset actually runs the Android operating system underneath it all. Allowing users to install and run Android apps on the headset—even if just in a ‘flat’ mode on a virtual screen controlled by a laser pointer—could drastically boost the value of the headset by bringing all manner of video players, web browsers, productivity tools, utilities, and even flat games to the device.

Android apps running on Quest is apparently something Oculus’ own CTO has been arguing for internally.

Legendary developer John Carmack—who for several years held the role of Oculus CTO but now maintains a less formal “consulting CTO” arrangement—said during his Facebook Connect keynote in September that he doesn’t believe Oculus will be able to convince a meaningful portion of Android developers to rewrite their applications specifically for the headset. Instead, he says, the company needs to find a way to bring existing Android applications to Quest.

[…] it also works cooperatively with sort of our Android applications like Fandango or the other things there, and that’s still one of the things that absolutely kills me, where I think we need more Android applications.

We do not have a sorted out strategy—I’ve got a long spiel about this that I’m not gonna have time to get to—but we have all these existence-proofs and examples of… Microsoft tried really really hard to move all apps to a brand-new system [UWP and/or Windows Phone] and it just… doesn’t work out… I don’t think it’s gonna work out for us.

I think that we need to support our Android apps [on the headset] in a broader sense. We have progressive web apps as the backstop for everything, but on the mobile platforms the progressive web apps […] generally lose out [in terms of performance] to native applications, and we care more about performance in VR than in mobile systems, so I think we need a solution there, and we haven’t sorted it out.

If Oculus allowed existing Android apps onto Quest, it could radically improve the usefulness of the headset by allowing users access to a much wider range of apps. They wouldn’t be ‘native’ to VR of course, but it’s easy to understand how much more value users would see from the headset if they could load up, say, the Disney Plus app on a big virtual screen or run their favorite web browser on the headset instead of being stuck with the default. And wouldn’t it make sense to be able to run the Facebook app on Quest?

Because Quest already runs Android, getting Android apps up and running on the headset would be relatively straightforward from a technical standpoint, and there are a few ways Facebook could approach it.

For one, it could simply allow access to the Google Play store on the headset, allowing users to download apps they already own through the store, and then project those apps onto a flat screen inside the headset.

But it seems highly unlikely that Facebook would take this approach, as the company has clearly followed the ‘walled garden’ playbook of making the Oculus store the only app store allowed on the headset. Instead of the Google Play store, Facebook could begin accepting ‘flat’ Android apps into the Oculus store and distributing them that way.

The company could also skip its own store and choose to allow users to sideload any Android APKs they have access to, leaving the feature mostly in the hands of power-users.
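Sideloading already works this way for VR apps on Quest today: with developer mode enabled, an APK can be installed over USB with `adb`. A minimal sketch of what that looks like (the helper name and the sample APK filename are illustrative, not an official tool):

```python
import subprocess

def adb_install(apk_path, serial=None, dry_run=True):
    """Build the adb command that sideloads an APK onto a connected headset.

    Requires developer mode on the device and adb on PATH. With
    dry_run=True the command is only returned, not executed.
    """
    cmd = ["adb"]
    if serial:                           # target a specific device if several are attached
        cmd += ["-s", serial]
    cmd += ["install", "-r", apk_path]   # -r: replace an existing install
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd

# Example: the command adb would run for a hypothetical flat Android app
print(adb_install("flat-video-app.apk"))
```

The same mechanism would presumably underpin any official sideloading story; the difference is only whether Facebook surfaces it in the UI or leaves it to power-users and the command line.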

Unfortunately, we may not see any of these avenues pursued, despite Carmack’s insistence. In a recent tweet on the topic he noted, “I continue to argue for [Android apps on Quest], but so far, I’m not winning.”

Source: https://www.roadtovr.com/oculus-quest-android-apps-carmack-debate/

AR/VR

US Army using Augmented Reality overlays in its research for the detection of roadside explosive hazards


In Augmented Reality News 

January 23, 2021 – The US Army Combat Capabilities Development Command (DEVCOM), Army Research Laboratory (ARL), has recently announced that it is employing augmented reality (AR) overlays in its research into the detection of roadside explosive hazards, such as improvised explosive devices (IEDs), unexploded ordnance and landmines.

Route reconnaissance in support of convoy operations remains a critical function to keep Soldiers safe from such hazards, which continue to threaten operations abroad and remain an evolving and problematic adversarial tactic. To combat this problem, ARL and other research collaborators were funded by the Defense Threat Reduction Agency, via the ‘Blood Hound Gang Program’, which focuses on a system-of-systems approach to standoff explosive hazard detection.

Kelly Sherbondy, Program Manager at the lab, said “Logically, a system-of-systems approach to standoff explosive hazard detection research is warranted going forward,” adding, “Our collaborative methodology affords implementation of state-of-the-art technology and approaches while rapidly progressing the program with seasoned subject matter experts to meet or exceed military requirements and transition points.”

The program has seven external collaborators from across the country, which include the US Military Academy, The University of Delaware Video/Image Modeling and Synthesis Laboratory, Ideal Innovations Inc., Alion Science and Technology, The Citadel, IMSAR and AUGMNTR.

In Phase I of the program, researchers spent 15 months evaluating mostly high-technology readiness level (TRL) standoff detection technologies against a variety of explosive hazard emplacements. In addition, a lower-TRL standoff detection sensor, which was focused on the detection of explosive hazard triggering devices, was developed and assessed. According to the Army, the Phase I assessment included probability of detection, false alarm rate and other important information that will ultimately lead to a down-selection of sensors based on best performance for Phase II of the program.

Researchers use various sensors on Unmanned Aerial Systems equipped with high-definition infrared cameras and navigation to enable standoff detection of explosive hazards using machine learning techniques.

The sensors evaluated during Phase I included an airborne synthetic aperture radar, ground vehicular and small unmanned aerial vehicle LIDAR, high-definition electro-optical cameras, long-wave infrared cameras and a non-linear junction detection radar. Researchers carried out a field test in representative real-world terrain over a 7-kilometer test track, with a total of 625 emplacements comprising a variety of explosive hazards, simulated clutter and calibration targets. They collected data before and after emplacement to simulate a real-world change between sensor passes.

Terabytes of data were collected across the sensor sets, as needed to adequately train artificial intelligence/machine learning (AI/ML) algorithms. The algorithms subsequently performed autonomous automatic target detection for each sensor. The Army stated that this sensor data is pixel-aligned via geo-referencing, and the AI/ML techniques can be applied to some or all of the combined sensor data for a specific area. Furthermore, the detection algorithms are able to provide ‘confidence levels’ for each suspected target, which are displayed to a user as an augmented reality overlay. The detection algorithms were executed with various sensor permutations so that performance results could be aggregated to determine the best course of action moving into Phase II.
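The Army doesn't describe the fusion math, but the general idea of combining per-sensor detections at geo-referenced locations into a single confidence score can be sketched roughly as follows. Everything here is an illustrative assumption, not the program's actual algorithm: the independence-based fusion rule, the grid-cell keys, and all names are hypothetical.

```python
from collections import defaultdict

def fuse_detections(per_sensor_hits, min_confidence=0.5):
    """Combine geo-referenced detections from several sensors.

    per_sensor_hits: dict mapping sensor name -> list of (grid_cell, confidence)
    Returns {grid_cell: fused_confidence}, keeping only cells at or above
    the threshold. Fusion rule (illustrative): probability that at least one
    sensor is correct, assuming independent sensors: 1 - prod(1 - c_i).
    """
    cells = defaultdict(list)
    for sensor, hits in per_sensor_hits.items():
        for cell, conf in hits:
            cells[cell].append(conf)

    fused = {}
    for cell, confs in cells.items():
        miss = 1.0
        for c in confs:
            miss *= (1.0 - c)          # probability every sensor missed
        score = 1.0 - miss             # probability at least one hit is real
        if score >= min_confidence:
            fused[cell] = score
    return fused
```

In such a scheme, two sensors each reporting a moderate confidence at the same location reinforce each other into a higher fused score, which is the kind of per-target confidence a user could then see as an AR overlay.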

“The accomplishments of these efforts are significant to ensuring the safety of the warfighter in the current operation environment,” said Lt. Col. Mike Fuller, US Air Force Explosive Ordnance Disposal and DTRA Program Manager.

The Army noted that future research into the technology will enable real-time automatic target detection displayed with an augmented reality engine. The three-year effort will ultimately culminate with demonstrations at multiple testing facilities to show the technology’s robustness over varying terrain.

“We have side-by-side comparisons of multiple modalities against a wide variety of realistic, relevant target threats, plus an evaluation of the fusion of those sensors’ output to determine the most effective way to maximize probability of detection and minimize false alarms,” Fuller said. “We hope that the Army and the Joint community will both benefit from the data gathered and lessons learned by all involved.”

Image credit: US Army

About the author

Sam Sprigg

Sam is the Founder and Managing Editor of Auganix. With a background in research and report writing, he covers news articles on both the AR and VR industries. He also has an interest in human augmentation technology as a whole, and does not just limit his learning specifically to the visual experience side of things.

Source: https://www.auganix.org/us-army-using-augmented-reality-overlays-in-its-research-for-the-detection-of-roadside-explosive-hazards/


AR/VR

LIV Now Supports Full-body Avatars from ReadyPlayerMe, Making it Easy to Stream VR Without a Green Screen


Many VR streamers use complicated mixed reality setups to show themselves from a third-person perspective inside the virtual world. LIV, a leading tool which makes this possible, now supports free, customizable, full-body avatars from ReadyPlayerMe, making it possible to stream your avatar inside of VR without the need for a green screen.

In addition to true mixed reality streaming, Liv has supported streaming with avatars for some time. However, actually finding a unique avatar for yourself was no simple task. Now, Liv has partnered with avatar maker ReadyPlayerMe to make it as simple as can be.

ReadyPlayerMe allows you to build a free full-body avatar—optionally based on a photo of yourself—in mere minutes. You can use the avatar as the character in select Liv-supported VR games, allowing stream viewers to see your movements in third-person.

Here’s an example of a ReadyPlayerMe avatar in Pistol Whip streamed via Liv:

What Sadie said! They have improved on them, they now are full body and support finger tracking and full body tracking! It’s pretty smooth! pic.twitter.com/J8rY5UwWOo

— AtomBombBody (@AtomBombBody) January 17, 2021

Avatars from ReadyPlayerMe are moderately customizable, and it’s easy enough to get something you’re happy with relatively quickly, though we hope to see more customization options in the future (like height, build, and more control over outfits).

Image courtesy ReadyPlayerMe

You can make your own ReadyPlayerMe avatar to import to Liv right here. If you want to download your avatar for some other use, you can make one here and download it at the end of the process as a .GLB file for use in other applications.
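If you do grab the .GLB export, it helps to know that GLB is the binary container format for glTF 2.0: the file opens with a fixed 12-byte little-endian header (the ASCII magic `glTF`, a uint32 container version, and the total file length), which makes a quick validity check easy before handing the file to another application. A small stdlib-only Python sketch:

```python
import struct

def read_glb_header(data):
    """Parse the 12-byte GLB header defined by the glTF 2.0 binary spec.

    Layout (little-endian): 4-byte magic b'glTF', uint32 version, uint32 length.
    Returns (version, total_length) or raises ValueError for non-GLB data.
    """
    if len(data) < 12:
        raise ValueError("file too short to be a GLB")
    magic, version, length = struct.unpack_from("<4sII", data, 0)
    if magic != b"glTF":
        raise ValueError("missing glTF magic; not a GLB file")
    return version, length

# Usage (path is illustrative):
#   with open("avatar.glb", "rb") as f:
#       print(read_glb_header(f.read(12)))
```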

Streamer Atom Bomb Body also has a detailed walkthrough for configuring Liv with your new avatar here:

The post LIV Now Supports Full-body Avatars from ReadyPlayerMe, Making it Easy to Stream VR Without a Green Screen appeared first on Road to VR.

Source: https://vrarnews.com/details/liv-now-supports-full-body-avatars-from-readyplayerme-making-it-easy-to-stream-vr-without-a-green-screen-600b772745b9dcae3e9a590f?s=rss


AR/VR

Pinterest’s new AR feature lets you try on virtual eyeshadow


Shopping online is the primary way people get most of the items they want or need, but there are some downsides: you can’t try on clothes to make sure they’ll fit right, and it’s not easy to determine whether a particular makeup color will look good on you. Pinterest has introduced another feature that addresses the latter problem, one that …

Source: https://vrarnews.com/details/pinterests-new-ar-feature-lets-you-try-on-virtual-eyeshadow-600b6e18c1c62e453a615b12?s=rss


AR/VR

Magic Leap announces partnership with Google Cloud to bring Spatial Computing to enterprise and Google Cloud customers


In Augmented Reality and Mixed Reality News

January 22, 2021 – Magic Leap has today announced that it has entered into a multi-phased, multi-year strategic partnership agreement with Google Cloud to deliver spatial computing solutions to businesses and Google Cloud customers.

Through the partnership, Magic Leap will deliver its enterprise solutions on the Google Cloud Marketplace and explore potential new cloud-based, spatial computing solutions running on Google Cloud.

Magic Leap stated that as enterprises have evolved their operations over the past year to meet the needs of the changing business environment, demand for solutions that support business continuity, agility and borderless collaboration has accelerated exponentially. The partnership is therefore designed to meet those demands.

Beginning in 2021, select Magic Leap solutions that provide tools for businesses will be available in the Google Cloud Marketplace, allowing developers who create solutions on the Magic Leap platform to reach global customers via Google’s marketplace. Magic Leap’s own solutions, such as its Communication, Collaboration and Co-presence platform, will also be made available in the Google Cloud Marketplace.

“As we continue to build momentum for spatial computing in the enterprise market, we are very excited to partner with Google Cloud to deliver unique cloud solutions to their customers and ours,” explained Walter Delph, Chief Business Officer, Magic Leap. “Google Cloud offers best in class infrastructure for leading edge solutions designed to provide efficiencies, continuity and innovation to businesses across the globe.”

In the second phase of the partnership, the two companies will jointly explore opportunities to integrate Google Cloud capabilities in artificial intelligence (AI), machine learning, and analytics into Magic Leap’s Communication, Collaboration and Co-presence platform to support co-presence in any enterprise setting globally. According to Magic Leap, potential use cases involve applying cloud capabilities to help capture data and knowledge from experienced technicians in manufacturing settings, enhancing remote-technical support and training using augmented reality (AR), or providing complex or personalized procedure support in the healthcare industry.

Magic Leap added that it is working on the development of an AR Cloud product that will help to “advance the activation of spatially-aware enterprise solutions across multiple industry verticals.” The ‘Magic Leap Augmented Reality Cloud’ will allow enterprises to build applications that are spatially-aware and collaborative. The company also stated that it will explore the optimization of its AR Cloud by working in collaboration with Google Cloud, leveraging its network, content delivery services, and evolving 5G network edge compute services.

“More than ever, organizations are looking for ways to keep teams connected and support employees with innovative solutions in the cloud,” said Joe Miles, Managing Director of Healthcare and Life Sciences at Google Cloud. “We are excited that Magic Leap has selected Google Cloud to expand the availability of its solutions for productivity in the enterprise. We look forward to working together to help Magic Leap scale its cloud-based solutions globally, and to help customers deploy next-generation collaboration and productivity solutions in the workplace.”

For more information on Magic Leap and its augmented and mixed reality solutions for enterprise, please visit the company’s website.

Image credit: Magic Leap


Source: https://www.auganix.org/magic-leap-announces-partnership-with-google-cloud-to-spatial-computing-to-enterprise-and-google-cloud-customers/
