
Building the Metaverse, Part Two: The Technology

In my first article on the metaverse, I explored the extraordinary vision and driving forces behind the metaverse, along with some potential use cases. In this second part, I want to outline the technology that will be needed to enable it.

The metaverse will rely on a range of existing and new hardware and software technologies, which will enable the development of new services and new ways for humans and machines to interact with the world. It’s fair to say that the metaverse will touch every aspect of technology development, from the design of personal devices (which will need to deliver both high performance and high energy efficiency) to cloud networks.

Below I’ve noted some of the primary areas for development.

Another way to imagine the metaverse value chain is to think about it chronologically, from the moment data is generated to when it moves into deep storage.

Data collection and augmentation

Sensors, cameras, drones and other devices capture ambient data about the real world. Visual information already occupies a tremendous amount of real estate in the digital world: analysts estimate that more than 1.4 trillion photos are taken every year and that video consumes between 60% and 80% of all IP traffic. Both figures will likely grow with the metaverse.

Sensor fusion, which brings together visual information with spatial, thermal, movement, and other types of data, will become increasingly common as a way to serve up 3D or deeply informed images.
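
To make the idea concrete, here is a minimal sketch of one common form of fusion: back-projecting a depth map into 3D and attaching per-pixel color from a camera frame. The camera intrinsics and array shapes are illustrative assumptions, not values from any particular sensor.

```python
import numpy as np

def fuse_rgb_depth(rgb, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Fuse a color frame with a depth map into a colored 3D point cloud.

    rgb:   (H, W, 3) uint8 camera frame
    depth: (H, W) float32 depth in meters (0 means no reading)
    fx, fy, cx, cy: pinhole camera intrinsics (illustrative values)
    Returns an (N, 6) array of [x, y, z, r, g, b] points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0                               # discard missing readings
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                    # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z, rgb[valid].astype(np.float32)])
```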

The speed and scope of data collection will vary by application. Metaverse-enhanced video conference calls between two familiar parties may require little data after initial machine learning (ML) training. Home repair simulations may require sensor fusion and data extrapolation but place low demands on latency. At the other extreme, the sheer volume and variety of ambient data may require efficient in-sensor event stream processing and a dedicated analytics engine.
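
What “in-sensor” event stream processing might look like, in miniature: emit a frame only when it differs meaningfully from the last one sent, so the sensor ships events rather than a raw feed. The change metric and threshold here are deliberately crude placeholders.

```python
import numpy as np

def event_stream(frames, threshold=12.0):
    """Yield only frames that changed meaningfully since the last emitted one."""
    last = None
    for frame in frames:
        if last is None:
            last = frame
            yield frame                # always emit the first frame
            continue
        # Mean absolute pixel difference as a crude motion measure
        change = np.abs(frame.astype(np.float32) - last.astype(np.float32)).mean()
        if change > threshold:         # suppress near-duplicate frames
            last = frame
            yield frame
```

Filtering at the sensor like this is one way to keep most of the raw feed from ever touching the network.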

Model creation and synthesis

This origin data then gets transformed into digital models and digital twins. While models of individual items may be created at the edge, synthesis into larger digital communities or a large digital universe will invariably happen in the cloud. Tools like avatar creation engines and digital twin creation engines will become the norm.

The term ‘digital twin’ has acquired multiple definitions in a relatively short period of time. In general, however, a digital twin is a model of a person, object or physical phenomenon as it is affected by the forces surrounding it. A ‘live’ digital replica of an engine is a digital twin, as is a reconstruction of a car accident or a future projection of food spoilage. The underlying data can be real or simulated, as can the physical or chemical forces acting upon it.
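
Stripped to its essentials, a digital twin is a state model that ingests real or simulated readings and can be projected forward in time. A toy sketch using the food spoilage example, where the spoilage model and rate constant are invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FoodSpoilageTwin:
    """Toy digital twin: tracks observed temperature and projects spoilage."""
    spoilage: float = 0.0                      # 0.0 = fresh, 1.0 = fully spoiled
    temperature_c: float = 4.0
    history: list = field(default_factory=list)

    def ingest(self, temperature_c: float, hours: float) -> None:
        """Update the twin from a real (or simulated) sensor reading."""
        self.temperature_c = temperature_c
        self.spoilage = min(1.0, self.spoilage + self._rate() * hours)
        self.history.append((temperature_c, self.spoilage))

    def project(self, hours: float) -> float:
        """Forecast spoilage if current conditions persist."""
        return min(1.0, self.spoilage + self._rate() * hours)

    def _rate(self) -> float:
        # Invented rule of thumb: spoilage accelerates above refrigeration temps
        return 0.001 * max(1.0, self.temperature_c / 4.0)

twin = FoodSpoilageTwin()
twin.ingest(temperature_c=8.0, hours=12)       # a warm half-day in transit
print(f"projected spoilage after another day: {twin.project(24):.1%}")
```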

A robust infrastructure for creating, analyzing, transmitting, storing, and continually updating these models will be needed.

Tools and application development

3D content creation tools, augmented reality (AR) and virtual reality (VR) software development kits, and low-code and no-code application development technologies will all be required for the metaverse. One can imagine that much of the innovation will be driven by today’s game platform developers like Unity. The scope of the requirements will also likely inspire many to launch startups that fulfill specialized performance needs.

The metaverse will be unique in that it will be a cloud-native platform from the start rather than migrating to the cloud from in-house datacenters. Expect to see plenty of activity in CI/CD and other cloud-native development areas.

Infrastructure

The metaverse will require step-function improvements in data speeds and data volumes. As stated in the first blog, U.S. household data consumption comes to around 800GB per month, with a sizeable share driven by gaming, Internet use, and video streaming. With the metaverse, this figure could grow more than 20x to 16,800GB by 2025. Carriers and cloud providers will need to work diligently to ensure they can manage this traffic in an economical and environmentally sustainable way. That will mean a focus on specialized cloud processors, the cloud-ification of 5G networks, computational storage, and other chip-level innovations.
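
As a back-of-envelope check on what that figure implies, converting 16,800GB per month into an average sustained rate (using the article’s numbers and assuming a 30-day month) gives roughly 52Mbps running around the clock per household:

```python
GB_PER_MONTH = 16_800                     # projected household consumption
SECONDS_PER_MONTH = 30 * 24 * 3600        # assuming a 30-day month

bits_per_month = GB_PER_MONTH * 8e9       # 1 GB = 8e9 bits (decimal gigabytes)
avg_mbps = bits_per_month / SECONDS_PER_MONTH / 1e6
print(f"average sustained rate: {avg_mbps:.0f} Mbps")  # ~52 Mbps
```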

Additionally, this infrastructure will have to perform superbly, as AR and VR require higher speeds and lower latency. Video gaming, generally the most compute-intensive application in the home, can demand a bit rate of 35Mbps. AR can range from 2 to 50Mbps, while full VR applications can require 25 to 200Mbps.
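
To put those bit rates in context, here is a quick calculation of the monthly volume a single daily session would generate, using the figures above (the two hours per day is my own illustrative assumption):

```python
def monthly_gb(bitrate_mbps: float, hours_per_day: float, days: int = 30) -> float:
    """Monthly data volume in GB for a stream at a given bit rate."""
    bits = bitrate_mbps * 1e6 * hours_per_day * 3600 * days
    return bits / 8e9                          # back to decimal gigabytes

print(f"{monthly_gb(200, 2):,.0f} GB")         # high-end VR, 2h/day: ~5,400 GB/month
print(f"{monthly_gb(35, 2):,.0f} GB")          # video gaming, 2h/day: ~945 GB/month
```

At the top end, a single user’s VR habit alone would approach a third of that projected 16,800GB household total.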

Latency will be equally challenging. According to VC Matthew Ball, most viewers won’t notice if voice races ahead of images by 45 milliseconds in video calls. City-to-city latency averages around 35ms. In AAA games, 105ms of latency can reduce participation by 6%. For the metaverse, many applications will need latency in the 2 to 20 millisecond range. On-device processing through more powerful CPUs and GPUs inside TVs and set-top boxes, caching at edge servers, and continued improvement in wired and wireless networks will all be needed.
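
Propagation delay alone shows why a 2 to 20 millisecond budget pushes computing toward the edge: light in optical fiber covers roughly 200km per millisecond, so distance eats the budget before any processing happens. The processing and render figures in this sketch are illustrative placeholders, not measurements:

```python
FIBER_KM_PER_MS = 200                     # light in optical fiber: ~200,000 km/s

def round_trip_ms(distance_km: float, processing_ms: float = 4.0,
                  render_ms: float = 8.0) -> float:
    """Crude round-trip latency: propagation both ways plus fixed overheads."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    return propagation + processing_ms + render_ms

print(round_trip_ms(1200))                # distant cloud region: 24.0 ms, over budget
print(round_trip_ms(50))                  # nearby edge server: 12.5 ms, within range
```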

Infrastructure providers will also have to upgrade their processes and protocols for security and privacy. Trustworthy AI principles will also need to be adopted to gain the trust of consumers and businesses.

Applications and content

While Meta, Microsoft and other U.S. companies are ramping up their virtual offerings, expect to see much of the activity centered in China. Newzoo estimates that nearly 40% of Chinese gaming companies, including all of the big names like Baidu, Tencent, NetEase, and Alibaba, have invested in metaverse-related technologies and services. The popularity of TikTok and Genshin Impact has also shown the global appeal of digital entertainment concepts coming out of studios in China.

With virtual events already taking place in Roblox and Fortnite, many of the parameters of moving to the metaverse are already being devised. Success in content, however, can be fickle (remember the efforts around 3D films and TV), so one should expect to see a number of public flops and surprise successes. Of all the sectors, content is often the most unpredictable. Ball also notes that payment systems and contractual relationships will have to be reorganized in a virtual world so that content developers can reap more of the rewards of their work and consumers can move more easily between worlds.

Personal devices and user interfaces

While headsets and haptic gloves have yet to catch on with the wider public, lighter, smaller wearable devices like smartglasses and smart contact lenses will be coming to market. Meanwhile, headset enthusiasts will see 8K resolution, wide-angle viewing, and state-of-the-art sound.

To reduce the size, cost, power consumption, and heat dissipation of these personal devices, most will interact with smartphones and TVs in an edge/cloud manner. However, the processing and memory requirements of personal interfaces will be stringent and require plenty of innovation. TVs, in particular, will need to be enhanced with multicore CPUs and GPUs.
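
One simplified way to picture that edge/cloud split is a per-task placement decision on the device, trading the latency budget against battery state. Everything in this sketch, from the thresholds to the fields, is a hypothetical illustration rather than how any shipping device works:

```python
from dataclasses import dataclass

@dataclass
class Task:
    latency_budget_ms: float    # how stale a result can be before users notice
    compute_cost: float         # relative cost of running the task on-device (0-1)

def place_task(task: Task, battery_pct: float, edge_rtt_ms: float) -> str:
    """Illustrative heuristic for where a wearable should run a task."""
    if edge_rtt_ms < task.latency_budget_ms:
        return "edge server"               # offload: the network is fast enough
    if battery_pct < 20 and task.compute_cost > 0.5:
        return "paired phone"              # spare the wearable's battery
    return "on device"                     # only option that meets the budget

print(place_task(Task(latency_budget_ms=10, compute_cost=0.8),
                 battery_pct=65, edge_rtt_ms=12))   # -> "on device"
```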

Beyond traditional consumer devices, an emerging and potentially very disruptive area is brain and/or nerve connectivity, which could allow users to interact with the metaverse directly through their own synapses. Brain/computer interfaces will require further technology innovation. In the longer term, smaller wearables or electronic tattoos could effectively replace the need for smartglasses.

Direct bodily connection, of course, comes with a host of challenges: security, access protocols, privacy, and safety, to name a few. We will even need bodily vocabularies for navigating these systems. Nonetheless, the benefits of greater access for all, fewer hardware requirements, and potential access at the flick of an eyelash make innovation and continued work in this space a given.

The metaverse will almost certainly create a unique space for continued technology innovation that has the potential to fundamentally transform how society interacts with existing and new devices.
