Membership inference attacks detect data used to train machine learning models


One of the wonders of machine learning is that it turns any kind of data into mathematical equations. Once you train a machine learning model on training examples—whether images, audio, raw text, or tabular data—what you get is a set of numerical parameters. In most cases, the model no longer needs the training dataset and uses the tuned parameters to map new, unseen examples to categories or value predictions.

You can then discard the training data and publish the model on GitHub or run it on your own servers without worrying about storing or distributing sensitive information contained in the training dataset.

But a type of attack called “membership inference” makes it possible to detect the data used to train a machine learning model. In many cases, the attackers can stage membership inference attacks without having access to the machine learning model’s parameters and just by observing its output. Membership inference can cause security and privacy concerns in cases where the target model has been trained on sensitive information.

From data to parameters

Above: Deep neural networks use multiple layers of parameters to map input data to outputs

Each machine learning model has a set of “learned parameters,” whose number and relations vary depending on the type of algorithm and architecture used. For instance, simple regression algorithms use a series of parameters that directly map input features to the model’s output. Neural networks, on the other hand, use complex layers of parameters that process inputs and pass them on to one another before reaching the final layer.

But regardless of the type of algorithm you choose, all machine learning models go through a similar process during training. They start with random parameter values and gradually tune them to the training data. Supervised machine learning algorithms, such as those used in classifying images or detecting spam, tune their parameters to map inputs to expected outcomes.

For example, say you’re training a deep learning model to classify images into five different categories. The model might be composed of a set of convolutional layers that extract the visual features of the image and a set of dense layers that translate the features of each image into confidence scores for each class.

The model’s output will be a set of values that represent the probability that an image belongs to each of the classes. You can assume that the image belongs to the class with the highest probability. For instance, an output might look like this:

Cat: 0.90
Dog: 0.05
Fish: 0.01
Tree: 0.01
Boat: 0.01
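
As an illustration, here is a minimal sketch in NumPy (not from the original article; the logit values are hypothetical) of how a softmax layer turns a model's raw scores into a probability vector like the one above:

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

classes = ["Cat", "Dog", "Fish", "Tree", "Boat"]
logits = np.array([4.5, 1.6, 0.0, 0.0, 0.0])  # hypothetical raw scores
for name, p in zip(classes, softmax(logits)):
    print(f"{name}: {p:.2f}")  # roughly matches the scores above
```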

Before training, the model will provide incorrect outputs because its parameters have random values. You train it by providing it with a collection of images along with their corresponding classes. During training, the model gradually tunes the parameters so that its output confidence score becomes as close as possible to the labels of the training images.

Basically, the model encodes the visual features of each type of image into its parameters.

Membership inference attacks

A good machine learning model is one that not only classifies its training data correctly but also generalizes to examples it hasn’t seen before. This goal can be achieved with the right architecture and enough training data.

But in general, machine learning models tend to perform better on their training data. Going back to the image classifier above, if you mix your training data with a batch of new images and run them through your neural network, you’ll see that the confidence scores it provides on the training examples are higher than those on the images it hasn’t seen before.
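
A quick way to see this gap, sketched below under the assumption of a Keras-style model with a `predict` method (the `model`, `x_train`, and `x_new` names are hypothetical stand-ins):

```python
def top_confidence(model, x):
    """Return the model's highest confidence score for each input."""
    return model.predict(x).max(axis=1)

# Hypothetical arrays: x_train was used in training, x_new was not
print("avg confidence on training data:", top_confidence(model, x_train).mean())
print("avg confidence on unseen data:  ", top_confidence(model, x_new).mean())
```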

Above: Machine learning models perform better on training examples than on unseen examples

Membership inference attacks take advantage of this property to discover or reconstruct the examples used to train the machine learning model. This could have privacy ramifications for the people whose data records were used to train the model.

In membership inference attacks, the adversary does not necessarily need to have knowledge of the inner parameters of the target machine learning model. Instead, the attacker only needs to know the model’s algorithm and architecture (e.g., SVM, neural network, etc.) or the service used to create it.

With the growth of machine learning as a service (MaaS) offerings from large tech companies such as Google and Amazon, many developers are inclined to use them instead of building their models from scratch. The advantage of these services is that they abstract away many of the complexities and requirements of machine learning, such as choosing the right architecture, tuning hyperparameters (learning rate, batch size, number of epochs, regularization, loss function, etc.), and setting up the computational infrastructure needed to optimize the training process. The developer only needs to set up a new model and provide it with training data. The service does the rest.

The tradeoff is that if the attackers know which service the victim used, they can use the same service to create a membership inference attack model.

In fact, at the 2017 IEEE Symposium on Security and Privacy, researchers at Cornell University proposed a membership inference attack technique that worked on all major cloud-based machine learning services.

In this technique, an attacker creates random records for a target machine learning model served on a cloud service. The attacker feeds each record into the model. Based on the confidence score the model returns, the attacker tunes the record’s features and runs it through the model again. The process continues until the model reaches a very high confidence score. At this point, the record is identical or very similar to one of the examples used to train the model.
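
A minimal sketch of that search loop, assuming a `query` function that sends a record to the target model and returns its confidence vector (all names here are hypothetical):

```python
import numpy as np

def synthesize_record(query, n_features, target_class,
                      max_iters=10_000, threshold=0.95, step=0.05):
    """Perturb a random record until the target model reports a very
    high confidence for target_class, as described above."""
    rng = np.random.default_rng()
    record = rng.random(n_features)
    best = query(record)[target_class]
    for _ in range(max_iters):
        candidate = record + rng.normal(scale=step, size=n_features)
        score = query(candidate)[target_class]
        if score > best:               # keep changes that raise confidence
            record, best = candidate, score
        if best >= threshold:          # likely close to a training example
            break
    return record, best
```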

Above: Membership inference attacks observe the behavior of a target machine learning model and predict examples that were used to train it.

After gathering enough high-confidence records, the attacker uses them to train a set of “shadow models” that mimic the behavior of the target model. Because the attacker controls the shadow models’ training data, they know exactly which records were members of each training set, and can use the shadow models’ outputs to train a final attack model. This model can then predict whether a data record was included in the training dataset of the target machine learning model.
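
A rough sketch of that last step with scikit-learn; the `shadow_models` and `shadow_splits` variables are assumed to have been prepared beforehand, and the random forest is just one plausible choice of attack model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

attack_x, attack_y = [], []
for shadow, (x_in, x_out) in zip(shadow_models, shadow_splits):
    # x_in was in this shadow model's training set, x_out was not
    for x, member in ((x_in, 1), (x_out, 0)):
        attack_x.append(shadow.predict_proba(x))   # confidence vectors
        attack_y.append(np.full(len(x), member))

# The attack model learns to spot the telltale confidence patterns
attack_model = RandomForestClassifier()
attack_model.fit(np.vstack(attack_x), np.concatenate(attack_y))
```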

The researchers found that this attack was successful on many different machine learning services and architectures. Their findings show that a well-trained attack model can also tell the difference between training dataset members and non-members that receive a high confidence score from the target machine learning model.

The limits of membership inference

Membership inference attacks are not successful on all kinds of machine learning tasks. To create an efficient attack model, the adversary must be able to explore the feature space. For example, if a machine learning model is performing complicated image classification (multiple classes) on high-resolution photos, the costs of creating training examples for the membership inference attack will be prohibitive.

But in the case of models that work on tabular data such as financial and health information, a well-designed attack might be able to extract sensitive information, such as associations between patients and diseases or the financial records of targeted individuals.

Above: Overfitted models perform well on training examples but poorly on unseen examples.

Membership inference is also highly associated with “overfitting,” an artifact of poor machine learning design and training. An overfitted model performs well on its training examples but poorly on novel data. Two common causes of overfitting are having too few training examples and running the training process for too many epochs.

The more overfitted a machine learning model is, the easier it will be for an adversary to stage membership inference attacks against it. Therefore, a machine learning model that generalizes well on unseen examples is also more secure against membership inference.
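
One rough self-audit is to measure the gap between training and test accuracy, sketched here for a hypothetical Keras-style model compiled with an accuracy metric:

```python
_, train_acc = model.evaluate(x_train, y_train, verbose=0)
_, test_acc = model.evaluate(x_test, y_test, verbose=0)

# A large gap signals overfitting, and with it, more membership leakage
print(f"generalization gap: {train_acc - test_acc:.3f}")
```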

This story originally appeared on Bdtechtalks.com. Copyright 2021

Source: https://venturebeat.com/2021/04/28/membership-inference-attacks-detect-data-used-to-train-machine-learning-models/


Understanding dimensionality reduction in machine learning models


Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature you add to your problem adds to its complexity, making it harder to solve with machine learning algorithms. To counter this, data scientists use dimensionality reduction, a set of techniques that remove redundant and irrelevant features from their machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

The curse of dimensionality

Machine learning models map features to outcomes. For instance, say you want to create a model that predicts the amount of rainfall in one month. You have a dataset of information collected from different cities across separate months. The data points include temperature, humidity, city population, traffic, number of concerts held in the city, wind speed, wind direction, air pressure, number of bus tickets purchased, and the amount of rainfall. Obviously, not all this information is relevant to rainfall prediction.

Some of the features might have nothing to do with the target variable. Evidently, population and number of bus tickets purchased do not affect rainfall. Other features might be correlated to the target variable, but not have a causal relation to it. For instance, the number of outdoor concerts might be correlated to the volume of rainfall, but it is not a good predictor for rain. In other cases, such as carbon emission, there might be a link between the feature and the target variable, but the effect will be negligible.

In this example, it is evident which features are valuable and which are useless. In other problems, the excessive features might not be obvious and may need further data analysis.

But why bother to remove the extra dimensions? When you have too many features, you’ll also need a more complex model. A more complex model means you’ll need a lot more training data and more compute power to train your model to an acceptable level.

And since machine learning has no understanding of causality, models try to map any feature included in their dataset to the target variable, even if there’s no causal relation. This can lead to models that are imprecise and erroneous.

On the other hand, reducing the number of features can make your machine learning model simpler, more efficient, and less data-hungry.

The problems caused by too many features are often referred to as the “curse of dimensionality,” and they’re not limited to tabular data. Consider a machine learning model that classifies images. If your dataset is composed of 100×100-pixel images, then your problem space has 10,000 features, one per pixel. However, even in image classification problems, some of the features are excessive and can be removed.

Dimensionality reduction identifies and removes the features that are hurting the machine learning model’s performance or aren’t contributing to its accuracy. There are several dimensionality reduction techniques, each of which is useful for certain situations.

Feature selection


A basic and very efficient dimensionality reduction method is to identify and select a subset of the features that are most relevant to the target variable. This technique is called “feature selection.” Feature selection is especially effective when you’re dealing with tabular data in which each column represents a specific kind of information.

When doing feature selection, data scientists aim to do two things: keep features that are highly correlated with the target variable and keep features that contribute the most to the dataset’s variance. Libraries such as Python’s Scikit-learn have plenty of good functions to analyze, visualize, and select the right features for machine learning models.

For instance, a data scientist can use scatter plots and heatmaps to visualize the covariance of different features. If two features are highly correlated with each other, they will have a similar effect on the target variable, and including both in the machine learning model is unnecessary. Therefore, you can remove one of them without hurting the model’s performance.

Above: Heatmaps illustrate the covariance between different features. They are a good guide to finding and culling redundant features.
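
A correlation heatmap like the one above can be produced in a few lines with pandas and seaborn; the `rainfall.csv` file here stands in for the hypothetical dataset from the example:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("rainfall.csv")  # hypothetical rainfall dataset

# Pairwise correlations between the features
sns.heatmap(df.corr(), annot=True, cmap="coolwarm")
plt.show()
```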

The same tools can help visualize the correlations between the features and the target variable. This helps remove variables that do not affect the target. For instance, you might find out that out of 25 features in your dataset, seven of them account for 95 percent of the effect on the target variable. This will enable you to shave off 18 features and make your machine learning model a lot simpler without suffering a significant penalty to your model’s accuracy.
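
Scikit-learn can automate this kind of selection. A minimal sketch, keeping the seven hypothetical features from the example above (`x` and `y` are assumed to be the feature matrix and the rainfall target):

```python
from sklearn.feature_selection import SelectKBest, f_regression

# Score each feature against the continuous target and keep the top 7
selector = SelectKBest(score_func=f_regression, k=7)
x_reduced = selector.fit_transform(x, y)
print(selector.get_support(indices=True))  # indices of the kept features
```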

Projection techniques

Sometimes, you don’t have the option to remove individual features. But this doesn’t mean that you can’t simplify your machine learning model. Projection techniques, also known as “feature extraction,” simplify a model by compressing several features into a lower-dimensional space.

A common example used to represent projection techniques is the “swiss roll” (pictured below), a set of data points that swirl around a focal point in three dimensions. This dataset has three features. The value of each point (the target variable) is measured based on how close it is along the convoluted path to the center of the swiss roll. In the picture below, red points are closer to the center and the yellow points are farther along the roll.

Above: The swiss roll dataset

In its current state, creating a machine learning model that maps the features of the swiss roll points to their value is a difficult task and would require a complex model with many parameters. But with the help of dimensionality reduction techniques, the points can be projected to a lower-dimension space that can be learned with a simple machine learning model.

There are various projection techniques. In the case of the above example, we used “locally-linear embedding,” an algorithm that reduces the dimension of the problem space while preserving the key elements that separate the values of data points. When our data is processed with LLE, the result looks like the following image, which is like an unrolled version of the swiss roll. As you can see, points of each color remain together. In fact, this problem can be simplified down to a single feature and modeled with linear regression, the simplest machine learning algorithm.

Above: The swiss roll projected onto two dimensions with locally-linear embedding
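
A minimal sketch of this unrolling with scikit-learn, which ships both a swiss roll generator and the LLE algorithm (the neighbor count is an illustrative choice):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

x, color = make_swiss_roll(n_samples=1500, noise=0.05)

# Project the 3D swiss roll onto 2 dimensions while preserving
# local neighborhood structure
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12)
x_unrolled = lle.fit_transform(x)
```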

While this example is hypothetical, you’ll often face problems that can be simplified if you project the features to a lower-dimensional space. For instance, “principal component analysis” (PCA), a popular dimensionality reduction algorithm, has found many useful applications to simplify machine learning problems.

In the excellent book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, data scientist Aurélien Géron shows how you can use PCA to reduce the MNIST dataset from 784 features (28×28 pixels) to 150 features while preserving 95 percent of the variance. This level of dimensionality reduction has a huge impact on the costs of training and running artificial neural networks.

Above: The MNIST dataset reduced with principal component analysis
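
A minimal sketch of that reduction; passing a fraction to `n_components` tells scikit-learn’s PCA to keep just enough components to preserve that share of the variance:

```python
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

x, _ = fetch_openml("mnist_784", return_X_y=True, as_frame=False)

# Keep enough principal components to preserve 95% of the variance
pca = PCA(n_components=0.95)
x_reduced = pca.fit_transform(x)
print(x_reduced.shape[1])  # roughly 150 components
```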

There are a few caveats to consider about projection techniques. Once you develop a projection, you must transform new data points into the lower-dimensional space before running them through your machine learning model. However, the costs of this preprocessing step are small compared to the gains of having a lighter model. A second consideration is that transformed data points are not directly representative of their original features, and transforming them back to the original space can be tricky and in some cases impossible. This might make it difficult to interpret the inferences made by your model.
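
Continuing the hypothetical PCA sketch above, new samples must pass through the same fitted projection before inference, and going back is only approximate:

```python
# New data must use the projection fitted on the training data;
# `model` and `x_new` are hypothetical stand-ins
x_new_reduced = pca.transform(x_new)
predictions = model.predict(x_new_reduced)

# inverse_transform reconstructs the original space from the kept
# components, but the discarded 5% of the variance is lost for good
x_approx = pca.inverse_transform(x_new_reduced)
```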

Dimensionality reduction in the machine learning toolbox

Having too many features will make your model inefficient. But removing too many features will not help either. Dimensionality reduction is one among many tools data scientists can use to make better machine learning models. And as with every tool, it must be used with caution and care.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Source: https://venturebeat.com/2021/05/16/understanding-dimensionality-reduction-in-machine-learning-models/



Bitcoin Proponents Against Elon Musk Following Heated Dogecoin vs Bitcoin Tweets


Last week, Elon Musk and Tesla shocked the entire crypto industry following an announcement that the electric car company will no longer accept bitcoin payments for “environmental reasons.”

A Hard Pill For Bitcoin Maximalists

Giving its reasons, Tesla argued that Bitcoin mining requires massive amounts of energy, much of it generated from fossil fuels, especially coal, and as such causes environmental pollution.

The announcement caused a market dip that saw over $4 billion of both short and long positions liquidated, as the entire market capitalization lost almost $400 billion in a day.

For Bitcoin maximalists and proponents, Tesla’s decision was a hard pill to swallow, and that was evident in their responses to the electric car company and its CEO.

While the likes of Max Keiser lambasted Musk for his company’s move, noting that it was due to political pressure, others like popular YouTuber Chris Dunn were seen canceling their Tesla Cybertruck orders.



Adding more fuel to the fire, Musk also responded to a long Twitter thread by Peter McCormack, implying that Bitcoin is not actually decentralized.

Musk Working With Dogecoin Devs

Elon Musk, who named himself the “Dogefather” on SNL, created a Twitter poll, asking his nearly 55 million followers if they want Tesla to integrate DOGE as a payment option.

The poll, which had almost 4 million votes, was favorable for Dogecoin, as more than 75% of the community voted “Yes.”

Following Tesla’s announcement, the billionaire tweeted that he is working closely with Dogecoin developers to improve transaction efficiency, saying that it is “potentially promising.”

Tesla dropping bitcoin as a payment instrument over energy concerns, with the possibility of integrating dogecoin payments, comes as a surprise to bitcoiners since the two cryptocurrencies use a Proof-of-Work (PoW) consensus algorithm and, as such, face the same underlying energy problem.

Elon Musk: Dogecoin Beats Bitcoin

Despite Dogecoin’s use of a PoW algorithm, Elon Musk continues to favor it over Bitcoin. Responding to a tweet that covered some of the reasons why Musk chose DOGE over BTC, the billionaire CEO agreed that Dogecoin beats Bitcoin in many ways.

Comparing DOGE to BTC, Musk noted that “DOGE speeds up block time 10X, increases block size 10X & drops fee 100X. Then it wins hands down.”

Max Keiser: Who’s The Bigger Idiot?

As Elon Musk continues his lovey-dovey affair with Dogecoin, Bitcoin proponents continue to criticize the Dogefather.

Following Musk’s comments on Dogecoin today, popular Bitcoin advocate Max Keiser took to his Twitter page to ridicule the Tesla boss while recalling when gold bug Peter Schiff described Bitcoin as “intrinsically worthless” after he lost access to his BTC wallet.

“Who’s the bigger idiot?” Keiser asked.

Aside from Keiser, other Bitcoin proponents, such as Michael Saylor, also replied to Tesla’s CEO.

Source: https://coingenius.news/bitcoin-proponents-against-elon-musk-following-heated-dogecoin-vs-bitcoin-tweets/?utm_source=rss&utm_medium=rss&utm_campaign=bitcoin-proponents-against-elon-musk-following-heated-dogecoin-vs-bitcoin-tweets


PlotX v2 Mainnet Launch: DeFi Prediction Markets


In early Sunday trading, BTC prices fell to their lowest levels in over 11 weeks, hitting $46,700 before a minor recovery.

The last time Bitcoin dropped to these levels was at the end of February during the second major correction of this ongoing rally. A rebound off that bottom sent prices above $60K for the first time in the two weeks that followed.

Later today, Bitcoin will close another weekly candle. If the candle closes at these levels, it will be the worst weekly close since February 22, when BTC ended the week at $45,240, according to Bitstamp. Two weeks ago, the weekly candle closed at $49,200, currently the lowest weekly close since February.

Second ‘Lower Low’ For Bitcoin

This time around, things feel slightly different, and bearish sentiment is returning to crypto-asset markets. Since its all-time high of $65K on April 14, Bitcoin has made a lower high and has now formed a second lower low on the daily chart, which is indicative of a larger downtrend developing.

Analyst ‘CryptoFibonacci’ has been eyeing the weekly chart which also suggests the bulls could be running out of steam.



The move appears to have been driven by Elon Musk again with a tweet about Bitcoin’s energy consumption on May 13. Bitcoin’s fear and greed index has dropped to 20 – ‘extreme fear’ – its lowest level since the March 2020 market crash. At the time of press, BTC was trading at just under $48,000, down 4% over the past 24 hours.

Market Cap Shrinks by $150B

As usual, the move has initiated a selloff for the majority of other cryptocurrencies resulting in around $150 billion exiting the markets over the past day or so.

The total market cap has declined to $2.3 trillion after an all-time high of $2.5 trillion on May 12. Valuations are still high on a long-term view, but losses could accelerate rapidly if bearish sentiment increases.

Not all crypto assets are correcting this weekend, and some have been building on recent gains to push even higher – although they are few in number.

Those weekend warriors include Cardano, which has added 4.8% on the day to trade at $2.27, according to Coingecko. ADA hit an all-time high of $2.36 on Saturday, May 15, a gain of 54% over the past 30 days.

Ripple’s XRP is also seeing a resurgence with a 13% pump on the day to flip Cardano for the fourth spot. XRP is currently trading at $1.58 with a market cap of $73 billion. The only other two cryptocurrencies in the green at the time of writing are Stellar and Solana, gaining 3.7% and 12% respectively.

Source: https://coingenius.news/plotx-v2-mainnet-launch-defi-prediction-markets-58/?utm_source=rss&utm_medium=rss&utm_campaign=plotx-v2-mainnet-launch-defi-prediction-markets-58
