Geometric foundations of Deep Learning

Geometric Deep Learning is an attempt at a geometric unification of a broad class of machine learning problems from the perspective of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks but also provide a principled way to construct new types of problem-specific inductive biases.


By Michael Bronstein (Imperial College), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind).

This blog post is based on the new “proto-book” M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges (2021), Petar’s talk at Cambridge, and Michael’s keynote talk at ICLR 2021.

 

In October 1872, the philosophy faculty of a small university in the Bavarian city of Erlangen appointed a new young professor. As was customary, he was requested to deliver an inaugural research program, which he published under the somewhat long and boring title Vergleichende Betrachtungen über neuere geometrische Forschungen (“A comparative review of recent researches in geometry”). The professor was Felix Klein, only 23 years of age at that time, and his inaugural work has entered the annals of mathematics as the “Erlangen Programme” [1].

Felix Klein and his Erlangen Programme. Image: Wikipedia/University of Michigan Historical Math Collections.

The nineteenth century had been remarkably fruitful for geometry. For the first time in the nearly two thousand years since Euclid, the construction of projective geometry by Poncelet, hyperbolic geometry by Gauss, Bolyai, and Lobachevsky, and elliptic geometry by Riemann showed that an entire zoo of diverse geometries was possible. However, these constructions had quickly diverged into independent and unrelated fields, leaving many mathematicians of that period questioning how the different geometries are related to each other and what actually defines a geometry.

The breakthrough insight of Klein was to approach the definition of geometry as the study of invariants, or in other words, structures that are preserved under a certain type of transformations (symmetries). Klein used the formalism of group theory to define such transformations and used the hierarchy of groups and their subgroups to classify the different geometries arising from them. Thus, the group of rigid motions leads to traditional Euclidean geometry, while affine or projective transformations produce, respectively, the affine and projective geometries. Importantly, the Erlangen Program was limited to homogeneous spaces [2] and initially excluded Riemannian geometry.

Klein’s Erlangen Program approached geometry as the study of properties remaining invariant under certain types of transformations. 2D Euclidean geometry is defined by rigid transformations (modeled as the isometry group) that preserve areas, distances, and angles, and thus also parallelism. Affine transformations preserve parallelism, but neither distances nor areas. Finally, projective transformations have the weakest invariance, with only intersections and cross-ratios preserved, and correspond to the largest group among the three. Klein thus argued that projective geometry is the most general one.

The impact of the Erlangen Program on geometry, and on mathematics more broadly, was profound. It also spilled over into other fields, especially physics, where symmetry considerations made it possible to derive conservation laws from first principles — an astonishing result known as Noether’s Theorem [3]. It took several decades until this fundamental principle — through the notion of gauge invariance (in its generalized form developed by Yang and Mills in 1954) — proved successful in unifying all the fundamental forces of nature with the exception of gravity. This is what is called the Standard Model, and it describes all the physics we currently know. We can only repeat the words of the Nobel-winning physicist Philip Anderson [4], that

“it is only slightly overstating the case to say that physics is the study of symmetry.”

We believe that the current state of affairs in the field of deep (representation) learning is reminiscent of the situation of geometry in the nineteenth century: on the one hand, in the past decade, deep learning has brought a revolution in data science and made possible many tasks previously thought to be beyond reach — whether computer vision, speech recognition, natural language translation, or playing Go. On the other hand, we now have a zoo of different neural network architectures for different kinds of data but few unifying principles. As a consequence, it is difficult to understand the relations between different methods, which inevitably leads to the reinvention and re-branding of the same concepts.

Deep learning today: a zoo of architectures, few unifying principles. Animal images: ShutterStock.

Geometric Deep Learning is an umbrella term we introduced in [5] referring to recent attempts to come up with a geometric unification of ML similar to Klein’s Erlangen Program. It serves two purposes: first, to provide a common mathematical framework to derive the most successful neural network architectures, and second, to give a constructive procedure to build future architectures in a principled way.

Supervised machine learning in its simplest setting is essentially a function estimation problem: given the outputs of some unknown function on a training set (e.g., labelled dog and cat images), one tries to find a function from some hypothesis class that fits the training data well and allows one to predict the outputs on previously unseen inputs. In the past decade, the availability of large, high-quality datasets such as ImageNet coincided with growing computational resources (GPUs), allowing the design of rich function classes that have the capacity to interpolate such large datasets.

Neural networks appear to be a suitable choice to represent functions, because even the simplest architecture, the Perceptron, can produce a dense class of functions with just two layers, enough to approximate any continuous function to any desired accuracy — a property known as Universal Approximation [6].

Multi-layer Perceptrons are universal approximators: with just one hidden layer, they can represent combinations of step functions and thus approximate any continuous function with arbitrary precision.
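
To make this statement a little more tangible, here is a minimal sketch (in NumPy, with an arbitrarily chosen target function and hidden-layer size, none of which come from the text): a single hidden layer of sigmoid units with randomly fixed input weights, whose output weights are fitted by least squares, already approximates a 1D continuous function quite well, and adding more hidden units typically drives the error further down.

```python
# A minimal 1-D illustration of universal approximation (illustrative setup):
# one hidden layer of sigmoid units with random, fixed input weights, and
# output weights fitted by least squares, approximating a continuous target.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(3 * x) + 0.5 * x          # an arbitrary continuous function

x = np.linspace(0.0, 2.0, 200)[:, None]     # training inputs on [0, 2]
y = target(x).ravel()

n_hidden = 50
W = rng.normal(scale=5.0, size=(1, n_hidden))   # random input weights
b = rng.uniform(-5.0, 5.0, size=n_hidden)       # random biases
H = 1.0 / (1.0 + np.exp(-(x @ W + b)))          # hidden activations (sigmoids ~ smooth steps)

c, *_ = np.linalg.lstsq(H, y, rcond=None)       # fit the output layer
approx = H @ c

print("max |f - f_hat| on the grid:", np.max(np.abs(y - approx)))
# Increasing n_hidden typically drives this error down, in line with universal approximation.
```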

In low dimensions, this is a classical problem in approximation theory that has been studied extensively, with precise mathematical control of estimation errors. But the situation is entirely different in high dimensions: one can quickly see that in order to approximate even a simple class of, e.g., Lipschitz-continuous functions, the number of samples grows exponentially with the dimension — a phenomenon known colloquially as the “curse of dimensionality.” Since modern machine learning methods need to operate with data in thousands or even millions of dimensions, the curse of dimensionality is always there behind the scenes, making such a naive approach to learning impossible.

Illustration of the curse of dimensionality: in order to approximate a Lipschitz-continuous function composed of Gaussian kernels placed in the quadrants of a d-dimensional unit hypercube (blue) with error ε, one requires 𝒪(1/εᵈ) samples (red points).
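
As a quick back-of-the-envelope check of the 𝒪(1/εᵈ) scaling in the caption: covering the unit hypercube with a grid of spacing ε already requires an astronomical number of points for moderate dimensions (the specific values of ε and d below are arbitrary illustrations).

```python
# Back-of-the-envelope illustration of the O(1/eps^d) sample complexity quoted above:
# covering the unit hypercube [0,1]^d with a grid of spacing eps needs ~(1/eps)^d points.
eps = 0.1
for d in (1, 2, 10, 100):
    print(f"d={d:>3}: about {(1 / eps) ** d:.0e} samples")
# d=  1: about 1e+01 samples
# d=  2: about 1e+02 samples
# d= 10: about 1e+10 samples
# d=100: about 1e+100 samples
```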

This is perhaps best seen in computer vision problems like image classification. Even tiny images tend to be very high-dimensional, but intuitively they have a lot of structure that is broken and thrown away when one parses the image into a vector to feed it into the Perceptron. If the image is now shifted by just one pixel, the vectorized input will be very different, and the neural network will need to be shown a lot of examples in order to learn that shifted inputs must be classified in the same way [7].
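
The fragility of the flattened-vector view is easy to see numerically. In the sketch below (using a random array as a stand-in for an image; no real dataset is involved), shifting the “image” by a single pixel moves its flattened vector by a large fraction of its norm, even though nothing about its content has changed.

```python
# Shifting an "image" by one pixel drastically changes the flattened vector an MLP would see.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28))                 # synthetic stand-in for a tiny grayscale image
shifted = np.roll(img, shift=1, axis=1)    # the same image, shifted one pixel to the right

x, x_shift = img.ravel(), shifted.ravel()  # what a Perceptron actually receives
print("relative L2 change:", np.linalg.norm(x - x_shift) / np.linalg.norm(x))
# Roughly 0.7 for this random image: the input has moved by most of its own norm,
# so the network has no built-in reason to treat the two inputs as the same class.
```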

Fortunately, in many cases of high-dimensional ML problems, we have additional structure that comes from the geometry of the input signal. We call this structure a “symmetry prior,” and it is a powerful general principle that gives us grounds for optimism in dimensionality-cursed problems. In our example of image classification, the input image x is not just a d-dimensional vector but a signal defined on some domain Ω, which in this case is a two-dimensional grid. The structure of the domain is captured by a symmetry group 𝔊 — the group of 2D translations in our example — which acts on the points of the domain. In the space of signals 𝒳(Ω), the group actions (elements of the group, 𝔤∈𝔊) on the underlying domain are manifested through what is called the group representation ρ(𝔤) — in our case, it is simply the shift operator, a d×d matrix that acts on a d-dimensional vector [8].

Illustration of geometric priors: the input signal (image x∈𝒳(Ω)) is defined on the domain (grid Ω), whose symmetry (the translation group 𝔊) acts in the signal space through the group representation ρ(𝔤) (the shift operator). Making an assumption on how the function f (e.g., an image classifier) interacts with the group restricts the hypothesis class.
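
As a deliberately tiny sketch of a group representation, the code below builds the cyclic shift operator ρ(g) on a 1D grid of d points as a d×d permutation matrix and checks that it respects the group structure (the grid size and toy signal are made up for illustration).

```python
# The shift operator rho(g) on a cyclic 1-D grid with d points, as a d x d permutation matrix.
import numpy as np

d = 5

def rho(g):
    """Matrix that cyclically shifts a length-d signal by g positions."""
    return np.eye(d)[(np.arange(d) - g) % d, :]

x = np.arange(d, dtype=float)     # a toy signal on the grid: [0, 1, 2, 3, 4]
print(rho(1) @ x)                 # [4. 0. 1. 2. 3.] -- the signal shifted by one step

# The representation respects composition in the group:
# shifting by 3 and then by 2 is the same as shifting by 5 (= 0 mod d).
assert np.allclose(rho(2) @ rho(3), rho((2 + 3) % d))
```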

The geometric structure of the domain underlying the input signal imposes structure on the class of functions f that we are trying to learn. One can have invariant functions that are unaffected by the action of the group, i.e., f(ρ(𝔤)x)=f(x) for any 𝔤∈𝔊 and x. On the other hand, one may have a case where the function has the same input and output structure and is transformed in the same way as the input — such functions are called equivariant and satisfy f(ρ(𝔤)x)=ρ(𝔤)f(x) [9]. In the realm of computer vision, image classification is a good illustration of a setting where one would desire an invariant function (e.g., no matter where a cat is located in the image, we still want to classify it as a cat), while image segmentation, where the output is a pixel-wise label mask, is an example of an equivariant function (the segmentation mask should follow the transformation of the input image).
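
These two definitions are easy to verify numerically on a 1D grid with cyclic shifts (here np.roll plays the role of ρ(𝔤) without building the matrix explicitly). In the sketch below, with a random signal and filter chosen purely for illustration, global sum pooling acts as an invariant function and circular convolution as an equivariant one.

```python
# Numerical check of invariance and equivariance under cyclic shifts on a 1-D grid.
import numpy as np

rng = np.random.default_rng(0)
d = 8
x = rng.random(d)                         # a signal on the grid
theta = rng.random(3)                     # a small convolution filter

shift = lambda v, g: np.roll(v, g)        # the action rho(g) on signals

def f_inv(v):                             # invariant function: global sum pooling
    return v.sum()

def f_equi(v):                            # equivariant function: circular convolution
    return sum(theta[k] * np.roll(v, k) for k in range(len(theta)))

g = 3
assert np.isclose(f_inv(shift(x, g)), f_inv(x))                 # f(rho(g)x) = f(x)
assert np.allclose(f_equi(shift(x, g)), shift(f_equi(x), g))    # f(rho(g)x) = rho(g)f(x)
print("invariance and equivariance checks passed")
```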

Another powerful geometric prior is “scale separation.” In some cases, we can construct a multiscale hierarchy of domains (Ω and Ω’ in the figure below) by “assimilating” nearby points, producing along the way a hierarchy of signal spaces that are related by a coarse-graining operator P. On these coarse scales, we can apply coarse-scale functions. We say that a function f is locally stable if it can be approximated as the composition of the coarse-graining operator P and a coarse-scale function f’, i.e., f ≈ f’∘P. While f might involve long-range dependencies, if it is locally stable, these can be separated into local interactions that are then propagated towards the coarse scales [10].

Illustration of scale separation, where we can approximate a fine-level function f as the composition f ≈ f’∘P of a coarse-level function f’ and a coarse-graining operator P.
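
A minimal numerical sketch of this idea (with a made-up signal size and pooling factor): take P to be average pooling over blocks and let the fine-level function depend on the signal only through its coarse statistics, so that f = f’∘P holds exactly; in general the blueprint only requires the approximation f ≈ f’∘P.

```python
# Scale separation on a 1-D grid: P coarse-grains the signal, and f factors through P.
import numpy as np

rng = np.random.default_rng(0)
d, pool = 16, 4

def P(x):                                  # coarse-graining: average over blocks of `pool` points
    return x.reshape(-1, pool).mean(axis=1)

def f(x):                                  # a fine-level function that is locally stable
    return np.sum(P(x) ** 2)

def f_coarse(z):                           # the corresponding coarse-level function f'
    return np.sum(z ** 2)

x = rng.random(d)
print(f(x), f_coarse(P(x)))                # identical here: f = f' o P holds exactly;
                                           # in general only the approximation f ~ f' o P is needed.
```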

These two principles give us a very general blueprint of Geometric Deep Learning that can be recognized in the majority of popular deep neural architectures used for representation learning: a typical design consists of a sequence of equivariant layers (e.g., convolutional layers in CNNs), possibly followed by an invariant global pooling layer aggregating everything into a single output. In some cases, it is also possible to create a hierarchy of domains by some coarsening procedure that takes the form of local pooling.

Geometric Deep Learning blueprint.
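
The blueprint can be written down in a few lines. The toy model below (NumPy, random filters, a 1D grid with cyclic shifts; all sizes are arbitrary) stacks shift-equivariant layers (circular convolutions followed by a pointwise nonlinearity) and ends with an invariant global pooling, so the whole model is invariant to shifts of its input.

```python
# A toy instance of the Geometric Deep Learning blueprint on a cyclic 1-D grid:
# equivariant layers followed by an invariant global pooling.
import numpy as np

rng = np.random.default_rng(0)

def conv_layer(x, theta):                  # shift-equivariant: circular convolution + ReLU
    y = sum(t * np.roll(x, k) for k, t in enumerate(theta))
    return np.maximum(y, 0.0)

def model(x, params):
    for theta in params:                   # a sequence of equivariant layers
        x = conv_layer(x, theta)
    return x.sum()                         # invariant global pooling into a single output

params = [rng.normal(size=3) for _ in range(2)]
x = rng.random(12)

# The composition is invariant to cyclic shifts of the input
# (outputs agree up to floating-point rounding):
print(model(x, params), model(np.roll(x, 5), params))
```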

This is a very general design that can be applied to different types of geometric structures, such as grids, homogeneous spaces with global transformation groups, graphs (and sets, as a particular case), and manifolds, where we have global isometry invariance and local gauge symmetries. The implementation of these principles leads to some of the most popular architectures in deep learning today: Convolutional Networks (CNNs), emerging from translational symmetry; Graph Neural Networks, DeepSets [11], and Transformers [12], implementing permutation invariance; gated RNNs (such as LSTM networks), which are invariant to time warping [13]; and Intrinsic Mesh CNNs [14], used in computer graphics and vision, which can be derived from gauge symmetry.

The “5G” of Geometric Deep Learning: Grids, Groups (homogeneous spaces with global symmetries), Graphs (and sets as a particular case), and Manifolds, where geometric priors are manifested through global isometry invariance (which can be expressed using Geodesics) and local Gauge symmetries.
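
As one more concrete instance of the blueprint, here is a minimal DeepSets-style sketch (random weights and feature sizes, chosen only for illustration): each set element is encoded independently, the encodings are aggregated with a symmetric function (a sum), and a decoder is applied to the result, making the output invariant to any permutation of the elements.

```python
# A DeepSets-style permutation-invariant model: encode elements independently,
# aggregate with a symmetric sum, then decode the aggregate.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(3, 8))            # per-element encoder weights
W_dec = rng.normal(size=8)                 # decoder weights applied after aggregation

def deepsets(X):                           # X: (n_elements, 3) feature matrix of a set
    H = np.tanh(X @ W_enc)                 # encoder applied to each element independently
    return float(np.tanh(H.sum(axis=0) @ W_dec))   # sum aggregation -> permutation invariance

X = rng.random((5, 3))
perm = rng.permutation(5)
print(deepsets(X), deepsets(X[perm]))      # identical outputs (up to floating-point rounding)
                                           # for any reordering of the set elements
```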

In future posts, we will explore in further detail the instances of the Geometric Deep Learning blueprint on the “5G” [15]. As a final note, we should emphasize that symmetry has historically been a key concept in many fields of science, of which physics, as already mentioned at the beginning, is a prime example. In the machine learning community, the importance of symmetry has long been recognized, in particular in applications to pattern recognition and computer vision, with early works on equivariant feature detection dating back to Shun’ichi Amari [16] and Reiner Lenz [17]. In the neural networks literature, the Group Invariance Theorem for Perceptrons by Marvin Minsky and Seymour Papert [18] established fundamental limitations on the capabilities of (single-layer) Perceptrons to learn invariants. This was one of the primary motivations for studying multi-layer architectures [19–20], which ultimately led to deep learning.

Footnotes and References

[1] According to a popular belief, repeated in many sources, including Wikipedia, the Erlangen Program was delivered in Klein’s inaugural address in October 1872. Klein indeed gave such a talk (though on December 7, 1872), but it was for a non-mathematical audience and concerned primarily with his ideas of mathematical education. What is now called the “Erlangen Programme” was actually the aforementioned brochure Vergleichende Betrachtungen, subtitled Programm zum Eintritt in die philosophische Fakultät und den Senat der k. Friedrich-Alexanders-Universität zu Erlangen (“Program for entry into the Philosophical Faculty and the Senate of the Emperor Friedrich-Alexander University of Erlangen”; an English translation is available). While Erlangen claims the credit, Klein stayed there for only three years, moving in 1875 to the Technical University of Munich (then called Technische Hochschule), followed by Leipzig (1880), and finally settling down in Göttingen from 1886 until his retirement. See R. Tobies, Felix Klein — Mathematician, Academic Organizer, Educational Reformer (2019), in: H. G. Weigand et al. (eds), The Legacy of Felix Klein, Springer.

[2] A homogeneous space is a space where “all points are the same,” and any point can be transformed into another by means of group action. This is the case for all geometries proposed before Riemann, including Euclidean, affine, and projective, as well as the first non-Euclidean geometries on spaces of constant curvature such as the sphere or hyperbolic space. It took substantial effort and nearly 50 years, notably by Élie Cartan and the French geometry school, to extend Klein’s ideas to manifolds.

[3] Klein himself probably anticipated the potential of his ideas in physics, complaining of “how persistently the mathematical physicist disregards the advantages afforded him in many cases by only a moderate cultivation of the projective view.” By that time, it was already common to think of physical systems through the perspective of the calculus of variations, deriving the differential equations governing such systems from the “least action principle,” i.e., as the minimizer of some functional (action). In a paper published in 1918, Emmy Noether showed that every (differentiable) symmetry of the action of a physical system has a corresponding conservation law. This, by all means, was a stunning result: beforehand, meticulous experimental observation was required to discover fundamental laws such as the conservation of energy, and even then, it was an empirical result that did not follow from any deeper principle. For historical notes, see C. Quigg, Colloquium: A Century of Noether’s Theorem (2019), arXiv:1902.01989.

[4] P. W. Anderson, More is different (1972), Science 177(4047):393–396.

[5] M. M. Bronstein et al. Geometric deep learning: going beyond Euclidean data (2017), IEEE Signal Processing Magazine 34(4):18–42 attempted to unify learning on grids, graphs, and manifolds from the spectral analysis perspective. The term “geometric deep learning” was actually coined earlier, in Michael’s ERC grant proposal.

[6] There are multiple versions of the Universal Approximation theorem. It is usually credited to G. Cybenko, Approximation by superpositions of a sigmoidal function (1989) Mathematics of Control, Signals, and Systems 2(4):303–314 and K. Hornik, Approximation capabilities of multi-layer feedforward networks (1991), Neural Networks 4(2):251–257.

[7] The remedy for this problem in computer vision came from classical works in neuroscience by Hubel and Wiesel, winners of the Nobel Prize in medicine for their studies of the visual cortex. They showed that brain neurons are organized into local receptive fields, which served as an inspiration for a new class of neural architectures with local shared weights: first the Neocognitron in K. Fukushima, A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position (1980), Biological Cybernetics 36(4):193–202, and then Convolutional Neural Networks in the seminal work of Y. LeCun et al., Gradient-based learning applied to document recognition (1998), Proc. IEEE 86(11):2278–2324, where weight sharing across the image effectively solved the curse of dimensionality.

[8] Note that a group is defined as an abstract object, without saying what the group elements are (e.g., transformations of some domain), only how they compose. Hence, very different kinds of objects may have the same symmetry group.

[9] These results can be generalized for the case of approximately invariant and equivariant functions, see e.g., J. Bruna and S. Mallat, Invariant scattering convolution networks (2013), Trans. PAMI 35(8):1872–1886.

[10] Scale separation is a powerful principle exploited in physics, e.g., in the Fast Multipole Method (FMM), a numerical technique originally developed to speed up the calculation of long-range forces in n-body problems. FMM groups sources that lie close together and treats them as a single source.

[11] M. Zaheer et al., Deep Sets (2017), NIPS. In the computer graphics community, a similar architecture was proposed in C. R. Qi et al., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation (2017), CVPR.

[12] A. Vaswani et al., Attention is all you need (2017), NIPS, introduced the now popular Transformer architecture. It can be viewed as a graph neural network operating on a complete graph.

[13] C. Tallec and Y. Ollivier, Can recurrent neural networks warp time? (2018), arXiv:1804.11188.

[14] J. Masci et al., Geodesic convolutional neural networks on Riemannian manifolds (2015), arXiv:1501.06297 was the first convolutional-like neural network architecture with filters applied in local coordinate charts on meshes. It is a particular case of T. Cohen et al., Gauge Equivariant Convolutional Networks and the Icosahedral CNN (2019), arXiv:1902.04615.

[15] M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges (2021)

[16] S.-I. Amari, Feature spaces which admit and detect invariant signal transformations (1978), Joint Conf. Pattern Recognition. Amari is also famous as the pioneer of the field of information geometry, which studies statistical manifolds of probability distributions using tools of differential geometry.

[17] R. Lenz, Group theoretical methods in image processing (1990), Springer.

[18] M. Minsky and S. A. Papert, Perceptrons: An introduction to computational geometry (1987), MIT Press. This is the second edition of the (in)famous book blamed for the first “AI winter,” which includes additional results and responds to some of the criticisms of the earlier 1969 version.

[19] T. J. Sejnowski, P. K. Kienker, and G. E. Hinton, Learning symmetry groups with hidden units: Beyond the perceptron (1986), Physica D: Nonlinear Phenomena 22(1–3):260–275.

[20] J. Shawe-Taylor, Building symmetries into feedforward networks (1989), ICANN. The first work that can be credited with taking a representation-theoretical view on invariant and equivariant neural networks is J. Wood and J. Shawe-Taylor, Representation theory and invariant neural networks (1996), Discrete Applied Mathematics 69(1–2):33–60. In the “modern era” of deep learning, building symmetries into neural networks was done by R. Gens and P. M. Domingos, Deep symmetry networks (2014), NIPS (see also Pedro Domingos’ invited talk at ICLR 2014).

We are grateful to Ben Chamberlain for proofreading this post and to Yoshua Bengio, Charles Blundell, Andreea Deac, Fabian Fuchs, Francesco di Giovanni, Marco Gori, Raia Hadsell, Will Hamilton, Maksym Korablyov, Christian Merkwirth, Razvan Pascanu, Bruno Ribeiro, Anna Scaife, Jürgen Schmidhuber, Marwin Segler, Corentin Tallec, Ngân Vu, Peter Wirnsberger, and David Wong for their feedback on different parts of the text on which this post is based. We also thank Xiaowen Dong and Pietro Liò for helping us break the “stage fright” and present early versions of our work.

Original. Reposted with permission.

Bios: Michael Bronstein is Professor and Chair in Machine Learning and Pattern Recognition at Imperial College and the Head of Graph Learning Research at Twitter. Joan Bruna is an Associate Professor of Computer Science, Data Science and Mathematics at the Courant Institute and the Center for Data Science, New York University. Taco Cohen is a machine learning researcher at Qualcomm with a PhD in Machine Learning from the University of Amsterdam. Petar Veličković is a Senior Research Scientist at DeepMind with a PhD in Computer Science from the University of Cambridge (Trinity College).

Source: https://www.kdnuggets.com/2021/07/geometric-foundations-deep-learning.html
