If you are still dumping data into warehouses and lakes, you are most likely not prepared for what is coming next. We are sliding into Web 3.0, an era of decentralization that favors local ownership. This era is changing data as we know it, and it has already begun to prove its worth with products across industrial use cases.
Data Mesh, the latest addition to the stack, saves data teams the hassle of producing high-quality data for every business type. Most recently, JP Morgan built a ‘Mesh’ on AWS, betting its scalability on a decentralized architecture. More case studies are added every day, and they give a clear hint – data analytics is all set to change, again!
Data Management before the ‘Mesh’
In the early days, organizations used a central data warehouse to drive their data analytics. Even today, a large number of them use data lakes to drive predictive analytics. However, the enormous rate of data growth is obstructing application scalability. The cloud age addressed that issue to a certain extent, but even there, the number of users is growing faster than enterprises can serve them. Amid all this, data professionals such as scientists, engineers and analysts are locking horns with transforming raw data into a high-quality, actionable feed.
In a centralized ecosystem, everyone depends on everyone else, creating uncertainty and interrupting the flow of accurate data. With the Mesh, data teams have an opportunity to go full throttle and embrace the ethos of Web 3.0 – decentralization.
It is also true that decentralized data management is not new. It gained acceptance more than a decade ago, when the industry was waking up to the urgency of big data that we are witnessing today. The Hadoop library enabled distributed processing across all points of data storage. Equally effective is data virtualization, which integrates data silos through a logical layer.
However, all of these may not be effective in the fast-changing data landscape.
Today, Hadoop struggles with complexity, while virtualization becomes inefficient when queries run in parallel across diverse data sources. Traditional data warehouses, and even the more recent data lake models, fail to scale to the level they should.
The Data Mesh resolves these bottlenecks by revamping the architecture from the ground up. In total contrast to centralized lakes or warehouses, the mesh pushes for self-sustaining, self-served data-as-a-product owned by multiple nodes of the network.
The mesh architecture lets the creators of a data asset own it in the landscape. These owners are accountable for its quality, accuracy and relevance. Notably, the central admin still retains the right to write the governing policies for the network – the best of both worlds!
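The split described above – domains own their data products while the center writes the rules – can be sketched in a few lines. This is a minimal illustration, not any vendor’s API; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    """Network-wide rules, written once by the central admin."""
    # Hypothetical policy fields for illustration only.
    required_metadata: set = field(
        default_factory=lambda: {"owner", "domain", "schema_version"}
    )
    pii_must_be_masked: bool = True

@dataclass
class DataProduct:
    """A self-served data product, owned by the domain that created it."""
    name: str
    metadata: dict
    contains_unmasked_pii: bool = False

    def complies_with(self, policy: GovernancePolicy) -> bool:
        # The domain is accountable for meeting the central policy.
        has_metadata = policy.required_metadata <= self.metadata.keys()
        pii_ok = not (policy.pii_must_be_masked and self.contains_unmasked_pii)
        return has_metadata and pii_ok

policy = GovernancePolicy()
orders = DataProduct(
    name="orders_daily",
    metadata={"owner": "sales-team", "domain": "sales", "schema_version": "2"},
)
print(orders.complies_with(policy))  # True
```

The key design point is that the policy object is defined centrally, while every compliance check runs locally in the owning domain.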
The Benefits of Data Mesh
With a mission to ensure scalability and agility, the mesh delivers actionable value from raw data sets faster. By provisioning data infrastructure as a service, the mesh decentralizes operations and reduces IT backlogs. With such independence, domain teams can focus only on the data sets relevant to their domain.
The owners sitting at the nodes and managing their respective domains are also charged with strategizing, creating and maintaining pipelines. This keeps full control of the data with the domains. Unlike the traditional practice in which a common team would do this for the entire landscape, the mesh approach deepens domain-level knowledge while producing more agile business processes.
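A domain-owned pipeline, as described above, simply means each domain team defines and runs its own transformation steps instead of handing them to one central team. The following sketch uses invented names to show the shape of that ownership:

```python
from typing import Callable, List

class DomainPipeline:
    """Illustrative pipeline owned end to end by a single domain team."""

    def __init__(self, domain: str):
        self.domain = domain
        self.steps: List[Callable[[list], list]] = []

    def add_step(self, step: Callable[[list], list]) -> "DomainPipeline":
        self.steps.append(step)
        return self

    def run(self, records: list) -> list:
        # Apply the domain's own transformations in order.
        for step in self.steps:
            records = step(records)
        return records

# The sales domain owns its cleaning and tagging logic; no central team involved.
sales = DomainPipeline("sales")
sales.add_step(lambda rows: [r for r in rows if r.get("amount", 0) > 0])  # drop invalid rows
sales.add_step(lambda rows: [{**r, "domain": "sales"} for r in rows])     # tag ownership

raw = [{"amount": 120}, {"amount": 0}, {"amount": 75}]
print(sales.run(raw))
```

Another domain would instantiate its own `DomainPipeline` with entirely different steps, which is exactly the independence the mesh promises.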
Data fabrics, if used strategically, can help implement the decentralized mesh pattern more effectively.
Consider K2view: it creates an entity-based data fabric for building a decentralized network of business domains. It provides an integration layer that connects data sources and delivers a unified view for operational and analytical workloads. Here, the domains held by the nodes have local ownership of their data services, which ensures that policies are implemented in compliance with the governance guidelines set by the central admin. Regardless of the incoming volume, the mesh architecture dynamically scales up and down, ensuring on-demand flexibility. It provides seamless access to a diverse range of data source types, technologies and formats, and it integrates transactional and master data at rest.
Moreover, the mesh architecture works across different environments, such as cloud, on-premises and hybrid, without affecting transactional integrity.
As already discussed, the increasing number of data sources makes large-scale integration cumbersome for lakes and warehouses. With the Mesh’s domain-level ownership and governance directed from the center, the resulting architecture delivers high-quality, actionable data. The mesh is also highly secure: it encrypts data and continuously verifies user credentials to ensure authorization, thereby complying with privacy regulations across the data landscape.
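The credential check mentioned above boils down to one gate: a data product is served only to callers holding an explicit grant. Here is a minimal sketch of that gate; the grant table, user IDs and product names are all assumptions, and a real mesh would back this with the platform’s identity provider.

```python
# Hypothetical grant table: user -> set of data products they may read.
ACCESS_GRANTS = {
    "analyst-42": {"sales.orders"},
    "engineer-7": {"sales.orders", "hr.payroll"},
}

def authorize(user: str, product: str) -> bool:
    """Serve a data product only if the user holds an explicit grant."""
    return product in ACCESS_GRANTS.get(user, set())

print(authorize("analyst-42", "sales.orders"))  # True
print(authorize("analyst-42", "hr.payroll"))    # False
```

Because every request passes through a check like this, access can be audited per domain, which is what makes compliance with privacy regulations tractable.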
Data Mesh is gaining a stronger foothold. However, migrating existing warehouses to an entirely new environment is a challenge, and implementing distributed data ownership means heavy lifting for data teams. Yet the risk of clinging to outdated practices is scarier still. It’s a tough road, but worth the effort.
The post How is the ‘Mesh’ Resolving Bottlenecks of Data Management appeared first on SmartData Collective.