Big Data
Blackbox Confidence Intervals: Excel and Perl Implementation
Originally posted here. Check original article for most recent updates.
Confidence interval is abbreviated as CI. In this new article (part of our series on robust techniques for automated data science), we describe an implementation in both Excel and Perl, and discuss our popular model-free confidence interval technique introduced in our original Analyticbridge article. This technique has the following advantages:
 Very easy for non-statisticians (business analysts, software engineers, programmers, data architects) to understand
 Simple (if not basic) to code; no need for tables of the Gaussian, Student or other statistical distributions
 Robust, not sensitive to outliers
 Model-independent and data-driven: no assumptions are required about the data set; it works with non-normal data and produces asymmetrical confidence intervals
 Therefore, suitable for black-box implementation or automated data science
This is part of our series on data science techniques suitable for automation, usable by non-experts. The next one to be detailed (with source code) will be our Hidden Decision Trees.
Figure 1: Confidence bands based on our CI (bold red and blue curves), compared with the traditional normal model (light red and blue curves)
Figure 1 is based on simulated data that does not follow a normal distribution: see section 2 and Figure 2 in this article. It shows how sensitive CI's are to model assumptions when using the traditional approach, leading to very conservative (and plain wrong) CI's. Classical CI's are based on just two parameters: mean and variance. Under the classical model, all data sets with the same mean and the same variance have the same CI's. By contrast, our CI's are based on k parameters (average values computed on k different bins; see the next section for details). In short, they are much better predictive indicators when your data is not normal. Yet they are so easy to understand and compute that you don't even need Probability 101 to get started. The attached spreadsheet and Perl script do all the computations for you.
1. General Framework
We assume that we have n observations of a continuous or discrete variable. We randomly assign a bin number to each observation: we create k bins (1 ≤ k ≤ n) of similar or identical sizes. We compute the average value in each bin, then sort these averages. Let p(m) be the m-th lowest average (1 ≤ m ≤ k/2, with p(1) being the minimum average). Our CI is then defined as follows:
 Lower bound: p(m)
 Upper bound: p(k-m+1)
 Confidence level, also called level or CI level: equal to 1 – 2m/(k+1)
The confidence level represents the probability that a new observation (from the same data set) will fall between the lower and upper bounds of the CI. Note that this method produces asymmetrical CI's. It is equivalent to designing percentile-based confidence intervals on aggregated data. In practice, k is chosen much smaller than n, say k = SQRT(n). Also, m is chosen so that 1 – 2m/(k+1) is as close as possible to a pre-specified confidence level, for instance 0.95; for example, k = 99 and m = 2 give level = 1 – 4/100 = 0.96. Note that the higher m, the more robust (outlier-insensitive) your CI.
If you can't find m and k satisfying level = 0.95 (say), then compute a few CI's (with different values of m) whose confidence levels are close to 0.95, and interpolate or extrapolate the lower and upper bounds to get a CI with a 0.95 confidence level. The concept is easy to visualize in Figure 1. Also, do proper cross-validation: split your data in two, compute CI's using the first half, and test them on the other half to see if they still make sense (same confidence level, and so on).
CI's are extensively used in quality control, to check whether a batch of new products (say, batteries) has failure rates, lifetimes or other performance metrics that are acceptable. Or whether a wine advertised with 12.5% alcohol content has an actual alcohol content reasonably close to 12.5% in each batch, year after year. By "acceptable" or "reasonable", we mean between the lower and upper bounds of a CI with a pre-specified confidence level. CI's are also used in scoring algorithms, to attach a CI to each score: the CI indicates how accurate the score is. Narrow CI's correspond to data that is well understood, with all sources of variance explained. Conversely, wide CI's mean lots of noise and high individual variance in the data. Finally, if your data is stratified into multiple heterogeneous segments, compute a separate CI for each stratum.
That's it: no need to know even rudimentary statistical science to understand this CI concept, or the concept of hypothesis testing (derived from CI's) explained in section 3 below.
When Big Data is Useful
If you look closely at Figure 1, it's clear that you can't compute accurate CI's with a high level (above 0.99) from just a small sample and (say) k = 100 bins. The higher the level, the more volatile the CI. Typically, a 0.999 level requires 10,000 or more observations to get something stable. These high-level CI's are especially needed when assessing failure rates, food quality, fraud detection or sound statistical litigation. There are ways to work with much smaller samples by combining 2 tests; see section 3.
An advantage of big data is that you can create many different combinations of k bins (that is, test many values of m and k) to look at how the confidence bands in Figure 1 change depending on the bin selection – even allowing you to create CI’s for these confidence bands, just like you could do with Bayesian models.
2. Computations: Excel, Source Code
The first step is to reshuffle your data to make sure that your observations are in perfectly random order: read the A New Big Data Theorem section in this article for an explanation of why reshuffling is necessary (look at the second theorem). In short, you want to create bins that have the same mix of values: if the first half of your data set consisted of negative values and the second half of positive values, you might end up with bins filled with either only positive or only negative values. You don't want that; you want each bin to be well balanced.
Reshuffling Step
Unless you know that your data is already in arbitrary order (which is the most frequent case), reshuffling is recommended. It can easily be performed as follows:
 Add a column or variable called RAN, made up of simulated random numbers, using a function such as 100,000 + INT(10,000*RAND()) where RAND() returns a random number between 0 and 1.
 Sort your data by column RAN
 Delete column RAN
Note that we use 100,000 + INT(10,000*RAND()) rather than just simply RAND() to make sure that all random numbers are integers with the same number of digits. This way, whether you sort alphabetically or numerically, the result will be identical, and correct. Sorting numbers of variable length alphabetically (without knowing it) is a source of many bugs in software engineering. This little trick helps you avoid this problem.
If the order of your data set is important, just add a column holding the original rank of each observation (in your initial data set), and keep it through the reshuffling process (after each observation has been assigned to a bin), so that you can always recover the original order if necessary by sorting on this extra column.
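The same recipe can be sketched outside Excel. Here is a minimal Python version (the function name and sample data are invented for illustration):

```python
import random

def reshuffle(values, seed=None):
    """Shuffle observations using fixed-width 6-digit random keys, so that
    sorting the keys alphabetically or numerically gives the same order."""
    rng = random.Random(seed)
    # mirrors the spreadsheet formula 100,000 + INT(10,000*RAND())
    keyed = [(100_000 + int(10_000 * rng.random()), v) for v in values]
    keyed.sort(key=lambda kv: kv[0])  # sort by the RAN column
    return [v for _, v in keyed]

data = [5, -3, 12, 0, 7, -8]
shuffled = reshuffle(data, seed=42)
```

In plain Python one would normally just call random.shuffle; the keyed version is shown only to mirror the spreadsheet recipe, including the fixed-width trick that makes alphabetical and numerical sorts agree.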
The Spreadsheet
Download the Excel spreadsheet. Figures 1 and 2 are in the spreadsheet, as well as all CI computations, and more. The spreadsheet illustrates many not-so-well-known but useful analytic Excel functions, such as FREQUENCY, PERCENTILE, CONFIDENCE.NORM, RAND, AVERAGEIF, MOD (for bin creation) and RANK. The CI computations are in cells O2:Q27 of the Confidence Intervals tab. You can modify the data in column B, and all CI's will be recomputed automatically. Beware if you change the number of bins (cell F2): this can break the RANK function in column J (some ranks will be missing) and, in turn, the CI's.
For other examples of great spreadsheets (from a tutorial point of view), check the Excel section in our data science cheat sheet.
Simulated Data
The simulated data in our Excel spreadsheet (see the data simulation tab) represents a mixture of two uniform distributions, driven by the parameters in the orange cells F2, F3 and H2. The 1,000 original simulated values (see Figure 2) were stored in column D and were subsequently hard-copied into column B of the Confidence Interval (results) tab (they still reside there), because otherwise, each time you modify the spreadsheet, new deviates produced by Excel's RAND function are generated, changing everything and making our experiment non-reproducible. This is a drawback of Excel, though I've heard that it is possible to freeze the numbers produced by RAND. The simulated data is remarkably non-Gaussian (see Figure 2). It provides a great example of data that causes big problems for traditional statistical science, as described in the following subsection.
In any case, this is an interesting tutorial on how to generate simulated data in Excel. Other examples can be found in our Data Science Cheat Sheet (see Excel section).
Comparison with Traditional Confidence Intervals
We provide a comparison with standard CI’s (available in all statistical packages) in Figure 1, and in our spreadsheet. There are a few ways to compute traditional CI’s:
 Simulate Gaussian deviates with a pre-specified variance matching your data variance, by (1) generating (say) 10 million uniform deviates on [-1, +1] using a great random number generator, (2) randomly grouping these values into 10,000 buckets of 1,000 deviates each, and (3) computing the average in each bucket. These 10,000 averages will approximate a Gaussian distribution very well; all you need to do is scale them so that their variance matches the variance of your data set. Then compute intervals that contain 99%, 95% or 90% of all the scaled averages: these are your standard Gaussian CI's.
 Use libraries to simulate Gaussian deviates, rather than the caveman approach mentioned above. Source code and simulators can be found in books such as Numerical Recipes.
 In our Excel spreadsheet, we used the CONFIDENCE.NORM function.
As you can see in Figure 1, traditional CI's fail miserably if your data has either a shorter or longer tail than data originating from a Gaussian process.
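For readers who prefer code to spreadsheets, the three simulation steps above can be sketched in Python (a toy version with far smaller counts than the 10 million deviates mentioned; recentering the averages on the data mean is an extra step not spelled out in the text):

```python
import random
import statistics

rng = random.Random(0)

# (1) uniform deviates on [-1, +1]
deviates = [rng.uniform(-1, 1) for _ in range(100_000)]

# (2) group into 1,000 buckets of 100 deviates, (3) average per bucket
averages = [sum(deviates[i * 100:(i + 1) * 100]) / 100 for i in range(1_000)]

# scale the averages so their variance matches the data variance,
# and recenter them on the data mean
data = [rng.expovariate(0.5) for _ in range(5_000)]   # stand-in data set
factor = statistics.pstdev(data) / statistics.pstdev(averages)
mean = statistics.fmean(data)
scaled = sorted(a * factor + mean for a in averages)

# a 95% Gaussian-style CI: the interval containing the middle 95% of averages
lower = scaled[int(0.025 * len(scaled))]
upper = scaled[int(0.975 * len(scaled)) - 1]
```

The resulting interval is the traditional symmetric CI that Figure 1 compares against; applying it to skewed data like the exponential stand-in above is exactly the mismatch the article warns about.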
Perl Code
Here is some simple source code to compute the CI for given m and k:
$k=50; # number of bins
$m=5;
$n=0;
open(IN, "<data.txt");
while ($value=<IN>) {
  $value =~ s/\n//g;
  $binNumber = $n % $k; # round-robin bin assignment
  $binSum[$binNumber] += $value;
  $binCount[$binNumber]++;
  $n++;
}
close(IN);
if ($n < $k) {
  print "Error: too few observations: n < k (choose a smaller k)\n";
  exit();
}
if ($m > $k/2) {
  print "Error: reduce m (must be <= k/2)\n";
  exit();
}
for ($binNumber=0; $binNumber<$k; $binNumber++) {
  $binAVG[$binNumber] = $binSum[$binNumber]/$binCount[$binNumber];
}
$binNumber=0;
foreach $avg (sort { $a <=> $b } @binAVG) { # sort bin averages numerically
  $sortedBinAVG[$binNumber] = $avg;
  $binNumber++;
}
# p(m) and p(k-m+1) from section 1, shifted to 0-indexed arrays
$CI_LowerBound = $sortedBinAVG[$m-1];
$CI_UpperBound = $sortedBinAVG[$k-$m];
$CI_level = 1 - 2*$m/($k+1);
print "CI = [$CI_LowerBound, $CI_UpperBound] (level = $CI_level)\n";
Exercise: write the code in R or Python.
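As one possible answer to the exercise, here is a Python sketch (the function name is mine; the bin assignment follows the round-robin rule used in the Perl script):

```python
def compute_ci(values, k=50, m=5):
    """Black-box CI: assign observations to k bins round-robin, average each
    bin, sort the averages; the CI is [p(m), p(k-m+1)] as in section 1."""
    n = len(values)
    if n < k:
        raise ValueError("too few observations: n < k (choose a smaller k)")
    if m > k / 2:
        raise ValueError("reduce m (must be <= k/2)")
    bin_sum = [0.0] * k
    bin_count = [0] * k
    for i, v in enumerate(values):
        b = i % k                    # round-robin bin assignment
        bin_sum[b] += v
        bin_count[b] += 1
    averages = sorted(s / c for s, c in zip(bin_sum, bin_count))
    lower = averages[m - 1]          # p(m): m-th lowest average
    upper = averages[k - m]          # p(k-m+1): m-th highest average
    level = 1 - 2 * m / (k + 1)
    return lower, upper, level

# example, reading the same file as the Perl script:
# lower, upper, level = compute_ci([float(x) for x in open("data.txt")])
```

Remember to reshuffle the data first, as described in section 2, before feeding it to this function.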
3. Application to Statistical Testing
Rather than using p-values and other dangerous concepts (about to become extinct) that nobody but statisticians understands, here is an easy way to perform statistical tests. The method below is part of what we call rebel statistical science.
Let's say that you want to test, with 99.5% confidence (level = 0.995), whether or not a wine manufacturer consistently produces a specific wine with a 12.5% alcohol content. Maybe you are a lawyer, and the wine manufacturer is accused of lying on the bottle labels (claiming that the alcohol content is 12.5% when it is really 13%), perhaps to save some money. The test to perform is as follows: check out 100 bottles from various batches and compute a 0.995-level CI for alcohol content. Is 12.5% between the lower and upper bounds? Note that you might not be able to get an exact 0.995-level CI if your sample size n is too small (say n = 100); you will have to extrapolate from lower-level CI's. The reason to use a high confidence level here is to give the defendant the benefit of the doubt, rather than wrongly accusing him based on too small a confidence level. If 12.5% is found inside even a small 0.50-level CI (which will be the case if the wine is truly 12.5% alcohol), then a fortiori it will be inside a 0.995-level CI, because these CI's are nested (see Figure 1 to visualize these ideas). Likewise, if the wine truly has a 13% alcohol content, a tiny 0.03-level CI containing the value 13% will be enough to prove it.
One way to better answer these statistical tests (when your high-level CI's don't provide an answer) is to perform 2 or 3 tests (but no more, otherwise your results will be biased). Test whether the alcohol content is
 As declared by the defendant (test #1)
 Equal to a pre-specified value (the median computed on a decent sample, test #2)
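To make the wine example concrete, here is a toy sketch applying the section 1 recipe (the readings are simulated, not real measurements; in this simulation the true alcohol content is 13%, so the 12.5% claim should fall outside even a modest-level CI):

```python
import random

def blackbox_ci(values, k, m):
    # section 1 recipe: round-robin bins, averaged, sorted, then clipped
    sums, counts = [0.0] * k, [0] * k
    for i, v in enumerate(values):
        sums[i % k] += v
        counts[i % k] += 1
    averages = sorted(s / c for s, c in zip(sums, counts))
    return averages[m - 1], averages[k - m], 1 - 2 * m / (k + 1)

rng = random.Random(7)
# 100 bottles whose alcohol content is truly centered on 13.0%, not 12.5%
readings = [13.0 + rng.gauss(0, 0.2) for _ in range(100)]

lower, upper, level = blackbox_ci(readings, k=10, m=1)
claim_ok = lower <= 12.5 <= upper  # test #1: is the label value plausible?
```

With these simulated readings, the claimed 12.5% falls outside the interval, so even at this modest level (1 - 2/11, about 0.82) the label claim looks implausible; a real litigation setting would extrapolate to a 0.995 level as described above.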
4. Miscellaneous
We include two figures in this section. The first one shows the data used in our test and Excel spreadsheet to produce our confidence intervals. The other shows the theorem that justifies the construction of our confidence intervals.
Figure 2: Simulated data used to compute CI's: asymmetric mixture of non-normal distributions
Figure 3: Theorem used to justify our confidence intervals
Source: http://www.datasciencecentral.com/xn/detail/6448529:Topic:192888
Join Hands with Instagram’s New Algorithm to Boost Your Business’s User Engagement
Most people are not at all happy with the latest Instagram algorithm. However, if you want to make the most of it for your business, you should understand how it works. The trick is to work with the algorithm, not against it; once you know how it works, this is easy.
This post will guide you through how the new Instagram algorithm works and how you can use it to your business's advantage.
How does the new Instagram algorithm work?
The new Instagram algorithm is a mystery to many users on the platform. It no longer works the way it did in the past. To score high on Instagram today, you need to be extremely diligent as a business owner: your posts need to be fully optimized for the algorithm for you to succeed on the platform. So read and remember the following factors, which determine how well your posts perform, to boost user engagement and sales revenue for your business.
⦁ The volume of social engagement you receive: Your posts with the most shares, views, comments, etc., will rank higher in the Instagram feed than others. When your post receives a lot of comments and likes, the Instagram platform gets a signal that it is engaging to users and has high-quality content; it wants more users to see it, and the algorithm will display it to other users online.
However, there is a catch. It is not only the amount of engagement that Instagram considers now; in some cases, it is how fast your post engages readers. Trending hashtags on Instagram are the best-known example of this. The volume of engagement your business posts receive is important; how quickly you get this engagement is even more important!
Tip: Discover the best time of day to post, and schedule your posts for that time, when most users can see them. This increases your chances of boosting engagement faster; the Instagram algorithm learns what users like and takes over from there, sharing your post with the users most likely to like, share and comment on it.
⦁ How long people look at your post on Instagram: The algorithm that Facebook uses takes into account how long users look at your post and spend interacting with its content. Instagram is no different: its algorithm also looks at the amount of time users spend on your post. This is another important factor that Instagram uses to boost posts.
Tip: Craft a great Instagram caption for your post to boost user engagement. If your caption is engaging, users will want to read it or click the "more" button and spend more time on your post, and in this way you can boost engagement on Instagram.
This is why Boomerangs and videos generally perform well with the Instagram algorithm: users take more time to watch them to the end. Another great way to make users stay on your posts is a swipe-up CTA inviting them to view more. This is another strategy you can use to boost engagement for your business.
⦁ When did you post your photo?: Another factor that influences how the Instagram algorithm treats your business is the timing of the post, and its effect depends on how often users open the app. If a user logs on to Instagram just a few times a week, the feed will surface posts from the past several days rather than only the most recent ones; you might even get likes on a post you published a few days ago. The goal is to keep users updated on posts they might have missed because they did not log in regularly.
This means that users can now see your posts for longer periods.
⦁ The sort of content you post also influences engagement: If Instagram only surfaced the content with the best engagement, users would see only that content every time they logged in. However, this is not how the Instagram algorithm works.
For instance, the genres of content users search for on the platform also influence how the Instagram algorithm works. If a user is a fan of sports, say the NBA, and sees that genre of content regularly, Instagram will catch on to this and bring the user more similar content related to sports and the NBA. It knows the user will be interested in such posts and will perhaps surface news and updates about the LA Lakers to boost user engagement and satisfaction.
⦁ Accounts that users search for: Similarly, the accounts users search for also determine how the Instagram algorithm works. When a user searches for a specific account many times, Instagram will bring that user more such content from other accounts; this is why they see it often in their feeds.
From the above, it is evident that if you want to work with the new Instagram algorithm, you must understand how it works and optimize your posts so it helps boost your business. In the past, the Instagram feed was chronological; however, things have changed now.
So, ensure that your CTA is strong, use the right hashtags, post at the best time, and make your Instagram feed as attractive as possible. In this way, you can boost user engagement, lead conversions, and sales, and gain a strategic edge in the market as well.
Source: Ron Johnson. Ron is a marketer who shares tips on trending marketing techniques and always implements new approaches in his field.
Top 10 Big Data trends of 2020
Over the last few decades, Big Data has become a significant concept across the technical landscape. Additionally, the availability of wireless connections and other advances has facilitated the analysis of large data sets. Organizations and huge companies are gaining strength consistently by improving their data analytics and platforms.
2019 was a major year for the big data landscape. After beginning the year with the Cloudera and Hortonworks merger, we've seen huge upticks in Big Data use across the world, with organizations rushing to embrace the significance of data operations and orchestration for their business success. The big data industry is presently worth $189 billion, an increase of $20 billion over 2018, and is set to continue its rapid growth and reach $247 billion by 2022.
Now is the ideal time for us to look at the Big Data trends for 2020.
Chief Data Officers (CDOs) will be the Center of Attraction
The positions of Data Scientist and Chief Data Officer (CDO) are relatively new, yet the demand for these experts in the workforce is already high. As the volume of data continues to grow, the need for data professionals also reaches a specific threshold of business requirements.
A CDO is a C-level executive responsible for data availability, integrity, and security in a company. As more business leaders understand the significance of this role, hiring a CDO is becoming the norm. The demand for these experts will remain among the big data trends for a long time.
Investment in Big Data Analytics
Analytics gives organizations a competitive advantage. Gartner predicts that organizations that aren't investing heavily in analytics by the end of 2020 may not be in business in 2021. (It is assumed that small ventures, for example, self-employed handymen, gardeners, and many artists, are excluded from this forecast.)
The real-time speech analytics market saw its first sustained adoption cycle beginning in 2019. The idea of customer journey analytics is anticipated to grow steadily, with the objective of improving enterprise productivity and the client experience. Real-time speech analytics and customer journey analytics will gain popularity in 2020.
Multi-cloud and Hybrid are Setting Deep Roots
As cloud-based technologies continue to develop, organizations are increasingly likely to want a spot in the cloud. However, the process of moving data integration and preparation from an on-premises solution to the cloud is more complicated and tedious than most care to admit. Additionally, to relocate huge amounts of existing data, organizations have to sync their data sources and platforms for weeks to months before the shift is complete.
In 2020, we expect to see later adopters settle on multi-cloud deployment, bringing the hybrid and multi-cloud philosophy to the forefront of data ecosystem strategies.
Actionable Data will Grow
Another development among big data trends for 2020 is actionable data for faster processing. This data represents the missing link between business propositions and big data: big data in itself is useless without analysis, since it is overwhelmingly complex, multi-structured, and voluminous. In contrast to big data platforms, which typically rely on Hadoop and NoSQL databases to analyze data in batch mode, fast data allows for processing continuous streams.
With this kind of data stream processing, data can be analyzed immediately, within as little as a single millisecond. This delivers more value to companies, which can make business decisions and start processes more quickly once the data is cleaned up.
Continuous Intelligence
Continuous Intelligence is a framework that integrates real-time analytics with business operations. It processes historical and current data to provide decision-making automation or decision-making support. Continuous intelligence uses several technologies, such as optimization, business rule management, event stream processing, augmented analytics, and machine learning, and it suggests actions based on both historical and real-time data.
Gartner predicts that more than 50% of new business systems will utilize continuous intelligence by 2022. This shift has begun, and numerous companies will incorporate continuous intelligence during 2020 to gain or keep a competitive edge.
Machine Learning will Continue to be in Focus
As a significant innovation among big data trends for 2020, machine learning (ML) is another development expected to affect our future fundamentally. ML is a rapidly developing technology used to augment regular activities and business processes.
ML projects received the most investment in 2019, compared with all other AI systems combined. Automated ML tools help generate insights that would be difficult to extract by other methods, even by expert analysts. This big data innovation stack gives faster results and lifts both general productivity and response times.
Abandon Hadoop for Spark and Databricks
Since arriving in the market, Hadoop has been criticized by many in the community for its complexity. Spark and managed Spark solutions like Databricks are the "new and shiny" players and have accordingly been gaining a foothold, as data science workers consider them the answer to everything they dislike about Hadoop.
However, promoting a Spark or Databricks job from the data science sandbox into full production will continue to face challenges. Data engineers will keep requiring more fit and finish from Spark with regard to enterprise-class data operations and orchestration. Most importantly, there are a lot of options to consider between the two platforms, and companies will make that decision based on preferred capabilities and economic worth.
In-Memory Computing
In-memory computing has the additional advantage of helping business clients (including banks, retailers, and utilities) identify patterns rapidly and analyze huge amounts of data with ease. The dropping cost of memory is a major factor in the growing enthusiasm for in-memory computing technology.
In-memory technology is used to perform complex data analyses in real time, allowing clients to work with huge data sets with much greater agility. In 2020, in-memory computing will gain popularity because of the decreasing cost of memory.
IoT and Big Data
So many technologies aim to change the current business landscape in 2020 that it is hard to keep track of them all; however, IoT and digital gadgets are expected to gain a foothold in the big data trends of 2020.
The role of IoT in healthcare can be seen today; likewise, combining this innovation with big data is pushing companies to get better outcomes. It is expected that 42% of companies that have IoT solutions or IoT production in progress will use digitized portables within the following three years.
Digital Transformation Will Be a Key Component
Digital transformation goes together with the Internet of Things (IoT), artificial intelligence (AI), machine learning and big data. With IoT-connected devices expected to reach a stunning 75 billion in 2025, up from 26.7 billion presently, it's easy to see where that big data is coming from. Digital transformation in the form of IoT, IaaS, AI and machine learning is feeding big data and pushing it into territory inconceivable earlier in human history.
Source: https://www.fintechnews.org/top10bigdatatrendsof2020/
What are the differences between Data Lake and Data Warehouse?
Overview
 Understand the meaning of data lake and data warehouse
 We will see what are the key differences between Data Warehouse and Data Lake
 Understand which one is better for the organization
Introduction
From processing to storage, every aspect of data has become important for organizations, simply due to the sheer volume of data we produce in this era. When it comes to storing big data, you might have come across the terms Data Lake and Data Warehouse. These are the 2 most popular options for storing big data.
Having been in the data industry for a long time, I can vouch for the fact that a data warehouse and a data lake are 2 different things, yet I see many people using the terms interchangeably. As a data engineer, understanding data lakes and data warehouses, along with their differences and usage, is crucial: only then will you understand whether a data lake or a data warehouse fits your organization.
So in this article, let me satisfy your curiosity by explaining what data lakes and data warehouses are and highlighting the differences between them.
Table of Contents
 What is a Data Lake?
 What is a Data Warehouse?
 What are the differences between Data Lake and Data Warehouse?
 Which one to use?
What is a Data Lake?
A Data Lake is a common repository capable of storing a huge amount of data without maintaining any specified structure. You can store data whose purpose may or may not yet be defined. Its purposes include building dashboards, machine learning, and real-time analytics.
Now, when you store a huge amount of data in a single place from multiple sources, it is important that it be in a usable form, with rules and regulations in place to maintain data security and data accessibility.
Otherwise, only the team who designed the data lake knows how to access a particular type of data. Without proper information, it would be very difficult to distinguish between the data you want and the data you are retrieving. So it is important that your data lake does not turn into a data swamp.
What is a Data Warehouse?
A Data Warehouse is a database that stores only preprocessed data. Here the structure of the data is well-defined, optimized for SQL queries, and ready to be used for analytics purposes. Other names for the Data Warehouse are Business Intelligence Solution and Decision Support System.
What are the differences between Data Lake and Data Warehouse?
Data Storage and Quality
 Data Lake: captures all types of data (structured and unstructured) in their raw format. It contains data that might be useful in a current use case and data that is likely to be used in the future.
 Data Warehouse: contains only high-quality data that has already been preprocessed and is ready to be used by the team.
Purpose
 Data Lake: the purpose is not fixed; sometimes organizations have a future use case in mind. General uses include data discovery, user profiling, and machine learning.
 Data Warehouse: holds data already prepared for some use case. Its uses include Business Intelligence, visualizations, and batch reporting.
Users
 Data Lake: Data Scientists use data lakes to find patterns and useful information that can help the business.
 Data Warehouse: Business Analysts use data warehouses to create visualizations and reports.
Pricing
 Data Lake: comparatively low-cost storage, as little attention is paid to storing the data in a structured format.
 Data Warehouse: storing data is a bit costlier and also a time-consuming process.
Which one to use?
We have seen the differences between a data lake and a data warehouse. Now, which one should you use?
If your organization deals with healthcare or social media, the data you capture will be mostly unstructured (documents, images), and the volume of structured data will be very small. Here, a data lake is a good fit, as it can handle both types of data and gives more flexibility for analysis.
If your online business is divided into multiple pillars, you will obviously want summarized dashboards for all of them. A data warehouse will be helpful in this case for making informed decisions, as it maintains the quality, consistency, and accuracy of the data.
Most of the time, organizations use a combination of both: they do data exploration and analysis on the data lake, and move the enriched data to the data warehouse for quick and advanced reporting.
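As a toy illustration of this hybrid pattern (purely a sketch: the file layout, field names and table are invented for the example), raw records can sit in a "lake" of schema-less JSON files while a cleaned subset is loaded into a SQL "warehouse" for reporting:

```python
import json
import sqlite3
import tempfile
from pathlib import Path

# --- the "lake": raw, schema-less JSON files dumped as-is ---
lake = Path(tempfile.mkdtemp())
raw_events = [
    {"user": "a", "amount": "19.99", "note": "ok"},
    {"user": "b", "amount": "5.00"},       # fields may be missing
    {"user": "a", "amount": "bad-value"},  # or malformed
]
for i, event in enumerate(raw_events):
    (lake / f"event_{i}.json").write_text(json.dumps(event))

# --- exploration/cleaning: keep only records that parse cleanly ---
clean = []
for path in sorted(lake.glob("*.json")):
    record = json.loads(path.read_text())
    try:
        clean.append((record["user"], float(record["amount"])))
    except (KeyError, ValueError):
        continue  # malformed rows stay in the lake, out of the warehouse

# --- the "warehouse": a structured, query-ready table ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (user TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)", clean)
total = db.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

The lake keeps everything, including the malformed record, while the warehouse holds only the preprocessed, structured rows that analysts query.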
End Notes
In this article, we have seen the differences between a data lake and a data warehouse on the basis of data storage, purpose, and which one to use. Understanding these concepts will help a big data engineer choose the right data storage mechanism and thus optimize the cost and processes of the organization.
If you find this article informative, then please share it with your friends and comment below your queries and feedback.