Does too much data kill data?
Today’s article is a response to a recent interview published in the French magazine “L’Usine Digitale” with the head of one of the largest retail chains in France, about his vision of digital. Asked about retailers’ need to capture and analyse data, he responds: “With Big Data, we are in a bubble that, when it bursts, will do considerable damage. Too much information kills information.”
Beyond the reference to the American economist Arthur Laffer and his formula “Too much tax kills tax”, which means everything and nothing and which everyone tends to use in their own way, this leader seems to think that we are trapped under a huge mass of data we lack the capacity to process, and that this data therefore does not necessarily help us make smarter decisions.
These statements made me react, because my vision is different: data exploitation and what is called Big Data are issues with both a human and a technological dimension. An overflow of information overwhelms the human capacity to process it, but not that of machines, whose processing capacity is far superior to ours. To me, the challenge is one of capacity, not expertise.
A human dimension
The first question is whether humans can exploit the thousands of data points collected every day: what happens in stores, how a customer behaves on the brand’s mobile app, how a consumer moves through the parking lot or the aisles of the supermarket, the average basket, the types of products they buy… Marketers now have billions of data points available to them, yet it is humanly impossible to process them all, hence the importance of the technological dimension and of suitable tools.
A technological dimension
In reality, the challenge today is no longer to capture information for its own sake, but rather to retrieve the data that corresponds to specific use cases, for which we need to offer a tailored marketing response.
This is precisely what Adobe Analytics does through its Contribution Analysis module: it works on the data collected by Adobe Analytics to process information a human brain could not extract (or only after a very long time), and to measure the impact of a particular event on a result.
Let’s take the example (a real one, though the company unfortunately cannot be named) of a B‑to‑B merchant who suddenly saw an 81% increase in orders. The merchant assigned a team of five data scientists to spend a whole weekend trying to identify the reasons for this increase: unfortunately, they managed to analyse only 5 of the 300 potentially relevant dimensions. It took the tool only 30 seconds to determine that the spike in orders was linked to fraudulent discount vouchers. This finding enabled managers to deactivate the vouchers and cancel the fraudulent orders. Given the volume of data to process, solving the problem was only possible thanks to the module’s technological capabilities.
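Adobe does not publish the internals of Contribution Analysis, so here is only a minimal sketch of the general idea behind this kind of tool, not the product’s actual algorithm: scan every dimension, compare each value’s share of orders in the anomaly window against a baseline window, and surface the biggest shifts. The function name, the voucher codes and the toy data are all hypothetical.

```python
from collections import Counter

def contribution_analysis(baseline, anomaly, dimensions):
    """Rank dimension values by how much their share of orders
    shifted between a baseline window and an anomaly window.

    `baseline` and `anomaly` are lists of event dicts (one per order);
    `dimensions` is the list of dimension keys to scan.
    """
    scores = []
    for dim in dimensions:
        base_counts = Counter(e.get(dim) for e in baseline)
        anom_counts = Counter(e.get(dim) for e in anomaly)
        base_total = sum(base_counts.values()) or 1
        anom_total = sum(anom_counts.values()) or 1
        for value, n in anom_counts.items():
            # Contribution score: change in this value's share of all orders.
            shift = n / anom_total - base_counts[value] / base_total
            scores.append((shift, dim, value))
    return sorted(scores, reverse=True)

# Toy data: the anomaly window is dominated by one leaked voucher code,
# producing roughly the 81% jump in orders described above.
baseline = [{"voucher": "NONE", "country": "FR"}] * 90 + \
           [{"voucher": "SPRING10", "country": "FR"}] * 10
anomaly  = [{"voucher": "NONE", "country": "FR"}] * 90 + \
           [{"voucher": "LEAKED90", "country": "FR"}] * 91

for shift, dim, value in contribution_analysis(baseline, anomaly,
                                               ["voucher", "country"])[:3]:
    print(f"{dim}={value}: share shift {shift:+.0%}")
```

On this toy data, the leaked voucher code immediately rises to the top of the ranking. This exhaustive scan across every dimension is precisely what the five data scientists could not do by hand across 300 of them.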
Another interesting example: a company in the travel industry had noticed a significant daily revenue loss of $1.7 million. After analysis by the tool, it turned out that the client’s most profitable campaign had been disabled due to a misinterpretation of the data by the analytics team. Simply reactivating that campaign allowed this customer to stop losing upwards of $1.7M a day.
It is in this sense that we cannot say that “too much information kills information”: the machine has a processing capacity far superior to ours, but we need humans for the added value they bring and their decision-making power. The two are indispensable, and combined they allow companies to make far more profitable decisions than they could a few years ago…
What about you: what is your opinion on the use of Big Data? Feel free to continue the discussion and share your views on the subject in the comments section!