Everything is for sale. If capitalism needed a motto that should be it.
However, up until now “everything” didn’t include data.
The gigantic profusion of digital information in the modern world is the price of doing business. It stands to reason, then, that organizing this data so it can be mined for patterns and insights will be a major benefit and competitive advantage for those entities agile enough to seize the opportunity.
There is, however, a problem with commoditizing data so it can be sold in a marketplace like any other asset, and that’s accessibility.
Why does this really matter? Isn’t it just more blue-sky thinking from the techies that over-promises and under-delivers?
Well, it matters to companies small and large because without access to their own data, or without the ability to share that data internally and externally, they will have a flawed view of their business: their intelligence has blind spots that could skew decision-making.
Closed silos of wasted data
It is one thing to know that the data exists; it is quite another to use it when it is trapped in silos unwittingly created by the company, with each department’s system walled off from interoperability with all the others except those previously deemed to be stakeholders.
So not only do companies lack an audited overview of all their data, they often fail to enable the various datasets to interact, or at a minimum to be aware of each other’s existence.
What are the practical implications of this?
A firm may have an order fulfillment process designed to accept postal addresses in a certain format. Suppose the company is expanding to China and wants to deliver to Chinese addresses, so its system needs to read Mandarin script. If the company’s fulfillment software can connect to a data source of Chinese addresses in Mandarin, it can look up the addresses it needs.
This could be solved with a data marketplace where our expanding company can access the Chinese data it needs and open it up to its systems.
Those systems are often a hybrid of technology developed and tweaked over many years, using new and old technology, from spreadsheets to databases to “airtight” cloud storage.
Data will sometimes be unstructured and therefore unusable by software, and the custodians of the data may well be using incompatible data models.
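To make the Chinese-address scenario concrete, here is a minimal sketch of a fulfillment system consulting an externally licensed dataset. The dataset contents and the lookup function are illustrative assumptions, not any real marketplace’s API.

```python
# Hypothetical sketch: fulfillment software querying a dataset of Chinese
# city names (licensed via a data marketplace) to render addresses in
# Mandarin script. The data below is an illustrative stand-in.
chinese_cities = {
    "Beijing": "北京市",
    "Shanghai": "上海市",
    "Shenzhen": "深圳市",
}

def localize_city(city_romanized: str) -> str:
    """Return the Mandarin form of a romanized city name."""
    try:
        return chinese_cities[city_romanized]
    except KeyError:
        # The licensed dataset does not cover this city.
        raise ValueError(f"City not in licensed dataset: {city_romanized}")

print(localize_city("Beijing"))  # 北京市
```

The point is not the three-line dictionary but the seam: the fulfillment code only works if it can reach a dataset someone else maintains, which is exactly what a marketplace would broker.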
Thinking through the isolation of datasets in silos, it is easier to see that the solution must come through introducing interoperability and fluidity in order to break down those impervious silo structures.
Yet, there can be good reasons why silos exist, both in terms of security of information and the protection of the integrity of records.
The sheer amount of data being generated (2.5 quintillion bytes a day) is not the only problem to be solved.
There is also the contemporary dynamic landscape of data creation to contend with.
Just think about how much data an individual generates simply by browsing the web on their smartphone: the geo data, server requests, downloads and so on. Now multiply that for the commercial world, where payment transactions, for example, are measured in the tens of thousands per second, each transaction its own record. And that’s before we even get started on the data Internet of Things devices are generating, which is set to explode.
Data marketplaces: from Google Cloud to the blockchain-fuelled future
Data marketplaces, centralized or decentralized, are the future. For sure, companies such as Facebook can get their data from you and me for free, but others will have to find sellers to buy the data from.
Amazon filed a patent this year for a streaming data marketplace, for example. One of the most exciting developments comes from Google, with its recently launched Google Cloud Platform suite of services, including commercial access to datasets and the ability to run queries against them in Google BigQuery.
Google’s data marketplace may be the most advanced on the market at present. Examples of the datasets that can be accessed include the 1000 Cannabis Genome Project, the bitcoin blockchain and City of Chicago taxi trips from 2013 to the present. Google, as you might expect, entices prospective customers with a generous introductory offer of $300 in credit.
However, all those examples are non-blockchain solutions and as such could perhaps be at a disadvantage to a new breed of platforms from blockchain start-ups.
Distributed ledger technology can be leveraged to track with granular detail every piece of data and its lifecycle history.
Blockchain’s two key features of immutability and encryption are ideal for protecting data from unapproved changes and unauthorized access.
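The tamper-evidence idea is easy to demonstrate in miniature. Below is a minimal sketch, not any specific blockchain platform: each lifecycle event of a dataset commits to the previous entry via a hash, so altering any past record invalidates every later one.

```python
# Minimal hash-chained ledger sketch illustrating tamper-evident
# data-lifecycle tracking (an assumption-level illustration, not a
# real distributed ledger).
import hashlib
import json

def add_entry(chain: list, event: dict) -> None:
    """Append an event that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any edited entry breaks the chain."""
    for i, entry in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

chain = []
add_entry(chain, {"action": "created", "dataset": "births-asia-2017"})
add_entry(chain, {"action": "sold", "buyer": "some-buyer-id"})
assert verify(chain)
chain[0]["event"]["action"] = "deleted"  # tamper with history...
assert not verify(chain)                 # ...and verification fails
```

A real ledger adds distribution and consensus on top of this, but the hash-linking is what makes every piece of data’s history auditable at granular detail.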
SciDex’s alpha is up and running on the Rinkeby testnet – you can use the MetaMask Chrome browser extension to test it.
Examples of the data available for purchase, using the native SciDex Token, include “Number of infants born in Spain from 1990”. The data costs SDX 5,500. Or perhaps “Birth rate in Asia in 2017” piques interest for SDX 4,000.
In addition to “Find Data”, users can choose “Provide Data” to sell their own datasets.
Describe the nature of the data, select one of three contract templates (for scientists, small businesses and enterprises) and then upload, although that last part is not available in the alpha version.
In the third part of the data-selling journey users are asked to specify “variations”, which are essentially the various ways the dataset could be presented or split up, such as by age, location, dates, industry and so on. Each variation can be priced separately.
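A listing with separately priced variations could be modeled along these lines. This is an illustrative sketch, assuming a simple data model; it is not SciDex’s actual schema, and the names and prices are made up for the example.

```python
# Hypothetical model of a marketplace listing whose "variations"
# (slices of the same dataset) each carry their own price in tokens.
from dataclasses import dataclass, field

@dataclass
class Variation:
    split_by: str    # e.g. "age", "location", "year", "industry"
    value: str       # which slice, e.g. "2017"
    price_sdx: int   # price of this slice in SDX tokens

@dataclass
class Listing:
    title: str
    variations: list = field(default_factory=list)

    def price_of(self, split_by: str, value: str) -> int:
        """Look up the price of one variation of the dataset."""
        for v in self.variations:
            if v.split_by == split_by and v.value == value:
                return v.price_sdx
        raise KeyError((split_by, value))

listing = Listing("Birth rate in Asia")
listing.variations.append(Variation("year", "2017", 4000))
listing.variations.append(Variation("year", "2016", 3500))
print(listing.price_of("year", "2017"))  # 4000
```

The design choice worth noting is that pricing attaches to the slice, not the dataset, so a buyer who only needs one year or one region pays only for that.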
The “Call for Actions” is the really clever part, where customers set up their interactions with dataset smart contracts.
State of the art protocol for trading data
The foundation of that interaction is the SciDex Protocol’s adaptable Ricardian smart contracts, which are both human- and machine-readable.
They are also adaptable enough to respond to new parameters and events. Contrast that with a conventional smart contract limited to recurring events, such as an oracle feeding it time data so that it knows when to pay a dividend – with that being all the contract knows about the world.
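The core Ricardian idea – one document carrying legal prose for humans and structured terms for machines, bound together by a hash – can be sketched as follows. The field names and figures here are assumptions for illustration, not the SciDex Protocol’s actual format.

```python
# Minimal sketch of a Ricardian-style contract: human-readable prose
# and machine-readable terms live in one document, and a digest of the
# whole document binds the two so they cannot drift apart unnoticed.
import hashlib
import json

contract = {
    "prose": ("Seller grants Buyer a licence to the 2017 Asia "
              "birth-rate dataset for 4,000 SDX, payable on delivery."),
    "terms": {                      # the part software can act on
        "dataset": "birth-rate-asia-2017",
        "price_sdx": 4000,
        "pay_on": "delivery",
    },
}

# Both parties sign and verify the same digest of prose + terms.
digest = hashlib.sha256(
    json.dumps(contract, sort_keys=True).encode()
).hexdigest()

amount_due = contract["terms"]["price_sdx"]  # the machine-readable side
print(amount_due, digest[:12])
```

Because the digest covers the prose as well as the terms, editing the legal text after signing is as detectable as editing the price.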
Let’s take a closer look at Streamr. It has an Editor in alpha release that those interested can try out to get a peek at the future of data.
The innovative drag-and-drop interface makes it a breeze to get datasets working together and to add programs to process events.
Streamr provides an example of a person in an electric vehicle selling battery-level data, selling route-condition data and buying nearby charge-point prices, all handled in real time. It gives the reader a fascinating insight into the possibilities of data marketplaces.
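The real-time pattern in that example – publish one stream, react to another – can be sketched as plain event handlers. This is an assumption-level illustration of the idea, not Streamr’s API; the stream names, thresholds and prices are invented.

```python
# Hypothetical event handlers for the electric-vehicle scenario:
# the car sells its battery-level stream and reacts to a purchased
# stream of nearby charge-point prices.

def on_battery_level(state: dict, level_pct: float) -> None:
    """Publish the vehicle's battery level (the stream it sells)."""
    state["published"].append({"stream": "battery", "level": level_pct})
    state["level"] = level_pct

def on_charge_price(state: dict, station: str, price: float) -> None:
    """React to the purchased price stream: flag cheap charging
    opportunities when the battery is low."""
    if state["level"] < 20 and price < 0.30:
        state["alerts"].append((station, price))

state = {"level": 100.0, "published": [], "alerts": []}
on_battery_level(state, 15.0)           # battery is now low
on_charge_price(state, "CP-42", 0.25)   # cheap charger nearby -> alert
on_charge_price(state, "CP-43", 0.45)   # too expensive -> ignored
print(state["alerts"])  # [('CP-42', 0.25)]
```

In a marketplace setting, each handler would be wired to a paid data stream rather than a local function call, but the event-driven shape is the same.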
No one knows which data marketplace will get traction first, but we do know that data is the business opportunity of the 21st century.
Geoffrey Moore, the author of Crossing the Chasm, a bestseller on marketing and selling disruptive products, says the data war will take no prisoners:
“Without big data analytics, companies are blind and deaf, wandering out onto the Web like deer on a freeway.”
Data marketplaces may be the key utility – essential to modern life, like water or electricity – of our connected future worlds.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.