The availability of up-to-date and accurate data is a fundamental requirement for the vast majority of industry sectors. Whether it’s Facebook exploring where we like to eat or major credit institutions attempting to ascertain our financial health, data plays a substantial role in business-related decision making. However, one industry that appears to be falling behind in the age of big data is real estate. Many argue that credible information within the industry is now a rare resource, with the result that data providers hoard the material they possess. Could decentralization be the solution?
The real estate industry has a significant relationship with data, especially when one considers the end-to-end transactional process. Whether it’s a development company exploring the availability of land in a particular area, a real estate agent gauging market demand or a bank processing a mortgage application, every outcome creates a new data-driven exercise. The big problem is that this data is not distributed across the wider real estate industry, meaning its only use is for those who are in receipt of it.
As a result, not only is the availability of data an issue for the stakeholders who need it, but accuracy is often lacking too. The symptom of such a model is that industry leaders essentially monopolize the data they hold, with no incentive to collaborate with their real estate counterparts. A fine example of this is the online valuation sector.
In times gone by, the only way for potential buyers to discover which properties were available was literally to come across a “For Sale” sign positioned outside the home. Not only can this now be achieved through an online portal, but it is also possible to view the approximate value of the property. However, herein lies the problem. These approximations are just that, approximate, meaning that the reliability of the data provided is often wanting.
One such example is Zillow, which currently dominates the online real estate database sector. Its nearest competitor was a platform called Trulia; however, in 2014 Zillow flexed its muscles through a $3.5 billion acquisition. Although other independent alternatives exist in the space, they are minute in comparison. Nevertheless, by accessing the Zillow website, buyers can scroll through millions of for-sale properties, each positioned alongside its estimated valuation.
To obtain its estimations, or “Zestimate” as Zillow calls them, the platform combines an algorithmic formula with publicly available data. Although artificial intelligence certainly has a role to play in the data-driven space, the key issue is that the data Zillow bases its model on is highly inaccurate. The company claims that its median error rate is less than 5 percent; however, this can be higher in certain U.S. states.
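To make the “median error rate” metric concrete, here is a small illustrative sketch of how such a figure is typically computed: take the absolute percentage error between each estimate and the eventual sale price, then take the median. The figures below are invented for illustration and are not Zillow data.

```python
def median_error_rate(estimates, sale_prices):
    """Median absolute percentage error across paired valuations."""
    errors = sorted(
        abs(est - actual) / actual
        for est, actual in zip(estimates, sale_prices)
    )
    n = len(errors)
    mid = n // 2
    # Average the two middle values when the count is even.
    return errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2

# Hypothetical estimates vs. actual sale prices (illustration only).
estimates   = [310_000, 198_000, 455_000, 120_000, 275_000]
sale_prices = [300_000, 210_000, 450_000, 125_000, 260_000]

rate = median_error_rate(estimates, sale_prices)
print(f"median error rate: {rate:.1%}")
```

The median (rather than the mean) is used so that a handful of wildly wrong valuations do not distort the headline figure, which is precisely why a low median error rate can coexist with very large errors on individual homes.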
According to Steve Freeman of Keller Williams, a U.S.-based real estate agency, the Zillow model is highly inaccurate. Comparing it to a broken clock that is correct twice a day, Freeman states that he knows of no agents who actually use Zillow. A further issue with the Zillow platform is that it does not update the status of listed properties regularly enough, resulting in countless enquiries from prospective buyers about properties that are no longer on the market.
If these claims are correct, how is it that the platform dominates the sector so strongly? Once again, this leads back to the issue of industry leaders monopolizing the space, even though the data they provide is of little value to the vast majority of stakeholders. The Zillow model isn’t necessarily a bad one, as it aims to utilize the growth of digitalization to provide the end user with value. The problem is that the platform is unable to effectively obtain up-to-date, useful data, subsequently skewing the information its algorithmic model generates.
Taking all of the above concerns into consideration, the solution certainly lies in collaboration.
As there is such a large breadth of industry stakeholders involved in real estate, data is scattered on an industrial scale. If these data providers were able to channel their material through a secure, decentralized hub accessible to others operating in the industry, it could open the doors to a revitalized real estate database.
Although this sounds like a step in the right direction, a fundamental hindrance remains: incentivization. Why would data providers want to share the data they possess with potential competitors? The solution could be achieved on two fronts. First and foremost, if both stakeholder A and stakeholder B share the data they hold, then both parties benefit, as each gains access to information it might otherwise not have been able to obtain.
The central hub would not only need to be secure, but autonomous too. This would essentially remove the role of a centralized third party and instead create a decentralized data-sharing ecosystem. One organization looking to fill this gap is ReBloc, a startup that aims to merge the immutable and scalable benefits of blockchain technology with real estate data. By hosting a go-to hub for the distribution and extraction of real estate data, it essentially creates an opportunity for stakeholders to improve their decision-making processes.
The second front that could pave the way for a more credible real estate data space is incentivization in the form of reward-based contribution. In a nutshell, if the blockchain protocol acts as a central hub for data, then it also has the means to reward data providers through tokenization. In other words, a decentralized data ecosystem can bring buyers and sellers together without the need for a third party.
Through the use of smart contract technology, data requests can automatically result in real-time delivery. Moreover, with data providers operating in a transparent ecosystem, they can further amplify their benefits by providing ultra-accurate information, essentially ensuring that the platform retains its integrity.
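The request-and-reward flow described above can be sketched in a few lines: a provider lists a dataset at a token price, and a request atomically debits the buyer, credits the provider, and delivers the data. This is illustrative logic written in Python, not ReBloc’s actual protocol or real smart-contract code; all names and figures are invented.

```python
class DataExchange:
    """Toy model of a tokenized data marketplace (illustration only)."""

    def __init__(self):
        self.balances = {}   # token balances per participant
        self.listings = {}   # dataset id -> (provider, price, payload)

    def list_data(self, provider, dataset_id, price, payload):
        """A provider offers a dataset at a token price."""
        self.listings[dataset_id] = (provider, price, payload)
        self.balances.setdefault(provider, 0)

    def request_data(self, buyer, dataset_id):
        """Settle payment and deliver the data in one step."""
        provider, price, payload = self.listings[dataset_id]
        if self.balances.get(buyer, 0) < price:
            raise ValueError("insufficient token balance")
        self.balances[buyer] -= price
        self.balances[provider] += price
        return payload

# Hypothetical usage: a buyer with 100 tokens purchases a listing.
exchange = DataExchange()
exchange.balances["buyer"] = 100
exchange.list_data("agency", "tx-records-2018", price=25,
                   payload={"sales": 1_204, "median_price": 312_500})
data = exchange.request_data("buyer", "tx-records-2018")
print(exchange.balances)
```

On an actual blockchain, the `request_data` step would be a smart contract call, so the payment and the delivery either both happen or neither does, removing the need for a trusted intermediary to escrow the transaction.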
And who would make the perfect buyer in such an ecosystem? Zillow, because if the data it utilizes were not only up to date but reliable too, its algorithmic model could improve quite significantly.
Brian Wang is a futurist thought leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 science news blog. It covers many disruptive technologies and trends, including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
Known for identifying cutting-edge technologies, he is currently a co-founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.