As a long-time member of the Linked Data community, which evolved from the W3C's Semantic Web initiative, I have become increasingly interested in the latest developments around Data Science, because the two fields offer complementary perspectives on similar challenges. Both disciplines work on questions like these:
- How to extract meaningful information from large amounts of data?
- How to connect pieces of information to other pieces in order to generate ‘bigger pictures’ of sometimes complex problems?
- How to visualize complex information structures in a way that decision-makers can benefit from them?
Two complementary approaches
Two complementary approaches

A closer look at the approaches taken by these two 'schools of advanced data management' makes one aspect obvious: both try to develop models in order to 'codify and compute over the data soup'.
While Linked Data technologies are built on top of knowledge models ('ontologies'), which primarily describe data in distributed environments like the web, Data Science methods are mainly based on statistical models. One could say: 'Causality and Reasoning over Distributed Data' meets 'Correlation and Machine Learning on Big Data'.
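To make the contrast concrete, here is a minimal sketch of the knowledge-model side: a handful of RDF-style triples plus a subclass hierarchy, from which a simplified RDFS-style reasoner derives new facts. The `ex:` resources are made-up example data, not from any real vocabulary.

```python
# Hypothetical facts and a small ontology, expressed as RDF-style triples.
triples = {
    ("ex:Alice", "rdf:type", "ex:DataScientist"),
    ("ex:DataScientist", "rdfs:subClassOf", "ex:Scientist"),
    ("ex:Scientist", "rdfs:subClassOf", "ex:Person"),
}

def infer_types(triples):
    """Derive the rdf:type facts implied by rdfs:subClassOf
    (a simplified, transitive RDFS-style inference)."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(inferred):
            if p != "rdf:type":
                continue
            for s2, p2, o2 in list(inferred):
                if p2 == "rdfs:subClassOf" and s2 == o:
                    new_fact = (s, "rdf:type", o2)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

facts = infer_types(triples)
print(("ex:Alice", "rdf:type", "ex:Person") in facts)  # True
```

This is the 'causality and reasoning' style: new knowledge follows deterministically from the model. The statistical side, by contrast, would estimate such relationships from correlations in the data.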
Graph databases are key to success
Despite this supposed contradiction, the correlations and complementarities between the two disciplines prevail. Both seek solutions to the problem of rigid data structures that can hardly adapt to the needs of dynamic knowledge graphs. Whenever relational databases cannot meet requirements for performance and simplicity because of the complexity of the necessary queries, graph databases can be used as an alternative.
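A typical case where relational queries become unwieldy is a variable-length path query: in SQL this requires recursive joins, while on a graph structure it is a plain traversal. A small sketch, using a hypothetical 'knows' graph stored as an adjacency dict:

```python
from collections import deque

# Hypothetical social graph: who knows whom (directed edges).
knows = {
    "Alice": ["Bob"],
    "Bob": ["Carol"],
    "Carol": ["Dave"],
    "Dave": [],
}

def reachable(graph, start):
    """Return everyone reachable from `start` via any number of hops
    (breadth-first traversal)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable(knows, "Alice")))  # ['Bob', 'Carol', 'Dave']
```

Graph databases execute exactly this kind of traversal natively, which is why they outperform join-heavy relational queries on highly connected data.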
Thus, both disciplines make use of these increasingly popular database technologies: Linked Data can be stored and processed by standards-based RDF stores such as Virtuoso, MarkLogic, GraphDB or Sesame, while the graph databases most popular with Data Scientists, for example Titan or Neo4j, are mainly based on the property graph model. Some vendors, like Bigdata, even support both graph models.
The two graph models are similar and can be mapped to each other, but they address slightly different problems:
- the property graph model serves better the needs of Graph Data Analysts (e.g. for Social Network Analysis or for real-time recommendations)
- RDF graph databases are great when distributed information sources need to be linked to each other and mashed together (e.g. for Dynamic Semantic Publishing or for context-rich applications).
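The mapping between the two models can be sketched in a few lines. Below, a hypothetical property-graph edge (with its own properties) is flattened into RDF-style triples by reifying the edge as a resource of its own; the `ex:` names and the edge ID are made up for illustration.

```python
# A single property-graph edge: labelled, directed, with key/value properties.
property_graph_edge = {
    "source": "ex:Alice",
    "target": "ex:Bob",
    "label": "knows",
    "properties": {"since": "2014"},
}

def edge_to_triples(edge, edge_id="ex:edge1"):
    """Flatten one property-graph edge into RDF-style triples,
    reifying the edge so its properties have a subject to attach to."""
    triples = [
        (edge_id, "rdf:subject", edge["source"]),
        (edge_id, "rdf:predicate", "ex:" + edge["label"]),
        (edge_id, "rdf:object", edge["target"]),
    ]
    for key, value in edge["properties"].items():
        triples.append((edge_id, "ex:" + key, value))
    return triples

for triple in edge_to_triples(property_graph_edge):
    print(triple)
```

The reverse direction is just as mechanical, which is why vendors can support both models over one engine; the difference lies in which queries each model makes convenient.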
Connect both approaches and combine methods
I see at least two ways in which methods from Data Science can benefit from Linked Data technologies and vice versa:
- Machine learning algorithms benefit from the linking of various data sets via ontologies, common vocabularies and reasoning, which yields a broader data basis with (sometimes) higher data quality
- Linked Data based knowledge graphs benefit from graph data analyses that identify data gaps and potential links (an example of a semantic knowledge graph about 'Data Science' can be found here: http://vocabulary.semantic-web.at/data-science)
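The second point can be illustrated with one of the simplest graph analyses: common-neighbour link prediction. On a hypothetical (made-up) topic graph, node pairs that share neighbours but are not yet connected are good candidates for missing links, i.e. data gaps in the knowledge graph.

```python
from itertools import combinations

# Hypothetical undirected topic graph (each pair is one existing link).
edges = {
    ("MachineLearning", "Statistics"),
    ("MachineLearning", "DataScience"),
    ("Statistics", "DataScience"),
    ("DataScience", "LinkedData"),
    ("SemanticWeb", "LinkedData"),
    ("SemanticWeb", "Ontology"),
    ("LinkedData", "Ontology"),
}

def suggest_links(edges):
    """Rank unlinked node pairs by the number of neighbours they share."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    linked = {frozenset(e) for e in edges}
    scores = []
    for a, b in combinations(sorted(neighbors), 2):
        if frozenset((a, b)) not in linked:
            common = len(neighbors[a] & neighbors[b])
            if common:
                scores.append((common, a, b))
    return sorted(scores, reverse=True)

for score, a, b in suggest_links(edges):
    print(score, a, b)
```

Real knowledge graphs would use richer measures (e.g. path- or embedding-based scores), but even this toy version shows how graph analysis can point curators at links worth adding.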
Questions on the use of Linked Data in businesses
We want to learn more about how stakeholders working in different industry verticals view the status of Linked Data technologies. The main question is: is Linked Data perceived as mature enough to be used at large scale in enterprises? The results will contribute to the development of the Linked Data market by reporting how enterprises currently think.