Knowledge graphs are becoming increasingly important in the AI world as an enabling technology for data integration and analytics, semantic search and question answering, and other cognitive applications. However, developing and maintaining large knowledge graphs manually is too expensive and time-consuming. To accelerate and scale the process, methods and techniques from information extraction and natural language processing (NLP) can be very helpful.
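As a rough illustration of how such techniques can help, the sketch below uses spaCy's named entity recognizer to mine candidate (subject, relation, object) triples from raw text. The library choice, the naive root-verb relation heuristic, and all names are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch: mine candidate (subject, relation, object) triples from text
# with spaCy. Assumes the "en_core_web_sm" model is installed
# (python -m spacy download en_core_web_sm); the heuristic is deliberately naive.
from itertools import combinations
import spacy

nlp = spacy.load("en_core_web_sm")

def candidate_triples(text):
    """Pair co-occurring named entities in each sentence, using the sentence's
    root verb lemma as a (very rough) relation label."""
    doc = nlp(text)
    triples = []
    for sent in doc.sents:
        relation = sent.root.lemma_  # root verb of the sentence
        for subj, obj in combinations(sent.ents, 2):
            triples.append((subj.text, relation, obj.text))
    return triples

print(candidate_triples(
    "Tim Berners-Lee founded the World Wide Web Consortium in 1994."))
# e.g. [('Tim Berners-Lee', 'found', 'World Wide Web Consortium'), ...]
```

In practice such candidate triples would still need filtering, entity linking, and mapping to an ontology before being added to a knowledge graph, which is where the human curation effort shifts when extraction is automated.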
The last decades have witnessed a significant evolution in data generation, management, and maintenance. This has resulted in vast amounts of data being available in a variety of forms and formats, such as RDF. Because RDF data is represented as a complex graph structure, applying machine learning algorithms to extract valuable knowledge and insights from it is not straightforward, especially when the data is enormous. Knowledge Graph Embedding (KGE) models convert RDF graphs into low-dimensional vector spaces, but the resulting vectors suffer from a lack of explainability.
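To make the embedding idea concrete, below is a minimal sketch of TransE-style scoring, one popular KGE model used here purely as an illustration (the entities, dimensionality, and vectors are made up): every entity and relation is mapped to a vector, and a triple (h, r, t) is considered plausible when h + r lies close to t.

```python
# Minimal TransE-style sketch (one common KGE model, shown for illustration only):
# entities and relations become low-dimensional vectors, and a triple (h, r, t)
# scores well when the translation h + r lands close to t.
import numpy as np

rng = np.random.default_rng(0)
entities = ["Berlin", "Germany", "Paris", "France"]
relations = ["capitalOf"]
dim = 16  # embedding dimensionality (illustrative)

E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

def score(h, r, t):
    """TransE plausibility score: negative distance between h + r and t
    (higher = more plausible)."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Untrained vectors give arbitrary scores; after training on known triples,
# score("Berlin", "capitalOf", "Germany") should exceed
# score("Berlin", "capitalOf", "France").
print(score("Berlin", "capitalOf", "Germany"))
print(score("Berlin", "capitalOf", "France"))
```

The explainability problem mentioned above follows from this representation: once a graph is compressed into dense vectors, individual dimensions carry no symbolic meaning, so a score cannot easily be traced back to facts in the original RDF graph.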
During its first two decades, linked data has been a highly technical approach that only a small group of devotees and academics was able to apply in a handful of use cases, while mainstream adoption was hampered by a steep learning curve and complex tooling. At the same time, linked data has the potential to transform the current information landscape: there are hundreds of thousands of domain experts whose daily work could be improved by linked data-powered solutions, and thousands of organizations that could benefit from them.
While most content management systems offer taxonomy management as a non-intuitive bolt-on feature, Tridion CMS takes a far superior approach: it handles ‘human-in-the-loop’ taxonomy management in a way that is non-intrusive, respectful of the author’s time, and intelligent. We will demonstrate how it works with both unstructured web content and structured (DITA) content, and the resulting improvements in content management, findability, and discoverability.