The United Nations’ Climate Technology Centre & Network (CTCN) provides technical assistance in response to requests submitted by developing countries via their nationally selected focal points, the National Designated Entities (NDEs). Upon receipt of such requests, the CTCN must quickly mobilize its global network of climate technology experts to design and deliver a customized response tailored to local needs. Because the volume of requests, as well as the number of experts and technical assistance responses ('solutions'), is growing considerably, the CTCN decided to explore whether and how an automated process could support the filtering of technical assistance requests ('requirements') and the identification of experts and resources to fulfil them (a "Matchmaking Assistant"). Ideally, such a process would offer NDEs the opportunity to query the CTCN's knowledge management system using problem-oriented language to find expertise, case studies and good-practice stories, relevant documents, and signposts to other relevant knowledge sources. This scenario would allow the CTCN to significantly scale up its technical assistance work to developing countries.
A demonstrator matchmaking assistant was developed to explore how existing tools, such as the REEEP Climate Tagger (which identifies the most relevant concepts in unstructured text and is based on an expert-developed climate thesaurus) and the underlying PoolParty Semantic Suite technology, can address the above-mentioned challenge.
This talk outlines the matchmaking scenario supported by the demonstrator, the technical set-up of the demonstrator, user feedback and lessons learned.
Publications in peer-reviewed scientific journals are seen as a performance indicator reflecting the productivity of a research-based pharmaceutical company focused on innovation and new therapeutic concepts for unmet medical needs. A new corporate publication tracking system serves Boehringer Ingelheim employees and the Research leadership team as an important benchmarking and tracking tool for “Boehringer Ingelheim Papers” published in the global scientific community. Our semantic approach combines the results of automated literature database alerts with manually curated and enriched data to track, store and visually analyse articles published in peer-reviewed scientific journals, based on semantic analysis of publications by Boehringer Ingelheim authors.
With a triple store you can model data as RDF triples. Triples are atomic, so they are easy to create, combine and share. The RDF data model can be very powerful, but it is not ideal for all your information and all your applications. Some data is better modeled as documents, which may be mostly structured text plus metadata, or as richly structured data objects. Ideally you would want to manage all this data in one place, query across all the data models, and combine data models and queries in interesting ways. In this talk we’ll describe some real-world projects built on a multi-model NoSQL database - MarkLogic - which lets you manage and search all of these data models, separately and in combination.
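The atomicity point above can be illustrated with a minimal plain-Python sketch (not MarkLogic's API, and no triple store required): facts represented as subject-predicate-object tuples merge with a simple set union, with no schema alignment needed. All names and data here are hypothetical.

```python
# Two independently created sets of atomic (subject, predicate, object) triples.
g1 = {("alice", "worksFor", "acme")}
g2 = {("acme", "locatedIn", "Vienna")}

# Because triples are atomic, combining two graphs is just a set union.
merged = g1 | g2

# A toy triple-pattern query: who works for acme?
employees = [s for (s, p, o) in merged if p == "worksFor" and o == "acme"]
print(employees)  # ['alice']
```

Real triple stores add persistence, indexing and SPARQL on top, but the underlying merge semantics are this simple, which is exactly what makes triples easy to create, combine and share.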
We’ll briefly describe the sweet spot for each model; then we'll talk about interesting ways to combine those models, such as having triples embedded in text documents. Then we'll show some projects where the models are used together, including:
- A major media company using triples and documents to manage digital assets and metadata
- A financial services company applying semantics to financial data to do data integration without ETL
- A scientific publisher using semantics to improve access to, and understanding of, scientific journals
- A TV company with an innovative mobile app to search, organize, and recommend movie clips using triples and digital documents
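The combination mentioned above, triples embedded in text documents, can be sketched in plain Python. This is an illustrative toy, not MarkLogic's actual API; the document shape, field names and `search` helper are all assumptions made for the example.

```python
# Hypothetical documents: structured text plus metadata, each carrying
# embedded (subject, predicate, object) triples.
docs = [
    {
        "id": "doc1",
        "text": "Quarterly results for ACME Corp show strong growth.",
        "triples": [("acme", "hasReport", "doc1"), ("doc1", "year", "2015")],
    },
    {
        "id": "doc2",
        "text": "ACME opens a new office in Vienna.",
        "triples": [("doc2", "mentions", "vienna")],
    },
]

def search(docs, keyword, pattern):
    """Return ids of documents matching both a full-text keyword and a
    (s, p, o) triple pattern, where None acts as a wildcard."""
    hits = []
    for d in docs:
        if keyword.lower() not in d["text"].lower():
            continue  # full-text filter on the document model
        for t in d["triples"]:
            if all(q is None or q == v for q, v in zip(pattern, t)):
                hits.append(d["id"])  # triple-pattern filter on the graph model
                break
    return hits

print(search(docs, "growth", (None, "hasReport", None)))  # ['doc1']
```

The point of a multi-model database is that this kind of combined query, text search and triple matching in one request, is handled by a single engine with shared indexes rather than glue code like the above.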
This talk describes three case studies where we have applied controlled vocabularies to help with information discovery and re-use. These examples span 15 years and highlight some of the developments in thinking about linked data and education during that time.
As law enforcement agencies around the world consider the use of police officer body-worn cameras to capture critical evidence during interactions with the public, what is being overlooked is how to effectively use the massive amount of video data uploaded every day. This session will cover strategies for integrating semantics into the use of body-camera video for investigations, officer training and public awareness.