Models for recognizing categories, entities, roles, and patterns in legal documents have been developed mainly for forms: they require a fixed structure to reliably extract the relevant information, and existing algorithms cover only predefined subjects. Neither poly-structured nor free-text data is interpreted in a meaningful way.
In this presentation we describe how we developed modules and pipelines for the industrial processing of data and content in a machine learning project in the legal domain. Lawyers' work is far from digitized. They deal with large amounts of printed documents of various types (e.g. letters, contracts, invoices, orders, offers, court documents); a single mandate can comprise several hundred paper folders. To work on specific aspects of a case, lawyers need to review, annotate, and reorder their documents, and even reprint and reattach them to a new folder.
Within the framework of its Cohesion Policy, the European Union distributed 32.5 % of the EU budget (ca. EUR 351.8 billion over seven years) to its member states. European law obliges the member states to publish how the money was spent. In this talk, we describe the effect of the EU legislation regulating the publication of Cohesion Policy data, and how decision-makers can make more sense of it through integration into Knowledge Graphs and Question Answering techniques.
The World Café is online only. Please go to https://t1p.de/session21.