HACKER Q&A
📣 gpa

How to extract information from multiple (unstructured text) documents?


I need to extract certain information from research publications, such as species, biomass, geographic location, and possibly related environmental data. Assume that I will convert PDF to text and, if necessary, do OCR. But here's the catch: other species with similar data can sit quite close to my target species on the same page, in the same paragraph or sentence, or in the same table. Moreover, indicator values can be very close or identical (e.g., biomass B = 1.2 kg/m^2), as the species are from the same genus. For example, Mytilus has three species (actually more): Mytilus edulis, Mytilus trossulus, and Mytilus galloprovincialis. How would an algorithm with no prior knowledge determine that a specific value relates to my target species rather than, say, the one adjacent to it in the same table or paragraph? As a human with prior knowledge, I know what to look for, but I cannot process hundreds or thousands of articles as quickly as a machine can. Does anyone have experience with a tool that can correctly parse such information after appropriate setup? I am aware of:

- HN search results (https://hn.algolia.com/?q=information+extraction)

- Apache Tika (https://tika.apache.org/)

- Apache OpenNLP (https://opennlp.apache.org/)

- Apache UIMA (https://uima.apache.org/external-resources.html)

- GATE (https://gate.ac.uk/)

But I am not sure whether any of these can do the job, as I haven't used them. I also know that there are companies that have developed similar solutions (https://www.ontotext.com/knowledgehub/case-studies/ai-content-generation-in-scientific-communication/), possibly using GraphDB. In addition, what is the best data storage solution? In one case you extract a whole table from the publication; in another, just a single data point, and it's not worth the effort of creating a separate table for a single data point. What would be the right approach, software (library), workflow, and data storage solution in this case?


  👤 PaulHoule Accepted Answer ✓
No off-the-shelf information extraction system is going to be useful for your task. In particular, most of the systems you list are notorious rabbit holes and dead ends. (Well, UIMA was developed by IBM to support projects that have 100+ coders and data entry people; it's not a dead end if you have a budget that big...)

If getting the right answer matters to you, you need to start with a workflow system that lets you do the task manually. You will absolutely need it for two reasons: (1) editing cases that the extraction system gets wrong, and (2) creating a training/evaluation set for the extraction pipeline.
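To make that concrete, here is a minimal sketch of what one manually-curated extraction record might look like, assuming Python and a flat "one row per extracted data point" layout. Every field name here is a hypothetical illustration, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    """One extracted data point, whether it came from a table or a sentence."""
    doc_id: str            # identifier of the source publication
    species: str           # canonical species name, e.g. "Mytilus edulis"
    variable: str          # what was measured, e.g. "biomass"
    value: float
    unit: str              # e.g. "kg/m^2"
    evidence: str          # verbatim sentence or table cell the value came from
    reviewed: bool = False # flipped to True once a human has checked it

record = Extraction(
    doc_id="doi:10.xxxx/example",
    species="Mytilus edulis",
    variable="biomass",
    value=1.2,
    unit="kg/m^2",
    evidence="B = 1.2 kg/m^2",
)
```

One point of a record shape like this: storing every extraction as a row, whether it was pulled from a full table or a lone sentence, means the "table vs. single data point" storage question collapses into one long-format store, and the `evidence` and `reviewed` fields are exactly what the manual editing and training-set steps need.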

Once you have a well-defined task that you can do manually, you can think about automating some of the extraction (80% is realistic) with rules such as regexes, or with RNN/CNN/Transformer models.
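As a toy illustration of the rule-based end, here is a sketch under strong assumptions: a fixed species list, a single made-up biomass pattern, and the naive heuristic that a value belongs to the nearest preceding species mention:

```python
import re

# Assumed, hand-maintained species list; a real pipeline would use a
# taxonomy resource rather than hard-coded names.
SPECIES = ["Mytilus edulis", "Mytilus trossulus", "Mytilus galloprovincialis"]
SPECIES_RE = re.compile("|".join(re.escape(s) for s in SPECIES))

# Hypothetical pattern for values written like "B = 1.2 kg/m^2".
BIOMASS_RE = re.compile(r"B\s*=\s*([\d.]+)\s*kg/m\^?2")

def extract(sentence):
    """Pair each species mention with the next biomass value after it."""
    pairs = []
    for m in SPECIES_RE.finditer(sentence):
        b = BIOMASS_RE.search(sentence, m.end())
        if b:
            pairs.append((m.group(), float(b.group(1))))
    return pairs

text = ("Mytilus edulis had B = 1.2 kg/m^2 while "
        "Mytilus trossulus had B = 0.8 kg/m^2.")
print(extract(text))
```

Note that the nearest-mention heuristic fails in exactly the ambiguous cases described in the question (two congeners in one sentence or table row), which is why the manual workflow and training set come first: they tell you how often rules like this are wrong and give you the labels to train something better.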

My contacts in Argentina, who do projects like this all the time, say that it takes maybe 20,000 examples to train an extraction model, and that fits my experience. What separates the people who succeed at this kind of project from those who fail is that those who succeed build the training set, while those who fail exhaust themselves evaluating projects like Tika, OpenNLP, UIMA, etc.