HACKER Q&A
📣 shekhar101

What's the current best way to extract tables from PDFs?


I have a set of PDFs that are bank statements. The statement format differs by bank, but it's a limited set (<15). What's the current best approach to extracting tabular data from PDFs? I tried writing custom logic based on pdfplumber and the like, but it's fragile and full of ad-hoc rules, so the maintenance burden is high. Are there small models, preferably ones that can run on CPU alone, that I could fine-tune for this task? Any guides or pointers for that? I see a lot of available models, but as someone with no ML background, it's difficult to navigate through them.


  👤 jonahbenton Accepted Answer ✓
Field report: the problem is subtle. I wrote code to do this for my own statements, rather than use the banks' CSV exports, because the statement is a regulated document (CSVs are not) and it carries balances for validation (which CSVs also lack).

I wound up with a pipeline of pdftotext -> configurable regexes to capture the transactions within their respective sections (banks list credits and debits separately, without indicating the sign in the amount field) -> BNF parser to turn transaction lines into data, then a check that start balance + transactions = end balance.

A pain in the butt, but it works well.
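The steps after pdftotext can be sketched in Python. Everything concrete here is an illustrative assumption, not the actual setup: the section headers, the transaction regex, and the amount format would all need to be configured per bank.

```python
import re
from decimal import Decimal

# Hypothetical transaction line format: "01/15 GROCERY STORE 42.50"
TXN_RE = re.compile(r"^(\d{2}/\d{2})\s+(.+?)\s+([\d,]+\.\d{2})$")

def parse_statement(text, start_balance, end_balance):
    """Parse pdftotext output into signed transactions, then verify
    start balance + sum(transactions) == end balance."""
    sign = Decimal(0)  # no sign until we've seen a section header
    txns = []
    for line in text.splitlines():
        line = line.strip()
        # Section headers determine the sign: banks list credits and
        # debits separately without a sign on the amount itself.
        if line.upper().startswith("DEPOSITS"):
            sign = Decimal(1)
            continue
        if line.upper().startswith("WITHDRAWALS"):
            sign = Decimal(-1)
            continue
        m = TXN_RE.match(line)
        if m and sign:
            date, desc, amount = m.groups()
            txns.append((date, desc, sign * Decimal(amount.replace(",", ""))))
    total = sum((t[2] for t in txns), Decimal(0))
    if start_balance + total != end_balance:
        raise ValueError(
            f"balance check failed: {start_balance} + {total} != {end_balance}")
    return txns
```

The balance check is what makes the fragility tolerable: if a regex silently misses or mangles a line, the totals won't reconcile and the run fails loudly instead of emitting bad data.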

Over the winter I'll be standing up a local model to see whether a sophisticated prompt can reliably accomplish the same.
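One way such a prompt might look (purely a sketch; the JSON schema and field names are assumptions, and the same balance reconciliation check should still be run on the model's output):

```
You are given the plain text of one page of a bank statement.
Extract every transaction as a JSON array of objects with fields:
  date (MM/DD), description (string), amount (decimal, negative for
  debits/withdrawals, positive for credits/deposits).
Output only the JSON array, nothing else. If a line is not a
transaction, skip it.
```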

I'm not going to base any workflow involving my transaction data on hosted models.


👤 andrewio
To extract tables from PDFs, you can use the following tools:

1. Tabula (https://tabula.technology): a free and open-source tool.

2. Parsio (https://parsio.io): uses pre-trained AI models for data extraction from PDFs, emails, and other formats.

3. Airparser (https://airparser.com): uses a GPT-based approach, similar to ChatGPT, for data extraction from PDFs, emails, and other formats.