There are even distributed versions being built for reliability in the cloud: dqlite by Canonical (of Ubuntu fame) and rqlite.
Given the complexity it seems like there are use cases or needs here that I'm not seeing and I'd be very interested to know more from those who've tried.
Have you tried this? Did it go well? Or blow up? Were there big surprises along the way?
- https://sqlite.org - https://dqlite.io - https://github.com/rqlite/rqlite
I use SQLite in production for my SaaS[1]. It's really great — saves me money, required basically no setup/configuration/management, and has had no scaling issues whatsoever with a few million hits a month. SQLite is really blazing fast for typical SaaS workloads. And will be easy to scale by vertically scaling the vm it's hosted on.
Litestream was the final missing piece of the puzzle that let me use it in production — continuous backups for SQLite like other database servers have: https://litestream.io/ With Litestream, I pay literally $0 to back up customer data and have confidence nothing will be lost. And it took like 5 minutes to set up.
I'm so on-board the SQLite train you guys.
[1] https://extensionpay.com — Lets developers take payments in their browser extensions.
When I first came and saw it, it...did not sound right. But I didn't want to be the guy who comes in and says "you are doing it wrong" month 1. So I went along with it.
Of course, eventually problems started to pop up. I distinctly remember that the ingestion (happening via a lot of Kafka consumers) throughput was high enough that SQLite started to crumble and even saw WAL overruns, data loss etc. Fortunately, it wasn't "real" production yet.
I suggested we move to Postgres and was eventually able to convince everyone from engineers to leadership. We moved to a custom sharded Postgres (9.6 at the time). This was in 2016. I spoke to people at the place last month, and it's still humming along nicely.
This isn't to illustrate anything bad about SQLite, to be clear! I like it for what it does. Just to show at least 1 use case where it was a bad fit.
SQLite was a tempting first answer, but what solved it was Postgres, and we eventually offloaded a lot of aggregation tables to Clickhouse and turned the whole thing into a warehouse where the events got logged.
When not to use sqlite:
- Is the data separated from the application by a network?
- Many concurrent writers?
- Data size > 280 TB
For device-local storage with low writer concurrency and less than a terabyte of content, SQLite is almost always better.
The workload was simple (single-node work tracking) and I didn't expect it to become a bottleneck. Unfortunately, there were some default settings in the storage backend (tiny page size or WAL or something) that caused severe thrashing, and a dearth of tooling to track down the issue. After making a custom build with custom instrumentation and figuring out the problem, I found an email thread where the sqlite community was arguing about this exact issue and the default settings in question. A couple of people had foreseen the exact problem I had run into and suggested a fix. Their concerns were dismissed on the grounds that the problem could be configured away, and their concerns about discoverability of configuration were ignored completely. I wasn't thrilled with the crummy defaults, but seeing that the consequences had been foreseen, considered, and dismissed despite what seemed like widespread consensus on the fix being simple... it really damaged my trust. How many more landmines did SQLite have?
Lack of perf tooling + bad defaults = recipe for pain.
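The comment above doesn't name the exact setting, but page size and journal mode are the usual suspects, and both can be inspected and overridden up front. A minimal sketch (the 8192-byte page size is illustrative, not a recommendation from the original):

```python
import os
import sqlite3
import tempfile

# Inspect the defaults, then override them before real data is written.
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

default_page_size = conn.execute("PRAGMA page_size").fetchone()[0]

# page_size only takes effect on an empty database or after VACUUM,
# so set it first, force it with VACUUM, then switch to WAL.
conn.execute("PRAGMA page_size = 8192")
conn.execute("VACUUM")
journal_mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]

print(default_page_size, conn.execute("PRAGMA page_size").fetchone()[0], journal_mode)
```

The discoverability complaint stands: nothing tells you these pragmas exist unless you go looking.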
As he notes https://www.mozilla.org/ uses this pattern:
> They started using SQLite back in 2018 in a system they call Bedrock ... Their site content lives in a ~22MB SQLite database file, which is built and uploaded to S3 and then downloaded on a regular basis to each of their application servers.
I'm particularly interested in the "Sessions" extension (https://www.sqlite.org/sessionintro.html) and would love to hear if anyone has successfully used it for an eventually consistent architecture built on top of SQLite?
When embedding natively, as in a Rust app, the performance is better than any other RDBMS because there is no network/serialization overhead, and you can use in-process pointers if needed.
The DevOps story is also a dream: typically it is just a single file (optionally plus a few more for journaling) and setup is automated away (most language libs bundle it already). Plus, it is widely known, since smartphone SDKs and all web browsers include/expose it.
A subtle advantage: the supported SQL subset is small enough that "if it works in SQLite, it will also work with $RDBMS" in most cases, but not the other way around. I always use it when getting started when I need relational data, and I've only had to swap it out for Postgres once — not for technical/scaling reasons (an IT policy change & stuff).
Having said that, it is mind-boggling what kind of load you can handle with a small VPS running a Rust microservice that embeds its own SQLite natively... matching that would take an expensive cluster of typical Rails/Django servers, and it would still have worse performance.
It works great - there are ergonomic APIs in most languages, it’s fast and reliable, and great to be able to drop into an SQL shell occasionally to work out what’s going on. A custom binary format might be slightly more optimal in some ways but using sqlite saves so much work and means a solid base we can trust.
I switched from Postgres to SQLite for a couple of versions, but mainly because Postgres wasn't "supported"; I called SQLite an "internal database thing".
Worked flawlessly for about 7-8 years before both services were gobbled up into micro API services.
At the last count, we have about 14,000 services checked by uptime (about 1,000 every 5 minutes, 2,000 every 10 minutes, the rest every 15). Probably had about 60,000 tinyurls in MyTurl. We also ran the MyTurl urls through uptime every night to look for bad links. The system got hammered, often.
It took minor tweaking to get the best performance out of the database, and AOLserver has some nice caching features, which helped take some load off the database. But overall, it worked as well as the Postgres counterpart.
And now, I have to figure out why I never released the SQLite version of both.
This means it won't cost any money if it's not receiving any traffic, and it can scale easily by launching additional instances.
I wrote about my pattern for doing this, which I call Baked Data, here: https://simonwillison.net/2021/Jul/28/baked-data/
A few examples are listed here: https://datasette.io/examples
SQLite may shine in edge cases where you know you can outperform a regular database server and you know why, and you could build everything either way. SQLite could be a way to e.g. decentralize state, using local instances to do local storage and compute before shipping off or coordinating elsewhere.
Otherwise, SQLite can simply be a recipe for lots of lock errors on concurrent operations. I've also never been very impressed with its performance as a general purpose replacement for postgres or MySQL.
1) Sqlite
2) Self-hosted Postgres
3) Big Boy Database, with an $$$ cost. (AWS Aurora, Oracle, etc).
Most projects never leave the Sqlite level. Only one has left the Postgres level so far.
This is where the "MongoDB is webscale" meme came from.
The truth is SQLite and a single webserver or Docker container will be fine for 95% of web applications.
People really underestimate the advantage of simplicity vs perceived power.
Use SQLite.
I believe the high-level approach he's taking is essentially:
1. Execute the multiple write transactions in parallel.
2. Sequentially write the changed pages to the WAL.[3]
3. If a previous transaction causes the next to compute differently (a conflict), rerun that next transaction and then write.
The way to detect conflicts is essentially:
1. Keep track of all the b-tree pages accessed before running the transaction.
2. Check the WAL for whether any previous transaction modified one of those b-tree pages. If so, we have to rerun our transaction.
I've seen it done in software transactional memory (STM) systems as well. It's really beautifully simple, but I think there are a lot of devils in the details.
[1] https://github.com/sqlite/sqlite/blob/9077e4652fd0691f45463e...
[2] https://github.com/sqlite/sqlite/compare/begin-concurrent
[3] Write to the WAL, so that parallel transactions see a static snapshot of the world.
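The read-set/write-set overlap check can be sketched as a toy model (this is not SQLite's actual code: "pages" are plain integers and the WAL is just a list of written-page sets):

```python
# Optimistic page-level conflict detection, in the spirit of begin-concurrent.
wal = []  # committed history: one set of written pages per transaction

def try_commit(snapshot_len, pages_read, pages_written):
    """Commit iff no transaction committed after our snapshot touched a page we read."""
    for written in wal[snapshot_len:]:
        if written & pages_read:
            return False  # conflict: caller must rerun the transaction
    wal.append(pages_written)
    return True

# Three transactions all taken from the same snapshot of the WAL:
snap = len(wal)
ok1 = try_commit(snap, pages_read={1, 2}, pages_written={2})  # commits
ok2 = try_commit(snap, pages_read={2, 3}, pages_written={3})  # read page 2, which ok1 wrote
ok3 = try_commit(snap, pages_read={5, 6}, pages_written={6})  # disjoint pages, commits
print(ok1, ok2, ok3)  # True False True
```

The second transaction fails exactly because a page it read was modified by a transaction that committed after its snapshot, so it must rerun against the new state.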
That said, sqlite used 'badly' can be quite frustrating. Home Assistant, for example, is usually set up on an sd card in a raspi and then runs an sqlite database on it that it dumps massive amounts of very redundant data into as json blobs. Pretty common to have it just randomly lock up because the sd card has trouble with that frequency of writes.
I also do backups periodically with ActiveJob using `.backup` on the sqlite3 client. It's simple and nice because I just have to worry about running the app, and nothing else.
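Python's `sqlite3.Connection.backup()` wraps the same online backup API that the CLI's `.backup` command uses, so the periodic-backup job can be sketched in a few lines (table name and data are illustrative):

```python
import sqlite3

# Copy a live database safely: the backup API doesn't require stopping writers.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO jobs (name) VALUES ('nightly-report')")
src.commit()

dst = sqlite3.connect(":memory:")  # in practice: a file path on backup storage
src.backup(dst)

count = dst.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
print(count)  # 1
```

Because the copy happens page by page through SQLite itself, you avoid the torn-file risk of copying the `.db` file directly while it's being written.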
We've been using this stuff in production for over half a decade now. Multi-user, heavily-concurrent systems too. The biggest cost savings so far has been the lack of having to screw with a separate database server per customer install (we do B2B software).
https://pve.proxmox.com/pve-docs/chapter-pmxcfs.html
https://git.proxmox.com/?p=pve-cluster.git;a=tree;f=data/src...
When it was written by our CTO over 10 years ago, he tried every DB solution available (that is, those that somewhat fit the picture), and only SQLite survived every test thrown at it. If set up as documented, it handles pulling the power plug in any situation, at least in our experience.
It may need to be noted that the DBs are only used locally, we synchronize commits ourselves via a distributed FSM, that's mostly transforming the Extended Virtual Synchrony corosync provides to simple Virtual Synchrony.
1. PHP web development for the client of a client. They needed persistent data and MySQL was not available. Moving to a different webhost was straight up rejected. Used sqlite with Idiorm and it worked just fine.
2. As the local datastore for a cross platform mobile application. The sqlite DB was unique on each device. Libraries were available and worked well.
3. This is a large one. Several 10's of thousands of installs that query the filesystem, but filesystem access is throttled by the vendor. We're using sqlite to store the state of the filesystem as it doesn't really change that much. If the db is damaged or whatever, it can be wiped as it isn't the final source of truth.
SQLite proved to be phenomenal. We spec'ed hardware with enough RAM to hold the FR DB in memory, and damn, SQLite is fast enough to keep up with the optimized FR system performing 24M face compares per second. With a 700M face training set, SQLite also proved instrumental in reducing the training time significantly. These days, if given the opportunity to choose a DB I always choose SQLite. I use SQLite for my personal projects, and I go out of my way to not use MySQL because SQLite is so much faster.
Sqlite is one of the greatest open source projects in history, with awesome docs, and really is a tribute to the art of programming. I'm happy and honored to use it for the appropriate use cases (which are a lot more than one would think).
Our product is a self-hosted IoT & hub unit solution, so we have no requirements to work with thousands of users doing who knows what. For our use case, sqlite is perfect. We don’t need to work with millions of rows, don’t need to stress the relatively low-power server units with another long lived network process, have no requirements of authentication since the user owns all the data, and can easily get insights into the database both during development and during troubleshooting at customer locations.
I’d sooner leave the project than move to anything else.
We replaced SSMS + SQL Server with Python + SQLite run in AWS Lambda. The jobs fetch the database from S3, update with the latest deltas and write out the database and some CSV files to S3. The CSV files drive some Tableau dashboards through Athena.
The SQL usually needs a bit of a rework to make this work, but for the volumes of data we were looking at (we're talking less than a million rows, jobs run once per day) we've seen good performance at low cost. We used DuckDB for a couple of workloads which needed more complicated queries, it's stupid quick.
In my opinion the biggest thing separating Sqlite from a "full blown" database is actually Sqlite's lack of stored procedures. At all of the places where I worked with traditional databases, we used stored procedures to create an ersatz data access abstraction so that the database design could vary independently of the API presented to the application. With Sqlite I find myself (ab)using views as a poor man's stored procedure, but of course that only covers the read-only or "functional" (in the functional programming sense) portion of stored procedure code.
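A minimal sketch of the views-as-abstraction idea (table and view names are illustrative): application code queries the view and never needs to know how the underlying tables are laid out.

```python
import sqlite3

# A view as a read-only data access layer, standing in for a stored procedure.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL);
    INSERT INTO orders (status, total) VALUES ('open', 10.0), ('closed', 5.0);
    CREATE VIEW active_orders AS
        SELECT id, total FROM orders WHERE status = 'open';
""")

rows = conn.execute("SELECT id, total FROM active_orders").fetchall()
print(rows)  # [(1, 10.0)]
```

For the write side, SQLite's INSTEAD OF triggers on views can recover some of the abstraction (an INSERT into the view can be rewritten into table operations), though not arbitrary procedural logic.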
Everything other commenters have said about data size or centralization also applies, but for me (again, just personal opinion) I'd actually draw the line at the point where you can or cannot get by without stored procedures. From an operational standpoint that would be: at what point is it imperative to be able to vary the details of the database design while maintaining an abstraction layer (stored procedures) that allows application code to be blissfully unaware anything changed underneath it?
Examples of when that would be needed would be if new users + applications start having competing needs, or if you need to revamp your table structure to improve performance or get around a limitation. If you're in a startup or small company, it would be the point at when you find yourselves hiring a real Database Administrator (DBA) rather than giving DBA duties to developers. Prior to that organizational scale you may be better off with the simplicity of Sqlite; after reaching that level of organizational complexity you might need a "real" (server-based) database.
I can't really count how many times I've been pleasantly surprised by how extensive the feature set of SQLite is. I mean, it even has window functions (https://www.sqlite.org/windowfunctions.html). And being able to quickly open up the app's SQLite file in a database browser is also quite helpful during development.
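Window functions landed in SQLite 3.25, which ships with the Python builds most people run these days; a quick sketch (table and data are illustrative):

```python
import sqlite3

# Rank pages by view count without a self-join, using RANK() OVER.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hits (page TEXT, views INTEGER);
    INSERT INTO hits VALUES ('/home', 30), ('/about', 10), ('/blog', 20);
""")

rows = conn.execute("""
    SELECT page, RANK() OVER (ORDER BY views DESC) AS r
    FROM hits ORDER BY r
""").fetchall()
print(rows)  # [('/home', 1), ('/blog', 2), ('/about', 3)]
```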
Pros:
- A single API server, no separate database to worry about, configure, and update.
- Backups are as simple as backing up one file every so often. SQLite even has an API to do this from a live connection.
- Handles way more concurrent users than we’ve ever needed.
- Dev and test environments are trivial and fast.
- Plenty of tools for inspecting and analysing the data.
Cons:
- There are certainly use cases it won’t scale to, or at least not without a bunch of work, but in my experience those are less than 1% of projects. YMMV.
- The type system (even with the newish stricter option) has nothing on Postgres. I realise this is basically a non-goal but I’d seriously love to somehow combine the two and get PG’s typing in a fast, single file embedded DB library.
- Postgres JSON support is also better/nicer IMO.
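On the typing con: SQLite's default "type affinity" model means a column declared INTEGER will happily store a string, where Postgres would reject the insert. A small demonstration:

```python
import sqlite3

# Flexible typing in action: the INTEGER declaration is a hint, not a constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES ('not a number')")  # no error raised

value = conn.execute("SELECT n, typeof(n) FROM t").fetchone()
print(value)  # ('not a number', 'text')
```

SQLite 3.37+ offers STRICT tables (`CREATE TABLE t (n INTEGER) STRICT`) that reject such inserts, though the type vocabulary is still far smaller than Postgres's.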
Yeah, you probably can do everything with the "simpler" stack. It might even be nominally faster in many cases. But if there's any chance you're going to end up rolling your own type validation or ORM or admin interface or GIS... just use the battle-tested kitchen sink from the get-go.
However, I would consider how important the RDBMS features are to you that SQLite lacks:
- A less sophisticated type and constraint system.
- A severely limited ALTER TABLE.
- No stored procedures.
- A limited selection of math and statistical functions.
- No permission and user model, not to mention row-level security.
To be clear, I don't think it's bad that SQLite doesn't try to be a full RDBMS, but I would weigh this perspective when making a decision, rather than performance, which is great and difficult to max out.
There is a hacky solution for redundancy; at certain events, a copy of the .db file is made and rsynced to a secondary node. This will probably fall apart if the file ever goes above a few MB in size.
Pros / reasons to use it: Self-contained, just a single file to transfer, no drivers needed, no servers running other than my own application.
Cons: No good support for ALTER TABLE queries, so things like changing the name, datatype, or default value of a column isn't happening. The workaround is to create a new table and transfer rows over, then drop the old table and rename the new table. Also the aforementioned issue if you want redundancy.
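The create/copy/drop/rename workaround described above, wrapped in a transaction so a failure leaves the old table intact (table and column names are illustrative):

```python
import sqlite3

# Change a column's type by rebuilding the table, since ALTER TABLE can't.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age TEXT)")
conn.execute("INSERT INTO users (age) VALUES ('42')")

with conn:  # implicit BEGIN ... COMMIT; rolls back on error
    conn.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, age INTEGER)")
    conn.execute("INSERT INTO users_new SELECT id, CAST(age AS INTEGER) FROM users")
    conn.execute("DROP TABLE users")
    conn.execute("ALTER TABLE users_new RENAME TO users")

row = conn.execute("SELECT age, typeof(age) FROM users").fetchone()
print(row)  # (42, 'integer')
```

Worth noting: since SQLite 3.25, `ALTER TABLE ... RENAME COLUMN` is supported natively (and 3.35 added DROP COLUMN), but changing a column's type or default still requires this dance.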
So basically, if redundancy isn't a requirement for you, sqlite is fine. It's probably ideal for single user applications, like your browser or apps (iirc sqlite is used a lot for those purposes).
Production uses: 0 (1 if my Ph.D. thesis code is included, which had some C++ code that linked against version 2 of the SQLite library).
Even though SQLite can handle 99% of people's use cases, WAL2 + BEGIN CONCURRENT will greatly close that last 1% gap.
b. Expensify has created a client/server database based on SQLite called https://bedrockdb.com and years ago it was scaling to 4M+ qps https://blog.expensify.com/2018/01/08/scaling-sqlite-to-4m-q...
Although SQLite is not designed for this type of scenario, this discussion highlights that there's a strong demand for a concurrent client/server RDBMS that is simple, performant, and easy to deploy. PostgreSQL is powerful and feature-rich, but not simple or easy to deploy. Hence the appeal of SQLite.
For example, could SQLite power a discussion forum of moderate (or more) activity i.e. users posting comments? The Nim language forum is powered by SQLite, but activity in the forum is fairly low. [1]
Between the simplicity of SQLite and the complex heavyweight that is PostgreSQL, there is a wide gap. It's a shame there is no concurrent RDBMS to fill it.
(Note: another poster mentions the concurrent Firebird RDBMS as a possible alternative, but I haven't used it. [2])
Then, with a little traffic, things continued to go well in production. But as traffic scaled up (to 1-5 QPS, roughly 25% writes), they fell apart. Hard. Because my production environment was spinning rust, IO contention was a real issue and totally absent from development. This manifested as frequent database timeouts, both from reads and writes.
Echoing another commenter's sentiment: things would have gone much more smoothly from the beginning had I started with PostgreSQL, but after having written many thousands of lines of direct SQL taking intimate advantage of SQLite's surprisingly rich featureset, migrating was less than totally appealing.
The mitigation strategy, which ultimately worked out, was to implement backpressure for writes to SQLite: queuing and serializing all writes to each database in the application, failing loudly and conspicuously in the case of errors (thus forcing the client to retry), and gracefully handling the rare deadlock by crashing the process completely with a watchdog timer.
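A minimal sketch of that queue-and-serialize strategy (table name, queue size, and the reply-queue protocol are illustrative, not from the original): one writer thread owns the connection, other threads enqueue statements and block for the result, and a bounded queue gives backpressure for free.

```python
import os
import queue
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "events.db")
write_q = queue.Queue(maxsize=100)  # full queue = callers block = backpressure

def writer():
    # The single thread that ever touches the connection.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS events (msg TEXT)")
    while True:
        item = write_q.get()
        if item is None:  # shutdown sentinel
            conn.close()
            return
        sql, params, reply = item
        try:
            conn.execute(sql, params)
            conn.commit()
            reply.put(None)
        except sqlite3.Error as exc:
            reply.put(exc)  # fail loudly; the caller decides whether to retry

def write(sql, params):
    # Called from any thread: enqueue, then block until the writer replies.
    reply = queue.Queue()
    write_q.put((sql, params, reply))
    err = reply.get()
    if err is not None:
        raise err

t = threading.Thread(target=writer, daemon=True)
t.start()
write("INSERT INTO events (msg) VALUES (?)", ("hello",))
write_q.put(None)
t.join()
```

The watchdog-timer crash the comment mentions would sit around `reply.get()` with a timeout; it's omitted here to keep the sketch short.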
But you probably won't see it since at the time of writing my response there are already 172 comments.
The use case is storing trace/profiling data, where we use one SQLite file per customer per day. This way it's easy to implement retention-based cleanup, and there is also little contention in write locking. We store about 1 terabyte of data over the course of 2 weeks this way.
Metadata is stored in Elasticsearch for querying the search results; displaying a trace then hits the SQLite database. As looking at traces is a somewhat rare occurrence, we iterate over all fileservers and query them for trace data given an ID until we find the result.
Reference https://www.sqlite.org/fasterthanfs.html
An important realization is that not everything needs to scale, and that it depends on how you access the DB and what your product looks like. For a load with many concurrent writes I'd be careful with sqlite, or when I know that I'll want my DB to mostly live in memory (e.g. operations will often process the whole, huge dataset and no index can help with that). But even if I thought "Uh, I'll probably need a full DB", I'd still benchmark my application with both sqlite and e.g. postgres. And if the API to access the DB uses some nice abstractions, swapping the flavor of SQL isn't a huge issue anyway.
//edit: Plus, I've done stupid stuff like "my SPA hammers the PHP API with 20 to 40 requests, each resulting in a simple SQLite query, just to render a checklist" and got away with it: a) because we had at most 20 concurrent users [realistically: 1 to 5], b) doing the checklist took half a workday (ticking off an item was done via a JS callback in the background, so the actual rendering happened only once), and c) SQLite performs great for read-heavy loads. The site performed so well (page loads felt about as fast as HN, even when connected via VPN) that I even scrapped the plan to locally cache the checklist in HTML5 localStorage (bonus: no cache = no cache incoherence to care about).
If you have multiple threads accessing the same database, it can kill the speed of SQLite completely. It will work in development, but as soon as you put it into production under any sort of threaded load, the database will quickly become the bottleneck and bring the whole thing down. If you run into this threading issue, you can just switch to MySQL at that point and it will fix it.
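Before switching databases, though, two settings mitigate most "database is locked" errors under threaded load, and are worth trying first (a sketch; the 5-second timeout is an arbitrary choice):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

def connect():
    # WAL lets readers proceed while a writer is active; the busy timeout
    # makes writers wait for the lock instead of failing immediately.
    conn = sqlite3.connect(path, timeout=5.0)  # Python-side busy timeout
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("PRAGMA busy_timeout = 5000")  # same thing, SQL-side, in ms
    return conn

conn = connect()
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # wal
```

With rollback-journal mode (the default) a single writer blocks all readers, which is where most of the threaded-load pain comes from.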
But I've run into on prod that didn't exist in dev on my MacBook M1, and I'm curious if anyone has any suggestions:
My app is basically quiet and serves requests in the dozens (super easy to run on a tiny instance), but for a few hours a day it needs to run several million database transactions, N+1 queries, etc. Because of the high number of IOPS needed, a small instance falls down and runs suuuuuper sluggishly, so I've found myself needing to shut everything down, resize the instance to something with more CPUs, memory, and IOPS ($$$), doing the big batch, then scaling down again. That whole dance is a pain.
Were I using a more traditional postgres setup, I'd probably architect this differently -- the day-to-day stuff I'd run on Cloud Run and I'd spin up a separate beefy instance just for the daily batch job rather than resizing one instance up and down over and over again. The constraint here is that I have a 50GB+ sqlite db file that basically lives on local SSD.
Any thoughts?
HTTPS: https://vatcomply.com/ Github: https://github.com/madisvain/vatcomply
Client (incident response dept at megacorp) had a problem: their quarterly exercises of switching network storage devices from live servers to disaster recovery (DR) servers was a manual operation of reconciling about 8 Excel spreadsheets and setting up ACLs before (luckily) an automated process would switch the storage mounts from live to DR.
We modeled and matched up all the hosts, servers, and ACLs and did a daily write to a single SQLite database. (We redundantly sent all the data to Splunk.) Now the DR employees are automating a daily diff of servers, hosts, ACLs etc to further automate the switch.
To echo a bunch of comments here, we decided on SQLite for a few reasons:
- only one user would write to the DB
- only a few users need to access the data
- besides standard retention policies, the data could be considered ephemeral and easily recompiled
- the script we wrote to compile the data runs in 5 minutes, so if we lose the db, we can easily recompile it.
SQLite (and SQLalchemy) is useful for inexpensive data.
Since every user has a separate DB file, writes to those files don't block reads from the global DB file which contains everyones public data. As long as you keep your user DB schema the same as the global DB schema, it's pretty easy to sync records using a simple cron job.
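The cron-job sync described above can be sketched with ATTACH: open the global DB, attach the per-user file, and copy over rows the global DB hasn't seen yet (the `posts` schema and the id-based dedup are illustrative assumptions; the comment only requires the schemas to match):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
user_db = os.path.join(tmp, "user1.db")
global_db = os.path.join(tmp, "global.db")

# Both files share the same schema, as the comment requires.
for p in (user_db, global_db):
    c = sqlite3.connect(p)
    c.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
    c.close()

u = sqlite3.connect(user_db)
u.execute("INSERT INTO posts (id, body) VALUES (1, 'hello world')")
u.commit()
u.close()

# The "cron job": attach the user file and copy unseen rows into the global DB.
g = sqlite3.connect(global_db)
g.execute("ATTACH DATABASE ? AS userdb", (user_db,))
g.execute("""
    INSERT INTO main.posts
    SELECT * FROM userdb.posts
    WHERE id NOT IN (SELECT id FROM main.posts)
""")
g.commit()
synced = g.execute("SELECT COUNT(*) FROM main.posts").fetchone()[0]
print(synced)  # 1
```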
More info on my tech stack here: https://withoutdistractions.com/cv/faq#technology
In case that link ever goes down, here's an archive link: https://web.archive.org/web/20220503102946/https://withoutdi...
It always goes like this:
1. I start a new "lean" web service and decide to use sqlite.
2. Some months down the road I figure I need some slightly more advanced db feature. The ones I can remember are postgresql numeric arrays (for performance where I can test for membership in a where clause) and jsonb (again with its special syntax for querying and its performance implications).
3. For some time I postpone the inevitable and do various hacks until I fully hate myself.
4. Suddenly realize that migration to postgresql will reduce the complexity, even with regards of infrastructure, as I usually have redis et al. in the game (which I wouldn't have to use had I started with postgresql initially).
5. I waste several days migrating and wondering whether it (my initial stupidity) was worth it...
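On the jsonb point in step 2: the gap is narrower than it used to be, since SQLite's JSON1 functions (enabled in most modern builds) cover a chunk of that use case, including array-membership tests via `json_each`. A sketch (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, tags TEXT)")
conn.execute("""INSERT INTO docs (tags) VALUES ('["red", "blue"]'), ('["green"]')""")

# "Which rows contain the tag 'blue'?" -- the membership query that would be
# `tags @> '["blue"]'` with Postgres jsonb.
rows = conn.execute("""
    SELECT d.id FROM docs d, json_each(d.tags) t
    WHERE t.value = 'blue'
""").fetchall()
print(rows)  # [(1,)]
```

It won't match jsonb's indexing or operator ergonomics, but it can postpone step 3 for a while.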
My advice is: if it's going to be accessed via the network (and you'll have to operate a server either way), make it two servers and go with PostgreSQL. If you are not 100% sure about the opposite (no chance of it becoming a web service), go with PostgreSQL. Is it a desktop app? PostgreSQL (just slightly joking here). Mobile app? OK, I guess you have no real choice here, go with SQLite.
And no, you can't "just use an ORM", because when the day comes, you will need to migrate because of features sqlite does not support and you will have made mistakes. If you used an ORM, now you'll have to migrate off both sqlite and the ORM.
PS: Ah, yeah, and now I remember one other instance where I had to migrate off sqlite solely because I needed to provide an admin interface (think PGAdmin) to the production system.
[1] https://www.gaia-gis.it/fossil/libspatialite/index
[2] http://web.archive.org/web/20190224100905/https://core.ac.uk...
This [0] is a good article with some benchmarks, misconceptions about speed, and limitations.
With a lot of help from https://www.sqlite.org/appfileformat.html
It has had some pains, but it has been great compared to flat-file storage.
It's also powering another one, and I really like the fact that I can just commit the whole DB to the Git repo.
Except for some rare exceptions, it's been doing pretty great. I don't have any plans to migrate from SQLite any time soon.
Also, take a look at ws4sqlite (https://germ.gitbook.io/ws4sqlite/) for a middle ground between SQLite (embedded) and rqlite/dqlite: it's "normal" sqlite addressable via web services. May be useful in some scenarios.
For sake of argument, let's say I have a fixed schema/format that will never change and I never need to aggregate queries across multiple customer accounts. Also, let's say writes to a single database are never going to be more than a hundred concurrent users. Why shouldn't I store each tenant's data in its own SQLite database? It makes it very easy for local client apps to download their data all at once. Backups are incredibly easy. Peer-to-peer synchronization behaves like a git repository merge. Why shouldn't I do this?
No issues at all.
Aside from some surprises regarding packaging it together with the rust crate and inability to rename columns, I'm really happy with it. Easier than deploying postgresql, more useful than documents.
It started out life as a disposable engine for a decoy missile, so the engineers took a very lightweight approach to it and kept the costs down. It was later adapted for more permanent aircraft and ended up being one of GE's most successful and longest-serving engines.
I use SQLAlchemy and write applications where by just swapping out the database URI I can use either SQLite or Postgres. SQLite is nice for local development and easy testing, (you can even run tests using :memory: to accelerate CI/CD) and then I use hosted Postgres in prod. That said, based on what I have seen I would not be at all afraid to use SQLite in prod for internal tools etc.
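The same swap-by-configuration idea, minus the ORM, can be sketched with stdlib `sqlite3` alone: tests point the app at `:memory:`, production points it at a real file (or a Postgres DSN once an abstraction layer like SQLAlchemy is in play). The `APP_DB` variable name is my invention for illustration:

```python
import os
import sqlite3

# Tests leave APP_DB unset and get a throwaway in-memory DB; production
# sets it to a file path.
DB_PATH = os.environ.get("APP_DB", ":memory:")

def get_conn():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    return conn

conn = get_conn()
conn.execute("INSERT INTO kv VALUES ('env', 'test')")
v = conn.execute("SELECT v FROM kv WHERE k = 'env'").fetchone()[0]
print(v)  # test
```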
I also built a Google Go library wrapping the SQLite amalgamation file and then cross-compiled it for Android and iOS, but with some extra SQLite extensions (GIS) that the stock Android/iOS SQLite did not have. This was some time in 2017, I guess.
I am a big fan of SQLite. You can integrate it in all kinds of stuff and adapt it to your needs. Compiling it is also straightforward.
It is so convenient to have it as a file, especially when you are just learning to do software development.
And the performance has not been an issue once.
One thing to note is that my site is not Facebook-sized. It only gets ~40 page views a day, and most of them are just for viewing, so no database operations.
So, I'm not going to be the most credible voice here. FWIW, I know that Pieter Levels, who runs multiple projects like nomadlist, remoteok, rebase, uses both SQLite and plain JSON files for storage.
When I looked around, Dropbox used it too; and so did Bittorrent Sync (Now Resilio)
I usually drop it because I need something that Postgres has or does better, or it's a write heavy site.
Recently I stumbled over a potential fix[1], which I will try in my next project.
[1] https://ja.nsommer.dk/articles/thread-safe-async-sqlite3-ent...
I have of course pickle -> open()
But eventually my projects grow large enough that sqlite3 becomes the database. I have never needed to go beyond sqlite3 in my projects. It does everything I ever need it to do.
This is the project: https://github.com/a-chris/faenz
I love it - very robust, lots of documentation, StackOverflow answers, example queries, etc.
On a typical day, I get less than 50 users per day globally, so I don't really have to worry much about concurrency or other issues that SQLite struggles with. I'd wager that many web applications are perfectly well served by it.
My manager currently runs a number of personal sites with a SQLite backend and they all seem very performant so I have been honestly considering giving it a second look.
You should understand whichever RDBMS you use, and how to get the best performance out of it. Previously I used Postgres extensively, and it worked fine, and before that I managed MySQL servers. They are all fine, but SQLite is as simple as it gets, and more than adequate for most workloads.
I’ve been using (locally) a Redis container for a very early prototype because it seems to be simple enough to use.
I know you can query JSON strings in SQLite, but that's not quite the same thing. For one, Redis offers some geo features.
What I learned from that though is that I would never use it for actual business software, no matter the scale. The fact that it doesn't have proper timestamp support is enough by itself to be crippling.
The only issue is that you'll need to take special care when backing up the DB file (but this is probably the same for most DBs even today.)
Everything is good so far, though most of my traffic is bots probing for wordpress flaws.
> Given the complexity
Which complexity? It is the simplest possible widespread, reliable and effective solution. Which makes it a primary choice.
> it seems like there are use cases or needs here that I'm not seeing
On the contrary, the use cases for the traditional Relational DB engines are defined: when you need a concurrency manager better than filesystem access. (Or maybe some unimplemented SQL function; or special features.) Otherwise, SQLite would be the natural primary candidate, given the above.
Edit:
I concur about https://blog.wesleyac.com/posts/consider-sqlite being a close to essential read if one has the poster's doubt.
To its "So, what's the catch?" section, I would add: SQLite does not implement the whole of SQL (things that come to mind on the spot are variables; the possibility of recursion was implemented only recently, etc).
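For what it's worth, recursion via WITH RECURSIVE has been in SQLite since 3.8.3 (2014), which is "recent" mainly by this project's timescale. A classic use, walking a tree of categories (schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT);
    INSERT INTO categories VALUES (1, NULL, 'root'), (2, 1, 'books'), (3, 2, 'sci-fi');
""")

# Recursive CTE: start at the root and follow parent links downward.
rows = conn.execute("""
    WITH RECURSIVE tree(id, name, depth) AS (
        SELECT id, name, 0 FROM categories WHERE parent IS NULL
        UNION ALL
        SELECT c.id, c.name, t.depth + 1
        FROM categories c JOIN tree t ON c.parent = t.id
    )
    SELECT name, depth FROM tree ORDER BY depth
""").fetchall()
print(rows)  # [('root', 0), ('books', 1), ('sci-fi', 2)]
```

Server-side variables, on the other hand, genuinely have no SQLite equivalent; the usual workaround is binding parameters from the host language.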
Works perfectly well. Mind you, I would use Postgresql if the site were important, just to be on the safe side.
https://docs.google.com/presentation/d/1Q8lQgCaODlecHa2hS-Oe...
..but like all things, it depends on your needs. Some have already pointed out the pages on SQLite's on site regarding # of writers (the main issue), etc.
It's probably elsewhere but I don't realize it.
SQLite==exclusive access, no sharing, unless read-only.
Basically, it provides a SQL convenience for local usage.