He is not the only one I've met who insisted on not having FKs. I've even seen large systems prohibit using JOIN statements.
Personally, I think the data integrity outweighs the inconveniences. Do you use FKs in your systems? What are your experiences?
Another classic is the “joins are slow” argument, which I believe goes back to a period in the late 1990s when, in one database that was not highly regarded at the time, namely MySQL, they were indeed slow. But the reason “everyone” knew about this was precisely the oddness of this situation: in fact RDBMSes are highly optimized pieces of software that are especially good at combining sets of data. Much better than ORMs, anyway, or, god forbid, whatever you cobble together on your own.
There is, in my mind, only one valid reason to not use foreign keys in a database schema. If your database is mostly write-only, the additional overhead of maintaining the indexes for the foreign keys may slow you down a little (for reading, these very same foreign keys in fact speed things up quite considerably). Even in such a case, however, I’d argue you’re doing it wrong and there should be a cache of some sort before things are written out in bulk to a properly set up RDBMS.
I just ran into severe data corruption at a large client because a programmer four years ago wrote an empty catch block. The system would open a transaction, hit a fault, roll back, then continue writing to the database as if it were still inside the transaction.
I spent some time trying to pin down exactly what it did, and found that many writes went through because of a missing foreign key constraint.
In short: if a particular table of minor importance had a foreign key constraint, there would have been no damage whatsoever, because it would have faulted immediately after the rollback.
You can’t rig up a constraint against every dumb write. But you can rig them up against some of the dumb writes. And sometimes that’s enough.
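That scenario is easy to reproduce. A minimal sketch in SQLite from Python (the `orders`/`order_items` tables are invented for illustration): with the constraint in place, the first write after the rollback faults instead of silently landing as an orphan.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; transactions managed by hand
con.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when this pragma is on
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE order_items (
    id INTEGER PRIMARY KEY,
    order_id INTEGER NOT NULL REFERENCES orders(id))""")

con.execute("BEGIN")
con.execute("INSERT INTO orders (id) VALUES (1)")
con.execute("ROLLBACK")  # the fault: the parent row is gone, but buggy code keeps writing

try:
    # without the FK, this orphan write would silently succeed
    con.execute("INSERT INTO order_items (id, order_id) VALUES (1, 1)")
    survived = True
except sqlite3.IntegrityError:
    survived = False

print(survived)  # False: the FK faults immediately after the rollback
```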
In other words, the world is full of idiots; and any time I start forgetting about it, I read something like your post and I get a wake-up call.
What does the R stand for in RDBMS if you don't use foreign keys and joins?
Please, keep using your FKs, stay safe and don't mingle too much with idiots.
For example, audit logs get no FKs: when a delete happens, the audit log entries about the deletion shouldn't be deleted along with the row.
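A small sketch of that pattern in SQLite via Python (schema invented for illustration): the audit table stores the id as a plain column rather than a FK, so the log entries outlive the row they describe.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
# audit_log deliberately has NO foreign key: user_id is a plain column,
# so log entries survive the deletion of the user they describe
con.execute("""CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY,
    user_id INTEGER,          -- plain column, not REFERENCES users(id)
    action TEXT NOT NULL)""")

con.execute("INSERT INTO users VALUES (1, 'alice')")
con.execute("INSERT INTO audit_log (user_id, action) VALUES (1, 'created')")
con.execute("DELETE FROM users WHERE id = 1")
con.execute("INSERT INTO audit_log (user_id, action) VALUES (1, 'deleted')")

rows = con.execute("SELECT action FROM audit_log WHERE user_id = 1").fetchall()
print([r[0] for r in rows])  # ['created', 'deleted'] — both entries survive the delete
```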
I always FK a large table (millions or more rows) to a small table (tens to hundreds of rows).
But I will pause and ask hard questions about FKing a large table to a large table:
- Will this impact migrations?
- Do I need this FK?
- Is data integrity at risk without this FK, even assuming a buggy app?
- Does the app utilise this FK for queries, or is there zero performance benefit from having it?
- If I don't have the FK, are both tables still queryable by a PK?
- Should I have an index, potentially a UNIQUE index, in place of the FK?
Like most things... it depends. A dogmatic insistence on using them everywhere isn't always healthy, and the inverse is true that an avoidance everywhere isn't healthy.
The DB is there to store data and make it available, whilst enforcing data integrity... if it makes sense to use a FK to achieve those things do it, otherwise don't.
I’ve never come across a scenario yet where this wasn’t sound advice.
I tend to lean heavily on my DB as well. E.g. I tend to push all state down to the DB and out of the application. I work in environments where it’s common for developers to want to disable FK constraints, and i temporarily do sometimes during specific bulk operations during releases for example. The usual reasons others will suggest relaxing FK constraints permanently will be due to the need for audit logs or soft deletes but i have patterns for these too.
Foreign key constraints are pretty awesome, all databases I’ve worked in so far have escape hatches for when they hurt too much.
I like foreign keys, check constraints and tight data types. Might as well constrain it and limit the scope for errors.
Application programmers can write some buggy code. The DB should provide a line of defence.
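As a sketch of that line of defence in SQLite from Python (the `payments` schema is invented), a NOT NULL, a CHECK and a FK each catch a different class of buggy write:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE currencies (code TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE payments (
    id       INTEGER PRIMARY KEY,
    amount   INTEGER NOT NULL CHECK (amount > 0),        -- no negative payments
    currency TEXT    NOT NULL REFERENCES currencies(code))""")
con.execute("INSERT INTO currencies VALUES ('EUR')")

bad_writes = [
    "INSERT INTO payments (amount, currency) VALUES (-5, 'EUR')",  # fails CHECK
    "INSERT INTO payments (amount, currency) VALUES (10, 'XXX')",  # fails FK
    "INSERT INTO payments (amount, currency) VALUES (10, NULL)",   # fails NOT NULL
]
rejected = 0
for sql in bad_writes:
    try:
        con.execute(sql)
    except sqlite3.IntegrityError:
        rejected += 1

print(rejected)  # 3 — the DB caught every bad write the app let through
```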
If you don’t have foreign keys, that should be a design choice with a legitimate reason why the entity can become orphaned.
You need a definition of what that means in real life, e.g. PERSON_ID 1012 exists but there is no associated record. This kinda means you need to look at another entity to know what the first entity “means”.
This might be useful for data that needs to sit in distributed databases.
I've found that proponents of this usually either don't fully understand the feature in question or made some major mistake in the implementation, which in turn causes problems that manifest themselves when they try to use said feature.
You also need to index your FKs so that the database does not have to do a full table search before you can delete a row. This is often overlooked.
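For illustration, in SQLite (which, like Postgres, does not automatically index the referencing column), that looks roughly like this; the table and index names are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE books (
    id INTEGER PRIMARY KEY,
    author_id INTEGER NOT NULL REFERENCES authors(id))""")
# the referencing column is NOT indexed automatically; without this index,
# every DELETE on authors forces a full scan of books to check the FK
con.execute("CREATE INDEX idx_books_author_id ON books(author_id)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM books WHERE author_id = 1").fetchall()
print(plan)  # the plan mentions idx_books_author_id instead of a full table scan
```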
Your friend can do his data migrations without FKs and create them afterwards. This is quite a common procedure.
There are a few different schools of thought. I will list them, but the important thing to remember is not to be dogmatic. They are all right or wrong depending on your circumstance.
One school of thought says "I want all data in my DB to be normalized. I want it to be right when it goes in so it never breaks the application layer." That school would say foreign key constraints are critical.
Another school of thought says "I want all the data in my DB to be retrieved and inserted quickly. I want the application layer to do any error-checking that is necessary, or, I want to be in a situation where I can always fail gracefully if there's errors in data validation."
Still another school of thought says "I don't trust those programmers to write good application code, so I will insist on normalized data for that reason," and yet another says, "I don't have control over the DB schema, that's some DBA's job, so I will just do all my validation in-app."
The point of this being, there's tradeoffs either way you go.
Personally, I typically would rather have the application layer do the validation and even the joining of data, a lot of the time, when the application is high-volume. At the volumes my organization works with, it is very difficult to write performant SQL queries that use JOINs and other relationships as a developer - even as a DBA! - and often much easier, for me, to write performant application code. The DB is also a pet with many owners, whereas the infra for my applications is owned by my team. So, it's better for me to do relationship validation in code myself. (We also do not use a heavy ORM, again for performance reasons. Just Dapper.)
At my previous job, the situation was the opposite - we weren't under such load at most times that it mattered if the queries were performant, we had Entity Framework building relationship, and EF will blow up if you ask it to build relationships where none exist. So, we needed more normalized data, and that was what we went for. But even then, not in every situation.
The small number of people with high enough scale that they can't use them know who they are, the rest of us need to think carefully when performing database migrations and reason out the order of operations required to maintain data integrity (sounds like a good idea anyway?).
When building systems for scale where the databases may grow large, foreign keys can cause many issues -
- ORM features around foreign keys can easily bring your system down when joining large tables with incorrect/missing indexes during heavy loads
- As tables grow, not having foreign keys makes it simpler to move large tables out into big-data solutions in the future
- The schemas and relations are sometimes hard to understand during the initial phases of application development. Not having those relations makes changing schemas simpler and faster.
- Sharding tables is much simpler when there are no foreign keys
- It helps to add some of the reference logic in the application rather than the database. Databases are the bottlenecks when it comes to IOPS and scaling. The more processing you move to your application server, the better scalability you can achieve.
In general it depends on whether you intend to use the database to drive a system, or whether you intend to use the data for reporting and data analysis.
Inconvenience in migrations is not a legit reason. If your migration would have failed with FKs but doesn't because you don't have them, you just broke your data; the errors are there to protect you. In other words, your colleague is straight up wrong.
All my issues with databases usually stem from the fact that someone did not use enough checks and we got dirty data nobody knows what to do with.
My only non-conventional usage of database schemas is text field lengths. I have a rule: it's either 20, 200 or 2000. It should be big enough to fit any non-insane value. Like 200 should be enough to fit a phone number. The main point is not to let absurdly broken data in.
I have seen over and over firsthand the kinds of data integrity problems that come from leaving the decision to the business software and those who meddle asynchronously with data.
You can always rewrite software. Rewriting bad data is not only difficult but often impossible.
Well, one might ask, what is the point of having foreign keys if they are disabled? And the answer is, there are several benefits. Here are a couple:
1. Foreign keys, disabled or not, create a record of your data design that itself serves as documentation, and that can be programmatically queried, extracted, copied, modified etc, all of which reduces technical debt and is useful for other users or developers to understand your schema and work more efficiently.
2. You still have the option to enable the foreign keys. Furthermore, you can pass this job over to someone in your organization who may not have authority to create or modify foreign keys, but does have the authority to enable them and/or to fix the data however required to enable them.
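Point 1 is easy to demonstrate: most engines expose declared FKs as queryable metadata. A sketch using SQLite's `PRAGMA foreign_key_list` from Python (schema invented); in SQLite the declaration is queryable even though enforcement is off by default:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE teams (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE players (
    id INTEGER PRIMARY KEY,
    team_id INTEGER REFERENCES teams(id))""")

# even without enforcement, the declared FK is machine-readable documentation;
# each row is (id, seq, table, from, to, on_update, on_delete, match)
fks = con.execute("PRAGMA foreign_key_list(players)").fetchall()
for _, _, ref_table, col, ref_col, *_ in fks:
    print(f"players.{col} -> {ref_table}.{ref_col}")  # players.team_id -> teams.id
```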
It sounds like your colleagues want NoSQL (key-value) given their no JOINs policy.
It's not difficult; it merely requires that you think about the order of the migration operations.
> He'd rather have a smooth data migration process than having an unexpected error
This should be grounds for dismissal. Data integrity problems should be dealt with immediately instead of making it someone else's problem later. He's just kicking that "unexpected error" down the road to some poor soul that will spend weeks trying to figure out how the data got so messed up.
If you don't care about data integrity then use a different data storage solution.
There are any number of tools that will generate me a pretty and useful database schema diagram if I point them at a relational database. This is incredibly handy when you're new to a database and need to figure out which tables to query and update, and which (gasp) sprocs you need to call. I've been on projects where people have been poking around in the dark, and then a good database diagram has saved us days or weeks of effort trying to figure out how to make something work. As I say, there are plenty of tools capable of generating one of these in seconds or minutes[0].
However, if you don't have foreign keys, the utility of such a tool is severely diminished because you just get a big pile of nodes representing tables clustered at the top or bottom of your diagram (depending on exactly which layout algorithm is being used and how it's been configured).
[0] Many years ago I and three colleagues built one of these: Redgate's SQL Dependency Tracker (https://www.red-gate.com/products/sql-development/sql-depend...). It was pretty neat because you could build a diagram spanning databases, or even linked servers, and unlike most other tools at the time it could handle thousands and thousands of database objects, but the product name doesn't really help get across that it's fundamentally a diagramming tool. I built the dependency engine, the graph, and radar views. We used yFiles from yWorks for the graph layout calculations, with a bit of extra hackery, but I seem to remember yFiles lost compatibility with a newer version of .NET at some point (or something along those lines), so RG ended up swapping it out for something else.
>He rather have a smooth data migration process than having an unexpected error and abort the whole operation
That's not a good reason. It doesn't sound like he's making an educated decision based on context-- it sounds like he's sacrificing data consistency to make his own work easier. If that's the case, perhaps he should do away with all integrity constraints (primary keys, checks, et al). Then, he could enjoy a "smooth process" for adding new data, too. No more pesky errors-- just blissful, silent corruption.
>To be fair, as far as I know, he never had major data problems.
Yet, anyway. He has intentionally limited his visibility over at least one kind of problem (invalid references). His problems might not be clear until they start causing noticeable issues elsewhere.
>He is not the only one I've met who insisted on not having FK. I've even seen large systems prohibit using JOIN statements.
Allergy to the relational model. Many such cases!
In a strict relational model implementation, the only way to reify that is by the FK of the Order in the Line Item, but in some other implementation, say a generic programming language, the Line Items might be an array or similar data structure that is part of the Order, and the programming language implementation keeps track of the pointers or address offsets or whatever detail it cares about.
Anyway that's all beside the point.
In my experience, if the IDs are not autogenerated by the relational system, then it's relatively easy to migrate. When the data is full of IDs defined via some AUTO_INCREMENT behavior, however, migrations become an awful mess. And in any system where a specific ID starts to have semantic meaning (oh, the customer's ID is 387437, whoops) — and this happens more frequently than you might expect — all bets are off and you might as well just give up and accept that your auto-generated IDs are now fixed for all time and can't be changed.
Oh and just to add, I have notebooks from previous employers where I have written THE Important IDs to Know, which started life as auto-generated numbers but which now are enshrined and encased in acrylic to the extent that years later they are important tribal knowledge.
I think trying to manage consistency in your application is probably a bad idea.
FKs maintain your database integrity. If you care about data, just use them!
In SQLite it's a pragma, in Postgres you turn triggers off, those are the ones I've personally done but surely any relational database has this ability for this specific reason?
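For the SQLite case, a sketch of that escape hatch from Python (tables invented): turn enforcement off for an out-of-order bulk load, then use `PRAGMA foreign_key_check` to verify integrity before trusting the result. (The pragma is a no-op inside an open transaction, so toggle it outside one.)

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit, so pragmas take effect
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE parents (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE children (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parents(id))""")

# escape hatch: disable enforcement, bulk-load in whatever order is convenient
con.execute("PRAGMA foreign_keys = OFF")
con.execute("INSERT INTO children VALUES (1, 10)")  # child inserted before its parent
con.execute("INSERT INTO parents VALUES (10)")
con.execute("PRAGMA foreign_keys = ON")

# and crucially: verify integrity afterwards instead of just hoping
violations = con.execute("PRAGMA foreign_key_check").fetchall()
print(violations)  # [] — every child found its parent after the load
```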
There _is_ a scale, though. I have worked in enterprises where the database is the "contract" and where the business logic is implemented in the database, with triggers and all sorts of constraints and what not, in addition to foreign key constraints. But I now mostly work in service- or micro-service-oriented places where the "contract" is a REST(-ish) API, business logic is implemented in Java, and the database is exclusively owned by that one code-base, where not all business logic is replicated from the Java code into the database (still foreign keys, though!)
...Of course, if nobody understands relational databases at the company, then that won't help...
Even if data migration was difficult with foreign keys for some reason, they can be disabled at the time of migration - no need to put weird constraints on day-to-day operations.
I see them the same as stored procedures. If you use them, you better use them everywhere and have all your data consistency model in the fk and stored procedures. Also, figure out how to disable them during the more interesting online schema migrations. If you can't commit to that, they'll only cause issues down the line.
As an alternative you can use Foreign keys without constraints. This way you get the conveniences of them without the migration problems. You can do this permanently (a bit wilder) or just before an import, and re-enable it later.
Your data migration aborted on an unexpected error? Well in my view that's a problem and you need to figure out why that is happening and fix it.
But hey, you can always drop the constraint when you decide it's not worth it, and you can always add it back again after. Depends on the application too, sometimes it just doesn't matter if you make a mess.
I'm remembering the time I was working at a place that had a huge number of Microsoft Access, Microsoft SQL Server and mysql databases and I was the first person they'd hired who knew how to do joins and they thought it was pretty scary.
He's just kicking bugs down the road. That's not engineering.
If you have no FK, the day will come that your data corrupts. Tiny application bug, wrong manual data fix, incomplete datamodel communication. It's a case of when, not if. And when it happens, it can fester for weeks or months, corrupting all kinds of data in unfixable ways.
If you have FKs, you can go all-in on relational. The stuff is so powerful, you'd be dumb not to. But then comes the too-smart-optimizer problem. Just like with compilers and OpenGL shaders, the tiniest change might make you fall off the optimizer's preferred path and get a way-too-slow version. Even when no code has changed, a minor DB version bump or even a tiny shift in statistics will kill you.
Personally, I tend to consider integrity more important than performance, but both have a zone where they are good enough to make trading off worthwile.
So I do FKs and joins, and count on rigorous testing and monitoring to keep things in check. Someone else might decide otherwise. Both strategies require you to do the unsexy part of the work, or a harsh punishment will follow.
The issue with managing the relationship just in code is if you ship a bug to break the relationship, you now have to manually fix your data, and if you want to find out when or where the bug was introduced, you're looking at commit history instead of a migration history. Same thing when it comes to making manual updates or adds in the db. Even if it's just on a dev stage, if your code makes an assumption about the constraint which isn't true, you can end up with bugs or exceptions on dev, which is also annoying. If you want to remove the assumption of the relationship from the code entirely, that would be more understandable, but not if instead it means replacing what would be an efficient constraint and join with a separate query.
In most cases, a well designed database covers a multitude of sins and makes life for future you and your future team much easier. Some of the biggest dumpster fires of code I have seen started as a smoldering dumpster from terrible db choices that were made far harder to fix when it had been in production for a few years.
If someone put a gun to a developer's head and threatened to pull the trigger if the database was crap, sadly they would be forced to pull the trigger most of the time.
That's some take, because if that "unexpected error" is not reported because FK integrity was turned off, the data containing the integrity errors goes right in. Now your database is corrupt. Dealing with a DB where random rows here and there don't conform to implied-only constraints is zero fun. In my own experience it looks like this: the main page of the production site is a 500 error, with a stack trace deep into some logic nobody has looked at in two years; run the identical code on staging, works fine. Fun stuff! It seems like an odd choice to let errors like that stream right into your production DB without checking.
"It's never gone wrong yet" is a good way to get yourself electrocuted when you're changing a light bulb.
Mainly to cascade delete though, since we were robust enough to insert proper data.
This was spurred on in new projects because in a very very large older project with many many tables we had problems with orphaned records.
(PostgreSQL)
One day, I decided to try it out, and it turned out to be easier than I thought. When deleting data, just make sure to use cascade or ensure the associated data is deleted, which makes sense.
Once I embraced it, FKs became my friend for enforcing data consistency. I cannot imagine how I lived without them before.
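The cascade behavior mentioned above can be sketched like this in SQLite from Python (the commenter is on PostgreSQL, where it works the same minus the pragma; `posts`/`comments` are invented names):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")  # SQLite needs this for cascades to fire
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE comments (
    id INTEGER PRIMARY KEY,
    post_id INTEGER NOT NULL REFERENCES posts(id) ON DELETE CASCADE)""")

con.execute("INSERT INTO posts VALUES (1)")
con.executemany("INSERT INTO comments VALUES (?, 1)", [(1,), (2,), (3,)])

con.execute("DELETE FROM posts WHERE id = 1")  # the cascade removes the comments too
left = con.execute("SELECT COUNT(*) FROM comments").fetchone()[0]
print(left)  # 0 — no orphaned comments to clean up by hand
```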
Specifically, I hadn't thought through an Order model, and the OrderLines ended up depending directly on the product for their meaning, through an FK. The orders wouldn't "settle" once they had been completed, since the product could be updated and change the values of the order line and order. Dumb, dumb. It was one of the cases where denormalizing data very much makes sense.
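A sketch of the fix in SQLite from Python (column names invented): the order line copies the unit price at purchase time instead of deriving it from the product, so a later product update cannot rewrite a settled order.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price INTEGER)")
# order_lines keeps the FK for reference, but denormalizes the price:
# unit_price is frozen at purchase time rather than joined from products
con.execute("""CREATE TABLE order_lines (
    id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES products(id),
    unit_price INTEGER NOT NULL)""")

con.execute("INSERT INTO products VALUES (1, 100)")
con.execute("INSERT INTO order_lines VALUES (1, 1, 100)")   # price copied here
con.execute("UPDATE products SET price = 250 WHERE id = 1")  # later price hike

settled = con.execute(
    "SELECT unit_price FROM order_lines WHERE id = 1").fetchone()[0]
print(settled)  # 100 — the completed order stayed settled
```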
We do not use them for write performance concerns.
There's plenty of nice features of the DB we're not allowed to use under the excuse of "performance". But I'm told this by people who live and breathe SQL, so I trust them and I hope they have evidence to back it up. Because a lot of these features we're not allowed to use would make our lives 100x easier if we could!
A) The world is full of idiots.
> I've even seen large systems prohibit using JOIN statements.
B 1) WTF is wrong with these people? What do they think an RDBMS is for?!?
B 2) See above, exhibit A.
Performance can be another reason for skipping them but modern dbs are pretty good.
In fact, Snowflake really really doesn't care to the extent that I've seen non unique primary keys in tables.
NUTS!
> I suspect the errors might be mainly caused by not considering data integrity at all at the first place
Spot on.
----
I interface with MANY ERP-like software packages, with as many "database designs" as you can imagine, and I have faced the fun of interfacing with many systems in my long career.
Among DOZENS, only 2 major packages have zero issues, and they are the only ones with competent schema designs.
Let me tell you how bad it is:
One of my interfaces is with a package that somehow manages to mix all the ways to be wrong about RDBMS:
- MySql, not a good start
- Only String and Int datatypes, and even the Int columns are few
- This means ALL data is suspect (dates, money, and yes, strings too)
- The tables are named `UNCC_00001, UNCC_00002...`
- The fields are named `FT_0001, FT_0002...`, and obviously the first is the PK. This PK is purely advisory: other fields can be part of the PK, but you can't tell by looking at the schema
- NO FK at all, so ALL TABLES have integrity issues.
- NO indexes, because why make joins or lookups faster?
- Weird normalization, like yes, no, maybe?
- And no option to let me add some sanity to the DB, because the database is (in my niches) "property" of the software makers so can't be touched, only queried.
And I won't tell the rest, because this is only the query side; the CRUD side is nuts, and I don't wanna remember much about it.
---
An RDBMS is one of the simplest ways to get, FOR FREE and with minimal effort, a sane, well-behaved and performant system.
Refusing to use them is NUTS. Refusing to use the abilities they have is NUTS.
I have seen only ONE actual company (in 20+ years) with a real reason to go NoSQL, and even then only for a fraction of the system (in fact, the data was mirrored to an RDBMS for reporting!). And maybe that was true 5-10 years ago; today I think modern SQL engines catch up more and more on those scenarios (recently I used Timescale just to speed up log processing, and it was so much nicer than dealing with weird log storage engines).
Personally, I would prefer errors to become immediately obvious, while I'm active and there's a roll-back ready to be applied.
Dear Database Architect,
Directive 595 Part 2 is as follows.
"Foreign and Primary Key constraints give lack of flexibility, more
costly evolution, inhibit the use of the database acting as a
service to applications and make it an inhibitor to evolution."
As such, please remove them from all production databases.

Sincerely,
Chief Architect Gerald
I create all fields as NOT NULL and use empty string in place of NULL.
Last time I tried, foreign key constraints couldn't work in an environment like this.
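That rings true: a NULL in a FK column means "no reference" and is permitted, but an empty string is a real value that must match a parent row. A sketch in SQLite from Python (schema invented):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE depts (id TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE staff (
    id INTEGER PRIMARY KEY,
    dept_id TEXT REFERENCES depts(id))""")

# a NULL FK column means "no reference" and passes the constraint...
con.execute("INSERT INTO staff VALUES (1, NULL)")

# ...but '' is a real value, so it must match a depts row — and none exists
try:
    con.execute("INSERT INTO staff VALUES (2, '')")
    empty_ok = True
except sqlite3.IntegrityError:
    empty_ok = False

print(empty_ok)  # False: the empty-string-instead-of-NULL convention breaks FKs
```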
Referential integrity saves hours of pain from weird DB issues down the line.
I also interact with VAX applications that use Rdb (now owned by Oracle) that avoid constraints of all types for performance reasons.
I mean, there can be reasons why not, but why use an RDBMS at all if you’re not going to take advantage of its features?
A join is simply an index scan, and is guaranteed to be cheaper than the ORM having to send many requests to the server.
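A toy comparison in SQLite from Python (schema invented): the N+1 loop issues one query per row, while a single JOIN returns the same result in one statement.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT)")
con.executemany("INSERT INTO authors VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
con.executemany("INSERT INTO books VALUES (?, ?, ?)",
                [(1, 1, "X"), (2, 1, "Y"), (3, 2, "Z")])

# N+1 style: one query for the books, then one more per book for its author
n_plus_1 = []
for book_id, author_id, title in con.execute("SELECT * FROM books"):
    name = con.execute("SELECT name FROM authors WHERE id = ?",
                       (author_id,)).fetchone()[0]
    n_plus_1.append((title, name))

# one JOIN: a single statement, resolved inside the engine
joined = con.execute("""SELECT b.title, a.name
                        FROM books b JOIN authors a ON a.id = b.author_id
                        ORDER BY b.id""").fetchall()

print(joined == n_plus_1)  # True: same result, 1 query instead of N+1
```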
Everyone here seems to be super diligent about FKs. I wonder if that's a sampling bias.
I've worked on a few projects where they were considered an unnecessary hassle, especially when the RDBMS had some performance or functionality limitations when using them. Also "on delete cascade" seems scary. Plenty of systems even just set deleted=true instead of actually deleting (at least before GDPR).
Not every CRUD app treats data integrity as a holy grail. A social network for hamsters can lose a comment, no big deal. Some applications treat databases as a bag of key-value pairs, and the inconvenience of migrations ends up with a JSON in an "everything_else" column. Move fast and break relational integrity.
Though the answers would probably vary and there's most likely lots of nuance per individual case (which might matter more than just yes/no), personally I can think of the following as examples:
- yes, we use foreign keys
- yes, but we use them in testing environments and turn them off in prod
- no, we don't use them because our database doesn't support them (e.g. distributed like TiDB)
- no, we don't use them and check integrity and orphaned data ourselves
- no, because our system design doesn't allow us to use them meaningfully (e.g. OTLT and EAV)
(also, talking about whether to cascade or not might be useful, e.g. whether you want to manually clean up related data, or not)
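For the "check integrity and orphaned data ourselves" option, the usual tool is an anti-join. A sketch in SQLite from Python (tables invented):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
# no FK declared: integrity is checked by hand, as in the
# "we check orphaned data ourselves" camp
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer_id INTEGER)")
con.execute("INSERT INTO customers VALUES (1)")
con.executemany("INSERT INTO invoices VALUES (?, ?)",
                [(1, 1), (2, 99)])  # customer 99 doesn't exist — orphan

# anti-join: invoices whose customer no longer exists
orphans = con.execute("""
    SELECT i.id FROM invoices i
    LEFT JOIN customers c ON c.id = i.customer_id
    WHERE c.id IS NULL""").fetchall()

print(orphans)  # [(2,)] — invoice 2 is orphaned
```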
Someone else mentioned varying schools of thought, which rings true. Personally, my opinions about database design in general are along the lines of:
- avoid EAV and OTLT outside of very specific cases, have multiple tables over few (e.g. employees, employee_contact_information, employee_vacations, employee_notes instead of employee_fields and/or employee_field_values)
- have foreign key constraints across your tables, so that you might not end up with orphaned data, *consider* cascading the constraints (depends on requirements)
- use views liberally, especially for complex requirements in regards to selecting data, so that your app (or ORM in it) can map against it in a simple manner
- outside of batch operations, prefer to modify data through the app, instead of procedural SQL, since that's easier to debug; I'm yet to see someone use breakpoints/watches for stored procedures successfully
Though my ideal database design probably looks way different and scales slightly differently (which hasn't mattered as much yet) than someone else's.
There are people who want to build their entire database around a "classifier" system, about which I wrote previously here: https://news.ycombinator.com/item?id=32416093 (this also makes the DB hard to visualize as an ER diagram because of meaningless links, and sometimes makes the DB hard to use without the app, e.g. type_enum_value vs table_name).
There are people who want to do everything in procedural SQL (I've seen application views call stored procedures to fetch all data and validate forms), there are those who don't want to touch it with a 10 foot pole.
It really varies a lot, though in my experience it's invaluable to be able to feed a database into something like DbVisualizer and get an overview about how the different tables are related to one another, basically like documentation: https://www.dbvis.com/