I was just thinking about how filesystems and data operations on files are handled exclusively by the OS and applications. Why does modern hardware still stick to simple block-level instructions for disk operations?
I'd imagine disk controllers being aware of the logical segmentation of their storage, supporting file systems and even APIs like SQL, Redis, Elasticsearch, etc. Why does my program perform a SQL SELECT or UPDATE, which the DB then has to ask the OS to fetch from disk, the fs driver figures out the inode details and then does an open+read, and only after all this does the DB get the data, work on it, and report back to my app? Why can't I tell the storage controller firmware "sectors x through y are a SQL DB and xn through yn are a table" and have the storage controller run the DB itself?
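The layered round trip described above can be sketched as a toy model. All names and the "sector" layout here are purely illustrative, not a real driver stack:

```python
# Toy model of the indirection a single SELECT travels today:
# app -> DB engine -> filesystem driver -> storage controller.

DISK = {  # raw "sectors" of bytes; all the controller understands
    7: b"alice,42",
    8: b"bob,17",
}

def controller_read(sector):
    # Storage controller: only speaks block addresses.
    return DISK[sector]

def fs_read(path):
    # Filesystem driver: resolves a path to sectors via an "inode".
    inode = {"/db/users.tbl": [7, 8]}[path]
    return b"\n".join(controller_read(s) for s in inode)

def db_select(column, table_path):
    # DB engine: asks the fs for bytes, then parses and projects rows.
    rows = fs_read(table_path).split(b"\n")
    return [r.split(b",")[column].decode() for r in rows]

# Application: one logical query, three layers of indirection.
print(db_select(0, "/db/users.tbl"))  # ['alice', 'bob']
```

The question amounts to: why can't `db_select` live inside `controller_read`'s firmware, so only the final result crosses the bus?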
Imagine if searching for files meant the disk itself searched for them (or their content!). I can see how this would have been a crazy idea 10+ years ago, but given the role secondary storage plays as a performance bottleneck, wouldn't this cut down on CPU cycles spent shuffling data? And the economics of the hardware don't seem that wild.
Another way to look at it: GPUs implement things like OpenGL and CUDA, not just simple pixel and raster operations, with their own specialized compute and storage setup.
Why haven't the brains at large done this already?
DPUs are also probably a piece you are missing. Vendors are starting to position these almost as PCIe arbiters with access to both NVMe devices and GPUs, without needing to involve the main CPU to move data between them.