Is DNS really a perfect protocol? How can it be improved?
Facebook was famously started and hosted in a dorm room. But this was only possible because of Harvard's early place in the internet's history and its excess of addresses: enough that Zuck could bind to a public IP address. We'll never know what tiny services could have blown up if people elsewhere hadn't hit this wall.
I started off with computers by hosting garrysmod servers. My brother started off with computers by hosting a website dedicated to the digital TV switchover in Wisconsin (lol). This was only possible because my dad was a software engineer and paid a bit extra to get us 5 dedicated IP addresses. If he hadn't understood that stuff, who knows what my brother or I would be doing today.
Anyway, I say IPv6.
Imagine a world without DDoS or Cloudflare's 3+ second redirect.
Incorporating a time dimension into domains, so that it's explicitly recognised, and greatly restricting transfers, would be one element.
Ownership of domains should sit with the registrant, not the registrar.
The character space should be explicitly restricted to 7-bit ASCII to avoid homoglyph attacks. It's not a shrine to cultural affirmation, it's a globally-utilised indexing system, and as such is inherently a pidgin.
Other obvious pain points:
- BGP / routing
- Identity, authentication, integrity, and trust. This includes anonymity and repudiation. Either pole of singular authentication or total anonymity seems quite problematic.
- Security. It should have been baked in from the start rather than bolted on afterwards.
- Protocol extensibility. Standards are good, but they can be stifling, and a tool for control. Much of the worst of the current Internet reflects both these problems.
- Lack of true public advocacy. It's deeply ironic that the most extensive and universal communications platform ever devised is controlled by a small handful of interests, none answerable to the public.
- More address space. IPv6 is a problematic answer.
- Better support for small-player participation. Servers are still highly dependent on persistence and major infrastructure. A much more robust peered / mesh / distributed protocol set that could be utilised without deep expertise ... well, it might not make things better, but we'd have a different problem-set than we face presently.
- Explicit public funding model for content, and a ban or very heavy tax on most advertising.
- A much keener, skeptical, and pessimistic early analysis of the interactions of media and society.
Imagine how many times security and privacy have been reimplemented in different contexts.
And that patchwork approach will keep incentivizing security breaches and manipulation through covert surveillance until ... well, there's no end in sight.
DNS is a horrid mess that should have been designed with ease of reading in mind. And I know that DNS was designed way before JSON existed, but I think a data interchange format similar to JSON would have made the system a bit more extensible, and given people a chance to make it a more dynamic system.
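A hedged sketch of what a JSON-ish zone record could look like; every field name here is invented for illustration, not part of any real spec:

    import json

    # Hypothetical JSON-style DNS record; the schema is made up.
    record = {
        "name": "example.com.",
        "type": "A",
        "ttl": 3600,
        "data": {"address": "192.0.2.1"},
        "extensions": {"note": "room to bolt new fields on later"},
    }
    print(json.dumps(record, indent=2))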
E-Mail was brilliant in its time, but for the love of all things explosive and holy is it bad. Just the fact that in its base design there is no E2E encryption going on is a problem.
My biggest beef with the current internet is HTTP. Everything is built on it, and it isn't the greatest system. There have been so many systems and protocols that did so many things (FTP, Gopher, IRC, etc.), and most of them have gone the way of the dodo. A few hold-outs in the dedicated tech world still use IRC, but we could have done so much with cloud-style systems built on FTP. And if we had a new spec for IRC, would we need Slack/Discord/MS Teams/etc.? They could all talk to each other. We shouldn't be trying to reinvent the wheel; we should be using these older services in our platforms.
And don't get me started on the cloud. The worst term a marketing team ever got hold of. At its core, it's just somebody else's computer. And again, so much of it is built on HTTP. Not many people know or remember that the X Window System on *nix had a distributed display system built in. Log into a server, set one environment variable to your IP address (as long as you were running an X server yourself), and you could run programs on the server with the GUI on your computer.
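Rough sketch of that workflow with today's tools, assuming the remote host is reachable over ssh, xclock is installed there, and your local X server accepts connections from it; the hostnames are placeholders:

    import subprocess

    remote = "build-server.example.com"           # where the program actually runs
    my_screen = "my-workstation.example.com:0"    # your local X display

    # Run xclock on the server, but draw its window on your screen.
    subprocess.run(["ssh", remote, f"DISPLAY={my_screen} xclock"], check=True)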
Elements that know they're user-editable (images with upload controls attached to them to replace the image that's there, dates that trigger calendar controls when clicked, and inline editing for other elements that actually works and is consistent across browsers).
An offline database in the browser that has the concept of user accounts and resetting passwords baked in, as well as subscription payments, and has a universal API that can be automatically deployed to any hosting platform.
All of this would make building web apps and games trivial for so many more people -- write some basic HTML, upload it to one of a million platforms of your choice with a click, and you have a testing ground for a product that can grow.
It would be a way for anyone on the internet to build something useful and get it out there without learning a thousand technologies with a thousand nuances.
While the spam problem is much better than it used to be, that's because there's a whole lot of behind-the-scenes (and expensive) infrastructure devoted to combating it.
That infrastructure has also made it considerably more difficult to run your own mail server. While you can still do it, there are many hoops to jump through if you want to keep your mail from being dropped on the floor by the first overzealous spam filter it encounters. Unless you're big enough to devote a staff just to that (or you have a lot of free time) it's easier (but more expensive) to just use MailGun, MailChimp, or one of their brethren.
I would guess that the annual cost of spam is somewhere in the billions.
Make it easier and more equitable to obtain addresses and ASNs.
Build a protocol to make edge/cloud computing more fungible. Similar to folding@home, but more flexible and taking into account network connectivity rather than just CPU cycles. Probably looks a lot like Cloudflare Workers/Sandstorm, but with no vendor lock-in.
DNSSEC only. TLS only for HTTP/consumer facing web.
Actually, on that topic: probably something simpler than TLS to replace it. TLS has too many features. x509 can stay, but the protocol needs reworking for simplicity.
Edit: I think this would result in protocols over walled gardens. The problem is JS makes HTTP/HTML everything to everyone.
For example: a public index service (akin to DNS) where all pages upload the hyperlinks they use. The end result is a massive graph that you can run PageRank on. You'd have to add some protections to avoid it getting gamed...
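A toy version of what you could run over such a graph; the graph, damping factor, and iteration count here are all made up:

    # Minimal PageRank over a tiny link graph uploaded to the index.
    links = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example"],
    }
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):  # iterate until the ranks roughly converge
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new

    print(sorted(rank.items(), key=lambda kv: -kv[1]))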
Email was the first decentralized social network and with it came bulletin board services and groups. Could these concepts have been developed a bit further or been a bit more user friendly while remaining decentralized?
2. SRV RRs instead of "well-known ports". Solves load balancing and fault tolerance, as well as allowing lots of like-protocol servers to coexist on the same IP address (see the lookup sketch after this list).
3. Pie-in-the-sky: IPv4 semantics w/ a 96-bit address space (maybe 64-bit).
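The SRV lookup from point 2, as it looks with today's tooling; this assumes dnspython is installed, the service name is just an example, and the real selection algorithm weights servers probabilistically (this sketch only sorts):

    import dns.resolver  # pip install dnspython

    answers = dns.resolver.resolve("_xmpp-server._tcp.example.com", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(f"connect to {rr.target}:{rr.port} "
              f"(priority {rr.priority}, weight {rr.weight})")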
I would also change HTTP to be modular, allowing new sub-protocols to be added that are automatically compatible with old clients. So we could do something like http/imap or http/ftp on top of HTTP; unprepared clients would still be somewhat compatible with this, but adapted clients could fully utilize those protocols. The thing is, HTTP is not the most perfect protocol, but it's a very pragmatic and straightforward one, which IMHO is the main reason it succeeded where other protocols failed. So instead of adding another batch of protocols that barely anyone will use and support, it would be better to hijack the existing solution and slowly extend it.
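The closest existing hook for this is the HTTP/1.1 Upgrade header (it's how WebSocket bootstraps today); the "imap" token below is hypothetical:

    # A client offers to switch protocols. An old server just ignores the
    # header and answers with plain HTTP; a willing server replies
    # "HTTP/1.1 101 Switching Protocols" and the connection speaks IMAP
    # from then on.
    request = (
        b"GET /mailbox HTTP/1.1\r\n"
        b"Host: mail.example.com\r\n"
        b"Connection: Upgrade\r\n"
        b"Upgrade: imap\r\n"          # hypothetical sub-protocol token
        b"\r\n"
    )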
Also, I would make it illegal to have proprietary protocols, data formats, and obfuscated apps. Give people the power they deserve. You can't force the cloud and businesses open, but at least the network should be balanced for all involved sides.
Indexing and page interoperability is done by exposing standard functions which yield the necessary metadata. For example, if you want your site to be indexable by a search engine, you expose a function "contentText()" which crawlers will call (and which the client browser might also call and display using the user's own reader app). In the simplest case, the function simply returns a literal.
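A sketch of what such a hook could look like on the page side; the function name comes from the comment above, everything else is invented:

    def contentText() -> str:
        # Simplest case: the page just returns a literal.
        return "Plain-text content of this page, ready for indexing."

    def index(page_module) -> str:
        # A crawler (or the user's own reader app) calls the same hook.
        return page_module.contentText()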
Core resources like libraries would be uniquely identified, cryptographically signed, versioned, and shared.
If someone wanted to make use of a browser pipeline like the standard one we have today, they might send code which does something like "DOMLibrary.renderHTML(generateDocument())". But someone else could write a competing library for rendering and laying out content, and it might take hold if it's better, and it wouldn't have to be built on top of the first one.
Also, the browser wouldn't necessarily be a separate app (though someone could make one); it would be a standard built-in feature of operating systems, i.e. the sandboxing would be done at the kernel level. With security properly handled, there'd be no difference between a native app and a web app, except whether you load it from your own filesystem or a remote one.
Typical page size would drop from around 1 MB to 64K-128K. Images would stream in after the initial page render, but since most pages would fit in 1-3 packets, you'd see pages pop in very quickly. This would also be very helpful for poor connections, mobile, and the developing world.
I'd fund a team to do this if I could figure out who would buy it.
The situation today, of course, is that the method that usually comes to mind when a person decides that something he or she has written should be put on the internet publishes what is essentially an executable for a very complex execution environment. And (except for a single non-profit with steadily decreasing mind share) the only parties maintaining versions of this execution environment with non-negligible mind share are corporations with stock-market capitalizations in the trillions.
We should have had DHCP prefix delegation for IPv4 so people wouldn't need NAT.
I tried to learn it a while ago and got super frustrated with how things are. The whole thing looked upside-down to me.
I mean, DKIM, SPF, DMARC, Bayesian filtering, etc. sound like band-aids upon band-aids to fix something that's really broken inside.
I'd go source-routed isochronous streams, rather than address-routed asynchronous packets.
I haven't updated my blog in a few years, but I'm still working on building the above when I have the time. (IsoGrid.org)
If it were possible to have infinite radio frequencies to use, then you could have every possible page of every website continuously broadcast on a different frequency. To load a page, all you have to do is tune in to that frequency. You'd then get the instant latest version of that site without having to wait. This gets more complicated for signed-in websites, but there's no reason you couldn't implement the same thing; just add more frequencies for more users. This wouldn't work for POST requests, but I think any GET request would work fine.
I would eliminate third-party requests for page assets. If a page needs images, CSS, JS, or the like, they would have to come from the same origin as the page that requests them (see the sketch after this list for the closest per-site approximation today).
1. This would eliminate third party tracking
2. This would eliminate abuse of advertising
3. This would eliminate CDNs and thus force page owners to become directly liable for the bandwidth they waste
4. It would make walled gardens more expensive
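The per-site approximation mentioned above is a Content-Security-Policy header that pins everything to the page's own origin; here is a hypothetical Flask app, just to show where the header goes:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def same_origin_only(resp):
        # 'self' restricts images, CSS, JS, fonts, etc. to this page's origin.
        resp.headers["Content-Security-Policy"] = "default-src 'self'"
        return resp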
---
I would also ensure that Section 230 is more precisely defined so that content portals are differentiated from the technology mechanisms their content traverses. The idea here is that Section 230 continues to protect network operators, web servers, and storage providers from lawsuits about content, but not the website operators that publish it.
1. Sites like Facebook and Twitter would become liable for their users' submissions, whether they moderate or not
2. Terms of service agreements would largely become irrelevant and meaningless
3. This would radically increase operational risks for content portals and thus reinforce content self-hosting and thus a more diverse internet
---
I would ensure that identity certificates were free and common from the start.
1. The web would start with TLS everywhere.
2. This would make available models other than client/server for the secure and private distribution of content
3. This would, in joint consideration of the adoption of IPv6, also eliminate reliance upon cloud providers and web servers to distribute personal or social content
As for security, I'd enhance the web browsers to do internet hygiene - removing unwanted bits and blocking information leaks. The browser should protect user accounts from being linked together.
Another major issue is payments, which today are a high-friction, backwards process and act like a gatekeeper. We need an equitable solution for micro-payments without excessive processing costs.
The last one is search: users can't customize the algorithmic feeds it generates, and startups can't deep-crawl to support inventing new applications. Search engine operators guard their ranking criteria and APIs too jealously while limiting our options.
So in short, ipfs/bittorrent like storage, nanny browser, easy payments and open search.
* A protocol for running distributed binary apps. Turning the browser into an operating system sucks for both users and developers.
Still an unresolved issue to this day AFAIK.
I'd add a way to boot abusers from the network. Botnets would become the problem of the IoT device owner, etc. This means protocols where routers could complain about an abuser to their upstream. If the upstream receives enough complaints/votes, the end device gets booted off.
I'd like some way to re-decentralize the internet. There are some wrong incentives in the current internet, causing centralization.
Protocols like DNS and SMTP are designed so that multiple servers can handle the traffic, and one going down isn't a big deal - the clients will just retry another server and the entire system keeps working.
Compare that to HTTP, which doesn't have retry mechanisms, which results in needing significant engineering to deal with single points of failure: fancy load balancers and other similar hard engineering challenges. Synchronization of state across HTTP requests would still be an issue, but that's already a problem in the load-balancer case, usually pushed to a common backing database.
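The SMTP-style failover described above looks roughly like this, using dnspython and the standard smtplib; error handling is simplified and the delivery callback is a placeholder:

    import smtplib
    import dns.resolver  # pip install dnspython

    def deliver(domain: str, attempt) -> bool:
        # Walk the MX hosts in preference order until one accepts the connection.
        for mx in sorted(dns.resolver.resolve(domain, "MX"),
                         key=lambda r: r.preference):
            host = str(mx.exchange).rstrip(".")
            try:
                with smtplib.SMTP(host, 25, timeout=10) as smtp:
                    return attempt(smtp)   # e.g. smtp.sendmail(...)
            except (OSError, smtplib.SMTPException):
                continue  # this server is down; the next one gets a chance
        return False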
I recall a possibly apocryphal story about TCP - namely that it was originally meant to be encrypted. Supposedly it was the NSA who had gentle words with implementors which led to the spec not including any mechanisms for encrypted connections.
So, encrypted by default TCP, for a start.
DNS should have a much simpler, possibly TOFU model to help with the usability elements. DNSSEC is just a nightmare and confidentiality is nonexistent.
Somewhat controversially, I'd ditch IPv6 in favour of a 64-bit IPv4.1 - blasphemy, I know, but the ROI and rate of adoption for IPv6 don't justify its existence, IMO.
And making more top level domains available from the outset instead of the fetishisation of the .com domain.
E2EE for all major protocols from the start (DNS, HTTP, SMTP, etc)
Protocols for controlling unwanted email and analytics (a non-disastrous version of the EU's cookie consent)
Build in a layer for onion routing as well, so that all the servers in the middle don't automatically know who you're trying to reach.
Sounds elevating... P-:
Disclaimer: 'This position is constructed to deal with typical cases based on statistics, which the reader may find mentioned, described, or elaborated on as tendencies.'
Sure, OT -- but you may also like thinking about it the other way (-;
Does it really need to take 5 seconds to load a website whose contents total 500kB on a 100Mbit/s connection with a 6-core CPU?
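Back-of-the-envelope for those numbers; the wire transfer itself is a rounding error next to the 5 seconds observed:

    page_bits = 500 * 1000 * 8           # 500 kB page
    link_bits_per_s = 100 * 1_000_000    # 100 Mbit/s link
    print(page_bits / link_bits_per_s)   # 0.04 s on the wire; the other
                                         # ~4.96 s is parsing, JS, and rendering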
But a decentralised way to do an internet search. So an unbiased, no-tracking search engine.
I think we could accomplish something similar just by having a global SSO that works for any website, totally decoupled from your actual identity or location. The only information it reveals might be some sort of quantifiable social standing based on how you interact with others on forums.
For one thing, operating systems' TCP/IP stacks. They should have come with TLS as a layer that any app could use just by making the same syscall they use to open a socket. For another, service discovery should not be based on hardcoded numbers, but on querying for a given service on a given node and getting routed to the right host and port, so that protocols don't care what ports they use.
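For comparison, this is roughly what the TLS half of that looks like with today's user-space libraries (Python's stdlib ssl module; the host is just an example):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
            print(tls.recv(200))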
Domain registration and validation of records should be a simple public-key system where a domain owner has a private key and can sign sub-keys for any host or subdomain under the domain. When you buy the domain you register your key, and all a registrar does is tell other people what your public key is. To prove you are supposed to control a DNS record, you just sign a message with your key; no more fucking about with what the domain owner's email is, or who happens to control, at this minute, the IP space a DNS record points to. This solves a handful of different problems, from registrars to CAs to nameservers and more.
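A sketch of the "prove control by signing" step, assuming something like Ed25519 keys and using the pyca cryptography package; the challenge string is made up and key storage is hand-waved:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    domain_key = Ed25519PrivateKey.generate()   # the key you register for the domain
    challenge = b"prove-you-control-example.com"
    signature = domain_key.sign(challenge)

    # Anyone holding the published public key can check the claim;
    # verify() raises if the signature doesn't match.
    domain_key.public_key().verify(signature, challenge)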
The security of the whole web shouldn't depend on the security of 350+ organizations that can all independently decide to issue a secure cert for your domain. The public key thing above would be a step in the right direction.
BGP is still ridiculous, but I won't pretend to know it well enough to propose solutions.
Give IPv4 two more octets. It's stupid but sometimes stupid works better than smart.
Give the HTTP protocol the ability to have integrity without privacy (basically, signed checksums on plaintext content). This way we can solve most of the deficiencies of HTTPS (no caching, for one) but we don't get MitM attacks for boring content like a JavaScript library CDN.
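The closest thing we have today is Subresource Integrity for exactly that CDN'd-library case: the page pins a hash of the expected plaintext. Computing the value (the file name is an example):

    import base64
    import hashlib

    library = open("jquery.min.js", "rb").read()
    digest = hashlib.sha384(library).digest()
    print('integrity="sha384-' + base64.b64encode(digest).decode() + '"')
    # Goes on the tag: <script src="https://cdn.example.com/jquery.min.js" integrity="...">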
And I would make it easier to roll out new internet protocols so we don't have to force ourselves to only use the popular ones. No immediate suggestion here, other than (just like port number limitations) it's stupid that we can't roll out replacements for TCP or UDP.
And I would add an extension that encapsulates protocol-specific metadata along each hop of a network. Right now if a network connection has issues, you don't actually know where along the path the issue is, because no intermediate information is recorded at all except the TTL. Record the actions taken at each route and pass it along both ways. Apps can then actively work around various issues, like "the load balancer can't reach the target host" versus "the target host threw an error" versus "the security group of the load balancer didn't allow your connection" versus "level3 is rejecting all traffic to Amazon for some reason". If we just recorded when and where and what happened at each hop, most of these questions would have immediate answers.
Next, we use 128-bit IP addresses and 32-bit port and protocol numbers, and absolutely forbid NAT as a way of getting more addresses. (No more IP address shortage.)
Next, all email has to be cryptographically signed by the sending domain using public key encryption. (No more spam)
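Domain-level signing exists today in bolted-on form as DKIM; here is a sketch of checking it with the dkimpy package, assuming incoming.eml is a raw RFC 5322 message with headers intact:

    import dkim  # pip install dkimpy

    with open("incoming.eml", "rb") as f:
        raw_message = f.read()

    if not dkim.verify(raw_message):
        print("reject: no valid signature from the sending domain")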
Next, selling internet access [EDIT]ONLY is strictly prohibited. Either connections can host servers, or find a different business model. Any node of the internet should be able to run a small server. (No more censorship in the walled gardens) Yes, I know it's stupid to try to host a server on the upload bandwidth I get from a cable modem, but it shouldn't be prohibited. If I want to see my webcams from anywhere, it shouldn't require someone else's server.
DNS should be done using a blockchain; it would need some iteration to get right.