How are you going to make the Internet better?
- 48-bit static IP addresses; 2^48 is about 280 trillion, which should be enough. 128 bits was overkill.
- Nodes, not interfaces, have IP addresses, so you can use multiple paths.
- IPSEC available but initially optional.
- Explicit congestion notification, so packet loss and congestion loss can be distinguished.
- Everything on the wire is little-endian, byte oriented, and twos complement.
- You can validate a source IP address by pinging it with a random number. If you don't get a valid reply, the IP address is fake. Routers do this the first time they hear from a new address, as a form of egress filtering. This contains DDOS attacks.
- Routers will accept a "shut up" request. If A wants to block B, it sends to a router on the path, the router pings A to validate the source, and then blocks traffic from B to A for a few minutes. This also contains DDOS attacks. Routers can forward "shut up" requests to the next router in the path, for further containment.
- Fair queuing at choke points where bandwidth out is much less than bandwidth in.
- Explicit quality of service. At a higher quality of service, your packets get through faster, but you can't send as many per unit time.
- No delayed ACKs in TCP.
- Fast connection reuse in TCP.
- Mail is not forwarded. Mail is done with an end to end connection. Mail to offline nodes may be resent later, but the sender handles that. Mail, instant messaging, and notifications are the same thing. Spam is still possible but hard to anonymize. If you want your mail buffered, use an IMAP server at the receive end.
- One to many messaging uses a combination of RSS and notifications.
- Something like Gopher should be available early. The Web would not have fit in early machines, but Gopher would.
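The source-validation bullet above can be sketched as a simple challenge-response: a router that hears from a new address sends it a random nonce and only accepts its traffic once the nonce comes back intact. A hypothetical Python sketch (the class and method names are mine, not any real router API):

```python
import os
import hmac

class SourceValidator:
    """Toy model of the challenge-response egress filtering described above."""

    def __init__(self):
        self.pending = {}       # addr -> outstanding nonce
        self.validated = set()  # addresses that echoed their nonce back

    def challenge(self, addr):
        """Send a random nonce to a newly seen source address."""
        nonce = os.urandom(16)
        self.pending[addr] = nonce
        return nonce  # in reality this would go out in a ping packet

    def handle_reply(self, addr, echoed_nonce):
        """Accept the address only if the nonce comes back intact."""
        expected = self.pending.pop(addr, None)
        if expected is not None and hmac.compare_digest(expected, echoed_nonce):
            self.validated.add(addr)
            return True
        return False  # no reply or wrong nonce: treat the address as spoofed

    def is_valid(self, addr):
        return addr in self.validated
```

A spoofed source never sees the nonce, so it can never echo it back, which is what contains the DDoS reflection case.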
1. Everything being 'free' by default drives us to ad-supported centralized services. Economics aren't a separable concern.
2. Too few IP addresses. (At least one of the pioneers, I forget which, said he pushed for longer addresses but was overruled. So the technical constraints probably did not force this.)
I'm not sure how to fix #1, but here's an approach from the 90s: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16....
Just before the Internet was opened to commercial use in the mid-90s, I would've made a perpetual prohibition of advertising over the Internet. Ads are what have ruined everything.
Take just about anything unpleasant about the Internet today and it is either directly a consequence of ads, or an indirect consequence of someone trying hard to make you see ads.
The last big things to secure are DNS (can be done with DNSSEC), and possibly somehow mandate TLS for connections (although you definitely don't want that all the time).
One big glaring problem is BGP, which we don't really have an answer for. Whereas "just use DNSSEC" pretty much solves the last big security hole above, BGP is still difficult because you basically have to have a system to attest the path for each BGP node. AS1 can't say "I have a path of length 5 through AS2 AS3 AS4 AS5 to AS6" unless that message can be attested to by each node, but then this runs into a bootstrapping problem (e.g. how do you reach those ASes to get some sort of key without going through AS2 first?) or trusting some authority as we do for SSL certs. God knows the first thing I do on any fresh install is uninstall the root certs from any sketchy government I don't trust.
Having worked on SDN in its heyday for some of the big players in the space, there are definitely good ideas in the space, but getting to adoption is damn difficult, bordering on impossible. I don't know what it will take to oust BGP, so we're kinda stuck with it for the foreseeable future.
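The attestation the comment above describes boils down to each AS signing the path-so-far, so a verifier can check that every hop really vouched for the route. A toy Python sketch: HMAC with made-up shared keys stands in for the asymmetric signatures a real BGPsec-style design would use, precisely to dodge the bootstrapping problem mentioned above.

```python
import hmac
import hashlib

# Hypothetical per-AS signing keys. In reality these would be public/private
# key pairs rooted in something like RPKI, which is where the bootstrapping
# pain lives.
KEYS = {as_n: f"secret-{as_n}".encode() for as_n in (2, 3, 4, 5, 6)}

def sign_hop(as_number, path_so_far):
    """An AS signs the path up to and including itself."""
    msg = ",".join(map(str, path_so_far)).encode()
    return hmac.new(KEYS[as_number], msg, hashlib.sha256).hexdigest()

def announce(path):
    """Build an announcement: each hop carries its signature over the prefix
    of the path ending at that hop."""
    return [(hop, sign_hop(hop, path[:i + 1])) for i, hop in enumerate(path)]

def verify(announcement):
    """Recompute every hop's signature; any tampering with the path breaks
    at least one of them."""
    path = [hop for hop, _ in announcement]
    return all(sig == sign_hop(hop, path[:i + 1])
               for i, (hop, sig) in enumerate(announcement))
```

Dropping a hop to advertise a shorter path fails verification, because the remaining signatures were computed over the original, longer path.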
The worst that happened to the internet is Google becoming evil. The internet circa 2000 was mind blowing.
The problem is that it's the 60s. My first thought was "security", but unless you can also teach them about elliptic curves, they're going to use the security of the 1960s, which as we now know isn't very secure.
Maybe at least having security baked in would help make it easier to switch to better security later, like how ssh can use different protocols as old ones are broken. But you'd have to make sure that you were very clever about how it was implemented so that it could be switched without major changes.
Another thought is "more IP addresses", but again you are in the 60s. The computers don't have enough memory to deal with IPv6 length addresses. So again the best you can do is try to set them up with easy upgrades.
Which makes me think the best suggestion would be to teach them about Moore's Law, which of course would have a different name, and try to push for every protocol being extensible as technology grows -- make sure that more octets can be added to IP addresses without them breaking, that security is baked into everything but everything has a way of negotiating a protocol so that it can be upgraded, and that there are no hard upper limits that are assumed and can't be changed.
Basically, teach them what we now know are software best practices -- constants shouldn't be hard coded in the software, they should always be in a separate config.
Others here seem to be redesigning the entire intarwebs. I'll just pick one thing I might have been able to digest.
DNS is brilliant - but insecure, and centralised. The dependence on registrars was a huge mistake. The competition for names is an unintended consequence; the DNS created artificial scarcity, which resulted in commercial businesses that produce nothing of value.
So something like GNS, I guess. https://tools.ietf.org/id/draft-schanzen-gns-01.html
A) Establish the expectation that websites "close" in the middle of the night for ~5-6 hours, local time / for each timezone. I don't know if it would best be done via cultural influence -- giving talks, writing essays, personal communication, testifying / making inroads with politicians -- or via creating some sort of protocol. The idea is to prevent the unhealthier aspects of internet binging and screen addiction.
B) Establish the expectation that internet comments are transcriptions of voice recordings. I.e., to leave a comment, you have to call a phone number and leave a message, which then gets transcribed as "the comment." To respond or reply to a post or a thread, you have to listen to the message and tone of voice of the person you are replying to. I don't think this would fix every internet dialogue, but it'd promote healthier interactions and less division.
In my book, the largest problems with the internet are techno-cultural, not technological.
- Get rid of ARP - just append the LAN address to the network address like other networks. By default LAN addresses are random. (Note IPv6 enables this basically.)
- Support encrypted DNS and authenticated BGP.
- Let DNS return other metadata including the port as well as the IP address.
- Let DNS caching work. Don't misuse short DNS timeouts for load balancing.
- Ingress traffic filtering - reject source IP addresses from outside the current prefix.
- Not IP per se, but let multipath work in the LAN (and give Ethernet a TTL so that packets don't loop forever if things go bad.)
- Eliminate (or minimize) broadcasts. Use unicast/multicast for DHCP, service lookup, etc..
- Support relocation/forwarding of TCP connections so they don't break when your IP address changes.
- Fix TCP congestion control so that the data rate doesn't decrease as latency increases.
- Seconding the addition of explicit congestion notification to TCP, to differentiate between packet loss and congestion loss.
- Encrypt the host name in SSL/TLS.
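The ingress-filtering bullet above can be sketched with Python's `ipaddress` module: at the edge, drop any packet whose source address falls outside the prefixes assigned to the interface it arrived on. The prefixes here are documentation examples, not real allocations.

```python
import ipaddress

# Prefixes legitimately assigned behind this edge interface (made up,
# taken from the RFC 5737 / RFC 3849 documentation ranges).
ALLOWED = [ipaddress.ip_network("203.0.113.0/24"),
           ipaddress.ip_network("2001:db8:1::/48")]

def accept_source(src):
    """BCP 38-style check: accept only packets whose source address
    belongs to one of the interface's assigned prefixes."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ALLOWED)
```

A packet claiming a source outside those prefixes is, by construction, spoofed at this point in the network and can be dropped.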
I have a theory that NAT killed the open web. There was this idea at the beginning that everyone could host their own website, email, etc. But when you're behind a router, you suddenly have to be quite technical in order to set all that up on the computer in your room. So only (bored) technical people bother. It's possible this is the reason platforms came to dominate.
The original idea was that protocols would allow anyone to participate by simply making their own webpage. But dynamic IP addresses, the DNS system, and even just HTML design were out of reach for most people, so that got lost, and monstrous websites under centralized control became the mediators for most people.
So if we could find a way to bake that decentralization into the protocols even more strongly while making them accessible to non-technical people, that's the change I would make.
The aim is to create a world where central platforms are not dominant, but any user can easily participate in the communication protocols without there being a central point to collect all the data or force changes from.
...of course, I have no idea how one would go about doing that, and therein lies the rub.
Failing that, Flash should have become open source and part of the W3 web standards, but opened up such that we could observe the code that's running.
(see http://www.youtube.com/watch?v=bpdDtK5bVKk&feature=youtu.be&... by Jaron Lanier, also see Ted Nelson)
Anything that allows me to send files to a device of a person I know on a direct connection without a service in between and regardless of our locations in the world. Still an unsolved problem AFAIK.
Having things given away for free, only to then be exploited for various purposes, is the reason these services are shit - because you are not the client; the guy who pays for your data or advertising space is.
How this incredible technical potential got translated into social reality says more about society than the technology[0]. If the stack of applications that has been built on top of it has become dystopic, it is because society had dystopia in its DNA. The technology simply allowed it to be expressed, so to speak.
By the same token, any technical tweak that maintained or improved this scalability would simply have led to an alternate dystopia. It may be counterintuitive, but maybe the only internet that would actually be "better" would have been a more local / less scaling version. A more gradual transition might have given society time to adapt, develop some defense mechanisms, and not be dominated by the lowest common denominator.
[0] Keep in mind that all communication technologies of the 20th century (phone, radio, TV) quickly degenerated and never delivered the utopia initially projected
2. Make use of DNS SRV records for all services. Why must HTTP be on port 80? Why not consult DNS to resolve the port too? Pretty much related to my first point.
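SRV records (RFC 2782) already carry a port and target alongside the name, which is exactly what this point asks for: the client learns the port from DNS instead of assuming 80. A sketch of the RFC's selection logic over already-fetched records, as (priority, weight, port, target) tuples; the example records are made up.

```python
import random

def pick_srv(records, rng=random):
    """Pick one SRV record: lowest priority wins, and within that priority
    class, weight biases a random choice (RFC 2782 style)."""
    best = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == best]
    total = sum(r[1] for r in candidates)
    if total == 0:
        return rng.choice(candidates)   # all zero-weight: pick uniformly
    roll = rng.uniform(0, total)
    for rec in candidates:
        roll -= rec[1]
        if roll <= 0:
            return rec
    return candidates[-1]

# Hypothetical records for some _svc._tcp.example.net lookup.
records = [(10, 60, 8080, "a.example.net"),
           (10, 40, 8081, "b.example.net"),
           (20,  0, 8082, "backup.example.net")]
priority, weight, port, host = pick_srv(records)
```

The client then connects to `host:port`; the priority-20 backup is only ever chosen if both priority-10 servers are removed from the candidate set.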
Basically: servers focus on serving their data, and then it's up to the user to figure out which "renderer" they want to use to display it. Ofc defaults would be provided.
But, say, you wanted to view tweets in a table form: no problem.
Or maybe you want a really wacky "whips the llama's ass" UI for podcasts: go for it.
-----
The big benefit of this is that it would allow for artistry in websites, rather than the boring old blue, black, white, grey material design.
I'd add some kind of built-in, frictionless, privacy-respecting, user-friendly, transparent payment/micropayment system. Built on open standards so we could have multiple competing UX's and the best one(s) would win.
Basically, think about how Patreon and Kickstarter (which are not without their flaws) have allowed people to support creators more or less directly. Now, imagine if we'd somehow baked something like that in to the internet itself.
The web is 99.9999% garbage and one of the biggest reasons is because we spent nearly two decades training people that everything on the interwebs was free which meant that it had to be ad-supported which means that nearly everything has been forced to pander to the absolute lowest common mass-market denominator.
Even with some kind of "good" micropayment system, sure, most stuff would still be free/ad-supported lowest common denominator crap. I have no illusions. Just look at every other form of media that has ever existed.
However, just imagine how books or movies or whatever would look if they had been de facto forced to be free for the earliest part of their existence.
I think we failed to appreciate how much the average user would need centralized services (Search & Social) to use the Web. Both of these services are around discoverability of content. Humans want a water cooler to visit and chit-chat, or an organized library to look for information.
Additionally, because accessing the Web was seen at first as "free" (outside your ISP), people would gravitate towards "free" centralized services like Facebook and Google.
This created a recipe for what we see today with the incredible power of these companies over so many aspects of our lives.
So what do I think should be different? I would have been more thoughtful about regulating these centralized services in the way the FCC regulates the airwaves & media companies, which is even more proactive than antitrust law. It's OK that they're profitable. That's good! But we ought to avoid single companies owning the entire search / social space.
Now you wrap my address - Individual > Household > Street > City > Airport - into encrypted shells that only reveal the next destination upon arrival within the data-organism.
These, of course, are valid only if a public ledger certifies their long-term existence.
Your reply will take time; it will travel on land, air, and water, by all means possible. But it will reach me, I promise you that. To add plausible deniability, all you need is hostile apps that participate in the meshnet without the user's consent. To add motivation to participate, just allow the transfer of crypto-currency - a currency backed by the promise of data transfer, no matter where, no matter what.
The web is a different story, especially social media. I'd like to make social media, and the web in general, more forgetful. "Digital natives" (second-flight millennials and Gen Z) are going to get screwed by the persistence and easy archiving of social media data. This is partially a result of the natural shift in cultural expectations that occurs over time, as well as a consequence of having their awkward-for-any-generation blunder years recorded forever. This is definitely more a legal change than a technical one, but I would mandate (1) a time span (such as 5 years) after which public social media posts must revert to author-only private unless consent is otherwise obtained, and (2) a prohibition against public mass archiving of social media posts from people who aren't public figures.
This type of mass archiving for the use of closed-off academic research libraries is acceptable, but merely going and hoovering up every public tweet or Youtube comment or Reddit post and putting it up with a public search engine shouldn't be permitted. Treat it like many countries treat the census, and only allow publicly opening up these archives far into the future (for example, the raw underlying questionnaires used for the Canadian census are not released to the general public until 92 years after collection). It's a different story for public figures such as politicians, but we shouldn't archive everything that everyone has said in perpetuity.
Consequences:
* There is no live user tracking.
* Access control can be user/password or ssh keys
* You always have an archive of what you read
* You always have an archive of chats
* Everything is in principle decentralized (whether it is in practice depends on whether people keep files).
* Clients are in control.
That, and probably mandatory native layer-3 encryption.
- A general idea: arrange so that sending packets costs way more than receiving, just like snail mail. (At the moment, a large website or spammer pays very close to $0 per message, while individuals pay much more per byte to receive junk.) This would encourage decentralisation and nicely small web pages, and discourage spam.
- And "disappear" XML. It might be "ok" (just) as document markup, but it's terrible for structured data and config, and for transfers.
As a coworker of mine pointed out, it's a little bit ridiculous that URLs on the web look like this: `more.and.more.general/more/and/more/specific`. They should really be `more.and.more/specific`, just like bang paths[1] were.
I expected Xanadu. Centralized, omniscient namespace, two-way links, micropayments, etc.
We got The Web. Which eschewed all of that.
Much as I hate The Web (repeating myself), I grudgingly accept that it probably succeeded because it wasn't Xanadu.
Even if Xanadu had launched, like a better AOL or Prodigy, I suspect most people wouldn't have grokked it.
Another triumph of Worse Is Better. Like PHP and JavaScript and so many others. Then hot patch it towards something less offensive.
It'd be like a broadcast of my brain: just a multi-subscription stream, a bunch of inputs, and reactions to those events.
Encourage the development of browsers for kids. Parents can configure their kids' computer to only run a "KidFox" browser which has all the security features turned on: it only allows white-listed sites that have been vetted by various agencies, denies escalated privileges, turns off all webcams, prevents remote hackers from taking over the kids' computer, etc.
Lastly, it is more of a mindset. Developers should take the attitude that every client computer, server, and database has been compromised to some degree. That is, we should have defense in depth and not rely solely on one mechanism to protect us from the bad guys.
Target goal: No cleartext password authentication. (No telnet as it was, no ftp as it was, no smtp as it was, etc., yada, and so on....)
Fallback goal: Get HTTPS right far sooner, with cryptographers working on SSL 1.0 from the beginning, with funding, and eliminate HTTP as soon as possible.
I'd push as many examples of capability-based security into the academic world as I possibly could, in the 1970s.
Alternatively, push a version of Pascal with a standard library, and drown the insane practice of ending strings with a null instead of knowing their lengths.
Pondering the implications is left as an exercise for the reader.
As in, if you need ad revenue & marketing to support your website, you simply don't exist. You can have a website, but no advertisements or "user engagement" nonsense. You're either free or you're offline.
So so many protocols (history), but how few could we get away with, and what would they look like?*
Why don't we have constructive computational contracts for computational work?
How do we make the Internet easier to understand?
How do we manage the agency problem? We yield far too much agency as a matter of daily life, our data is not our own, our decisions are shared with barely knowable third parties.
How do we design human computer interfaces with health, especially mental health, as primary constraint?
How rapidly can the EU coalesce around a combination of a RISC-V general purpose CPU (with suitable trimmings) and a SEL4-influenced-kernel, perhaps in Rust (https://gitlab.com/robigalia)?
How do we standardise on constraints of discourse such as those pertaining to offensive language or hate speech? How do we make it easier for people to communicate with kindness? Autohinting everywhere? Like a shellcheck for human bashfulness?
VR and AR are coming very soon and without care they will be shatteringly destructive of human life. Humans addicted to computationally modeled utility functions mediated by multi-sensory computer games?
How do we embed the lore in the experience? How do we make available all the references as delightful marginalia?
What areas of Mathematics and Physics do we need to study to get ahead of our problems? Category theory is beyond trendy, what's trending? How about rigorous dimensional analysis to match the type theory, or sumthin? How do we invite the world's smartest financiers to apply and share their thought more generously?
Can we settle on a basic curriculum? What functional minimum of linguistic, mathematical, physical, visual, and other skills do we need? Is lisp or a variant the first language we should learn, and if so how should we be able to learn it? If not lisp then what? APL? Fortran? Compiler forbid, Haskell?
How do we ensure that code and documentation are always in sync? How much time will this require?
How do we guarantee a standard of professional attainment and delivery of ICT expert that is globally effective? How do we standardise how we do, not just what we do?
How best can we help each other make our Internet an even better place?
(much spelling, apologies)
*4
1. Encrypted onion routing on layers that betray source/dest IP. 2. eSNI on all TLS connections. 3. Privacy-focused DNS.
The problem is how laws destroy fair competition by favoring those with the most $$.
Fix that, and you fix everything else (not gonna happen).
Or, if we're talking about the groundwork, spec cookies such that browsers must implement the cookie consent, and therefore sites can't build it in JS.
I passed the course.
How we distinguish warm vs cold, idk.
The only things broken on the internet are smartphones and closed IoT firmware.
I would change the web.
I would remove JS and design browsers to natively run python instead.
A centralized internet will inevitably do more harm than good.
And no JS. ;) /jk
As an alternative:
- Private and federated. Everyone has a personal server application which spans multiple personal computing, storage, and peripheral devices and supports federated access at varying levels of security.
- The server stores and manages a user's private data and anything else they feel like storing or sharing. (This requires unobtanium level security, but since we're imagining let's pretend this is a solved problem and see where it takes us.)
- Sharing of all kinds is user controlled and is opt-in across multiple competing federated networks. This includes social networks - with the difference that anyone can start their own network, for any purpose.
- Networks are decentralised and peer-to-peer, and do not store personal data, track, or profile users. This is a user-centric network where users own and control their data, not an industrial data silo network.
- Users can share different interest profiles and personal details across different networks with varying levels of security and implied credibility.
- Ads are opt-in, not opt-out, and defined by voluntary and informed profile and interest sharing, not involuntary and uninformed data harvesting.
- Anonymous microtransactions are a thing. Anyone can sell at scale with as little friction as possible.
- There are no cryptocurrencies and no blockchain tech, because generating random numbers with the equivalent of your own electrical substation is fucking stupid. There is a low-energy secure equivalent. (See unobtanium. Or is it?)
- A common kit of essential server apps is open sourced and community-maintained.
- Commercial and/or professional apps are available by hire or subscription. Servers have a multi-profile multi-layer security model which controls which layer of personal and/or server data outsider apps have access to.
- All paid-for apps supply full details of the schemas and file formats they use, to guarantee that users can freely transfer data to a competing app provider - so apps and services can't hold personal data hostage and have to compete on service quality, not on retention gaming.
- Hacking, malware, virus creation, phishing, and so on, are punished by deletion of personal server data and reduction to the most basic server hardware and software. For serious and repeat offenders, this is for life.
- IoT devices are treated as personal server peripherals with no external data sharing (except by opt-in.)
- Government and military networks use an expanded version of the same system. Municipal, military, and internal gov services run on separate private subnetworks which can only be accessed through authorised devices with extra ID verification, not through general public logins.
Basically it's a combination of device security, private ID (probably biometric), sacrosanct personal data protection, high user-controlled privacy, super low cost of entry for entrepreneurial service provision, squashing of local, national and international scales, and strong forcing of anti-monopolistic competition - the opposite of the current model, which seems to be about herding users into virtual pens owned by monopolists, applying various psychological patterns to control and trigger behaviour, monitoring behaviour and sentiment through minimal privacy, and having to deal with very leaky and insecure devices and systems.
(html (head ...) (body ...))
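That one-liner imagines HTML as s-expressions. A minimal sketch of what rendering it might look like, using nested Python tuples in place of real s-expressions; attributes are omitted for brevity, and the whole thing is illustrative, not any real library.

```python
def render(node):
    """Render a (tag, *children) tuple tree to markup; strings are text."""
    if isinstance(node, str):
        return node
    tag, *children = node
    inner = "".join(render(c) for c in children)
    return f"<{tag}>{inner}</{tag}>"

page = ("html",
        ("head", ("title", "hello")),
        ("body", ("p", "world")))
# render(page) gives "<html><head><title>hello</title></head><body><p>world</p></body></html>"
```

The appeal is that the document and its parse tree are the same structure, so there is no separate parsing step to disagree about.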
You leave it the hell alone! You leave it the hell alone -- because if you don't, upon returning to 2021, you'll discover that in addition to your wanted change -- there will be all kinds of unwanted "butterfly effects" in the world, resulting from that change, and not limited to the Internet, either! Like, propagating in and through actual reality -- not just constrained to a computer screen or virtual world! Unwanted/unforeseen/unexpected (but mostly unwanted!) "butterfly effects" (imagine just how scary these could be if you were unprepared for them -- the scariest Stephen King novel wouldn't do them justice!) resulting from Chaos Theory (which programmers know to be actual fact -- make a small change early on in a program -- get vastly differing results later on, as the program moves through TIME...)

So if it one day happens that you magically appear (through time travel, or other plot element) at DARPA in the 1960's -- then you take a quick look, like Clark Griswold in "National Lampoon's Vacation" when he takes a brief look at the Grand Canyon (all of a few seconds!), and you appreciate all that DARPA and all of the other early Internet researchers did -- and you leave all of it the hell alone! Yup, sorry, nothing to see here, nothing to change here, no changes for me! Just passing by, not going to touch a single thing!

You also appreciate the fact that while today's reality is a mess in many ways (and it is!) -- it could also (with Time Travel/Butterfly Effects/Chaos Theory) have been a much, much bigger mess(!) -- with Butterfly Effect horrors beyond your wildest understandings!

https://en.wikipedia.org/wiki/The_Butterfly_Effect
https://en.wikipedia.org/wiki/Butterfly_effect

Disclaimer: The above was written for thought-provoking and possibly (depending on the reader's viewpoint!) comedy purposes only!
You can still strip identification for things like posting in public forums, but for everything else, knowing where shit came from is critical. From spam email to, well, everything. It baffles me that spoofing caller ID is not only possible but that some people think it's important.
Along these lines I have a pet idea that a subset of IPv6 be geo-located, meaning your latitude, longitude, and possibly altitude are encoded in the IP address. This allows routing without the huge tables in the routers. Combined with the ability to verify that a packet came from its advertised location this is very powerful for security.
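The geo-addressing idea above can be sketched as packing latitude and longitude (rounded to roughly milli-degree precision) into the low bits of an IPv6 address. The prefix and the bit layout here are assumptions for illustration, not any real allocation.

```python
import ipaddress

# Made-up /32 prefix for "geo" addresses (2001:db8::/32 is the
# documentation range, used here purely as a placeholder).
GEO_PREFIX = int(ipaddress.ip_address("2001:db8::"))

def encode(lat, lon):
    """Pack lat/lon into the low bits: 18 bits of latitude, 19 of longitude,
    both shifted to be non-negative and quantized to 0.001 degrees."""
    lat_q = int(round((lat + 90) * 1000))    # 0..180000 fits in 18 bits
    lon_q = int(round((lon + 180) * 1000))   # 0..360000 fits in 19 bits
    return ipaddress.ip_address(GEO_PREFIX | (lat_q << 19) | lon_q)

def decode(addr):
    """Recover lat/lon (to ~0.001 degree) from a geo-encoded address."""
    low = int(addr) & ((1 << 37) - 1)
    lat_q, lon_q = low >> 19, low & ((1 << 19) - 1)
    return lat_q / 1000 - 90, lon_q / 1000 - 180
```

A router could then forward on longest-prefix towards the coordinates rather than consulting a global table, which is the routing-table win the comment is after (ignoring, of course, that physical topology doesn't follow geography).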
One way to verify the origin of data (email for example) is not to send it, but to send something akin to a URL (preferably better than that) so we have to at least be able to request the data from somewhere rather than have it sent to us anonymously.
Unfortunately, being able to verify the source of data also enables end-to-end encryption fairly easily, and nobody in power wants the public to have anything like that...
For example, 2d langs for HTML, CSS, JSON, others: https://jtree.treenotation.org/designer/
A 2D lang that replaces Markdown: http://scroll.pub/
You can have 2D langs for TCP/IP, DNS, HTTP, et cetera. A grid is all you need.
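The "grid" idea above can be sketched as indentation-is-structure: each line is a row of words, and one leading space per level gives the tree (roughly in the spirit of Tree Notation). This is an illustrative toy, not the real parser behind the links above.

```python
def parse(src):
    """Parse indentation-structured text into nested lists: each node is
    [words, child, child, ...], where words is the line split on spaces."""
    root = []
    stack = [root]  # stack[i] holds the node whose children sit at depth i
    for line in src.splitlines():
        depth = len(line) - len(line.lstrip(" "))
        node = [line.strip().split(" ")]
        del stack[depth + 1:]        # drop deeper levels we've backed out of
        stack[depth].append(node)    # attach to current parent
        stack.append(node)           # this node may parent the next line
    return root

doc = "html\n head\n  title hello\n body"
tree = parse(doc)
```

The whole grammar is two rules (split on spaces, count leading spaces), which is the "a grid is all you need" claim in miniature.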
I figured the math out 8 years ago (https://medium.com/space-net), and we're slowly getting there. Still early days, but good annual growth rate. I'd be surprised if it doesn't happen. The math makes too much sense.