HACKER Q&A
📣 flerovium

If the Internet were redesigned today, what changes would you make?


I mean the protocols, networking, connectivity. I don't mean the content of the internet.

Is DNS really a perfect protocol? How can it be improved?


  👤 nickysielicki Accepted Answer ✓
IPv6 dates back to 1997 and it really should have been adopted more urgently. IPv4 isn’t a huge issue but it sucks that so much of the internet is dependent on cloud providers because it’s the simplest way to get a public IP address. The decentralized web didn’t happen, in part, because of this.

Facebook was famously started and hosted in a dorm room. But this was only possible because of Harvard's place in the early history of the internet, and the fact that they had such an excess of addresses that Zuck could bind to a public IP address. We'll never know what tiny services could have blown up if people didn't hit this wall.

I started off with computers by hosting garrysmod servers. My brother started off with computers by hosting a website dedicated to the digital tv switchover in Wisconsin (lol). This was only possible because my dad was a software engineer and paid a bit extra to get us 5 dedicated IP addresses. If he hadn't understood that, who knows what my brother or I would be doing today.

Anyway, I say IPv6.


👤 bryans
I think the most crucial functionality missing from a security standpoint is the ability for one IP address owner to tell another that they're refusing service, with the owner of the refused IP being required to filter that traffic at THEIR routing layer. This would effectively eliminate nearly every type of DDoS, and it would also shift the responsibility for handling the attack away from the target's provider and place it squarely on the providers of the compromised systems -- which is how it should be.
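
To make the mechanism concrete, here's a rough sketch of what such a signed refusal notice might carry. Every field name is invented for illustration; this is not a real protocol.

    # Hypothetical sketch of a signed "refusal" notice; all fields invented.
    import hashlib
    import hmac
    import time

    def make_refusal(refused_src: str, target: str, ttl_s: int, key: bytes) -> dict:
        """Build a notice the target's network would send upstream.

        The refused party's provider would be obliged to drop traffic
        from refused_src toward target until the notice expires.
        """
        notice = {
            "refused_src": refused_src,  # IP whose traffic is being refused
            "target": target,            # IP refusing service
            "expires": int(time.time()) + ttl_s,
        }
        payload = "|".join(str(notice[k]) for k in ("refused_src", "target", "expires"))
        # Authenticate the notice so upstreams can check it came from the target.
        notice["mac"] = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        return notice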

Imagine a world without DDoS or Cloudflare's 3+ second redirect.


👤 dredmorbius
DNS itself is certainly problematic.

Incorporating a time dimension into domains, such that it's explicitly recognised, and greatly restricting transfers, would be one element.

Ownership of domains should sit with the registrant, not the registrar.

Characterspace should be explicitly restricted to 7-bit ASCII to avoid homoglyph attacks. It's not a shrine to cultural affirmation, it's a globally-utilised indexing system, and as such is inherently a pidgin.

Other obvious pain points:

- BGP / routing

- Identity, authentication, integrity, and trust. This includes anonymity and repudiation. Either pole of singular authentication or total anonymity seems quite problematic.

- Security. It shouldn't have been permitted to be bolted on after the fact.

- Protocol extensibility. Standards are good, but they can be stifling, and a tool for control. Much of the worst of the current Internet reflects both these problems.

- Lack of true public advocacy. It's deeply ironic that the most extensive and universal communications platform ever devised is controlled by a small handful of interests, none answerable to the public.

- More address space. IPv6 is a problematic answer.

- Better support for small-player participation. Servers are still highly dependent on persistence and major infrastructure. A much more robust peered / mesh / distributed protocol set that could be utilised without deep expertise ... well, it might not make things better, but we'd have a different problem-set than we face presently.

- Explicit public funding model for content, and a ban or very heavy tax on most advertising.

- A much keener, skeptical, and pessimistic early analysis of the interactions of media and society.


👤 Nevermark
End to end encryption and tracking-resistance at a low enough protocol level that most developers or users would never know the pain of even thinking about either.

Imagine how many times security and privacy have been reimplemented in different contexts.

And that patchwork approach will keep incentivizing security breaches and manipulation through dark surveillance until ... well, there's no end in sight.


👤 psion
IPv6 gets mentioned plenty, and I will take the side that it should have been rolled out WAY sooner than it was, and in a way that made it easier to do what I call the Apple Method: just get it out there and the people will adapt to it.

DNS is a horrid mess that should have been designed with ease of reading in mind. And I know DNS predates it by decades, but I think a data format similar to JSON would have made the system a bit more extensible, and given people a chance to make it a more dynamic system.
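
To make that concrete, a hypothetical zone expressed as JSON might look like this. All field names are invented for illustration; this is not a real format.

    import json

    # A zone as plain data; real DNS encodes the same facts in a terse
    # binary/zone-file format. Field names below are illustrative only.
    zone = {
        "name": "example.com",
        "ttl": 3600,
        "records": [
            {"type": "A",   "host": "@",   "value": "192.0.2.10"},
            {"type": "MX",  "host": "@",   "value": "mail.example.com", "priority": 10},
            {"type": "TXT", "host": "@",   "value": "v=spf1 mx -all"},
            # Extensibility: unknown keys could simply be ignored by old resolvers.
            {"type": "A",   "host": "www", "value": "192.0.2.11", "geo_hint": "us-east"},
        ],
    }
    print(json.dumps(zone, indent=2))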

E-Mail was brilliant in its time, but for the love of all things explosive and holy is it bad. Just the fact that its base design has no E2E encryption is a problem.

My biggest beef with the current internet is HTTP. Everything is built on it, and it isn't the greatest system. There have been so many systems and protocols that did so many things -- FTP/gopher/irc/etc -- and most of them have gone the way of the dodo. A few hold-outs in the dedicated tech world still use irc, but we could have done so much with cloud-based systems built on FTP. And if we had a new spec for irc, would we need Slack/Discord/MS Teams/etc? They could then all talk to each other. We shouldn't be trying to reinvent the wheel; we should be using these older services in our platforms.

And don't get me started on cloud. The worst term a marketing team ever got hold of. At its core, it's just somebody else's computer. And again, so much of it is built on HTTP. Not many people know or remember that the X Window System on *nix had a distributed system built in. Log into a server, set one environment variable to your IP address (as long as you were running X yourself), and you could run programs on the server with the GUI on your computer.


👤 IX-103
Can we fix email so that it provides authentication, integrity, and confidentiality protections by default? And also while we're at it make it support binary attachments so that we're not stuck wasting bandwidth and disk doing base64 for everything?
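
For a sense of the waste: base64 turns every 3 bytes into 4, roughly 33% overhead before MIME line wrapping even enters the picture. A quick check:

    import base64

    payload = bytes(3_000_000)           # a 3 MB binary attachment
    encoded = base64.b64encode(payload)
    print(len(encoded) / len(payload))   # 1.333...: every 3 bytes become 4
    # Add MIME line wrapping at 76 chars and the overhead climbs further.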

👤 ollerac
Web app capabilities built in to the foundation of HTML and the browser.

Elements that know they're user-editable (images with upload controls attached to them to replace the image that's there, dates that trigger calendar controls when clicked, and inline editing for other elements that actually works and is consistent across browsers).

An offline database in the browser that has the concept of user accounts and resetting passwords baked in, as well as subscription payments, and has a universal API that can be automatically deployed to any hosting platform.

All of this would make building web apps and games trivial for so many more people -- write some basic HTML, upload it to one of a million platforms of your choice with a click, and you have a testing ground for a product that can grow.

It would be a way for anyone on the internet to build something useful and get it out there without learning a thousand technologies with a thousand nuances.


👤 Turing_Machine
The biggest mistake was making it trivial to forge email headers.
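
For context on what "trivial" means here: base SMTP checks neither the envelope sender nor the From: header, which is exactly the gap SPF/DKIM/DMARC were later bolted on to cover. A minimal sketch of that gap, assuming a local relay on port 25 purely for illustration:

    # The envelope sender and the From: header are independent, and base
    # SMTP verifies neither. Addresses below are invented for the demo.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "ceo@bigcorp.example"       # arbitrary; nothing checks it
    msg["To"] = "victim@example.com"
    msg["Subject"] = "Wire transfer"
    msg.set_content("Please see attached invoice.")

    with smtplib.SMTP("localhost", 25) as s:  # assumes a local relay for the demo
        s.send_message(msg, from_addr="someone-else@another.example")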

While the spam problem is much better than it used to be, that's because there's a whole lot of behind-the-scenes (and expensive) infrastructure devoted to combating it.

That infrastructure has also made it considerably more difficult to run your own mail server. While you can still do it, there are many hoops to jump through if you want to keep your mail from being dropped on the floor by the first overzealous spam filter it encounters. Unless you're big enough to devote a staff just to that (or you have a lot of free time) it's easier (but more expensive) to just use MailGun, MailChimp, or one of their brethren.

I would guess that the annual cost of spam is somewhere in the billions.


👤 6gvONxR4sf7o
Domain parking/squatting is disallowed. You’d have to make the rules in some imperfect way, but it’d be better than what we have today.

👤 jpgvm
Would redesign BGP with more resistance to Byzantine actors. Not so much because many folks announce routes maliciously (though that definitely happens) but because people always manage to screw it up accidentally.
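
Something like RPKI's origin validation is one hedge against both cases. A toy sketch of the idea, with illustrative data and all signature checking elided:

    import ipaddress

    # RPKI-style origin validation, sketched: accept an announcement only if
    # a signed authorization (ROA) covers that prefix for that origin AS.
    roas = [
        # (authorized prefix, max allowed prefix length, origin ASN)
        (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    ]

    def origin_valid(prefix: str, origin_asn: int) -> bool:
        net = ipaddress.ip_network(prefix)
        for roa_net, max_len, asn in roas:
            if asn == origin_asn and net.subnet_of(roa_net) and net.prefixlen <= max_len:
                return True
        return False

    print(origin_valid("192.0.2.0/24", 64500))  # True: matches the ROA
    print(origin_valid("192.0.2.0/24", 64666))  # False: hijack or fat-finger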

Make it easier and more equitable to obtain addresses and ASNs.

Build a protocol to make edge/cloud computing more fungible. Similar to folding@home but more flexible and taking into account network connectivity rather than just CPU cycles. Probably looks a lot like Cloudflare Workers/Sandstorm but with no vendor lock-in.

DNSSEC only. TLS only for HTTP/consumer facing web.

Actually, on that topic: probably something simpler than TLS to replace TLS. It has too many features. X.509 can stay, but the protocol needs reworking for simplicity.


👤 ohCh6zos
Get rid of most of JS to prevent 'appification' and keep the focus document-centered.

Edit: I think this would result in protocols over walled gardens. The problem is JS makes HTTP/HTML everything to everyone.


👤 smsm42
Security by design. Old internet protocols were built with the mindset of "we're all friends here, we won't make each other's life hard, right?" Well, that didn't exactly work out. That's why we have spam, DNS hijacking, DDoS, botnets, etc. If some prophet could have convinced the people who built the early internet that security is as much of a concern as, say, availability and fault-tolerance, I am sure we would have far fewer of these problems.

👤 anonu
Just a thought experiment: could we have designed better protocols so as not to need companies like Google (search) and Facebook (social network)?

For example: a public index service (akin to DNS) where all pages upload all hyperlinks they were using. The end result is a massive graph that you can do PageRank on. You'd have to add some protections to avoid it getting gamed...
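
For the curious, the PageRank step itself is tiny. A toy version over an invented three-site link graph; a public index would hold `links` at internet scale:

    # Toy PageRank over a tiny link graph, to make the idea concrete.
    links = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example"],
    }

    def pagerank(links, damping=0.85, iters=50):
        pages = list(links)
        rank = {p: 1 / len(pages) for p in pages}
        for _ in range(iters):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, outs in links.items():
                for out in outs:
                    # Each page splits its rank evenly across its outlinks.
                    new[out] += damping * rank[page] / len(outs)
            rank = new
        return rank

    print(pagerank(links))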

Email was the first decentralized social network and with it came bulletin board services and groups. Could these concepts have been developed a bit further or been a bit more user friendly while remaining decentralized?


👤 EvanAnderson
1. SCTP instead of TCP (doesn't suffer from the maddening layering violation in TCP/IP w/ the host IP being part of the TCP tuple). This would solve mobile IP.

2. SRV RRs instead of "well known ports". Solves load balancing and fault tolerance, as well as allowing lots of like-protocol servers to coexist on the same IP address (see the sketch after this list).

3. Pie-in-the-sky: IPv4 semantics w/ a 96-bit address space (maybe 64-bit).
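
On point 2: RFC 2782 already specifies how clients pick among SRV targets (lowest priority first, weighted random within a tier). A rough sketch with invented records:

    import random

    # RFC 2782-style SRV selection: lowest priority wins; within a priority
    # tier, pick randomly in proportion to weight. Records are illustrative.
    srv_records = [
        # (priority, weight, port, target)
        (10, 60, 5060, "big.example.com"),
        (10, 40, 5060, "small.example.com"),
        (20, 0,  5060, "backup.example.com"),  # only used if tier 10 is down
    ]

    def pick_server(records):
        best = min(r[0] for r in records)
        tier = [r for r in records if r[0] == best]
        targets = [(port, host) for _, _, port, host in tier]
        weights = [w for _, w, _, _ in tier]
        return random.choices(targets, weights=weights, k=1)[0]

    print(pick_server(srv_records))  # usually big.example.com, sometimes small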


👤 slightwinder
I would switch DNS to a pure tree and add discoverability as well as locality aspects. By which I mean, put the TLD at the front: not xyz.com, but /com/xyz. And for the first levels you would automatically get a listing of all valid entries back. Similarly, I would add local entries, like postal codes and states. So people could do something like /usa/texas/ and get a list of all cities, or /usa/postalcode/90210/ and get a list of all companies and websites in the area.
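
A toy sketch of how such a path-style lookup with built-in listing might behave (tree contents invented):

    # Sketch of the proposed path-style namespace with listing built in.
    tree = {
        "com": {"xyz": {"www": "192.0.2.1"}},
        "usa": {"texas": {"austin": {}, "houston": {}}},
    }

    def resolve(path: str):
        node = tree
        for part in path.strip("/").split("/"):
            node = node[part]
        # A dict node is a directory: return its children (discoverability).
        return sorted(node) if isinstance(node, dict) else node

    print(resolve("/com/xyz/www"))  # 192.0.2.1
    print(resolve("/usa/texas"))    # ['austin', 'houston']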

I would also change HTTP to be modular, allowing new sub-protocols to be added that would be automatically compatible with old clients. So we could do something like http/imap or http/ftp on top of http, and unprepared clients would still be somewhat compatible with it, while adapted clients could fully utilize those protocols. Thing is, http is not the most perfect protocol, but it's a very pragmatic and straightforward one, which IMHO is the main reason it succeeded where other protocols failed. So instead of adding another batch of protocols that barely anyone will use and support, it would be better to hijack the existing solution and slowly extend it.

Also, I would make it illegal to have proprietary protocols, data formats and obfuscated apps. Give the people the power they deserve. You can't force the cloud and businesses open, but at least networks should be balanced for all involved sides.


👤 tbabb
The "browser" is a blank execution sandbox with a rendering context. The remote server sends programs (using something like WASM) with a standardized ABI. The program can use the rendering context to put stuff on the screen or receive user input.

Indexing and page interoperability is done by exposing standard functions which yield the necessary metadata. For example, if you want your site to be indexable by a search engine, you expose a function "contentText()" which crawlers will call (and which the client browser might also call and display using the user's own reader app). In the simplest case, the function simply returns a literal.
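
A rough sketch of the shape this could take. `contentText` is from the example above; everything else (the class layout, the `ctx.draw_text` call) is invented:

    # A page as a program that exposes standard, crawler-callable functions
    # instead of serving markup. Layout and ctx API are hypothetical.
    class Page:
        def contentText(self) -> str:
            # Simplest case from the comment: just return a literal.
            return "Hello! This page is about redesigning the internet."

        def render(self, ctx) -> None:
            # Draw into the browser's rendering context however we like.
            ctx.draw_text(self.contentText())

    # A crawler never runs render(); it only calls the metadata function:
    print(Page().contentText())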

Core resources like libraries would be uniquely identified, cryptographically signed, versioned, and shared.

If someone wanted to make use of a browser pipeline like the standard one we have today, they might send code which does something like "DOMLibrary.renderHTML(generateDocument())". But someone else could write a competing library for rendering and laying out content, and it might take hold if it's better, and it wouldn't have to be built on top of the first one.

Also, the browser wouldn't necessarily be a separate app (though someone could make one); it would be a standard built-in feature of operating systems, i.e. the sandboxing would be done at the kernel level. With security properly handled, there'd be no difference between a native app and a web app, except whether you load it from your own filesystem or a remote one.


👤 sjtindell
I have this experience where I come into new orgs or projects and I think if we could just start fresh with the old lessons, we could build it so much cleaner. It's partially true. But mostly we end up building a really clean base, then as the business requirements come up, we need to add back in all the things I thought were cruft. I think lots of tech is this way. Try to build a new DNS, and you'll just end up reinventing a (probably worse version of) DNS.

👤 nikanj
IPv6 is a monstrosity designed by numerous committees. I'd take IPv4 and add a few more octets to the address.

👤 graiz
I've thought about this a lot. I'd throw away HTML/CSS and start over with a client centric rendering protocol that is based on presentation first and semantics second. The language would be run-time compiled to describe exactly what needs to be rendered on the page and would be streamable to prevent rendering locks.

Typical page size would drop from ~1 MB to 64K-128K. Images would stream in after the initial page renders, but since most pages would fit in 1-3 packets, you'd see pages pop in very quickly. This would also be very helpful for poor connections, mobile, and the developing world.

I'd fund a team to do this if I could figure out who would buy it.


👤 hollerith
I would have tried to arrange it so that whatever method comes to the mind of the average person for publishing a document on the internet publishes a static document.

The situation today of course is that the method that usually comes to mind when a person decides that something he or she has written should be put on the internet publishes what is essentially an executable for a very complex execution environment, and (except for a single non-profit with steadily decreasing mind share) the only parties maintaining versions of this execution environment with non-negligible mind share are corporations with stock-market capitalizations in the trillions.


👤 wmf
Maybe make the DNS less US-centric (e.g. .com vs. .co.uk) and replace the PKI with DANE.

We should have had DHCP prefix delegation for IPv4 so people wouldn't need NAT.


👤 superasn
I would wish there was something better for SMTP.

I tried to learn it a while ago and got super frustrated with how things are. The whole thing looked upside-down to me.

I mean, DKIM, SPF, DMARC, Bayesian filtering, etc. sound like band-aids upon band-aids to fix something that's really broken inside.


👤 liveoneggs
DNS is far from perfect: TLDs are stupid and always have been, the arbitrary rules for records are stupid, MX records are ridiculous, UDP size constraints on answers... never mind, I could keep going. DNS sucks.

👤 PhaseMage
I want a network that scales to trillions (or more) of top-level nodes (rather than the ~2 million supported by IP and BGP.) I want everyone to be able to host a router.

I'd go source-routed isochronous streams, rather than address-routed asynchronous packets.

I haven't updated my blog in a few years, but I'm still working on building the above when I have the time. (IsoGrid.org)


👤 gorbachev
The number one change I'd make is not to trust users to behave.

👤 bobajeff
I would prefer if the internet architecture was a mesh network and that sites/pages were content addressable.

👤 soheil
I'd make it less transactional and more realtime. No HTTP requests or TCP packets; instead, something more like old TV broadcasts, where there is a continuous signal always being sent.

If it were possible to have infinite radio frequencies, then you could have every possible webpage of every website continuously broadcast on a different frequency. To load a page, all you have to do is tune in to that frequency. You then get the latest version of that site instantly, without having to wait. This gets more complicated for signed-in websites, but there is no reason you couldn't implement the same thing by adding more frequencies for more users. This wouldn't work for POST requests, but I think any GET request would work fine.


👤 ergocoder
One small thing: spell referrer correctly.

👤 doctor_eval
I’d make sessions independent of IP addresses. So that as I move between wifi, fixed and mobile networks, my sessions would remain active even as my interface addresses change.

👤 austincheney
The web was once diverse, but now it is not.

I would eliminate third party requests for page assets. If you need images, css, js, or such it would come from the same origin as the page that requests it.

1. This would eliminate third party tracking

2. This would eliminate abuse of advertising

3. This would eliminate CDNs and thus force page owners to become directly liable for the bandwidth they waste

4. It would make walled gardens more expensive

---

I would also ensure that Section 230 is more precisely defined, such that content portals are differentiated from the technology mechanisms on which that content traverses. The idea here is that Section 230 continues to protect network operators, web servers, and storage providers from lawsuits about content, but not the website operators that publish it.

1. Sites like Facebook and Twitter would become liable for their users' submissions, moderated or not

2. Terms of service agreements would largely become irrelevant and meaningless

3. This would radically increase operational risks for content portals and thus reinforce content self-hosting and thus a more diverse internet

---

I would ensure that identity certificates were free and common from the start.

1. The web would start with TLS everywhere.

2. This would make available models other than client/server for the secure and private distribution of content

3. This would, in joint consideration of the adoption of IPv6, also eliminate reliance upon cloud providers and web servers to distribute personal or social content


👤 a1371
A slightly different answer: I would change the domain name ownership model. Right now, to buy kayaks you go to amazon.com, and to travel to the Amazon you go to kayak.com; it's confusing for end users. It also empowers squatters rather than innovators. And the whole .org debacle showed that community representation is not at the center of things.

👤 visarga
I'd make a content addressable system to have better caching, balancing, backups and history of changes. I'd like the internet to preserve its knowledge into the future.
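
The core of content addressing is small enough to sketch: the address is just a digest of the bytes, so any copy from any peer is verifiable.

    import hashlib

    # Content addressing in one line: the "address" is a digest of the bytes,
    # so any copy anywhere can serve it and the client can verify what it got.
    def address_of(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    page = b"<html>...</html>"
    addr = address_of(page)

    def fetch_ok(received: bytes, addr: str) -> bool:
        return address_of(received) == addr  # tamper-evident by construction

    print(addr, fetch_ok(page, addr))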

As for security, I'd enhance the web browsers to do internet hygiene - removing unwanted bits and blocking information leaks. The browser should protect user accounts from being linked together.

Another major issue is payments which today is a high friction backwards process and acts like a gatekeeper. We need an equitable solution for micro-payments without too large processing costs.

The last one is search: users can't customize the algorithmic feeds search engines generate, and startups can't deep-crawl to support inventing new applications. Search engine operators guard their ranking criteria and APIs too jealously while limiting our options.

So in short, ipfs/bittorrent like storage, nanny browser, easy payments and open search.


👤 DeathArrow
* A better IRC with security, anonymity and scalability, so people don't actually need Slack, MS Teams, Zoom, Google Meet, Whatsapp, iChat and tens of other apps just to be able to talk to each other.

* A protocol for running distributed binary apps. Transforming the browser in an operating system sucks for both users and developers.


👤 pengaru
HTTP should have had some kind of URI event pub/sub mechanism from the start, including notifications for URIs pending removal to facilitate archival without polling or whatever crazy ad-hoc madness you want to call archive.org's priceless efforts.

Still an unresolved issue to this day AFAIK.


👤 hyperman1
I'd add some micropayment system. Today's internet is based on ads because that's the only way to get some value out of users. If the internet had something like paid phone numbers, people could get actual money for their site without sleazy ad companies. Minitel did it, and it worked.

I'd add a way to boot abusers from the network. Botnets would become the problem of the IoT device owner, etc. This means protocols where routers could complain about an abuser to their upstream. If the upstream receives enough complaints/votes, the end device gets booted off.

I'd like some way to re-decentralize the internet. There are some wrong incentives in the current internet, causing centralization.


👤 zdw
More client side retry built into protocols, and the ability to easily failover from the client-side.

Protocols like DNS and SMTP are designed so that multiple servers can handle traffic, and one going down isn't a big deal - the clients will just retry against another, and the entire system keeps working.

Compare that to HTTP, which has no retry mechanism, which results in needing significant engineering to deal with single points of failure: fancy load balancers and other similarly hard engineering challenges. Synchronization of state across HTTP requests would still be an issue, but that's already a problem in the load balancer case, usually pushed to a common backing database.
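
The DNS/SMTP-style pattern, sketched generically: give the client the full server list and let it fail over itself (hostnames invented):

    import socket

    # The MX/NS pattern applied to any protocol: the client holds the whole
    # server list and fails over on its own; no load balancer in the middle.
    SERVERS = [("srv1.example.com", 443), ("srv2.example.com", 443)]

    def connect_with_failover(servers, timeout=3.0):
        last_err = None
        for host, port in servers:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError as err:   # refused, unreachable, timed out...
                last_err = err       # ...just move on to the next server
        raise last_err

    # sock = connect_with_failover(SERVERS)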


👤 dustymcp
JavaScript would be murdered over and over again..

👤 hkt
Security.

I recall a possibly apocryphal story about TCP - namely that it was originally meant to be encrypted. Supposedly it was the NSA who had gentle words with implementors which led to the spec not including any mechanisms for encrypted connections.

So, encrypted by default TCP, for a start.

DNS should have a much simpler, possibly TOFU model to help with the usability elements. DNSSEC is just a nightmare and confidentiality is nonexistent.
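
A TOFU sketch in the spirit of SSH's known_hosts, with storage and fingerprints purely illustrative: remember the first key seen for a name, and warn loudly if it ever changes.

    # Trust-on-first-use: pin the first key seen per name, flag any change.
    pinned: dict[str, str] = {}  # name -> key fingerprint

    def check_key(name: str, fingerprint: str) -> bool:
        if name not in pinned:
            pinned[name] = fingerprint  # first use: trust and remember
            return True
        return pinned[name] == fingerprint

    print(check_key("example.com", "ab:cd:ef"))  # True, first contact
    print(check_key("example.com", "00:11:22"))  # False, key changed: warn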

Somewhat controversially, I'd ditch IPv6 in favour of a 64-bit IPv4.1 - blasphemy, I know, but the ROI and rate of adoption for IPv6 don't justify its existence, IMO.


👤 cmacleod4
I'm coming to this thread late, so I'm amazed that no-one has mentioned Named Data Networking - see https://en.wikipedia.org/wiki/Named_data_networking and https://named-data.net/ . The basic idea is to manage the "what" rather than the "where". This looks like the future to me.

👤 laserlight
Here is an academic discussion [0]. It was published in 2005. It may not cover more recent problems, but the proposed design principles are still valid.

[0] Tussle in Cyberspace: Defining Tomorrow’s Internet https://groups.csail.mit.edu/ana/Publications/PubPDFs/Tussle...


👤 ChrisArchitect
Plenty of recent related discussion:

https://news.ycombinator.com/item?id=27663618


👤 cm2187
A mail protocol that guarantees the identity of the sender + E2E encryption.

And making more top level domains available from the outset instead of the fetishisation of the .com domain.



👤 rlt
Larger IP address space.

E2EE for all major protocols from the start (DNS, HTTP, SMTP, etc)

Protocols for controlling unwanted email and analytics (a not disastrous version of EU’s cookie consent)


👤 JoshTriplett
Asymmetric keys as the only addresses, and an authenticated version of Virtual Ring Routing as the routing protocol. That guarantees you're talking to the correct server for an address, that addresses never need to change, and that a server can have as many different addresses as it needs to.
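
A sketch of the self-certifying-address idea, using Ed25519 via the `cryptography` package; the digest-truncation detail is an invented choice for illustration:

    import hashlib

    # Self-certifying addresses: the address *is* (a digest of) the public
    # key, so "am I talking to the right server?" reduces to a key check.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()
    public_bytes = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    address = hashlib.sha256(public_bytes).hexdigest()[:32]
    print(address)  # anyone can verify a handshake signature against this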

Build in a layer for onion routing as well, so that all the servers in the middle don't automatically know who you're trying to reach.


👤 mongol
I first understood the question as "what should have been done differently if the internet were designed today". I think that is the more interesting question. What mistakes were made that later needed less-than-optimal workarounds? What comes to my mind is character encodings and i18n: they were solved in many different ways in different protocols.

👤 jotatoquote
Protocols (?!) ...and the idea to pump 'content' - instead of launching it?

Sounds elevating... P-:

Disclaimer: 'This position is constructed to deal with typical cases based on statistics, which the reader may find mentioned, described or elaborated on in a tendency-based way.'

Sure, OT - but you may also like thinking the other way about it (-;


👤 londgine
I'm not sure how this can be done, but remove location association from IP. It annoys me to no end that Google changes the language based on my location rather than my preference (sometimes with no obvious way to fix it). It would also stop the annoying country restrictions on websites.

👤 fahrradflucht
For DNS specifically, CNAME responses applying to the whole domain name instead of a specific record type was definitely a mistake in hindsight. It's because of that decision that we can't have CNAMEs at the apex, which gets in the way in a bunch of situations.

👤 vfulco2
Use whatever technology is required to make it decentralized so tyrants and other evil-doers have no chance of stopping free speech. The future of the world depends on it vs. the future of a relatively few self-anointed, banal, vile, narcissistic wealth-stealers.

👤 rini17
Some way to make spoofing harder. Complete cryptographic authentication and filtering of every packet is probably technically unfeasible even now. But certainly we can think of other "soft" measures.

👤 forgotmypw17
I would separate SSL and HTTP into modules, so that the former was a local service which provided a local HTTP connection, and I could keep using browsers I love with most websites, not just my own.

👤 AussieWog93
Performance, specifically making requests as parallel as possible.

Does it really need to take 5 seconds to load a website whose contents total 500kB on a 100Mbit/s connection with a 6-core CPU?


👤 Kuinox
By making it truly decentralized: Like in a mesh network.

👤 DeathArrow
I would design a protocol to make access to porn handier.

👤 aaronmdjones
Make BCP38 mandatory.

👤 westoque
Personally, I'm targeting the browser. Web applications have evolved so much that we now need access to more hardware to further improve our web apps. Imagine having direct access to the GPU and other things; we could then run full apps in the browser, think Photoshop/Final Cut Pro, even games that require full 3D rendering. To an extent, this also requires us to remove the shackle of JavaScript being the only allowed scripting language in the browser.

👤 akmittal
Not sure if that should be part of the internet or the web.

But a decentralised way to do an internet search. So an unbiased/no-tracking search engine.


👤 yjftsjthsd-h
Make the actual protocol for HTTP always be TLS, always port 443. Insecure HTTP is just self-signed.

👤 DarknessFalls
Several responses have mentioned removing anonymity from the equation to enforce good behavior.

I think we could accomplish something similar just by having a global SSO that works for any website, totally decoupled from your actual identity or location. The only information it reveals might be some sort of quantifiable social standing based on how you interact with others on forums.


👤 TheRealNGenius
Native payments, video, and search

👤 pgt
IP address per device instead of per network interface. Lisp instead of JavaScript.

👤 swebs
TLDs should be reversed. So instead it would be com.google.www/maps

👤 arminiusreturns
Anonymity built in, ever since I heard Eben Moglen's story about it.

👤 WarOnPrivacy
That regulation be limited to those with proven proficiency.

👤 perryizgr8
No remote code execution is allowed on the web. You can download binaries and run them, but not in the browser's context.

👤 ChrisArchitect
What inspired you to ask this?

👤 ebanana
It would have a constitution.

👤 short12
No ads at all

👤 throwaway83736
Everything can always be improved. But there are some things that would make life for a lot of people easier.

For one thing, operating systems' TCP/IP stacks. They should have come with TLS as a layer that any app could use just by making the same syscall they use to open a socket. For another, service discovery should not be based on hardcoded numbers, but on querying for a given service on a given node and getting routed to the right host and port, so that protocols don't care what ports they use.

Domain registration and validation of records should be a simple public key system where a domain owner has a private key and can sign sub-keys for any host or subdomain under the domain. When you buy the domain you register your key, and all a registrar does is tell other people what your public key is. To prove you are supposed to control a DNS record, you just sign a message with your key; no more fucking about with what the domain owner's email is, or who controls at this minute the IP space that a DNS record is pointing to. This solves a handful of different problems, from registrars to CAs to nameservers and more.
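
A sketch of that model with Ed25519 (via the `cryptography` package); the registry layout is invented, but the point is that proving control is just a signature check:

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The registrar only maps domain -> pubkey; control of a record is
    # proven by a signature, nothing else. Data below is illustrative.
    owner_key = ed25519.Ed25519PrivateKey.generate()
    registry = {"example.com": owner_key.public_key()}  # all a registrar stores

    record = b"www.example.com A 192.0.2.7"
    signature = owner_key.sign(record)                  # owner signs the record

    # Anyone can verify against the registered key; raises if it doesn't match.
    registry["example.com"].verify(signature, record)
    print("record accepted")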

The security of the whole web shouldn't depend on the security of 350+ organizations that can all independently decide to issue a secure cert for your domain. The public key thing above would be a step in the right direction.

BGP is still ridiculous, but I won't pretend to know it well enough to propose solutions.

Give IPv4 two more octets. It's stupid but sometimes stupid works better than smart.

Give the HTTP protocol the ability to have integrity without privacy (basically, signed checksums on plaintext content). This way we can solve most of the deficiencies of HTTPS (no caching, for one) but we don't get MitM attacks for boring content like a JavaScript library CDN.
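
A sketch of signed-plaintext integrity: the body travels in the clear (so any cache can serve it), alongside a signature over its hash that the client verifies end to end. All details here are invented for illustration:

    import hashlib

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Integrity without privacy: plaintext body plus an origin signature
    # over its hash. Caches can store the body; clients can verify it.
    origin_key = ed25519.Ed25519PrivateKey.generate()
    body = b"/* jquery-like library, cacheable by anyone */"

    digest = hashlib.sha256(body).digest()
    signature = origin_key.sign(digest)  # shipped alongside the plaintext body

    # A client (or a cache) verifies end-to-end integrity with no TLS tunnel:
    origin_key.public_key().verify(signature, digest)
    print("body intact")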

And I would make it easier to roll out new internet protocols so we don't have to force ourselves to only use the popular ones. No immediate suggestion here, other than (just like port number limitations) it's stupid that we can't roll out replacements for TCP or UDP.

And I would add an extension that encapsulates protocol-specific metadata along each hop of a network path. Right now if a network connection has issues, you don't actually know where along the path the issue is, because no intermediate information is recorded at all except the TTL. Record the actions taken at each hop and pass them along both ways. Apps can then actively work around various issues, like "the load balancer can't reach the target host" versus "the target host threw an error" versus "the security group of the load balancer didn't allow your connection" versus "level3 is rejecting all traffic to Amazon for some reason". If we just recorded when and where and what happened at each hop, most of these questions would have immediate answers.


👤 geocrasher
Death to email.

👤 smoldesu
Separate the portions of the web into non-profit and for-profit sections. Web 2.0 ruined the internet with the introduction of paywalls and 'premium' content that was never premium in the first place. You're welcome to live in that dystopian hellscape, but I'd frankly prefer the ability to hit a switch and instantly disable all bullshit.

👤 joshu
identity as a core layer. having been on the wrong end of a large service that was relentlessly attacked and spammed, i understand the value of anonymity but it probably should not have been the default.

👤 wetpaws
Instating capital punishment for the use of any scripting language

👤 holomorphically
Cryptographic identity verification and trusted clocks. As in, cryptography would be built-in and everyone would have a set of keys that they could use to verify ownership of digital content by using cryptographic signatures and timestamps.

👤 reilly3000
Canonical User Identity. I know this is massively controversial, but if we were starting at the beginning, an internet without anonymity would be a kinder, safer internet. One where data asymmetry and its enterprises wouldn't have a toehold. A place that extends reality rather than contorts it. Privacy and anonymity have a place, and should be a right for consumption, but never for creation.

👤 mikewarot
First, you have to secure all the computers. Capability Based Security for everyone. This lets us all run mobile code without danger. (No more virus or worm issues)

Next, we use 128-bit IP addresses, 32-bit port and protocol numbers, and absolutely forbid NAT as a way of getting more addresses. (no IP address shortage)

Next, all email has to be cryptographically signed by the sending domain using public-key cryptography. (No more spam)

Next, selling internet access ONLY [EDIT] is strictly prohibited. Either connections can host servers, or find a different business model. Any node of the internet should be able to run a small server. (No more censorship in the walled gardens) Yes, I know it's stupid to try to host a server on the upload bandwidth I get from a cable modem, but it shouldn't be prohibited. If I want to see my webcams from anywhere, it shouldn't require someone else's server.

DNS should be done using a blockchain; it would need some iteration to get it right.