[books/papers suggestions are welcome!]
The ITU made a high-frontier claim that was their core mission in the 10-20 year window. They made this claim 10 years ago. I'm not seeing strong evidence it's being solved, aside from trickle-down repurposing of old 3G systems and Chinese investment in not-very-good infra to offset their coltan mining in Africa.
An acquaintance lives 10km outside metro Bangkok. It's an amazing city, as plugged in and switched on as anywhere else in Asia; the rat's nest of wires is a testament to ad-hoc hackery. 10km outside the city margins, it's a dead zone for high-speed service. And Thailand has a huge rural population.
Rinse and repeat for Africa.
Too many people are stuck with slow, expensive, and unreliable cable and/or DSL ISPs. Some have no choice whatsoever. Some others get to choose only from two equally awful options. We need legally available competition EVERYWHERE.
This is a major regression from open source desktop software, and IMO is the reason open source web applications haven’t taken off more.
Akamai solved POPs (points of presence). Equinix solved DCs. Both are marching towards table stakes in the context of internet infrastructure (not business models). We have lots of undersea cables / international expansions ongoing and planned. It is now more of a cost-efficiency problem than an infrastructural problem.
We have a decent Ethernet roadmap [1]: Terabit Ethernet, petabit undersea cable by 2030. If anything, the only internet infrastructure problems I see are closer to the consumer / client side of things, where fibre cables are not being deployed. But I sense the pandemic has changed a lot of perspectives on fast internet, and governments are now more willing to put pressure on making FTTH a requirement.
If we look at mobile, even carriers were a little too optimistic in their data usage projections. 5G proved sufficient in terms of tower capacity, with enough headroom for expansion without requiring small / nano cells.
It might be a different set of infrastructural problems: more regulation of the internet on a per-country / per-jurisdiction basis, which would require internet infrastructure to adapt to those scenarios.
30 years ago, people would've said the same things about routers, so I think it's possible with the right UI/incentives.
Around 37 percent of the world's population (2.9 billion people) have never used the Internet (1 in 3 people), per the UN’s 2021 report on the topic.
https://www.itu.int/en/mediacentre/Pages/PR-2021-11-29-Facts...
The Great Firewall is the prototype, but as the world becomes multipolar again, regional powers will want to control what kinds of data are imported/exported.
I can't help but feel distributed computation is a really fascinating problem, and if the socioeconomic wave we're going through now sustains even a fraction of this current moment, it'll be a long-term engineering focus.
It's impossible for me not to recognize that the differences between blockchains mirror those between database designs as the web scaled from the nineties. First, read capacity was needed to support e-commerce. Then came social platforms, where read/write needed to scale and adopt distributed models and eventual consistency.
Now we're scaling distributed computation, and all sorts of interesting problems emerge. If things turn out to be even remotely what an idealist might lead you to believe, we're at the cusp of rearchitecting every single layer of computation: networking, machine code compilation and execution, file storage.
PS I did a couple of cmd+f for keywords to find someone answering with this context and didn't find any. That seems crazy.
Digital notary: a third party (digitally) signing a transaction or other document exchange.
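A minimal sketch of the idea, with stdlib stand-ins: the notary countersigns the hash of a document plus a timestamp. A real notary would use an asymmetric signature scheme (e.g. Ed25519) so anyone can verify; the HMAC and key names here are purely illustrative assumptions.

```python
import hashlib
import hmac
import json
import time

NOTARY_KEY = b"notary-secret-key"  # assumption: the notary's private key

def notarize(document: bytes, notary_key: bytes = NOTARY_KEY) -> dict:
    # The notary attests to the document's hash at a point in time.
    digest = hashlib.sha256(document).hexdigest()
    record = {"sha256": digest, "timestamp": int(time.time())}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(notary_key, payload, hashlib.sha256).hexdigest()
    return record

def verify(document: bytes, record: dict, notary_key: bytes = NOTARY_KEY) -> bool:
    # Anyone holding the notary's verification key can check the attestation.
    if hashlib.sha256(document).hexdigest() != record["sha256"]:
        return False
    payload = json.dumps(
        {"sha256": record["sha256"], "timestamp": record["timestamp"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(notary_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The key property is that neither party to the exchange can later alter the document without invalidating the notary's countersignature.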
(See: that time that a bunch of Google traffic started getting routed through Russia. Or the time that YouTube became inaccessible to the entire world)
1. IPv4 will persist, possibly forever. There's really no compelling reason to migrate to IPv6 other than address space and we've had decades at this point of getting around this problem with various flavours of NAT.
2. Ossification. We've taken the quite reasonable step of discarding any packets or traffic we don't understand from a POV of minimizing threats. For example, there were cases of bypassing security using packet fragmentation. But this makes it increasingly difficult to extend the protocols (eg reliable connectionless messaging aka a reliable UDP).
3. We don't really have a good solution for roaming. If you switch hotspots and get a new external IP, it'll typically break your connections. A lot of work has been done to work around this (eg carrier-grade NAT for mobile IPs), but identifying an endpoint by (address, port) (or just address for IPv6) is less than ideal.
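One workaround for point 3 is QUIC's approach: identify the connection by an opaque connection ID carried in each packet rather than by (address, port), so a client that roams to a new address keeps its session. A toy sketch of the server-side bookkeeping, not the real QUIC wire format:

```python
import secrets

class Server:
    """Toy server that keys session state by connection ID, not by
    (address, port), so a session survives a client address change."""

    def __init__(self):
        self.sessions = {}  # connection ID -> session state

    def accept(self) -> bytes:
        # Hand the client an opaque ID to include in every packet.
        conn_id = secrets.token_bytes(8)
        self.sessions[conn_id] = {"bytes_received": 0, "last_addr": None}
        return conn_id

    def handle_packet(self, conn_id: bytes, src_addr: tuple, payload: bytes) -> dict:
        # Lookup ignores the source address entirely; a new address
        # just means the path changed, not that the session died.
        session = self.sessions[conn_id]
        session["last_addr"] = src_addr
        session["bytes_received"] += len(payload)
        return session
```

With this scheme, a client that sends from ("1.2.3.4", 5000) and then roams to ("5.6.7.8", 6000) continues the same session uninterrupted.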
The ability to do SDR for wireless networks with smartphones. 5G is not a good solution.
Better security for routers, and generally better software security regulations, which are almost nonexistent right now. If cars have safety regulations, software should, too.
A bit like games need complicated netcode to compensate for latency.
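The standard netcode trick is dead reckoning: while waiting for the next authoritative update, the client extrapolates an entity's state from the last known position and velocity, then blends toward the server's value when it arrives. A minimal sketch (function names are illustrative, not from any real engine):

```python
def extrapolate(last_pos, velocity, last_time, now):
    # Predict where the entity is now, given its last known state.
    dt = now - last_time
    return tuple(p + v * dt for p, v in zip(last_pos, velocity))

def blend(predicted, authoritative, alpha=0.5):
    # Smoothly correct toward the server's authoritative position
    # instead of snapping, to hide latency from the player.
    return tuple(p + alpha * (a - p) for p, a in zip(predicted, authoritative))
```

The same pattern, predict locally and reconcile on update, shows up in any distributed system that has to hide round-trip latency from its users.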
I think the most practical and affordable way of connecting people in remote and rural areas is terrestrial wireless rather than satellite. Either way, the wireless transmission itself needs to be reliable in the face of interference, multipath, and physical obstructions (trees, foliage, buildings, limited ground clearance, etc).
Most efforts to improve wireless technology have focused on urban settings, not rural or remote areas, for obvious reasons. Ironically, the areas where there is less money to be made are the places where most of the digital divide is happening.
I'm currently working on a next-generation reliable wireless connection technology, and the initial results are very encouraging. Hopefully it can contribute significantly to improving connectivity and reducing the digital divide worldwide in an affordable manner. Please contact me if you are interested in collaborating on this new technology.
For global initiatives focused on improving telecom and Internet infrastructure, there is the Telecom Infra Project, supported by major players including Facebook, Intel, Nokia, NTT, etc:
I don't know about the US, but in most parts of Europe we have reached such levels of speed and low ping that before long it would be smarter and cheaper for ISPs to build more towers than wired solutions.
I think we need DNS for all.
A "secrets service" where people can put a short encrypted message - big enough for an IPv6 address + future extendability - for all to see.
Then users can swap keys, agree on a "secrets service" (or many of them) to store their secret/IP address, and skip any other centralisation altogether.
Apps (open source or otherwise) can then leverage this service to let users simply talk directly to each other.
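The scheme above can be sketched in a few lines: Alice publishes her IPv6 address encrypted under a key shared with Bob, and the "secrets service" only ever stores ciphertext. The cipher here (a SHA-256-derived keystream XOR) is a stdlib stand-in for illustration only; a real design would use an authenticated scheme such as a NaCl box, and all names are assumptions.

```python
import hashlib
import ipaddress
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream from the shared key and a nonce.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, addr: str) -> tuple:
    # Encrypt a 16-byte IPv6 address; (nonce, ciphertext) is what gets
    # published to the secrets service for all to see.
    plaintext = ipaddress.IPv6Address(addr).packed
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, stream))

def unseal(key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    # Only someone holding the shared key recovers the address.
    stream = _keystream(key, nonce, len(ciphertext))
    packed = bytes(a ^ b for a, b in zip(ciphertext, stream))
    return str(ipaddress.IPv6Address(packed))
```

Since the service never sees plaintext, it needs no trust beyond availability, and users can replicate the same record across many such services.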
That's not the only thing peer-to-peer could be good for, nor is it the only implementation possible, but I'm using torrenting as an example because it's a good peer-to-peer technology that works and has been working well for at least 10 years (since trackerless torrents and reduced reliance on trackers became part of the standard).
I am biased in this answer because I am building https://hotg.ai/ but I see the world going towards a more fragmented ecosystem. So:
- Portable computations - sending your workloads to any place where the data is
- Good local storage that keeps you compliant with local laws
(edited for format)
Some background maybe first.
There is a massive global Internet distribution challenge that revolves around the cost/bit equation. The layers are:
1. Undersea cable networks - USD 0.5-1B to deploy over a multi-year project. Tens of millions to maintain, with regular cable cuts. Typically now only deployed through consortiums of Internet and Telecom companies. They carry 99% of the world’s international data.
2. Inter-City Distribution - National Fiber and Copper networks which connect tier 1-3 cities, towns and villages with a backbone to the nearest Internet Exchange OR telco data center (which in-turn would have a hard line back to Undersea landing stations).
3. Last Mile or within city/urban connectivity - last & middle mile within a city/town connecting homes, offices, towers and DCs.
IMHO the challenges remain but get worse from top to bottom, with costs and complexity often jumping by orders of magnitude from one layer to the next, last mile obviously being the craziest.
Telcos in most countries still own most of the inter-city distribution and tier 2/3/4 POPs (points of presence), leasing out capacity from POPs to ISPs and enterprises. The investment in laying these cables is EXTREMELY prohibitive and is the main cause of high per-Mbps rates, high latency and onerous terms when it comes to in-country network distribution (a big example is South Africa). Numbers range up to orders of magnitude more expensive than undersea cables (e.g. $1.6B for Telstra Australia in Phase 1, $130-150B for the US), primarily due to right-of-way and operational costs of deployment.
People are now moving from rural areas to tier 2-3 cities/towns, and there is also reverse migration from megacities like Manila to tier 2-3 cities/towns (as evidenced by rising cities like Cebu, Bali, Miami, Austin, Pune, etc, where housing is more affordable and remote earning potential is nearly the same). Bandwidth and latency demands are going up 100% year-on-year in tier 1-3 cities, especially in WFH COVID times. Starlink and others in LEO will definitely help with most rural unconnected places (<1-2% of the total bulk). Telcos will eventually build out tier 1 cities with fiber more robustly (since they have to deliver on 5G small cells and potentially 6G).
Mid-tier cities and towns, where by far the largest bulk is accumulating, will need a LOT of attention and more latency-optimized, cost/bit-minimized backbones.
Finally, humanity's push into deep space in the next decade will require building out infra to support robotic and autonomous missions. Thinking of deep space objects as islands or continents is a helpful model, and tightbeaming laser comms to them ("undersea cables but in space") could help address some bandwidth allocation problems in the early days (but local distribution will again have challenges).
There's a lot of room to optimize latency whether it's removing bufferbloat, L4S, or cISP.
It is all a steaming pile of garbage.
not tryna be "that guy", but, isn't the internet concerned with interstructure? When you get to a LAN behind a firewall or code inside a walled garden, ok, that's infrastructure.
e.g. Akamai
That's interstructure, though it might require support from your infrastructure.