HACKER Q&A
📣 suyash

Is HN having trouble keeping up today?


Hacker News (this site) seems to be going down today just like Facebook. Is there a massive DDoS attack occurring, or is it just a coincidence?


  👤 dang Accepted Answer ✓
It seems to be a side-effect of Facebook being down, attracting so much attention to HN that it might as well be a DDoS attack.

Sorry everyone—there are performance improvements in the works which I hope will make a big difference, but it's not something we can roll out today.

p.s. You can read HN in read-only mode if you log out. Should be way faster.


👤 robocat
Edit: way faster if you click logout.

Running in anonymous mode (clear browser cache) makes it responsive, but read-only.

Edit: I suggest using a private tab, two browsers, or two profiles: stay logged out in one to read comments quickly, and if you wish to comment or vote, paste the comment URL into the other. Side effect: you improve responsiveness for everyone!


👤 pshc
It was my perception 10 years ago that HN was powered by a single-threaded process with the entire post history loaded into memory (I'm not sure how accurate that perception was). I'm curious how much has changed.

edit: found this https://news.ycombinator.com/item?id=16076041


👤 bevacqua
It's just people flocking to HN for reliable news on the massive outage affecting Facebook's global services, and refreshing far more than they usually would. A data point of one.

👤 iainctduncan
It's Facebook-outage rubbernecking. Everyone who has been involved with keeping services online is like "oh, heads are going to fucking roll today, let's go see what folks are saying happened". haha

👤 CivBase
> General tip: If HN is being laggy and you're determined you want to waste some time here, open it in a private window. HN works extremely quickly if it doesn't know who you are.

Saw this from a user by the name of bentcorner in the "Facebook-owned sites are down" thread[0]. Figured I'd repost it here. It works because HN can cache pages more efficiently when it doesn't have to include user data.

[0] https://news.ycombinator.com/item?id=28750513
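The mechanism can be sketched roughly like this (a toy model with invented names, not HN's actual code): anonymous requests can all share one cached render, while logged-in pages embed per-user state (vote arrows, karma, showdead), so each one has to be generated fresh.

```javascript
// Toy model of why logged-out HN pages are faster: every anonymous
// reader can be served the same cached render, but a logged-in page
// is personalized and must be regenerated per request.
const pageCache = new Map();

function servePage(itemId, user, render) {
  if (user === null) {
    const key = 'item:' + itemId;          // one cache entry serves every logged-out reader
    if (!pageCache.has(key)) {
      pageCache.set(key, render(itemId, null));
    }
    return pageCache.get(key);
  }
  return render(itemId, user);             // personalized page: rendered fresh every time
}
```

Under load, the anonymous path does one render and then serves memory; the logged-in path pays the full render cost on every request, which matches the slowdown people are seeing.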


👤 lifeisgood99
Getting the "We're having some trouble serving your request. Sorry!" page a lot.

👤 collegeburner
My guess is that with Facebook properties down, people are procrastinating here. Also, news.yc tends to be a good place to find timely updates from smart people on outages like this.

👤 nicoburns
I rather suspect that it's because of the Facebook outage, but indirectly. HN is always slow when major tech services (Facebook, Google, AWS, GitHub, Slack, Fastly, Cloudflare, etc.) are down. The HN community seems to congregate here for updates, causing higher-than-usual traffic.

👤 i_like_apis
It's just heavy traffic from people who would otherwise be on Facebook.

👤 huhtenberg
... but only if you are logged in :-?

👤 lucasverra
Indeed, same from Paris, France. Most probably MZ is checking it a lot.


👤 polote
This is normal; it happens every time a big communication platform is down. Slack, Teams, Zoom, Google, FB...

If you want to read HN, log out or use private mode.


👤 immnn
It actually feels like a periodic DNS outage. On Friday some of my servers weren't responding like they should. Half an hour later everything was fine again. However, some VMs running in Azure were still having problems. Today, this huge FB/WA/Instagram outage.

I bet we'll be reading more about this in the news soon.


👤 0des
I can't load user profiles at the moment, and some page loads are just spinning wheels. Not sure if that's related to network issues, or if a lot of us are just rushing to HN to check/comment.

EDIT: We're having some trouble serving your request. Sorry!


👤 indianpianist
Yeah, it's loading slower than usual today. One time I even got the "We're having some trouble serving your request. Sorry!" page.

👤 agustif
Yes. Pages started loading slower, and then it threw some "can't respond to your request right now" errors!

Seems the FB outage is making people go to other places in the meantime, maybe.


👤 mxuribe
So... maybe HN has dependencies on FB (such as a tracking pixel, etc.), and with their outage, timeouts kill things for HN? Not blaming HN, just wondering if second-order degraded experiences start happening because of FB's heavy gravitational pull.

👤 H8crilA
This post was sponsored by the liar's paradox and written by Bertrand Russell himself.

(meaning if HN is down you can't post about that, and if it's not down you won't post it; proving that you can never have a post "HN is down" on HN)


👤 raimille1
Noticing the same here

👤 BatteryMountain
For me it lags when scrolling. Very odd.

👤 zanethomas
Possibly because of the huge number of commenters laughing about Facebook being down.

👤 grensley
I've always found it interesting that this fairly influential site rarely changes and hasn't kept up with modern internet standards.

I get that it's an ambiguously useful project for a venture capital firm and all, but you would think that they would throw it a bone or two given the level of influence it has.

It also seems like they're trying to make some kind of point about dead-simple web design and running a web server off of old laptops or something. I'm not sure, but there's definitely some combination of neglect and hubris going on with this site.


👤 varjag
No, just the sixty thousand Facebook engineers flocking here simultaneously in search of technical clues.

👤 exikyut
For what it's worth, my own desire to see if I can help out somehow has definitely only increased over the past few months, as I've noticed the messages at the tops of threads about pagination. I don't have any particular skills that would make me a hand-in-glove fit (e.g., I'd need to learn Arc...), but surely there are tons of little boring things I could usefully do that would take the heat off. If I'm thinking this way, surely there are (many?) others...?

For example, something tiny that I'd love to fix is to split vote() in half and move the UI-update logic into an Image.onload handler, so that the UI only changes if the upvote actually committed. I'm OCD about this because I actually use voting as bookmarking, and recently realized that some unknown percentage of my upvotes/bookmarks have sadly been lost because the auth= value had expired (e.g. in the case of a weeks-old tab) or because I was on bad 4G on my phone, while the UI state bore no relation to the network response.
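A minimal sketch of that idea (all names invented, not HN's actual vote() code). The request loader is injected so the commit/fail logic can be exercised outside a browser; on the page it would wrap `new Image()` with `onload`/`onerror` exactly as described:

```javascript
// voteWithConfirm flips the arrow only once the vote request actually
// succeeds; a failed request (expired auth= token, flaky 4G) surfaces
// as a retry state instead of silently pretending the vote stuck.
function voteWithConfirm(url, ui, loadImage) {
  loadImage(
    url,
    () => ui.markVoted(),   // server committed the vote: now update the UI
    () => ui.showRetry()    // request failed: leave the arrow alone
  );
}

// In the browser, loadImage would be something like:
//   const browserLoader = (url, ok, fail) => {
//     const img = new Image();
//     img.onload = ok;     // vote committed
//     img.onerror = fail;  // expired auth=, network error, ...
//     img.src = url;
//   };
```

The point of the split is just that the UI transition lives in the success callback rather than running unconditionally before the request resolves.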

Something else that I'd be very happy to put effort into, after being told how to approach the problem :), would be to make the site screen-reader-accessible: https://www.youtube.com/watch?v=G1r55efei5c&t=386s (size-XL volume warning - unfortunately the speaker volume is like 5% of the screen reader volume).

A longer-term project that I also think would also be very useful would be to implement OAuth2, so that users would be able to safely attach external logins to their accounts without needing to supply their actual user passwords (which thankfully none of the alternate UIs have tried to do AFAIK). This could support fine-grained scopes like "can see own comment votes", "can vote", "can post replies", etc. IMHO the best way to do this would be to have a central pool of manually-approved app registrations; this is definitely the most complicated approach :/, but it means the entire system would depend on a human who could go "...that OAuth2 app in particular is behaving weird. [pause]" which would be very tricky to achieve with an autonomous system (where everyone independently creates their own tokens that have no semantic value). This sort of thing utterly fails at large scale (see also: Google, YouTube, etc), but I think it would be perfect for HN. While implementing this it would also make sense to support 2FA using TOTP.
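A toy sketch of the authorization check such a scheme could enable (app names, scope strings, and the token shape are all invented for illustration; this is the proposed design, not anything HN has):

```javascript
// Central pool of manually approved app registrations, per the
// proposal above: a token is only honored if its issuing app is still
// approved AND it carries the exact scope for the requested action.
const approvedApps = new Set(['example-hn-reader']);

function authorize(token, action) {
  if (!approvedApps.has(token.app)) return false;               // unknown or revoked app
  return Array.isArray(token.scopes) && token.scopes.includes(action);
}
```

Revoking one misbehaving app is then a single deletion from the approved pool, which is exactly the human-in-the-loop property argued for above.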

A while back I read that one of the reasons the site's source was closed was to keep the voting logic private. Chances are a bunch of other things (maybe the spam detection/handling systems, and maybe the moderation tools) would be similarly classified as (for want of a better word) "sensitive". Well... it could be very, very interesting to split the codebase in half, with all the sensitive stuff in one corner, and the remainder of the codebase capable of running locally without it. Maybe you've already considered this and consider it nonviable :(

(NB. The reason for the Rube Goldberg OAuth2 architecture I suggested was precisely to make it that much harder for people to register throwaway/bot accounts etc, keeping in mind the voting logic thing. Couldn't figure out how to reword the above two paragraphs to resolve the info dependency :) so put this here instead. lol)

IIUC, there is a very small pool of enthusiasts around the K programming language (https://kx.com) who privately study the source code to Kdb, and I understand that Arthur Whitney et al. are actually open to newcomers taking an interest in the project. (I'm sure I saw a comment mentioning as much a while back, possibly by geocar, but I don't seem to be able to find it; I might've read it elsewhere.) At some point I hope to go down that rabbit hole, which looks genuinely interesting, but learning that it *was* actually accessible left a bit of an impression, given that (http://archive.vector.org.uk/art10501320):

> Whitney demonstrated his “research K interpreter” at the Iverson College meeting[5] in Cambridge in 2011. We had visitors from Microsoft Research. The performance was impressive as always. The tiny language, mostly familiar-looking to the APL, J and q programmers participating, must have impressed the visitors. Perhaps conscious that with the occasional wrong result from an expression, the interpreter could be mistaken for a post-doctoral project, Whitney commented brightly, “Well, we sold ten million dollars of K3 and a hundred million of K4, so I guess we’ll sell a billion dollars worth of this.”

> Someone asked about the code base. “Currently it’s 247 lines of C.” Some expressions of incredulity. Whitney displayed the source, divided between five text files so each would fit entirely on his monitor. “Hate scrolling,” he mumbled.

The above, combined with the project's niche accessibility (I understand that one does have to be genuinely interested) speaks to me of business and engineering focal points in perfect calibration and harmony with each other. (Hnng. :P) It also gives evidence that it is in fact possible in the first place to achieve and sustain this kind of calibration in contexts and situations that make use of niche technology. The (meta-?)question (to me), then, is how the same sort of niche accessibility context might be applicable/applied to news.arc (et al) to varying degrees.

I also wanted to incidentally mention that I've long had mixed feelings about using GitHub (in a sharing capacity). There's a bit of "but I don't have anything interesting enough!" in there, but it's mostly hesitancy about dumping stuff underneath The Giant Spotlight Of Doom, Inc™. This isn't GitHub's fault; it's more that the consumer end of open source has something of a demand/non-empathy problem toward the higher end of the long tail, that GitHub is the biggest platform, that everything not-GitHub correlates with an exponential drop-off in visibility... and the intrinsic lack of any nice middle ground in the resulting mess. Applying these considerations to HN's crazy popularity, I think that using GitHub could be great for "pop culture accessibility", if you will - but at great potential cost to behind-the-scenes logistics (maintaining CI that merges closed-source modules, explaining Arc for the 5,287th time, etc.), and a notably increased maintenance burden. While there are a variety of "alternative" Git hosting platforms, I think LuaJIT's approach is the most interesting (https://luajit.org/download.html): the Git repo is only available over HTTPS as a firewall convenience, and there's no browser-accessible repository viewer, so everyone needs Git installed. You could also, for example, require everyone's Git client to provide an HTTPS client certificate. Such speedbumps would enable a scalable form of "proof of interest" (there's also the fact that everyone has to go learn Arc once they do finally get at the code...) and naturally rate-limit this new dimension to something hopefully maintainable.

Lastly, and as a bit of a continuation of the last point, regarding the question of licensing ("oh no"), I'd actually be in favor of something custom. Both because all the opportunity (read: $$$ xD, but also Bay Area) is available to properly figure that option out, and also because virtually all existing licenses (and their wide use) bring a sort of reification to the table that makes politicking/taking sides ("ooooh! XYZ does ABC! that puts it in the same group as DEF!") all too easy, which could threaten HN's pragmatic ethos somewhat. A unique license has no reference points and thus less potential impact on cohesion, and it also makes solving niche cases extremely easy; you just work backwards from whatever end state you want (which you've almost certainly had years to think about, or at least subconsciously gather context for). One concrete example (working with the limited context I have) might be disallowing mirroring or copying of the code, which would close the loop on the Git setup described above.

I'm not sure what bits of the above are interesting and what bits ultimately amount to excited bikeshedding :) - but I definitely want to convey that I, and probably (many) others, would be genuinely interested in helping out. Also, I realize stuff does actually get fixed, like * vs \*, which I am very appreciative of :D.


👤 notyourday
I am honestly shocked HN is not using a CDN.

👤 jzellis
That's what happens when you run your entire site off a VM running as a WhatsApp chat bot.

👤 throwawaysea
It also looks like the buttons to add comments changed in appearance. Maybe there is some kind of site upgrade going on?