Are there any search engines which exclude, or at least penalize, results from, say, the top 500 websites?
There are so many cool things I remember reading on the web like 10-20 years ago that still exist that are so buried now on Google they might as well not exist. Nowadays searching any topic seems to always lead you to CNN and Microsoft and Facebook and other huge corporations. Search results are just becoming more sanitized and beige and meaningless every day.
I haven't had great results with it myself, though.
For other engines you can use https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/ with this script https://greasyfork.org/en/scripts/1682-google-hit-hider-by-d...
I do a random city + "documentary" as the search term; it's taken me all over the world and I've seen some very strange things.
One of my favourites was Aarhus, which had a Danish-language rapper proclaiming he was putting Aarhus on the global map (I had never heard of the city of Aarhus). https://youtu.be/WSZxuzgImLo They dis Copenhagen a lot too, lol. You get a more intimate YouTube experience with the low-view videos.
But I've also seen amazing religious rituals, and an excellent documentary on Karachi.
Because it's on Observable HQ, you can fork it and figure out your own algorithm for biasing the randomness.
Specifically this quote: "The way to win here is to build the search engine all the hackers use. A search engine whose users consisted of the top 10,000 hackers and no one else would be in a very powerful position despite its small size, just as Google was when it was that search engine."
There has been a lot of grumbling about the state of search these days. Maybe the time is nigh for a new search engine?
Before I knew about DEVONagent I would often just search multiple engines and sources trying to find something particular (e.g. a particular PDF) or unique results.
Basically the idea is to have people band together and "recommend" links. You then do your normal spidering of the websites to create a search engine (or even just call through to a number of existing search engines). However, the ranking of the results is based on the weighting of the recommendations.
It's essentially a whitelist based on your own personal bubble. Of course this won't work in general, because you will always get SEO creeps spamming recommendations. However, it gives you tools for working around those creeps. The average person probably won't be able to manage it, but power users probably will.
By not trying to solve the problem for everybody, it becomes easier to solve the problem for some people. Or at least that's my thesis :-) I might be wrong.
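A toy sketch of that recommendation-weighted ranking idea, assuming you already have results from a normal crawl or upstream engine (all names, weights, and data shapes here are hypothetical, not from the comment above):

    # Boost domains endorsed by people in your personal bubble.
    from collections import defaultdict
    from urllib.parse import urlparse

    def rerank(results, recommendations, trust):
        """results: list of (url, engine_score) pairs from a crawl or upstream engine.
        recommendations: dict of user -> set of domains they recommend.
        trust: dict of user -> weight you personally assign that user."""
        boost = defaultdict(float)
        for user, domains in recommendations.items():
            for domain in domains:
                boost[domain] += trust.get(user, 0.0)
        # Final score mixes the engine's own relevance with your network's endorsements.
        return sorted(results,
                      key=lambda r: r[1] * (1.0 + boost[urlparse(r[0]).netloc]),
                      reverse=True)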
If you're generous, you can make your index available to other P2P instances.
I wanted to run an API search the other week and was blown away by how quickly I could prop up my own custom search portal (I didn't want to pay for API access to other search engines, and YaCy comes with JSON and Solr endpoints).
I ran it locally to test my crawl filters, then pushed a private instance out to Digital Ocean to turn up the heat with the crawling. The only issue I had was that the crawler would hit the max memory threshold on long crawls and the container would restart, but that was fixed by scaling up the box.
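For reference, a minimal sketch of querying a local YaCy instance's JSON endpoint, assuming the default port 8090 and the yacysearch.json route with its RSS-style response (verify the exact parameters against your own instance's API docs):

    import json
    from urllib.parse import quote
    from urllib.request import urlopen

    def yacy_search(query, host="http://localhost:8090", limit=10):
        url = f"{host}/yacysearch.json?query={quote(query)}&maximumRecords={limit}"
        with urlopen(url) as resp:
            data = json.load(resp)
        # The JSON feed mirrors an RSS channel: results sit under channels[0]["items"].
        return [(item.get("title"), item.get("link"))
                for item in data["channels"][0]["items"]]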
I share that same desire to visit the web less travelled. I want to discover interesting sites that deserve to be bookmarked because they will never show up in a search engine.
This was fire. If a topic were being discussed on the web, you could find it with this tool. Unfortunately, it did not fit the vision of the parasitic overlords who bred us to produce and consume for their benefit.
You could add a bunch of heuristics such as size, number of links etc.
Maybe even train a classifier to select the “smaller” part of the web.
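A rough sketch of what such heuristics could look like before reaching for a classifier; the thresholds are arbitrary placeholders, not recommendations:

    # Small markup and few links as a cheap proxy for the "smaller" web.
    from html.parser import HTMLParser

    class LinkCounter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = 0
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links += 1

    def looks_like_small_web(html: str, max_chars=100_000, max_links=50) -> bool:
        counter = LinkCounter()
        counter.feed(html)
        return len(html) < max_chars and counter.links < max_links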
When I type “shoes”, it would give me: links for the functional and creative history of footwear, the taxonomy of shoes, methods of construction, current and historical footwear industry data, synonyms and antonyms, related terms and professions, the dictionary definition, and similar links related to secondary meanings (such as any protective covering at the base of an object, horseshoes etc). I’d also hope for a comedy link to a biography of Cordwainer Smith.
What I actually get, which I don’t want at all: pages and pages of shoe shopping.
The various means to exclude “top X sites” are the roughest possible heuristic in that direction, and throw out the baby with the bathwater (for example, a long-established manufacturer may well have an informational online exhibit)
Google has essentially failed me in its primary mission. Bing at least has the grace to admit they are here to “connect you to brands”. And sadly, right now, every other option is an also-ran.
In practice I use DDG, directed by !bangs towards known encyclopaedic or domain-specific sources. I am certain that I’m missing out.
Discovering unknown parts and blogs on the internet is one of the enduring goals of a newsletter that I run [1], which provides a single link to an interesting article every day, usually by lesser-known authors and blogs across the internet.
[1] www.thinking-about-things.com
On a daily basis your brain uses shortcuts to get to the point. Open Firefox (of course), press ALT+B, then add a new bookmark, for instance:
Name : Stack Overflow
Location : https://stackoverflow.com/search?q=%s
Tags :
Keyword : st
Now if you want to search for "javascript timer", just type: st javascript timer
Add "%s" to all your favourite websites' search URLs.
Example : https://en.wikipedia.org/wiki/%s
To discover new website content, apply the same trick to Hacker News, Reddit, or any RSS river.
Voila, bye bye GG.
See this example of filtering Stack Overflow out of search results:
https://www.google.com/search?q=loop+over+array+items+in+jav...
Popularity, relevance, age, type, etc. Type could be blog, forum, site, or video. Or like it used to be.
Then I use Violentmonkey, an open-source JS/CSS injector, to inject this user script: https://greasyfork.org/nl/scripts/1682-google-hit-hider-by-d... This will block specific domains for you on Google, Yahoo, DuckDuckGo, etc. I use it to block domains like Quora, SourceForge, CNET and Softonic.
The nice thing about this script is that you can permaban domains you know are junk, and they will be completely removed, or you can ban a domain like a commercial website. When you ban something it is not removed from Google or DuckDuckGo; it only shows the title in light gray. I'm currently experimenting with this on some major webstores, so I can't really say yet whether it will help you, but it can be a good start.
(edit) I saw some people ask why this isn't possible natively. Google allowed you to block domains and websites a few years ago, but they removed that feature. DuckDuckGo never allowed it, because that would mean keeping a cookie that remembers your preferences, and that is against their principles.
Implementing this properly involves having your own search index. And that's pretty expensive.
Edit: Maybe it’s the first million results? I use it to find obscure things sometimes.
A search engine that returns results whose pages weigh in under a certain size.
From the comments it seems most of the "cruft" filling up Google results is newer web apps, generally JS-heavy and advertising-heavy, etc.
If you had a filter for pages with (e.g.) < ABC KB of JS and < XYZ external links (excluding img tags), I feel like there'd be a good chance that the "old" web and the "unknown" web would bubble to the top.
There would be plenty of false positives (particularly "small" forums built with modern JS apps, etc.), but it could be one of many filtering tools for achieving better search results.
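An illustrative version of that filter; the thresholds below stand in for the "ABC"/"XYZ" placeholders above, and only inline <script> bodies are counted (external bundles would need fetching):

    import re

    def passes_lightweight_filter(html, page_domain,
                                  max_script_kb=50, max_external_links=20):
        # Total bytes of inline <script> content.
        script_bytes = sum(len(m) for m in
                           re.findall(r"<script\b.*?</script>", html, flags=re.S | re.I))
        # Links pointing off the page's own domain.
        external_links = [href for href in re.findall(r'href="(https?://[^"]+)"', html)
                          if page_domain not in href]
        return (script_bytes / 1024 < max_script_kb
                and len(external_links) < max_external_links)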
Now there are a few extensions that do that, but obviously they only hide the results from each page, so sometimes you will see pages with 2 results, if any at all.
--
[search term] -google -youtube -facebook ... -top100website and it should work.
I found a list of the top 1m alexa websites here:
http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
An add-on with that list should do the job.
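A sketch of turning that CSV (rank,domain per line) into an exclusion query; note that engines cap query length, so only a handful of -site: terms fit in practice, and the domains baked into the output depend entirely on the file you feed it:

    import csv

    def exclusion_query(term, csv_path="top-1m.csv", n=20):
        with open(csv_path, newline="") as f:
            top = [row[1] for _, row in zip(range(n), csv.reader(f))]
        return term + " " + " ".join(f"-site:{domain}" for domain in top)

    # e.g. exclusion_query("vintage synthesizers")
    # -> "vintage synthesizers -site:google.com -site:youtube.com ..."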
It’s custom google search results, but since it’s excluding .com, .net, .org etc then you probably won’t see any of the large sites there.
It’s also interesting to see which sites have been built in the last few years, as the new gTLDS haven’t been around that long.
I was intrigued by how dorkweed’s approach has changed over time, as described in a reply to a sibling comment.
As general search results get watered down, and Rotten Tomatoes inflation maybe trends towards reflecting company interests rather than my level of interest, maybe it's worth re-evaluating the vetting avenues we take as users.
Here’s mine: for games and shows I’ve recently found myself using quantity of fan-videos on YouTube as a proxy for quality. So far it’s been a decent means to find cult followings for something I otherwise wouldn’t necessarily hear about.
Obviously this approach has its flaws - and is subject to financial perversions to an extent - but I figure if enough people genuinely want to pay tribute to a work, it might be worth checking out.
I find that the YouTube sidebar is useful for me to find interesting music. I have eclectic tastes, and Google seems to have figured that out. I don't mind.
I suspect that it would be possible to create a custom API query to Google that would have a "blacklist."
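A hypothetical sketch of that idea using Google's Custom Search JSON API; YOUR_KEY and YOUR_CX are placeholders, and the "blacklist" is nothing fancier than -site: terms appended to the query:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    BLACKLIST = ["pinterest.com", "quora.com", "w3schools.com"]  # illustrative

    def search(query, key="YOUR_KEY", cx="YOUR_CX"):
        q = query + " " + " ".join(f"-site:{d}" for d in BLACKLIST)
        url = ("https://www.googleapis.com/customsearch/v1?"
               + urlencode({"key": key, "cx": cx, "q": q}))
        with urlopen(url) as resp:
            return json.load(resp).get("items", [])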
I think they try to do exactly what you ask, but I haven't used them extensively, so I don't know how good they are.
Seeing folks mention the NOT operator (-). It's quite powerful! For example, you can do:
intext:"Powered by intercom" -site:intercom.com will find all the sites that use the Intercom widget
or ~blog bread baking -inurl:checkout -intext:checkout will find bread blogs (or similar) without commercial intent
I put together a list of the two dozen or so most useful templates of this, for folks who are interested: https://www.alec.fyi/dorking-how-to-find-anything-on-the-int...
Each session would have an updatable list of sites that are favored, whitelisted or blacklisted for a particular class of search.
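One way such per-session lists might be modelled (purely illustrative; the class and scoring are not from the comment above):

    from dataclasses import dataclass, field

    @dataclass
    class SearchProfile:
        topic: str                       # class of search, e.g. "programming" or "recipes"
        favored: set = field(default_factory=set)
        whitelist: set = field(default_factory=set)
        blacklist: set = field(default_factory=set)

        def adjust(self, domain: str, base_score: float) -> float:
            # Drop blacklisted domains, boost whitelisted/favored ones.
            if domain in self.blacklist:
                return 0.0
            if domain in self.whitelist or domain in self.favored:
                return base_score * 2.0
            return base_score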
Anyone reading this, please post if you find any
Google says they need our information to "improve our experience", but we can't tell them what to omit ...
It's kinda new, so it excludes kinda everything :-) But you can make it work better :-)
https://ipfs.io/ipfs/QmQ1Vong13MDNxixDyUdjniqqEj8sjuNEBYMyhQ...
The problem I see on DDG & Google is having to scroll 5-10 pages of utter SEO nonsense.
"Do you have a question about ____? Many ask about _______. ____ is a common question, here the are we some answer. [sic]".
Just utter garbage pages.
It used to be just recipes or medical questions, but now it feels like it affects almost every general query.
If anyone noticed, during the first couple of days of covid, Google search was free from large media results; the algorithm reverted back to how it was years ago, and it was such a breath of fresh air. Of course they fixed the algo immediately, and it went back to only showing curated media results... there was an anonymous Google employee who posted about why this occurred.
Especially removing Quora, Pinterest, and aggregation/reposting/SEO/affiliate blogs.
And all "product" images with a white background. Only show real photographs.
Just a thought experiment, curious what others think.
Can Google allow us to exclude certain sites? I was surprised to see W3Schools showing up above the official documentation for pandas and numpy. This is simply ridiculous!!
A search engine that shows only URLs that are not indexed by Google / another one that gives you the websites with lower PageRank.
"If you don’t read the newspaper you are uninformed; if you do read the newspaper you are misinformed." Mark Twain
> Ask HN: Is there a search engine which excludes the world's biggest websites?
> Discovering unknown paths of the web seems almost impossible with google et al..
> Are there any search engines which exclude or at least penalize results from, say, the top 500 websites?
Let's back up a little and then try for an answer:
Some points:
(1) For some qualitative exclamation, there is a LOT of content on the Internet.
(2) There are, in principle and no doubt significantly so far in practice, a LOT of searches people want to do. The search in the OP is an example.
(3) Much like in an old library card catalog subject index, the most popular search engines are based heavily on key words and then whatever else, e.g., page rank, date, etc.
So: (1) -- (3) represent some challenges so far not very well met: In particular, we can't expect that the key words, etc. of (3) will do very well on all or nearly all the searches in (2) for much of the content in (1).
And the search in the OP is an example of a challenge so far not well met.
Moreover, the search in the OP is no doubt just one of many searches with challenges so far not well met.
Long ago, Dad had a friend who worked at Battelle, and IIRC they did a review of information retrieval that concluded that keyword search covers only a fraction, maybe ballpark only 1/3rd, of the need for effective searching. And the search in the OP is an example of what is not covered because the library card catalog did not index size of the book or Web site! :-)!
Seeing this situation, my rough, ballpark estimate has been that the currently popular Internet search engines do well on only about 1/3rd of the content on the Internet, searches people want to do, and results they want to find.
So, I decided to see what could be done for the other 2/3rds.
I started with some not very well known or appreciated advanced pure math; it looks like useless, generalized abstract nonsense, but if you calm down, stare at it, think about it, ..., you can see a path to a solution. Although I never thought about the search in the OP until now, in principle the solution should work for that search too. The math is a bit abstract and general, which in practice can translate to doing well on something as varied as the 2/3rds.
Then for the computing, I did some original applied math research.
Using TeX, I wrote it all up with theorems and proofs.
So, the project is to be a Web site. While in my career I've been programming for decades, this was my first Web site. I selected Windows and .NET, and typed in 100,000 lines of text with 24,000 statements in Visual Basic .NET (apparently equivalent in semantics to C# but with syntactic sugar I prefer).
The software appears to run as intended and well enough for significant production.
I was slowed down by one interruption after another, none related to the work.
But, roughly, ballpark, the Web site should be good, or by a lot the best so far, for the 2/3rds and in particular for the search in the OP.
So, for
> Ask HN: Is there a search engine which excludes the world's biggest websites?
there's one coded and running and on the way to going live!
I intend to announce an alpha test here at HN.
- health search that excludes sellers, wellness and snake-oil websites
- news search that excludes conspiracy theories, magical thinking, political operatives, and paid bloggers
- image search by similarity, similarity to an uploaded picture/s, words, or description
- media and warez search engine that excludes link-spam and malware sites
- complex queries search because none of them do it well
- anonymity
- shopping search that kicks out disreputable sellers and phony store-fronts
- mapping like OSM but fast, practical with an app, and detail-accessible
- monetize using affiliate links that don't affect ranking
- semi-curated results (domain reputation-ranked voting)
- related pages
- inbound/outbound links search
- archive.org integration &| history page caching
- documented query syntax
- query within results
- quick query history results navigation
- keyword alerts
- keyboard shortcuts that always work