HACKER Q&A
📣 dreamlessfate

Why does Machine Learning use these assumptions?


I'm trying to learn the nuts-and-bolts of Machine Learning, but the more I dig in, the stupider the assumptions seem to be.

The thought keeps popping into my head over and over again: Just because it works doesn't mean it works well, or that it works in a smart, optimal, or even ontologically truthful/useful/realistic way.

There is no shortage of videos, papers, tutorials, and blogs that explain the math & models in detail. But there are exceptionally few sources that explain the underlying assumptions...and why these are useful (or not useful) assumptions.

Why does Machine Learning use these assumptions?

--

1) Sigmoid Functions & Binary Classification - I understand the math and the probabilities.

But rather: WHY would you want to classify using a binary system of classification? WHY would you want to reduce everything to yes/no? Or more accurately, a probability of yes/no? Or even chained probabilities of yes/no?

Is it just due to being stuck in the paradigm of programming on machines built on yes/no logic gates? Trying to perform these very complex tasks (identification, generation, whatever) on CPUs and software that are, in and of themselves, built on binary distinction?

If all you have is a binary logic gate (hammer), then everything looks like a cumulative distribution function (nail)?

Isn't this a totally moronic approach? Or is it just the best we got? I feel like it's stuck back in the signal processing days of trying to "fit" and force a signal to achieve a certain pattern without realizing the what or why. Turning knobs on an oscilloscope.

--

2) Layers - Why are artificial neural networks set up as "layers"?

Isn't this more like an assembly line? Doesn't that seem dumb? Why would someone believe, in their heart of hearts, that intelligence or pattern recognition, or any kind of thinking, happens procedurally?

Doesn't this (again) seem like a very moronic approach? One that is based on the procedural nature of the machine itself? And the programmer themself? And not the nature of thinking, intelligence, or even complex analysis / complex systems?

Complex systems with lots of variables and lots of dimensions don't actually interact like this. They don't have "layers", this is a totally made-up assumption that has major implications on the entire field.

Was this just chosen out of necessity, because software and programs need a beginning and an end? An input and an output? Or is there some really convincing argument, one that speaks to the philosophy and ontology of these decisions?


  👤 jstx1 Accepted Answer ✓
1. It's built into the task, not into the solution. How do you classify without binary outputs/probabilities? If you want to know if a picture contains someone's face or it doesn't, you need a binary result purely based on the task itself regardless of your approach for solving it. In multiclass classification, you extend your sigmoid to a softmax but it still boils down to a distribution of probabilities. Or in multilabel classification, you essentially perform binary classification for all classes at the same time. Like... what else could you do? In a hypothetical alternative, if you scan your face to unlock your phone, how does the underlying vision model give or deny access to the phone without producing a binary result at some point along the way?
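To make the binary/multiclass/multilabel distinction concrete, here is a minimal sketch in pure Python (toy scores, not any particular model; the face-unlock framing is just an illustration):

```python
import math

def sigmoid(z):
    """Squash a real-valued score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def softmax(scores):
    """Generalize the sigmoid to a probability distribution over many classes."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Binary: "does this picture contain a face?" -- one score, one probability.
p_face = sigmoid(2.0)

# Multiclass: one score per class; the probabilities sum to 1.
probs = softmax([2.0, 1.0, 0.1])

# Multilabel: independent yes/no questions, one sigmoid per label.
labels = [sigmoid(s) for s in (2.0, -1.0, 0.5)]
```

Note that a two-class softmax over scores `[z, 0]` gives exactly `sigmoid(z)`, which is why the binary case is the elemental one rather than a special hack.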

2. To have nonlinearities between the layers, and to have layers with varying complexity and structure. In practice it works much better than all the alternatives that we've tried.
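A toy illustration of why the nonlinearity between layers matters: without it, two stacked linear layers collapse into a single linear map, so the stack adds no expressive power (the 2x2 weights below are made up purely for demonstration):

```python
def linear(W, b, x):
    """One fully connected layer: y = W @ x + b."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    """Elementwise nonlinearity inserted between layers."""
    return [max(0.0, u) for u in v]

W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, 0.0]
W2, b2 = [[2.0, 1.0]], [0.0]

def two_linear(x):
    # Two linear layers with nothing in between:
    # equivalent to the single linear map W2 @ W1 = [[2.5, 0.0]].
    return linear(W2, b2, linear(W1, b1, x))

def mlp(x):
    # The same two layers with a ReLU in between: no longer linear.
    return linear(W2, b2, relu(linear(W1, b1, x)))
```

For the input `[1.0, -1.0]`, the ReLU clips the hidden value `-1.5` to `0`, so the two functions disagree; that disagreement is the entire point of stacking layers.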

These things are explained very well even at a beginner level, and you aren't really questioning them deeply or proposing any alternatives; instead you seem to be getting into philosophy.


👤 systoll
> Sigmoid Functions & Binary Classification

Sometimes you want to do binary classification.

This isn’t all there is.

> Layers - Why are artificial neural networks setup as "layers"?

They aren’t all set up like that. Layers are a simplification of the effects of the limited speed at which synapses fire versus the distances between them.

Spiking neural networks model the propagation more precisely, and have some promise. Their biggest issue is that it’s hard to get training data into an appropriate format for them, and that once you do, they don’t really seem to do better.


👤 stealthcat
Just finish learning everything first and come back here. Others answered your concerns pretty well.

It is common to be like an angry freshman or sophomore who yells about why he should have to take all these difficult classes, then 5-10 years later appreciates whatever he learned.


👤 PaulHoule
(1) From the viewpoint of ontology, binary classes are the most elemental and keep you away from the open pits of modelling that people are always walking into.

For instance, where should a modern book on digital photography be filed in the library? Should it go in the 000's with computing? In the 700's under art? Or in the 600's with technology (an application of optics, electronics, etc.)?

All these answers are right but they are also wrong. (Like why isn't computing filed with electronics in the 600's or math in the 500's?)

If you're physically filing the book in a place in the library you have to assign it one category out of all of those because it can only be in one place.

If you're trying to do anything else and get correct answers, it is simultaneously true that a book is about how to use computer software (say Lightroom), about how to make art, and about the optical performance of lenses, but not about Asian languages, nuclear energy, or how to play casino games.

There are certain cases where classes are mutually exclusive and in those cases it is usually right to model those as a constraint rather than start with multi-class classification which usually winds up like

https://en.wikipedia.org/wiki/Celestial_Emporium_of_Benevole...

unless there is something structurally special about the problem.

If you approach the classification of books as asking the question "Is this book about this topic?" the problem becomes tractable, because the reason a particular book that could be filed in multiple places ends up in one particular place is "because some librarian decided to file it there". You could never train an algorithm to reproduce the arbitrary decisions that different librarians make arbitrarily; you'd always have a high error rate. If the question is "Is this book about how to use computer software?" then you can get close to 100% accuracy. To attempt the first is to decide to fail at the very beginning.

Also often the math works for binary classification and doesn't work for other kinds. See

https://plato.stanford.edu/entries/arrows-theorem/

for one kind of problem which is trivial for two choices and intractable for more than two.
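A concrete instance of that two-versus-many gap: pairwise majority voting is always decisive and transitive for two options, but with three it can produce a cycle (the classic Condorcet example, sketched here with made-up ballots):

```python
# Three voters, three candidates. Each ballot ranks the candidates
# from most to least preferred.
ballots = [["A", "B", "C"],
           ["B", "C", "A"],
           ["C", "A", "B"]]

def prefers(ballot, x, y):
    """True if this ballot ranks x above y."""
    return ballot.index(x) < ballot.index(y)

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(prefers(b, x, y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majorities: A beats B, B beats C, and C beats A,
# so no consistent overall ranking exists.
cycle = (majority_prefers("A", "B")
         and majority_prefers("B", "C")
         and majority_prefers("C", "A"))
```

With only two candidates, majority rule can never cycle; adding a third is what opens the door to Arrow-style impossibility.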

(Funny there are two kinds of people... the ones who know what the knobs of the oscilloscope do and the ones that don't!)

(2) The visual cortex of your brain has layers much like the layers of a convolutional network.

An anti-aircraft missile system has layers of processing from raw signals, from which are discovered momentary blips, which are assembled into tracks, etc.

Matter is made of quarks and electrons; the quarks form protons and neutrons, which form nuclei, which are the cores of atoms, which form molecules, etc.

Insofar as we are not dying at 30, freezing in the dark, frightened of the howling of wolves, and believing everything happens because some god wants it to happen, it's because we see a hierarchical structure in the universe.

If you had a million neurons all wired to each other, it would be an intractable problem to solve for the coefficients because there are so many of them, not to mention so many symmetries that would let you trade these ones over here for those ones over there, which would make it hard to get started. The wiring diagram for your brain is not like the wiring diagram for a TV set, but your genetic code does wire certain populations of neurons in certain areas to other populations in other areas, and then the neurons fine-tune their coefficients based on your experience.
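A back-of-the-envelope sketch of that parameter count (toy numbers, not a claim about any real brain or network; the 10-layer split is an arbitrary choice for illustration):

```python
n = 1_000_000

# All-to-all wiring: one weight per ordered pair of neurons,
# on the order of 10^12 parameters.
dense_weights = n * (n - 1)

# The same million neurons split into 10 layers of 100,000 each,
# with weights only between adjacent layers: 9 * 10^10 parameters.
layers = [100_000] * 10
layered_weights = sum(a * b for a, b in zip(layers, layers[1:]))

# Roughly an 11x reduction even at this width; narrower layers,
# convolutions, and sparsity shrink it far more.
ratio = dense_weights / layered_weights
```

The layering also breaks the symmetry the answer mentions: weights between layer k and layer k+1 can no longer be freely traded against weights anywhere else in the graph.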

And don't dismiss "programs need a beginning and end" and "input and output" as incidental, they're absolutely essential to writing a program.