AI is an umbrella term covering a variety of interrelated, but distinct, subfields. Some of the most common fields you will encounter within the broader field of artificial intelligence include:
Machine learning (ML): A subset of AI in which algorithms are trained on data sets to become machine learning models capable of performing specific tasks (see the minimal sketch after this list).
Deep learning: A subset of ML in which artificial neural networks (ANNs), loosely modeled on the human brain, are used to perform more complex reasoning tasks without human intervention.
Natural Language Processing (NLP): A subset of computer science, AI, linguistics, and ML focused on creating software capable of interpreting human communication.
Robotics: A subset of AI, computer science, and electrical engineering focused on creating robots capable of learning and performing complex tasks in real-world environments.
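To make the ML definition above concrete, here's a minimal sketch (assuming scikit-learn is available; the toy exam data set is invented for illustration): a learning algorithm is fit to a data set, and the result is a model that performs one specific task.

```python
# A learning algorithm + a data set -> a trained model for one specific task.
from sklearn.linear_model import LogisticRegression

# Invented toy data set: [hours studied, hours slept] -> passed exam (1) or not (0).
X = [[0.5, 4], [1.0, 5], [2.0, 7], [3.0, 8], [4.0, 6], [5.0, 8]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)  # training: algorithm + data -> model
print(model.predict([[2.5, 7]]))        # inference: the model performs its task
```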
Machine learning is a branch of cybernetics.
AI is a branch of machine learning that hopes one day to use circular causality to replace the need for humans in the loop.
AI safety is the branch of AI that worries that AI won't align with humans in terms of incentives and motivations. Its main concern is circular causality in AI.
I assure you the companies posting them couldn't tell you the difference either.
Could this be one of those cases where someone in HR or recruiting just regurgitated keywords onto a job description?
Could be wrong tho since I'm neither an AI nor an ML engineer lol. I'm just a layman who puts down the pipes for the AI and ML people to set up their stuff.
For example, SVMs, Logistic Regression, and Decision Trees are ML but not AI.
In my opinion, neither field should be defined as a subset of the other. Instead, picture a Venn diagram with three distinct regions, sketched below with ( … ) bounding ML and { … } bounding AI:

( ML { ML & AI ) AI }
ML / Machine Learning we can define as: the field of using learning techniques to train machines. It's worth pointing out that learning is not a necessary technique to reach even Star Trek levels of technology or beyond. Learning is a shortcut. Any task that can be learned through ML could in theory be manually specified, either by providing a complete set of step-by-step instructions to perform the procedure procedurally (where applicable) or by using more mathematically expressive paradigms like Functional Programming or even Constraint Programming (https://en.wikipedia.org/wiki/Constraint_programming). I just want to highlight that Imperative (Procedural) Programming, Functional Programming, and Constraint (Declarative) Programming are three other paradigms that can literally do anything ML can do, but applying a computer to a task using the old paradigms alone usually takes probably 10x-100x longer. A toy hand-specified example follows below.
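As an illustration of the "manually specified instead of learned" point (a minimal sketch; the keyword lists and scoring rule are hypothetical, not anything from a real system): a sentiment labeler written as explicit hand-coded rules, doing a job that is usually handed to an ML classifier.

```python
# Hand-specified sentiment labeling: every rule below was written by a human.
# No training data is involved; an ML model would instead learn weights for
# words like these from labeled examples.

POSITIVE = {"great", "love", "excellent", "amazing"}  # hypothetical keyword lists
NEGATIVE = {"awful", "hate", "terrible", "broken"}

def label_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

print(label_sentiment("I love this, it works great"))        # -> positive
print(label_sentiment("the update is terrible and broken"))  # -> negative
```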
AI / Artificial Intelligence: This is pretty subjective, but the way I think we could frame it is: "anything that a talented & well-educated, but curmudgeonly & unimaginative, computer programmer living in the year 1980 would have said a computer will never be able to do". Like "a computer will never write poetry!", "a computer will never be able to design a beautiful painting!", etc.
The reason I think this framing is interesting is that it highlights the existence of the parts of the Venn diagram that are "ML but not AI" as well as "AI but not ML". Here are some interesting examples:
"ML but not AI": One example would be using ML techniques to create a valuable product/experience which is a static object (e.g. a Word Doc or a video) rather than any piece of intelligent software. AI might be involved along the way, but it's only as a "compiler" step, and the end result is something that no longer contains AI in it. As an example, take this hypothetical startup idea of what if someone wants to make a startup using ML to cheaply generate amazing reference books about any topic. Like the startup would use ML-based document information retrieval algorithms to near-instantly generate a helpful reference book, with each passage having a mandatory URL citation, which the startup would use with a web scraper to fact-check every passage in every book. And they'd print the books. You could imagine how this startup might fall into the "ML but not AI" camp, because ML is a critical part of their daily business but they are not trying to make anything "alive" or in any way intelligent - they just sell books and happen to use learning. Additionally, I think we should even consider the process of evolution (both in nature and in genetic algorithms) to be an example of ML which created the entire plant and animal kingdoms and I think it's undeniably an application of learning. Evolution uses variation and natural selection to perform trial and error and the results when compounded over billions of years have given us a biosphere so amazingly rich and complex that the complexity of it was/it the primary argument for Intelligent Design. People said there must be a Creator because there's no way a world this incredibly complex and detailed could have come from an anarchic process without any grand captain at the helm. The honest fact is that biological evolution in nature is fricking amazing and in the interest of giving credit where credit is due, I think we should count Evolution as being a learning algorithm, and I surely think we would call it a learning technique if humans had been the ones to invent it.
"AI but not ML": This is a really important one to highlight! There is a very common misconception that learning must be used to solve key problems that once required human-like intelligence, with a famous example being the 1996 defeat of chess champion Garry Kasparov by IBM's Deep Blue. Deep Blue used no learning, and the game of chess was solved/won through only other approaches, including a mix of manually coded clever algorithms as well as the brute force application of a very large computer. Beating the world champion at chess (which happened in 1996) meets the standard of being something that a gifted but cantankerous programmer in 1980 would have thought that computers would never be able to do, until proven wrong.
I think we in the field need to put some deep focus into Douglas Lenat's visionary Cyc project, and others like it, which seek to formalize human scientific and cultural knowledge for use in Automated Reasoning Systems. As we've seen with GPT-3's strong tendency to hallucinate, learning techniques are very hard to understand and to make safe. I think we should be investing much more in Automated Reasoning techniques in every market vertical where we can, because they use the magic of fast computers and powerful math to achieve the goals of AI, but in a way that is fully hand-crafted and interpretable, with a sensible data architecture (something like the Dewey Decimal System, giving you a way to find the right part of the database when you're looking up a specific fact, its confidence interval, and the citations providing evidence for it). A sketch of what such a fact store might look like follows below.
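To show what that interpretable, citation-carrying data architecture might look like, here is a toy fact store (a minimal sketch; the schema, facts, and confidence numbers are invented for illustration and are not Cyc's actual data model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str
    confidence: float   # how sure we are, 0.0-1.0
    citation: str       # URL providing evidence

# Hand-curated knowledge base: every entry is inspectable and cited.
KB = [
    Fact("Deep Blue", "defeated", "Garry Kasparov", 0.99,
         "https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)"),
    Fact("Deep Blue", "built_by", "IBM", 0.99,
         "https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)"),
]

def lookup(subject: str, relation: str) -> list[Fact]:
    """Unlike a neural model, every answer comes back with its evidence attached."""
    return [f for f in KB if f.subject == subject and f.relation == relation]

for fact in lookup("Deep Blue", "defeated"):
    print(f"{fact.obj} (confidence {fact.confidence}, source: {fact.citation})")
```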
cyc.com