HACKER Q&A
📣 doobadeedoo

As a non-technical person, how can I tell if an AI company is innovative?


I ask because I am looking at new job opportunities (non-technical role) and have come across some AI companies that seem to be doing interesting things. But as a non-technical person, how can I tell if their technology/application is innovative versus trivial? Is it a red flag if they are just using OpenAI or DeepMind in the background? How can I even tell?

I ask because ideally I'd join a company doing something novel and harder to replicate. But some AI applications seem to have a ton of competitors in their space already (for example, "AI product description"). Thanks ahead of time.


  👤 credit_guy Accepted Answer ✓
It's difficult. 99.9% of AI claims are bull. Lots of people get fooled by AI claims. If you are technical, then you can see through the bull, but if you are not, it's unlikely.

Here are some telltales, though. Are they saying AI or ML? If AI, why AI and not ML? Is it just because it sounds cooler? If so, run.

Here's my personal observation, which a lot of people in the AI space will contest: there are some fantastic applications of AI/ML, but they require deep domain knowledge/insight/expertise. Ask yourself whether the person pitching you the job has that very deep domain knowledge. If so, why are they pitching the job as an AI opportunity and not an XYZ opportunity? To give you an example: Boom Supersonic. Most likely they use plenty of ML, but they'll never try to hire someone as an ML company; they'll hire as an aviation company.

So, here's a way to see through the fog: ask them if they'd be in the same business if AI were not a thing. AI should be just a force multiplier; AI by itself can't produce shit. If they hesitate to answer the question, just run.


👤 version_five
Do you mean innovative as in anchored in some novel machine learning technology, or innovative in product or business model?

You could look at it as a two-by-two of innovative business vs. innovative AI. Something with neither is probably what you want to avoid: yet-another-whatever that claims to use "AI" for something that doesn't really change anything. If the business model is cool (which you should be able to have an opinion on as a layperson), then it could still be interesting even if the AI is just a nice-to-have. And if they're doing cool AI stuff, as in research, like OpenAI, but the business model is not super clear, that could be cool too. In that case, expect leadership with PhDs and publication records, and a company track record of publishing or funding research, not just commercialization activities.

I'd also add, maybe controversially, that anyone using "AI" to make some kind of basic prediction (say for personalization, demand planning, medical decisions, really most tabular data) is not going to have a business that lives or dies based on AI. Not saying it's BS, but it's undifferentiated from standard models for those things, so saying they use AI is more marketing.

Look for computer vision, NLP, reinforcement learning, or similar if you want AI to be a differentiator. And be careful with anyone doing "smart search" type stuff; it's BS without strong evidence to the contrary.


👤 pedalpete
Focus on the business, not the technology.

First off, anybody can say AI. Anybody can build AI. Consider the "AI" claim about as valuable as "cloud-based". If not today, then in the very near future, AI will be everywhere.

Now that you've removed the reliance on valuing the AI, what do you think of the company? Do they have an innovative approach to the market? Good business model? Is it something you think the world needs?

If they are just saying "we're the next big AI company", then you'd need to understand why they think they are, and how they are going to market that. What is the market they are going for?

We've got a bunch of AI, but we don't even mention it in our deck. Sure, it's a big part of what makes our product work, but our customers aren't buying AI, they're buying the benefits of the product.

Ignore the buzzwords and focus on the benefits.


👤 rejectfinite
"As a non doctor, how can I tell that my operation is going to be successful beforehand?"

You just can't.

I mean, tech people can't either. "AI" is the most obfuscated shit; so is "cloud".


👤 z9znz
My limited but actual experience says that you can disbelieve all business AI claims and promises. You’ll only be wrong very occasionally, if ever.

AI is the latest blockchain. That’s not to say it isn’t being used effectively and innovatively in a few cases. However, many companies dreamed of somehow using AI, did the corresponding marketing, and then never actually delivered useful AI-based solutions.

I know some of them believed in the goal, but reality (and deadlines) got priority.

It turns out that you can pay humans behind a curtain and get AI-like results. Go figure.


👤 cl42
I've spent a lot of time helping companies launch ML-driven products. I have a pretty in-depth guide on ML products and warnings signs here: https://phaseai.com/resources/how-to-build-ml-products

In short: avoid any company where the AI is "magic"... i.e., the founders/CTO/technical team talk about how they'll "figure it out" later, or how it's just a matter of some "R&D" time... This happens SO MUCH.

Also make sure the company actually has heavy duty AI talent. It blows my mind how often AI companies launch without any AI talent... Then they spend years searching for talent to solve what turns out to be an intractable problem.

At the very least, you should be able to sit down with someone technical who can walk you through the business problem, then show you how the business problem can be solved with data/AI/ML/etc. and tell you what steps need to be taken to make this all work.


👤 serjester
A couple of heuristics from someone who works in the space.

Is the problem scoped well? The deliverable should be solving a specific problem. ML to detect credit card fraud? Yes. Synthesizing research papers to get actionable insights? No. How would you even define a research insight?

Does a large amount of training data exist for the problem at hand? The goal is to train a computer to do a task, so you need concrete inputs and outputs.

Is it a “wicked” problem? Dynamic problems where attempting to make predictions can impact the outcome are notoriously difficult. People problems like “predicting turnover” often fall under this category.

Personally, I’d take some time to learn about ML evaluation metrics and ask how their model is performing. If the model already exists, it needs to be compared to a baseline that is tied to a business deliverable, i.e. how accurate does it need to be to be useful to customers?
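
Roughly, that baseline comparison looks something like the sketch below. It uses synthetic data and scikit-learn; the fraud framing, the model choice, and the numeric targets are placeholders I made up for illustration, not a recipe.

    # Toy illustration of "compare the model to a baseline tied to a
    # business target". Data, model, and targets are made up.
    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score, recall_score
    from sklearn.model_selection import train_test_split

    # Imbalanced data: roughly 5% "fraud" cases.
    X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    for name, clf in [("baseline", baseline), ("model", model)]:
        pred = clf.predict(X_test)
        print(f"{name}: accuracy={accuracy_score(y_test, pred):.2f}, "
              f"recall={recall_score(y_test, pred):.2f}")

    # The do-nothing baseline scores ~95% accuracy while catching 0% of the
    # fraud, which is why the metric has to be tied to the business need
    # (e.g. "catch at least 80% of fraud"), not just "high accuracy".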

To the data scientists on HN: I realize there are a lot of exceptions to what I just suggested. These are not meant to be absolute laws, but they should act as useful rules of thumb.


👤 heavyset_go
There are plenty of trivial applications of tech that are successful.

I think assessing fundamentals might be a better use of time. A company could have the most interesting application of AI on the planet, but it wouldn't matter if they fold or can't pay you in 6+ months' time.


👤 mikewarot
One commonly used form of AI is supervised learning, in which a large number of samples with tags (such as names of items in a photo) are used to train the AI. That sample set should be divided randomly into three sets: training, testing, and validation. Great care needs to be put into managing these sets, making sure they stay separate and are used appropriately.

The reason I bring this up: if a company fails to manage these three sets properly, they will end up with an AI that is 100% accurate in demos and fails miserably in the real world.
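
For the non-technical reader, the split itself is only a few lines of code; the hard part is the discipline of never letting the sets mix. A rough sketch (the 70/15/15 proportions and the synthetic dataset are placeholders, not something any particular company uses):

    # Sketch of the three-way split described above. The proportions and
    # the synthetic dataset are illustrative, not prescriptive.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=42)  # features, labels

    # 70% training, 30% held out for validation and testing.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.30, random_state=42)
    # Split the held-out 30% in half: 15% validation, 15% testing.
    X_val, X_test, y_val, y_test = train_test_split(
        X_hold, y_hold, test_size=0.50, random_state=42)

    # Fit only on the training set, tune against the validation set, and
    # evaluate on the test set exactly once at the end. Letting test data
    # leak into training or tuning is what produces a model that is 100%
    # accurate in demos and useless in production.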


👤 youssefabdelm
I guess it's kind of clear from their websites whether they're actually building models vs. using something off the shelf. OpenAI and DeepMind both regularly publish papers and technical blog posts. If you don't see that on the company's website, that's one indication.

I'd just check: are they publishing technical stuff you barely understand? Green flag. Is the general idea exciting? Another green flag.

It's a faulty heuristic, because more than a few good ones will fall through the cracks, but oh well.


👤 anm89
You can get this right almost every single time by always assuming it's bullshit.

👤 faeriechangling
It’s difficult to outsource the task of evaluating competence, because how are you competent to evaluate whether somebody can evaluate somebody’s competence?

👤 JoeyBananas
If you aren't going to be working on the business's AI tech, why do you care?

👤 aristofun
The more innovative a company is, the higher the chance it will fail.

👤 fuzzfactor
You're going to need a dog named Toto.

. . . if I only had a brain . . .


👤 faangiq
Just say no.