I ask because, ideally, I'd join a company doing something novel and harder to replicate. But some AI applications seem to have a ton of competitors in their space already (for example, "AI product description"). Thanks ahead of time.
Here are some telltales, though. Are they saying AI or ML? If AI, why AI and not ML? If it's just because it sounds cooler, then run.
Here's my personal observation, which a lot of people in the AI space will contest: there are some fantastic applications of AI/ML, but they require deep domain knowledge/insight/expertise. Ask yourself whether the person pitching you the job has that very deep domain knowledge. If so, why are they pitching the job as an AI opportunity and not an XYZ opportunity? To give you an example: Boom Supersonic. Most likely they use plenty of ML, but they'll never try to hire you as an ML company; they'll hire you as an aviation company.
So, here's a way to see through the fog: ask them if they'd be in the same business if AI were not a thing. AI should just be a force multiplier; AI by itself can't produce shit. If they hesitate to answer the question, just run.
You could look at the four possibilities of innovative business vs. innovative AI. Something with neither is probably what you want to avoid: yet-another-whatever that claims to use "AI" for something that doesn't really change anything. If the business model is cool (which you should be able to have an opinion on as a layperson), then it could still be interesting even if the AI is just a nice-to-have. And if they're doing cool AI work, as in research, like OpenAI, but the business model is not super clear, that could be cool too. In that case, expect leadership with PhDs and publication records, and a company track record of publishing or funding research, not just commercialization activities.
I'd also add, maybe controversially, that anyone using "AI" to make some kind of basic prediction (say for personalization, demand planning, medical decisions, really most tabular data) is not going to have a business that lives or dies based on AI. Not saying it's BS, but it's undifferentiated from the standard models for those things, so saying they use AI is more marketing than substance.
Look for computer vision, NLP, reinforcement learning, or similar if you want AI to be a differentiator. And be careful with anyone doing "smart search" type stuff; it's BS without strong evidence to the contrary.
First off, anybody can say AI. Anybody can build AI. Consider the AI statement about as valuable as "cloud based". If not today, in the very near future, AI will be everywhere.
Now that you've removed the reliance on valuing the AI, what do you think of the company? Do they have an innovative approach to the market? Good business model? Is it something you think the world needs?
If they are just saying "we're the next big AI company," then you'd need to understand why they think they are and how they are going to market that. What is the market they are going for?
We've got a bunch of AI, but we don't even mention it in our deck. Sure, it's a big part of what makes our product work, but our customers aren't buying AI, they're buying the benefits of the product.
Ignore the buzzwords and focus on the benefits.
You just can't.
I mean, tech people can't either. "AI" is the most obfuscated shit; so is "cloud".
AI is the latest blockchain. That’s not to say it isn’t being used effectively and innovatively in a few cases. However, many companies dreamed of somehow using AI, did the corresponding marketing, and then never actually delivered useful AI based solutions.
I know some of them believed in the goal, but reality (and deadlines) got priority.
It turns out that you can pay humans behind a curtain and get AI-like results. Go figure.
In short: avoid any company where the AI is "magic"... i.e., the founders/CTO/technical team talk about how they'll "figure it out" later, or how it's just a matter of some "R&D" time... This happens SO MUCH.
Also make sure the company actually has heavy duty AI talent. It blows my mind how often AI companies launch without any AI talent... Then they spend years searching for talent to solve what turns out to be an intractable problem.
At the very least, you should be able to sit down with someone technical who can walk you through the business problem, then show you how the business problem can be solved with data/AI/ML/etc. and tell you what steps need to be taken to make this all work.
Is the problem scoped well? The deliverable should solve a specific problem. ML to detect credit card fraud? Yes. Synthesizing research papers to get actionable insights? No. (How would you even define a "research insight"?)
Does a large amount of training data exist for the problem at hand? The goal is to train a computer to do a task; you need concrete inputs and outputs.
Is it a “wicked” problem? Dynamic problems where attempting to make predictions can impact the outcome are notoriously difficult. People problems like “predicting turnover” often fall under this category.
Personally, I'd take some time to learn about ML evaluation metrics and ask how their model is performing. If the model already exists, it needs to be compared to a baseline that is tied to a business deliverable, i.e., how accurate does it need to be to be useful to customers? (See the sketch after this list.)
To the data scientists on HN, I realize there’s a lot of exceptions to what I just suggested. These are not meant to be absolute laws but should act as a useful rule of thumb.
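To make the baseline point concrete, here's a toy sketch (made-up data and default settings, not any company's actual pipeline): compare the model they're selling against a no-skill baseline on a metric that reflects the business problem, e.g. average precision on a fraud-style imbalanced dataset where raw accuracy is misleading.

    # Toy sketch: does the "AI" actually beat a no-skill baseline on a metric
    # the business cares about? Synthetic data stands in for "credit card fraud".
    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import average_precision_score
    from sklearn.model_selection import train_test_split

    # Imbalanced synthetic data: roughly 2% positive ("fraud") class.
    X, y = make_classification(n_samples=20_000, n_features=20,
                               weights=[0.98, 0.02], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    # Baseline: predicts the class prior, i.e. no learning at all.
    baseline = DummyClassifier(strategy="prior").fit(X_train, y_train)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    for name, clf in [("baseline", baseline), ("model", model)]:
        scores = clf.predict_proba(X_test)[:, 1]
        print(name, "average precision:",
              round(average_precision_score(y_test, scores), 3))

    # If "model" barely beats "baseline" on the metric tied to the business
    # deliverable, the AI isn't the differentiator, whatever the pitch says.

If they can't show you that kind of comparison for their own problem, that tells you something.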
I think assessing fundamentals might be a better use of time. A company could have the most interesting application of AI on the planet, but it wouldn't matter if they fold or can't pay you in 6+ months' time.
Reason I brought this up --> if a company fails to manage these three sets (training, validation, and test data) properly, they will end up with an AI that is 100% accurate in demos and fails miserably in the real world.
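Concretely, a toy sketch of what I mean by managing those sets (made-up data, nothing from a real product): tune on the validation set and touch the held-out test set exactly once, so the number you quote isn't just a demo artifact.

    # Toy sketch of a train / validation / test discipline with synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5_000, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5_000) > 0).astype(int)

    # 60% train, 20% validation, 20% test -- test stays untouched until the end.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_hold, y_hold, test_size=0.5, random_state=0)

    # Model selection happens on the validation set only.
    best_model, best_val = None, -1.0
    for C in (0.01, 0.1, 1.0, 10.0):
        model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
        val_acc = model.score(X_val, y_val)
        if val_acc > best_val:
            best_model, best_val = model, val_acc

    # The single, final look at the test set -- the number you'd quote to a customer.
    print("validation accuracy:", round(best_val, 3))
    print("test accuracy:      ", round(best_model.score(X_test, y_test), 3))

If the test set has leaked into training or tuning, the demo number is meaningless.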
I'd just check: are they publishing technical stuff you barely understand? Green flag. Is the general idea exciting? Another green flag.
It's a faulty heuristic because more than a few good ones will fall through the cracks but oh well
. . . if I only had a brain . . .