Put a description in the text field, hit search and you get back IPC (International Patent Classification) terms.
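I have no idea how they actually built it, but a minimal sketch of that kind of search can be as simple as TF-IDF similarity between the query and IPC definition texts. Everything below is illustrative: the ipc_definitions dict is a tiny, abbreviated stand-in for the real IPC scheme, and search_ipc is a made-up helper name.

    # Minimal sketch: match a free-text description to IPC codes by cosine
    # similarity against (abbreviated, illustrative) IPC definition texts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    ipc_definitions = {
        "A61K": "Preparations for medical, dental or toilet purposes",
        "B62M": "Rider propulsion of wheeled vehicles or sledges",
        "G06N": "Computing arrangements based on specific computational models",
        "H04L": "Transmission of digital information",
    }

    codes = list(ipc_definitions)
    vectorizer = TfidfVectorizer(stop_words="english")
    definition_matrix = vectorizer.fit_transform(ipc_definitions.values())

    def search_ipc(description, top_k=3):
        """Return the top_k IPC codes whose definitions best match the query."""
        query = vectorizer.transform([description])
        scores = cosine_similarity(query, definition_matrix).ravel()
        ranked = scores.argsort()[::-1][:top_k]
        return [(codes[i], round(float(scores[i]), 3)) for i in ranked]

    print(search_ipc("computational models for machine learning"))

A real system would use embeddings or a classifier trained over the full hierarchy, but the shape of the problem is the same.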
Obviously a lot of work on trading. One group of market makers had an RL agent built that would trade small sizes. The value here wasn't in profitability; it was that it freed up the traders to serve larger tickets while still catering to a broader swath of the market.
Another group dealt with underwriting commercial loans. In that process, borrowers submit hundreds of documents that need to be classified (this is an architect's license, this is a pest inspection report, ...). The data was too varied for simple heuristics, but fairly straightforward NLP eliminated a good chunk of the work.
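The "fairly straightforward NLP" really can be plain. Purely as a hedged sketch (the document snippets and labels below are invented; a real system would train on OCR'd text from thousands of labeled documents), a bag-of-words classifier already gets you surprisingly far:

    # Sketch of a document-type classifier for loan-package paperwork.
    # The training snippets and labels are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "the state board of architects hereby licenses the individual named below",
        "wood destroying pests and organisms inspection report",
        "certificate of liability insurance general aggregate coverage",
        "corporate income tax return for the fiscal year",
    ]
    train_labels = [
        "architect_license",
        "pest_inspection_report",
        "insurance_certificate",
        "tax_return",
    ]

    classifier = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    classifier.fit(train_texts, train_labels)

    print(classifier.predict(["annual wood destroying pest inspection findings"]))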
If you extrapolate, a lot of the problems I've seen "the average" (not Google) company solve with AI are about optimizing an internal process, as opposed to making a new product offering. So "huge successes" these were not, but they were successful. I think that for the average company, expecting "AI" to deliver a "huge success" in terms of business impact is often a sign of weak product thinking.
Training simulated robots instead of iterating with real hardware.
First of all, what do you consider AI? Deep learning? Hierarchical Bayesian models? Linear regression? I would call all of these "AI", simply because I have seen each of them labeled as such at one point or another.
Does your business revolve around computer vision tasks ("how can I track people/cars/things in this image/video?")? You almost definitely need "AI" even if you have almost no historical data.
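Part of why you can get going with almost no historical data is that you usually start from a pretrained detector rather than training from scratch; tracking is then an association layer (SORT-style ID matching) on top of per-frame detections. A rough sketch of the detection step with torchvision ("frame.jpg" and the 0.5 threshold are placeholders):

    # Rough sketch: detect people/cars/things in one frame with a
    # COCO-pretrained detector. "frame.jpg" and 0.5 are placeholders.
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import convert_image_dtype

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # older torchvision: pretrained=True
    model.eval()

    image = convert_image_dtype(read_image("frame.jpg"), torch.float)  # CxHxW in [0, 1]
    with torch.no_grad():
        detections = model([image])[0]  # dict with "boxes", "labels", "scores"

    keep = detections["scores"] > 0.5
    print(detections["boxes"][keep], detections["labels"][keep])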
Does your business revolve around optimizing decisions based on large amounts of historical and relevant data (e.g. user interactions on your website, etc.)? You _probably_ need "AI" (but it depends).
There are an infinite number of ways to incompetently apply AI and upsell what you're doing to people who don't know the difference. It happens all the time and is a natural consequence of any hype cycle. But underneath all of the hype there is obvious substance (just maybe not at your company).
Concrete and very public examples of competent AI:
* Tesla's "auto"-pilot (computer vision/SLAM)
* Recommender systems that you see everywhere (Facebook Watch/News Feed, YouTube, Netflix, etc.) -- can be reinforcement learning based, factorization based, etc. (a toy factorization sketch follows this list)
* Real-time bidding markets in advertising (DSPs like Google's DV360, Trade Desk, Xandr run optimized bidding "AI")
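To make the "factorization based" bullet above concrete: here's a toy matrix-factorization recommender that learns user and item embeddings whose dot product approximates observed ratings, then recommends the highest-scoring unseen items. The ratings matrix is invented, and none of this reflects how YouTube or Netflix actually do it.

    # Toy matrix factorization trained with SGD. 0 means "not rated".
    import numpy as np

    rng = np.random.default_rng(0)
    ratings = np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [0, 1, 5, 4],
    ], dtype=float)

    n_users, n_items = ratings.shape
    k, lr, reg = 2, 0.01, 0.02              # latent dim, learning rate, L2 penalty
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))

    observed = [(u, i) for u in range(n_users) for i in range(n_items) if ratings[u, i] > 0]
    for _ in range(2000):
        for u, i in observed:
            err = ratings[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])

    scores = U @ V.T
    scores[ratings > 0] = -np.inf            # don't re-recommend items already rated
    print("top unseen item per user:", scores.argmax(axis=1))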
As of today, we have worked on several problems across diverse sectors and industries: banking, telecoms, energy, transportation, communication, storage, cosmetics, recruiting, and industrial supplies.
The problems we worked on: customer churn reduction, next best offer (and recommendation problems), predictive maintenance, forecasting, sound event detection, etc.
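For flavor, the churn-type problems in that list mostly reduce to ordinary supervised learning on tabular customer data; the hard parts are defining "churn", building leakage-free features, and deciding what the business does with the scores. A minimal sketch on made-up features:

    # Minimal churn-model sketch on synthetic tabular features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.integers(1, 60, n),          # tenure in months
        rng.poisson(3, n),               # support tickets last quarter
        rng.normal(50, 20, n),           # monthly spend
    ])
    # Synthetic label: short tenure and many tickets make churn more likely.
    logits = -0.05 * X[:, 0] + 0.4 * X[:, 1] - 0.01 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))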
One of the most important things we did in our process was to thoroughly understand the problem before we even began "AIying". Many times, organizations come to us because they don't want to be left behind and want to "use AI". Failing to truly understand what the job to be done is, what the problem is, and what the end state is, is one of the major reasons for failed projects, underwhelming results, and bitterness.
Second: failing to include the domain experts and the people who will actually be using a data product leads to people suffering from the "AI project" instead of enjoying its fruits. This is horrible. What we started doing right off the bat when talking with executives is stating our position clearly: who will use this? Your marketing/sales/engineers? Good, get them to the table so they can talk about their workflow and process, what they need help with, what they want, and what would make them happy, in their own words. We share horror stories from the times we only talked with upper management and only met the domain experts when the project was "done", developed in a vacuum, and we had to have difficult conversations with disgruntled people who now had a solution they were never involved with dropped on their lap.
Third: it's important to have a cadence of regular meetings to monitor the progress of the project (and again, with the people involved around the table). They'll tell you what matters and what does not. They'll help tremendously with the metrics and with what counts as an acceptable outcome, because by that point you know it's obviously not the F1 score.
Sometimes it's useful to ask negated questions to make sure something actually matters. For example, we were talking with someone in the energy sector about mitigating an event which, when it happens, leads to a huge loss (in the nine figures). The person said they wanted to predict the event 48 hours in advance. Now, you can take this at face value, but you can also ask "Why 48? Why not 49?". It seems arbitrary. One way we asked the question was "At what point does alerting you become useless because you can no longer do anything about it?". The person said "Even if you alert me 2 minutes beforehand, I can do things to mitigate the damage, and I'll take it".
This dramatically changes the direction of the project. The same goes when talking with people who want 80% accuracy or some such figure. Why 80%? Why not 82%? What went into coming up with this number? What's the actual objective?
That was part of the effort of reshaping the upstream side of our consulting process over the years: coming up with better questions to scope the project, including the actual stakeholders, understanding the problem better, and reducing the chances of projects dying or disappointing. Downstream, we built our machine learning platform [collaborative notebooks; train, track, deploy, and monitor models; etc.] so it does not take us an eternity to deliver, because the longer the "time to value", the lower the likelihood there will be any value at all, and the higher the chance the project will fail.
It is a learning curve and there are bumps in the road, especially in the beginning. If you have only had failures, there's an "imbalance" in your data and it's hard to build a model of how to get through it; still, it is paramount to examine these failures and systematically, ruthlessly eliminate their root causes.
One project is not really enough to extract all the learnings; it took us many years and many projects to see the bigger picture of the tooling we wanted for ourselves.