I'm a CS junior (about to become a senior), and in our last year, people choose a capstone project that they work on for the entire year. For some years now (since about 2017), deep learning projects have completely dominated the other projects, both in numbers and in the awards they go on to win. I understand the principles behind deep learning, and even find it cool, but its whole inscrutable nature is a problem for me.
But that isn't really a problem. In most cases, the projects you do as an undergrad don't affect your professional life in any way after you get your first job. Very, very few undergrad projects turn into real projects that anyone uses after the student graduates.
So don't worry too much about it. Ten years ago, every senior project was an app. Twenty years ago, every senior project was a website. It's just a sign of the times and doesn't matter in the long run.
Spot on.
On top of that, we have 'AI' models getting fooled by adversarial attacks that involve changing as little as a single pixel. As long as these issues go untackled or under-researched, we'll pretty much be heading into another AI winter, and the hype cycle will pass through its trough of disillusionment. The inability to inspect the black-box nature of such deep-learning systems is why highly regulated industries where lives are at stake, such as healthcare and other safety-critical fields, label deep-learning solutions as unsafe.
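To make that concrete, here's a toy numpy sketch. It is not the actual one-pixel attack from the literature (which searches for the pixel using differential evolution against a trained CNN); it just shows the underlying fragility with a made-up linear scorer, where nudging the single coordinate the model is most sensitive to flips the prediction:

```python
import numpy as np

# Toy linear "model": score = w . x + b, predict positive if score > 0.
# Weights are invented for illustration; real attacks target trained networks.
w = np.array([0.1, -0.2, 3.0, 0.05])
b = -0.2

x = np.array([1.0, 1.0, 0.1, 1.0])   # a "clean" input, classified positive
print(w @ x + b > 0)                  # True

# Perturb only the coordinate the model is most sensitive to (largest |w|).
i = np.argmax(np.abs(w))
x_adv = x.copy()
x_adv[i] -= 0.2                       # a small change to one "pixel"
print(w @ x_adv + b > 0)              # False: prediction flipped
```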
Sure, all you see right now are other students and startups 'applying' deep learning everywhere, but they are hardly advancing the field the way DeepMind and OpenAI are. In terms of learning, it's a good thing to pick up as a college student, but creating an AI startup now requires using Google's, Amazon's, or Microsoft's data centers for training, which is clearly not sustainable anyway.
Security-related projects and research are always where it's at.
To understand why neural networks work, you will have to understand how a whole host of smaller, simpler ML models work in excruciating detail. Multiple linear regression, logistic regression, etc. What they mean, how they work, what's really going on "inside", what the underlying probabilistic model represents, etc.
Neural networks are great because they take basically all of those smaller ideas and concatenate them into a super flexible statistical machine. It's really cool to see the "in->out", but it's even cooler once you have a good grasp of what's going on in the intermediate steps.
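Here's a minimal numpy sketch of that point: a logistic regression is a single "neuron", and stacking a layer of them under another one gives you a small MLP. The weights are random, purely to show the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=4)                 # one input with 4 features

# Logistic regression: one linear map followed by a sigmoid.
w, b = rng.normal(size=4), 0.0
p = sigmoid(w @ x + b)                 # P(y = 1 | x)

# A 2-layer MLP is the same idea stacked: each hidden unit is
# "a logistic regression" over the inputs, and the output unit is
# a logistic regression over the hidden units.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
w2, b2 = rng.normal(size=3), 0.0
h = sigmoid(W1 @ x + b1)
p_mlp = sigmoid(w2 @ h + b2)
print(p, p_mlp)
```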
In my experience, almost everyone working with neural networks doesn't have those details down. This goes 100-fold for non-research roles. They learned the Keras API and are happy stacking layers, and as long as the output looks nice they push to production. For most cases empirical validation is probably enough, because NNs can usually achieve some incremental improvement just by virtue of the fact that they have so many damn degrees of freedom. But to get a well-performing, well-founded model, you need to know the ins and outs.
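For the record, the workflow being described looks roughly like this (standard Keras; the shapes and data here are made up):

```python
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20)          # stand-in data for illustration
y = (X.sum(axis=1) > 10).astype(int)

# Stack layers until the output looks nice.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)  # if accuracy looks fine, ship it
print(model.evaluate(X, y, verbose=0))
```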
Imo deep learning is so popular because it "works". For a classification problem, if you try a linear baseline and a deep learning model, and you do a reasonable job of hyperparameter tuning and experimental design, it's likely the deep model will outperform the simpler one. This holds true across many problem spaces.
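A sketch of that experiment in scikit-learn (synthetic data and near-default hyperparameters, so treat the exact numbers as illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic problem; a real comparison needs real data and proper tuning.
X, y = make_classification(n_samples=5000, n_features=30,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
deep = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(64, 64),
                                   max_iter=500, random_state=0))

for name, clf in [("linear baseline", linear), ("small MLP", deep)]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```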
I think the issue is that modern DL frameworks make it a little too easy to get pretty good performance on new problems. Other techniques generally require more background knowledge to make reasonable modeling assumptions, and still frequently perform worse than a naively applied DL approach.
I think DL will remain, in practice and education, a very popular tool. But it is essential to learn traditional statistical inference and other background to appropriately contextualize DL models so it isn't just some form of black magic.
So, pick something in "AI" that is the hotness of the moment, learn what you can, do your best, and then get on with life and career.
It's not ideal - but if it wasn't DL it would be another topic / application.
It's not completely inscrutable as to why it works; follow the thread of deep learning theory research by starting from the names here: http://www.vision.jhu.edu/tutorials/CVPR17-Tutorial-Math-Dee...
Given all that you've said, a capstone that tries to dent the "inscrutable nature" of deep learning might be an interesting choice.
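One cheap angle on that, as a sketch: gradient-style saliency, i.e. measuring how much the output moves when each input feature is nudged. Below it's done with finite differences on a random-weight toy net, purely to show the shape of the idea, not a research-grade method:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-layer net with random weights, standing in for a trained model.
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0

def f(x):
    return sigmoid(w2 @ np.tanh(W1 @ x + b1) + b2)

x = rng.normal(size=5)
eps = 1e-5

# Finite-difference saliency: sensitivity of the output to each feature.
saliency = np.array([
    (f(x + eps * np.eye(5)[i]) - f(x - eps * np.eye(5)[i])) / (2 * eps)
    for i in range(5)
])
print(saliency)   # larger |value| = feature the model leans on more at x
```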
I will point out that the real win is with new data sources, and simple linear regressions may still work there.
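And the "simple" option really is simple; ordinary least squares on a fresh dataset is a couple of lines of numpy (random data below, standing in for a new source):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a new data source: 200 samples, 3 features, linear signal.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

A = np.column_stack([X, np.ones(len(X))])   # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # recovers roughly [1.5, -2.0, 0.5, ~0]
```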
Make a Rust front-end for the GNU Compiler Collection.
Emulate something.
Write a hypervisor.