https://news.ycombinator.com/item?id=36195527
Hacker's Guide to LLMs by Jeremy from Fast.ai - https://www.youtube.com/watch?v=jkrNMKz9pWU
State of GPT by Karpathy - https://www.youtube.com/watch?v=bZQun8Y4L2A
LLMs by 3b1b - https://www.youtube.com/watch?v=LPZh9BOjkQs
Visualizing transformers by 3b1b - https://www.youtube.com/watch?v=KJtZARuO3JY
How ChatGPT trained - https://www.youtube.com/watch?v=VPRSBzXzavo
AI in a nutshell - https://www.youtube.com/watch?v=2IK3DFHRFfw
How Carlini uses LLMs - https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
For staying updated:
X/Twitter & Bluesky. Go and follow people that work at OpenAI, Anthropic, Google DeepMind, and xAI.
Podcasts: No Priors, Generally Intelligent, Dwarkesh Patel, Sequoia's "Training Data"
For a general audience - https://www.ai-supremacy.com/
From inside the AI labs - https://aligned.substack.com/
https://milesbrundage.substack.com/
For SWEs - https://artificialintelligencemadesimple.substack.com/
It's better to do this in batches, say once every 6-12 months or so.
Started off here: https://www.youtube.com/watch?v=hZWgEPOVnuM&list=PL6e-Bu0cqf...
Ended up here: https://www.youtube.com/watch?v=_5XYLA2HLmo&list=PL6e-Bu0cqf...
After that, I've had some recent projects I love messing around with, such as a better license plate detection API than what currently exists for U.K. plates. Once I completed those two courses, I had a good enough baseline to work from that I could encounter a repository and google around whenever I needed to learn something new.
Short, simple, not painful, etc. I don't have the advanced mathematical background (nor familiarity with American mathematical notation) that I'd need to digest the MIT course set, so this learning path has been the best for me. I'm no expert whatsoever, though.
https://arxiv.org/pdf/2404.17625 (pdf)
https://news.ycombinator.com/item?id=40408880 (llama3 implementation)
https://news.ycombinator.com/item?id=40417568 (my comment on llama3 with breadcrumbs)
Admittedly, I'm way behind on how this translates to software on the newest video cards. Part of that is that I don't like the emphasis on GPUs. We're only seeing the SIMD side of deep learning with large matrices and tensors. But there are at least a dozen machine learning approaches that are being neglected, notably genetic algorithms. That means we're perhaps focused too much on implementations and not on core algorithms. It would be like trying to study physics without change of coordinates, Lorentz transformations, or calculus. Lots of trees but no forest.
To get back to rapid application development in machine learning, I'd like to see a 1000+ core, 1+ GHz CPU with 16+ GB of core-local RAM for under $1000, so that we don't have to manually transpile our algorithms to GPU code. That should have arrived around 2010, but the mobile bubble derailed desktop computing. Today it should be more like 10,000+ cores for that price at current transistor counts, increasing by a factor of about 100 each decade by what's left of Moore's law.
We also need better languages. Something like a hybrid of Erlang and Go with always-on auto-parallelization to run our human-readable but embarrassingly parallel code.
Short of that, there might be an opportunity to write a transpiler that converts C-style imperative or functional code to existing GPU code like CUDA (MIMD -> SIMD). Julia is the only language I know of even trying to do this.
Those are the areas where real work is needed to democratize AI, that SWEs like us may never be able to work on while we're too busy making rent. And the big players like OpenAI and Nvidia have no incentive to pursue them and disrupt themselves.
Maybe someone can find a challenging profit where I only see disillusionment, and finally deliver UBI or at least stuff like 3D printed robots that can deliver the resources we need outside of a rigged economy.
* Matt Berman on X / YT
* AI-summarized AI news digest: https://buttondown.com/ainews by swyx
* https://codingwithintelligence.com/about by Rick Lamers
Then I manually follow up to learn more about the specific topics/news I'm interested in.
Ollama Course – Build AI Apps Locally https://youtu.be/GWB9ApTPTv4?feature=shared
As an aside, does anyone have any ideas about this: there should be an app like an 'auto-RAG' that scrapes RSS feeds and URLs, in addition to ingesting docs, text and content in the normal RAG way. Then you could build AI chat-enabled knowledge resources around specific subjects. Autogenerated summaries and dashboards would provide useful overviews.
Perhaps this already exists?
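Even if it does, the ingestion half is small enough to sketch yourself. Below is a minimal, hypothetical version of that "auto-RAG" feed ingester, assuming feedparser for RSS/Atom and sentence-transformers for embeddings; the function names (ingest_feed, search) and the feed URL are made up for illustration.

    # Hypothetical "auto-RAG" feed ingester: pull RSS entries, embed them, and
    # answer queries by retrieving the most similar entries. Assumes feedparser
    # and sentence-transformers are installed; ingest_feed/search are made-up names.
    import feedparser
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    store = []  # (text, embedding) pairs; a real app would persist these

    def ingest_feed(url):
        """Fetch an RSS/Atom feed and embed each entry's title + summary."""
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = f"{entry.get('title', '')}\n{entry.get('summary', '')}"
            emb = model.encode(text, normalize_embeddings=True)
            store.append((text, emb))

    def search(query, k=3):
        """Return the k stored entries closest to the query (cosine similarity)."""
        q = model.encode(query, normalize_embeddings=True)
        ranked = sorted(store, key=lambda item: -float(np.dot(item[1], q)))
        return [text for text, _ in ranked[:k]]

    ingest_feed("https://hnrss.org/frontpage")  # any feed URL works here
    for hit in search("large language models"):
        print(hit[:120])

The chat layer is then just a matter of prepending search(question) to whatever prompt you send to a local or hosted model, and the summaries and dashboards would be periodic batch jobs over the same store.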
For me personally, I prefer to work backwards and then forwards. What I mean by that is that I want to understand the basics and fundamentals first. So I'm slowly trying to bone up on my statistics, probability, and information theory, and have targeted machine learning books that also take a fundamentals-first approach. There's no end to books in this realm for neural networks, machine learning, etc., so it's hard to recommend beyond what I've just picked, and I'm just getting started anyway.
If you can get your employer to pay for it, MIT xPRO has courses on machine learning (https://xpro.mit.edu/programs/program-v1:xPRO+MLx/ and https://xpro.mit.edu/courses/course-v1:xPRO+GenAI/). These will likely give a pretty up to date overview of the technologies.
It has a mix of concepts and hands-on code, and lots of links to the best places to learn more. I'm keeping it up to date as well, and I'm about to merge a guide on building applications, which is what it sounds like you want.
Here's my Google scholar if you want credentials https://scholar.google.com/citations?user=Oq99ddEAAAAJ&hl=en...
If you want to be an AI engineer, study this:
https://github.com/karpathy/llm.c
And build around llama.cpp
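To make that concrete, here's a minimal sketch using the llama-cpp-python bindings (my assumption; any way of linking against llama.cpp works). The GGUF path is a placeholder for whatever local model file you have.

    # Minimal sketch of driving llama.cpp directly via the llama-cpp-python bindings,
    # instead of going through Ollama. The model path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # any local GGUF file
        n_ctx=4096,       # context window size
        n_gpu_layers=-1,  # offload all layers to the GPU if one is available
    )

    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Explain KV caching in two sentences."},
        ],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])

Working at this level forces you to deal with quantization formats, context length, sampling parameters, and chat templates yourself, which is exactly the lower-level familiarity in question.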
Ollama is like cPanel for models. It's not going to familiarize you with the lower-level implementation, which is just as important as knowing the math.
That was my approach. Being aware of the internals, not just the equivalent of "git pull model", got me a job without a CS degree and a long career in software. YMMV.
Then spin up a RAG-enhanced chatbot using pgvector on your favourite subject, and keep improving it as you learn about cool techniques.
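If it helps, here's a rough sketch of the retrieval half of that pgvector setup, assuming a local Postgres with the pgvector extension enabled and sentence-transformers for embeddings; the table name, connection string, and helper names are placeholders, not from any particular tutorial.

    # Rough sketch of pgvector-backed retrieval for a RAG chatbot.
    # Assumes a table created as:
    #   CREATE EXTENSION IF NOT EXISTS vector;
    #   CREATE TABLE docs (id serial PRIMARY KEY, content text, embedding vector(384));
    import psycopg
    from pgvector.psycopg import register_vector
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings
    conn = psycopg.connect("dbname=rag user=postgres")  # placeholder DSN
    register_vector(conn)

    def add_doc(content):
        emb = model.encode(content, normalize_embeddings=True)
        conn.execute("INSERT INTO docs (content, embedding) VALUES (%s, %s)", (content, emb))
        conn.commit()

    def retrieve(question, k=5):
        """Return the k documents nearest to the question (cosine distance, <=>)."""
        q = model.encode(question, normalize_embeddings=True)
        rows = conn.execute(
            "SELECT content FROM docs ORDER BY embedding <=> %s LIMIT %s", (q, k)
        ).fetchall()
        return [r[0] for r in rows]

    # The chatbot then prepends retrieve(question) to the prompt it sends to the model.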
I use tags a lot - these ones might be more useful for you:
https://simonwillison.net/tags/prompt-engineering/ - collects notes on prompting techniques
https://simonwillison.net/tags/llms/ - everything relating to LLMs
https://simonwillison.net/tags/openai/ and https://simonwillison.net/tags/anthropic/ and https://simonwillison.net/tags/gemini/ and https://simonwillison.net/tags/llama/ and https://simonwillison.net/tags/mistral/ - I have tags for each of the major model families and vendors
Every six months or so I write something (often derived from a conference talk) that's more of a "catch up with the latest developments" post - a few of those:
- Stuff we figured out about AI in 2023 - https://simonwillison.net/2023/Dec/31/ai-in-2023/ - I will probably do one of those for 2024 next month
- Imitation Intelligence, my keynote for PyCon US 2024 - https://simonwillison.net/2024/Jul/14/pycon/ from July this year
It depends what you are looking for, honestly; "the latest things happening" is pretty vague. I'd say the place to look is probably just the blogs of OpenAI/Anthropic/Gemini, since they are the only teams with inside information and novel findings to report. Everyone else is just using the tools we are given.
We wrote a zine on system evals without jargon: https://forestfriends.tech
Eugene Yan has written extensively on it https://eugeneyan.com/writing/evals/
Hamel has as well. https://hamel.dev/blog/posts/evals/
Swyx also has a lot of stuff keeping up to date at https://www.latent.space/, including the Latent Space podcast, although tbh I haven't listened to more than one or two episodes.
- https://www.youtube.com/@aiexplained-official
- https://www.youtube.com/@DaveShap
- https://www.youtube.com/@TwoMinutePapers/videos
Then the AI Supremacy newsletter.
GitHub blog: https://github.blog/ai-and-ml/
Cursor blog: https://www.cursor.com/blog
They also have a weekly podcast.
Then find a small dataset and see if you can start getting close to some of the reported benchmark numbers with similar architectures.
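For that step, something as boring as MNIST works as a first target: published baselines for a small convnet sit around 99% test accuracy, so you have a concrete number to chase. A minimal PyTorch sketch (the dataset and architecture here are just examples of the exercise, not anything from the thread):

    # Train a small convnet on MNIST and compare against reported baselines (~99%).
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tf = transforms.ToTensor()
    train = DataLoader(datasets.MNIST("data", train=True, download=True, transform=tf),
                       batch_size=128, shuffle=True)
    test = DataLoader(datasets.MNIST("data", train=False, download=True, transform=tf),
                      batch_size=256)

    model = nn.Sequential(  # small LeNet-style convnet
        nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 5 * 5, 10),
    ).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(3):
        for x, y in train:
            x, y = x.to(device), y.to(device)
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()

    model.eval()
    with torch.no_grad():
        correct = sum((model(x.to(device)).argmax(1) == y.to(device)).sum().item()
                      for x, y in test)
    print(f"test accuracy: {correct / len(test.dataset):.4f}")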
Beyond that: there are some decent subreddits for keeping up with AI happenings, a lot of good YouTube channels (although a lot of the ones that talk about the "current, trendy" AI stuff tend to be a bit tabloid-ish), and even a couple of Facebook groups. You can also find good signal by choosing the right people to follow on Twitter/LinkedIn/Mastodon/Bluesky/etc.
https://www.reddit.com/r/artificial/
https://reddit.com/r/machineLearning/
https://www.reddit.com/r/ollama/
https://www.youtube.com/@matthew_berman
https://www.youtube.com/@TheAiGrid
https://www.youtube.com/@WesRoth
https://www.youtube.com/@DaveShap
https://www.youtube.com/c/MachineLearningStreetTalk
https://www.youtube.com/@twimlai
https://www.youtube.com/@YannicKilcher
And you can always go straight to "the source" and follow pre-prints showing up on arXiv.
For tools to make it easier to track new releases, arXiv supports subscriptions to daily digest emails, and also has RSS feeds.
https://info.arxiv.org/help/subscribe.html
https://info.arxiv.org/help/rss.html
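For example, polling a category feed with feedparser is only a few lines; the category code below (cs.CL) is just an example, and the exact feed URLs per category are documented on the RSS help page above.

    # Print the latest entries from an arXiv category RSS feed (cs.CL as an example).
    import feedparser

    feed = feedparser.parse("https://rss.arxiv.org/rss/cs.CL")
    for entry in feed.entries[:10]:
        print(entry.title)
        print(entry.link)
        print()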
There are also some bots in the Fediverse that push out links to new arXiv papers.
Is there a way to SAVE THIS THREAD on HN? 'Cos I'd love that.
Thx