If You Cannot Create It, You Don't Understand It — Even with AI
I recently built a tiny autograd engine from scratch — just a hundred lines of Python that can compute gradients through a computation graph. The kind of thing an LLM could spit out in seconds. But I didn’t let it. I wrote every line myself, hit bugs, traced gradients by hand on paper, and asked the AI to check my work — not to do it for me. And honestly? It was the most satisfying learning experience I’ve had in a while.
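To give a sense of what such an engine involves (the post does not include the code itself), here is a minimal sketch of a scalar autograd engine. The class and method names (`Value`, `backward`) are my own assumptions for illustration, in the spirit of small engines like micrograd, not the author's actual implementation:

```python
# A minimal scalar autograd engine: each Value records the nodes that
# produced it and a closure that propagates gradients backward.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            # d(out)/d(self) = 1 and d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = backward_fn
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            # Product rule: each input's gradient scales by the other input
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse
        order, visited = [], set()
        def visit(node):
            if node not in visited:
                visited.add(node)
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backward_fn()
```

For example, with `z = x * y + x`, calling `z.backward()` should give `x.grad = y + 1` and `y.grad = x`, which is exactly the kind of result you can verify by tracing gradients on paper.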
Reflections on Learning with AI
LLMs can hinder learning if you let them: Learning is fundamentally hard because it requires working your brain. An LLM can provide solutions instantly, but that bypasses the struggle where real understanding is built. Think of the LLM as a teacher sitting beside you — the teacher cannot complete the task for the student. You still have to do the thinking.
LLMs can act as a teacher, but only if you drive the conversation: They can verify your understanding, answer questions, and generate practice problems. However, current LLMs are not proactive teachers — they respond, they don’t lead. Prompting an LLM effectively is a skill in itself, and it can be especially challenging for younger students who may not yet be comfortable with self-directed learning.
LLMs are better at details than intuition: They tend to explain step-by-step solutions well but struggle to convey the big picture and the intuition behind the concepts. For example, Section 1 (The Big Picture) of these notes was my own mental model, polished by the LLM — not generated by it.
Writing code by hand is irreplaceable: Implementing something yourself is deeply satisfying and solidifies understanding in a way that reading or prompting never can. As Richard Feynman put it, “What I cannot create, I do not understand.” The bugs you hit, the edge cases you miss, the moments where it finally clicks — that’s where the real learning happens.
Attached is the study note on autograd and gradient descent, prepared together with the LLM:
