AI in My Daily Workflow: Use Cases and Reflections
The Last Human Job: Deciding What We Want
Use Cases
Understand a codebase quickly. DeepWiki provides a high-level, human-readable overview of any codebase — far more efficient than reading raw code. You can also ask questions directly about the project structure and logic.
Automate workflows and solve problems computationally. Claude Code is my go-to here.
Recently, I used Claude Code (with Agent Teams) to generate personal financial statements from bank, broker, and credit card statements. The result was surprisingly good. I spent about one working day polishing the output — something that would have taken me weeks to build from scratch.
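The core of that pipeline can be sketched as parsing statement rows and aggregating them by category. This is a minimal illustration, not the agent's actual output: the column names, sign convention, and categories are assumptions, and real bank exports need per-institution parsing first.

```python
from collections import defaultdict
from decimal import Decimal

def summarize_transactions(rows):
    """Aggregate parsed statement rows into per-category totals.

    Each row is a dict with 'category' and 'amount' (a signed string).
    Schema is illustrative, not any real bank's export format.
    """
    totals = defaultdict(Decimal)
    for row in rows:
        totals[row["category"]] += Decimal(row["amount"])
    # Net cash flow: income rows are positive, expense rows negative.
    net = sum(totals.values(), Decimal("0"))
    return dict(totals), net

rows = [
    {"category": "salary", "amount": "5000.00"},
    {"category": "rent", "amount": "-1500.00"},
    {"category": "groceries", "amount": "-320.50"},
    {"category": "groceries", "amount": "-79.50"},
]
totals, net = summarize_transactions(rows)  # net is Decimal("3100.00")
```

Using `Decimal` rather than floats avoids rounding drift when totals are reconciled against the source statements.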
Previously, I built a property listing monitor that pulls new listings from targeted developments and sends updates via Telegram. (GitHub)
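The monitor's core logic reduces to diffing freshly scraped listings against already-seen IDs and formatting the delta as a message. A minimal sketch, with invented field names and sample data; delivery itself would POST the text to the Telegram Bot API's `sendMessage` endpoint with a bot token and chat ID, which is omitted here:

```python
def find_new_listings(seen_ids, listings):
    """Return listings whose 'id' has not been seen before."""
    return [l for l in listings if l["id"] not in seen_ids]

def format_update(listings):
    """Render a plain-text Telegram message for new listings."""
    lines = [f"{l['title']} - {l['price']}" for l in listings]
    return "New listings:\n" + "\n".join(lines)

# Hypothetical scraped data; a real run would fetch and parse the
# development's listing page, then persist seen_ids between runs.
seen_ids = {"a1"}
current = [
    {"id": "a1", "title": "Unit 12-03", "price": "$520k"},
    {"id": "b2", "title": "Unit 07-11", "price": "$495k"},
]
new = find_new_listings(seen_ids, current)
message = format_update(new)
```

Persisting `seen_ids` (a file or small database) between scheduled runs is what turns this from a one-shot script into a monitor.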
Q&A — information retrieval and brainstorming. I use Gemini across the board: text, images, and video (directly on YouTube). Frontier models are converging in capability, and Gemini offers better value for money. The bundled 2TB Google Drive is a bonus I always end up using anyway.
Understand a new subject quickly. My workflow:
Find resources via Google Scholar Labs, Gemini Deep Research, or Zhihu AI Chat (知乎直答).
Upload selected resources to NotebookLM and start asking questions.
I recently used this to get up to speed on group fraud detection as part of a push to take on broader antifraud responsibilities — what might have taken weeks of reading compressed into a few focused sessions.
Build a personal knowledge base. I store personal notes in Markdown, use Claude Code to build a RAG server on top of them, and pipe in RSS feeds converted to Markdown. Everything becomes searchable and queryable.
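The retrieval core of such a setup can be sketched without any infrastructure. This toy version uses plain keyword overlap in place of embedding similarity (the actual server Claude Code built is more involved); chunking by blank lines and the sample notes are illustrative:

```python
import re

def chunk_markdown(text):
    """Split a Markdown note into paragraph-level chunks."""
    return [c.strip() for c in text.split("\n\n") if c.strip()]

def score(query, chunk):
    """Word-overlap score; a stand-in for embedding similarity."""
    q = set(re.findall(r"\w+", query.lower()))
    c = set(re.findall(r"\w+", chunk.lower()))
    return len(q & c)

def retrieve(query, notes, k=2):
    """Return the top-k chunks across all notes for a query."""
    chunks = [c for note in notes for c in chunk_markdown(note)]
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Illustrative notes; in practice these come from the Markdown files
# and the RSS feeds converted to Markdown.
notes = [
    "# Fraud\n\nGroup fraud detection uses graph features.\n\nAn aside.",
    "# Cooking\n\nRisotto needs constant stirring.",
]
top = retrieve("graph features for fraud detection", notes, k=1)
```

The retrieved chunks are then passed to the model as context, which is what makes the notes queryable rather than merely searchable.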
Reflections
AI has fundamentally changed how I learn. I spend far less time on Google Search — only when I need to trace a primary source. The structured summaries AI produces help me grasp the shape of a subject much faster than stitching together search results ever did.
AI is already very capable at coding. More importantly, coding is just a means to an end — a method for solving problems. AI lowers the barrier between having a problem and implementing a computational solution. If something can be solved programmatically, AI dramatically accelerates getting there.
One thing I’ve noticed with Claude Code specifically: memory and context have improved significantly. Previously, every new session started from scratch with no accumulated context. Now, the agent can automatically build a memory file or be explicitly prompted to maintain one across sessions. It feels more like working with a collaborator than restarting a tool.
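The pattern behind that persistence is simple enough to sketch: read an accumulated memory file at session start, append a dated note at session end. The file name and entry format below are illustrative (Claude Code's own convention is a `CLAUDE.md` file in the project), not the tool's internals:

```python
import os
import tempfile
from datetime import date
from pathlib import Path

def load_memory(path):
    """Read accumulated context from previous sessions, if any."""
    p = Path(path)
    return p.read_text() if p.exists() else ""

def append_memory(path, note):
    """Append a dated note so the next session starts with context."""
    entry = f"\n## {date.today().isoformat()}\n{note}\n"
    Path(path).write_text(load_memory(path) + entry)

# Demo in a temp directory; a real agent would keep this in the repo.
path = os.path.join(tempfile.mkdtemp(), "MEMORY.md")
append_memory(path, "Refactored the statement parser; edge case: EUR rows.")
context = load_memory(path)
```

Because the file lives alongside the project, every new session can be seeded with what previous sessions learned.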
That said, QA and critical thinking remain distinctly human strengths. AI output still needs to be sense-checked. Common sense matters — the ability to notice when something doesn’t add up, when a result feels off, when the logic is internally consistent but wrong in practice. Judgment about what’s right is still something humans do better.
Looking further ahead, once AI reliably solves coding, it gains the ability to improve itself. At that point, the barrier to completing any cognitive task effectively disappears. The human role shifts toward defining objectives — what we want the world to look like, what problems are worth solving, and what the company should be doing. AI handles the execution.
But there’s a second, subtler role: QA at the objective level. We’ll need to examine AI-proposed solutions and verify they actually align with what we intended — at least until AI develops its own objectives and becomes sophisticated enough that the misalignment isn’t obvious. That’s the part worth thinking carefully about now.
