Finals season hits like a whirlwind: multiple exams, overlapping deadlines, and labs and projects still in progress. It used to take me longer to get started than to actually study. That’s where ChatGPT became a real advantage, not because it’s a genius at everything, but because it can organize chaos into clarity. Here’s how I use it.
Instead of manually sifting through slides, labs, readings, and announcements, I offload that entire first pass to ChatGPT.
Here’s my workflow:
ChatGPT consolidates the material, organizing it into a master study guide. I then iterate on that draft, refining sections and adding structure. For example, after the guide introduced error evaluation techniques, I asked ChatGPT to insert a table comparing the different strategies: formulas, use cases, pros, and cons. It added the table immediately and integrated it without losing coherence.
Time management is a real struggle, especially when juggling multiple responsibilities. So I also ask ChatGPT to estimate how long I need to spend on each section. I give it my study schedule — preferred hours (I focus better in the early morning or late afternoon), blocks of time available — and it produces three study plans. I pick one to plug into Google Calendar and keep the others as fallback options. Something always comes up, and this gives me flexibility without losing structure.
Once the guide is ready, it's time for drills.
If past exams exist, I feed them into ChatGPT. It analyzes the patterns: which topics were tested, how questions were structured, which concepts were emphasized. Then I ask it to generate several practice exams that cover the same concepts in different forms. This saves hours and keeps me focused on what’s likely to matter, not just what’s easy to review.
No past exams? Still manageable. I either source exams from similar courses at other schools (especially if the curriculum follows a well-known textbook) and feed those in, or rely on ChatGPT to generate practice questions from the core syllabus themes. It’s not perfect, but it’s effective at simulating test conditions.
After each practice run, I review where I went wrong by feeding my answers back in and asking ChatGPT to analyze my errors: “What am I consistently missing? What strategies can keep me from making this kind of mistake again?” This turns feedback into strategy.
For memorization-heavy courses, I also have it make flashcards or concept sheets (paired definitions, theorems, properties) and then generate multiple-choice or true/false recall questions. It’s low-friction and helps me internalize content without hours of grinding through flashcard decks.
Most of my coursework in CS and statistics is conceptual and text-based, so the model naturally fits the way I study. But learning styles vary, and so do the demands of different subjects. That led me to a broader question: Where are LLMs genuinely helpful, and where do they fall short? To explore this, I asked ChatGPT to evaluate its own strengths and limitations across a range of college-level subjects.
1. What are your greatest strengths and weaknesses?
When asked about its capabilities, ChatGPT identified its strongest skill as making complexity feel manageable. It excels at breaking down dense concepts into structured explanations, turning abstract theory into intuitive reasoning, and walking through multi-step problems in a way that mirrors how a good TA might tutor one-on-one. It’s particularly strong at interpreting messy inputs—random screenshots, handwritten notes, incomplete slides—and reorganizing them into usable formats like summaries, formulas, study guides, and exam-style questions. It also supports active learning through quizzing, spaced repetition, and pattern detection.
But these strengths come with clear weaknesses. ChatGPT can produce confident but incorrect answers — especially in tasks requiring precise symbolic work, detailed computations, or multi-step algebra. Without context, it often defaults to a generic textbook explanation that may not match your professor’s framework or notation. It becomes less reliable in areas that require highly specialized expertise — advanced engineering, graduate-level math, subtle literary interpretation, and chemical mechanisms without supporting diagrams.
ChatGPT acknowledges that its reliability varies by domain. It’s strongest when the subject follows clear definitions and logical rules—like statistics, math, CS, physics, and chemistry. It becomes less confident in fields that are heavily diagram-based, deeply interpretive, or extremely specialized—graduate physics, RF systems, organic reaction pathways, or literary texts requiring close reading. It can still help in these areas, but errors are more likely to slip through.
Because of these limits, the best way to use ChatGPT is not as a shortcut but as a collaborator. Give it the context it lacks — your slides, syllabus, assignments, your attempt at solving a problem — and it becomes a tool for clarifying your thinking, matching course language, and building mastery. It works best when you work with it, not around it.
The effectiveness of ChatGPT comes down to context. Feeding it high-quality inputs (slides, notes, textbook excerpts, your own attempted solutions) helps it respond to your class rather than a generic version of the subject. Direct prompts such as “Explain this in the context of my course” or “Use the notation from these lecture slides” make the output dramatically better.
Some of the most powerful strategies include:
- Feeding in your own materials (slides, syllabus, assignments, past exams) before asking for explanations or practice.
- Asking it to match your professor’s notation and framing instead of defaulting to a generic textbook treatment.
- Showing it your attempted solutions and asking it to diagnose recurring mistakes.
- Turning past exams into fresh practice sets that test the same concepts in new forms.
ChatGPT isn’t a magic wand. It makes mistakes, it doesn’t know your professor personally, and it won’t know what’s on your final. But used wisely, it can help you study faster, smarter, and with less frustration. It won’t replace your effort—but it can channel it.
Whether it becomes your TA, your study coach, or your emergency brain during finals season—that depends on how you use it. In the end, the real skill isn’t learning to rely on AI. It’s learning to collaborate with it.