Context
My generative AI journey began with ChatGPT in 2022 and has since expanded to include Claude.ai, Perplexity.ai, and Canva. My use cases generally relate to research and composition pedagogy, though I’ve even used Claude for some coding work.
When someone recently asked why I use different AI models for different tasks, I realized that I couldn’t articulate a clear answer. There’s significant overlap in what popular generative AI models can do, but each has distinct strengths and weaknesses. Their interfaces create different user experiences that can be difficult to explain—especially to those unfamiliar with these tools. Keep in mind, I’ve only used the free versions of these models, and paid versions often offer significant enhancements.
In my experience, Claude has always seemed more restricted by self-imposed ethical guardrails than ChatGPT, sometimes limiting its utility. Perplexity has been research-oriented from the start, with its links to credible and scholarly sources establishing reliability for its summaries. ChatGPT retains a nostalgic appeal as the first truly useful chatbot I encountered, but it carries baggage from “hallucinating” too much—especially in its earliest iterations. I avoided Google’s AI offerings due to similar accuracy issues with its Bard model upon release, but Google has since improved its AI products substantially. Its latest release, Gemini 2.5 Pro, ranks among the best available. Another AI-powered tool from Google is NotebookLM, which differs significantly from the simple chat interfaces offered by ChatGPT, Claude, and Perplexity.
NotebookLM has been on my radar for a while. I follow several AI content creators on YouTube (primarily AI Explained and Matthew Berman) who keep me updated on developments and reviews of new models. While NotebookLM has been mentioned occasionally, I don’t recall seeing a comprehensive review until watching this one from Skill Leap AI: https://www.youtube.com/watch?v=9xjmvUS-UGU
What is NotebookLM?
As I now understand it, NotebookLM is a tool with an integrated chatbot that draws its knowledge from a user-created dataset. Users can upload PDF documents, YouTube videos, and web URLs into a notebook, and the chat feature can then answer questions about the uploaded content and make connections between different sources.
Within each notebook’s simple interface, Google created a “studio” feature that generates a podcast-style discussion about all uploaded content. This creatively summarizes the material and could benefit many users. However, the uncanny valley effect of this AI-generated podcast feels somewhat unsettling to me. The hosts have an awkward morning talk radio vibe, though the premium upgrade apparently allows users to change the hosts’ voices and tone/focus. The Skill Leap AI review demonstrates how users can create a podcast using NotebookLM and then use other generative AI tools to adjust host voices and even create virtual video hosts to read the script.
The studio section also offers one-click study guides, timelines, and briefings generated from your uploaded content. In computing terms, this content database functions like the chatbot's context window: the working memory it draws on when responding to users. The volume of text that fits within this window represents a significant advancement in publicly accessible, free generative AI models. Those who remember using ChatGPT before attachment uploads will appreciate this leap forward.
Capabilities and Limitations
Once you create a notebook (database), you can use the chat feature to answer questions about your uploaded source material. The responses provide links to the source material and include the full text from the source that supports each response. This crucial feature of linking to source text—now adopted by other popular models—allows users to verify the chatbot’s responses rather than relying on paraphrases or potential “hallucinations” with fake or missing citations.
Based on my experience with NotebookLM, limiting the chat feature's responses to the user-created database significantly reduces fabricated information (hallucinations). In one case, NotebookLM gave me some information that did not come from the source material I had provided, but it also attached a disclaimer to that particular information.
The chat seems quite capable of digesting and synthesizing large volumes of text, up to a point. For one notebook, I uploaded the full text of Bizzell's 4,400-page anthology, The Rhetorical Tradition. Since this text is enormous (roughly 1.25 million words by my estimate), it exceeds NotebookLM's processing capacity; the per-source cap quoted below is 500,000 words. Despite accepting the entire book as a PDF upload, the tool only accessed approximately the first 1,000 pages. I resolved this by splitting the remainder of the book into separate 1,000-page uploads, as sketched below. I confirmed this was an effective solution by asking the chatbot questions about different sections of the full text: before I added separate uploads for pages 1001-4400, the chatbot could not provide links to anything beyond page 1000; after the additional uploads, it could answer questions and link responses to the rest of the text.
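For anyone who wants to replicate the workaround, here is a minimal sketch of the splitting step in Python, using the open-source pypdf library. The filename and the 1,000-page chunk size are my own placeholders, and this is just one way to do it:

```python
# Minimal sketch: split an oversized PDF into 1,000-page chunks for upload.
# Assumes the pypdf package; the filename is a hypothetical placeholder.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("rhetorical_tradition.pdf")
total = len(reader.pages)
chunk = 1000  # pages per upload, matching the split described above

for start in range(0, total, chunk):
    end = min(start + chunk, total)
    writer = PdfWriter()
    for i in range(start, end):
        writer.add_page(reader.pages[i])
    # Produces rhetorical_tradition_pp0001-1000.pdf, _pp1001-2000.pdf, etc.
    with open(f"rhetorical_tradition_pp{start + 1:04d}-{end:04d}.pdf", "wb") as out:
        writer.write(out)
```

Each resulting file comes in well under the word cap (about 284,000 words per chunk, at my rough estimate of 284 words per page), so NotebookLM can index all of them.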
A Major—Welcome—Surprise
While NotebookLM users can upload webpages (not entire websites) as sources, many webpages include dynamic digital text elements that might not be captured as part of the uploaded information. In one case, I doubted whether an upload would include book notes that are only viewable when hovering over particular book titles on a webpage and then scrolling through the notes.
Surprisingly, the upload to NotebookLM included notes from all titles pictured on the webpage. In a sense, this gave NotebookLM agent-like capabilities, as I didn’t need to physically hover and scroll with my mouse to uncover the information. While I understand that all this information was embedded in the webpage’s code like any other text element, I was surprised that NotebookLM captured, accessed, and cited this content directly. As AI tools develop more sophisticated agent capabilities, their utility will grow substantially.
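My best guess at the mechanics (this is an assumption, not a description of NotebookLM's actual ingestion code): text revealed on hover usually sits in the page's static HTML and is merely hidden by CSS until the mouse arrives, so any tool that reads the raw markup sees it immediately. A toy Python sketch with a hypothetical URL and CSS class:

```python
# Toy sketch: "hover-only" notes are often present in a page's static HTML,
# hidden by CSS rather than loaded on demand, so a plain fetch captures them.
# The URL and the .book-note selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/bookshelf").text
soup = BeautifulSoup(html, "html.parser")

# No hovering or scrolling required: the note text is already in the markup.
for note in soup.select(".book-note"):
    print(note.get_text(strip=True))
```

(Notes injected by JavaScript only at hover time would be a different story, which is why I was surprised the capture worked so completely.)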
Notebook Capacity and Limitations
Directly from Google’s help page:
NotebookLM vs NotebookLM Plus User Limits
With NotebookLM, you can have up to 100 notebooks, with each notebook containing up to 50 sources. Each source can be up to 500,000 words long. All users start with up to 50 chat queries and 3 audio generations per day.
If you upgrade to NotebookLM Plus, you get at least 5X more usage, with up to 500 notebooks and 300 sources per notebook. The daily query limits also increase, providing up to 500 chat queries and 20 audio generations per day. Sharing a notebook doesn’t change the source limit: both you and anyone you share with can upload up to 300 sources to that notebook.
The X Factor
Perhaps novelty is the hard-to-describe factor that makes NotebookLM so appealing to me. I'm enjoying the learning process more in various areas because of the fascination of watching what a language model can do while I test its limits and try to extract maximum utility from my laptop.
Implications for Teaching
Teachers already know how to use online platforms to provide students access to course materials. Imagine NotebookLM as a course site where all materials are aggregated and synthesized in helpful ways with a single click (to produce study guides and podcast overviews). More importantly, a NotebookLM notebook will function like a competent teaching assistant that can answer questions about all source material—even questions requiring connections between multiple sources.
I have already created a notebook for a class I am teaching this summer, and I am interested to see how students utilize it. (A notebook can be shared, but I believe it can only be shared with others who have a Google account.) Part of this ongoing project with NotebookLM is to learn how others leverage the technology in ways I have not thought of.
Implications for Writing
I’m still exploring how this tool can aid writing. For starters, it can serve as a database of one’s own writings. Unlike basic digital file storage, this tool helps writers find information from their database in ways that CTRL+F cannot. One of the best features of any LLM-based tool is its ability to instantly augment our searches with related terms/strategies we might not have considered.
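To illustrate the difference (a toy analogy, not NotebookLM's actual retrieval pipeline), compare a CTRL+F-style literal match with an embedding-based similarity search, here using the sentence-transformers package:

```python
# Toy contrast between literal (CTRL+F-style) matching and embedding-based
# retrieval. Assumes the sentence-transformers package and its public
# all-MiniLM-L6-v2 model; the example documents are invented.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Notes on Aristotle's three appeals in persuasive writing",
    "Draft syllabus week: citation styles and formatting",
]
query = "ethos, pathos, and logos"

# Literal search finds nothing: the query string never appears verbatim.
print([d for d in docs if query.lower() in d.lower()])  # -> []

# Embedding search still surfaces the conceptually related document.
model = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(model.encode([query]), model.encode(docs))[0]
print(docs[int(scores.argmax())])  # -> the Aristotle notes
```

The literal search misses the connection entirely, while the embedding search recognizes that "ethos, pathos, and logos" and "Aristotle's three appeals" are the same idea in different words.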
Of course, the chat feature in a notebook composed of one's own writings can also surface connections that were not evident to the writer. Maybe the chatbot could even suggest directions the writer could take next.
A classmate suggested this technology could help authors who create large fictional worlds that they need to track. NotebookLM would excel at reminding authors of obscure details that might otherwise conflict between previous works and current ones.
Tradeoffs and Conclusion
The most obvious downside to NotebookLM as a teaching and learning tool is that it may provide too much assistance, exacerbating our “TL;DR” tendencies. I often struggle with whether I need to read a full text to get what I need from it. As a painfully slow reader, my challenge is typically the finite nature of time rather than comprehension. NotebookLM addresses this time problem, albeit similarly to other generative AI models.
We should want to read primary texts in their entirety rather than relying on abstracts, summaries, and AI-generated bullet points. There’s significant value in struggling to make meaning from longform human communication. However, if information and technology continue accelerating, perhaps our paradigms for learning and processing information should evolve too. As we become an increasingly information-inhaling species, perhaps briefings, summaries, and podcast-style content we can process while driving are equally important to our learning, research, and writing processes.
We’ve all read longform materials that contained only small portions we truly needed. Context is often helpful and necessary, but not always. If NotebookLM can present context efficiently and effectively, perhaps this Google tool will be the first one I end up paying a subscription for. However, as these technologies increasingly filter communication between human writers and readers, we should always: 1) recognize the power, biases, and essence of the filter, and 2) remain mindful of what we may be sacrificing in attention span and deep reading comprehension.
(Description from Google's Student sign-up page, April 2025.)