A New Approach for Teaching Critical Thinking with Generative AI

As I experimented this past week with perplexity.ai’s new R1 feature, I was intrigued by the way the model showed its reasoning in step-by-step fashion. I used it as an example in an ENGL 1102 composition class to discuss critical thinking. I was most interested to ask my students whether the computer was showing evidence of critical thinking and, either way, what we could learn from it.

Of course, to answer deductively whether the AI was critically thinking, a definition of critical thinking was necessary. We decided this one was clear and simple enough:

For the perplexity.ai demonstration, I prompted the AI with a question that my son is exploring right now as he writes his own essay for his middle school English class.

We could debate whether the full response of the AI shows evidence of critical thinking, but if we expand the “Reasoning with R1” section, we appear to have a window into how the full response came about. That is where I think we might be better off looking for evidence of critical thinking (or the simulation thereof). Let’s start with step 1:

Just in this step alone, we see that perplexity.ai decided to do a few different things. First, it turned my query into three separate searches, and it prioritized finding sources that included content from political analysts and scholars. It also decided (for lack of a better term) that it should locate or reference a definition of oligarchy. If these were my 8th-grade son’s first steps, or my college students’ first steps, in answering the given question, I think I would have to say that they were on the right road to “analyzing available facts, evidence, observations, and arguments,” straight from our definition of critical thinking. The AI’s selection of sources involved its own process worth discussing, and I expect my students to exercise their information literacy skills when doing that task for themselves. On another day, I may seek some clarification from the AI on how it went about selecting those particular sources, but suffice it to say for now that they were algorithmically generated, like many information searches we may already be familiar with. If we can get past the AI’s claim that it was “Reading” the sources, rather than, I suppose, “processing” or “scanning” them, we can move on to its next reasoning step.

Step 2:

I still struggle with seeing this AI use the personal pronoun “I,” but setting that aside, we can see how it defines oligarchy and links to the sources the definition comes from. It is worth checking those specific links to see whether they do inform the definition the AI chooses to use, but what is happening here is exactly what deductive reasoning calls for: you cannot say X is Y until you define what Y is. In this case, America is X and oligarchy is Y.

Step 3:

This is where we see the AI connecting aspects of oligarchy to America as it summarizes some of the key findings from its sources. If step 2 gave us the deductive major premise that Y (oligarchy) is a system where a “small group of ultra-wealthy individuals hold significant power over political decisions, often to increase their own wealth,” then step 3 gives us the minor premise of the syllogism: X (America) has “a few billionaires, influencing policies and elections.”
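Laid out explicitly, the deductive structure we have been tracing looks like a classic syllogism. This is my own paraphrase of the AI’s premises, not its exact wording:

Major premise: Any system in which a small group of ultra-wealthy individuals holds significant power over political decisions is an oligarchy (Y).
Minor premise: America (X) is a system in which a few billionaires influence policies and elections.
Conclusion: Therefore, America is, at least in some measure, an oligarchy (X is Y).

Whether the minor premise is actually true is, of course, the contested part, which is exactly why the next steps matter.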


Steps 4 and 5:

Here we see the AI doing something that most writing instructors encourage their students to do: account for and consider counterarguments instead of just feeding the confirmation bias monster. This relates to the “informed choices” language of our critical thinking definition. A choice is more informed once counterarguments are known and considered.

Step 6:

Instead of presenting a definitive answer to the question, the AI has settled on a thesis that expresses a degree of nuance. Getting my students to land on measured, qualified thesis statements like this is an achievement worth celebrating in a first-year composition course, especially when doing so forces them to move even a little bit away from their former strongly held, but ill-informed, convictions.

Conclusion:

So did the AI use critical thinking to answer the question it was given? And if so, what does that mean?

I cannot say for certain whether the machine was critically thinking. And just because the reasoning steps it presented to me look just like critical thinking (or simulations of critical thinking), that does not mean the AI was really doing those things. It just means that it says it was doing those things.

When Alan Turing first explored the question of whether a machine can think in his 1950 paper, he decided that particular question was less suitable than asking whether a machine can appear to think. In other words, can a machine fool a person into believing that it (the machine) is another thinking human being? In the case of perplexity.ai this past week, I think the appearance of critical thinking is quite strong.

Whether or not the AI was really thinking critically is important, but what might be more important (because more practical) is the new strategy I now have for discussing the concept with my students.

Here is the link to the perplexity.ai query and answer:

https://www.perplexity.ai/search/is-the-united-states-an-oligar-5p9jmfAwSaW9kCW4wiHSLQ

