Final Project
Wednesday May 07th 2025, 3:41 am

For my final project, ChatGPT and I worked together to compile and synthesize information from three key working papers jointly written by a task force convened by the Conference on College Composition and Communication (CCCC) and the Modern Language Association (MLA). Our aim was to integrate their guidance and analysis into a clear, visually engaging infographic tailored specifically for educators navigating the growing role of AI in writing courses. The infographic serves as a resource that highlights the most relevant guidelines, best practices, and ethical considerations for incorporating AI into the composition classroom, based on the latest academic recommendations from the MLA and 4Cs. In addition to the infographic, I wrote a reflective paper that examines the process of creating the visual representation with the aid of ChatGPT. In this reflection, I also discuss some of the limitations of the current guidance on AI in college composition.


Design by Jennifer Molton

Understanding the MLA-CCCC Joint Task Force on Writing and AI Guidance and Policy

As a novice English composition instructor, I am frightened by the growing implications of generative AI and what that means for my role in the classroom. Already grappling with the pressures of modern society in a litany of ways, I also find myself teaching in a moment when the boundaries of authorship, learning, and human creativity are being remodeled. It feels like the educational landscape is shifting right underneath me as the culture is pushed forward by technologies I barely understand and framed by discourses of efficiency, enhancement, and automation. It seems our world is rapidly approaching posthumanism. Increasingly, machines are no longer designed simply to support human processes; they are designed to simulate or even replace us. The foundational skills I was trained to teach suddenly seem contested and at times obsolete. What does it mean to guide students through invention, revision, and rhetorical decision-making when a machine can draft an entire essay in seconds? What does authentic writing look like in a world where the line between human and AI-generated text grows blurrier by the day? These questions are not mine alone. In recognition of the profound shifts underway, the Modern Language Association (MLA) and the Conference on College Composition and Communication (CCCC) formed a joint task force co-chaired by Holly Hassel and Elizabeth Mathews Losh. Their goal has been to take stock of the current moment, support educators like myself, and provide thoughtful, community-driven guidance for navigating AI in writing instruction. The task force’s three working papers offer a progression of insights: from identifying risks and values, to proposing ethical policies, to calling for long-term cultural and institutional change. In this paper, I reflect on these documents through the lens of my own uncertainty, hope, and skepticism, as I try to reconcile what it means to teach writing in an AI-saturated world. I also consider the gaps and limitations in these documents, especially where they intersect with my lived experience as an early-career instructor seeking to uphold the integrity of student voice, process, and critical engagement.
The joint task force released the first of its working papers in July 2023. In this comprehensive paper the committee addresses the urgent need for clear, equitable, and pedagogically sound policies around the use of generative AI (GAI) in postsecondary writing and language instruction. It builds upon an earlier statement of principles from the task force and focuses specifically on the challenges and opportunities of GAI policy development in college classrooms. The paper is organized into four major sections: tiered policy structures, implementation principles, faculty use of GAI, and the broader impact of GAI across educational domains. The authors propose a tiered model for GAI policy – spanning institutional, departmental, and individual classroom levels – allowing for local autonomy while maintaining alignment and coherence across different educational spaces. Policies should be flexible, participatory, and responsive to the needs of marginalized and multilingual students. Classroom-level policies, in particular, must be detailed and contextualized, outlining specific expectations and ethical boundaries for GAI use based on assignment goals and course objectives. The paper identifies three guiding principles for implementation: (1) policy must center academic integrity, learning outcomes, and the teacher-student relationship; (2) policies should minimize harm and support critical AI literacy rather than enforcing blanket prohibitions; and (3) AI detection tools should be used sparingly, if at all, due to their inaccuracy, bias, and potential for misuse. Instructors are encouraged to evaluate student work through collaborative conversations about processes rather than relying on surveillance or unverified detection software. Faculty should also be wary of how assumptions about language and identity may result in disproportionate accusations of misconduct against multilingual or racially marginalized students. The report also addresses faculty use of GAI, particularly in grading, feedback, and course preparation. It calls for transparency when AI is used to generate syllabi, assignments, or feedback and warns against overreliance on automated tools that may erode trust and student engagement. Any institutional or departmental guidance on faculty use should balance labor considerations, academic values, and student perspectives.

In its final section, the paper outlines six key policy areas affected by GAI: multilingualism, literacy, surveillance, intellectual property, definitions of “good” writing, and writing as a process. It argues that AI policies must not inadvertently enforce white linguistic norms or devalue nonstandard English. Writing is reframed as a recursive, human-centered process that cannot be reduced to simple binary rules about AI use. Rather than policing behavior, the report urges educators to foster critical inquiry, metacognitive reflection, and informed experimentation with digital tools. Overall, this document is a foundational guide for educators and administrators navigating the complex landscape of AI in higher education. It blends theoretical insight with practical scenarios and serves as a model for developing just, transparent, and pedagogically aligned GAI policies.

In the second working paper of the series, published in April 2024, the MLA-CCCC Joint Task Force offers detailed, practical guidance for developing equitable, ethical, and pedagogically sound policies surrounding the use of GAI in writing and language instruction. While the first working paper focused on high-level ethical questions such as authorship, originality, and the role of AI in intellectual labor, this follow-up emphasizes implementation strategies, institutional frameworks, and scenario-based policymaking. A central feature of this paper is its renewed advocacy for tiered policy development across the institutional, program/departmental, and classroom levels. Each level is tasked with specific responsibilities – defining broad principles, offering field-specific guidance, or crafting detailed assignment-based expectations. This focus on multilevel alignment and flexibility expands on the call for local autonomy emphasized in the first paper.

This paper introduces three core principles for ethical policy implementation: keeping learning outcomes and teacher-student relationships central, reducing harm through inclusive AI literacy rather than punitive bans, and rejecting overreliance on flawed AI detection software. These principles refine and build upon the first paper’s caution against surveillance and emphasize the need for trust, transparency, and anti-racist pedagogical practice – particularly in response to biases in detection tools that disproportionately flag work by multilingual and marginalized students. This second paper also takes up faculty use of GAI. It explores the implications of using AI for grading, feedback, and course design. Faculty are encouraged to maintain transparency, avoid undermining student trust, and ensure AI use aligns with educational values. The paper warns against creating double standards where faculty benefit from AI efficiency while students face restrictions. The final section analyzes six domains impacted by GAI policy: multilingualism, literacy, surveillance, intellectual property, definitions of “good” writing, and writing as a process. These discussions connect AI policy to long-standing debates in composition studies, such as linguistic justice and process pedagogy, and present a more intersectional, justice-focused approach than the first paper, which concentrated more on foundational definitions and principles. Overall, this second paper is a practical extension of the first, maintaining the core values of transparency, equity, and pedagogical integrity established in the initial document while shifting the focus from theoretical frameworks to operational strategies and institutional questions. Together, the two documents address why AI in writing matters and how institutions can responsibly respond to it.

The third working paper from the MLA-CCCC Joint Task Force builds on its predecessors by shifting from outlining risks (Working Paper 1) and crafting policy infrastructure (Working Paper 2) to promoting a sustained cultural transformation around AI literacy in writing, language, and literature education. The document reframes GAI not only as a tool that must be regulated or accommodated, but as a literacy ecosystem that must be consciously shaped through collective, inclusive, and critical educational practice. It identifies four domains for cultivating GAI literacy, adding students to the previously discussed tiers of educators, programs/departments, and institutions. Each section outlines specific literacies, competencies, and ethical considerations relevant to that group. For students, it advocates learning outcomes that center on critical AI evaluation, rhetorical use, bias awareness, transparency, and metacognition. For educators, the paper recommends technical training, professional development, and pedagogical reflection, along with a commitment to transparency and inclusive practices. Programs and institutions are encouraged to embed GAI literacy across curricula and institutional structures, supporting this work through faculty development, cross-departmental collaboration, and equity-minded leadership.

What distinguishes this third paper from the first two is its explicit grounding in multiliteracies and critical pedagogy. Drawing on the New London Group and scholars like Maha Bali and Ruha Benjamin, the paper frames AI literacy as a multilayered, socially-situated competency that must be built through engagement with issues of race, power, surveillance, labor, and linguistic justice. While the first paper introduced AI as a rhetorical partner and the second paper outlined mechanisms for institutional and classroom-level governance, this third paper envisions GAI as a cultural and ethical landscape, one that demands not just policies or guidelines, but a transformational shift in educational habits, values, and community discourse.

This paper includes nuanced analysis of power dynamics and emphasizes the burdens of AI literacy development that often fall disproportionately on contingent faculty, graduate students, multilingual learners, and other historically marginalized groups. It also warns against over-reliance on surveillance tactics like AI detectors and locked-down browsers, encouraging long-term curricular integration and institutional investment instead. The latest installment of these working papers expounds on the task force’s earlier work by foregrounding culture-building over compliance, inclusivity over surveillance, and agency over restriction. It invites educators not only to respond to AI, but to reshape the terms on which it enters academic life. Whereas the first paper diagnosed the landscape and the second mapped policy responses, this third installment calls for a justice-oriented approach to AI literacy development, marking a further evolution from reactive guidance.

The three working papers represent a progressive evolution in both focus and function. The first serves as a foundational document, offering a diagnostic and philosophical lens through which to understand AI’s impact on writing instruction. It introduces core ethical concerns such as authorship, originality, linguistic justice, and the value of writing as a human-centered process, and warns against surveillance-based enforcement and overreliance on flawed detection tools. This paper sets the stage by affirming writing as a form of labor, cognition, and identity, and calls for educators to approach AI critically and reflectively. The second paper builds upon these values by translating them into practical, administrative strategies. It focuses on policy implementation, providing a tiered framework that addresses the institutional, departmental, and classroom levels. This working paper is procedural and scenario-driven, offering sample syllabus language, ethical decision-making guidance, and a clear stance on minimizing harm, particularly for multilingual and marginalized students. Importantly, it expands the scope of the conversation to include faculty use of AI for grading, feedback, and instructional design, emphasizing transparency and fairness. Where the first paper called for ethical reflection, the second paper addresses the mechanics of policy design in response to AI’s proliferation. The third paper, in contrast, is educational and transformational in tone. Rather than focusing on regulation or infrastructure, it promotes building a culture of generative AI literacy across institutions. Drawing on multiliteracies theory and critical pedagogy, it frames AI literacy as a complex, justice-oriented competency that must be fostered among students, educators, programs, and institutions. This paper calls for the development of inclusive curricular practices, institutional investment in long-term faculty support, and a pedagogical approach rooted in equity, accessibility, and rhetorical agency. While the first paper explains why we must be cautious, and the second explains how to respond responsibly, the third challenges us to imagine what a generative, inclusive, and sustainable AI-literate academic culture might look like. Each paper retains the core values of equity, transparency, and rhetorical agency, but seeks to deepen the collective understanding of what it means to write, teach, and learn in an AI-augmented world.

Although the working papers offer comprehensive frameworks, policies, and pedagogical guidance for addressing generative AI in writing instruction, they also reveal several limitations, both structural and philosophical, that warrant critical examination. One of the most striking tensions appears in the first working paper, which encourages instructors to prepare students for responsible AI use in “other classes, work environments, and their personal lives.” This aim feels entirely aspirational, as it assumes an unrealistic level of foresight and pedagogical reach. Instructors, particularly those in entry-level or GTA positions, often lack knowledge of institutional frameworks, curricular alignment, or cross-disciplinary insight into what students will encounter beyond their courses. The burden placed on individual educators to anticipate such diverse futures reveals a blind spot in the task force’s otherwise student-centered approach. It offers no concrete strategy for mapping AI literacy across institutional curricula or for articulating clear transfer goals beyond vague outcomes.

Another limitation surfaces in the papers’ repeated assertion that penalizing AI use “disproportionately affects marginalized groups.” This claim, while valid in some contexts, particularly regarding the unreliability and bias of AI detection tools, is often presented without sufficient nuance or factual grounding. The argument tends to conflate surveillance-based harm with all forms of AI restriction, dismissing the equally pressing concern that widespread, uncritical AI use may actually undermine the goals of equitable education. While the papers defend AI as a potential scaffold for overcoming psychological or linguistic barriers, they downplay how such tools might also mask a student’s struggles rather than address them. For students who need more explicit instruction in invention, structure, and revision, AI may offer a shortcut that suppresses growth rather than a bridge to agency. In this regard, the enthusiasm for AI’s role in “creative wordplay” or as a brainstorming companion, as highlighted in Working Papers 2 and 3, appears ideologically driven. It risks romanticizing experimentation without recognizing that genuine creativity often emerges through friction, struggle, and iterative failure, processes that AI can obscure or bypass.

Equally underexplored are the practical limitations educators face in implementing the guidance offered in these papers. The documents presume a level of familiarity with AI systems that does not always align with reality, especially given the rapid evolution of tools, lack of institutional training, and divergent access to technology across institutions. Working Paper 2, for instance, outlines detailed policy language and implementation strategies, but these suggestions may be inaccessible to under-resourced faculty who are already overburdened with course loads and administrative demands. Similarly, Working Paper 3 calls for a cultural shift toward AI literacy at every level of academia, but offers few models for sustained, cross-functional professional development that would allow such transformation to occur equitably.

All three papers largely ignore the philosophical concern of human writing versus AI-generated content. Although they acknowledge that AI simulates meaning rather than enacting it, they stop short of interrogating whether the simulated voice produced by generative tools displaces the writer’s authentic, embodied voice. This omission is especially significant in the context of marginalized students, as embodiment and lived experience are central to the rhetorical agency students often bring to the classroom. The possibility that AI might obscure or overwrite those voices is not adequately addressed. Ultimately, while the working papers do a good job of starting an important conversation about generative AI and writing instruction (they definitely help provide guidance through this virtually uncharted territory), they also leave a lot unresolved. There are some big assumptions about what instructors have the ability, patience, and know-how to do, and some equity claims that feel a little too simplified. Additionally, the purported benefits of AI overlook a core tenet of composition studies: the struggle that AI tools are built to avoid is an integral part of learning to write. There are also clear gaps when it comes to preparing faculty for these changes and not enough deep thinking about what writing should look like in a world where AI is part of everything. Without addressing these, this guidance seems helpful in theory but a little out of touch in practice – too strict in places where we need room to adapt, and too open-ended where we really need clearer direction.

ChatGPT and I worked together to turn the three working papers into readable digital infographics that presented the information in a clear and concise manner. In our collaborative effort to create the infographics, I learned about some important limitations inherent to AI-assisted graphic design. While we successfully transformed a comparative table of the three working papers into downloadable formats, including an Excel sheet and multiple image-based infographics, the process revealed that AI systems often lack the precision and design sensibility of human creators.

Initially, I used ChatGPT to generate a standard infographic-style table; however, the first versions suffered from readability issues, especially regarding font size and layout spacing. Despite adjustments, including enlarging the font size to 32 and applying Canva-inspired styling with alternating row colors and header highlights, the AI’s visual output remained clunky and inconsistent in aesthetic design. The AI styling could not replicate the nuance, spacing, and visual hierarchy that a human designer would instinctively fine-tune, such as balancing whitespace, choosing accessible color contrasts, or adjusting typography across sections for clarity and flow. Additionally, it did not help that my knowledge of graphic design was limited. These challenges underscored the notion that while AI can help automate layout tasks and generate templates quickly, it still lacks the intuitive, context-sensitive judgment that skilled design work requires. Ultimately, the infographics, though flawed in some respects, were still of better quality and took less time to produce than I would have managed without the aid of AI tools.
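For context on what the downloadable Excel piece of this process involves, here is a minimal, hypothetical sketch of how a comparison table with a highlighted header and alternating row colors (the Canva-inspired styling mentioned above) could be generated programmatically. It assumes Python with the openpyxl library, and the row summaries are placeholders drawn from my own comparison of the three papers; it is not the code or file ChatGPT actually produced.

```python
# A hypothetical sketch (not the actual file ChatGPT produced) of how a
# comparison table with a highlighted header and alternating row colors
# could be written to Excel with the openpyxl library.
from openpyxl import Workbook
from openpyxl.styles import Font, PatternFill

header = ("Theme", "Working Paper 1", "Working Paper 2", "Working Paper 3")
rows = [
    ("Focus", "Risks and principles", "Policy guidance", "AI literacy culture"),
    ("Tone", "Diagnostic", "Procedural", "Transformational"),
]

wb = Workbook()
ws = wb.active
ws.append(header)
for row in rows:
    ws.append(row)

header_fill = PatternFill("solid", fgColor="1F4E79")  # dark header band
band_fill = PatternFill("solid", fgColor="D9E2F3")    # light alternating band

for cell in ws[1]:  # style the header row
    cell.font = Font(bold=True, color="FFFFFF", size=14)
    cell.fill = header_fill

for i, row in enumerate(ws.iter_rows(min_row=2), start=2):
    if i % 2 == 0:  # shade every other data row
        for cell in row:
            cell.fill = band_fill

wb.save("working_papers_comparison.xlsx")
```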

The experience of using AI to create the infographics works as a microcosm of the broader argument advanced by the MLA-CCCC papers themselves: AI can be a useful tool, but it must be paired with human insight, critique, and iterative refinement. In the realm of digital visual rhetoric, just as in written rhetoric, human intervention remains essential for ensuring clarity, accessibility, and meaning. It really feels like the integration of AI into everyday life isn’t something we’re choosing anymore; it’s just happening, with or without our input. I can’t simply opt out or avoid it. Even when I try to ignore it, it’s in the background of almost everything I use, from the way I search for information to how my emails autofill or how people talk about writing and creativity. In certain circles, how you approach AI seems to say something about you. It’s like your stance on it becomes a kind of litmus test for your social responsibility and ethics. But the more I learn about how AI actually works, the harder it is to feel comfortable with it. I keep hearing about the environmental cost, like how much energy it takes to run these massive models, and worse, the invisible human labor behind it all. People are working under exploitative conditions with low pay to train and moderate AI systems. I can’t help but wonder what is being brushed under the rug in the name of progress as we celebrate AI’s convenience or so-called intelligence. But it is clear that AI is already taking jobs, as I hear my Uber drivers lament Waymo driverless vehicles being released into the Atlanta market. And I see my fellow instructors and tutors grow concerned at the rise of students submitting completely AI-written essays. While AI is becoming a bigger part of everyday life, I’m wondering what kind of future we’re aiming towards. It doesn’t make sense to work towards a completely autonomous future for some, particularly when it is built on the backs of others’ suffering. We must continue asking hard questions, making space for skepticism, and maybe even saying no when the hype overshadows the harm. Pretending it’s all progress, even granting that some of it genuinely is, simply doesn’t feel honest.

Works Cited

MLA-CCCC Joint Task Force on Writing and AI. Working Paper 1: Overview of the Issues,
Statement of Principles, and Recommendations. Modern Language Association of America and Conference on College Composition and Communication, July 2023. https://aiandwriting.hcommons.org/working-paper-1/

MLA-CCCC Joint Task Force on Writing and AI. Generative AI and Policy Development:
Guidance from the MLA-CCCC Task Force. Modern Language Association of America and Conference on College Composition and Communication, Apr. 2024.

MLA-CCCC Joint Task Force on Writing and AI. Building a Culture for Generative AI Literacy in
College Language, Literature, and Writing. Modern Language Association of America, Oct. 2024.



AI and Social Media
Wednesday May 07th 2025, 3:40 am

I feel like social media is slowly being overtaken by AI. It’s crazy how these apps seem to be pushing AI-driven conversations more and more, and it’s hard not to think about how it all ties back to the money-making machine behind it all. It’s like, where are we really heading with this? Every time I open an app, there’s some new AI integration, or worse, clearly AI-driven engagement that’s trying to pass itself off as real conversation. It’s getting a little weird, and I can’t help but wonder what’s the end game here? Especially with text-based apps like FKA Twitter (I still refuse to call it “X”) and Threads (Instagram’s version of Twitter), it’s getting pretty obvious that AI is playing a huge role. These apps push you to engage with posts from accounts you don’t even follow, and when you look at the accounts they want you to check out, most of them seem to be posting AI-generated stories. It’s wild how all the posts are starting to follow the same format, like they’re intentionally trying to spark some kind of “engagement” and “discourse,” but it seems to be about driving traffic and consequently making money. At the same time, on platforms like TikTok, I see more users calling out this rise in reliance on AI. There’s been a lot of authentic discourse happening there, with users speaking out about how AI is not just changing how we communicate, but even reshaping how we think. I came across one video (linked below) that talked about AI potentially replacing traditional composition, and it really stuck with me. It made me think a lot about where AI is heading and how it could impact fields like writing and education in ways we’re only starting to see. But I’ve noticed recently that the search function on TikTok will often return AI-generated results, kind of like what you see at the top of Google’s search results. The weird part is that these AI-generated descriptions are often inaccurate, which is a little concerning when you’re trying to find reliable info. I’ve been looking for examples to share with my classmates, but most of the time, the inaccuracies pop up when it’s talking about slang, niche communities, or cultural references, which the AI misinterprets in ways that border on hallucination. Whatever the case, AI isn’t always doing a great job of capturing the nuanced meaning of things.

https://www.tiktok.com/t/ZTjDDQ4KA/



Week 7
Wednesday May 07th 2025, 3:40 am

I want you to be my Socratic mentor for refining my AI policy for my 1102 course. Use the Socratic method to help me identify cognitive biases and blind spots in my current approach. Start by asking me the 3 most penetrating questions about my situation, and then guide our conversation using systematic doubt and metacognition frameworks. For each of my responses, highlight one assumption I’m making and help me pressure-test it.

I’d be happy to help guide you through refining your AI policy for your English 1102 course. Let’s begin by using the Socratic method to encourage deep self-reflection and critical thinking.

**1. What do you perceive as the primary role of AI in your course?**

This will help clarify your purpose behind integrating AI and guide the foundation of your policy.

**2. How do you define academic integrity in the context of using AI in assignments and assessments?**

This question will focus on the key ethical assumptions you might be making regarding fairness, honesty, and the impact on learning.

**3. What potential risks or unintended consequences do you foresee arising from AI usage in your course, and how do you plan to address them?**

This will push you to think about any potential cognitive biases that could blind you to the broader implications of AI’s influence on students and their learning experiences.

Let’s dive into these questions one by one, and I will be challenging the assumptions you make along the way to uncover potential biases and blind spots in your thinking. Feel free to respond with your thoughts!

Okay, for the first one, I don’t think AI necessarily has a role in my course. Well, not one that I have invited or created. I think AI is something that exists in my students’ lives and I would be remiss to ignore it, but I am still not quite sure what to do with it.

Your response suggests that you’re grappling with the role of AI as a tool versus as a topic of focus in your course. There’s an assumption here that because AI exists in your students’ lives, it’s something that must be addressed—but does that automatically mean it should have a significant role in your course design?

Let’s pressure-test this assumption with a few questions:

1. **What is the underlying reason for your discomfort in inviting AI into your course?** Is it a matter of uncertainty about its educational value, a concern about potential disruptions to traditional teaching methods, or something else entirely?

2. **Is the presence of AI in your students’ lives inherently a problem or an opportunity?** Could it be that you’re seeing AI as a challenge to your pedagogy rather than a potential tool for exploration, discussion, or critical thinking?

By considering these questions, we can better examine if there’s an assumption about the “neutrality” of AI or a fear that it might dilute the learning process. Perhaps there’s a cognitive bias around seeing technology as something either wholly positive or negative without examining its nuanced applications.

How do you respond to these questions?

I think I take issue with your use of “inviting” and the overall insinuation that I am creating a vested focus on AI in my course. It just so happens that I am teaching composition at a time when students are able to generate essays without having to do the “real work,” maybe? I am unsure. I think an essay solely generated by AI probably won’t be a very good essay, but because it is such a burgeoning field, and because I am new to teaching, I am unsure how to deal with it.

I see, and I appreciate the clarification. It seems like the core issue is less about AI having a *role* in your course and more about how to address the possibility that students may use it to bypass traditional academic work, particularly when it comes to generating essays. You’re concerned about the quality of AI-generated work and the integrity of the assignments, which is understandable given the growing prevalence of this technology.

You’re also acknowledging that as someone who is new to teaching, you’re unsure how to handle it. This self-awareness is important because it shows that you’re open to adapting your approach while still being mindful of potential pitfalls. Let’s break this down and test a few assumptions:

### Assumption 1: **AI-generated essays won’t be “good” essays.**

* **Pressure test:** Is it possible that AI-generated essays could still be effective in certain contexts, perhaps as a draft or starting point? Or do you view them as inherently inferior to student-generated work?

* Could AI potentially be used for brainstorming or outlining, where it supports the writing process instead of replacing it entirely?

### Assumption 2: **AI bypasses the “real work” of writing.**

* **Pressure test:** What exactly is “real work” in the writing process? Is it *just* about the creation of the content, or does it also involve other elements like critical thinking, synthesis, and revision?

* If AI helps with content creation, could students still be engaged in other aspects of the writing process that foster learning (e.g., revision, analysis, and self-reflection)?

### Assumption 3: **As a new teacher, you’re unsure how to deal with AI.**

* **Pressure test:** Do you feel uncertain because you’re worried about how to manage AI’s impact on student learning, or is it more about how to *address* it in your grading and policies?

* Could there be a middle ground where you embrace the reality of AI’s existence while maintaining your standards of academic integrity? Could you set guidelines that help students engage ethically with AI?

By reflecting on these assumptions, we can uncover potential biases that may be influencing how you see the role of AI in your course. How do you feel about these questions, and where do you see potential areas for further exploration?



Final Project Ideas
Wednesday May 07th 2025, 3:40 am

For my final project I intend to write an investigative, curious paper on AI in composition. I want to write an essay exploring the ethics of AI use in English composition, examining the topic from both the student and instructor perspectives. From the student’s view, AI can be a helpful tool for brainstorming, drafting, and refining ideas. It may democratize access to writing support, especially for students with different learning styles. However, it also raises concerns about academic integrity, as students might misuse AI to bypass the actual writing process, potentially undermining the purpose of assignments. The key issue is how students balance AI assistance with personal engagement in their work. From the instructor’s perspective, AI complicates traditional methods of assessment. Instructors must figure out how to evaluate work that may be heavily influenced or generated by AI. There is also the challenge of establishing ethical guidelines for AI use in the classroom. Instructors must determine how to allow AI as a tool without compromising academic rigor or fairness, especially considering issues of equity and access to AI tools. Ultimately, the essay will explore whether and how AI can be ethically incorporated into composition studies, proposing a balanced approach where AI is used responsibly to enhance, rather than replace, the intellectual work of writing. This week I had a conversation with my students about their AI use in their private and academic lives. Most of my students hesitantly acknowledged that they use AI to some extent in their daily lives (usually for personal purposes), with a few students from each of my sections being starkly opposed to AI use.



Week 8
Wednesday May 07th 2025, 3:39 am

I had ChatGPT read and create an infographic containing the important bits of information from the joint MLA-CCCC working paper on AI in composition. This was the result after asking ChatGPT to make it prettier.





Writing & AI: Key Insights

🧠 The Core Message

AI writing tools are not neutral. Their use in education requires ethical, rhetorical, and pedagogical care.

📘 What Educators Should Understand

  • AI tools replicate cultural biases from training data.
  • They often produce fluent but inaccurate content.
  • Common student motivations include time pressure, anxiety, and language challenges.

🏫 Classroom Recommendations

  • Teach AI literacy: Explain how AI works and its limitations.
  • Promote rhetorical thinking: Focus on purpose and audience, not just correctness.
  • Use AI meaningfully: Try activities like comparing drafts or identifying AI bias.

🛠️ Pedagogical Approaches

  • Center rhetorical education on student agency.
  • Encourage transparency about AI use in classwork.
  • Support multilingual learners without bias.
  • Design for equity and inclusion.

📜 Policy Considerations

  • Avoid blanket bans or surveillance-based enforcement.
  • Clarify acceptable AI use in syllabi and assignments.
  • Update honor codes through faculty-student collaboration.

📍 Final Takeaway

AI is not the enemy. The goal is to redefine writing pedagogy for a world where AI is a rhetorical partner.

Based on the MLA-CCCC Joint Task Force on Writing and AI Working Paper 1


Not quite satisfied with the results, I took this outline to Canva and created my own version containing the relevant information from the original infographic. This is what I created in Canva:

I then took this Canva poster back to ChatGPT and asked it to help me make it more organized and add additional information to the infographic while keeping the integrity of my design. This was the final result.

I noticed some errors in the text, but overall I was satisfied with this basic outline. However, I was curious whether ChatGPT could include more information from the essay, so I tasked it with creating a multipage infographic. Here were those results.

The quality of information in this version was significantly reduced.



Week 6
Wednesday May 07th 2025, 3:39 am

**Protocol for Thinking Through Argumentative Situations** (generated with ChatGPT)

Use this step-by-step guide to analyze and respond effectively in any argumentative situation. Whether you’re preparing for a classroom debate, writing a persuasive essay, or navigating a tough conversation, this protocol will help you think critically, clarify your position, and engage others thoughtfully.

**Step 1: Define the Situation (Rhetorical Context)**

* Who is involved? Consider the audience, stakeholders, and possible opponents.
* What is the issue? Clearly identify the central topic or problem.
* Where and when is it happening? Note the physical, digital, or cultural setting.
* Why now? Identify what prompted the argument.
* Tip: Use the rhetorical triangle (ethos, pathos, logos) to understand the situation.

**Step 2: Identify the Claim**

* What is being argued?
* Is it a claim of **fact**, **value**, or **policy**?
* Reframe the argument as: “The speaker believes ___ because ___.”

**Step 3: Examine the Evidence**

* What support is offered for the claim?
* Is the evidence credible, relevant, and sufficient?
* Are there logical fallacies or gaps in reasoning?

**Step 4: Consider the Counterarguments**

* What might a reasonable opponent say?
* Are objections addressed directly or ignored?
* Does the argument acknowledge complexity, or oversimplify?

**Step 5: Analyze the Stakes**

* What are the consequences if the argument is accepted or rejected?
* Who benefits or is harmed?
* What values or ideologies are embedded in the argument?

**Step 6: Reflect on Your Position**

* Where do you stand, and why?
* What biases or assumptions are you bringing?
* What additional information would help clarify your position?

**Step 7: Strategize a Response**

* Will you agree, refute, qualify, or redirect the claim?
* What tone, medium, and structure will best support your response?
* What rhetorical tools (e.g., stories, data, analogies) will persuade your audience?

**Mnemonic to Remember: CRAFTES**

* **C**ontext
* **R**hetorical claim
* **A**rguments & evidence
* **F**oresee counterpoints
* **T**rack consequences
* **E**xpress your stance
* **S**trategize response

https://www.perplexity.ai/search/develop-a-protocol-for-thinkin-8ja7W9svR6OH1ULnqMRi1A

 



Week 5
Wednesday May 07th 2025, 3:39 am

I think back to my childhood and how different life was without all the technology we have now. One memory that sticks with me is when my sister and I were trying to find a song we heard on an episode of The Powerpuff Girls. We didn’t know the name of the song or which episode it came from. We didn’t have YouTube or Google to help us. So, we decided to record every single episode that came on TV, hoping that one day the episode with the song would rerun. And eventually, it did. But we had to wait and watch a lot of episodes just to find it. Now all you’d have to do is type a few words into Google, like “Powerpuff Girls episode with Love Makes the World Go Round,” and the exact episode, video clips, and even lyrics would show up in seconds. Similarly, I think AI will keep filling in those little gaps in our lives. Things that used to take time or effort are already getting faster and easier. I see it happening all the time. Some of my friends use AI or TikTok to look things up instead of using Google. I can see the appeal of being able to be more conversational with this tech and to treat it more like a human capable of understanding context. You can just type in a question or describe a problem and get an answer right away without having to sift through irrelevant results. It’s helpful, but it also makes me wonder what we might be losing. I grew up loving reading and writing, and I learned so much through those habits. Sometimes I wonder if AI might make people less interested in doing those things for themselves. It reminds me of how my teachers used to worry about calculators and computers in class. They thought we wouldn’t learn basic math skills if we relied on tech too much. And now, I kind of see what they meant. I just learned that on the iPhone calculator you don’t even need to convert percentages to decimals anymore. You can just type “6% * 299” or whatever and it gives you the right answer. It’s super convenient, but I do wonder if kids are still learning how the math works or if they just learn to type it in and move on. I’m not against technology at all. I think it’s amazing how far we’ve come. I just find myself thinking about how these tools are shaping the way we learn, think, and problem solve.
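To spell out the arithmetic the calculator button is hiding: a percent sign is shorthand for “divide by 100,” so 6% of 299 means 0.06 × 299, which is 17.94. A tiny, hypothetical Python sketch of that conversion step:

```python
# What typing "6% * 299" into the calculator does behind the scenes:
# the percent sign means "divide by 100" before multiplying.
percent = 6
value = 299

result = percent / 100 * value   # 0.06 * 299
print(f"{percent}% of {value} is {round(result, 2)}")   # 6% of 299 is 17.94
```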



Week 4
Wednesday May 07th 2025, 3:38 am

I was quite intrigued by the idea that there is a career in AI that is not coding-based and pays securely into the six figures. I started poking around on job boards like Handshake to see if I could actually find one of these jobs. I wasn’t exactly sure what I was looking for, though. The term “prompt engineer” kept coming up, but I realized I didn’t really know what that meant. Was it someone who just plays around with chatbots all day? Like a beta tester for AI? Or was it something more technical behind the scenes? The few job listings I did come across were kind of vague. Some seemed super technical, with lots of mentions of machine learning, data sets, and programming languages I barely recognized. Others were more about communication and content creation, which made me wonder if the role could be more flexible than I thought. Articles like the one linked to this week’s module kind of dismiss it as more of a buzzword than a stable career path. Honestly, the mixed messaging made it hard to figure out if prompt engineering is a serious role with long-term potential or if it’s just a shiny new title that’ll fade as AI tools become more user-friendly. But I think what really stuck with me wasn’t just the job title or the salary but that, at its core, prompting is about understanding how to talk to AI in a way that gets meaningful results. That might sound simple, but the more I think about it, the more I realize that there’s a real skill in knowing how to ask the right questions. I have not found the elusive dream “non-coding six-figure AI job” yet, but the search definitely opened my eyes to how broad the field actually is and how communication is becoming just as valuable as technical skills in this space.

 



Week 3
Wednesday May 07th 2025, 3:38 am

I came into the world of artificial intelligence with a pretty basic understanding of how computer programs work. I knew the fundamentals — basic functions, input and output, and how to use logic to get a program to respond to certain actions. In college, I had to build simple console-based games for programming classes, and I remember relying heavily on if statements. If the user chose option A, the program would respond in one way. If they chose option B, it would do something else. From there, I’d try to build out all the different paths and reactions. It wasn’t the most efficient code. A lot of it turned into long chains of nested loops that were hard to follow and debug. I knew there were better ways to organize the logic, but at the time, I didn’t have the skills to simplify it. What I did know was that writing these kinds of programs required thinking through every possible user action and preparing for it in advance. That meant not only building the logic to handle expected behavior, but also thinking ahead to possible mistakes, incorrect inputs, or unexpected combinations. It was a process of trial and error. I thought artificial intelligence worked in a similar way (I guess it kind of does). But I assumed it was just a more advanced version of those early programs, still built on rules and scripts created by human programmers. I pictured a system with a large number of preset responses, ready to be triggered by specific inputs. It felt more like a clever illusion of intelligence rather than anything close to how people actually think. After doing some reading and using generative AI tools myself, I started to see the difference. While these systems are still grounded in logic and code, they’re not built around manually programmed responses to every possible input. Instead, they’re trained on large sets of data and learn to recognize patterns in that information. That allows them to generate new responses in real time, based on the context of what they’re being asked. In my coding practice, I had to guess how a user might behave and then write code to handle that exact situation. Generative AI doesn’t rely on pre-written rules for every outcome. It adapts to new inputs by drawing from patterns and data models.
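To make that contrast concrete, here is a rough, hypothetical Python sketch of the kind of rule-based console game I am describing: every user choice has to be anticipated and answered with a hand-written branch. A generative model, by contrast, produces its response from patterns learned in training data rather than from branches like these.

```python
# A hypothetical sketch of the rule-based approach described above:
# every possible input gets its own hand-written branch.
def respond(choice: str) -> str:
    choice = choice.strip().lower()
    if choice == "a":
        return "You open the door and step into the hallway."
    elif choice == "b":
        return "You stay put and listen at the keyhole."
    elif choice == "c":
        return "You turn back the way you came."
    else:
        # Anything the programmer didn't anticipate falls through here.
        return "I don't understand that choice. Try A, B, or C."

while True:
    answer = input("Choose A, B, or C (or Q to quit): ")
    if answer.strip().lower() == "q":
        break
    print(respond(answer))
```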

 



My Commonplace Habit
Wednesday May 07th 2025, 3:38 am

I keep track of my thoughts in a few different places, depending on what kind of idea it is and what I think I might do with it later. It’s not the most organized system, but it works for me. Over time, I’ve developed a few habits that help me hold onto the random ideas that pop into my head throughout the day. The first place I usually turn to is the Notes app on my phone. It’s quick, easy to open, and always nearby, which is helpful because most of my creative thoughts show up when I’m not trying to think too hard. I’ll be out walking or eating lunch or lying in bed, and suddenly a line will just come to me. I don’t always know what it’s for, but I write it down anyway. That’s where I keep bits of poems, ideas for songs, interesting phrases, or just one-liners that feel like they could lead to something later. I don’t worry about organizing those notes too much. That part is more about catching the thought before it disappears. When I need to do longer or more focused writing, especially for school, I switch over to my laptop. I use Google Docs or Pages, depending on the day. These are the spaces where I try to do more structured work, like essays or other big writing projects. I try to keep things separate—one document for brainstorming, another for the actual draft—but honestly, it can get pretty messy. I’ll start outlining something, then end up freewriting, and then I might bounce back to take a note from another file. Nothing really feels solid until I’m deep into the project. Before that, it’s all a bit chaotic, but it eventually comes together. Even though I try to keep my creative and academic writing in separate places, the two sometimes mix in surprising ways. A lyric idea I scribbled into Notes might work perfectly as an ending sentence in an essay. Or a phrase I wrote in a research brainstorm might sound poetic enough to use in a poem. I actually like that about my process. It reminds me that good ideas don’t always stick to the category I thought they belonged in. So overall, my commonplace habit isn’t very organized or commonplace. It’s more about giving my thoughts somewhere to land. I just want to make space for ideas to show up, grow, and maybe turn into something real later.


