Prompt Crafting

This week’s assignment on prompt engineering made me realize just how much power lies in how we ask AI questions. Even slight changes in wording can produce wildly different results. Prompt engineering is all about precision, creativity, and figuring out how AI understands what we mean. Although I have improved over time, I still encounter the occasional mistake that reminds me that communication, whether between people or between a person and an AI, is always evolving.

The ZDNet article, “Is prompt engineering a ‘fad’ hindering AI progress?” questions the long-term viability of prompt engineering, suggesting it might be a temporary phenomenon in the evolution of AI interaction. Meredith Ringel Morris from Google DeepMind argues that prompts are a “poor user interface” that relies on “pseudo” natural language, leading to inconsistent results and hindering genuine progress in AI development. She advocates for more user-friendly interfaces like gesture-based or emotional-state-mediated systems.

The ITPro article, “This engineering discipline was hailed as the next big thing, but AI has killed it before it even started,” echoes this sentiment, indicating that AI’s increasing ability to generate its own prompts may soon make human prompt engineering obsolete (Battle and Gollapudi). A recent study by VMware Labs discovered that prompts created with AI optimization actually perform better than those crafted by humans. This finding suggests that we might see a shift in AI career paths. Instead of just focusing on creating effective prompts, there could be more emphasis on ensuring ethical use and exploring innovative ways to apply AI technology.

I understand the valid concerns expressed by experts such as Meredith Ringel Morris regarding the limitations of using prompts as a user interface (Ray), as well as the possibility that AI could generate its own prompts more effectively (Battle and Gollapudi). However, I still believe that thoughtfully crafted prompts created by humans add significant value. After all, who knows our needs and intentions better than ourselves?

I gave ChatGPT certain prompts to see how the quality and specificity of the prompt would influence the output—reinforcing just how much human input shapes the effectiveness of AI interactions.
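If you want to try the same experiment yourself, the short Python sketch below shows one way it could be scripted. It is only an illustration, not part of the assignment: it assumes the official OpenAI Python SDK, an API key in your environment, and a placeholder model name of "gpt-4o", and it simply sends a vague prompt and a specific prompt to the same model so the answers can be compared side by side. The two prompts are the vague and specific versions discussed later in this post.

```python
# Rough sketch: compare a vague prompt with a specific one using the
# official OpenAI Python SDK. The model name "gpt-4o" is an assumption;
# substitute whatever model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Discuss body language in public speaking and how women use it."
specific_prompt = (
    "How does Gilbert Austin's theory of gesticulation apply to contemporary "
    "female public speakers like Kamala Harris, Oprah Winfrey, and Beth Moore? "
    "Analyze how their body language and gestures help them assert authority, "
    "connect emotionally, and navigate societal expectations."
)

for label, prompt in [("Vague", vague_prompt), ("Specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    # Print only the first 500 characters so the comparison stays readable.
    print(response.choices[0].message.content[:500])
```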

Task 1

My question to ChatGPT:

“How does Gilbert Austin’s theory of gesticulation apply to contemporary female public speakers like Kamala Harris, Oprah Winfrey, and Beth Moore? Analyze how their body language and gestures help them assert authority, connect emotionally, and navigate societal expectations in their respective speaking environments.”

Is this a good prompt?

Response from ChatGPT:

Yes, this is a good prompt because it is clear, specific, and analytical. Here’s why:

Strengths of the Prompt

  • Focuses on a specific theory – It directly ties the analysis to Gilbert Austin’s theory of gesticulation, providing a clear framework for discussion.
  • Targets contemporary examples – By naming Kamala Harris, Oprah Winfrey, and Beth Moore, it ensures the response is grounded in real-world examples rather than remaining abstract.
  • Encourages analysis – Instead of just asking for a description, the prompt requires examining how body language functions in asserting authority, building emotional connections, and navigating societal expectations.
  • Has a clear scope – It balances historical theory (Austin) with modern application (female public speakers), making it manageable and meaningful.

Possible Refinements

If you wanted to make it even more precise, you could specify:

  • A specific aspect of Austin’s theory (e.g., degrees of force in gestures).
  • A comparison between Austin’s classical framework and modern body language norms.

For example:
“How do Kamala Harris, Oprah Winfrey, and Beth Moore exemplify or challenge Gilbert Austin’s framework of gesticulation in their public speaking? Analyze how their gestures contribute to authority, emotional connection, and societal navigation, considering historical shifts in nonverbal communication.”

This slight revision maintains clarity while prompting even deeper analysis. Would you like me to refine it further based on your paper’s specific focus?

My question: Can you cite an example of a bad prompt?

Response from ChatGPT:

Sure! Here’s an example of a bad prompt based on your topic:

“Discuss body language in public speaking and how women use it.”

Why is this a bad prompt?

  • Too vague – It doesn’t specify which aspects of body language to analyze or reference any theoretical framework.
  • Lacks a clear focus – There’s no mention of Gilbert Austin’s theory, which is central to your research.
  • Overly broad – “Public speaking” and “how women use it” could lead to an unfocused discussion rather than a targeted analysis.

How to Fix It?

A better prompt would include:

  • A clear theoretical foundation (Austin’s gesticulation theory).
  • Specific individuals (Kamala Harris, Oprah Winfrey, Beth Moore).
  • An analytical angle (how their gestures assert authority, connect emotionally, and navigate societal expectations).

Task 2

Instead of providing the prompt myself, I asked ChatGPT to generate both an effective and an ineffective prompt on the topic of AI-powered tools designed to assist non-native English speakers in writing studios.

Here’s what it came up with.

Here’s an example of a good prompt:

“How can AI-powered tools, like grammar and style checkers, support non-native English speakers in improving their writing skills? Analyze the benefits and limitations of these tools in providing real-time feedback and fostering language development in an educational or professional context.”

Here’s an example of a bad prompt:

“Discuss how AI can help people with writing.”
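Out of curiosity, here is how Task 2 might look as a quick script. Again, this is only a rough sketch under the same assumptions as before (the OpenAI Python SDK and a placeholder "gpt-4o" model name), and the wording of the meta-prompt is my own paraphrase rather than the exact request I typed into ChatGPT.

```python
# Sketch of the Task 2 "meta-prompt": asking the model to write its own
# good and bad prompts. Same assumptions as before (OpenAI SDK, "gpt-4o").
from openai import OpenAI

client = OpenAI()

meta_prompt = (
    "Generate two prompts about AI-powered tools that assist non-native English "
    "speakers in writing studios: one effective prompt and one ineffective prompt. "
    "Briefly explain why the first works and the second does not."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": meta_prompt}],
)
print(response.choices[0].message.content)
```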

This exercise highlights the important role that prompt crafting plays in shaping the outcome of AI interactions. The difference between a well-constructed prompt and a vague one is clear: a thoughtful, specific prompt can guide the AI to provide a focused, insightful response, while a poorly formulated prompt can lead to a vague and unfocused answer. It’s a dialogue between me and the machine, where each iteration teaches me more about how to communicate effectively.

My experience with prompt crafting reinforces the idea that learning is an ongoing process. As AI continues to evolve and new techniques emerge, we should make sure we understand AI’s strengths, recognize its limitations, and refine our ability to communicate effectively with it. I truly can’t think of a better way to use AI tools than by giving them specific prompts for what we’re looking for. I’m still waiting for the day when the machine can read my mind, but until that happens, prompting is still our best bet, right?