Prompt Engineering - Resource Links
Key Terms:
Generative AI is a broad term for technology that uses AI models to generate new data. This data can include text, images, audio, video, and even code.
ChatGPT is an artificial-intelligence chatbot developed by OpenAI and launched in November 2022.
It is built on top of OpenAI's GPT-3.5 and GPT-4 families of large language models (LLMs) and has been fine-tuned using both supervised and reinforcement learning techniques.
When working with prompts, you will interact with the LLM either directly through a chat interface or programmatically via an API.
Prompt engineering (PE) is the process of communicating effectively with an AI to achieve desired results.
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with large language models (LLMs) to steer their behavior toward desired outcomes without updating the model weights.
It is an empirical science and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
Two key aspects of prompt engineering are a thorough understanding of the LLM and a command of English.
Prompts are pieces of text fed into AI models to get a desired output, for example an image of a certain object in a certain style.
Prompts can be as simple as a single instruction or question, or as complex as huge chunks of text. A well-crafted prompt is the key to great AI-generated content.
A Prompt Engineer is a new kind of technician, skilled at crafting the text prompts required for an AI model to produce consistent outputs (e.g. images, text or code).
Common types of prompts used with current LLMs -
Zero-shot prompts give the model a specific task or question without any explicit training on that specific prompt. Example: "Compose a poem about love and nature."
Few-shot prompts include examples of inputs and outputs (or reference material), so that the model has a pattern for the type of output you're looking for.
Task: Write a story about a robot who falls in love with a human
Examples:
"I, Robot" by Isaac Asimov
"Bicentennial Man" by Isaac Asimov
"The Machine Stops" by E.M. Forster
"Do Androids Dream of Electric Sheep?" by Philip K. Dick
Prompt: Write a story about a robot who falls in love with a human.
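Few-shot prompts are often assembled programmatically. The sketch below is illustrative (the function name and template format are not a standard API); it joins input/output example pairs into a single prompt string before appending the new input.

```python
# Minimal sketch of a few-shot prompt builder.
def build_few_shot_prompt(task, examples, query):
    """Join (input, output) example pairs into one few-shot prompt."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # End with the new input and an open "Output:" for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review.",
    [("I loved this film!", "positive"),
     ("Terribly boring.", "negative")],
    "Not bad at all.",
)
print(prompt)
```

The trailing `Output:` line invites the model to complete the pattern established by the examples.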
Chain-of-thought prompts encourage the model to reason through a problem step by step (often by including worked examples or a cue like "Let's think step by step") before giving its final answer. This is useful for tasks that involve multiple steps. A related multi-step instruction prompt for a story-writing task might be: "Write a story about a dog who goes on an adventure. The dog should meet a new friend and have a few exciting experiences. The story should end with the dog returning home safely."
Explicit prompts provide the LLM with a clear and precise direction. When you need short, factual answers or a specific task completed, like summarizing a piece of writing or answering a multiple-choice question, explicit prompts can help. An example of an explicit prompt would look like “Write a short story about a young girl who discovers a magical key that unlocks a hidden door to another world.”
Conversational prompts are meant to engage the LLM in a more natural, back-and-forth way. Example - “Hey, Bard! Can you tell me a funny joke about cats?”
Context-based prompts give the LLM more information about the situation via domain-specific terms or background information, which helps it come up with more correct and useful answers. Example - “I’m planning a trip to New York next month. Can you give me some recommendations for popular tourist attractions, local restaurants, and off-the-beaten-path spots to visit?”
Role prompts start with the words “Act as a …” to make the LLM behave in a particular role. To simulate a job interview for any position: “Act as a hiring manager. Ask me interview questions for a data science position. I want you to only reply as the interviewer.”
Personality prompts can be created by adding a style and descriptors. Adding a style can help our text get a specific tone, formality, domain of the writer, and more. Example - "Write [topic] in the style of an expert in [field] with 10+ years of experience."
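The style template above can be filled in mechanically. A small sketch (the function and parameter names here are hypothetical, for illustration only):

```python
# Fill the "[topic] in the style of an expert in [field]" template.
def style_prompt(topic, field, years=10):
    return (f"Write {topic} in the style of an expert in {field} "
            f"with {years}+ years of experience.")

print(style_prompt("a blog post about prompt engineering",
                   "artificial intelligence"))
```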
Generated-knowledge prompts first ask the model to produce relevant facts, which are then fed into the next prompt to write a better final output. For example -
Generate 5 facts about “AI will not replace humans”
Use the above facts to write a witty 500-word blog post on why AI will not replace humans. Write in the style of an expert in artificial intelligence with 10+ years of experience. Explain using funny examples.
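The two steps above can be chained in code. In this sketch, `ask_llm` is a hypothetical placeholder for whatever model call you use (an API client, a local model, etc.); the prompt templates mirror the example above.

```python
# Generated-knowledge prompting as two chained steps.
# `ask_llm` is a hypothetical stand-in for a real model call.
def knowledge_prompt(claim, n_facts=5):
    return f'Generate {n_facts} facts about "{claim}"'

def writing_prompt(facts, claim):
    return (f"{facts}\n\nUse the above facts to write a witty 500-word "
            f"blog post on {claim}. Write in the style of an expert in "
            f"artificial intelligence with 10+ years of experience.")

def generated_knowledge(claim, ask_llm):
    facts = ask_llm(knowledge_prompt(claim))      # step 1: generate knowledge
    return ask_llm(writing_prompt(facts, claim))  # step 2: use it to write
```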
Open-ended prompts can help you write creatively, tell a story, or come up with ideas for articles or writings. Example: “Tell me about the impact of technology on society.”
Bias-mitigating prompts can be designed in such a way that they force the LLMs to avoid possible biases in the output. Example - “Please generate a response that presents a balanced and objective view of the following topic: caste-based reservations in India. Consider providing multiple perspectives and avoid favoring any particular group, ideology, or opinion. Focus on presenting factual information, supported by reliable sources, and strive for inclusivity and fairness in your response.”
Code-generation prompts should be specific and clear and provide enough information for the LLM to generate in a specific language. Example: “Write a Python function that takes in a list of integers as input and returns the sum of all the even numbers in the list.”
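A model given the example prompt above would be expected to return something along these lines (one plausible implementation, not the only correct one):

```python
def sum_even(numbers):
    """Return the sum of all even integers in the input list."""
    return sum(n for n in numbers if n % 2 == 0)

print(sum_even([1, 2, 3, 4, 5, 6]))  # → 12
```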
Links:
- OpenAI - Examples
- AI21 Studio Examples
- LearnPrompting.org
- The New Stack - Prompt Engineering: Get LLMs to Generate the Content You Want
- Prompt Engineering Guide
- The OpenAI Cookbook shares example code for accomplishing common tasks with the OpenAI API.
- PromptPerfect is a cutting-edge prompt optimizer.
- PromptBase is a marketplace for buying and selling prompts for DALL·E, GPT, Stable Diffusion + Midjourney.
- Best Practices for Prompt Engineering with GitHub Copilot
- ChatGPT Curations
Limitations of AI models:
Sources - LLMs for the most part cannot accurately cite sources. This is because they do not have access to the Internet, and do not exactly remember where their information came from. They will frequently generate sources that look plausible but are entirely fabricated.
Bias - LLMs are often biased towards generating stereotypical responses. If DALL-E 2 is asked to generate images of a CEO, a builder, or a technology journalist, it will typically return images of men, based on the image-text pairs it saw in its training data.
Unreliable - LLMs may “hallucinate”, make up things about the world and confidently spout nonsense.