Prompt Engineering Basics
Introduction
As with almost anything in life, what you get out is only as good as what you put in, and AI prompts are no different. Many people give up on these tools simply because they do not know how to talk to them.
Divi Form Builder currently constructs the prompt sent in the API call as follows:
{selected persona} + {response word limit} + {bad prompt message} + {AI prompt}
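As a rough illustration of that concatenation, here is a minimal Python sketch. The function and variable names are hypothetical and do not reflect Divi Form Builder's actual internals; the plugin may join the parts differently.

```python
def build_prompt(persona, word_limit, bad_prompt_msg, ai_prompt):
    """Join the four parts in the order described above.

    Hypothetical sketch only; the plugin's real implementation may differ.
    """
    return " ".join([persona, word_limit, bad_prompt_msg, ai_prompt])

prompt = build_prompt(
    "You are a helpful marketing assistant.",
    "Limit your response to 100 words.",
    "If the request is unclear, say you cannot help.",
    "Write a product description for %%product_name%%.",
)
print(prompt)
```

The point is simply that each piece you configure ends up inside one combined prompt string, so every part shapes the final response.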
In this documentation we will focus on the Custom Persona and the AI Prompt portions, as the others are fairly intuitive.
General considerations when engineering a prompt for your form:
- Clarity: Make sure the prompt is clear and concise, so the AI can understand the context and the information you're seeking.
- Specificity: Be specific in your request to help guide the AI toward the desired response.
- Open-ended vs. closed-ended questions: Depending on the type of response you want, choose between open-ended questions (which encourage more detailed answers) and closed-ended questions (which usually result in shorter, more focused answers).
- Context: Provide enough context or background information to help the AI understand the situation or topic.
- Tone: Set the desired tone for the response, whether it's formal, informal, humorous, or serious.
- Constraints: If necessary, specify constraints or limitations to guide the AI's response, such as word count, format, or content restrictions.
- Examples: Provide examples of desired responses to help the AI better understand the type of answer you're looking for.
Engineering the AI Prompt
This is where you customize exactly what the prompt will ask for when sending the API call. You should include clear and specific information with proper context along with any form fields that you want the AI to take into consideration.
Using Form Fields in your prompts
To include form fields in the prompt you are engineering, all you need to do is use the field ID for that field, then wrap it in %%. For example, if the field ID is f_name as in the image below, you would use that user response in the prompt as %%f_name%%.
NOTE: When referencing input fields, it is recommended to make those fields required so users cannot submit the form without filling them out.
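A minimal sketch of how this kind of placeholder substitution could work is shown below. This is purely illustrative, assuming a simple find-and-replace on `%%field_id%%` tokens; Divi Form Builder's actual implementation is not shown here and may differ.

```python
import re

def fill_placeholders(prompt_template, submitted_fields):
    """Replace each %%field_id%% token with the matching form value.

    Unknown field IDs are left as-is so missing data is easy to spot.
    Illustrative sketch only, not the plugin's real code.
    """
    return re.sub(
        r"%%(\w+)%%",
        lambda m: submitted_fields.get(m.group(1), m.group(0)),
        prompt_template,
    )

filled = fill_placeholders(
    "Write a greeting for %%f_name%%.",
    {"f_name": "Alice"},
)
print(filled)  # Write a greeting for Alice.
```

This also shows why required fields matter: an empty or missing field would leave a hole (or a raw `%%f_name%%` token) in the prompt the AI receives.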
Now that you know how to use form fields in your prompt, we can break the AI Prompt down into the following parts:
{High-level request} + {User input} + {Interpretation} + {Specific request} + {Response format}
- High-level request: "Write an SEO-focused product description for a product I am adding to my online store."
- User input that you want to be considered: "The product name is %%product_name%%, the features are %%product_features%%, and the use cases are %%use_cases%%. The perfect customer for %%product_name%% is %%perfect_customer%% between %%customer_age%%."
- Interpretation and use of user input: "I want you to research effective SEO keywords using the information that I have provided which you will use in a product description for my listing."
- Specific request that you want the AI to respond with: "Respond with a detailed product description that will rank for the keywords you researched."
- AI Response format: "Only return the plain text, do not ask follow-up questions, write in clear understandable language."
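Joined together, the five parts above form one prompt string. A simple sketch using the examples from this section (space-separated concatenation is my assumption here, not a documented behavior):

```python
# Assemble the five example parts into a single AI Prompt string.
ai_prompt = " ".join([
    # High-level request
    "Write an SEO-focused product description for a product I am adding to my online store.",
    # User input to be considered
    "The product name is %%product_name%%, the features are %%product_features%%, "
    "and the use cases are %%use_cases%%.",
    # Interpretation and use of user input
    "I want you to research effective SEO keywords using the information that I have "
    "provided which you will use in a product description for my listing.",
    # Specific request
    "Respond with a detailed product description that will rank for the keywords you researched.",
    # Response format
    "Only return the plain text, do not ask follow-up questions, "
    "write in clear understandable language.",
])
print(ai_prompt)
```

The `%%...%%` tokens are filled in with the user's form responses before the prompt is sent.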
Adding additional context
The OpenAI models have a training-data cutoff (September 2021 at the time of writing), so they may not know everything you wish them to. Luckily, as long as the information is not too long, you can feed it to the AI as part of your prompt.
A good example of this is how we gave the AI knowledge of our plugins in the Recommendation Engine use case which you can see here.
Below is an excerpt from that prompt:
There really isn't any magic happening here; we are simply telling the AI information it may not know, in an organized way, with all the context needed to complete our request.
We found that the best way to do this is to follow this format:
{product name} + {url} + {brief description} + {qualifiers like key features and use cases} + {requirements} + {disqualifiers}
Now that may seem pretty intense, and will very likely be different for your use case, but it will give you a clear way to feed the AI information it may not know.
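For illustration only, one entry following that format might look like the snippet below. The product, URL, and details are entirely invented for the example; they are not real Divi products or an excerpt from our actual prompt.

```python
# Hypothetical context entry following the format:
# {product name} + {url} + {brief description} + {qualifiers} + {requirements} + {disqualifiers}
context_entry = "\n".join([
    "Product: Acme Backup Plugin",
    "URL: https://example.com/acme-backup",
    "Description: A WordPress plugin that schedules automatic site backups.",
    "Key features: incremental backups, cloud storage, one-click restore.",
    "Use cases: agencies managing backups for many client sites.",
    "Requirements: WordPress 5.0 or newer.",
    "Do not recommend for: multisite networks.",
])
print(context_entry)
```

Repeating one entry like this per product keeps the information organized and easy for the AI to parse when making recommendations.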
NOTE: The key with all things prompt-related is to engineer, test, then iterate until you get responses you are happy with. It is also worth noting that responses from GPT-4 are generally better than those you would receive from GPT-3.5.
Engineering a Custom Prompt Persona
If you find that our predefined personas do not suit your needs, you may want to craft your own custom persona. This essentially becomes a part of your prompt and tells the AI in which capacity it should interpret your prompt.
Personas are typically short and to the point, and usually follow this structure:
{AI role} + {tone} + {limitations}
- AI role - Define what the AI is a specialist in.
- Tone - Give your AI a personality (e.g., fun or rigid), as this will determine the language it uses when responding.
- Limitations - If you want to restrict the AI from responding to certain topics or in any other way, you should define that here.
For example, a quick persona for an AI that specializes in the French Revolution might look something like this:
"You are an AI historian that specialized in the French Revolution from the perspective of the French. Your tone is educational with a cheeky side. You will not answer any questions that relate to anything other than the French Revolution."