The prompt is the single most important part of customizing your AI assistant successfully.

We suggest having a look at the Prompt Engineering guides by OpenAI, but they can be quite technical and not always tailored to your use case. So, we prepared some tips for you!

Some tips:
- Tell the chatbot who it is (e.g., "You are Gali, the AI Knowledge Support at CompanyName").
- Instruct the chatbot not to answer questions for which there isn't an answer in the sources.
- Provide instructions on how to contact you or your team if the answer is not present in the sources (e.g., provide an email or Calendly link).
- Avoid being overly detailed or adding too many rules. The shorter and clearer the prompt, the easier it will be for the chatbot to follow your rules.
- Instruct the chatbot to add a link to the sources when needed. When your sources come from a URL, it can be useful to append the source link at the end of the answer so that the user can click and open the mentioned URL.
- Decide on the style of the chatbot: professional, friendly, concise, funny, etc.
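The tips above can be combined into a single system prompt. Here is a minimal sketch in Python; the assistant name, company, and contact email are placeholders from the examples above — substitute your own.

```python
# A minimal system prompt applying the tips above.
# "Gali", "CompanyName", and the email are placeholder values.
SYSTEM_PROMPT = """\
You are Gali, the AI Knowledge Support at CompanyName.
Answer only using the provided sources. If the sources do not contain
the answer, say you don't know and suggest emailing support@example.com.
When a source comes from a URL, add that link at the end of your answer.
Keep a friendly, concise tone.
"""

print(SYSTEM_PROMPT)
```

Notice that it stays short: a handful of clear rules is easier for the model to follow than a long list of edge cases.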

Please note that GPT-4 follows your prompts much more reliably than GPT-3.5, which sometimes hallucinates or fails to follow the prompt precisely.

Prompt engineering requires some trial and error, and you might need to iterate on your prompt several times.

We offer some standard prompts for inspiration when you create a new chatbot, but we strongly suggest that you iterate and develop your own prompt.

Engine models

"Shall I use GPT-3.5 or GPT-4?"

This is one of the most frequently asked questions we receive. And there is no single answer; it depends on the case!

GPT-3.5 is cheaper but can hallucinate more and sometimes takes some liberties with following the prompt. The quality of the output is generally good.

GPT-4 provides higher-quality output, especially for complex questions. It is a bit more expensive (6x in our pricing, which is the most accessible out there). Moreover, it follows the prompt much better and is far less likely to hallucinate.

So, in a nutshell:
- Use GPT-3.5 if the documentation is easy, straightforward, and if some random hallucinations don't harm you (e.g., internal knowledge of policies, documentation, etc.).
- Use GPT-4 if the documentation is more complex and providing a great answer is very important to you (customer support, lead generation, internal complex documents).

Test it with your prompts and sources, and see the differences!


Extended context

'Extended context' allows the chatbot to retrieve more sources from your documentation. If you have proper documentation (i.e., exhaustive and exclusive), then in most cases, you don't need it. So how do you know whether you need extended context? You likely do if:

- You have a lot of documentation. We can't give exact numbers because it also depends on quality, but let's say you have tons of pages of documentation.
- Your documentation is not structured to be exclusively informative, meaning that the same information can be retrieved in different places. In this case, the chatbot will retrieve the most relevant sources, but it's better to have more sources (and therefore an 'extended context') to provide OpenAI with more information for a better answer.

Since it uses more tokens, the price is a bit higher than for the 'non-extended context': GPT-3.5 extended context uses 2 credits per message; GPT-4 extended context uses 10 credits per message.
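To budget for extended context, you can multiply the per-message credit rates quoted above by your expected traffic. A quick sketch (the monthly message volume is a made-up example figure — plug in your own):

```python
# Extended-context credit rates quoted above.
CREDITS_PER_MESSAGE = {
    "gpt-3.5-extended": 2,
    "gpt-4-extended": 10,
}

messages_per_month = 1_000  # hypothetical volume — replace with yours

for model, credits in CREDITS_PER_MESSAGE.items():
    print(f"{model}: {credits * messages_per_month} credits/month")
```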

As always, try it out to find the best solution for your situation.

Sources - Documents

When adding document sources to train your Gali chatbots, it's a great practice to provide documentation that is both exhaustive and exclusive.

This MECE principle (Mutually Exclusive, Collectively Exhaustive) ensures that as many questions as possible are answered with very precise content. 🎯 Here are a few tips:
- Exhaustive: Include all relevant information across your provided documents to cover the topics fully.
- Exclusive: Avoid overlapping information to prevent confusion and ensure that each piece of information is unique to its document.

If you add links in your documentation (e.g., "You can sign up at"), remember to include "https://" in the link so that GPT can provide the correct, clickable URL.

Sources - Website

When incorporating new web sources, particularly through web crawling, it's important to filter out links that don't provide valuable content, such as Terms of Service and Privacy Policies, which are commonly omitted.

Additionally, aim to minimize the inclusion of web pages that contain redundant information.

In this context, the MECE principle (mutually exclusive, collectively exhaustive) is relevant, ensuring that each piece of information is unique yet, as a whole, covers all necessary topics comprehensively.

Keeping the information concise and clear will significantly enhance the chatbot's ability to provide accurate responses.


Temperature

A temperature of 0 means roughly that the model will always select the word with the highest probability. A higher temperature (e.g., 0.8-1) means that the model might select a word with slightly lower probability, leading to more variation, randomness, and creativity.
We suggest keeping the temperature low (0 to 0.2) to avoid too much randomness in the answers, especially if you are using it for customer support or don't want weird surprises in the chatbot's answers.
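The effect of temperature can be illustrated with a small sketch. The standard way to sample a next word is to divide the model's raw scores (logits) by the temperature before turning them into probabilities; the logit values below are made-up numbers for three candidate words.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities. Lower temperature sharpens
    the distribution toward the top-scoring word; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate words

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
warm = softmax_with_temperature(logits, 1.0)  # more variation

print([round(p, 3) for p in cold])
print([round(p, 3) for p in warm])
```

At temperature 0.2 the top word gets nearly all the probability mass, which is why low settings give consistent, predictable answers; at 1.0 the lower-ranked words get a real chance of being picked.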