How to Configure OpenAI Temperature Parameter

This OpenAI API article explains how to use the “temperature” parameter when working with GPT models on the platform.

The temperature parameter controls how random the generated output will be. It accepts values from 0 to 2. A higher value (for example, 0.8) makes the results more creative and varied, while a lower value (0 to 0.2) makes them more focused and deterministic.
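Conceptually, temperature rescales the model’s next-token probabilities before sampling: the higher the temperature, the flatter the distribution, so less likely tokens are picked more often. The short Python sketch below only illustrates that idea; the logit values are made up, and this is not OpenAI’s actual sampling code.

import math

def softmax_with_temperature(logits, temperature):
    # Divide each logit by the temperature before normalizing.
    # Higher temperature flattens the distribution; lower temperature sharpens it.
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 0.2))  # sharply peaked: almost always the top token
print(softmax_with_temperature(logits, 0.8))  # flatter: noticeably more variety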

API Request Example

We control the temperature when sending API requests to the OpenAI platform (GPT-3 and GPT-4 models). In the code below, I’m using the official openai Python library and setting the parameter to 2, the maximum value.

import os

import openai  # official openai Python library (pre-1.0 interface)

# Read the API key from an environment variable instead of hard-coding it.
openai.api_key = os.getenv("OPENAI_API_KEY")

# temperature=2 is the maximum allowed value, so the output is as random as possible.
completion = openai.ChatCompletion.create(
  model="gpt-4",
  messages=[
    {"role": "user", "content": "Something"},
  ],
  temperature=2,
)
print(completion.choices[0].message.content)

To make the output more focused, I can use a value of 0.5. If you use Postman to make REST API calls to OpenAI (for example, to the https://api.openai.com/v1/completions endpoint), you set the temperature value in the request’s JSON body, as shown below.

{
  "model": "davinci",
  "prompt": "something",
  "max_tokens": 100,
  "temperature": 0.5
}
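If you prefer a script over Postman, the same request can be sent with any HTTP client. The sketch below uses the Python requests package and assumes the /v1/completions endpoint and an API key in the OPENAI_API_KEY environment variable; the model name is only an example and may need to be replaced with one available to your account.

import os

import requests

# POST the same JSON body that Postman would send to the completions endpoint.
response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={
        "Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
        "Content-Type": "application/json",
    },
    json={
        "model": "davinci",  # example model name; replace as needed
        "prompt": "something",
        "max_tokens": 100,
        "temperature": 0.5,  # lower temperature: more focused output
    },
    timeout=30,
)
print(response.json())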

For more OpenAI articles, visit the category page.

