Generative AI with Node.js in Hindi #6 | OpenAI Optional Parameters (Temperature, Max Tokens, Store)

OpenAI optional and important parameters

  1. temperature
  2. max_output_tokens
  3. store and retrieve response with id


What is temperature in LLM models?


Temperature in LLMs (like ChatGPT or GPT-4) controls how creative or random the model's answers are. In the OpenAI API it ranges from 0 to 2.

  - Low temperature (e.g., 0 or 0.2): the model gives more accurate and predictable answers. Good for coding, facts, or interviews.
  - High temperature (e.g., 0.8 or 1): the model becomes more creative and varied. Good for stories, brainstorming, or ideas.
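To build intuition for what temperature does, here is a small conceptual sketch (not OpenAI's internals): the model's raw scores (logits) are divided by the temperature before being turned into probabilities with softmax, so a low temperature sharpens the distribution toward the top token and a high temperature flattens it.

```javascript
// Conceptual sketch of temperature scaling, assuming some example logits.
// Low T -> probability concentrates on the highest-scoring token.
// High T -> probabilities flatten out, so sampling becomes more random.
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const maxL = Math.max(...scaled);               // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5]; // hypothetical scores for three candidate tokens

console.log(softmaxWithTemperature(logits, 0.2)); // top token dominates
console.log(softmaxWithTemperature(logits, 1.0)); // moderately spread
console.log(softmaxWithTemperature(logits, 2.0)); // nearly flat, more varied picks
```

This is why `temperature: 0` is often used for deterministic tasks and values near 1 or above for creative ones.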


What is max_output_tokens?

max_output_tokens means the maximum number of tokens the model can generate in its response.

Think of it like setting a word limit for the AI’s answer.
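Note that tokens are not the same as words. As a rough rule of thumb (an approximation only, not the real tokenizer), English text averages about 4 characters per token; for exact counts you would use a tokenizer library such as tiktoken. A minimal sketch of this heuristic:

```javascript
// Rough heuristic only: English text averages ~4 characters per token.
// The real tokenizer (e.g., tiktoken) can give different counts.
function approxTokenCount(text) {
  return Math.ceil(text.length / 4);
}

console.log(approxTokenCount("How are you?")); // 12 characters -> ~3 tokens
```

This helps you pick a sensible `max_output_tokens` value: setting it too low can cut the model's answer off mid-sentence.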


What is store?

store: true in the OpenAI Responses API saves the response on OpenAI's servers, so you can fetch it again later by its id using client.responses.retrieve().



Code :-

import OpenAI from "openai";
import dotenv from "dotenv";
dotenv.config();

const client = new OpenAI({ apiKey: process.env.openAI_Key });

const prompt = `How are you?`;
const model = "gpt-4o-mini";

const response = await client.responses.create({
  input: [
    { role: "user", content: prompt }
  ],
  model,
  // temperature: 2,        // 0–2; higher = more random/creative
  // max_output_tokens: 16, // caps the length of the reply
  store: true               // save this response so it can be retrieved later by id
});
// console.log(response);

// Retrieve a previously stored response by its id
// (this id comes from response.id of an earlier stored run).
const oldResp = await client.responses.retrieve("resp_68bdea78bfbc8190aa3c72a4911136480c21a007e1b5f649");
console.log(oldResp);