Class AskInput
java.lang.Object
de.xima.fc.prompt.ms.impl.service.generic_oai.client.model.AskInput
Represents a question to be asked to a prompt service, consisting of a prompt and optional file inputs. The prompt
service responds with a text answer.
- Since:
- 8.5.0
-
Nested Class Summary
- static class AskInput.Builder
Method Summary
- static AskInput.Builder builder(): Creates a new builder for AskInput.
- files(): Gets the optional file to include in the request.
- frequencyPenalty(): Number between -2.0 and 2.0.
- jsonSchema(): Gets the JSON schema for the response format, as a serialized string.
- jsonSchemaDescription()
- jsonSchemaName(): The name of the JSON schema, to aid the model in understanding the purpose of the schema.
- maxTokens(): Gets the maximum number of new tokens to generate.
- model(): Gets the model to use for the request.
- presencePenalty(): Number between -2.0 and 2.0.
- prompt(): Gets the prompt for the request.
- reasoningEffort(): Constrains effort on reasoning for reasoning models.
- safetyIdentifier(): A stable identifier used to help detect users of your application that may be violating the service's usage policies.
- system(): Gets the optional system prompt to guide the model's behavior.
- temperature(): Gets the temperature for the request.
- topP(): Gets the top-p value for the request.
-
Method Details
-
files
Gets the optional file to include in the request. If given, the model will use the file as context for generating a response.
- Returns:
- The file to include in the request.
-
frequencyPenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- Returns:
- The frequency penalty.
-
jsonSchema
Gets the JSON schema for the response format, as a serialized string.
- Returns:
- The JSON schema, serialized as a string.
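Since jsonSchema() exposes the schema as a plain serialized string rather than a structured object, a minimal sketch of what such a string could look like may help. The schema shape shown here (an object with a single required answer field) is illustrative only and not prescribed by AskInput:

```java
// Sketch: a JSON schema serialized as the kind of plain string that
// jsonSchema() would return. The schema content is illustrative.
class JsonSchemaExample {
    static String answerSchema() {
        return "{"
            + "\"type\":\"object\","
            + "\"properties\":{\"answer\":{\"type\":\"string\"}},"
            + "\"required\":[\"answer\"]"
            + "}";
    }
}
```

Serializing the schema up front keeps the AskInput value object free of any dependency on a particular JSON library.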
-
jsonSchemaDescription
The description of the JSON schema, to aid the model in understanding the purpose of the schema.
- Returns:
- The description of the JSON schema, or null if no schema is set.
-
jsonSchemaName
The name of the JSON schema, to aid the model in understanding the purpose of the schema.
- Returns:
- The name of the JSON schema, or null if no schema is set.
-
maxTokens
Gets the maximum number of new tokens to generate. Will be either null, or greater than or equal to 1. Default is implementation-specific.
- Returns:
- The maximum number of new tokens to generate.
-
model
Gets the model to use for the request.
- Returns:
- The model to use for the request.
-
presencePenalty
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- Returns:
- The presence penalty.
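Both frequencyPenalty() and presencePenalty() document the same bounds of -2.0 to 2.0. A small sketch of a range check matching that documented contract; the helper is hypothetical and not part of AskInput:

```java
// Sketch: range check for the documented penalty bounds (-2.0 to 2.0).
// Applies to both the frequency penalty and the presence penalty.
class PenaltyCheck {
    static boolean isValidPenalty(double value) {
        return value >= -2.0 && value <= 2.0;
    }
}
```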
-
prompt
Gets the prompt for the request. This is required.
- Returns:
- The prompt for the request.
-
reasoningEffort
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- Returns:
- The reasoning effort.
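The supported reasoning-effort values listed above can be captured as a simple set for client-side validation before sending a request. The helper below is a hypothetical sketch, not part of AskInput:

```java
import java.util.Set;

// Sketch: the reasoning-effort values documented for reasoningEffort(),
// collected for a client-side sanity check. Hypothetical helper.
class ReasoningEffortCheck {
    static final Set<String> SUPPORTED =
        Set.of("none", "minimal", "low", "medium", "high", "xhigh");

    static boolean isSupported(String effort) {
        return SUPPORTED.contains(effort);
    }
}
```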
-
safetyIdentifier
A stable identifier used to help detect users of your application that may be violating the service's usage policies. The ID should be a string that uniquely identifies each user. Hashing the username or email address is recommended, in order to avoid sending the service any identifying information.
- Returns:
- The stable identifier.
-
system
Gets the optional system prompt to guide the model's behavior.
- Returns:
- The system prompt to guide the model's behavior.
-
temperature
Gets the temperature for the request. Will be either null, or between 0 and 1. Default is implementation-specific.
- Returns:
- The temperature for the request.
-
topP
Gets the top-p value for the request. Will be either null, or between 0 and 1. Default is implementation-specific.
- Returns:
- The top-p value for the request.
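temperature() and topP() share the same documented contract: either null (use the implementation-specific default) or a value between 0 and 1. A sketch of that check as a hypothetical helper, not part of AskInput:

```java
// Sketch: the documented contract for temperature() and topP():
// either null, or a value between 0 and 1 inclusive.
class SamplingParamCheck {
    static boolean isValidSamplingParam(Double value) {
        // null means "use the implementation-specific default"
        return value == null || (value >= 0.0 && value <= 1.0);
    }
}
```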
-
builder
Creates a new builder for AskInput.
- Returns:
- The new builder.
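A minimal usage sketch of constructing an AskInput via its builder. Only builder(), the getters, and their documented value ranges appear on this page; the builder's setter names below are assumed to mirror the getters and are not confirmed here, and the model name and prompt are illustrative:

```java
// Hypothetical usage; the builder setter names are assumptions
// inferred from the documented getters, not confirmed by this page.
AskInput input = AskInput.builder()
        .prompt("Summarize the attached document in one paragraph.") // required
        .model("gpt-4o")    // model name is illustrative
        .temperature(0.2)   // null, or between 0 and 1
        .maxTokens(512)     // null, or >= 1
        .build();
```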
-