Inference
Inference APIs let you invoke the AI models defined in your project’s manifest with minimal scaffolding.
We’re continually introducing new APIs as we develop alongside our build partners. Let us know what would make the Functions SDK even more powerful for your next use case!
generateText
Invoke a generative AI model with an instruction and prompt, resulting in a text response.
Parameters:
- Model name: the internal name of your model, as defined in your manifest.
- Instruction: a high-level instruction that guides how the model processes the prompt.
- Prompt: the query to respond to, within the context of the given instruction.
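A call might look like the following TypeScript sketch. The parameter names and the model name "text-generator" are assumptions for illustration, not confirmed by this page, and a local stub stands in for the real SDK binding so the call shape is self-contained:

```typescript
// Stub standing in for the real SDK binding (assumed signature:
// generateText(modelName, instruction, prompt) -> string).
function generateText(modelName: string, instruction: string, prompt: string): string {
  // In the real SDK, this invokes the model named `modelName` from your manifest.
  return `[${modelName}] response to: ${prompt}`;
}

const reply = generateText(
  "text-generator",                  // internal model name from your manifest (assumed)
  "Answer in one concise sentence.", // high-level instruction
  "What is a vector embedding?"      // prompt to respond to
);
console.log(reply);
```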
computeClassificationLabels
Invoke a fine-tuned classification model with a text input, resulting in an array of labels and probabilities.
Parameters:
- Model name: the internal name of your model, as defined in your manifest.
- Text: the input to classify among the labels defined during the fine-tuning process.
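The result pairs each label with a probability. The shape below ({ label, probability } objects) and the model name "sentiment-classifier" are assumptions for illustration; a stub stands in for the real SDK binding:

```typescript
// Assumed shape of one classification result.
interface ClassificationLabel {
  label: string;
  probability: number;
}

// Stub standing in for the real SDK binding (assumed signature:
// computeClassificationLabels(modelName, text) -> ClassificationLabel[]).
function computeClassificationLabels(modelName: string, text: string): ClassificationLabel[] {
  // In the real SDK, this invokes the fine-tuned classifier named `modelName`.
  return [
    { label: "positive", probability: 0.93 },
    { label: "negative", probability: 0.07 },
  ];
}

const labels = computeClassificationLabels("sentiment-classifier", "Great product, fast shipping!");
for (const { label, probability } of labels) {
  console.log(`${label}: ${probability}`);
}
```

Probabilities across all labels should sum to 1, so taking the highest-probability entry gives the predicted label.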
embedText
Invoke an embedding model with a text input, resulting in a vector embedding.
Parameters:
- Model name: the internal name of your model, as defined in your manifest.
- Text: the input to embed.
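A sketch of the call, assuming the result is a numeric vector (number[]) — an assumption, as is the model name "text-embedder". A toy deterministic stub stands in for the real SDK binding so the example is self-contained:

```typescript
// Stub standing in for the real SDK binding (assumed signature:
// embedText(modelName, text) -> number[]).
function embedText(modelName: string, text: string): number[] {
  // In the real SDK, this invokes the embedding model named `modelName`.
  // Here, a toy deterministic "embedding" keeps the sketch runnable.
  const dims = 4;
  const vec = new Array<number>(dims).fill(0);
  for (let i = 0; i < text.length; i++) {
    vec[i % dims] += text.charCodeAt(i) / 1000;
  }
  return vec;
}

const embedding = embedText("text-embedder", "hello world");
console.log(embedding.length); // dimensionality of the vector
```

The resulting vector can be stored alongside your data and compared with other embeddings (e.g. by cosine similarity) for semantic search.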