Microsoft: Phi 4 Multimodal Instruct

microsoft/phi-4-multimodal-instruct

Phi-4 Multimodal Instruct is a versatile 5.6B-parameter foundation model that combines advanced reasoning and instruction-following capabilities across both text and visual inputs to generate accurate text outputs. The unified architecture enables efficient, low-latency inference, making it suitable for edge and mobile deployments. Phi-4 Multimodal Instruct supports text inputs in multiple languages, including Arabic, Chinese, English, French, German, Japanese, Spanish, and more, while visual input is optimized primarily for English. It delivers strong performance on multimodal tasks involving mathematical, scientific, and document reasoning, giving developers and enterprises a powerful yet compact model for sophisticated interactive applications. For more information, see the [Phi-4 Multimodal blog post](https://azure.microsoft.com/en-us/blog/empowering-innovation-the-next-generation-of-the-phi-family/).

Pricing

Price per input token: $0.00000007

Price per output token: $0.00000011
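
As a rough illustration of these per-token prices, the sketch below estimates the cost of a single request from its token counts. The token numbers are made-up example values; in practice you would read them from the `usage` field returned by the chat completions API.

// Per-token prices listed above (USD).
const INPUT_PRICE = 0.00000007;
const OUTPUT_PRICE = 0.00000011;

// Hypothetical example counts; normally taken from completion.usage.
const promptTokens = 1_000;
const completionTokens = 500;

const estimatedCost = promptTokens * INPUT_PRICE + completionTokens * OUTPUT_PRICE;
console.log(`Estimated cost: $${estimatedCost.toFixed(6)}`); // ≈ $0.000125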

Usage Example

NOTE: BrainLink is compatible with the OpenAI API, so you can use the OpenAI SDK even with non-OpenAI models.

import OpenAI from "openai";

// Obtain the current user's access token from the BrainLink SDK
// (assumes the BrainLink client library is already loaded in your app).
const userAccessToken = await BrainLink.getUserToken();

// Point the OpenAI SDK at the BrainLink API and authenticate with the user's token.
const openai = new OpenAI({
    baseURL: "https://www.brainlink.dev/api/v1",
    apiKey: userAccessToken,
});

// Request a chat completion from Phi-4 Multimodal Instruct.
const completion = await openai.chat.completions.create({
    model: "microsoft/phi-4-multimodal-instruct",
    messages: [
      { role: "user", content: "Hi! How are you today?" }
    ],
});

console.log(completion.choices[0].message.content);
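
Because the model also accepts visual inputs, an image can be sent alongside text using the OpenAI SDK's image_url content parts. This is a minimal sketch, assuming the same BrainLink-backed client configured above and a placeholder image URL (https://example.com/chart.png) that you would replace with your own.

// Reuse the `openai` client configured above; send text plus an image.
const visionCompletion = await openai.chat.completions.create({
    model: "microsoft/phi-4-multimodal-instruct",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe what you see in this image." },
          { type: "image_url", image_url: { url: "https://example.com/chart.png" } },
        ],
      },
    ],
});

console.log(visionCompletion.choices[0].message.content);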