Integrating Local Models with Semantic Kernel via LocalAI

February 6, 2024
This article demonstrates how to integrate Semantic Kernel (SK) with a locally deployed large language model, using Llama 2, the open-source LLM from Meta, served through LocalAI.

SK supports many large models, but the official samples mostly target GPT-3.5 on OpenAI and Azure OpenAI Service. In this article we look at how to wire SK up to a locally deployed open-source model, using the MIT-licensed open-source project LocalAI: https://github.com/go-skynet/LocalAI.
LocalAI is a local inference framework that exposes a RESTful API compatible with the OpenAI API specification. It lets you run LLMs (and other models) locally on consumer-grade hardware or on your own servers, and it supports multiple model families compatible with the ggml format. No GPU is required. LocalAI uses C bindings for speed and builds on llama.cpp, gpt4all, rwkv.cpp, ggml, whisper.cpp for audio transcription, and bert.cpp for embeddings.


Follow the official Getting Started guide to deploy LocalAI. LocalAI exposes the locally deployed model through the OpenAI wire format, so SK can talk to it through its OpenAI connector. All we need to do is point the OpenAI endpoint at LocalAI, which we can accomplish with a custom HttpClientHandler, for example:

internal class OpenAIHttpclientHandler : HttpClientHandler
{
    private readonly KernelSettings _kernelSettings;

    public OpenAIHttpclientHandler(KernelSettings settings)
    {
        this._kernelSettings = settings;
    }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Redirect chat completion requests to the locally deployed LocalAI endpoint.
        if (request.RequestUri!.LocalPath == "/v1/chat/completions")
        {
            UriBuilder uriBuilder = new UriBuilder(request.RequestUri)
            {
                Scheme = this._kernelSettings.Scheme,
                Host = this._kernelSettings.Host,
                Port = this._kernelSettings.Port
            };
            request.RequestUri = uriBuilder.Uri;
        }

        return await base.SendAsync(request, cancellationToken);
    }
}
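The handler above reads the target scheme, host, and port from a KernelSettings object. The exact shape of that class is not shown in this article; the following is a minimal sketch of the members the surrounding code relies on, with property names inferred from how they are used (the default values are assumptions; the real project would load them from configuration):

```csharp
// Hypothetical sketch of the settings class used by the handler and Program.cs.
// Property names mirror how they are referenced in the surrounding code; the
// Azure/OpenAI branches of AddChatCompletionService also reference members such
// as Endpoint, DeploymentId, ServiceId, and OrgId, omitted here for brevity.
internal class KernelSettings
{
    public string ServiceType { get; set; } = "LocalAI";
    public string ModelId { get; set; } = "llama-2-7b-chat";
    public string ApiKey { get; set; } = "sk-local";  // LocalAI ignores the key, but the connector requires one
    public string Scheme { get; set; } = "http";
    public string Host { get; set; } = "localhost";
    public int Port { get; set; } = 8080;             // LocalAI's default port

    // In the real project this would read appsettings.json or environment variables.
    public static KernelSettings LoadSettings() => new KernelSettings();
}
```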

With the groundwork in place, the next step is to assemble all the components so they work together. Open Visual Studio Code and create a C# project named sk-csharp-hello-world, with a Program.cs as follows:

using System.Reflection;
using config;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;
using Plugins;

var kernelSettings = KernelSettings.LoadSettings();
var handler = new OpenAIHttpclientHandler(kernelSettings);

IKernelBuilder builder = Kernel.CreateBuilder();
builder.Services.AddLogging(c => c.SetMinimumLevel(LogLevel.Information).AddDebug());
builder.AddChatCompletionService(kernelSettings, handler);
builder.Plugins.AddFromType<LightPlugin>();
Kernel kernel = builder.Build();

// Load the prompt from an embedded resource
using StreamReader reader = new(Assembly.GetExecutingAssembly().GetManifestResourceStream("prompts.Chat.yaml")!);
KernelFunction prompt = kernel.CreateFunctionFromPromptYaml(
    reader.ReadToEnd(),
    promptTemplateFactory: new HandlebarsPromptTemplateFactory()
);

// Create the chat history
ChatHistory chatMessages = [];

// Loop until we are cancelled
while (true)
{
    // Get user input
    System.Console.Write("User > ");
    chatMessages.AddUserMessage(Console.ReadLine()!);

    // Get the chat completions
    OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
    {
    };
    var result = kernel.InvokeStreamingAsync<StreamingChatMessageContent>(
        prompt,
        arguments: new KernelArguments(openAIPromptExecutionSettings) {
            { "messages", chatMessages }
        });

    // Stream the chat completion to the console, accumulating the full message
    ChatMessageContent? chatMessageContent = null;
    await foreach (var content in result)
    {
        if (chatMessageContent == null)
        {
            System.Console.Write("Assistant > ");
            chatMessageContent = new ChatMessageContent(
                content.Role ?? AuthorRole.Assistant,
                content.ModelId!,
                content.Content!,
                content.InnerContent,
                content.Encoding,
                content.Metadata);
        }
        else
        {
            chatMessageContent.Content += content.Content;
        }
        System.Console.Write(content);
    }
    System.Console.WriteLine();
    chatMessages.Add(chatMessageContent!);
}
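Program.cs loads its prompt from an embedded resource named prompts.Chat.yaml, whose contents are not shown in this article. A minimal Handlebars prompt definition compatible with CreateFunctionFromPromptYaml might look like the following sketch (the template text and descriptions are assumptions; only the `messages` variable name is dictated by the code above):

```yaml
name: Chat
description: A simple chat prompt that replays the conversation history.
template_format: handlebars
template: |
  {{#each messages}}
  <message role="{{Role}}">{{Content}}</message>
  {{/each}}
input_variables:
  - name: messages
    description: The chat history.
```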

First, we import the namespaces needed to make everything work (lines 1 through 9 of Program.cs).

Next, we create an instance of the kernel builder (through a factory method rather than a constructor), which will help us shape our kernel:

IKernelBuilder builder = Kernel.CreateBuilder();

Do we need to know what is happening at every moment? Absolutely! So we add logging support to the kernel with the AddLogging call.

Whether we want to use Microsoft's AI models through Azure OpenAI or OpenAI, or our local model served by LocalAI, we can plug them into the kernel, as the AddChatCompletionService extension method shows:

internal static class ServiceCollectionExtensions
{
/// <summary>
/// Adds a chat completion service to the list. It can be either an OpenAI or Azure OpenAI backend service.
/// </summary>
/// <param name="kernelBuilder"></param>
/// <param name="kernelSettings"></param>
/// <exception cref="ArgumentException"></exception>
internal static IKernelBuilder AddChatCompletionService(this IKernelBuilder kernelBuilder, KernelSettings kernelSettings, HttpClientHandler handler)
{

switch (kernelSettings.ServiceType.ToUpperInvariant())
{
case ServiceTypes.AzureOpenAI:
kernelBuilder = kernelBuilder.AddAzureOpenAIChatCompletion(kernelSettings.DeploymentId, endpoint: kernelSettings.Endpoint, apiKey: kernelSettings.ApiKey, serviceId: kernelSettings.ServiceId, kernelSettings.ModelId);
break;

case ServiceTypes.OpenAI:
kernelBuilder = kernelBuilder.AddOpenAIChatCompletion(modelId: kernelSettings.ModelId, apiKey: kernelSettings.ApiKey, orgId: kernelSettings.OrgId, serviceId: kernelSettings.ServiceId);
break;

case ServiceTypes.HunyuanAI:
kernelBuilder = kernelBuilder.AddOpenAIChatCompletion(modelId: kernelSettings.ModelId, apiKey: kernelSettings.ApiKey, httpClient: new HttpClient(handler));
break;
case ServiceTypes.LocalAI:
kernelBuilder = kernelBuilder.AddOpenAIChatCompletion(modelId: kernelSettings.ModelId, apiKey: kernelSettings.ApiKey, httpClient: new HttpClient(handler));
break;
default:
throw new ArgumentException($"Invalid service type value: {kernelSettings.ServiceType}");
}

return kernelBuilder;
}
}
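Program.cs also registers a plugin with builder.Plugins.AddFromType&lt;LightPlugin&gt;(). That plugin's source is not listed in the article; a typical implementation, following the standard SK pattern of methods annotated with [KernelFunction], might look like this sketch (the light state and method names are assumptions):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

namespace Plugins;

// Hypothetical sketch of the LightPlugin registered in Program.cs.
public class LightPlugin
{
    private bool _isOn = false;

    [KernelFunction, Description("Gets the current state of the light")]
    public string GetState() => _isOn ? "on" : "off";

    [KernelFunction, Description("Turns the light on or off")]
    public string ChangeState(bool newState)
    {
        _isOn = newState;
        return GetState();
    }
}
```

Because the kernel knows about these functions, the model can ask for them to be invoked during the chat (e.g., when the user asks to turn the light on).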

Finally, we start a chat loop that uses SK's streaming invocation, InvokeStreamingAsync, as shown in the while loop of Program.cs. Run the project and you can try out the chat experience.

Sample source code for this article:
https://github.com/geffzhang/sk-csharp-hello-world

References:

  • Deploying LocalAI with Docker for private, local text-to-speech (TTS), speech-to-text, and GPT features | Mr.Pu's blog (putianhui.cn)

  • LocalAI: a self-hosted, community-driven, local OpenAI-API-compatible alternative