Mirror of https://git.freebsd.org/ports.git, synced 2025-05-05 16:07:38 -04:00
Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages:
- Translating inputs to the provider's completion, embedding, and image_generation endpoints
- Consistent output: text responses are always available at ['choices'][0]['message']['content']
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Tracking spend & setting budgets per project - OpenAI Proxy Server

WWW: https://github.com/BerriAI/litellm
8 lines
202 B
Bash
--- litellm/proxy/start.sh.orig	2024-02-11 03:13:21 UTC
+++ litellm/proxy/start.sh
@@ -1,2 +1,2 @@
-#!/bin/bash
-python3 proxy_cli.py
\ No newline at end of file
+#!/bin/sh
+%%PYTHON_CMD%% proxy_cli.py
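The %%PYTHON_CMD%% placeholder in the patched script is replaced at build time by the ports framework with the path of the selected Python interpreter. A minimal sketch of that substitution, assuming a hypothetical interpreter path (/usr/local/bin/python3.11) and using a plain sed pass in place of the framework's actual substitution machinery:

```shell
# Sketch only: the real substitution is done by the ports framework,
# not by this script. Interpreter path below is an assumed example.
PYTHON_CMD=/usr/local/bin/python3.11

# Recreate the patched start.sh from the diff above
# (printf's %% escape emits a literal %)
printf '#!/bin/sh\n%%%%PYTHON_CMD%%%% proxy_cli.py\n' > start.sh

# Substitute the placeholder with the chosen interpreter
sed "s|%%PYTHON_CMD%%|${PYTHON_CMD}|" start.sh > start.sh.out
cat start.sh.out
```

After substitution, start.sh.out invokes proxy_cli.py with the configured interpreter instead of a hard-coded python3.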