ports/misc/py-litellm/files/patch-litellm_proxy_start.sh
Hiroki Tagato 730828c627 misc/py-litellm: add port: Call all LLM APIs using the OpenAI format
Call all LLM APIs using the OpenAI format [Bedrock, Huggingface,
VertexAI, TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages:
- Translating inputs to the provider's completion, embedding, and
  image_generation endpoints
- Consistent output: text responses are always available at
  ['choices'][0]['message']['content'] (sketched below)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI),
  provided by the Router
- Tracking spend & setting budgets per project (OpenAI Proxy Server)

WWW: https://github.com/BerriAI/litellm
2024-02-12 17:34:14 +09:00
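
Below is a minimal sketch of the OpenAI-format calling convention the
commit message describes, assuming litellm is installed and a provider
credential is configured; the model name and API key are illustrative
placeholders, not part of this port:

    # Minimal usage sketch; the model name and API key below are
    # illustrative assumptions, not part of the port.
    import os
    from litellm import completion

    os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder credential

    # The same call shape works across providers (OpenAI, Azure, etc.).
    response = completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, world"}],
    )

    # Text is always available at ['choices'][0]['message']['content'].
    print(response["choices"][0]["message"]["content"])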

--- litellm/proxy/start.sh.orig 2024-02-11 03:13:21 UTC
+++ litellm/proxy/start.sh
@@ -1,2 +1,2 @@
-#!/bin/bash
-python3 proxy_cli.py
\ No newline at end of file
+#!/bin/sh
+%%PYTHON_CMD%% proxy_cli.py
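
Once the ports framework substitutes %%PYTHON_CMD%% with the port's
Python interpreter, start.sh launches proxy_cli.py, which serves an
OpenAI-compatible endpoint. A hedged sketch of querying such a proxy
with the openai client; the base URL (a default local port of 4000 is
assumed), API key, and model name are assumptions, not part of the
patch:

    # Hypothetical client call against a locally running LiteLLM proxy.
    # base_url, api_key, and model are assumptions about a default setup.
    from openai import OpenAI

    client = OpenAI(base_url="http://127.0.0.1:4000", api_key="anything")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)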