misc/py-gguf: New port: Read and write ML models in GGUF for GGML

Commit: dc0912208a (parent d6f0c70df2)
Author: Yuri Victorovich
Date: 2025-03-10 07:57:01 -07:00
4 changed files with 44 additions and 0 deletions


misc/Makefile

@@ -437,6 +437,7 @@
     SUBDIR += py-files-to-prompt
     SUBDIR += py-fleep
     SUBDIR += py-fuzzy
+    SUBDIR += py-gguf
     SUBDIR += py-gluoncv
     SUBDIR += py-gluonnlp
     SUBDIR += py-halo

misc/py-gguf/Makefile (new file, 37 lines)

@@ -0,0 +1,37 @@
PORTNAME=	gguf
DISTVERSION=	0.16.0
CATEGORIES=	misc python # machine-learning
#MASTER_SITES=	PYPI # the PYPI version is way behind of llama-cpp
PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}

MAINTAINER=	yuri@FreeBSD.org
COMMENT=	Read and write ML models in GGUF for GGML
WWW=		https://ggml.ai \
		https://github.com/ggml-org/llama.cpp

LICENSE=	MIT
LICENSE_FILE=	${WRKSRC}/LICENSE

BUILD_DEPENDS=	${PYTHON_PKGNAMEPREFIX}poetry-core>=1.0.0:devel/py-poetry-core@${PY_FLAVOR}
RUN_DEPENDS=	${PYNUMPY} \
		${PYTHON_PKGNAMEPREFIX}pyyaml>=5.1:devel/py-pyyaml@${PY_FLAVOR} \
		${PYTHON_PKGNAMEPREFIX}sentencepiece>=0.1.98:textproc/py-sentencepiece@${PY_FLAVOR} \
		${PYTHON_PKGNAMEPREFIX}tqdm>=4.27:misc/py-tqdm@${PY_FLAVOR}

USES=		python shebangfix
USE_PYTHON=	pep517 autoplist pytest

USE_GITHUB=	yes
GH_ACCOUNT=	ggml-org
GH_PROJECT=	llama.cpp
GH_TAGNAME=	b4837
WRKSRC=		${WRKDIR}/${GH_PROJECT}-${GH_TAGNAME}/gguf-py

SHEBANG_GLOB=	*.py

NO_ARCH=	yes

# tests as of 0.16.0: 5 passed in 1.64s

.include <bsd.port.mk>

misc/py-gguf/distinfo (new file, 3 lines)

@@ -0,0 +1,3 @@
TIMESTAMP = 1741639456
SHA256 (ggml-org-llama.cpp-0.16.0-b4837_GH0.tar.gz) = 60587fd5b417ac35d691284e1b117a8c114f10c8d3960494551a4e49338b5e0f
SIZE (ggml-org-llama.cpp-0.16.0-b4837_GH0.tar.gz) = 20796825
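The distinfo entries above record a SHA256 digest and a byte size for the distribution tarball; the ports framework refuses to build if the fetched file does not match both. A minimal sketch of that check, using only the Python standard library (`verify_dist` is a hypothetical helper, not part of the ports framework, and the payload here is a stand-in, not the real tarball):

```python
# Sketch of the distinfo check: compare the SHA256 hex digest and the
# byte size of the fetched file against the recorded values.
import hashlib

def verify_dist(data: bytes, expected_sha256: str, expected_size: int) -> bool:
    # Both the digest and the size must match for the fetch to be accepted.
    return (hashlib.sha256(data).hexdigest() == expected_sha256
            and len(data) == expected_size)

payload = b"example tarball bytes"        # stand-in for the downloaded tarball
digest = hashlib.sha256(payload).hexdigest()
print(verify_dist(payload, digest, len(payload)))
```

Recording the size as well as the digest gives a cheap early failure on truncated downloads before the digest is even computed.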

misc/py-gguf/pkg-descr (new file, 3 lines)

@@ -0,0 +1,3 @@
gguf is a Python module for reading and writing ML models in the GGUF format
for GGML. gguf is a spin-off of the llama.cpp project.
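GGUF files that this module reads and writes begin with a small fixed header. A minimal sketch of that header using only the standard library, assuming the layout described in the GGUF specification (little-endian: 4-byte magic `GGUF`, uint32 format version, uint64 tensor count, uint64 key/value count) — the helper names are illustrative, not the gguf module's API:

```python
# Sketch of the fixed GGUF file header, per the GGUF specification:
# magic "GGUF" | uint32 version | uint64 tensor count | uint64 KV count,
# all little-endian.
import struct

def pack_gguf_header(version: int = 3, n_tensors: int = 0, n_kv: int = 0) -> bytes:
    return b"GGUF" + struct.pack("<IQQ", version, n_tensors, n_kv)

def parse_gguf_header(data: bytes) -> dict:
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

hdr = pack_gguf_header(version=3, n_tensors=2, n_kv=5)
print(parse_gguf_header(hdr))
```

The actual module layers tensor metadata and typed key/value pairs on top of this header; the sketch only covers the leading fixed fields.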