llama-pod (llama-cpp-vulkan-stepfun-merge-latest)
Published 2026-02-17 05:07:26 +00:00 by dan
Installation
docker pull gitea.coffee-anon.com/dan/llama-pod:llama-cpp-vulkan-stepfun-merge-latest

Or pin by digest:

docker pull gitea.coffee-anon.com/dan/llama-pod@sha256:f94ffde17f5023e0af36a8fadcd73ea67301aa47024399869d2a312b0e357fda
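A minimal run sketch for the pulled image. The GPU device passthrough, port, volume mount, and model filename are assumptions about a typical host setup, not part of the image's documented interface; the `llama-server` binary and its `-m`/`--host`/`--port` flags come from the layers listed below.

```shell
# Illustrative invocation: /dev/dri passthrough and the model path
# depend on your host; adjust both to match your environment.
docker run --rm -it \
  --device /dev/dri \
  -p 8080:8080 \
  -v /models:/models \
  gitea.coffee-anon.com/dan/llama-pod:llama-cpp-vulkan-stepfun-merge-latest \
  llama-server -m /models/step-3.5-flash-q4_k_s.gguf --host 0.0.0.0 --port 8080
```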
Images
| Digest | OS / Arch | Size |
|---|---|---|
| 4b572a9276 | linux/amd64 | 706 MiB |
Image Layers (linux/amd64)
| Layer command |
|---|
| KIWI 10.2.33 |
| RUN /bin/sh -c microdnf -y --nodocs --setopt=install_weak_deps=0 install bash ca-certificates libatomic libstdc++ libgcc vulkan-loader vulkan-loader-devel vulkaninfo mesa-vulkan-drivers radeontop procps-ng && microdnf clean all && rm -rf /var/cache/dnf/* # buildkit |
| COPY /usr/ /usr/ # buildkit |
| COPY /usr/local/ /usr/local/ # buildkit |
| COPY /opt/llama.cpp/build/bin/rpc-* /usr/local/bin/ # buildkit |
| RUN /bin/sh -c echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf && echo "/usr/local/lib64" >> /etc/ld.so.conf.d/local.conf && ldconfig && cp -n /usr/local/lib/libllama*.so* /usr/lib64/ 2>/dev/null || true && ldconfig # buildkit |
| COPY gguf-vram-estimator.py /usr/local/bin/gguf-vram-estimator.py # buildkit |
| RUN /bin/sh -c chmod +x /usr/local/bin/gguf-vram-estimator.py # buildkit |
| CMD ["/bin/bash"] |
| LABEL maintainer=citizendaniel |
| LABEL description=llama.cpp with Step-3.5 architecture support on Vulkan RADV |
| LABEL step35.support=true |
| LABEL autoparser.pr=https://github.com/ggml-org/llama.cpp/pull/18675 |
| LABEL hf.model=https://huggingface.co/stepfun-ai/Step-3.5-Flash-GGUF-Q4_K_S |
| COPY /staging/usr/bin/llama-* /usr/bin/ # buildkit |
| COPY /staging/usr/lib64/libllama* /usr/lib64/ # buildkit |
| COPY /staging/usr/lib64/libggml* /usr/lib64/ # buildkit |
| RUN /bin/sh -c ldconfig # buildkit |
| RUN /bin/sh -c echo "=== Step-3.5 + Vulkan overlay verification ===" && ls -la /usr/bin/llama-server && ls -la /usr/lib64/libllama* 2>/dev/null && ls -la /usr/lib64/libggml* 2>/dev/null && echo "=== Binary check ===" && llama-server --version 2>&1 || true # buildkit |
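The final layer's verification step can be repeated against a pulled image. This is a sketch assuming `llama-server` and `vulkaninfo` are on the PATH, as the layers above install them; `vulkaninfo --summary` is a standard flag of the Vulkan tools package.

```shell
# Re-run the image's own sanity checks from the host.
docker run --rm gitea.coffee-anon.com/dan/llama-pod:llama-cpp-vulkan-stepfun-merge-latest \
  /bin/sh -c 'llama-server --version && vulkaninfo --summary'
```

Note that `vulkaninfo` will only report a usable device if the container is started with GPU device access (e.g. `--device /dev/dri` on a RADV host).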
Labels
| Key | Value |
|---|---|
| autoparser.pr | https://github.com/ggml-org/llama.cpp/pull/18675 |
| description | llama.cpp with Step-3.5 architecture support on Vulkan RADV |
| hf.model | https://huggingface.co/stepfun-ai/Step-3.5-Flash-GGUF-Q4_K_S |
| io.buildah.version | 1.42.2 |
| license | MIT |
| maintainer | citizendaniel |
| name | fedora-minimal |
| org.opencontainers.image.license | MIT |
| org.opencontainers.image.licenses | MIT |
| org.opencontainers.image.name | fedora-minimal |
| org.opencontainers.image.title | fedora-minimal |
| org.opencontainers.image.url | https://fedoraproject.org/ |
| org.opencontainers.image.vendor | Fedora Project |
| org.opencontainers.image.version | 43 |
| step35.support | true |
| vendor | Fedora Project |
| version | 43 |
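Individual labels from the table above can be read back with the standard `docker inspect` template syntax, for example:

```shell
# Query a single label from the pulled image.
docker inspect \
  --format '{{ index .Config.Labels "step35.support" }}' \
  gitea.coffee-anon.com/dan/llama-pod:llama-cpp-vulkan-stepfun-merge-latest
```

This prints `true` for this image, matching the `step35.support` row above.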
Details
| Key | Value |
|---|---|
| Published | 2026-02-17 05:07:26 +00:00 |
| Versions | 1 |
| Type | Container (OCI / Docker) |
| Tag | llama-cpp-vulkan-stepfun-merge-latest |