Name: llama-cpp
Version: b3561
Release: 1.fc40
Group: Unspecified
Size: 1995907
Distribution: Fedora Project
Vendor: Fedora Project
Packager: Fedora Project
Build date: Sat Oct 26 18:05:30 2024
Build host: buildvm-a64-09.iad2.fedoraproject.org
Source RPM: llama-cpp-b3561-1.fc40.src.rpm
Url: https://github.com/ggerganov/llama.cpp
Summary: Port of Facebook's LLaMA model in C/C++
The main goal of llama.cpp is to run the LLaMA model using 4-bit integer quantization on a MacBook.

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked together in an evening. Since then, the project has improved significantly thanks to many contributions. This project is mainly for educational purposes and serves as the main playground for developing new features for the ggml library.
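The low-bit quantization listed above maps floating-point weights to small integers plus a scale factor. The sketch below illustrates the general idea with a single symmetric 4-bit scale per tensor; it is a simplified illustration only, not llama.cpp's actual block formats (which group weights into blocks with per-block scales and pack two 4-bit values per byte).

```python
# Simplified sketch of symmetric 4-bit integer quantization.
# NOT llama.cpp's real Q4 layout - just the underlying idea:
# store ints in [-8, 7] plus one float scale, reconstruct as q * scale.

def quantize_q4(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with one shared scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # avoid scale == 0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_q4(q, scale):
    """Reconstruct approximate float weights from the integers."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.99, -0.07]
q, scale = quantize_q4(weights)
approx = dequantize_q4(q, scale)
# Each reconstructed weight lies within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Trading precision for size this way is what lets a 7B-parameter model fit in a few gigabytes of memory; llama.cpp's real formats refine it with small per-block scales to keep the error local.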
MIT AND Apache-2.0 AND LicenseRef-Fedora-Public-Domain
* Sat Oct 26 2024 Tom Rix <[email protected]> - b3561-1 - Update to b3561
* Tue May 21 2024 Mohammadreza Hendiani <[email protected]> - b2879-7 - removed old file names from .gitignore
* Sun May 19 2024 Tom Rix <[email protected]> - b2879-6 - Remove old sources
* Sun May 19 2024 Tom Rix <[email protected]> - b2879-5 - Include missing sources
* Sat May 18 2024 Mohammadreza Hendiani <[email protected]> - b2879-4 - added build dependencies and added changelog
* Sat May 18 2024 Mohammadreza Hendiani <[email protected]> - b2879-3 - added additional source
* Fri May 17 2024 Mohammadreza Hendiani <[email protected]> - b2879-2 - updated
* Fri May 17 2024 Mohammadreza Hendiani <[email protected]> - b2879-1 - updated and fixed build bugs
* Mon May 13 2024 Mohammadreza Hendiani <[email protected]> - b2861-7 - removed source 1
* Mon May 13 2024 Mohammadreza Hendiani <[email protected]> - b2861-6 - added llama.cpp-b2861.tar.gz to .gitignore
* Mon May 13 2024 Mohammadreza Hendiani <[email protected]> - b2861-5 - fixed source 1 url
* Mon May 13 2024 Mohammadreza Hendiani <[email protected]> - b2861-4 - added tag release as source 1
* Mon May 13 2024 Mohammadreza Hendiani <[email protected]> - b2861-3 - fix source hash
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2861-2 - fix mistake in version
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2861-1 - update to b2861
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2860-2 - added changelog
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2860-1 - bump version to b2860
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2619-5 - upgrade to b2860 tag
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2619-4 - added ccache build dependency because LLAMA_CCACHE=ON by default
* Sun May 12 2024 Mohammadreza Hendiani <[email protected]> - b2619-3 - added numactl as weak dependency
* Thu Apr 11 2024 Tom Rix <[email protected]> - b2619-2 - New sources
* Thu Apr 11 2024 Tomas Tomecek <[email protected]> - b2619-1 - Update to b2619 (required by llama-cpp-python-0.2.60)
* Sat Mar 23 2024 Tom Rix <[email protected]> - b2417-2 - Fix test subpackage
* Sat Mar 23 2024 Tom Rix <[email protected]> - b2417-1 - Initial package
/usr/lib/.build-id
/usr/lib/.build-id/15
/usr/lib/.build-id/15/2694c1a580e5c38daa4ed6a5902167e6bf8587
/usr/lib/.build-id/8d
/usr/lib/.build-id/8d/f49d6c108c60e3f2084e4117f05d5af3d83e9a
/usr/lib64/libggml.so.b3561
/usr/lib64/libllama.so.b3561
/usr/share/licenses/llama-cpp
/usr/share/licenses/llama-cpp/LICENSE
Generated by rpm2html 1.8.1
Fabrice Bellet, Tue Jan 7 03:17:53 2025