Package: ggmlR
Type: Package
Title: 'GGML' Tensor Operations for Machine Learning
Version: 0.6.3
Authors@R: c(
    person("Yuri", "Baramykov",
        email = "lbsbmsu@mail.ru",
        role = c("aut", "cre")),
    person("Georgi", "Gerganov",
        role = c("ctb", "cph"),
        comment = "Author of the GGML library"),
    person("Jeffrey", "Quesnelle",
        role = c("ctb", "cph"),
        comment = "Contributor to ops.cpp"),
    person("Bowen", "Peng",
        role = c("ctb", "cph"),
        comment = "Contributor to ops.cpp"),
    person("Mozilla Foundation",
        role = c("ctb", "cph"),
        comment = "Author of llamafile/sgemm.cpp")
    )
Description: Provides 'R' bindings to the 'GGML' tensor library for machine
    learning, designed primarily for 'Vulkan' GPU acceleration with full CPU
    fallback. 'Vulkan' support is auto-detected at build time on Linux (when
    'libvulkan-dev' and 'glslc' are installed) and on Windows (when the
    'Vulkan' SDK is installed and the 'VULKAN_SDK' environment variable is
    set); all operations fall back to the CPU transparently when no GPU is
    available.
    Implements tensor operations, neural network layers, quantization, and a
    'Keras'-like sequential model API for building and training networks.
    Includes 'AdamW' (Adam with decoupled weight decay) and 'SGD' (Stochastic
    Gradient Descent) optimizers with 'MSE' (Mean Squared Error) and
    cross-entropy losses. Also provides a dynamic 'autograd' engine
    ('PyTorch'-style) with
    data-parallel training via 'dp_train()', broadcast arithmetic, 'f16'
    (half-precision) support on 'Vulkan' GPU, and a multi-head attention layer
    for building Transformer architectures. Supports 'ONNX' model import via a
    built-in zero-dependency 'protobuf' parser: load pretrained 'ONNX' models
    from 'PyTorch', 'TensorFlow', or other frameworks and run inference on a
    'Vulkan' GPU or the CPU. Covers more than 40 'ONNX' ops, including
    convolutions, attention primitives, normalization, and shape operations,
    sufficient to
    run real-world models such as 'BERT', 'SqueezeNet', 'Inception v3', and
    'MNIST' out of the box. Serves as backend for 'LLM' (Large Language Model)
    inference via 'llamaR' and Stable Diffusion image generation via 'sd2R'.
    See <https://github.com/ggml-org/ggml> for more information about the
    underlying library.
Depends: R (>= 4.1.0)
License: MIT + file LICENSE
URL: https://github.com/Zabis13/ggmlR
BugReports: https://github.com/Zabis13/ggmlR/issues
Encoding: UTF-8
SystemRequirements: C++17, GNU make, libvulkan-dev, glslc (optional,
        for GPU on Linux), Vulkan SDK (optional, for GPU on
        Windows)
Suggests: testthat (>= 3.0.0)
RoxygenNote: 7.3.3
Config/testthat/edition: 3
NeedsCompilation: yes
Packaged: 2026-03-18 09:58:00 UTC; yuri
Author: Yuri Baramykov [aut, cre],
  Georgi Gerganov [ctb, cph] (Author of the GGML library),
  Jeffrey Quesnelle [ctb, cph] (Contributor to ops.cpp),
  Bowen Peng [ctb, cph] (Contributor to ops.cpp),
  Mozilla Foundation [ctb, cph] (Author of llamafile/sgemm.cpp)
Maintainer: Yuri Baramykov <lbsbmsu@mail.ru>
Repository: CRAN
Date/Publication: 2026-03-18 10:30:13 UTC
Built: R 4.4.3; aarch64-apple-darwin20; 2026-03-18 15:05:17 UTC; unix
Archs: ggmlR.so.dSYM
