Package: madgrad
Title: 'MADGRAD' Method for Stochastic Optimization
Version: 0.2.0
Authors@R: c(
    person("Daniel", "Falbel", email = "dfalbel@gmail.com", role = c("aut", "cre", "cph")),
    person(family = "Posit Software, PBC", role = c("cph")),
    person(family = "MADGRAD original implementation authors.", role = c("cph"))
    )
Description: Implements the Momentumized, Adaptive, Dual Averaged Gradient ('MADGRAD')
  method for stochastic optimization. 'MADGRAD' is a 'best-of-both-worlds' optimizer
  with the generalization performance of stochastic gradient descent and convergence
  at least as fast as that of 'Adam', often faster. A drop-in 'optim_madgrad()'
  implementation for 'torch' is provided, based on Defazio and Jelassi (2021)
  <doi:10.48550/arXiv.2101.11075>.
License: MIT + file LICENSE
Encoding: UTF-8
RoxygenNote: 7.3.3
Imports: torch (>= 0.3.0), rlang
Suggests: testthat (>= 3.0.0)
Config/testthat/edition: 3
NeedsCompilation: no
Packaged: 2026-04-28 13:10:26 UTC; dfalbel
Author: Daniel Falbel [aut, cre, cph],
  Posit Software, PBC [cph],
  MADGRAD original implementation authors. [cph]
Maintainer: Daniel Falbel <dfalbel@gmail.com>
Repository: CRAN
Date/Publication: 2026-04-29 09:10:02 UTC
Built: R 4.7.0; ; 2026-04-29 23:51:48 UTC; windows
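
A minimal usage sketch of the exported optimizer with the 'torch' package follows.
The toy data and the `params`/`lr` call shown are illustrative assumptions based on
the standard 'torch' optimizer interface, not taken from this file:

  library(torch)
  library(madgrad)

  # Toy linear-regression fit using the MADGRAD optimizer described above.
  x <- torch_randn(100, 3)
  true_w <- torch_tensor(c(1, -2, 0.5))
  y <- x$matmul(true_w) + 0.1 * torch_randn(100)

  # Parameter to learn, registered with the optimizer.
  w <- torch_zeros(3, requires_grad = TRUE)
  opt <- optim_madgrad(params = list(w), lr = 0.01)

  for (step in 1:200) {
    opt$zero_grad()
    loss <- nnf_mse_loss(x$matmul(w), y)
    loss$backward()
    opt$step()
  }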
