mfrmr

GitHub R-CMD-check pkgdown test-coverage License: MIT

Native R package for many-facet Rasch model (MFRM) estimation, diagnostics, and reporting workflows.

Start here first

If you are new to mfrmr, use this route first and ignore the longer feature lists below until it works end to end.

library(mfrmr)
toy <- load_mfrmr_data("example_core")

fit <- fit_mfrm(
  toy,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "MML",
  model = "RSM",
  quad_points = 15
)

diag <- diagnose_mfrm(
  fit,
  diagnostic_mode = "both",
  residual_pca = "none"
)

summary(fit)
summary(diag)
plot_qc_dashboard(fit, diagnostics = diag, preset = "publication")
chk <- reporting_checklist(fit, diagnostics = diag)

If that route works, the next natural step is:

What this package is for

mfrmr is designed around four package-native routes:

If you want the shortest possible recommendation:

Minimum input contract

mfrmr expects long-format rating data: one row per observed rating.

Minimal pattern:

names(df)
# [1] "Person" "Rater" "Criterion" "Score"

fit <- fit_mfrm(
  data = df,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "MML",
  model = "RSM"
)

Main capabilities

Core analysis:

Reporting and QA:

Linking, fairness, and advanced review:

Advanced or compatibility scope:

Latent regression status

mfrmr now includes a first-version latent-regression branch inside fit_mfrm(). Activate it with method = "MML", population_formula = ~ ..., and one-row-per-person person_data.

Current supported boundary:

What to inspect after fitting:

Beginner quick start:

# response data: one row per rating event
# person data: one row per person, with the same Person IDs
person_tbl <- unique(dat[c("Person", "Grade", "Group")])

fit_pop <- fit_mfrm(
  data = dat,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "MML",
  model = "RSM",
  population_formula = ~ Grade + Group,
  person_data = person_tbl,
  population_policy = "error"
)

s_pop <- summary(fit_pop)
s_pop$population_overview      # posterior basis, residual variance, omissions
s_pop$population_coefficients  # latent-regression coefficients
s_pop$population_coding        # categorical levels / contrasts / encoded columns
s_pop$caveats                  # complete-case and category-support warnings

Use population_policy = "omit" only when complete-case removal is intended, then report the omitted-person and omitted-row counts. Coefficients in population_coefficients are conditional-normal population-model parameters, not a post hoc regression on EAP/MLE scores.
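As a minimal sketch of that reporting step, assuming a small hypothetical person_data table, the omitted-person count under complete-case removal can be tallied in base R before (or alongside) the fit:

```r
# Illustrative base-R sketch (not an mfrmr helper): counting the
# complete-case omissions you would report under population_policy = "omit".
# The column names are assumptions for this example.
person_tbl <- data.frame(
  Person = c("P01", "P02", "P03", "P04"),
  Grade  = c(10, 11, NA, 10),
  Group  = c("A", "B", "A", NA)
)

complete    <- complete.cases(person_tbl[c("Grade", "Group")])
n_omitted   <- sum(!complete)
omitted_ids <- person_tbl$Person[!complete]

n_omitted    # 2 persons dropped from the population model
omitted_ids  # "P03" "P04"
```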

Reference checks for this branch:

bench_pop <- reference_case_benchmark(
  cases = c("synthetic_latent_regression_omit", "synthetic_conquest_overlap_dry_run"),
  method = "MML",
  model = "RSM",
  quad_points = 5,
  maxit = 30
)

summary(bench_pop)
bench_pop$population_policy_checks  # complete-case omission check
bench_pop$conquest_overlap_checks   # package-side ConQuest preparation check

The ConQuest preparation case checks only package-side preparation. It does not run ConQuest. When actual ConQuest output tables are available for the documented overlap case, use the external-table comparison helpers:

bundle <- build_conquest_overlap_bundle(fit_overlap, output_dir = "conquest_overlap")
normalized <- normalize_conquest_overlap_files(
  population_file = "conquest_population.csv",
  item_file = "conquest_items.csv",
  case_file = "conquest_cases.csv"
)
audit <- audit_conquest_overlap(bundle, normalized)
summary(audit)$summary
audit$attention_items

Treat this as a scoped comparison, not as full ConQuest numerical equivalence. ConQuest must be run separately and the extracted tables must be reviewed.

Current non-goals for this branch:

This should be described as first-version overlap with the ConQuest latent-regression framework, not as ConQuest numerical equivalence.

predict_mfrm_population() remains a simulation-based scenario-forecasting helper. It should not be described as the latent-regression estimator itself.

Bounded GPCM support

GPCM is now part of the supported core package scope, but only within a bounded route. Use gpcm_capability_matrix() to see the current release boundary in one place.

The unsupported helpers depend on score-side or planning assumptions that are validated for the Rasch-family route but not yet generalized to bounded GPCM.

Equal weighting and when to prefer Rasch-MFRM

mfrmr treats RSM / PCM as the package’s equal-weighting reference models. In that Rasch-family route, category discrimination is fixed, so the operational scoring contract does not let the psychometric model reweight some item-facet combinations more heavily than others.

Bounded GPCM serves a different purpose. It allows estimated slopes, so some observed design cells become more influential than others through discrimination-based reweighting. This often improves fit, but a better-fitting GPCM does not automatically make it the preferred operational model.
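To make the reweighting idea concrete, here is a schematic base-R sketch of category probabilities for a single design cell. gpcm_prob is an illustrative function, not part of the mfrmr API; a slope of a = 1 recovers the Rasch-family (PCM) case.

```r
# GPCM category probabilities for one cell with thresholds `b` and slope `a`.
gpcm_prob <- function(theta, a, b) {
  # cumulative numerators: exp of the running sum of a * (theta - b_k)
  z <- c(0, cumsum(a * (theta - b)))
  exp(z) / sum(exp(z))
}

b <- c(-0.5, 0.5)          # two thresholds -> three categories

p_rasch <- gpcm_prob(theta = 1, a = 1, b = b)  # fixed discrimination
p_gpcm  <- gpcm_prob(theta = 1, a = 2, b = b)  # estimated, steeper slope

# The steeper slope concentrates probability mass in the top category,
# which is the discrimination-based reweighting discussed above.
round(p_rasch, 3)
round(p_gpcm, 3)
```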

The package therefore recommends:

One more distinction matters. The weight = argument in fit_mfrm() is for an observation-weight column. That is different from the equal-weighting question discussed above. Observation weights adjust how rating events enter estimation and summaries; they do not turn a Rasch-family fit into a discrimination-based model.
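A tiny base-R illustration (not an mfrmr call) of what an observation-weight column does to a summary:

```r
# Observation weights rescale how rating events enter a summary;
# they do not change the measurement model itself.
scores  <- c(2, 3, 4)
weights <- c(1, 1, 2)   # third rating event counted twice

unweighted <- mean(scores)                          # 3
weighted   <- sum(weights * scores) / sum(weights)  # 3.25
```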

Documentation map

The README is only the shortest map. The package now has guide-style help pages for the main workflows.

Companion vignettes:

Installation

# GitHub (development version)
if (!requireNamespace("remotes", quietly = TRUE)) install.packages("remotes")
remotes::install_github("Ryuya-dot-com/R_package_mfrmr", build_vignettes = TRUE)

# CRAN (after release)
# install.packages("mfrmr")

If you install from GitHub without build_vignettes = TRUE, use the guide-style help pages included in the package, for example:

Installed vignettes:

browseVignettes("mfrmr")

Core workflow

fit_mfrm() --> diagnose_mfrm() --> reporting / advanced analysis
                    |
                    +--> analyze_residual_pca()
                    +--> estimate_bias()
                    +--> analyze_dff()
                    +--> compare_mfrm()
                    +--> run_qc_pipeline()
                    +--> anchor_to_baseline() / detect_anchor_drift()
  1. Fit model: fit_mfrm()
  2. Diagnostics: diagnose_mfrm()
  3. Optional residual PCA: analyze_residual_pca()
  4. Optional interaction bias: estimate_bias()
  5. Differential-functioning analysis: analyze_dff(), dif_report()
  6. Model comparison: compare_mfrm()
  7. Reporting: apa_table(), build_apa_outputs(), build_visual_summaries()
  8. Quality control: run_qc_pipeline()
  9. Anchoring & linking: anchor_to_baseline(), detect_anchor_drift(), build_equating_chain()
  10. Compatibility-contract audit when needed: facets_parity_report(); this audits package output contracts, not external FACETS numerical equivalence
  11. Reproducible inspection: summary() and plot(..., draw = FALSE)

Choose a route

Use the route that matches the question you are trying to answer.

Question -> Recommended route
Can I fit the model and get a first-pass diagnosis quickly? fit_mfrm() -> diagnose_mfrm() -> plot_qc_dashboard()
Which reporting elements are draft-complete, and with what caveats? diagnose_mfrm() -> precision_audit_report() -> reporting_checklist()
Which tables and prose should I adapt into a manuscript draft? reporting_checklist() -> build_apa_outputs() -> apa_table()
Is the design connected well enough for a common scale? subset_connectivity_report() -> plot(..., type = "design_matrix")
Do I need to place a new administration onto a baseline scale? make_anchor_table() -> anchor_to_baseline()
Are common elements stable across separately fitted forms or waves? fit each wave -> detect_anchor_drift() -> build_equating_chain()
Are some facet levels functioning differently across groups? subset_connectivity_report() -> analyze_dff() -> dif_report()
Do I need old fixed-width or wrapper-style outputs? run_mfrm_facets() or build_fixed_reports() only at the compatibility boundary

Additional routes

After the canonical MML route with diagnostic_mode = "both" above, these are the next shortest specialized routes.

Shared setup used by the snippets below:

library(mfrmr)
toy <- load_mfrmr_data("example_core")

1. Quick first pass

fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score",
                method = "MML", model = "RSM", quad_points = 7)
diag <- diagnose_mfrm(fit, diagnostic_mode = "both", residual_pca = "none")
summary(diag)
plot_qc_dashboard(fit, diagnostics = diag, preset = "publication")

1b. Preferred MML + marginal-fit route

fit_final <- fit_mfrm(
  toy,
  "Person",
  c("Rater", "Criterion"),
  "Score",
  method = "MML",
  model = "RSM",
  quad_points = 15
)

diag_final <- diagnose_mfrm(
  fit_final,
  diagnostic_mode = "both",
  residual_pca = "none"
)

summary(fit_final)
summary(diag_final)

For RSM / PCM, this is the recommended final-analysis route when you want legacy continuity plus the newer strict marginal screening path.

2. Design and linking check

diag <- diagnose_mfrm(fit, residual_pca = "none")
sc <- subset_connectivity_report(fit, diagnostics = diag)
summary(sc)
plot(sc, type = "design_matrix", preset = "publication")
plot_wright_unified(fit, preset = "publication", show_thresholds = TRUE)

3. Manuscript and reporting check

# Add `bias_results = ...` if you want the bias/reporting layer included.
chk <- reporting_checklist(fit, diagnostics = diag)
apa <- build_apa_outputs(fit, diag)

chk$checklist[, c("Section", "Item", "DraftReady", "NextAction")]
cat(apa$report_text)

Estimation choices

The package treats MML and JML differently on purpose.

Typical pattern:

toy <- load_mfrmr_data("example_core")

fit_final <- fit_mfrm(
  toy, "Person", c("Rater", "Criterion"), "Score",
  method = "MML", model = "RSM", quad_points = 15
)

diag_final <- diagnose_mfrm(
  fit_final,
  diagnostic_mode = "both",
  residual_pca = "none"
)

precision_audit_report(fit_final, diagnostics = diag_final)

Mathematical note for expert users

The package help and the mfrmr-mml-and-marginal-fit vignette carry the full expert-facing definitions, but the two key ideas are:

  1. MML optimizes the marginal likelihood

\[ L(\xi) = \prod_{n=1}^{N} \sum_{q=1}^{Q} w_q \, p(\mathbf{x}_n \mid \theta_q, \xi), \]

with Gauss-Hermite nodes \(\theta_q\) and weights \(w_q\), and then forms posterior summaries such as EAP scores from the same latent-integrated bundle.
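As a numeric sketch of this marginal likelihood, the base-R snippet below uses an equally spaced grid with normalized normal-density weights in place of true Gauss-Hermite nodes (a common simplification; the package's own quadrature may differ), for one person and a toy set of dichotomous cells:

```r
theta_q <- seq(-4, 4, length.out = 15)         # quadrature nodes
w_q <- dnorm(theta_q); w_q <- w_q / sum(w_q)   # normalized weights

# Rasch probability of a positive response at difficulty d
p_item <- function(theta, d) plogis(theta - d)

# One person's responses to three dichotomous cells
x <- c(1, 0, 1)
d <- c(-0.5, 0.0, 0.5)

# Conditional likelihood at each node, then the weighted (marginal) sum
lik_q <- sapply(theta_q, function(t)
  prod(p_item(t, d)^x * (1 - p_item(t, d))^(1 - x)))
marginal_lik <- sum(w_q * lik_q)

# EAP score from the same latent-integrated bundle
post <- w_q * lik_q / marginal_lik
eap  <- sum(post * theta_q)
```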

  2. The strict marginal branch compares observed summaries with posterior-integrated expectations rather than EAP plug-in residuals. In schematic form, for a flagged cell or grouped summary \(g\),

\[ \widehat{E}(N_{gc}) = \sum_{n=1}^{N} \sum_{q=1}^{Q} \pi(\theta_q \mid \mathbf{x}_n, \xi) \, I(n \in g) \, P(X_n = c \mid \theta_q, \xi). \]

This is why legacy residual diagnostics and marginal_fit diagnostics are reported separately in mfrmr.
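Continuing the same grid-quadrature simplification, a schematic base-R sketch of the posterior-integrated expectation for one person and one three-category cell (cat_prob is illustrative, not an mfrmr function):

```r
theta_q <- seq(-4, 4, length.out = 15)
w_q <- dnorm(theta_q); w_q <- w_q / sum(w_q)

# Rating-scale-style category probabilities with thresholds `tau`
# (the slope-1, Rasch-family case)
cat_prob <- function(theta, tau) {
  z <- c(0, cumsum(theta - tau))
  exp(z) / sum(exp(z))
}

tau <- c(-0.5, 0.5)

# Posterior over nodes given an observed top-category response,
# instead of plugging in an EAP point estimate
lik_q <- sapply(theta_q, function(t) cat_prob(t, tau)[3])
post  <- w_q * lik_q / sum(w_q * lik_q)

# Posterior-integrated expected probability of each category
expected <- colSums(post * t(sapply(theta_q, cat_prob, tau = tau)))
```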

In many-facet reporting, keep facet-level reliability and inter-rater agreement separate. The former summarizes separation/precision on the latent scale; the latter summarizes observed agreement across matched rater contexts.
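A toy base-R contrast of the two quantities (all numbers invented for illustration):

```r
# Separation-style reliability on the latent scale:
# (observed variance - mean error variance) / observed variance
measures <- c(-1.2, -0.4, 0.1, 0.9, 1.5)    # facet-level estimates
se       <- c(0.30, 0.28, 0.31, 0.29, 0.30) # their standard errors

obs_var     <- var(measures)
err_var     <- mean(se^2)
reliability <- max(obs_var - err_var, 0) / obs_var

# Observed exact inter-rater agreement across matched contexts
ratings_a <- c(3, 2, 4, 3, 2)   # same cases, two raters
ratings_b <- c(3, 3, 4, 2, 2)
exact_agreement <- mean(ratings_a == ratings_b)

reliability      # precision/separation on the latent scale
exact_agreement  # observed agreement; a different quantity
```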

Literature positioning:

For the longer expert note, run:

vignette("mfrmr-mml-and-marginal-fit", package = "mfrmr")

Help-page navigation

Core analysis help pages include practical sections such as:

Recommended entry points:

Utility pages such as ?export_mfrm, ?as.data.frame.mfrm_fit, and ?plot_bubble also include lightweight export / plotting examples.

Performance tips

Documentation datasets

Quick start

library(mfrmr)

data("mfrmr_example_core", package = "mfrmr")
df <- mfrmr_example_core

# Fit
fit <- fit_mfrm(
  data = df,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "MML",
  model = "RSM",
  quad_points = 7
)
summary(fit)

# Fast diagnostics first
diag <- diagnose_mfrm(fit, residual_pca = "none")
summary(diag)

# APA outputs
apa <- build_apa_outputs(fit, diag)
cat(apa$report_text)

# QC pipeline reuses the same diagnostics object
qc <- run_qc_pipeline(fit, diagnostics = diag)
summary(qc)

Main objects you will reuse

Most package workflows reuse a small set of objects rather than recomputing everything from scratch.

Typical reuse pattern:

toy <- load_mfrmr_data("example_core")

fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score",
                method = "MML", model = "RSM", quad_points = 7)
diag <- diagnose_mfrm(fit, residual_pca = "none")
chk <- reporting_checklist(fit, diagnostics = diag)
apa <- build_apa_outputs(fit, diag)
sc <- subset_connectivity_report(fit, diagnostics = diag)

Reporting and APA route

If your endpoint is a manuscript or internal report, use the package-native reporting contract rather than composing text by hand.

diag <- diagnose_mfrm(fit, residual_pca = "none")

# Add `bias_results = ...` to either helper when bias screening should
# appear in the checklist or draft text.
chk <- reporting_checklist(fit, diagnostics = diag)
chk$checklist[, c("Section", "Item", "DraftReady", "Priority", "NextAction")]

apa <- build_apa_outputs(
  fit,
  diag,
  context = list(
    assessment = "Writing assessment",
    setting = "Local scoring study",
    scale_desc = "0-4 rubric scale",
    rater_facet = "Rater"
  )
)

cat(apa$report_text)
apa$section_map[, c("SectionId", "Available", "Heading")]

tbl_fit <- apa_table(fit, which = "summary")
tbl_reliability <- apa_table(fit, which = "reliability", diagnostics = diag)

For a question-based map of the reporting API, see help("mfrmr_reporting_and_apa", package = "mfrmr").

Visualization recipes

If you want a question-based map of the plotting API, see help("mfrmr_visual_diagnostics", package = "mfrmr").

# Which plotting routes are supported for this run?
chk <- reporting_checklist(fit, diagnostics = diag)
chk$visual_scope

# Where should each figure go in a paper or appendix?
visual_reporting_template("manuscript")[, c("FigureFamily", "CaptionSkeleton")]
visual_reporting_template("surface")[, c("FigureFamily", "ResultsWording")]

# Wright map with shared targeting view
plot(fit, type = "wright", preset = "publication", show_ci = TRUE)

# Pathway map with dominant-category strips
plot(fit, type = "pathway", preset = "publication")

# Category probability curves
plot(fit, type = "ccc", preset = "publication")

# 3D-ready category probability surface payload.
# This is exploratory data for downstream rendering, not a default APA figure.
surface <- plot(fit, type = "ccc_surface", draw = FALSE)
head(surface$data$surface)
surface$data$category_support

# Linking design matrix
sc <- subset_connectivity_report(fit, diagnostics = diag)
plot(sc, type = "design_matrix", preset = "publication")

# Unexpected responses
plot_unexpected(fit, diagnostics = diag, preset = "publication")

# Displacement screening
plot_displacement(fit, diagnostics = diag, preset = "publication")

# Facet variability overview
plot_facets_chisq(fit, diagnostics = diag, preset = "publication")

# Residual PCA scree and loadings
pca <- analyze_residual_pca(diag, mode = "both")
plot_residual_pca(pca, mode = "overall", plot_type = "scree", preset = "publication")

# Bias screening profile
bias <- estimate_bias(fit, diag, facet_a = "Rater", facet_b = "Criterion")
plot_bias_interaction(bias, plot = "facet_profile", preset = "publication")

# One-page QC screen
plot_qc_dashboard(fit, diagnostics = diag, preset = "publication")

Linking, anchors, and DFF route

Use this route when your design spans forms, waves, or subgroup comparisons.

data("mfrmr_example_bias", package = "mfrmr")
df_bias <- mfrmr_example_bias
fit_bias <- fit_mfrm(df_bias, "Person", c("Rater", "Criterion"), "Score",
                     method = "MML", model = "RSM", quad_points = 7)
diag_bias <- diagnose_mfrm(fit_bias, residual_pca = "none")

# Connectivity and design coverage
sc <- subset_connectivity_report(fit_bias, diagnostics = diag_bias)
summary(sc)
plot(sc, type = "design_matrix", preset = "publication")

# Anchor export from a baseline fit
anchors <- make_anchor_table(fit_bias, facets = "Criterion")
head(anchors)

# Differential facet functioning
dff <- analyze_dff(
  fit_bias,
  diag_bias,
  facet = "Criterion",
  group = "Group",
  data = df_bias,
  method = "residual"
)
dff$summary
plot_dif_heatmap(dff)

For linking-specific guidance, see help("mfrmr_linking_and_dff", package = "mfrmr").

DFF / DIF analysis

data("mfrmr_example_bias", package = "mfrmr")
df_bias <- mfrmr_example_bias
fit_bias <- fit_mfrm(df_bias, "Person", c("Rater", "Criterion"), "Score",
                     method = "MML", model = "RSM", quad_points = 7)
diag_bias <- diagnose_mfrm(fit_bias, residual_pca = "none")

dff <- analyze_dff(fit_bias, diag_bias, facet = "Criterion",
                   group = "Group", data = df_bias, method = "residual")
dff$dif_table
dff$summary

# Cell-level interaction table
dit <- dif_interaction_table(fit_bias, diag_bias, facet = "Criterion",
                             group = "Group", data = df_bias)

# Visual, narrative, and bias reports
plot_dif_heatmap(dff)
dr <- dif_report(dff)
cat(dr$narrative)

# Refit-based contrasts can support ETS labels only when subgroup linking is adequate
dff_refit <- analyze_dff(fit_bias, diag_bias, facet = "Criterion",
                         group = "Group", data = df_bias, method = "refit")
dff_refit$summary

bias <- estimate_bias(fit_bias, diag_bias, facet_a = "Rater", facet_b = "Criterion")
summary(bias)

# App-style batch bias estimation across all modeled facet pairs
bias_all <- estimate_all_bias(fit_bias, diag_bias)
bias_all$summary

Interpretation rules:

Model comparison

fit_rsm <- fit_mfrm(df, "Person", c("Rater", "Criterion"), "Score",
                     method = "MML", model = "RSM")
fit_pcm <- fit_mfrm(df, "Person", c("Rater", "Criterion"), "Score",
                     method = "MML", model = "PCM", step_facet = "Criterion")
cmp <- compare_mfrm(RSM = fit_rsm, PCM = fit_pcm)
cmp$table

# Request nested tests only when models are truly nested and fit on the same basis
cmp_nested <- compare_mfrm(RSM = fit_rsm, PCM = fit_pcm, nested = TRUE)
cmp_nested$comparison_basis

# RSM design-weighted precision curves
info <- compute_information(fit_rsm)
plot_information(info)

Design simulation

spec <- build_mfrm_sim_spec(
  n_person = 50,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating",
  model = "RSM"
)

sim_eval <- evaluate_mfrm_design(
  n_person = c(30, 50, 80),
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  reps = 2,
  maxit = 15,
  sim_spec = spec,
  seed = 123
)

s_sim <- summary(sim_eval)
s_sim$design_summary
s_sim$ademp

rec <- recommend_mfrm_design(sim_eval)
rec$recommended

plot(sim_eval, facet = "Rater", metric = "separation", x_var = "n_person")
plot(sim_eval, facet = "Criterion", metric = "severityrmse", x_var = "n_person")

Notes:

Population forecast

spec_pop <- build_mfrm_sim_spec(
  n_person = 50,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating",
  model = "RSM"
)

pred_pop <- predict_mfrm_population(
  sim_spec = spec_pop,
  n_person = 60,
  reps = 2,
  maxit = 15,
  seed = 123
)

s_pred <- summary(pred_pop)
s_pred$forecast[, c("Facet", "MeanSeparation", "McseSeparation")]

Notes:

Future-unit posterior scoring

toy_pred <- load_mfrmr_data("example_core")
toy_fit <- fit_mfrm(
  toy_pred,
  "Person", c("Rater", "Criterion"), "Score",
  method = "MML",
  quad_points = 7
)

raters <- unique(toy_pred$Rater)[1:2]
criteria <- unique(toy_pred$Criterion)[1:2]

new_units <- data.frame(
  Person = c("NEW01", "NEW01", "NEW02", "NEW02"),
  Rater = c(raters[1], raters[2], raters[1], raters[2]),
  Criterion = c(criteria[1], criteria[2], criteria[1], criteria[2]),
  Score = c(2, 3, 2, 4)
)

pred_units <- predict_mfrm_units(toy_fit, new_units, n_draws = 0)
summary(pred_units)$estimates[, c("Person", "Estimate", "Lower", "Upper")]

pv_units <- sample_mfrm_plausible_values(
  toy_fit,
  new_units,
  n_draws = 3,
  seed = 123
)
summary(pv_units)$draw_summary[, c("Person", "Draws", "MeanValue")]

Notes:

Prediction-aware bundle export

bundle_pred <- export_mfrm_bundle(
  fit = toy_fit,
  population_prediction = pred_pop,
  unit_prediction = pred_units,
  plausible_values = pv_units,
  output_dir = tempdir(),
  prefix = "mfrmr_prediction_bundle",
  include = c("manifest", "predictions", "html"),
  overwrite = TRUE
)

bundle_pred$summary

Notes:

DIF / Bias screening simulation

spec_sig <- build_mfrm_sim_spec(
  n_person = 50,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating",
  group_levels = c("A", "B")
)

sig_eval <- evaluate_mfrm_signal_detection(
  n_person = c(30, 50, 80),
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  reps = 2,
  dif_effect = 0.8,
  bias_effect = -0.8,
  maxit = 15,
  sim_spec = spec_sig,
  seed = 123
)

s_sig <- summary(sig_eval)
s_sig$detection_summary
s_sig$ademp

plot(sig_eval, signal = "dif", metric = "power", x_var = "n_person")
plot(sig_eval, signal = "bias", metric = "false_positive", x_var = "n_person")

Notes:

Bundle export

bundle <- export_mfrm_bundle(
  fit_bias,
  diagnostics = diag_bias,
  bias_results = bias_all,
  output_dir = tempdir(),
  prefix = "mfrmr_bundle",
  include = c("core_tables", "checklist", "manifest", "visual_summaries", "script", "html"),
  overwrite = TRUE
)

bundle$written_files

bundle_pred <- export_mfrm_bundle(
  toy_fit,
  output_dir = tempdir(),
  prefix = "mfrmr_prediction_bundle",
  include = c("manifest", "predictions", "html"),
  population_prediction = pred_pop,
  unit_prediction = pred_units,
  plausible_values = pv_units,
  overwrite = TRUE
)

bundle_pred$written_files

replay <- build_mfrm_replay_script(
  fit_bias,
  diagnostics = diag_bias,
  bias_results = bias_all,
  data_file = "your_data.csv"
)

replay$summary

Anchoring and linking

d1 <- load_mfrmr_data("study1")
d2 <- load_mfrmr_data("study2")
fit1 <- fit_mfrm(d1, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)
fit2 <- fit_mfrm(d2, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)

# Anchored calibration
res <- anchor_to_baseline(d2, fit1, "Person", c("Rater", "Criterion"), "Score")
summary(res)
res$drift

# Drift detection
drift <- detect_anchor_drift(list(Wave1 = fit1, Wave2 = fit2))
summary(drift)
plot_anchor_drift(drift, type = "drift")

# Screened linking chain
chain <- build_equating_chain(list(Form1 = fit1, Form2 = fit2))
summary(chain)
plot_anchor_drift(chain, type = "chain")

Notes:

QC pipeline

qc <- run_qc_pipeline(fit, threshold_profile = "standard")
qc$overall      # "Pass", "Warn", or "Fail"
qc$verdicts     # per-check verdicts
qc$recommendations

plot_qc_pipeline(qc, type = "traffic_light")
plot_qc_pipeline(qc, type = "detail")

# Threshold profiles: "strict", "standard", "lenient"
qc_strict <- run_qc_pipeline(fit, threshold_profile = "strict")

Compatibility layer

Compatibility helpers are still available, but they are no longer the primary route for new scripts.

For the full map, see help("mfrmr_compatibility_layer", package = "mfrmr").

External-software wording should stay conservative:

chk <- reporting_checklist(fit, diagnostics = diag)
chk$software_scope
summary(chk)$software_scope

Legacy-compatible one-shot wrapper

run <- run_mfrm_facets(
  data = df,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "JML",
  model = "RSM"
)
summary(run)
plot(run, type = "fit", draw = FALSE)

Public API map

Model and diagnostics:

Differential functioning and model comparison:

Anchoring and linking:

QC pipeline:

Table/report outputs:

Output terminology:

Plots and dashboards:

Export and data utilities:

Some legacy FACETS-style numbered names remain unexported. facets_parity_report() keeps its historical function name, but the intended use is package-output coverage checking, not a FACETS execution or external software-equivalence claim.

FACETS reference mapping

See:

Packaged synthetic datasets

Installed at system.file("extdata", package = "mfrmr"):

The same datasets are also packaged in data/ and can be loaded with:

data("ej2021_study1", package = "mfrmr")
# or
df <- load_mfrmr_data("study1")

Current packaged dataset sizes:

Citation

citation("mfrmr")

Acknowledgements

mfrmr has benefited from discussion and methodological input from Dr. Atsushi Mizumoto and Dr. Taichi Yamashita.