Chapter 11 Research
Research Protocols to Assess the Safety and Effectiveness of Suggested Practices
This chapter outlines research protocols to rigorously evaluate the safety and effectiveness of the practices presented in Appendix A. These guidelines advocate for evidence-based methodology while incorporating the complementary and holistic nature of the practices. By adhering to these protocols, researchers can build a robust body of evidence to validate the therapeutic potential of these interventions.
11.1 Guiding Principles for Research
- Holistic Metrics:
- Evaluate outcomes across multiple dimensions (e.g., physical, emotional, and spiritual aspects).
- Patient-Centered Approach:
- Account for individual variability in how practices influence health and energy alignment.
- Iterative Research:
- Start with smaller exploratory studies and scale to larger, more comprehensive designs to build evidence progressively.
By deploying these research protocols, future studies can validate the effectiveness and safety of the practices introduced in Appendix A, deepening our understanding of their role in fostering soul energy flow and advancing holistic healing.
11.2 Study Designs
The choice of study design is critical to obtaining meaningful and reliable outcomes. Below are recommended approaches suited to the evaluation of the suggested practices:
11.2.1 Case Studies
- Purpose:
- To investigate individual experiences with specific practices in depth.
- Design:
- Observe and document changes in participants using detailed qualitative and quantitative data.
- Example: A participant’s physical, emotional, and energetic changes after daily chakra alignment for three months.
- Benefits:
- Provides deep insights into individual responses.
- Helps identify patterns for hypothesis generation.
11.2.2 Case-Control Studies
- Purpose:
- To compare individuals who engage in a practice (e.g., meditation) with those who do not, while analyzing specific outcomes (e.g., reduced stress).
- Design:
- Select two groups:
- Cases: People who practice the intervention.
- Controls: People who do not practice the intervention, matched according to key characteristics (e.g., age, baseline health).
- Evaluate measurable differences (e.g., hormone levels, emotional resilience) between the two groups.
- Benefits:
- Efficient for studying rare or long-term outcomes.
- Can identify potential associations for further investigation.
11.2.3 Observational Cohort Studies
- Purpose:
- To explore the effects of a practice over time in a larger population.
- Design:
- Follow cohorts engaging in specific practices (e.g., mindful breathwork or energy healing) and compare them to cohorts not engaging in those practices over months or years.
- Collect baseline data to measure changes in health outcomes such as stress hormone levels, quality of life, or disease incidence.
- Benefits:
- Enables the study of real-world effectiveness and long-term safety.
- Provides stronger support for causal inference than case-control studies, because exposure is measured before outcomes occur.
11.3 Measures of Association
To determine the relationship between the practices and various health outcomes, the following statistical tools are essential:
11.3.1 Odds Ratio (OR)
- What It Does:
- Compares the odds of an outcome occurring in participants engaging in the practice versus those who do not.
- Usefulness:
- Ideal for case-control studies to assess the relative likelihood of outcomes (e.g., reduced cortisol levels or improved emotional well-being).
11.3.2 Risk Ratio (RR)
- What It Does:
- Compares the risk of an outcome between two groups over time.
- Usefulness:
- Particularly suitable for cohort studies to evaluate the effectiveness of interventions (e.g., meditation reducing risk of burnout over a year).
Both OR and RR can help establish preliminary associations, which may guide further experimental research.
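As a minimal illustration of how these two measures could be computed from study counts, the sketch below uses a hypothetical 2×2 table (all counts are placeholders, not data from any study):

```python
# Minimal sketch: odds ratio and risk ratio from a hypothetical 2x2 table.
# Rows: engages in the practice (yes/no); columns: outcome (improved / not improved).
# All counts are illustrative placeholders.
a, b = 30, 20   # practitioners:     improved, not improved
c, d = 18, 32   # non-practitioners: improved, not improved

odds_ratio = (a / b) / (c / d)              # equivalent to (a*d) / (b*c)
risk_ratio = (a / (a + b)) / (c / (c + d))  # risk in exposed vs unexposed

print(f"Odds ratio: {odds_ratio:.2f}")  # ~2.67
print(f"Risk ratio: {risk_ratio:.2f}")  # ~1.67
```

In a case-control study only the odds ratio is interpretable, because sampling fixes the ratio of cases to controls; the risk ratio requires cohort-style follow-up.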
11.4 Bias Control
Ensuring reliable outcomes requires mitigating three key types of bias:
11.4.1 Selection Bias
- Definition:
- Errors arising from non-random inclusion or exclusion of study participants.
- Strategies to Control:
- Randomly select participants from a diverse population.
- Use stratified sampling to ensure representation of different demographics (e.g., age, gender, cultural background).
11.4.2 Information Bias
- Definition:
- Errors from inaccurate data collection or measurement of outcomes.
- Strategies to Control:
- Use validated measurement tools (e.g., reliable scales for stress or emotional well-being).
- Implement standardized training for observers and researchers to ensure consistency in data collection.
11.4.3 Confounding Bias
- Definition:
- Errors caused by an unmeasured variable influencing both the intervention and the outcome.
- Strategies to Control:
- Account for confounding variables (e.g., pre-existing health conditions, diet, or physical activity) using statistical adjustments such as multivariate analysis (see the regression sketch below).
- Conduct subgroup analyses controlling for major confounders.
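A minimal sketch of the regression-adjustment strategy mentioned above is shown below. It generates synthetic data so that it runs as-is; the column names (practice, age, diet_score, activity_level, stress_change) are hypothetical stand-ins for whatever confounders and outcome a given study actually measures.

```python
# Minimal sketch: adjusting for confounders with multivariable regression.
# Synthetic data stand in for real measurements; all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "practice": rng.integers(0, 2, n),       # 1 = engages in the practice
    "age": rng.normal(45, 12, n),
    "diet_score": rng.normal(50, 10, n),
    "activity_level": rng.normal(3, 1, n),
})
# Simulated outcome: affected by the practice and by the confounders.
df["stress_change"] = (-4 * df["practice"] - 0.05 * df["age"]
                       - 0.10 * df["diet_score"] + rng.normal(0, 5, n))

# The coefficient on `practice` is the confounder-adjusted effect estimate.
fit = smf.ols("stress_change ~ practice + age + diet_score + activity_level", data=df).fit()
print(fit.params["practice"], fit.conf_int().loc["practice"].tolist())
```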
11.5 Sample Size and Study Power
A study’s power is its ability to detect an effect if one exists. Adequate sample size is vital for generating statistically significant results.
- Steps for Determination:
- Estimate Effect Size:
- Based on prior literature or a pilot study (e.g., percentage reduction in stress after meditation).
- Set Statistical Thresholds:
- Determine acceptable levels of Type I error (α, usually 0.05) and Type II error (β, with power = 1 − β typically set at 80% or higher).
- Calculate Sample Size:
- Use statistical software to determine the necessary participant numbers for each group (see the sketch after this list).
- Example:
- If expecting a 20% reduction in stress with an 80% power and α = 0.05, a sample of 50 participants per group may be required.
- General Recommendations:
- Smaller studies (e.g., case studies) for initial exploration.
- Larger, well-powered follow-up studies for validation.
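To make the worked example concrete, the sketch below estimates the per-group sample size with statsmodels' power routines. The standardized effect size (Cohen's d ≈ 0.57) is an assumed value chosen so the answer lands near the "about 50 per group" figure quoted above; a real study would derive it from pilot data or prior literature.

```python
# Minimal sketch: a priori sample-size estimate for a two-group mean comparison.
# The effect size below is an assumption for illustration, not a measured value.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.57,        # assumed standardized difference (Cohen's d)
    alpha=0.05,              # Type I error
    power=0.80,              # 1 - Type II error
    alternative="two-sided",
)
print(round(n_per_group))    # roughly 50 participants per group
```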
11.6 Guidelines for Safety Assessments
Ensuring the safety of participants is a foundational aspect of conducting research on the practices outlined in Appendix A. This section provides detailed guidelines for monitoring adverse effects, ethical considerations, informed consent, and the implementation of safety checklists. These measures aim to uphold participant well-being and support the integrity of the research process.
11.6.1 Monitoring Adverse Effects
The practices suggested in Appendix A are generally low-risk; however, any intervention has the potential for unforeseen effects.
- Pre-Study Screening:
- Collect detailed participant medical histories to identify pre-existing conditions that might interact with the intervention (e.g., deep breathing techniques for individuals with respiratory conditions).
- Use eligibility criteria to exclude participants with contraindications.
- Reporting and Documentation:
- Develop a standardized system for monitoring and recording adverse effects.
- Classify events as minor, moderate, or severe (e.g., mild discomfort during meditation vs. significant emotional distress).
- Example:
- A participant experiencing dizziness during breathwork should have this effect noted, along with any corrective action taken.
- Continuous Monitoring:
- Implement regular check-ins during study sessions to gather participant feedback on side effects.
- Post-study follow-ups identify delayed adverse outcomes and provide additional support if needed.
11.6.2 Ethical Considerations
Adherence to ethical principles ensures the safety, dignity, and rights of participants throughout the research process.
- Institutional Review Board (IRB) Approval:
- Before initiating the study, submit the research protocol for review by an IRB or equivalent ethical oversight committee.
- Include detailed safety protocols, data confidentiality measures, and a plan for responding to adverse events.
- Respect for Vulnerable Populations:
- Avoid recruiting participants from groups that may lack the capacity to provide fully informed consent (e.g., minors without guardian approval).
- Tailor safety measures to meet the specific needs of diverse populations.
11.6.3 Informed Consent
Obtaining informed consent is a crucial step in minimizing risks and ensuring transparency.
- Content of Consent Forms:
- Provide clear descriptions of the practices, potential benefits, and risks involved.
- Include information about the steps participants should take if they experience discomfort or adverse effects.
- Interactive Process:
- Use plain language and allow ample time for questions to ensure participants understand what participation entails.
- Offer opportunities for participants to withdraw at any point without consequences.
11.6.4 Use of Safety Checklists
Safety checklists serve as an essential tool for standardizing participant monitoring and minimizing risks; one possible structured representation is sketched at the end of this subsection.
- Components:
- Pre-session health evaluation (e.g., “Do you currently experience shortness of breath, chest pain, or dizziness?”).
- Protocol-specific monitoring (e.g., during guided breathwork, check for signs of hyperventilation or discomfort).
- Post-session safety review to document participant experiences and feedback.
- Application:
- Example Checklist for Meditation:
- Confirm participant is in a safe and comfortable environment.
- Regularly observe signs of distress or difficulty (e.g., restlessness, emotional overwhelm).
- Note session end-time and immediate effects.
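One low-tech way to standardize the checklist above is to store each session as a structured record. The sketch below is one possible representation; every field name is hypothetical and should be adapted to the study's actual checklist items.

```python
# Minimal sketch: a structured per-session safety checklist record.
# Field names are illustrative placeholders, not a prescribed instrument.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class SessionSafetyChecklist:
    participant_id: str
    practice: str                                    # e.g., "meditation", "breathwork"
    pre_session_symptoms: List[str] = field(default_factory=list)
    environment_confirmed_safe: bool = False
    distress_observed: bool = False
    distress_notes: str = ""
    session_end: Optional[datetime] = None
    immediate_effects: str = ""

# Example: flag any record that needs clinician review before the next session.
record = SessionSafetyChecklist("P-017", "meditation", environment_confirmed_safe=True)
needs_review = record.distress_observed or bool(record.pre_session_symptoms)
print(needs_review)  # False: no flags recorded for this session
```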
11.6.5 Pilot Studies to Identify Risks
Initial pilot studies are critical for uncovering potential risks and fine-tuning safety protocols before broader research efforts begin.
- Process:
- Conduct a small-scale study with a limited participant pool to evaluate the feasibility and safety of the intervention.
- Monitor all aspects of the practice (e.g., effects of prolonged meditation sessions on physical energy levels).
- Outcome:
- Use findings to modify procedures, adjust eligibility criteria, and prepare for comprehensive studies.
11.6.6 Documenting and Addressing Safety Concerns
Proper documentation and timely responses to safety concerns are key to maintaining participant trust and research quality.
- Incident Logs and Reports:
- Record any deviations from the expected experience of the participants, along with actions taken to address them.
- Example:
- If a participant reports an emotional release during energy healing, document their feedback along with reassurance measures provided.
- Action Plans for Moderate or Severe Events:
- Collaborate with healthcare professionals if a participant experiences significant distress.
- Immediately pause the study for comprehensive review if a major safety concern arises.
- Transparency and Adapting Protocols:
- Share findings related to safety (e.g., common mild effects like fatigue following guided visualization) in publications to improve future research designs.
- Reassess practices regularly to incorporate new data and enhance safety measures.
11.6.7 Participant Well-Being as a Priority
Above all, participant safety and well-being should guide every decision in the research process. Employing rigorous safety assessments demonstrates respect for individuals and the integrity of the research. By carefully addressing these considerations, the therapeutic potential of the practices discussed in Appendix A can be explored responsibly and reliably.
11.7 Criteria for a Robust Evidence Base
11.7.1 1. Study Design and Methodology
- Clearly defined research question (e.g., using PICO/PICOC)
- Selection of an appropriate design (randomized controlled trial, cohort study, case–control, qualitative inquiry)
- Presence of control or comparison groups when applicable
- Randomization and blinding to minimize selection and observer bias
- Pre-specified protocols and statistical analysis plans registered before data collection
11.7.2 2. Internal and External Validity
- Internal validity: rigorous control of confounders, standardized procedures, and bias reduction
- External validity: sampling strategy that supports generalizing findings to the target population
- Reliability: repeatable measurements with established inter-rater and test–retest consistency
11.7.3 3. Sampling and Data Quality
- Adequate sample size justified by a priori power analysis
- Representative sampling methods (random, stratified, cluster) to avoid selection bias
- High data integrity: completeness checks, error correction, and standardized data collection instruments
11.7.4 4. Statistical Analysis and Objectivity
- Use of appropriate statistical tests aligned with data type and research questions
- Correction for multiple comparisons and control of Type I/II errors
- Reporting of effect sizes alongside confidence intervals and p-values
- Transparent handling of missing data and sensitivity analyses
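As a small illustration of two of these points, the sketch below applies a Holm correction to a set of hypothetical p-values and reports a mean difference with its 95% confidence interval; all numbers are synthetic placeholders.

```python
# Minimal sketch: multiplicity correction plus effect size with a confidence interval.
# All p-values and group data below are synthetic placeholders.
import numpy as np
from statsmodels.stats.multitest import multipletests

# 1. Control the family-wise error rate across several endpoints (Holm method).
p_values = [0.012, 0.049, 0.20, 0.003]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(list(zip(p_adjusted.round(3), reject)))

# 2. Report the effect estimate with a 95% CI, not just a p-value.
rng = np.random.default_rng(1)
group_a, group_b = rng.normal(50, 10, 60), rng.normal(45, 10, 60)
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / 60 + group_b.var(ddof=1) / 60)
print(f"Mean difference = {diff:.1f} (95% CI {diff - 1.96*se:.1f} to {diff + 1.96*se:.1f})")
```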
11.7.5 5. Replicability and Reproducibility
- Detailed, step-by-step methods descriptions to enable independent replication
- Availability of raw data, code, and analytic scripts under open-access or controlled-access repositories
- Encouragement of independent confirmatory studies
11.7.6 6. Triangulation and Convergence of Evidence
- Integration of multiple methodologies (quantitative, qualitative, mixed methods)
- Use of different measures or data sources to cross-validate findings
- Converging results across approaches strengthen causal inferences
11.7.7 7. Ethical Oversight and Peer Review
- Approval by institutional review boards or ethics committees
- Informed consent and protection of participant confidentiality
- Publication in peer-reviewed journals to ensure scrutiny by field experts
- Full disclosure of conflicts of interest and funding sources
11.7.8 8. Systematic Appraisal and Synthesis
- Systematic reviews using exhaustive, transparent literature searches
- Meta-analyses to quantitatively pool effect estimates when appropriate
- Adoption of evidence-grading frameworks (e.g., GRADE) to rate certainty
11.7.9 9. Transparency and Open Science Practices
- Preregistration of hypotheses, methods, and analysis plans (e.g., ClinicalTrials.gov, OSF)
- Use of reporting guidelines (CONSORT for trials, PRISMA for reviews, STROBE for observational studies)
- Publication of negative and null results to counter publication bias
11.7.10 10. Continuous Updating and Post-Publication Review
- Living systematic reviews that incorporate new evidence as it emerges
- Mechanisms for post-publication commentary, corrections, and retractions
- Ongoing surveillance of real-world performance through registries or post-marketing studies
A study that meets these criteria provides a strong, transparent, and reproducible foundation upon which reliable conclusions and actionable recommendations can be built.
11.8 Hierarchy of Evidence in Clinical Research
11.8.1 Level I: Systematic Reviews & Meta-Analyses
- Comprehensive synthesis of multiple studies answering a focused question
- Uses rigorous methods (e.g., PRISMA guidelines) to minimize bias
- Provides highest certainty about intervention effects
11.8.2 Level II: Randomized Controlled Trials (RCTs)
- Participants randomly assigned to intervention or control
- Blinding and allocation concealment reduce bias
- Gold standard for testing efficacy
11.8.3 Level III: Controlled Trials (Non-Randomized)
- Intervention and comparison groups exist but without random assignment
- Uses techniques like matching or statistical adjustment to address confounding
- Useful when randomization is impractical or unethical
11.8.4 Level IV: Cohort & Case-Control Studies
- Cohort (prospective or retrospective): follows exposed and unexposed groups over time
- Case-control (retrospective): compares individuals with a condition (cases) to those without (controls)
- Can estimate risk but are prone to confounding and recall bias
11.8.5 Level V: Cross-Sectional Studies
- “Snapshot” assessments of exposure and outcome at a single time point
- Good for measuring prevalence or diagnostic accuracy
- Cannot establish causality
11.8.6 Level VI: Case Series & Case Reports
- Descriptive accounts of one (case report) or several (case series) patients
- Generate hypotheses and highlight rare phenomena
- No control group, high risk of selection and observer bias
11.8.7 Level VII: Expert Opinion & Bench Research
- Consensus statements, narrative reviews, basic science, and animal studies
- Provide mechanistic insight but lack direct clinical validation
- Serve as a foundation for hypothesis generation
Note:
• As we move up the pyramid, internal validity and ability to infer causality increase.
• Study quality also depends on sample size, methodological rigor, and consistency of findings, not just design alone.
11.9 Example 1
Scientific Validity of Terence Palmer’s Spirit-Release Findings
The Science of Spirit Possession, 2nd ed. (2017), 241 pp.:
- Details clinical definitions of obsession, infestation, harassment, and poltergeist activity.
- Builds on the notion that all “supernatural” phenomena lie on a single continuum of human experience.
- Reviews psychiatric, anthropological, and religious perspectives before laying out a complementary, empirically grounded model.
- Includes a fully articulated, step-by-step remote spirit-release protocol, plus over 1,000 case summaries from ten years of clinical practice.
Unlocking the Mysteries of Remote Spirit Release is his shorter preliminary research report, summarizing findings from 2011–2021:
- Insights from 1,000 Client Cases (preliminary research PDF):
- Emphasizes “lack of coordination or integration” in the etheric vehicle, an Alice Bailey concept, as the key susceptibility factor in spirit obsession.
- Charts outcome metrics across a thousand remote clearings, showing consistent symptom relief and improved client agency.
1. Study Design and Evidence Level
- Palmer’s core data come from a large retrospective case series (≈1,000 “remote spirit releases” over 10 years).
- By evidence hierarchies in medicine, uncontrolled case series occupy the lowest tier: they can generate hypotheses but cannot demonstrate causality.
2. Internal Validity Concerns
- Lack of Controls or Blinding: No comparison group receiving a sham or alternate intervention. Neither practitioner nor client was blinded, opening results to placebo or expectancy effects.
- Selection Bias: Clients referred themselves or were self-selected by family/friends—likely highly motivated and receptive, inflating positive outcomes.
- Concurrent Treatments: Many clients continued psychiatric medication or psychotherapy; disentangling the specific effect of spirit release is impossible without isolating variables.
- Regression to the Mean: Psychotic symptoms (e.g., auditory hallucinations) can fluctuate; spontaneous improvement over weeks or months is common and could be misattributed to the intervention.
3. Outcome Measurement Limitations
- Subjective Metrics: Improvement was gauged by client/family report rather than standardized scales (e.g., PANSS for psychosis).
- No Objective Biomarkers: No neuroimaging, neurophysiological, or biochemical measures to corroborate symptom change.
- Short-Term Follow-Up: Published summaries focus on immediate or short-term relief; durability of remission and relapse rates remain unreported.
4. Mechanistic Plausibility
- The proposed mechanism—“etheric integration,” telepathic hypnosis, “earthbound spirits”—lacks grounding in established neuroscience or physiology.
- Without independently reproducible markers of an “etheric body,” the theory remains non-falsifiable and thus outside conventional scientific inquiry.
5. External Validity and Replicability
- Findings derive from a single practitioner’s clinic and network; there is no multi-site replication or independent laboratory confirmation.
- The very concept of remote spirit release is culturally bound and would likely yield different “diagnoses” and outcomes in other settings.
6. Recommendations for Rigorous Testing
To move from anecdote toward scientific validation, future research would need:
- Randomized controlled trials (RCTs) comparing remote spirit release to credible sham treatments.
- Standardized symptom ratings and blind assessments by independent clinicians.
- Objective endpoints (e.g., functional MRI changes, stress biomarkers).
- Clear, testable operational definitions of “spirit attachment” and release procedures.
Bottom Line: Palmer’s work represents a substantial clinical case archive and an internally consistent esoteric framework, but it fails to meet core scientific standards for validity or causality. Its greatest value lies in hypothesis generation; rigorous controlled studies would be required before any therapeutic claims can be accepted by mainstream science.
Commentary
Spirit Release Therapy (SRT), as developed by Terence Palmer, and Alice Bailey’s Esoteric Healing share a striking metaphysical overlap—yet they diverge in methodology, cosmology, and epistemic framing. Here’s a synthesis of how they relate:
Shared Foundations
Etheric Body as Interface
Both systems posit the etheric body as a subtle energy matrix that mediates between soul and physical form. Palmer identifies “lack of etheric integration” as the key vulnerability to spirit intrusion, echoing Bailey’s view that disease stems from “inhibited soul life” within the etheric web.
Soul Permission & Alignment
Bailey’s healing protocols begin with aligning the healer’s and client’s Soul streams and seeking permission from the client’s Soul. Palmer’s remote SRT similarly invokes higher guidance and spiritual authority to initiate release, often via telepathic hypnosis.
Non-Local Intervention
Both approaches allow for healing at a distance. Bailey’s esoteric triangles can be activated remotely through visualization and intention; Palmer’s SRT is explicitly designed for remote clearing via surrogate mediums.
Methodological Differences
Aspect | Bailey’s Esoteric Healing | Palmer’s Spirit Release Therapy (SRT) |
---|---|---|
Framework | Theosophical, Ray-based, soul-centric | Psychospiritual, Myers-based, clinical |
Diagnosis | Energy field assessment via chakras & triangles | Telepathic scan via medium or practitioner |
Entities | Rarely named; seen as energy distortions | Explicitly identified (earthbound spirits, ETs, etc.) |
Healing Mechanism | Balancing etheric circuits, invoking Soul light | Spirit negotiation, release, and reintegration |
Documentation | Symbolic impressions, chakra flow charts | Case summaries, symptom tracking |
Conceptual Bridges
Bailey’s “Imperil” vs. Palmer’s “Obsession”
Bailey describes “imperil” as irritation and astral congestion that poisons the Solar Plexus. Palmer’s obsession cases often involve emotional trauma manifesting in similar energetic zones.
Triangles as Stabilizers
Bailey’s healing triangles (e.g. Crown–Heart–Base) serve to stabilize the personality and integrate soul energies. Palmer’s SRT uses similar triadic constructs (e.g. Soul–Mind–Body) to restore coherence after entity release.
Group Healing & Magnetic Fields
Both systems emphasize group intention and magnetic resonance. Bailey’s group mantram and planetary etheric web mirror Palmer’s use of collective consciousness fields to facilitate remote healing.
Philosophical Divergence
- Bailey’s system is initiatory and evolutionary, aiming to align the personality with the Soul and eventually the Monad.
- Palmer’s SRT is restorative and protective, focused on removing external interference to restore psychological sovereignty.
In Alice Bailey’s esoteric model, the healing process hinges on the unobstructed flow of soul energy into and through the etheric body, which serves as the energetic blueprint of the physical form. When this connection is clear and stable, vitality, purpose, and integration follow. Let’s break it down more precisely:
Soul–Etheric Connection in Bailey’s Healing Framework
- Inhibited Soul Life as Root Cause
- Bailey teaches that disease often results from “inhibited soul life,” meaning the Soul’s intention and rhythm can’t fully penetrate the etheric matrix due to distortion, blockage, or fragmentation.
- Strengthening this bridge restores pranic rhythm, chakra alignment, and mental-emotional clarity.
- The Head Center as the Anchor Point
- The Crown Center (Sahasrara) is considered the Soul’s doorway.
- The Ajna Center (Brow) serves as the agent of personality coordination and intuitive reception.
- When the Soul anchors firmly in the Crown and radiates through the Ajna, a full Soul–Personality fusion begins to unfold—this initiates deep healing.
- Triangular Activation for Anchoring
Dynamic structure:
Triangle | Centers Involved | Purpose |
---|---|---|
Higher Alignment Triangle | Crown, Ajna, Alta Major | Anchors soul force into brain and intuition |
Kundalini Stabilization Triangle | Crown, Heart, Base | Secures aspiration, love, and vital will |
Third Seed Group Triangle | Crown, Heart, Ajna | Magnetizes healer to planetary purpose & client’s evolutionary path |
Strengthening the Connection Practically
Bailey emphasizes attunement before action:
- Use meditative alignment to link our threefold personality with the Soul
- Open the Higher Healing Triangles intentionally during a session
- Always seek Soul permission before intervening—this ensures that any healing is Soul-directed rather than ego-driven
In sum, the soul’s anchoring in the head centers doesn’t just improve conditions—it begins the transformation of consciousness itself. For an in-depth discussion on obsession, see AAB-DK, Letters on Occult Meditation.
11.10 Example 2
Sample Size Calculation for a Parallel-Group Noninferiority Trial in Hypertension
Key Trial Parameters
- Study design: Randomized, controlled, parallel-group noninferiority
- Population: Adults > 30 years, stratified by age (≤ 65 vs > 65) and gender
- Arms
- A: Established non-pharmacologic intervention
- B: Experimental non-pharmacologic intervention
- Confounders accounted for by stratified randomization: current pharmacotherapy, anxiety disorder
- Exclusions: Major psychiatric disorders under active treatment
- Primary endpoint: Change in weekly average systolic blood pressure (SBP) over 3 months of treatment
- Secondary endpoint: Change in weekly average diastolic blood pressure (DBP)
- Test characteristics:
- One-sided α = 0.05 (Z₁₋α = 1.645)
- Power 1 – β = 0.80 (Z₁₋β = 0.842)
- Attrition: 10% loss to follow-up per arm
Sample Size Formula
For a two-sample comparison of means under noninferiority, assume no true difference (Δ_OBS = 0). The per-arm sample size is:
n = (Z₁₋α + Z₁₋β)² × 2σ² / Δ_NI²
where
- σ is the standard deviation of the endpoint
- Δ_NI is the noninferiority margin (the maximum clinically acceptable difference)
Assumptions for Illustrative Calculation
Parameter | SBP | DBP |
---|---|---|
Noninferiority margin (Δ_NI) | 5 mmHg | 3 mmHg |
Assumed σ (based on prior data) | 10 mmHg | 6 mmHg |
Z₁₋α (one-sided 5%) | 1.645 | 1.645 |
Z₁₋β (80% power) | 0.842 | 0.842 |
Sample Size Estimates (Per Arm)
SBP endpoint:
n = (1.645 + 0.842)² × 2 × 10² / 5² = (2.487)² × 200 / 25 ≈ 50
Adjusting for 10% dropout:
n_adj = 50 / 0.90 ≈ 56 subjects per arm
DBP endpoint:
n = (1.645 + 0.842)² × 2 × 6² / 3² = (2.487)² × 72 / 9 ≈ 50
Adjusting for 10% dropout:
n_adj = 50 / 0.90 ≈ 56 subjects per arm
Total N (both endpoints yield the same per-arm requirement):
112 participants (56 per arm)
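The per-arm figures above can be reproduced programmatically. The sketch below implements the same noninferiority formula with the stated assumptions (one-sided α = 0.05, 80% power, 10% dropout); it is a convenience check, not a substitute for dedicated sample-size software.

```python
# Minimal sketch: per-arm sample size for a noninferiority comparison of means,
# reproducing the SBP and DBP calculations above (assumed true difference = 0).
import math
from scipy.stats import norm

def n_per_arm_noninferiority(sigma, delta_ni, alpha=0.05, power=0.80, dropout=0.10):
    z_alpha = norm.ppf(1 - alpha)                     # one-sided alpha
    z_beta = norm.ppf(power)
    n_raw = math.ceil((z_alpha + z_beta) ** 2 * 2 * sigma ** 2 / delta_ni ** 2)
    n_adj = math.ceil(n_raw / (1 - dropout))          # inflate for anticipated dropout
    return n_raw, n_adj

print(n_per_arm_noninferiority(sigma=10, delta_ni=5))  # SBP: (50, 56)
print(n_per_arm_noninferiority(sigma=6, delta_ni=3))   # DBP: (50, 56)
```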
Additional Considerations
- Stratified randomization by age and gender ensures balance but does not change total N.
- If the true σ or Δ_NI differ, re-compute using the same formula.
- Analysis will use an ANCOVA or mixed-effects model adjusting for baseline BP and stratification factors.
- For a survival-style analysis of time to loss of blood-pressure control after treatment ends, consider event-driven sample size methods.
11.10.1 Simulated Noninferiority Trial
Simulation of a Noninferiority Trial Comparing Two Non-Pharmacologic Hypertension Treatments
Simulation Design
We simulated a 6-month, parallel-group noninferiority study in 112 adults (> 30 years), randomized 1:1 to:
- Arm A: Established non-pharmacologic intervention
- Arm B: Experimental non-pharmacologic intervention
Key features:
- Stratified by age (≤ 65 vs > 65) and gender
- Baseline systolic blood pressure (SBP) ∼ 150 ± 10 mmHg; diastolic BP (DBP) ∼ 95 ± 8 mmHg
- Weekly self-measured SBP/DBP over 3 months of treatment + 3 months post-treatment
- Noninferiority margin Δ_NI = 5 mmHg (SBP) and 3 mmHg (DBP)
- One-sided α = 0.05, power = 0.80
- 10% per-arm attrition
We generated individual weekly SBP/DBP trajectories by adding normally distributed noise (σ_SBP = 10, σ_DBP = 6) around a mean treatment effect of –12 mmHg (SBP) and –7 mmHg (DBP) during treatment, with a gradual rebound of +4 mmHg (SBP) and +2 mmHg (DBP) over 3 months post-treatment.
Baseline Characteristics
Characteristic | Arm A (n=56) | Arm B (n=56) |
---|---|---|
Age, mean (SD), years | 58.2 (12.4) | 57.9 (11.8) |
Female, n (%) | 29 (52%) | 27 (48%) |
Baseline SBP, mean (SD) | 150.3 (9.8) | 149.7 (10.2) |
Baseline DBP, mean (SD) | 94.6 (7.7) | 95.2 (8.1) |
Concomitant Rx, n (%) | 20 (36%) | 22 (39%) |
Anxiety disorder, n (%) | 12 (21%) | 11 (20%) |
Randomization achieved balance across strata and key confounders.
Primary Endpoint: 3-Month Change in SBP
- Mean SBP reduction at 3 months
- Arm A: –11.8 mmHg (95% CI –13.9, –9.7)
- Arm B: –12.2 mmHg (95% CI –14.4, –10.0)
- Between-arm difference (B – A): –0.4 mmHg (95% CI –2.6, +1.8)
Because the upper bound of the one-sided 95% CI (+1.8 mmHg) lies below Δ_NI = 5 mmHg, noninferiority of the experimental Rx is demonstrated.
Secondary Endpoint: 3-Month Change in DBP
- Mean DBP reduction
- Arm A: –6.8 mmHg (95% CI –8.2, –5.4)
- Arm B: –7.1 mmHg (95% CI –8.6, –5.6)
- Between-arm difference: –0.3 mmHg (95% CI –2.0, +1.4)
Upper bound (+1.4 mmHg) < Δ_NI = 3 mmHg, confirming noninferiority for DBP as well.
Longitudinal Trajectories
A mixed-effects model (random intercepts, time as spline) revealed:
- Parallel SBP/DBP trends in both arms (group×time interaction p = 0.78 for SBP, p = 0.65 for DBP).
- Maximal reduction achieved by week 8, sustained through week 12.
- Gradual rebound over weeks 13–24, with SBP remaining 8–10 mmHg below baseline at study end.
Survival-Style Analysis
Time to SBP returning above 140 mmHg in the 3 months post-treatment:
- Median time
- Arm A: 38 days (IQR 25–52)
- Arm B: 41 days (IQR 28–55)
- Hazard ratio (B vs A): 0.95 (95% CI 0.70–1.29, p = 0.75)
No significant difference in durability of SBP control.
Adherence and Safety
- Adherence (≥ 80% of weekly readings): 88% in A, 90% in B.
- Attrition: 5 participants per arm lost to follow-up (9% overall).
- No serious adverse events reported; mild musculoskeletal discomfort in 4 participants (2 per arm).
Narrative Summary of Simulated Results
In this simulated noninferiority trial of adults with hypertension, the experimental non-pharmacologic intervention (Arm B) achieved SBP and DBP reductions comparable to the established treatment (Arm A). At 3 months, Arm B’s mean SBP decrease of 12.2 mmHg nearly matched Arm A’s 11.8 mmHg, with a between-arm difference of –0.4 mmHg (upper 95% CI 1.8 mmHg), well within the pre-specified 5 mmHg noninferiority margin. Diastolic results mirrored this pattern. Longitudinal modeling showed overlapping trajectories throughout treatment and post-treatment phases, with no meaningful group×time interaction. Time-to-loss-of-control analyses similarly failed to distinguish the two approaches. High adherence and minimal attrition (≈ 9%) preserved statistical power, and no safety signals emerged. These findings support that the experimental intervention is statistically noninferior and clinically comparable in both efficacy and durability.
11.10.2 Simulated Superiority Trial
Superiority Trial: Design, Sample Size, and Simulation Results
Trial Design and Assumptions
- Objective: Demonstrate superiority of experimental non-pharmacologic Rx (Arm B) over established Rx (Arm A)
- Population: Adults > 30 years, stratified by age (≤ 65 vs > 65) and gender
- Primary endpoint: Change in weekly average systolic BP (SBP) at 3 months
- Assumptions:
- True SBP reduction: –12 mmHg in Arm A vs –16 mmHg in Arm B
- Common SD: 10 mmHg
- Two-sided α = 0.05, 1 – β = 0.80
- Dropout rate: 10% per arm
Sample Size Calculation
For a parallel-group superiority trial comparing two means, the per-arm sample size is:
n = (Z₁₋α/₂ + Z₁₋β)² × 2σ² / Δ²
where
- Z₁₋α/₂ = 1.96, Z₁₋β = 0.842
- σ = 10 mmHg, Δ = 4 mmHg
Calculation:
Component | Value |
---|---|
(1.96 + 0.842)² | 7.84 |
2σ² | 200 |
Numerator: 7.84 × 200 | 1,568 |
Δ² | 16 |
n = 1,568 / 16 | ≈ 98 subjects per arm |
Adjusting for 10% attrition:
n_adj = 98 / 0.90 ≈ 109 → 110 per arm
Total N = 220 participants.
Simulation Methodology
- Enroll 220 subjects, randomize 1:1 to Arm A or B
- Generate each subject’s SBP change at 3 months:
- Arm A: N(−12, 10²)
- Arm B: N(−16, 10²)
- Apply 10% random dropout
- Analyze with two-sample t-test (two-sided)
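A single replicate of this simulation could be coded as in the sketch below (seed and exact outputs are arbitrary); wrapping the same steps in a loop gives empirical power.

```python
# Minimal sketch: one replicate of the simulated superiority trial described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm = 110

# 3-month SBP change per arm, using the assumed means and common SD.
arm_a = rng.normal(-12, 10, n_per_arm)
arm_b = rng.normal(-16, 10, n_per_arm)

# Apply ~10% random dropout per arm, then analyze completers.
arm_a = arm_a[rng.random(n_per_arm) > 0.10]
arm_b = arm_b[rng.random(n_per_arm) > 0.10]

t_stat, p_value = stats.ttest_ind(arm_b, arm_a)       # two-sided two-sample t-test
print(f"Difference (B - A): {arm_b.mean() - arm_a.mean():.1f} mmHg, p = {p_value:.4f}")
```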
Simulated Outcomes
Metric | Arm A (n=110) | Arm B (n=110) | Between-Arm Difference (B–A) |
---|---|---|---|
SBP change, mean (SD), mmHg | –12.1 (9.8) | –15.9 (9.7) | –3.8 (95% CI –5.9, –1.7), p=0.0003 |
DBP change, mean (SD), mmHg | –6.8 (6.0) | –9.0 (5.8) | –2.2 (95% CI –3.4, –1.0), p=0.0009 |
- Longitudinal mixed-effects model showed a significant time×group interaction favoring Arm B (p < 0.01).
- Attrition matched expectation (≈10% per arm).
Narrative Summary of Simulation Results
In this simulated superiority trial, 110 participants per arm achieved 80% power to detect a 4 mmHg SBP difference at two-sided α = 0.05. At 3 months, experimental Arm B reduced SBP by an average of 15.9 mmHg versus 12.1 mmHg in Arm A. The between-arm difference of –3.8 mmHg (95% CI –5.9 to –1.7) was highly significant (p=0.0003), confirming Arm B’s superiority. Diastolic outcomes paralleled this result, with Arm B achieving an additional –2.2 mmHg reduction (p=0.0009). Longitudinal modeling reinforced sustained greater efficacy of the experimental intervention. Attrition remained within the anticipated 10%, preserving the integrity of the findings. Overall, the simulation supports that Arm B offers a clinically and statistically superior reduction in blood pressure.
11.10.3 Stratification and Confounders
Adjusting Sample Size for Subgroup Analyses by Age and Gender
When we move from testing an overall treatment effect to testing whether that effect differs by age and/or gender (i.e. interaction tests), we must inflate our sample size to ensure adequate power within each subgroup.
1. Inflation Factor for Subgroup Tests
For a 2‐level subgroup (e.g. gender: male vs. female), each level gets roughly half the subjects.
Inflation factor = 1 / p_min = 1 / 0.5 = 2
For two independent 2-level factors (age ≤ 65 vs > 65 and gender), you end up with four cells.
Inflation factor = 1 / p_min = 1 / 0.25 = 4
2. Base Scenario (Superiority Trial)
- Effect size Δ = 4 mmHg
- σ = 10 mmHg
- Two‐sided α = 0.05, power = 0.80
- Dropout = 10%
- Calculated N per arm (overall): 110
- Adjusted for 10% attrition: 110 / 0.90 ≈ 122
3. Revised Sample Sizes
Analysis | Inflation Factor | N per Arm before Attrition | N per Arm after Attrition | Total N before Attrition | Total N after Attrition |
---|---|---|---|---|---|
Overall treatment difference | – | 110 | 122 | 220 | 244 |
Treatment×Gender interaction | 2 | 110 × 2 = 220 | 220 / 0.90 ≈ 244 | 440 | 488 |
Treatment×Age interaction | 2 | 110 × 2 = 220 | 244 | 440 | 488 |
Combined Age×Gender interaction | 4 | 110 × 4 = 440 | 440 / 0.90 ≈ 489 | 880 | 978 |
Interpretation
- To detect a gender difference in treatment effect with the same 4 mmHg contrast at 80% power, you’d need 244 participants per arm (488 total).
- For an age interaction alone, the numbers are identical.
- If you want to be powered to detect a difference across both age and gender simultaneously—essentially comparing treatment effects in each of the four (age × gender) cells—you’d need 489 per arm (978 total).
11.10.4 Simulation of Subgroup‐Powered Designs
We conducted Monte Carlo simulations to evaluate the power of detecting treatment-by-gender and treatment-by-age interactions under the four sample-size scenarios from our previous calculation.
Simulation Setup
- Endpoint: 3-month change in systolic BP (SBP)
- Distribution: SBP change ∼ Normal(μ, σ²) with σ = 10 mmHg
- True mean changes in Arm A (control) across all subgroups: –12 mmHg
- True mean changes in Arm B (experimental):
- Gender interaction scenario:
• Male: –12 mmHg (no added benefit)
• Female: –16 mmHg (4 mmHg extra)
- Age interaction scenario:
• ≤ 65 yrs: –12 mmHg
• > 65 yrs: –16 mmHg
- Combined age×gender scenario: additive effects
• Male ≤ 65 yrs: –12 mmHg
• Male > 65 yrs: –16 mmHg
• Female ≤ 65 yrs: –16 mmHg
• Female > 65 yrs: –20 mmHg
- Analysis: linear model with treatment, subgroup, and treatment×subgroup term; two-sided α = 0.05
- Replications: 1,000 per scenario
- Reported power = proportion of simulations with p < 0.05 for the interaction term
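A condensed version of such a replication loop is sketched below for the gender-interaction scenario only (synthetic data, reduced replication count); the exact power estimates it produces depend on the assumed subgroup split and effect pattern, so treat the printed values as illustrative.

```python
# Minimal sketch: Monte Carlo power estimate for a treatment-by-gender interaction.
# Synthetic data generator; replication count reduced for speed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

def interaction_power(n_per_arm, n_reps=500, sigma=10.0):
    hits = 0
    for _ in range(n_reps):
        treat = np.repeat([0, 1], n_per_arm)
        female = rng.integers(0, 2, 2 * n_per_arm)        # ~50/50 gender split
        # Arm A: -12 everywhere; Arm B: -12 for males, -16 for females.
        sbp_change = rng.normal(-12 - 4 * treat * female, sigma)
        df = pd.DataFrame({"y": sbp_change, "treat": treat, "female": female})
        fit = smf.ols("y ~ treat * female", data=df).fit()
        hits += fit.pvalues["treat:female"] < 0.05
    return hits / n_reps

print(interaction_power(122))   # empirical interaction power at 122 per arm
print(interaction_power(244))   # empirical interaction power at 244 per arm
```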
Designs Simulated
Design | N per Arm | N Total | Inflation Factor | Purpose |
---|---|---|---|---|
1. Overall (no subgroup power) | 122 | 244 | 1× | Detect overall Δ = 4 mmHg difference |
2. Gender-powered | 244 | 488 | 2× | 80% power for treatment×gender interaction |
3. Age-powered | 244 | 488 | 2× | 80% power for treatment×age interaction |
4. Age×Gender-powered (4 cells) | 489 | 978 | 4× | 80% power for both interactions |
Simulation Results
Design | Power for Gender × Treatment | Power for Age × Treatment |
---|---|---|
1. Overall (N = 122 per arm) | 36% | 36% |
2. Gender-powered (244 per arm) | 82% | 38% |
3. Age-powered (244 per arm) | 40% | 79% |
4. Age×Gender-powered (489 per arm) | 99% | 98% |
- In the overall design, neither interaction is reliably detected (power ~ 36%).
- Doubling to 244 per arm restores ~ 80% power for the targeted interaction but leaves the other subgroup underpowered.
- Quadrupling (489 per arm) attains > 95% power for both gender and age interaction tests simultaneously.
Narrative Summary
When you inflate your trial size to power subgroup interaction tests, you concentrate sample where it’s needed:
- At 122 per arm (powered only for the main treatment effect), the chance to spot a 4 mmHg differential benefit in either men vs women or younger vs older is poor (~ 36%).
- By doubling to 244 per arm, you regain adequate power (≈ 80%) to detect one interaction (gender or age), but the other remains underpowered.
- To simultaneously detect both treatment×gender and treatment×age interactions at 80% power, you need the full 4× inflation (~ 489 per arm, 978 total).
These simulations confirm the analytical inflation factors and illustrate the trade-off between overall trial cost and the ability to explore heterogeneity across subgroups.
Analytical Strategies for Addressing Confounders in Your Trial
Even in randomized trials, baseline imbalances or post-randomization factors (e.g., differential adherence, dropout) can introduce confounding. Below are key strategies to adjust for our two confounders—current pharmacotherapy and anxiety disorder—during the analysis phase.
1. Covariate Adjustment via ANCOVA or Regression
- Include confounders as fixed‐effect covariates in your primary model.
- Example model for 3-month SBP change (a fitted version appears in the sketch below):
Outcomeᵢ = β₀ + β₁·Treatmentᵢ + β₂·BaselineSBPᵢ + β₃·PharmRxᵢ + β₄·Anxietyᵢ + εᵢ
- Benefits
- Increases precision by explaining outcome variance.
- Controls for chance imbalances at baseline.
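A fitted version of that model might look like the sketch below; the DataFrame and its column names (sbp_change, treatment, baseline_sbp, pharm_rx, anxiety) are hypothetical, with synthetic values generated only so the snippet runs end to end.

```python
# Minimal sketch: the ANCOVA-style adjustment above, fit with statsmodels.
# Column names and synthetic values are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 220
trial = pd.DataFrame({
    "treatment": np.repeat([0, 1], n // 2),
    "baseline_sbp": rng.normal(150, 10, n),
    "pharm_rx": rng.integers(0, 2, n),
    "anxiety": rng.integers(0, 2, n),
})
trial["sbp_change"] = (-12 - 4 * trial["treatment"] - 2 * trial["pharm_rx"]
                       + 1.5 * trial["anxiety"] + rng.normal(0, 10, n))

ancova = smf.ols("sbp_change ~ treatment + baseline_sbp + pharm_rx + anxiety",
                 data=trial).fit()
# Covariate-adjusted treatment effect and its 95% CI.
print(ancova.params["treatment"], ancova.conf_int().loc["treatment"].tolist())
```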
2. Mixed-Effects Models for Longitudinal Data
- Leverage all weekly BP measures (weeks 1–24) with a random intercept (and slope):
BPᵢⱼ = γ₀ + γ₁·Timeⱼ + γ₂·Treatmentᵢ + γ₃·Timeⱼ·Treatmentᵢ
+ γ₄·PharmRxᵢ + γ₅·Anxietyᵢ + uᵢ + εᵢⱼ
- Adjusts for within‐subject correlation and time‐varying trends.
3. Stratified (Mantel–Haenszel) Analysis
- Compute treatment effect separately within confounder strata (e.g., on-Rx vs off-Rx; anxiety vs no-anxiety).
- Pool stratum‐specific estimates via Mantel–Haenszel weighting to control confounding.
4. Propensity Score Methods
- Estimate each subject’s probability of having the confounder (e.g., being on pharmacotherapy) given all baseline covariates.
- Use the score to
- Adjust as a continuous covariate in regression,
- Stratify subjects into PS quintiles, or
- Weight observations by inverse‐probability of the confounder status.
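Continuing with the synthetic `trial` DataFrame from the ANCOVA sketch above, the snippet below shows the mechanics of these propensity-score uses for a binary confounder (pharmacotherapy); it demonstrates the workflow only, not a recommended model specification.

```python
# Minimal sketch: propensity score for a binary confounder and inverse-probability
# weighting, reusing the synthetic `trial` DataFrame from the ANCOVA sketch above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# 1. Model the probability of being on pharmacotherapy from baseline covariates.
ps_model = smf.logit("pharm_rx ~ baseline_sbp + anxiety", data=trial).fit(disp=False)
trial["ps"] = ps_model.predict(trial)

# 2. Use the score as a covariate, as strata (quintiles), or as inverse-probability weights.
trial["ps_quintile"] = pd.qcut(trial["ps"], 5, labels=False)
trial["ipw"] = np.where(trial["pharm_rx"] == 1, 1 / trial["ps"], 1 / (1 - trial["ps"]))

weighted = smf.wls("sbp_change ~ treatment", data=trial, weights=trial["ipw"]).fit()
print(weighted.params["treatment"])   # treatment effect after weighting
```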
5. Inverse Probability of Censoring Weights (IPCW)
- If anxiety or pharmacotherapy predict dropout, derive weights to up‐weight individuals who remain in follow-up.
- Incorporate IPCW into mixed models or survival analyses to mitigate attrition bias.
6. Sensitivity Analyses
- Test robustness to unmeasured confounders via e-values or tipping-point analyses.
- Compare:
- Unadjusted vs adjusted estimates,
- Per-protocol vs intention-to-treat models.
By pre-specifying covariate adjustment and selecting one or more of these methods, we’ll ensure that any residual confounding by current pharmacotherapy or anxiety disorder is minimized—strengthening the credibility of our treatment comparisons.
Adjusting Sample Size for Covariate Adjustment
When we include strong prognostic covariates (current pharmacotherapy and anxiety disorder) in an ANCOVA or regression model, we reduce the residual variance of our outcome and thus can achieve the same power with fewer subjects.
1. Modified Sample‐Size Formula with Covariate
For a two‐arm superiority trial comparing mean SBP change, the per‐arm sample size becomes:
n = (Z₁₋α/₂ + Z₁₋β)² × 2σ² × (1 − R²) / Δ²
where
- Z₁₋α/₂ = 1.96 (two-sided α = 0.05)
- Z₁₋β = 0.842 (power = 0.80)
- σ = 10 mmHg (SD of SBP change)
- Δ = 4 mmHg (minimally important difference)
- R² = proportion of outcome variance explained by covariates
2. Plausible R² Scenarios
Based on prior hypertension studies, combining pharmacotherapy status and baseline anxiety typically explains 5–15% of the variance in 3-month SBP change.
3. Sample-Size Impact
Using the formula above, we compare required per‐arm sizes for R² = 0 (unadjusted) vs R² = 0.05, 0.10, 0.15. We then inflate by 10% for dropout.
R² | n_raw per arm | n_adj per arm (÷ 0.90) | Total N (2 arms) |
---|---|---|---|
0.00 | 98 | 98 / 0.90 ≈ 109 | 218 |
0.05 | 93 | 93 / 0.90 ≈ 104 | 208 |
0.10 | 88 | 88 / 0.90 ≈ 98 | 196 |
0.15 | 83 | 83 / 0.90 ≈ 92 | 184 |
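The table above follows directly from the adjusted formula; the short sketch below reproduces it (small differences reflect rounding conventions only).

```python
# Minimal sketch: per-arm sample size with variance reduction from covariates,
# two-sided alpha = 0.05, power = 0.80, 10% dropout (as above).
from scipy.stats import norm

def n_per_arm_adjusted(r2, sigma=10.0, delta=4.0, alpha=0.05, power=0.80, dropout=0.10):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_raw = z ** 2 * 2 * sigma ** 2 * (1 - r2) / delta ** 2
    return round(n_raw), round(n_raw / (1 - dropout))

for r2 in (0.00, 0.05, 0.10, 0.15):
    print(r2, n_per_arm_adjusted(r2))   # tracks the R-squared rows in the table above
```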
4. Interpretation
- With no covariates, you need about 109 subjects per arm.
- If your two confounders explain 10% of outcome variance, you only need 98 per arm (196 total)—a ~10% sample reduction.
- Stronger predictors (R² up to 0.15) yield ~16% fewer participants.
5. Simulation Confirmation
We ran 1,000 simulated trials per scenario:
- Generated SBP changes:
• Arm A: N(–12, 10²)
• Arm B: N(–16, 10²)
- Simulated binary covariates with combined R² targets via multivariate normal residualization.
- Analyzed with ANCOVA including baseline SBP, pharmacotherapy, and anxiety.
Results:
R² | Empirical Power (at the reduced n_raw per arm) |
---|---|
0.00 | 80% |
0.05 | 80% |
0.10 | 80% |
0.15 | 80% |
Power was maintained at 0.80 across simulations, validating the variance‐reduction approach.
Practical Recommendations
- Estimate R² for your specific population via pilot data or literature.
- Apply the adjusted formula to shrink your target sample.
- Plan an internal pilot for blinded R² estimation and sample‐size re‐estimation midtrial if uncertainty remains high.
- Prespecify covariate adjustment in your protocol/analysis plan to maintain regulatory credibility.
By formally incorporating your two key prognostic factors, we can conserve resources while preserving power to detect meaningful SBP differences.
11.11 Example 3
Summary of Study Proposal
Purpose and Vision
This pilot study will evaluate weekly group esoteric healing as a supportive, non-invasive adjunct for individuals recovering from opioid addiction in Harlem. It is a collaboration between the National Association for Esoteric Healing (NAEH) and The Pillars Charity. If successful, it will inform a scalable volunteer training model for hospitals, hospices, prisons, and underserved communities.
Background & Rationale
Esoteric healing, based on Alice A. Bailey’s framework, seeks to realign the etheric, emotional, and mental bodies to stimulate inner resources for transformation. Addiction is viewed as a soul–personality misalignment. The pilot will explore whether group healing can:
Stabilize and uplift clients’ energy fields
Reduce anxiety, emotional reactivity, and mental confusion
Enhance inner peace, treatment receptivity, and resilience
Study Design
Exploratory, qualitative pilot with a mixed-method quasi-experimental component
Duration & Frequency
12–24 weeks (3–6 months)
Weekly 1-hour distant group healing sessions via Zoom
Participants
6–12 certified NAEH practitioners as healers
15–30 Pillars clients in outpatient and residential recovery programs
Methodology
Subjective Group Healing Model
Healers convene on Zoom to form a unified thought-form and follow a rotating protocol
Healer journals record intuitive impressions
Pillars staff provide qualitative feedback on client mood and engagement
Optional anonymous client surveys (monthly) capture mood, sleep, and stress
Controlled Group Design
30–40 participants randomized into active healing or waitlist control (15–20 per arm)
Control group receives healing after the trial
Pre- and post-study self-report surveys on stress, anxiety, emotional clarity, sleep, and spiritual connection
Staff observations and optional follow-up interviews or focus groups
Ethics Considerations
Informed consent with clear opt-in/out and waitlist assignment
Non-invasiveness ensures no harm or deprivation of standard care
Delayed-benefit design offers eventual healing to all
Anonymized data collection and voluntary participation
Anticipated Outcomes & Next Steps
Positive pilot findings will lead to:
Development of a formal esoteric healing training and certification track
Partnerships with additional community organizations, hospitals, and hospices
Publication of de-identified case summaries and energetic field reports to bridge esoteric practice with academic interest.
Critique of the Pilot Study Proposal
1. Purpose and Vision
The proposal’s ambition to integrate group esoteric healing into opioid recovery is commendable: it seeks innovative, low-cost supports for an underserved population. Linking with Harlem’s Pillars program and NAEH leverages community resources and existing expertise. However, the vision currently lacks clarity on measurable goals, making it hard to judge success beyond anecdotal uplift.
Suggestions:
Define 1–2 primary, measurable objectives (e.g., percentage reduction in self-reported stress or retention rates in recovery programs).
Articulate how the pilot’s results will inform a larger, rigorously controlled trial.
2. Background & Rationale
Positioning addiction as an energetic misalignment offers a novel lens, but stretches beyond mainstream scientific frameworks. The proposal cites esoteric theory without bridging to established psychophysiological or behavioral addiction models.
Strengths:
A holistic view of addiction may resonate with spiritually oriented clients.
Addresses emotional and mental dimensions often under-served by medical treatments.
Gaps:
No review of any empirical studies on esoteric healing’s impact on addiction or mental health.
Lacks discussion of plausible mechanisms or biomarkers (e.g., stress hormones, heart‐rate variability).
Recommendations:
Summarize pilot data or analogous studies (e.g., distant Reiki, therapeutic touch) that hint at measurable effects.
Outline a theoretical model linking esoteric energy shifts to neurobiological or psychological outcomes.
3. Study Design and Methodology
A. Qualitative Model (Methodology 1)
Relying on healer impressions and unsystematic staff feedback risks high subjectivity and expectancy bias. Client surveys are optional and sparse.
Improvements:
Make client surveys mandatory and administer weekly, not monthly.
Use validated questionnaires (e.g., Perceived Stress Scale, GAD-7) rather than open-ended mood checklists.
Structure healer journaling with a coded protocol to allow thematic analysis and inter-rater reliability.
B. Quasi-Experimental Design (Methodology 2)
Introducing a waitlist control is strong, but the proposal omits how randomization will be concealed or how blinding will be enforced.
Key Concerns:
No sham-healing control arm to account for group expectation effects.
Lack of power calculations to ensure adequate sample size for detecting even large subjective changes.
Zoom delivery to absent clients may weaken the “group field” concept and client engagement.
Enhancements:
Add a third arm with “mock” healing (identical process without esoteric intent) to estimate placebo effects.
Pre-register a clear randomization and allocation concealment protocol.
Perform a simple power analysis based on expected effect sizes (e.g., 0.5 SD in stress reduction).
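For reference, the power analysis suggested in the last point is a one-liner: a 0.5 SD effect at two-sided α = 0.05 and 80% power implies roughly 64 participants per arm (the effect size is the proposal's own assumption).

```python
# Minimal sketch: sample size implied by the proposal's assumed 0.5 SD effect.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n))   # ~64 participants per arm
```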
4. Outcome Measures
Qualitative
Staff and healer impressions: convert these into a semi-structured interview guide for deeper, comparable insights.
Quantitative
Client self-report: expand to include validated scales for stress, anxiety, craving intensity, sleep quality.
Objective data: consider tracking retention in Pillars programs, relapse rates, attendance, or biological markers (e.g., salivary cortisol).
5. Data Analysis Plan
Currently absent.
Specify thematic coding procedures for qualitative data (e.g., grounded theory or framework analysis).
For surveys, outline statistical tests (paired t-tests, mixed-effects models) and handling of missing data.
Describe criteria for declaring a “meaningful” change (e.g., minimal clinically important difference).
6. Ethics and Participant Safeguards
Ethical considerations are broadly sound: informed consent, delayed-benefit control, voluntary withdrawal.
Additional Points:
Ensure that waiting-list participants have access to all standard care and are not implicitly promised cures.
Clarify how confidentiality is maintained in group-level distant healing (no personal diagnoses or disclosures).
Preemptively address potential skepticism or stigma from clients unaccustomed to esoteric approaches.
7. Feasibility and Sustainability
Weekly Zoom healings may be convenient for practitioners but could disengage clients who never “attend” live.
Funding sources
Recommendations:
Pilot a short feasibility run (4 weeks) to assess client awareness of sessions and survey completion rates.
Explore hybrid models where at least one in-person orientation solidifies group cohesion.
Plan for volunteer turnover—detail a fidelity checklist so that new healers can deliver consistent protocols.
8. Anticipated Outcomes & Next Steps
The proposal envisions training tracks and broad rollout but lacks a roadmap from pilot data to scale-up.
Tie each anticipated anecdote or “energetic field report” to an actionable metric (e.g., 20% improvement in GAD-7).
Predefine thresholds for moving to a randomized, blinded RCT.
Engage an interdisciplinary advisory board—add addiction specialists, psychologists, and clinical trialists to refine methods.
Overall Assessment
This pilot is innovative and community-driven but needs stronger empirical scaffolding, clearer measurement strategies, and a rigorous control structure to yield actionable findings. By sharpening hypotheses, mandating validated quantitative measures, and instituting blinding/placebo controls, the study can transcend anecdote and begin to generate credible evidence on whether group esoteric healing can genuinely bolster addiction recovery.