
Interpretation of the FACIT Measures

Due to the evolving nature of QOL research, the best approach to interpreting data collected with a FACIT measure is to conduct a comprehensive literature search to determine the approaches taken by others and build upon that body of work.

Development

The majority of FACIT measures have undergone a standard scale development and validation methodology, which takes place in four phases: item generation, item reduction, scale construction, and psychometric evaluation. The scale development process involves considerable input from patients and expert health care providers, using a semi-structured interview designed to elicit personal experiences and educated opinions about how a disease, treatment, or condition may affect physical status, emotional well-being, functional well-being, family/social issues, sexuality/intimacy, work status, and future orientation. This process yields an exhaustive list of candidate items, which then undergo a series of reviews and reductions based on patient and expert ratings and item quality. A finite set of targeted concerns is then derived. Final candidate items are formatted with response choices compatible with a 5-point Likert-type scale and appended to the FACT-G.


Newly constructed FACIT subscales then undergo an initial assessment of reliability and validity using a sample of at least 50 patients. The validation design typically involves patient completion of a baseline assessment, a test-retest assessment 3–7 days later, and a third assessment 2–3 months later to demonstrate sensitivity to change over time. Relevant sociodemographic and treatment data is also collected and a battery of other measures administered at the baseline and 2–3 month retest to help determine convergent and divergent validity. A comprehensive analysis of the data gathered (including item response theory modeling when sample size allows) yields useful psychometric information and establishes initial reliability and validity of the scale.
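As a concrete illustration, the sketch below shows how initial reliability and validity evidence of this kind might be summarized in Python. The file name, column names, and the choice of a Pearson correlation are illustrative assumptions rather than part of the FACIT validation protocol; an intraclass correlation is often preferred for test-retest analyses in practice.

```python
# Illustrative sketch (not the official FACIT analysis code): summarizing
# test-retest reliability and convergent validity for a newly constructed
# subscale. The file name and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("validation_sample.csv")  # hypothetical validation dataset

# Test-retest reliability: association between baseline and the 3-7 day
# retest (Pearson correlation used here for simplicity; an intraclass
# correlation is often preferred in practice).
paired = df[["subscale_baseline", "subscale_retest"]].dropna()
retest_r, _ = stats.pearsonr(paired["subscale_baseline"], paired["subscale_retest"])

# Convergent validity: correlation with an established measure of a related
# construct administered at baseline (expected to be moderate to high).
paired2 = df[["subscale_baseline", "related_measure"]].dropna()
convergent_r, _ = stats.pearsonr(paired2["subscale_baseline"], paired2["related_measure"])

print(f"Test-retest r = {retest_r:.2f}; convergent validity r = {convergent_r:.2f}")
```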


Further details regarding the development and validation of specific FACIT measures can be found in the literature.

Reference Values

Reference values are summary values of a PRO instrument for a defined population, which can be a particular disease population or the general population. They are also often useful when generated for a particular political or geographical designation, e.g., at the country level. Such values can be useful for putting the scores of an individual or group into context. Typically, reference values include averages, measures of dispersion (e.g., standard deviation), ranges, or other aspects of the scores’ distributions. They are often reported for an overall sample and for key demographic groups (e.g., by age and sex). Reference values are most useful when they are estimated from a representative sample, whether of the general population or of a particular disease group. Reference values can be applied usefully in both research and clinical settings. There have been multiple reports of reference values for FACIT instruments. In addition to the FACT-G, reference values have been published for the FACT-General Population (FACT-GP; general population sample); the FACT Kidney Symptom Index instruments (FKSI; general population sample); the FACIT-Fatigue (general population sample); the FACT-Cognitive Function (FACT-Cog; healthy population); and the FACIT-Spiritual Well-Being Scale (FACIT-Sp-12). We recommend that these reference values be used for comparison to scores from future research.
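The sketch below illustrates how such reference values might be tabulated from a representative sample. The file and column names are hypothetical; published reference values should be used whenever they are available.

```python
# Illustrative sketch of tabulating reference values (n, mean, SD, quartiles)
# overall and by demographic subgroup; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("general_population_sample.csv")  # hypothetical reference sample

def summarize(scores: pd.Series) -> pd.Series:
    """Summary statistics typically reported as reference values."""
    return pd.Series({
        "n": scores.count(),
        "mean": scores.mean(),
        "sd": scores.std(),
        "p25": scores.quantile(0.25),
        "median": scores.median(),
        "p75": scores.quantile(0.75),
    })

overall = summarize(df["fact_g_total"])
by_age_sex = df.groupby(["age_group", "sex"])["fact_g_total"].apply(summarize).unstack()
print(overall, by_age_sex, sep="\n\n")
```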

Clinical and Other Anchors

Anchor variables are very useful tools to help interpret FACIT score differences and change. Anchors are external criterion variables on which the magnitude of change on the construct of interest is well-understood and therefore can be used to “anchor” an interpretation of difference or change on the PRO of interest. Anchors are useful for multiple important applications in PRO-based research. First, anchors are used to test known-groups validity and responsiveness to change in the process of establishing a PRO’s psychometric properties. Second, and more germane to the interpretation of FACIT measures, there is now general consensus that anchor-based approaches are most appropriate for establishing thresholds for important differences and important changes at the group level. In this case, “differences” refer to cross-sectional, between-groups comparisons, and “changes”
refer to within-group comparisons over time. Finally, anchoring PROs to clinically familiar differences and changes can help translate their meaning to patients and clinicians. Multiple types of anchors are useful for establishing important differences and changes. There is significant focus on patient-reported anchors, which have the advantage of utilizing the same assessment method and typically assess changes that are meaningful to patients. In addition, when a patient-reported anchor represents the same construct as the PRO, we have more confidence that the difference or change estimates derived from an analysis using the anchor are relevant to the PRO. However, other types of anchors may be useful as well, especially in cancer research. For example, clinical variables that are not the same construct as the PRO but have a demonstrable relationship with it, such as adverse events, tumor response, or progression, can also serve as anchors. Any anchor should be sufficiently correlated with the PRO to justify its use: we require a minimum correlation of 0.30, and correlations above 0.40 are preferred because we have observed a paradox by which anchors with lower correlations tend to produce smaller estimates of important difference or change. Because this is essentially an exercise in acquiring multiple converging points of evidence, we advise use of multiple anchors that include patient report, clinician report, and objective clinical metrics (e.g., laboratory values, radiographic data).
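As an illustration of this screening step, the sketch below checks candidate anchors against the correlation thresholds noted above. The data file, column names, and list of anchors are hypothetical.

```python
# Illustrative sketch of screening candidate anchors against the minimum
# correlation of 0.30 noted above (0.40+ preferred); the data file, column
# names, and anchor list are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("trial_data.csv")  # hypothetical: PRO change plus anchor variables
candidate_anchors = ["pgic", "clinician_global_rating", "tumor_response"]

for anchor in candidate_anchors:
    # Spearman correlation accommodates ordinal anchors such as the PGIC.
    rho, _ = stats.spearmanr(df["fact_change"], df[anchor], nan_policy="omit")
    if abs(rho) >= 0.40:
        verdict = "preferred"
    elif abs(rho) >= 0.30:
        verdict = "acceptable"
    else:
        verdict = "insufficiently correlated"
    print(f"{anchor}: Spearman rho = {rho:.2f} ({verdict})")
```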

Important Differences and Change

At the group level, determining the level of difference that is considered important to patients or other stakeholders, over and above statistical significance, can enhance interpretation because, with large sample sizes, even trivial differences can be statistically significant. Important difference estimates can be used to determine whether patient groups differ in HRQoL, and may be especially useful for planning future studies by providing a basis for power analyses. Similarly, important change estimates can indicate the amount of change that patients find meaningful or that indicates clinically important improvements or decrements. A previous summary of important differences and changes on FACIT instruments found relative consistency in the magnitude of important differences as a proportion of total scale points. In summary, the following ranges for important differences were found: FACT-G total: 4–7% of total scores (3–7 units); FACT-G subscales: 7–11% (2–3 units); symptom-targeted instrument totals (e.g., Total FACT-Anemia, Total FACT-Breast, Total FACT-Colorectal, Total FACT-Head and Neck): 4–8% (5–12 units); and trial outcome indexes (e.g., Fatigue, Anemia, Biological Response Modifiers, Breast, Colorectal, Lung): 5–7% (4–7 units). This was a thorough aggregation of data up to 2005, but many studies estimating important differences for FACIT instruments, especially newer instruments or non-cancer populations, have been published since that time. Across the 15 additional years of data collected since then, these 2005 estimates have held true.
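For example, a published important-difference estimate can serve as the target effect in a power analysis, as sketched below. The 5-point difference and 15-point standard deviation are placeholders rather than recommended values.

```python
# Illustrative sketch: using a published important-difference estimate as the
# target effect in a power calculation for a two-arm comparison. The 5-point
# difference and 15-point SD are placeholders, not recommended values.
from statsmodels.stats.power import TTestIndPower

important_difference = 5.0   # hypothetical important difference on a FACT total score
sd = 15.0                    # hypothetical standard deviation from reference data
effect_size = important_difference / sd  # standardized difference (Cohen's d)

n_per_arm = TTestIndPower().solve_power(effect_size=effect_size,
                                        alpha=0.05, power=0.80)
print(f"About {n_per_arm:.0f} patients per arm for 80% power at alpha = 0.05")
```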


We recommend that researchers consult the literature for up-to-date and appropriate important difference or change estimates for any given FACT or FACIT scale of interest. To implement this recommendation, it is important to use estimates of important change that come from longitudinal studies actually focusing on change over time in the FACT or FACIT scale of interest, rather than substituting a cross-sectional estimate of the important difference where an estimate of important change is needed. There are a few reasons to distinguish between change and difference estimates. First, analyses to estimate important change typically use change scores (i.e., the difference between baseline and a post-baseline follow-up), which may be distributed differently than FACT/FACIT scale scores at a single cross-sectional assessment. Second, the analyses used to estimate important change often differ from those used to estimate important differences.

Identifying important changes in terms of meaningfulness to patients is required to support the use of FACT/FACIT instruments in regulatory applications. The FDA, for one, has prioritized estimating meaningful change thresholds for PROs using patient-reported anchors that measure the same construct or domain as the PRO to be used as an endpoint in trials to show treatment benefit. A very common anchor for this kind of application is the patient global impression of change (PGIC), which retrospectively asks patients how much they have changed on a domain of interest over a clinically relevant period of time, using a set of discrete response options to characterize this change. The difference in mean PRO change scores can then be examined across the PGIC response options to determine the amount of change on the PRO associated with meaningful categories as defined on the PGIC, e.g., the difference in mean PRO change scores between patients reporting being “about the same” and “a little worse” on the PGIC anchor. To help interpret these differences, empirical cumulative distribution function (eCDF) plots can be created to represent change on the PRO within each anchor category. A useful alternative to the PGIC may be to examine prospective change in a similar item, the patient global impression of severity (PGIS), which assesses the level of symptom severity at a given time point.
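The sketch below illustrates one way such an anchor-based analysis and its eCDF display might be implemented. The data file and column names are hypothetical, and the analysis choices shown are not the only acceptable ones.

```python
# Illustrative sketch of an anchor-based analysis: mean change on the PRO
# within each PGIC category, plus eCDF curves of change per category.
# The data file and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.distributions.empirical_distribution import ECDF

df = pd.read_csv("trial_data.csv")  # hypothetical: 'fact_change' and 'pgic' columns

# Mean (SD) change on the PRO within each PGIC response category.
print(df.groupby("pgic")["fact_change"].agg(["count", "mean", "std"]))

# Empirical cumulative distribution of PRO change within each anchor category.
fig, ax = plt.subplots()
for category, group in df.groupby("pgic"):
    ecdf = ECDF(group["fact_change"].dropna())
    ax.plot(ecdf.x, ecdf.y, drawstyle="steps-post", label=str(category))
ax.set_xlabel("Change in FACT/FACIT score")
ax.set_ylabel("Cumulative proportion of patients")
ax.legend(title="PGIC response")
plt.show()
```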

Responder Definition

An important step in interpreting a PRO is to identify the responder definition, or the amount of change at the individual level that should be interpreted as treatment benefit. Used alone, group-level estimates of change on PROs may not be appropriate for classifying individuals as having changed. Identifying responders to treatment requires determining whether the change for an individual patient is significant, and group-level estimates of change (e.g., from important difference or change analyses) may underestimate the amount of change required. This view contrasts with the current regulatory focus on defining responders in terms of meaningful change based on a patient-reported anchor; such methods are necessarily group-based, focusing on identifying the average change for the group of individuals who said they changed on an anchor. Other authors have instead argued that “a minimum standard for saying an individual has responded (improved) should include that the change in score is statistically significant.” Because statistically significant change at the individual level often requires large changes, it may also be meaningful to the individual.
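One commonly used criterion for individual-level statistical significance is a reliable change index in the spirit of Jacobson and Truax (see Further Reading). The sketch below illustrates the idea; the scores, standard deviation, and reliability values are placeholders rather than FACT/FACIT-specific estimates.

```python
# Illustrative sketch of a reliable change index (RCI) in the spirit of
# Jacobson & Truax (1991; see Further Reading): an individual's change is
# treated as statistically significant when |RCI| exceeds 1.96. The score,
# SD, and reliability values below are placeholders, not FACIT estimates.
import math

def reliable_change_index(baseline: float, follow_up: float,
                          sd_baseline: float, reliability: float) -> float:
    sem = sd_baseline * math.sqrt(1.0 - reliability)  # standard error of measurement
    se_diff = sem * math.sqrt(2.0)                    # SE of a difference score
    return (follow_up - baseline) / se_diff

rci = reliable_change_index(baseline=60.0, follow_up=72.0,
                            sd_baseline=15.0, reliability=0.85)
label = "statistically reliable change" if abs(rci) > 1.96 else "not a reliable change"
print(f"RCI = {rci:.2f} ({label})")
```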

Interpretation

Higher scores for the scales and subscales indicate better quality of life. Average FACT-G scores for a group of patients can be compared to normative data to determine the HRQOL of the patients relative to the general U.S. population. These comparisons facilitate meaningful interpretation of HRQOL in patient populations. Though the body of literature is constantly evolving, normative data typically does not exist for disease-, symptom-, or condition-specific subscales.
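The sketch below illustrates one simple way such a comparison to normative data might be made. The patient scores and reference mean are placeholders; published normative values should be substituted.

```python
# Illustrative sketch of comparing a patient group's mean score to a
# general-population reference mean. The patient scores and reference mean
# are placeholders; published normative values should be substituted.
import numpy as np
from scipy import stats

patient_scores = np.array([68.0, 75.5, 81.0, 62.0, 70.0, 79.0, 66.5, 73.0])
reference_mean = 80.0  # placeholder for a published general-population mean

# One-sample t-test against the reference mean, plus a standardized
# difference expressed in SD units of the patient sample.
t_stat, p_value = stats.ttest_1samp(patient_scores, popmean=reference_mean)
cohens_d = (patient_scores.mean() - reference_mean) / patient_scores.std(ddof=1)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, standardized difference = {cohens_d:.2f}")
```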


FACIT measures have been shown to be responsive to change in both clinical and observational studies. Minimally important differences (MIDs) for scores of scales and subscales for some measures are available in the literature. An MID is the "smallest difference in score in the domain of interest that patients perceive as important, either beneficial or harmful, and that would lead the clinician to consider a change in the patient's management." MID estimates may vary across patients and possibly across patient groups; thus, ranges of MIDs have been identified for some scales, and the current literature should be consulted for the most appropriate estimates.

 

For more information about any of the above, please refer to:

Webster, K.A., Peipert, J.D., Lent, L.F., Bredle, J., Cella, D. (2022). The Functional Assessment of Chronic Illness Therapy (FACIT) Measurement System: Guidance for Use in Research and Clinical Practice. In: Kassianos, A.P. (eds) Handbook of Quality of Life in Cancer. Springer, Cham. https://doi.org/10.1007/978-3-030-84702-9_6

Further Reading


T. Pearman, B. Yanez, J. Peipert, K. Wortman, J. Beaumont, and D. Cella, "Ambulatory cancer and US general population reference values and cutoff scores for the functional assessment of cancer therapy," Cancer, vol. 120, no. 18, pp. 2902-2909, 2014.

 

P. S. Brucker, K. Yost, J. Cashy, K. Webster, and D. Cella, "General population and cancer patient norms for the Functional Assessment of Cancer Therapy-General (FACT-G)," Evaluation & the health professions, vol. 28, no. 2, pp. 192-211, 2005.

 

Z. Butt, J. Peipert, K. Webster, C. Chen, and D. Cella, "General population norms for the functional assessment of cancer therapy–Kidney Symptom Index (FKSI)," Cancer, vol. 119, no. 2, pp. 429-437, 2013.

B. Holzner et al., "Normative data for functional assessment of cancer therapy general scale and its use for the interpretation of quality of life scores in cancer survivors," Acta Oncologica, vol. 43, no. 2, pp. 153-160, 2004.

M. Janda, T. DiSipio, C. Hurst, D. Cella, and B. Newman, "The Queensland cancer risk study: general population norms for the Functional Assessment of Cancer Therapy–General (FACT‐G)," Psycho‐Oncology, vol. 18, no. 6, pp. 606-614, 2009.

A.-S. L. Bagge, A. Carlander, C. Fahlke, and R. O. Bagge, "Health-Related Quality of Life (FACT-GP) in General Swedish Population," European Journal of Surgical Oncology, vol. 46, no. 2, pp. e7-e8, 2020.

I. Montan, B. Löwe, D. Cella, A. Mehnert, and A. Hinz, "General population norms for the functional assessment of chronic illness therapy (FACIT)-Fatigue Scale," Value in Health, vol. 21, no. 11, pp. 1313-1321, 2018.

D. Cella, J.-S. Lai, C. H. Chang, A. Peterman, and M. Slavin, "Fatigue in cancer patients compared with fatigue in the general United States population," Cancer, vol. 94, no. 2, pp. 528-538, 2002.

D. Cella, M. J. Zagari, C. Vandoros, D. D. Gagnon, H.-J. Hurtz, and J. W. Nortier, "Epoetin alfa treatment results in clinically significant improvements in quality of life in anemic cancer patients when referenced to the general population," Journal of Clinical Oncology, vol. 21, no. 2.

M. Lange, N. Heutte, N. Morel, F. Eustache, F. Joly, and B. Giffard, "Cognitive complaints in cancer: The French version of the Functional Assessment of Cancer Therapy–Cognitive Function (FACT-Cog), normative data from a healthy population," Neuropsychological rehabilitation, vol. 26, no. 3, pp. 392-409, 2016.

 

J.-S. Lai et al., "Parent-perceived child cognitive function: results from a sample drawn from the US general population," Child's Nervous System, vol. 27, no. 2, pp. 285-293, 2011.
 

A. R. Munoz, J. M. Salsman, K. D. Stein, and D. Cella, "Reference values of the Functional Assessment of Chronic Illness Therapy‐Spiritual Well‐Being: A report from the American Cancer Society's studies of cancer survivors," Cancer, vol. 121, no. 11, pp. 1838-1844, 2015.

 

G. R. Norman, F. G. Sridhar, G. H. Guyatt, and S. D. Walter, "Relation of distribution-and anchor-based approaches in interpretation of changes in health-related quality of life," Medical care, pp. 1039-1047, 2001.

D. Cella, D. T. Eton, J.-S. Lai, A. H. Peterman, and D. E. Merkel, "Combining anchor and distribution-based methods to derive minimal clinically important differences on the Functional Assessment of Cancer Therapy (FACT) anemia and fatigue scales," Journal of pain and symptom management, vol. 24, no. 6, pp. 547-561, 2002.

R. D. Hays and D. Revicki, "Reliability and validity (including responsiveness)," in Assessing Quality of Life in Clinical Trials: Methods and Practice, P. Fayers and R. Hays, Eds., 2nd ed. Oxford, NY: Oxford University Press, 2005, pp. 525-539.

 

T. Devji et al., "Evaluating the credibility of anchor based estimates of minimal important differences for patient reported outcomes: instrument development and reliability study," BMJ, vol. 369, 2020.

K. J. Yost and D. T. Eton, "Combining distribution-and anchor-based approaches to determine minimally important differences: the FACIT experience," Evaluation & the health professions, vol. 28, no. 2, pp. 172-191, 2005.

 

D. Victorson, M. Soni, and D. Cella, "Metaanalysis of the correlation between radiographic tumor response and patient‐reported outcomes," Cancer, vol. 106, no. 3, pp. 494-504, 2006.

 

P. M. Fayers and R. D. Hays, "Don’t middle your MIDs: regression to the mean shrinks estimates of minimally important differences," Quality of Life Research, vol. 23, no. 1, pp. 1-4, 2014.

 

J. M. Salsman, J. L. Beaumont, K. Wortman, Y. Yan, J. Friend, and D. Cella, "Brief versions of the FACIT-fatigue and FAACT subscales for patients with non-small cell lung cancer cachexia," Supportive Care in Cancer, vol. 23, no. 5, pp. 1355-1364, 2015.

 

P. Rebelo, A. Oliveira, L. Andrade, C. Valente, and A. Marques, "Minimal Clinically Important Differences for Patient-Reported Outcome Measures of Fatigue in Patients With COPD Following Pulmonary Rehabilitation," Chest, vol. 158, no. 2, pp. 550-561, 2020.

 

S. N. Garland et al., "Prospective evaluation of the reliability, validity, and minimally important difference of the functional assessment of cancer therapy‐gastric (FACT‐Ga) quality‐of‐life instrument," Cancer, vol. 117, no. 6, pp. 1302-1312, 2011.

 

J. D. Peipert et al., "Validation of the Functional Assessment of Cancer Therapy–Leukemia instrument in patients with acute myeloid leukemia who are not candidates for intensive therapy," Cancer, vol. 126, no. 15, pp. 3542-3551, 2020.

 

M. T. King, M. Agar, D. C. Currow, J. Hardy, B. Fazekas, and N. McCaffrey, "Assessing quality of life in palliative care settings: head-to-head comparison of four patient-reported outcome measures (EORTC QLQ-C15-PAL, FACT-Pal, FACT-Pal-14, FACT-G7)," Supportive Care in Cancer, vol. 28, no. 1, pp. 141-153, 2020.

 

S. Yount et al., "A randomized validation study comparing embedded versus extracted FACT Head and Neck Symptom Index scores," Quality of Life Research, vol. 16, no. 10, pp. 1615-1626, 2007.

 

D. Cella et al., "Validity of the FACT Hepatobiliary (FACT-Hep) questionnaire for assessing disease-related symptoms and health-related quality of life in patients with metastatic pancreatic cancer," Quality of Life Research, vol. 22, no. 5, pp. 1105-1112, 2013.

 

D. Cella et al., "What is a clinically meaningful change on the functional assessment of Cancer therapy–lung (FACT-L) questionnaire?: results from eastern cooperative oncology group (ECOG) study 5592," Journal of clinical epidemiology, vol. 55, no. 3, pp. 285-295, 2002.

 

D. Cella, M. B. Nichol, D. Eton, J. B. Nelson, and P. Mulani, "Estimating clinically meaningful changes for the Functional Assessment of Cancer Therapy—Prostate: results from a clinical trial of patients with metastatic hormone-refractory prostate cancer," Value in Health, vol. 12, no. 1, pp. 124-129, 2009.

J. Steel, D. T. Eton, D. Cella, M. Olek, and B. Carr, "Clinically meaningful changes in health-related quality of life in patients diagnosed with hepatobiliary carcinoma," Annals of Oncology, vol. 17, no. 2, pp. 304-312, 2006.

R. Jaeschke, J. Singer, and G. H. Guyatt, "Measurement of health status. Ascertaining the minimal clinically important difference," Controlled Clinical Trials, vol. 10, no. 4, pp. 407-415, 1989, doi: 10.1016/0197-2456(89)90005-6.

 

H. L. Cheng et al., "Psychometric testing of the Functional Assessment of Cancer Therapy/Gynecologic Oncology Group—Neurotoxicity (FACT/GOG-Ntx) subscale in a longitudinal study of cancer patients treated with chemotherapy," Health and quality of life outcomes, vol. 18, no. 1, pp. 1-9, 2020.

 

S.-F. Wong et al., "A prospective study to validate the functional assessment of cancer therapy (FACT) for epidermal growth factor receptor inhibitor (EGFRI)-induced dermatologic toxicities FACT-EGFRI 18 questionnaire: SWOG S1013," Journal of patient-reported outcomes, vol. 4, no. 1, pp. 1-12, 2020.

U.S. Food and Drug Administration, "Discussion Document for Patient-Focused Drug Development Public Workshop on Guidance 4: Incorporating Clinical Outcome Assessments into Endpoints for Regulatory Decision-Making," United States Department of Health and Human Services, Silver Spring, MD, 2019.

 

U.S. Food and Drug Administration, "Discussion Document for Patient-Focused Drug Development Public Workshop on Guidance 3: Select, Develop or Modify Fit-for-Purpose Clinical Outcome Assessments," United States Department of Health and Human Services, Silver Spring, MD, 2018.

R. E. Jensen et al., "Validation of the PROMIS physical function measures in a diverse US population-based cohort of cancer patients," Quality of life research, vol. 24, no. 10, pp. 2333-2344, 2015.

 

R. E. Jensen et al., "Responsiveness of 8 Patient‐Reported Outcomes Measurement Information System (PROMIS) measures in a large, community‐based cancer study cohort," Cancer, vol. 123, no. 2, pp. 327-335, 2017.

 

C. D. Coon and K. F. Cook, "Moving from significance to real-world meaning: methods for interpreting change in clinical outcome assessment scores," Quality of Life Research, vol. 27, no. 1, pp. 33-40, 2018.

 

R. D. Hays and J. D. Peipert, "Minimally Important Differences Do Not Identify Responders to Treatment," JOJ Sciences, Juniper Publishers Inc., vol. 1, no. 1, pp. 4-5, 2018.

 

G. R. Norman, P. Stratford, and G. Regehr, "Methodological problems in the retrospective computation of responsiveness to change: the lesson of Cronbach," Journal of clinical epidemiology, vol. 50, no. 8, pp. 869-879, 1997.

 

L. D. McLeod, C. D. Coon, S. A. Martin, S. E. Fehnel, and R. D. Hays, "Interpreting patient-reported outcome results: US FDA guidance and emerging methods," Expert review of pharmacoeconomics & outcomes research, vol. 11, no. 2, pp. 163-169, 2011.

 

R. D. Hays, M. Brodsky, M. F. Johnston, K. L. Spritzer, and K.-K. Hui, "Evaluating the statistical significance of health-related quality-of-life change in individual patients," Evaluation & the Health Professions, vol. 28, no. 2, pp. 160-171, 2005.

 

M. T. King, A. C. Dueck, and D. A. Revicki, "Can methods developed for interpreting group-level patient-reported outcome data be applied to individual patient management?," Medical Care, vol. 57, no. 5, suppl. 1, p. S38, 2019.

 

N. S. Jacobson and P. Truax, "Clinical significance: A statistical approach to defining meaningful change in psychotherapy research," Journal of Consulting and Clinical Psychology, vol. 59, no. 1, pp. 12-19, 1991, doi: 10.1037/0022-006X.59.1.12.

 

R. D. Hays, K. L. Spritzer, C. D. Sherbourne, G. W. Ryan, and I. D. Coulter, "Group and individual-level change on health-related quality of life in chiropractic patients with chronic low back or neck pain," Spine, vol. 44, no. 9, p. 647, 2019.
