Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and Elaboration. Translation to Russian



Much medical research is observational. The reporting of observational studies is often of insufficient quality. Poor reporting hampers the assessment of the strengths and weaknesses of a study and the generalisability of its results. Taking into account empirical evidence and theoretical considerations, a group of methodologists, researchers, and editors developed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) recommendations to improve the quality of reporting of observational studies. The STROBE Statement consists of a checklist of 22 items, which relate to the title, abstract, introduction, methods, results and discussion sections of articles. Eighteen items are common to cohort studies, case-control studies and cross-sectional studies and four are specific to each of the three study designs. The STROBE Statement provides guidance to authors about how to improve the reporting of observational studies and facilitates critical appraisal and interpretation of studies by reviewers, journal editors and readers. This explanatory and elaboration document is intended to enhance the use, understanding, and dissemination of the STROBE Statement. The meaning and rationale for each checklist item are presented. For each item, one or several published examples and, where possible, references to relevant empirical studies and methodological literature are provided. Examples of useful flow diagrams are also included. The STROBE Statement, this document, and the associated Web site (http://www.) should be helpful resources to improve reporting of observational research.

This article is a reprint, with a Russian translation, of the original, which is available here: Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and Elaboration. PLoS Med. 2007;4(10):e297. doi: 10.1371/journal.pmed.0040297.



Abbreviations: CI, confidence interval; RERI, Relative Excess Risk from Interaction; RR, relative risk; STROBE, Strengthening the Reporting of Observational Studies in Epidemiology


Rational health care practices require knowledge about the aetiology and pathogenesis, diagnosis, prognosis and treatment of diseases. Randomised trials provide valuable evidence about treatments and other interventions. However, much of clinical or public health knowledge comes from observational research [1]. About nine of ten research papers published in clinical speciality journals describe observational research [2, 3].

The STROBE Statement

Reporting of observational research is often not detailed and clear enough to assess the strengths and weaknesses of the investigation [4, 5]. To improve the reporting of observational research, we developed a checklist of items that should be addressed: the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement (Table 1). Items relate to title, abstract, introduction, methods, results and discussion sections of articles. The STROBE Statement has recently been published in several journals [6]. Our aim is to ensure clear presentation of what was planned, done, and found in an observational study. We stress that the recommendations are not prescriptions for setting up or conducting studies, nor do they dictate methodology or mandate a uniform presentation.

STROBE provides general reporting recommendations for descriptive observational studies and studies that investigate associations between exposures and health outcomes. STROBE addresses the three main types of observational studies: cohort, case-control and cross-sectional studies. Authors use diverse terminology to describe these study designs. For instance, ‘follow-up study’ and ‘longitudinal study’ are used as synonyms for ‘cohort study’, and ‘prevalence study’ as synonymous with ‘cross-sectional study’. We chose the present terminology because it is in common use. Unfortunately, terminology is often used incorrectly [7] or imprecisely [8]. In Box 1 we describe the hallmarks of the three study designs.

The Scope of Observational Research

Observational studies serve a wide range of purposes: from reporting a first hint of a potential cause of a disease, to verifying the magnitude of previously reported associations. Ideas for studies may arise from clinical observations or from biologic insight. Ideas may also arise from informal looks at data that lead to further explorations. Like a clinician who has seen thousands of patients, and notes one that strikes her attention, the researcher may note something special in the data. Adjusting for multiple looks at the data may not be possible or desirable [9], but further studies to confirm or refute initial observations are often needed [10]. Existing data may be used to examine new ideas about potential causal factors, and may be sufficient for rejection or confirmation. In other instances, studies follow that are specifically designed to overcome potential problems with previous reports. The latter studies will gather new data and will be planned for that purpose, in contrast to analyses of existing data. This leads to diverse viewpoints, e.g., on the merits of looking at subgroups or the importance of a predetermined sample size. STROBE tries to accommodate these diverse uses of observational research - from discovery to refutation or confirmation. Where necessary we will indicate in what circumstances specific recommendations apply.

How to Use This Paper

This paper is linked to the shorter STROBE paper that introduced the items of the checklist in several journals [6], and forms an integral part of the STROBE Statement. Our intention is to explain how to report research well, not how research should be done. We offer a detailed explanation for each checklist item. Each explanation is preceded by an example of what we consider transparent reporting. This does not mean that the study from which the example was taken was uniformly well reported or well done; nor does it mean that its findings were reliable, in the sense that they were later confirmed by others: it only means that this particular item was well reported in that study. In addition to explanations and examples we included Boxes 1-8 with supplementary information. These are intended for readers who want to refresh their memories about some theoretical points, or be quickly informed about technical background details. A full understanding of these points may require studying the textbooks or methodological papers that are cited.

STROBE recommendations do not specifically address topics such as genetic linkage studies, infectious disease modelling or case reports and case series [11, 12]. As many of the key elements in STROBE apply to these designs, authors who report such studies may nevertheless find our recommendations useful. For authors of observational studies that specifically address diagnostic tests, tumour markers and genetic associations, STARD [13], REMARK [14], and STREGA [15] recommendations may be particularly useful.

The Items in the STROBE Checklist

We now discuss and explain the 22 items in the STROBE checklist (Table 1), and give published examples for each item. Some examples have been edited by removing citations or spelling out abbreviations. Eighteen items apply to all three study designs whereas four are design-specific. Starred items (for example item 8*) indicate that the information should be given separately for cases and controls in case-control studies, or exposed and unexposed groups in cohort and cross-sectional studies. We advise authors to address all items somewhere in their paper, but we do not prescribe a precise location or order. For instance, we discuss the reporting of results under a number of separate items, while recognizing that authors might address several items within a single section of text or in a table.

The Items


1 (a). Indicate the study’s design with a commonly used term in the title or the abstract.


‘‘Leukaemia incidence among workers in the shoe and boot manufacturing industry: a case-control study’’ [18].


Readers should be able to easily identify the design that was used from the title or abstract. An explicit, commonly used term for the study design also helps ensure correct indexing of articles in electronic databases [19, 20].

Table 1. The STROBE Statement—Checklist of Items That Should Be Addressed in Reports of Observational Studies

Title and abstract (Item 1)

(a) Indicate the study's design with a commonly used term in the title or the abstract

(b) Provide in the abstract an informative and balanced summary of what was done and what was found

Introduction

Background/rationale (Item 2)

Explain the scientific background and rationale for the investigation being reported

Objectives (Item 3)

State specific objectives, including any prespecified hypotheses

Methods

Study design (Item 4)

Present key elements of study design early in the paper

Setting (Item 5)

Describe the setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection

Participants (Item 6)

(a) Cohort study—Give the eligibility criteria, and the sources and methods of selection of participants. Describe methods of follow-up

Case-control study—Give the eligibility criteria, and the sources and methods of case ascertainment and control selection. Give the rationale for the choice of cases and controls

Cross-sectional study—Give the eligibility criteria, and the sources and methods of selection of participants

(b) Cohort study—For matched studies, give matching criteria and number of exposed and unexposed

Case-control study—For matched studies, give matching criteria and the number of controls per case

Variables (Item 7)

Clearly define all outcomes, exposures, predictors, potential confounders, and effect modifiers. Give diagnostic criteria, if applicable

Data sources/measurement (Item 8*)

For each variable of interest, give sources of data and details of methods of assessment (measurement). Describe comparability of assessment methods if there is more than one group

Bias (Item 9)

Describe any efforts to address potential sources of bias

Study size (Item 10)

Explain how the study size was arrived at

Quantitative variables (Item 11)

Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen, and why

Statistical methods (Item 12)

(a) Describe all statistical methods, including those used to control for confounding

(b) Describe any methods used to examine subgroups and interactions

(c) Explain how missing data were addressed

(d) Cohort study—If applicable, explain how loss to follow-up was addressed

Case-control study—If applicable, explain how matching of cases and controls was addressed

Cross-sectional study—If applicable, describe analytical methods taking account of sampling strategy

(e) Describe any sensitivity analyses

Results

Participants (Item 13*)

(a) Report the numbers of individuals at each stage of the study—e.g., numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analysed

(b) Give reasons for non-participation at each stage

(c) Consider use of a flow diagram

Descriptive data (Item 14*)

(a) Give characteristics of study participants (e.g., demographic, clinical, social) and information on exposures and potential confounders

(b) Indicate the number of participants with missing data for each variable of interest

(c) Cohort study—Summarise follow-up time (e.g., average and total amount)

Outcome data (Item 15*)

Cohort study—Report numbers of outcome events or summary measures over time

Case-control study—Report numbers in each exposure category, or summary measures of exposure

Cross-sectional study—Report numbers of outcome events or summary measures

Main results (Item 16)

(a) Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (e.g., 95% confidence interval). Make clear which confounders were adjusted for and why they were included

(b) Report category boundaries when continuous variables were categorized

(c) If relevant, consider translating estimates of relative risk into absolute risk for a meaningful time period

Other analyses (Item 17)

Report other analyses done—e.g., analyses of subgroups and interactions, and sensitivity analyses

Discussion

Key results (Item 18)

Summarise key results with reference to study objectives

Limitations (Item 19)

Discuss limitations of the study, taking into account sources of potential bias or imprecision. Discuss both direction and magnitude of any potential bias

Interpretation (Item 20)

Give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence

Generalisability (Item 21)

Discuss the generalisability (external validity) of the study results

Other information

Funding (Item 22)

Give the source of funding and the role of the funders for the present study and, if applicable, for the original study on which the present article is based

*Give such information separately for cases and controls in case-control studies, and, if applicable, for exposed and unexposed groups in cohort and cross-sectional studies.

Note: An Explanation and Elaboration article discusses each checklist item and gives methodological background and published examples of transparent reporting. The STROBE checklist is best used in conjunction with this article (freely available on the Web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology). Separate versions of the checklist for cohort, case-control, and cross-sectional studies are available on the STROBE Web site at http://www.

doi:10.1371/journal.pmed.0040297.t001


Box 1. Main study designs covered by STROBE

Cohort, case-control, and cross-sectional designs represent different approaches of investigating the occurrence of health-related events in a given population and time period. These studies may address many types of health-related events, including disease or disease remission, disability or complications, death or survival, and the occurrence of risk factors.

In cohort studies, the investigators follow people over time. They obtain information about people and their exposures at baseline, let time pass, and then assess the occurrence of outcomes. Investigators commonly make contrasts between individuals who are exposed and not exposed or among groups of individuals with different categories of exposure. Investigators may assess several different outcomes, and examine exposure and outcome variables at multiple points during follow-up. Closed cohorts (for example birth cohorts) enrol a defined number of participants at study onset and follow them from that time forward, often at set intervals up to a fixed end date. In open cohorts the study population is dynamic: people enter and leave the population at different points in time (for example inhabitants of a town). Open cohorts change due to deaths, births, and migration, but the composition of the population with regard to variables such as age and gender may remain approximately constant, especially over a short period of time. In a closed cohort cumulative incidences (risks) and incidence rates can be estimated; when exposed and unexposed groups are compared, this leads to risk ratio or rate ratio estimates. Open cohorts estimate incidence rates and rate ratios.
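The risk and rate measures described above can be illustrated with a minimal Python sketch. All counts and person-years below are hypothetical, invented purely for illustration; they do not come from any study discussed in this article:

```python
# Hypothetical closed cohort: 1,000 exposed and 1,000 unexposed
# participants followed for 10 years (illustrative numbers only).
exposed_cases, exposed_n, exposed_person_years = 60, 1000, 9700
unexposed_cases, unexposed_n, unexposed_person_years = 30, 1000, 9850

# Cumulative incidence (risk) in each group, and the risk ratio
risk_exposed = exposed_cases / exposed_n          # 0.06
risk_unexposed = unexposed_cases / unexposed_n    # 0.03
risk_ratio = risk_exposed / risk_unexposed        # approx. 2.0

# Incidence rates (events per person-year) and the rate ratio,
# the measure an open cohort would estimate directly
rate_exposed = exposed_cases / exposed_person_years
rate_unexposed = unexposed_cases / unexposed_person_years
rate_ratio = rate_exposed / rate_unexposed        # approx. 2.03

print(f"risk ratio = {risk_ratio:.2f}, rate ratio = {rate_ratio:.2f}")
```

Note that the two ratios differ slightly even in this toy example, because person-time accrual differs between the groups.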

In case-control studies, investigators compare exposures between people with a particular disease outcome (cases) and people without that outcome (controls). Investigators aim to collect cases and controls that are representative of an underlying cohort or a cross-section of a population. That population can be defined geographically, but also more loosely as the catchment area of health care facilities. The case sample may be 100% or a large fraction of available cases, while the control sample usually is only a small fraction of the people who do not have the pertinent outcome. Controls represent the cohort or population of people from which the cases arose. Investigators calculate the ratio of the odds of exposures to putative causes of the disease among cases and controls (see Box 7). Depending on the sampling strategy for cases and controls and the nature of the population studied, the odds ratio obtained in a case-control study is interpreted as the risk ratio, rate ratio or (prevalence) odds ratio [16, 17]. The majority of published case-control studies sample open cohorts and so allow direct estimations of rate ratios.
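The ratio of exposure odds described above can be sketched in a few lines of Python. The counts are invented for illustration and are not taken from any study cited here:

```python
# Hypothetical case-control data: exposure status among
# 200 cases and 400 controls (illustrative counts only).
cases_exposed, cases_unexposed = 80, 120
controls_exposed, controls_unexposed = 100, 300

# Odds of exposure among cases and among controls
odds_cases = cases_exposed / cases_unexposed           # approx. 0.67
odds_controls = controls_exposed / controls_unexposed  # approx. 0.33

# The exposure odds ratio equals the cross-product ratio (a*d)/(b*c)
odds_ratio = odds_cases / odds_controls                # approx. 2.0
print(f"odds ratio = {odds_ratio:.2f}")
```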

In cross-sectional studies, investigators assess all individuals in a sample at the same point in time, often to examine the prevalence of exposures, risk factors or disease. Some cross-sectional studies are analytical and aim to quantify potential causal associations between exposures and disease. Such studies may be analysed like a cohort study by comparing disease prevalence between exposure groups. They may also be analysed like a case-control study by comparing the odds of exposure between groups with and without disease. A difficulty that can occur in any design but is particularly clear in cross-sectional studies is to establish that an exposure preceded the disease, although the time order of exposure and outcome may sometimes be clear. In a study in which the exposure variable is congenital or genetic, for example, we can be confident that the exposure preceded the disease, even if we are measuring both at the same time.
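The two analytic options just described (analysing a cross-sectional study like a cohort, or like a case-control study) can be contrasted with a small hypothetical example; the counts are invented for illustration:

```python
# Hypothetical cross-sectional survey: disease status by exposure
# group, all assessed at a single point in time (illustrative counts).
exposed_with, exposed_without = 50, 450        # prevalence 10%
unexposed_with, unexposed_without = 25, 475    # prevalence 5%

# Analysed like a cohort: compare prevalences between exposure groups
prev_exposed = exposed_with / (exposed_with + exposed_without)
prev_unexposed = unexposed_with / (unexposed_with + unexposed_without)
prevalence_ratio = prev_exposed / prev_unexposed            # 2.0

# Analysed like a case-control study: compare odds of exposure
# between those with and without disease (cross-product ratio)
prevalence_odds_ratio = (exposed_with * unexposed_without) / (
    exposed_without * unexposed_with)                       # approx. 2.11

print(f"PR = {prevalence_ratio:.2f}, POR = {prevalence_odds_ratio:.2f}")
```

The prevalence ratio and the prevalence odds ratio differ unless the outcome is rare, which is one reason reports should state which measure was estimated.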

1 (b). Provide in the abstract an informative and balanced summary of what was done and what was found.

Example [21]

Background: The expected survival of HIV-infected patients is of major public health interest.

Objective: To estimate survival time and age-specific mortality rates of an HIV-infected population compared with that of the general population.

Design: Population-based cohort study.

Setting: All HIV-infected persons receiving care in Denmark from 1995 to 2005.

Patients: Each member of the nationwide Danish HIV Cohort Study was matched with as many as 99 persons from the general population according to sex, date of birth, and municipality of residence.

Measurements: The authors computed Kaplan-Meier life tables with age as the time scale to estimate survival from age 25 years. Patients with HIV infection and corresponding persons from the general population were observed from the date of the patient’s HIV diagnosis until death, emigration, or 1 May 2005.

Results: 3990 HIV-infected patients and 379 872 persons from the general population were included in the study, yielding 22 744 (median, 5.8 years/person) and 2 689 287 (median, 8.4 years/person) person-years of observation. Three percent of participants were lost to follow-up. From age 25 years, the median survival was 19.9 years (95% CI, 18.5 to 21.3) among patients with HIV infection and 51.1 years (CI, 50.9 to 51.5) among the general population. For HIV-infected patients, survival increased to 32.5 years (CI, 29.4 to 34.7) during the 2000 to 2005 period. In the subgroup that excluded persons with known hepatitis C coinfection (16%), median survival was 38.9 years (CI, 35.4 to 40.1) during this same period. The relative mortality rates for patients with HIV infection compared with those for the general population decreased with increasing age, whereas the excess mortality rate increased with increasing age.

Limitations: The observed mortality rates are assumed to apply beyond the current maximum observation time of 10 years.

Conclusions: The estimated median survival is more than 35 years for a young person diagnosed with HIV infection in the late highly active antiretroviral therapy era. However, an ongoing effort is still needed to further reduce mortality rates for these persons compared with the general population.


The abstract provides key information that enables readers to understand a study and decide whether to read the article. Typical components include a statement of the research question, a short description of methods and results, and a conclusion [22]. Abstracts should summarize key details of studies and should only present information that is provided in the article. We advise presenting key results in a numerical form that includes numbers of participants, estimates of associations and appropriate measures of variability and uncertainty (e.g., odds ratios with confidence intervals). We regard it as insufficient to state only that an exposure is or is not significantly associated with an outcome.
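As an illustration of presenting an association with an appropriate measure of uncertainty, the following sketch computes an odds ratio with a 95% confidence interval using the standard Woolf (log odds ratio) method found in epidemiology textbooks; the 2x2 counts are hypothetical:

```python
import math

# Hypothetical 2x2 table: a, b = exposed and unexposed cases;
# c, d = exposed and unexposed controls (illustrative counts only)
a, b, c, d = 80, 120, 100, 300

or_hat = (a * d) / (b * c)                     # cross-product odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's SE on the log scale
ci_low = math.exp(math.log(or_hat) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_hat) + 1.96 * se_log_or)

print(f"OR = {or_hat:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Reporting the interval (here roughly 1.4 to 2.9) conveys the precision of the estimate in a way that a bare statement of statistical significance cannot.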

A series of headings pertaining to the background, design, conduct, and analysis of a study may help readers acquire the essential information rapidly [23]. Many journals require such structured abstracts, which tend to be of higher quality and more readily informative than unstructured summaries [24, 25].


Introduction

The Introduction section should describe why the study was done and what questions and hypotheses it addresses. It should allow others to understand the study’s context and judge its potential contribution to current knowledge.

2. Background/rationale: Explain the scientific background and rationale for the investigation being reported.


‘‘Concerns about the rising prevalence of obesity in children and adolescents have focused on the well documented associations between childhood obesity and increased cardiovascular risk and mortality in adulthood. Childhood obesity has considerable social and psychological consequences within childhood and adolescence, yet little is known about social, socioeconomic, and psychological consequences in adult life. A recent systematic review found no longitudinal studies on the outcomes of childhood obesity other than physical health outcomes and only two longitudinal studies of the socioeconomic effects of obesity in adolescence. Gortmaker et al. found that US women who had been obese in late adolescence in 1981 were less likely to be married and had lower incomes seven years later than women who had not been overweight, while men who had been overweight were less likely to be married. Sargent et al. found that UK women, but not men, who had been obese at 16 years in 1974 earned 7.4% less than their non-obese peers at age 23. <...> We used longitudinal data from the 1970 British birth cohort to examine the adult socioeconomic, educational, social, and psychological outcomes of childhood obesity’’ [26].


The scientific background of the study provides important context for readers. It sets the stage for the study and describes its focus. It gives an overview of what is known on a topic and what gaps in current knowledge are addressed by the study. Background material should note recent pertinent studies and any systematic reviews of pertinent studies.

3. Objectives: State specific objectives, including any prespecified hypotheses.


‘‘Our primary objectives were to 1) determine the prevalence of domestic violence among female patients presenting to four community-based, primary care, adult medicine practices that serve patients of diverse socioeconomic background and 2) identify demographic and clinical differences between currently abused patients and patients not currently being abused’’ [27].


Objectives are the detailed aims of the study. Well crafted objectives specify populations, exposures and outcomes, and parameters that will be estimated. They may be formulated as specific hypotheses or as questions that the study was designed to address. In some situations objectives may be less specific, for example, in early discovery phases. Regardless, the report should clearly reflect the investigators’ intentions. For example, if important subgroups or additional analyses were not the original aim of the study but arose during data analysis, they should be described accordingly (see also items 4, 17 and 20).


Methods

The Methods section should describe what was planned and what was done in sufficient detail to allow others to understand the essential aspects of the study, to judge whether the methods were adequate to provide reliable and valid answers, and to assess whether any deviations from the original plan were reasonable.

4. Study design: Present key elements of study design early in the paper.


‘‘We used a case-crossover design, a variation of a case-control design that is appropriate when a brief exposure (driver’s phone use) causes a transient rise in the risk of a rare outcome (a crash). We compared a driver’s use of a mobile phone at the estimated time of a crash with the same driver’s use during another suitable time period. Because drivers are their own controls, the design controls for characteristics of the driver that may affect the risk of a crash but do not change over a short period of time. As it is important that risks during control periods and crash trips are similar, we compared phone activity during the hazard interval (time immediately before the crash) with phone activity during control intervals (equivalent times during which participants were driving but did not crash) in the previous week’’ [28].


We advise presenting key elements of study design early in the methods section (or at the end of the introduction) so that readers can understand the basics of the study. For example, authors should indicate that the study was a cohort study, which followed people over a particular time period, and describe the group of persons that comprised the cohort and their exposure status. Similarly, if the investigation used a case-control design, the cases and controls and their source population should be described. If the study was a cross-sectional survey, the population and the point in time at which the cross-section was taken should be mentioned. When a study is a variant of the three main study types, there is an additional need for clarity. For instance, for a case-crossover study, one of the variants of the case-control design, a succinct description of the principles was given in the example above [28].

We recommend that authors refrain from simply calling a study ‘prospective’ or ‘retrospective’ because these terms are ill defined [29]. One usage sees cohort and prospective as synonymous and reserves the word retrospective for case-control studies [30]. A second usage distinguishes prospective and retrospective cohort studies according to the timing of data collection relative to when the idea for the study was developed [31]. A third usage distinguishes prospective and retrospective case-control studies depending on whether the data about the exposure of interest existed when cases were selected [32]. Some advise against using these terms [33], or adopting the alternatives ‘concurrent’ and ‘historical’ for describing cohort studies [34]. In STROBE, we do not use the words prospective and retrospective, nor alternatives such as concurrent and historical. We recommend that, whenever authors use these words, they define what they mean. Most importantly, we recommend that authors describe exactly how and when data collection took place.

The first part of the methods section might also be the place to mention whether the report is one of several from a study. If a new report is in line with the original aims of the study, this is usually indicated by referring to an earlier publication and by briefly restating the salient features of the study. However, the aims of a study may also evolve over time.

Researchers often use data for purposes for which they were not originally intended, including, for example, official vital statistics that were collected primarily for administrative purposes, items in questionnaires that originally were only included for completeness, or blood samples that were collected for another purpose. For example, the Physicians’ Health Study, a randomized controlled trial of aspirin and beta-carotene, was later used to demonstrate that a point mutation in the factor V gene was associated with an increased risk of venous thrombosis, but not of myocardial infarction or stroke [35]. The secondary use of existing data is a creative part of observational research and does not necessarily make results less credible or less important. However, briefly restating the original aims might help readers understand the context of the research and possible limitations in the data.

5. Setting: Describe the setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection.


‘‘The Pasitos Cohort Study recruited pregnant women from Women, Infant and Child clinics in Socorro and San Elizario, El Paso County, Texas and maternal-child clinics of the Mexican Social Security Institute in Ciudad Juarez, Mexico from April 1998 to October 2000. At baseline, prior to the birth of the enrolled cohort children, staff interviewed mothers regarding the household environment. In this ongoing cohort study, we target follow-up exams at 6-month intervals beginning at age 6 months’’ [36].


Readers need information on setting and locations to assess the context and generalisability of a study’s results. Exposures such as environmental factors and therapies can change over time. Also, study methods may evolve over time. Knowing when a study took place and over what period participants were recruited and followed up places the study in historical context and is important for the interpretation of results.

Information about setting includes recruitment sites or sources (e.g., electoral roll, outpatient clinic, cancer registry, or tertiary care centre). Information about location may refer to the countries, towns, hospitals or practices where the investigation took place. We advise stating dates rather than only describing the length of time periods. There may be different sets of dates for exposure, disease occurrence, recruitment, beginning and end of follow-up, and data collection. Of note, nearly 80% of 132 reports in oncology journals that used survival analysis included the starting and ending dates for accrual of patients, but only 24% also reported the date on which follow-up ended [37].

6. Participants

6 (a). Cohort study: Give the eligibility criteria, and the sources and methods of selection of participants. Describe methods of follow-up.


‘‘Participants in the Iowa Women’s Health Study were a random sample of all women ages 55 to 69 years derived from the state of Iowa automobile driver’s license list in 1985, which represented approximately 94% of Iowa women in that age group. (...) Follow-up questionnaires were mailed in October 1987 and August 1989 to assess vital status and address changes. (...) Incident cancers, except for nonmelanoma skin cancers, were ascertained by the State Health Registry of Iowa (...). The Iowa Women’s Health Study cohort was matched to the registry with combinations of first, last, and maiden names, zip code, birthdate, and social security number’’ [38].

6 (a). Case-control study: Give the eligibility criteria, and the sources and methods of case ascertainment and control selection. Give the rationale for the choice of cases and controls.


‘‘Cutaneous melanoma cases diagnosed in 1999 and 2000 were ascertained through the Iowa Cancer Registry (...). Controls, also identified through the Iowa Cancer Registry, were colorectal cancer patients diagnosed during the same time. Colorectal cancer controls were selected because they are common and have a relatively long survival, and because arsenic exposure has not been conclusively linked to the incidence of colorectal cancer’’ [39].

6 (a). Cross-sectional study: Give the eligibility criteria, and the sources and methods of selection of participants.


‘‘We retrospectively identified patients with a principal diagnosis of myocardial infarction (code 410) according to the International Classification of Diseases, 9th Revision, Clinical Modification, from codes designating discharge diagnoses, excluding the codes with a fifth digit of 2, which designates a subsequent episode of care <...> A random sample of the entire Medicare cohort with myocardial infarction from February 1994 to July 1995 was selected (...) To be eligible, patients had to present to the hospital after at least 30 minutes but less than 12 hours of chest pain and had to have ST-segment elevation of at least 1 mm on two contiguous leads on the initial electrocardiogram’’ [40].


Detailed descriptions of the study participants help readers understand the applicability of the results. Investigators usually restrict a study population by defining clinical, demographic and other characteristics of eligible participants. Typical eligibility criteria relate to age, gender, diagnosis and comorbid conditions. Despite their importance, eligibility criteria often are not reported adequately. In a survey of observational stroke research, 17 of 49 reports (35%) did not specify eligibility criteria [5].

Eligibility criteria may be presented as inclusion and exclusion criteria, although this distinction is not always necessary or useful. Regardless, we advise authors to report all eligibility criteria and also to describe the group from which the study population was selected (e.g., the general population of a region or country), and the method of recruitment (e.g., referral or self-selection through advertisements).

Knowing details about follow-up procedures, including whether procedures minimized non-response and loss to follow-up and whether the procedures were similar for all participants, informs judgments about the validity of results. For example, in a study that used IgM antibodies to detect acute infections, readers needed to know the interval between blood tests for IgM antibodies so that they could judge whether some infections likely were missed because the interval between blood tests was too long [41]. In other studies where follow-up procedures differed between exposed and unexposed groups, readers might recognize substantial bias due to unequal ascertainment of events or differences in non-response or loss to follow-up [42]. Accordingly, we advise that researchers describe the methods used for following participants and whether those methods were the same for all participants, and that they describe the completeness of ascertainment of variables (see also item 14).

In case-control studies, the choice of cases and controls is crucial to interpreting the results, and the method of their selection has major implications for study validity. In general, controls should reflect the population from which the cases arose. Various methods are used to sample controls, all with advantages and disadvantages: for cases that arise from a general population, population roster sampling, random digit dialling, neighbourhood or friend controls are used. Neighbourhood or friend controls may present intrinsic matching on exposure [17]. Controls with other diseases may have advantages over population-based controls, in particular for hospital-based cases, because they better reflect the catchment population of a hospital, have greater comparability of recall and ease of recruitment. However, they can present problems if the exposure of interest affects the risk of developing or being hospitalized for the control condition(s) [43, 44]. To remedy this problem, a mixture of the best defensible control diseases is often used [45].

6 (b). Cohort study: For matched studies, give matching criteria and number of exposed and unexposed.


‘‘For each patient who initially received a statin, we used propensity-based matching to identify one control who did not receive a statin according to the following protocol. First, propensity scores were calculated for each patient in the entire cohort on the basis of an extensive list of factors potentially related to the use of statins or the risk of sepsis. Second, each statin user was matched to a smaller pool of non-statin-users by sex, age (plus or minus 1 year), and index date (plus or minus 3 months). Third, we selected the control with the closest propensity score (within 0.2 SD) to each statin user in a 1:1 fashion and discarded the remaining controls.’’ [46].
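The final step of the protocol quoted above (1:1 nearest-neighbour selection on the propensity score within a 0.2 SD caliper, without replacement) can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the scores are assumed to have been estimated already (e.g., by logistic regression), and all identifiers and values are hypothetical.

```python
# Sketch of 1:1 nearest-propensity matching within a 0.2 SD caliper,
# without replacement. Scores and IDs below are hypothetical.
import statistics

def caliper_match(treated, controls, caliper_sd=0.2):
    """Return {treated_id: control_id}; unmatched treated map to None."""
    all_scores = list(treated.values()) + list(controls.values())
    caliper = caliper_sd * statistics.stdev(all_scores)
    available = dict(controls)          # pool shrinks as controls are used
    matches = {}
    for t_id, t_score in sorted(treated.items()):
        best_id, best_gap = None, caliper
        for c_id, c_score in available.items():
            gap = abs(t_score - c_score)
            if gap <= best_gap:
                best_id, best_gap = c_id, gap
        matches[t_id] = best_id
        if best_id is not None:
            del available[best_id]      # discard the used control
    return matches

statin_users = {"T1": 0.30, "T2": 0.62}             # hypothetical scores
non_users = {"C1": 0.31, "C2": 0.60, "C3": 0.95}
print(caliper_match(statin_users, non_users))
```

Matching greedily in a fixed order, as here, is only one of several conventions; optimal matching algorithms can give different (and sometimes better) pairings.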

6 (b). Case-control study: For matched studies, give matching criteria and the number of controls per case.


‘‘We aimed to select five controls for every case from among individuals in the study population who had no diagnosis of autism or other pervasive developmental disorders (PDD) recorded in their general practice record and who were alive and registered with a participating practice on the date of the PDD diagnosis in the case. Controls were individually matched to cases by year of birth (up to 1 year older or younger), sex, and general practice. For each of 300 cases, five controls could be identified who met all the matching criteria. For the remaining 994, one or more controls was excluded...’’ [47].


Matching is much more common in case-control studies, but occasionally, investigators use matching in cohort studies to make groups comparable at the start of follow-up. Matching in cohort studies makes groups directly comparable for potential confounders and presents fewer intricacies than in case-control studies. For example, it is not necessary to take the matching into account for the estimation of the relative risk [48]. Because matching in cohort studies may increase statistical precision, investigators might allow for the matching in their analyses and thus obtain narrower confidence intervals.

In case-control studies matching is done to increase a study’s efficiency by ensuring similarity in the distribution of variables between cases and controls, in particular the distribution of potential confounding variables [48, 49]. Because matching can be done in various ways, with one or more controls per case, the rationale for the choice of matching variables and the details of the method used should be described. Commonly used forms of matching are frequency matching (also called group matching) and individual matching. In frequency matching, investigators choose controls so that the distribution of matching variables becomes identical or similar to that of cases. Individual matching involves matching one or several controls to each case. Although intuitively appealing and sometimes useful, matching in case-control studies has a number of disadvantages, is not always appropriate, and needs to be taken into account in the analysis (see Box 2).

Even apparently simple matching procedures may be poorly reported. For example, authors may state that controls were matched to cases ‘within five years’, or using ‘five year age bands’. Does this mean that, if a case was 54 years old, the respective control needed to be in the five-year age band 50 to 54, or aged 49 to 59, which is within five years of age 54? If a wide (e.g., 10-year) age band is chosen, there is a danger of residual confounding by age (see also Box 4), for example because controls may then be younger than cases on average.
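The ambiguity described above can be made concrete. The following sketch (with hypothetical ages) shows that the two readings of ‘‘matched within five years’’ give different answers for the same case-control pair:

```python
# Two readings of "matched within five years" for a 54-year-old case
# and a 49-year-old candidate control (hypothetical ages).
case_age, control_age = 54, 49

# Reading 1: both must fall in the same five-year band (50-54 vs 45-49).
same_5yr_band = case_age // 5 == control_age // 5

# Reading 2: any age within five years of the case (49-59) qualifies.
within_5_years = abs(case_age - control_age) <= 5

print(same_5yr_band, within_5_years)   # False True
```

The 49-year-old control is eligible under one reading but not the other, which is exactly why the matching rule needs to be reported precisely.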

7. Variables: Clearly define all outcomes, exposures, predictors, potential confounders, and effect modifiers. Give diagnostic criteria, if applicable.


‘‘Only major congenital malformations were included in the analyses. Minor anomalies were excluded according to the exclusion list of European Registration of Congenital Anomalies (EUROCAT). If a child had more than one major congenital malformation of one organ system, those malformations were treated as one outcome in the analyses by organ system (...) In the statistical analyses, factors considered potential confounders were maternal age at delivery and number of previous parities. Factors considered potential effect modifiers were maternal age at reimbursement for antiepileptic medication and maternal age at delivery’’ [55].


Authors should define all variables considered for and included in the analysis, including outcomes, exposures, predictors, potential confounders and potential effect modifiers. Disease outcomes require adequately detailed description of the diagnostic criteria. This applies to criteria for cases in a case-control study, disease events during follow-up in a cohort study and prevalent disease in a cross-sectional study. Clear definitions and steps taken to adhere to them are particularly important for any disease condition of primary interest in the study.

For some studies, ‘determinant’ or ‘predictor’ may be appropriate terms for exposure variables and outcomes may be called ‘endpoints’. In multivariable models, authors sometimes use ‘dependent variable’ for an outcome and ‘independent variable’ or ‘explanatory variable’ for exposure and confounding variables. The latter is not precise as it does not distinguish exposures from confounders.

If many variables have been measured and included in exploratory analyses in an early discovery phase, consider providing a list with details on each variable in an appendix, additional table or separate publication. Of note, the International Journal of Epidemiology recently launched a new section with ‘cohort profiles’, which includes detailed information on what was measured at different points in time in particular studies [56, 57]. Finally, we advise that authors declare all ‘candidate variables’ considered for statistical analysis, rather than selectively reporting only those included in the final models (see also item 16a) [58, 59].

Box 2. Matching in case-control studies

In any case-control study, sensible choices need to be made on whether to use matching of controls to cases, and if so, what variables to match on, the precise method of matching to use, and the appropriate method of statistical analysis. Not to match at all may mean that the distribution of some key potential confounders (e.g., age, sex) is radically different between cases and controls. Although this could be adjusted for in the analysis there could be a major loss in statistical efficiency.

The use of matching in case-control studies and its interpretation are fraught with difficulties, especially if matching is attempted on several risk factors, some of which may be linked to the exposure of prime interest [50, 51]. For example, in a case-control study of myocardial infarction and oral contraceptives nested in a large pharmaco-epidemiologic database, with information about thousands of women who are available as potential controls, investigators may be tempted to choose matched controls who had similar levels of risk factors to each case of myocardial infarction. One objective is to adjust for factors that might influence the prescription of oral contraceptives and thus to control for confounding by indication. However, the result will be a control group that is no longer representative of the oral contraceptive use in the source population: controls will be older than the source population because patients with myocardial infarction tend to be older. This has several implications. A crude analysis of the data will produce odds ratios that are usually biased towards unity if the matching factor is associated with the exposure. The solution is to perform a matched or stratified analysis (see item 12d). In addition, because the matched control group ceases to be representative for the population at large, the exposure distribution among the controls can no longer be used to estimate the population attributable fraction (see Box 7) [52]. Also, the effect of the matching factor can no longer be studied, and the search for well-matched controls can be cumbersome - making a design with a non-matched control group preferable because the non-matched controls will be easier to obtain and the control group can be larger. Overmatching is another problem, which may reduce the efficiency of matched case-control studies, and, in some situations, introduce bias.
Information is lost and the power of the study is reduced if the matching variable is closely associated with the exposure. Then many individuals in the same matched sets will tend to have identical or similar levels of exposures and therefore not contribute relevant information. Matching will introduce irremediable bias if the matching variable is not a confounder but in the causal pathway between exposure and disease. For example, in vitro fertilization is associated with an increased risk of perinatal death, due to an increase in multiple births and low birth weight infants [53]. Matching on plurality or birth weight will bias results towards the null, and this cannot be remedied in the analysis.

Matching is intuitively appealing, but the complexities involved have led methodologists to advise against routine matching in case-control studies. They recommend instead a careful and judicious consideration of each potential matching factor, recognizing that it could instead be measured and used as an adjustment variable without matching on it. In response, there has been a reduction in the number of matching factors employed, an increasing use of frequency matching, which avoids some of the problems discussed above, and more case-control studies with no matching at all [54]. Matching remains most desirable, or even necessary, when the distributions of the confounder (e.g., age) might differ radically between the unmatched comparison groups [48, 49].

8. Data sources/measurement: For each variable of interest give sources of data and details of methods of assessment (measurement). Describe comparability of assessment methods if there is more than one group.

Example 1

‘‘Total caffeine intake was calculated primarily using US Department of Agriculture food composition sources. In these calculations, it was assumed that the content of caffeine was 137 mg per cup of coffee, 47 mg per cup of tea, 46 mg per can or bottle of cola beverage, and 7 mg per serving of chocolate candy. This method of measuring (caffeine) intake was shown to be valid in both the NHS I cohort and a similar cohort study of male health professionals <...> Self-reported diagnosis of hypertension was found to be reliable in the NHS I cohort’’ [60].

Example 2

‘‘Samples pertaining to matched cases and controls were always analyzed together in the same batch and laboratory personnel were unable to distinguish among cases and controls’’ [61].


The way in which exposures, confounders and outcomes were measured affects the reliability and validity of a study. Measurement error and misclassification of exposures or outcomes can make it more difficult to detect cause-effect relationships, or may produce spurious relationships. Error in measurement of potential confounders can increase the risk of residual confounding [62, 63]. It is helpful, therefore, if authors report the findings of any studies of the validity or reliability of assessments or measurements, including details of the reference standard that was used. Rather than simply citing validation studies (as in the first example), we advise that authors give the estimated validity or reliability, which can then be used for measurement error adjustment or sensitivity analyses (see items 12e and 17).

In addition, it is important to know if groups being compared differed with respect to the way in which the data were collected. This may be important for laboratory examinations (as in the second example) and other situations. For instance, if an interviewer first questions all the cases and then the controls, or vice versa, bias is possible because of the learning curve; solutions such as randomising the order of interviewing may avoid this problem. Information bias may also arise if the compared groups are not given the same diagnostic tests or if one group receives more tests of the same kind than another (see also item 9).

9. Bias: Describe any efforts to address potential sources of bias.

Example 1

‘‘In most case-control studies of suicide, the control group comprises living individuals but we decided to have a control group of people who had died of other causes <...>. With a control group of deceased individuals, the sources of information used to assess risk factors are informants who have recently experienced the death of a family member or close associate - and are therefore more comparable to the sources of information in the suicide group than if living controls were used’’ [64].

Example 2

‘‘Detection bias could influence the association between Type 2 diabetes mellitus (T2DM) and primary open-angle glaucoma (POAG) if women with T2DM were under closer ophthalmic surveillance than women without this condition. We compared the mean number of eye examinations reported by women with and without diabetes. We also recalculated the relative risk for POAG with additional control for covariates associated with more careful ocular surveillance (a self-report of cataract, macular degeneration, number of eye examinations, and number of physical examinations)’’ [65].


Biased studies produce results that differ systematically from the truth (see also Box 3). It is important for a reader to know what measures were taken during the conduct of a study to reduce the potential of bias. Ideally, investigators carefully consider potential sources of bias when they plan their study. At the stage of reporting, we recommend that authors always assess the likelihood of relevant biases. Specifically, the direction and magnitude of bias should be discussed and, if possible, estimated. For instance, in case-control studies information bias can occur, but may be reduced by selecting an appropriate control group, as in the first example [64]. Differences in the medical surveillance of participants were a problem in the second example [65]. Consequently, the authors provide more detail about the additional data they collected to tackle this problem. When investigators have set up quality control programs for data collection to counter a possible ‘‘drift’’ in measurements of variables in longitudinal studies, or to keep variability at a minimum when multiple observers are used, these should be described.

Unfortunately, authors often do not address important biases when reporting their results. Among 43 case-control and cohort studies published from 1990 to 1994 that investigated the risk of second cancers in patients with a history of cancer, medical surveillance bias was mentioned in only 5 articles [66]. A survey of reports of mental health research published during 1998 in three psychiatric journals found that only 13% of 392 articles mentioned response bias [67]. A survey of cohort studies in stroke research found that 14 of 49 (28%) articles published from 1999 to 2003 addressed potential selection bias in the recruitment of study participants and 35 (71%) mentioned the possibility that any type of bias may have affected results [5].

Box 3. Bias

Bias is a systematic deviation of a study’s result from a true value. Typically, it is introduced during the design or implementation of a study and cannot be remedied later. Bias and confounding are not synonymous. Bias arises from flawed information or subject selection so that a wrong association is found. Confounding produces relations that are factually right, but that cannot be interpreted causally because some underlying, unaccounted for factor is associated with both exposure and outcome (see Box 5). Also, bias needs to be distinguished from random error, a deviation from a true value caused by statistical fluctuations (in either direction) in the measured data. Many possible sources of bias have been described and a variety of terms are used [68, 69]. We find two simple categories helpful: information bias and selection bias.

Information bias occurs when systematic differences in the completeness or the accuracy of data lead to differential misclassification of individuals regarding exposures or outcomes. For instance, if diabetic women receive more regular and thorough eye examinations, the ascertainment of glaucoma will be more complete than in women without diabetes (see item 9) [65]. Patients receiving a drug that causes non-specific stomach discomfort may undergo gastroscopy more often and have more ulcers detected than patients not receiving the drug - even if the drug does not cause more ulcers. This type of information bias is also called ‘detection bias’ or ‘medical surveillance bias’. One way to assess its influence is to measure the intensity of medical surveillance in the different study groups, and to adjust for it in statistical analyses. In case-control studies information bias occurs if cases recall past exposures more or less accurately than controls without that disease, or if they are more or less willing to report them (also called ‘recall bias’). ‘Interviewer bias’ can occur if interviewers are aware of the study hypothesis and subconsciously or consciously gather data selectively [70]. Some form of blinding of study participants and researchers is therefore often valuable.

Selection bias may be introduced in case-control studies if the probability of including cases or controls is associated with exposure. For instance, a doctor recruiting participants for a study on deep-vein thrombosis might diagnose this disease in a woman who has leg complaints and takes oral contraceptives. But she might not diagnose deep-vein thrombosis in a woman with similar complaints who is not taking such medication. Such bias may be countered by using cases and controls that were referred in the same way to the diagnostic service [70].

Similarly, the use of disease registers may introduce selection bias: if a possible relationship between an exposure and a disease is known, cases may be more likely to be submitted to a register if they have been exposed to the suspected causative agent [72]. ‘Response bias’ is another type of selection bias that occurs if differences in characteristics between those who respond and those who decline participation in a study affect estimates of prevalence, incidence and, in some circumstances, associations. In general, selection bias affects the internal validity of a study. This is different from problems that may arise with the selection of participants for a study in general, which affects the external rather than the internal validity of a study (also see item 21).

10. Study size: Explain how the study size was arrived at.

Example 1

‘‘The number of cases in the area during the study period determined the sample size’’ [73].

Example 2

‘‘A survey of postnatal depression in the region had documented a prevalence of 19.8%. Assuming depression in mothers with normal weight children to be 20% and an odds ratio of 3 for depression in mothers with a malnourished child we needed 72 case-control sets (one case to one control) with an 80% power and 5% significance’’ [74].


A study should be large enough to obtain a point estimate with a sufficiently narrow confidence interval to meaningfully answer a research question. Large samples are needed to distinguish a small association from no association. Small studies often provide valuable information, but wide confidence intervals may indicate that they contribute less to current knowledge in comparison with studies providing estimates with narrower confidence intervals. Also, small studies that show ‘interesting’ or ‘statistically significant’ associations are published more frequently than small studies that do not have ‘significant’ findings. While these studies may provide an early signal in the context of discovery, readers should be informed of their potential weaknesses.

The importance of sample size determination in observational studies depends on the context. If an analysis is performed on data that were already available for other purposes, the main question is whether the analysis of the data will produce results with sufficient statistical precision to contribute substantially to the literature, and sample size considerations will be informal. Formal, a priori calculation of sample size may be useful when planning a new study [75, 76]. Such calculations are associated with more uncertainty than implied by the single number that is generally produced. For example, estimates of the rate of the event of interest or other assumptions central to calculations are commonly imprecise, if not guesswork [77]. The precision obtained in the final analysis often cannot be determined beforehand because it will be reduced by inclusion of confounding variables in multivariable analyses [78], the degree of precision with which key variables can be measured [79], and the exclusion of some individuals.

Few epidemiological studies explain or report deliberations about sample size [4, 5]. We encourage investigators to report pertinent formal sample size calculations if they were done. In other situations they should indicate the considerations that determined the study size (e.g., a fixed available sample, as in the first example above). If the observational study was stopped early when statistical significance was achieved, readers should be told. Do not bother readers with post hoc justifications for study size or retrospective power calculations [77]. From the point of view of the reader, confidence intervals indicate the statistical precision that was ultimately obtained. It should be realized that confidence intervals reflect statistical uncertainty only, and not all uncertainty that may be present in a study (see item 20).
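A calculation of the kind reported in the second example above can be sketched with the standard two-proportion sample-size formula plus Fleiss's continuity correction, which happens to land near the 72 case-control sets the authors report. This is an illustration only: the exact formula or software the study authors used is not stated in their paper.

```python
# Sketch of a case-control sample-size calculation: exposure prevalence
# p0 = 0.20 in controls, odds ratio 3, 80% power, two-sided alpha 0.05.
# Uses the two-proportion formula with Fleiss's continuity correction;
# the formula actually used in the cited study is not stated.
from math import sqrt
from statistics import NormalDist

def n_per_group(p0, odds_ratio, alpha=0.05, power=0.80):
    # exposure probability among cases implied by the odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p0 + p1) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2 / (p1 - p0) ** 2
    # Fleiss continuity correction
    return n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p0)))) ** 2

print(round(n_per_group(0.20, 3)))   # ≈ 72 case-control sets
```

The point of showing the calculation is the one made in the text: the single number conceals the sensitivity of the answer to the assumed control prevalence and odds ratio, both of which are usually rough guesses.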

11. Quantitative variables: Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen, and why.


‘‘Patients with a Glasgow Coma Scale less than 8 are considered to be seriously injured. A GCS of 9 or more indicates less serious brain injury. We examined the association of GCS in these two categories with the occurrence of death within 12 months from injury’’ [80].


Investigators make choices regarding how to collect and analyse quantitative data about exposures, effect modifiers and confounders. For example, they may group a continuous exposure variable to create a new categorical variable (see Box 4). Grouping choices may have important consequences for later analyses [81, 82]. We advise that authors explain why and how they grouped quantitative data, including the number of categories, the cut-points, and category mean or median values. Whenever data are reported in tabular form, the counts of cases, controls, persons at risk, person-time at risk, etc. should be given for each category. Tables should not consist solely of effect-measure estimates or results of model fitting.

Box 4. Grouping

There are several reasons why continuous data may be grouped [86]. When collecting data it may be better to use an ordinal variable than to seek an artificially precise continuous measure for an exposure based on recall over several years. Categories may also be helpful for presentation, for example to present all variables in a similar style, or to show a dose-response relationship.

Grouping may also be done to simplify the analysis, for example to avoid an assumption of linearity. However, grouping loses information and may reduce statistical power [87] especially when dichotomization is used [82, 85, 88]. If a continuous confounder is grouped, residual confounding may occur, whereby some of the variable's confounding effect remains unadjusted for (see Box 5) [62, 89]. Increasing the number of categories can diminish power loss and residual confounding, and is especially appropriate in large studies. Small studies may use few groups because of limited numbers.

Investigators may choose cut-points for groupings based on commonly used values that are relevant for diagnosis or prognosis, for practicality, or on statistical grounds. They may choose equal numbers of individuals in each group using quantiles [90]. On the other hand, one may gain more insight into the association with the outcome by choosing more extreme outer groups and having the middle group(s) larger than the outer groups [91]. In case-control studies, deriving a distribution from the control group is preferred since it is intended to reflect the source population. Readers should be informed if cut-points are selected post hoc from several alternatives. In particular, if the cut-points were chosen to minimise a P value the true strength of an association will be exaggerated [81].
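Quantile grouping as described above can be sketched in a few lines: derive the cut-points from the data, then assign each observation to a category. The exposure values below are hypothetical; in a case-control study the cut-points would be derived from the controls only.

```python
# Minimal sketch of quantile grouping: split a continuous exposure
# into quartiles with (roughly) equal numbers per group.
# The exposure values are hypothetical.
from bisect import bisect_right
from collections import Counter
from statistics import quantiles

exposure = list(range(1, 101))               # e.g., 100 exposure values
cuts = quantiles(exposure, n=4)              # three quartile cut-points
groups = [bisect_right(cuts, x) for x in exposure]   # categories 0..3

print(Counter(groups))                       # 25 observations per quartile
```

Reporting the cut-points themselves (here, the three values in `cuts`) alongside the category counts is exactly the kind of detail the item asks authors to give.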

When analysing grouped variables, it is important to recognise their underlying continuous nature. For instance, a possible trend in risk across ordered groups can be investigated. A common approach is to model the rank of the groups as a continuous variable. Such linearity across group scores will approximate an actual linear relation if groups are equally spaced (e.g., 10-year age groups) but not otherwise. Il'yasova et al. [92] recommend publication of both the categorical and the continuous estimates of effect, with their standard errors, in order to facilitate meta-analysis, as well as providing intrinsically valuable information on dose-response. One analysis may inform the other and neither is assumption-free. Authors often ignore the ordering and consider the estimates (and P values) separately for each category compared to the reference category. This may be useful for description, but may fail to detect a real trend in risk across groups. If a trend is observed, a confidence interval for a slope might indicate the strength of the observation.

Investigators might model an exposure as continuous in order to retain all the information. In making this choice, one needs to consider the nature of the relationship of the exposure to the outcome. As it may be wrong to assume a linear relation automatically, possible departures from linearity should be investigated. Authors could mention alternative models they explored during analyses (e.g., using log transformation, quadratic terms or spline functions). Several methods exist for fitting a non-linear relation between the exposure and outcome [82-84]. Also, it may be informative to present both continuous and grouped analyses for a quantitative exposure of prime interest.

In a recent survey, two thirds of epidemiological publications studied quantitative exposure variables [4]. In 42 of 50 articles (84%) exposures were grouped into several ordered categories, but often without any stated rationale for the choices made. Fifteen articles used linear associations to model continuous exposure but only two reported checking for linearity. In another survey, of the psychological literature, dichotomization was justified in only 22 of 110 articles (20%) [85].

12. Statistical methods:

12 (a). Describe all statistical methods, including those used to control for confounding.


‘‘The adjusted relative risk was calculated using the Mantel-Haenszel technique, when evaluating if confounding by age or gender was present in the groups compared. The 95% confidence interval (CI) was computed around the adjusted relative risk, using the variance according to Greenland and Robins and Robins et al.’’ [93].


In general, there is no one correct statistical analysis but, rather, several possibilities that may address the same question, but make different assumptions. Regardless, investigators should pre-determine analyses at least for the primary study objectives in a study protocol. Often additional analyses are needed, either instead of, or as well as, those originally envisaged, and these may sometimes be motivated by the data. When a study is reported, authors should tell readers whether particular analyses were suggested by data inspection. Even though the distinction between pre-specified and exploratory analyses may sometimes be blurred, authors should clarify reasons for particular analyses.

If groups being compared are not similar with regard to some characteristics, adjustment should be made for possible confounding variables by stratification or by multivariable regression (see Box 5) [94]. Often, the study design determines which type of regression analysis is chosen. For instance, Cox proportional hazard regression is commonly used in cohort studies [95], whereas logistic regression is often the method of choice in case-control studies [96, 97]. Analysts should fully describe specific procedures for variable selection and not only present results from the final model [98, 99]. If model comparisons are made to narrow down a list of potential confounders for inclusion in a final model, this process should be described. It is helpful to tell readers if one or two covariates are responsible for a great deal of the apparent confounding in a data analysis. Other statistical analyses such as imputation procedures, data transformation, and calculations of attributable risks should also be described. Nonstandard or novel approaches should be referenced and the statistical software used reported. As a guiding principle, we advise statistical methods be described ‘‘with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results’’ [100].

In an empirical study, only 93 of 169 articles (55%) reporting adjustment for confounding clearly stated how continuous and multi-category variables were entered into the statistical model [101]. Another study found that among 67 articles in which statistical analyses were adjusted for confounders, it was mostly unclear how confounders were chosen [4].
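The stratified (Mantel-Haenszel) adjustment mentioned in the example above can be sketched numerically. The strata and counts below are invented for illustration; the function name is our own.

```python
# Illustrative sketch (not from the cited study): Mantel-Haenszel
# adjusted relative risk across strata of a potential confounder.
# Each stratum is (exposed_cases, exposed_total, unexposed_cases, unexposed_total).

def mantel_haenszel_rr(strata):
    """Pooled relative risk, weighting each stratum by its size."""
    num = den = 0.0
    for a, n1, c, n0 in strata:
        t = n1 + n0                  # stratum total
        num += a * n0 / t
        den += c * n1 / t
    return num / den

# Two age strata, each with a stratum-specific relative risk of 2.0:
strata = [(10, 100, 5, 100),   # younger stratum
          (20, 100, 10, 100)]  # older stratum
print(round(mantel_haenszel_rr(strata), 2))  # 2.0
```

Because the relative risk is the same in both strata, the pooled estimate equals the stratum-specific value; when stratum-specific estimates differ, the pooled value is a weighted average.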

12 (b). Describe any methods used to examine subgroups and interactions.


‘‘Sex differences in susceptibility to the 3 lifestyle-related risk factors studied were explored by testing for biological interaction according to Rothman: a new composite variable with 4 categories (a-b-, a-b+, a+b-, and a+b+) was redefined for sex and a dichotomous exposure of interest, where a- and b- denote absence of exposure. RR was calculated for each category after adjustment for age. An interaction effect is defined as departure from additivity of absolute effects, and excess RR caused by interaction (RERI) was calculated:

RERI = RR(a+b+) – RR(a-b+) – RR(a+b-) + 1

where RR(a+b+) denotes RR among those exposed to both factors where RR(a-b-) is used as reference category (RR = 1.0). Ninety-five percent CIs were calculated as proposed by Hosmer and Lemeshow. RERI of 0 means no interaction’’ [103].


As discussed in detail under item 17, many debate the use and value of analyses restricted to subgroups of the study population [4, 104]. Subgroup analyses are nevertheless often done [4]. Readers need to know which subgroup analyses were planned in advance, and which arose while analysing the data. Also, it is important to explain what methods were used to examine whether effects or associations differed across groups (see item 17).

Interaction relates to the situation when one factor modifies the effect of another (therefore also called ‘effect modification’). The joint action of two factors can be characterized in two ways: on an additive scale, in terms of risk differences; or on a multiplicative scale, in terms of relative risk (see Box 8). Many authors and readers may have their own preference about the way interactions should be analysed. Still, they may be interested to know to what extent the joint effect of exposures differs from the separate effects. There is consensus that the additive scale, which uses absolute risks, is more appropriate for public health and clinical decision making [105]. Whatever view is taken, this should be clearly presented to the reader, as is done in the example above [103]. A lay-out presenting separate effects of both exposures as well as their joint effect, each relative to no exposure, might be most informative. It is presented in the example for interaction under item 17, and the calculations on the different scales are explained in Box 8.
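The additive-scale measure used in the example can be computed directly from the three relative risks. The RR values below are made up for illustration.

```python
# Minimal sketch of the relative excess risk due to interaction (RERI).

def reri(rr_ab, rr_a, rr_b):
    """RERI = RR(a+b+) - RR(a+b-) - RR(a-b+) + 1.

    rr_ab: RR for joint exposure, rr_a and rr_b: RRs for each exposure
    alone; the doubly unexposed group (a-b-) is the reference (RR = 1.0).
    """
    return rr_ab - rr_a - rr_b + 1

# If each exposure alone doubles risk, additivity predicts a joint
# RR of 2 + 2 - 1 = 3; a joint RR of 4 exceeds that by 1:
print(reri(4.0, 2.0, 2.0))  # 1.0
```

A RERI of 0 corresponds to exact additivity of absolute effects; positive values indicate super-additive interaction.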

Box 5. Confounding

Confounding literally means confusion of effects. A study might seem to show either an association or no association between an exposure and the risk of a disease. In reality, the seeming association or lack of association is due to another factor that determines the occurrence of the disease but that is also associated with the exposure. The other factor is called the confounding factor or confounder. Confounding thus gives a wrong assessment of the potential ‘causal’ association of an exposure. For example, if women who approach middle age and develop elevated blood pressure are less often prescribed oral contraceptives, a simple comparison of the frequency of cardiovascular disease between those who use contraceptives and those who do not, might give the wrong impression that contraceptives protect against heart disease.

Investigators should think beforehand about potential confounding factors. This will inform the study design and allow proper data collection by identifying the confounders for which detailed information should be sought. Restriction or matching may be used. In the example above, the study might be restricted to women who do not have the confounder, elevated blood pressure. Matching on blood pressure might also be possible, though not necessarily desirable (see Box 2). In the analysis phase, investigators may use stratification or multivariable analysis to reduce the effect of confounders. Stratification consists of dividing the data in strata for the confounder (e.g., strata of blood pressure), assessing estimates of association within each stratum, and calculating the combined estimate of association as a weighted average over all strata. Multivariable analysis achieves the same result but permits one to take more variables into account simultaneously. It is more flexible but may involve additional assumptions about the mathematical form of the relationship between exposure and disease.

Taking confounders into account is crucial in observational studies, but readers should not assume that analyses adjusted for confounders establish the ‘causal part’ of an association. Results may still be distorted by residual confounding (the confounding that remains after unsuccessful attempts to control for it [102]), random sampling error, selection bias and information bias (see Box 3).

12 (c). Explain how missing data were addressed.


‘‘Our missing data analysis procedures used missing at random (MAR) assumptions. We used the MICE (multivariate imputation by chained equations) method of multiple multivariate imputation in STATA. We independently analysed 10 copies of the data, each with missing values suitably imputed, in the multivariate logistic regression analyses. We averaged estimates of the variables to give a single mean estimate and adjusted standard errors according to Rubin’s rules’’ [106].


Missing data are common in observational research. Questionnaires posted to study participants are not always filled in completely, participants may not attend all follow-up visits and routine data sources and clinical databases are often incomplete. Despite its ubiquity and importance, few papers report in detail on the problem of missing data [5, 107]. Investigators may use any of several approaches to address missing data. We describe some strengths and limitations of various approaches in Box 6. We advise that authors report the number of missing values for each variable of interest (exposures, outcomes, confounders) and for each step in the analysis. Authors should give reasons for missing values if possible, and indicate how many individuals were excluded because of missing data when describing the flow of participants through the study (see also item 13). For analyses that account for missing data, authors should describe the nature of the analysis (e.g., multiple imputation) and the assumptions that were made (e.g., missing at random, see Box 6).
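The advice above to report the number of missing values for each variable of interest amounts to a simple tabulation. The variable names and records below are invented for illustration.

```python
# Hypothetical sketch: count missing values per variable of interest
# before analysis, as recommended in the text.

records = [
    {"exposure": 1,    "outcome": 0,    "age": 54},
    {"exposure": None, "outcome": 1,    "age": 61},
    {"exposure": 0,    "outcome": None, "age": None},
]

def missing_counts(records, variables):
    """Number of records with a missing value, per variable."""
    return {v: sum(r.get(v) is None for r in records) for v in variables}

print(missing_counts(records, ["exposure", "outcome", "age"]))
# {'exposure': 1, 'outcome': 1, 'age': 1}
```

Such a tabulation, repeated at each analysis step, also documents how many individuals a complete-case analysis would exclude.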

12 (d). Cohort study: If applicable, describe how loss to follow-up was addressed.


‘‘In treatment programmes with active follow-up, those lost to follow-up and those followed-up at 1 year had similar baseline CD4 cell counts (median 115 cells per µL and 123 cells per µL), whereas patients lost to follow-up in programmes with no active follow-up procedures had considerably lower CD4 cell counts than those followed-up (median 64 cells per µL and 123 cells per µL). (...) Treatment programmes with passive follow-up were excluded from subsequent analyses’’ [116].


Cohort studies are analysed using life table methods or other approaches that are based on the person-time of follow-up and time to developing the disease of interest. Among individuals who remain free of the disease at the end of their observation period, the amount of follow-up time is assumed to be unrelated to the probability of developing the outcome. This will be the case if follow-up ends on a fixed date or at a particular age. Loss to follow-up occurs when participants withdraw from a study before that date. This may hamper the validity of a study if loss to follow-up occurs selectively in exposed individuals, or in persons at high risk of developing the disease (‘informative censoring’). In the example above, patients lost to follow-up in treatment programmes with no active follow-up had fewer CD4 helper cells than those remaining under observation and were therefore at higher risk of dying [116].

It is important to distinguish persons who reach the end of the study from those lost to follow-up. Unfortunately, statistical software usually does not distinguish between the two situations: in both cases follow-up time is automatically truncated (‘censored’) at the end of the observation period. Investigators therefore need to decide, ideally at the stage of planning the study, how they will deal with loss to follow-up. When few patients are lost, investigators may either exclude individuals with incomplete follow-up, or treat them as if they withdrew alive at either the date of loss to follow-up or the end of the study. We advise authors to report how many patients were lost to follow-up and what censoring strategies they used.

Box 6. Missing data: problems and possible solutions

A common approach to dealing with missing data is to restrict analyses to individuals with complete data on all variables required for a particular analysis. Although such ‘complete-case’ analyses are unbiased in many circumstances, they can be biased and are always inefficient [108]. Bias arises if individuals with missing data are not typical of the whole sample. Inefficiency arises because of the reduced sample size for analysis.

Using the last observation carried forward for repeated measures can distort trends over time if persons who experience a foreshadowing of the outcome selectively drop out [109]. Inserting a missing category indicator for a confounder may increase residual confounding [107]. Imputation, in which each missing value is replaced with an assumed or estimated value, may lead to attenuation or exaggeration of the association of interest, and without the use of sophisticated methods described below may produce standard errors that are too small.

Rubin developed a typology of missing data problems, based on a model for the probability of an observation being missing [108, 110]. Data are described as missing completely at random (MCAR) if the probability that a particular observation is missing does not depend on the value of any observable variable(s). Data are missing at random (MAR) if, given the observed data, the probability that observations are missing is independent of the actual values of the missing data. For example, suppose younger children are more prone to missing spirometry measurements, but that the probability of missing is unrelated to the true unobserved lung function, after accounting for age. Then the missing lung function measurement would be MAR in models including age. Data are missing not at random (MNAR) if the probability of missing still depends on the missing value even after taking the available data into account. When data are MNAR valid inferences require explicit assumptions about the mechanisms that led to missing data.

Methods to deal with data missing at random (MAR) fall into three broad classes [108, 111]: likelihood-based approaches [112], weighted estimation [113] and multiple imputation [111, 114]. Of these three approaches, multiple imputation is the most commonly used and flexible, particularly when multiple variables have missing values [115]. Results using any of these approaches should be compared with those from complete case analyses, and important differences discussed. The plausibility of assumptions made in missing data analyses is generally unverifiable. In particular it is impossible to prove that data are MAR, rather than MNAR. Such analyses are therefore best viewed in the spirit of sensitivity analysis (see items 12e and 17).

12 (d). Case-control study: If applicable, explain how matching of cases and controls was addressed.


‘‘We used McNemar’s test, paired t test, and conditional logistic regression analysis to compare dementia patients with their matched controls for cardiovascular risk factors, the occurrence of spontaneous cerebral emboli, carotid disease, and venous to arterial circulation shunt’’ [117].


In individually matched case-control studies a crude analysis of the odds ratio, ignoring the matching, usually leads to an estimation that is biased towards unity (see Box 2). A matched analysis is therefore often necessary. This can intuitively be understood as a stratified analysis: each case is seen as one stratum with his or her set of matched controls. The analysis rests on considering whether the case is more often exposed than the controls, despite having made them alike regarding the matching variables. Investigators can do such a stratified analysis using the Mantel-Haenszel method on a ‘matched’ 2 by 2 table. In its simplest form the odds ratio becomes the ratio of pairs that are discordant for the exposure variable. If matching was done for variables like age and sex that are universal attributes, the analysis need not retain the individual, person-to-person matching: a simple analysis in categories of age and sex is sufficient [50]. For other matching variables, such as neighbourhood, sibship, or friendship, however, each matched set should be considered its own stratum.

In individually matched studies, the most widely used method of analysis is conditional logistic regression, in which each case and their controls are considered together. The conditional method is necessary when the number of controls varies among cases, and when, in addition to the matching variables, other variables need to be adjusted for. To allow readers to judge whether the matched design was appropriately taken into account in the analysis, we recommend that authors describe in detail what statistical methods were used to analyse the data. If taking the matching into account has little effect on the estimates, authors may choose to present an unmatched analysis.
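For 1:1 matched pairs, the simplest matched analysis described above reduces to the discordant pairs. The counts below are invented; the function names are our own.

```python
# Sketch of a matched-pairs (1:1) case-control analysis: the odds
# ratio is the ratio of the two kinds of exposure-discordant pairs,
# and McNemar's test statistic uses only those discordant pairs.

def matched_pairs_or(case_exposed_only, control_exposed_only):
    """Conditional odds ratio from discordant pairs."""
    return case_exposed_only / control_exposed_only

def mcnemar_chi2(b, c):
    """McNemar's chi-squared statistic (1 degree of freedom)."""
    return (b - c) ** 2 / (b + c)

b, c = 20, 10   # 20 pairs: case exposed, control not; 10 the reverse
print(matched_pairs_or(b, c))        # 2.0
print(round(mcnemar_chi2(b, c), 2))  # 3.33
```

Concordant pairs (both exposed or both unexposed) contribute no information to the matched odds ratio, which is why ignoring the matching can bias the crude estimate.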

12 (d). Cross-sectional study: If applicable, describe analytical methods taking account of sampling strategy.


‘‘The standard errors (SE) were calculated using the Taylor expansion method to estimate the sampling errors of estimators based on the complex sample design. (...) The overall design effect for diastolic blood pressure was found to be 1.9 for men and 1.8 for women and, for systolic blood pressure, it was 1.9 for men and 2.0 for women’’ [118].


Most cross-sectional studies use a pre-specified sampling strategy to select participants from a source population. Sampling may be more complex than taking a simple random sample, however. It may include several stages and clustering of participants (e.g., in districts or villages). Proportionate stratification may ensure that subgroups with a specific characteristic are correctly represented. Disproportionate stratification may be useful to over-sample a subgroup of particular interest.

An estimate of association derived from a complex sample may be more or less precise than that derived from a simple random sample. Measures of precision such as standard error or confidence interval should be corrected using the design effect, a ratio measure that describes how much precision is gained or lost if a more complex sampling strategy is used instead of simple random sampling [119]. Most complex sampling techniques lead to a decrease of precision, resulting in a design effect greater than 1.

We advise that authors clearly state the method used to adjust for complex sampling strategies so that readers may understand how the chosen sampling method influenced the precision of the obtained estimates. For instance, with clustered sampling, the implicit trade-off between easier data collection and loss of precision is transparent if the design effect is reported. In the example, the calculated design effect of 1.9 for men indicates that the actual sample size would need to be 1.9 times greater than with simple random sampling for the resulting estimates to have equal precision.
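The arithmetic behind the design-effect statement above can be sketched as follows; the helper names are our own, and the design effect of 1.9 is taken from the example.

```python
# Sketch: how a design effect relates a complex sample to an
# equivalent simple random sample.

def effective_sample_size(n, deff):
    """Simple-random-sample size with equivalent precision."""
    return n / deff

def corrected_se(se_srs, deff):
    """Standard error inflated for the complex design."""
    return se_srs * deff ** 0.5

# A complex sample of 1900 men with a design effect of 1.9 yields
# estimates only as precise as a simple random sample of 1000:
print(round(effective_sample_size(1900, 1.9)))  # 1000
```

Equivalently, standard errors computed as if the sample were simple random must be multiplied by the square root of the design effect.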

12 (e). Describe any sensitivity analyses.


‘‘Because we had a relatively higher proportion of ‘missing’ dead patients with insufficient data (38/148=25.7%) as compared to live patients (15/437=3.4%) (...), it is possible that this might have biased the results. We have, therefore, carried out a sensitivity analysis. We have assumed that the proportion of women using oral contraceptives in the study group applies to the whole (19.1% for dead, and 11.4% for live patients), and then applied two extreme scenarios: either all the exposed missing patients used second generation pills or they all used third-generation pills’’ [120].


Sensitivity analyses are useful to investigate whether or not the main results are consistent with those obtained with alternative analysis strategies or assumptions [121]. Issues that may be examined include the criteria for inclusion in analyses, the definitions of exposures or outcomes [122], which confounding variables merit adjustment, the handling of missing data [120, 123], possible selection bias or bias from inaccurate or inconsistent measurement of exposure, disease and other variables, and specific analysis choices, such as the treatment of quantitative variables (see item 11). Sophisticated methods are increasingly used to simultaneously model the influence of several biases or assumptions [124–126].

In 1959 Cornfield et al. famously showed that a relative risk of 9 for cigarette smoking and lung cancer was extremely unlikely to be due to any conceivable confounder, since the confounder would need to be at least nine times as prevalent in smokers as in non-smokers [127]. This analysis did not rule out the possibility that such a factor was present, but it did identify the prevalence such a factor would need to have. The same approach was recently used to identify plausible confounding factors that could explain the association between childhood leukaemia and living near electric power lines [128]. More generally, sensitivity analyses can be used to identify the degree of confounding, selection bias, or information bias required to distort an association. One important, perhaps under recognised, use of sensitivity analysis is when a study shows little or no association between an exposure and an outcome and it is plausible that confounding or other biases toward the null are present.
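Cornfield's argument can be illustrated numerically. Under a common simplification (a single binary confounder that fully explains the association), the largest apparent relative risk the confounder can produce is bounded by its prevalence ratio between exposed and unexposed; the formula and parameter names below follow that standard sensitivity-analysis setup, not the cited papers' exact computations.

```python
# Numerical sketch of Cornfield's bound: p1 and p0 are the confounder
# prevalences among exposed and unexposed, gamma is the
# confounder-outcome relative risk.

def max_spurious_rr(p1, p0, gamma):
    """Largest apparent exposure-outcome RR producible by confounding alone."""
    return (1 + p1 * (gamma - 1)) / (1 + p0 * (gamma - 1))

# Even an extremely strong confounder cannot push the apparent RR
# past the prevalence ratio p1/p0 (here 0.9/0.1 = 9):
print(round(max_spurious_rr(0.9, 0.1, 1e9), 3))  # 9.0
for gamma in (2, 10, 100):
    assert max_spurious_rr(0.9, 0.1, gamma) < 0.9 / 0.1
```

This is why an observed relative risk of 9 requires any fully explanatory confounder to be at least nine times as prevalent in smokers as in non-smokers.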


The Results section should give a factual account of what was found, from the recruitment of study participants, the description of the study population to the main results and ancillary analyses. It should be free of interpretations and discursive text reflecting the authors’ views and opinions.

Participants

13 (a). Report the numbers of individuals at each stage of the study—e.g., numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analysed.


‘‘Of the 105 freestanding bars and taverns sampled, 13 establishments were no longer in business and 9 were located in restaurants, leaving 83 eligible businesses. In 22 cases, the owner could not be reached by telephone despite 6 or more attempts. The owners of 36 bars declined study participation. (...) The 25 participating bars and taverns employed 124 bartenders, with 67 bartenders working at least 1 weekly daytime shift. Fifty-four of the daytime bartenders (81%) completed baseline interviews and spirometry; 53 of these subjects (98%) completed follow-up‘‘ [129].


Detailed information on the process of recruiting study participants is important for several reasons. Those included in a study often differ in relevant ways from the target population to which results are applied. This may result in estimates of prevalence or incidence that do not reflect the experience of the target population. For example, people who agreed to participate in a postal survey of sexual behaviour attended church less often, had less conservative sexual attitudes and earlier age at first sexual intercourse, and were more likely to smoke cigarettes and drink alcohol than people who refused [130]. These differences suggest that postal surveys may overestimate sexual liberalism and activity in the population. Such response bias (see Box 3) can distort exposure-disease associations if associations differ between those eligible for the study and those included in the study. As another example, the association between young maternal age and leukaemia in offspring, which has been observed in some case-control studies [131, 132], was explained by differential participation of young women in case and control groups. Young women with healthy children were less likely to participate than those with unhealthy children [133]. Although low participation does not necessarily compromise the validity of a study, transparent information on participation and reasons for non-participation is essential. Also, as there are no universally agreed definitions for participation, response or follow-up rates, readers need to understand how authors calculated such proportions [134].

Ideally, investigators should give an account of the numbers of individuals considered at each stage of recruiting study participants, from the choice of a target population to the inclusion of participants’ data in the analysis. Depending on the type of study, this may include the number of individuals considered to be potentially eligible, the number assessed for eligibility, the number found to be eligible, the number included in the study, the number examined, the number followed up and the number included in the analysis. Information on different sampling units may be required, if sampling of study participants is carried out in two or more stages as in the example above (multistage sampling). In case-control studies, we advise that authors describe the flow of participants separately for case and control groups [135]. Controls can sometimes be selected from several sources, including, for example, hospitalised patients and community dwellers. In this case, we recommend a separate account of the numbers of participants for each type of control group. Olson and colleagues proposed useful reporting guidelines for controls recruited through random-digit dialling and other methods [136].

A recent survey of epidemiological studies published in 10 general epidemiology, public health and medical journals found that some information regarding participation was provided in 47 of 107 case-control studies (59%), 49 of 154 cohort studies (32%), and 51 of 86 cross-sectional studies (59%) [137]. Incomplete or absent reporting of participation and non-participation in epidemiological studies was also documented in two other surveys of the literature [4, 5]. Finally, there is evidence that participation in epidemiological studies may have declined in recent decades [137, 138], which underscores the need for transparent reporting [139].

13 (b). Give reasons for non-participation at each stage.


‘‘The main reasons for non-participation were the participant was too ill or had died before interview (cases 30%, controls < 1%), nonresponse (cases 2%, controls 21%), refusal (cases 10%, controls 29%), and other reasons (refusal by consultant or general practitioner, non-English speaking, mental impairment) (cases 7%, controls 5%)’’ [140].


Explaining the reasons why people no longer participated in a study or why they were excluded from statistical analyses helps readers judge whether the study population was representative of the target population and whether bias was possibly introduced. For example, in a cross-sectional health survey, non-participation due to reasons unlikely to be related to health status (for example, the letter of invitation was not delivered because of an incorrect address) will affect the precision of estimates but will probably not introduce bias. Conversely, if many individuals opt out of the survey because of illness, or perceived good health, results may underestimate or overestimate the prevalence of ill health in the population.

13 (c). Consider use of a flow diagram.






An informative and well-structured flow diagram can readily and transparently convey information that might otherwise require a lengthy description [142], as in the example above. The diagram may usefully include the main results, such as the number of events for the primary outcome. While we recommend the use of a flow diagram, particularly for complex observational studies, we do not propose a specific format for the diagram.

Descriptive data:

14 (a). Give characteristics of study participants (e.g., demographic, clinical, social) and information on exposures and potential confounders.



Table. Characteristics of the Study Base at Enrolment, Castellana G (Italy), 1985-1986

|                            | n = 1458    | n = 511    | n = 513    |
|----------------------------|-------------|------------|------------|
| Sex (%)                    |             |            |            |
|   Men                      | 936 (64%)   | 296 (58%)  | 197 (39%)  |
|   Women                    | 522 (36%)   | 215 (42%)  | 306 (61%)  |
| Mean age at enrolment (SD) | 45.7 (10.5) | 52.0 (9.7) | 52.5 (9.8) |
| Daily alcohol intake (%)   |             |            |            |
|   None                     | 250 (17%)   | 129 (25%)  | 119 (24%)  |
|   Moderate^a               | 853 (59%)   | 272 (53%)  | 293 (58%)  |
|   Excessive^b              | 355 (24%)   | 110 (22%)  | 91 (18%)   |

HCV, Hepatitis C virus.

^a Males <60 g ethanol/day, females <30 g ethanol/day.

^b Males >60 g ethanol/day, females >30 g ethanol/day.

Table adapted from Osella et al. [143].



Readers need descriptions of study participants and their exposures to judge the generalisability of the findings. Information about potential confounders, including whether and how they were measured, influences judgments about study validity. We advise authors to summarize continuous variables for each study group by giving the mean and standard deviation, or when the data have an asymmetrical distribution, as is often the case, the median and percentile range (e.g., 25th and 75th percentiles). Variables that make up a small number of ordered categories (such as stages of disease I to IV) should not be presented as continuous variables; it is preferable to give numbers and proportions for each category (see also Box 4). In studies that compare groups, the descriptive characteristics and numbers should be given by group, as in the example above.

Inferential measures such as standard errors and confidence intervals should not be used to describe the variability of characteristics, and significance tests should be avoided in descriptive tables. Also, P values are not an appropriate criterion for selecting which confounders to adjust for in analysis; even small differences in a confounder that has a strong effect on the outcome can be important [144, 145].

In cohort studies, it may be useful to document how an exposure relates to other characteristics and potential confounders. Authors could present this information in a table with columns for participants in two or more exposure categories, which permits readers to judge the differences in confounders between these categories.

In case-control studies potential confounders cannot be judged by comparing cases and controls. Control persons represent the source population and will usually be different from the cases in many respects. For example, in a study of oral contraceptives and myocardial infarction, a sample of young women with infarction more often had risk factors for that disease, such as high serum cholesterol, smoking and a positive family history, than the control group [146]. This does not influence the assessment of the effect of oral contraceptives, as long as the prescription of oral contraceptives was not guided by the presence of these risk factors—e.g., because the risk factors were only established after the event (see also Box 5). In case-control studies the equivalent of comparing exposed and non-exposed for the presence of potential confounders (as is done in cohorts) can be achieved by exploring the source population of the cases: if the control group is large enough and represents the source population, exposed and unexposed controls can be compared for potential confounders [121, 147].

14 (b). Indicate the number of participants with missing data for each variable of interest.



Table. Symptom End Points Used in Survival Analysis

|                   |            | Short of Breath |            |
|-------------------|------------|-----------------|------------|
| Symptom resolved  | 201 (79%)  | 138 (54%)       | 171 (67%)  |
|                   | 27 (10%)   | 21 (8%)         | 24 (9%)    |
| Never symptomatic |            | 46 (18%)        | 11 (4%)    |
| Data missing      | 28 (11%)   | 51 (20%)        | 50 (20%)   |
| Total             | 256 (100%) | 256 (100%)      | 256 (100%) |

Table adapted from Hay et al. [141].



As missing data may bias or affect generalisability of results, authors should tell readers amounts of missing data for exposures, potential confounders, and other important characteristics of patients (see also item 12c and Box 6). In a cohort study, authors should report the extent of loss to follow-up (with reasons), since incomplete follow-up may bias findings (see also items 12d and 13) [148]. We advise authors to use their tables and figures to enumerate amounts of missing data.

14 (c). Cohort study: Summarise follow-up time—e.g., average and total amount.


‘‘During the 4366 person-years of follow-up (median 5.4, maximum 8.3 years), 265 subjects were diagnosed as having dementia, including 202 with Alzheimer’s disease’’ [149].


Readers need to know the duration and extent of follow-up for the available outcome data. Authors can present a summary of the average follow-up with either the mean or median follow-up time or both. The mean allows a reader to calculate the total number of person-years by multiplying it with the number of study participants. Authors also may present minimum and maximum times or percentiles of the distribution to show readers the spread of follow-up times. They may report total person-years of follow-up or some indication of the proportion of potential data that was captured [148]. All such information may be presented separately for participants in two or more exposure categories. Almost half of 132 articles in cancer journals (mostly cohort studies) did not give any summary of length of follow-up [37].
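The relation between events, person-years, and rates described above is simple arithmetic; the numbers below are taken from the dementia example, and the function name is our own.

```python
# Arithmetic sketch: an incidence rate from events and person-years.

def incidence_rate_per_1000(events, person_years):
    """Events per 1000 person-years of follow-up."""
    return 1000 * events / person_years

# 265 dementia diagnoses over 4366 person-years of follow-up:
rate = incidence_rate_per_1000(265, 4366)
print(round(rate, 1))  # 60.7
```

Conversely, as noted in the text, total person-years can be recovered from a reported mean follow-up time multiplied by the number of participants.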

Outcome data:

Cohort study: Report numbers of outcome events or summary measures over time.



Table. Rates of HIV-1 Seroconversion by Selected Sociodemographic Variables: 1990-1993

|               | Person-Years (95% CI) |
|---------------|-----------------------|
| Calendar year |                       |
|               | 8.2 (4.4–12.0)        |
|               | 6.9 (4.0–9.7)         |
|               | 5.7 (3.1–8.3)         |
|               | 8.9 (5.5–12.4)        |
|               | 4.5 (0.6–8.5)         |
|               | 5.7 (4.1–7.3)         |
| Other Ugandan | 15.6 (5.4–25.7)       |
|               | 6.9 (3.5–10.3)        |
| Other tribe   | 13.9 (6.0–21.7)       |
|               | 2.7 (0.9–4.5)         |
|               | 8.6 (6.6–10.5)        |

CI, confidence interval.

Table adapted from Kengeya-Kayondo et al. [150].


Case-control study: Report numbers in each exposure category, or summary measures of exposure.



Table. Exposure among Liver Cirrhosis Cases and Controls

|                                                           | Cases    | Controls  |
|-----------------------------------------------------------|----------|-----------|
| Vinyl chloride monomer (cumulative exposure: ppm × years) |          |           |
|                                                           | 7 (18%)  | 38 (27%)  |
|                                                           | 7 (18%)  | 40 (29%)  |
|                                                           | 9 (23%)  | 37 (27%)  |
|                                                           | 17 (43%) | 24 (17%)  |
| Alcohol consumption (g/day)                               |          |           |
|                                                           | 1 (3%)   | 82 (59%)  |
|                                                           | 7 (18%)  | 46 (33%)  |
|                                                           | 32 (80%) | 11 (8%)   |
| HBsAg and/or HCV                                          |          |           |
|   Negative                                                | 33 (83%) | 136 (98%) |
|   Positive                                                | 7 (18%)  | 3 (2%)    |

HBsAG, hepatitis B surface antigen; HCV, hepatitis C virus.

Table adapted from Mastrangelo et al. [151].

Cross-sectional study: Report numbers of outcome events or summary measures.



Table. Prevalence of Current Asthma and Diagnosed Hay Fever by Average Alternaria alternata Antigen Level in the Household

| Alternaria Level* | Current Asthma, % (95% CI) | Diagnosed Hay Fever, % (95% CI) |
|---|---|---|
| 1st tertile | 4.8 (3.3–6.9) | 16.4 (13.0–20.5) |
| 2nd tertile | 7.5 (5.2–10.6) | 17.1 (12.8–22.5) |
| 3rd tertile | 8.7 (6.7–11.3) | 15.2 (12.1–18.9) |

*1st tertile < 3.90 µg/g; 2nd tertile 3.90–6.27 µg/g; 3rd tertile ≥ 6.28 µg/g.

Percentage (95% CI) weighted for the multistage sampling design of the National Survey of Lead and Allergens in Housing.

Table adapted from Salo et al. [152].



Before addressing the possible association between exposures (risk factors) and outcomes, authors should report relevant descriptive data. It may be possible and meaningful to present measures of association in the same table that presents the descriptive data (see item 14a). In a cohort study with events as outcomes, report the numbers of events for each outcome of interest. Consider reporting the event rate per person-year of follow-up. If the risk of an event changes over follow-up time, present the numbers and rates of events in appropriate intervals of follow-up or as a Kaplan-Meier life table or plot. It might be preferable to show plots as cumulative incidence that go up from 0% rather than down from 100%, especially if the event rate is lower than, say, 30% [153]. Consider presenting such information separately for participants in different exposure categories of interest. If a cohort study is investigating other time-related outcomes (e.g., quantitative disease markers such as blood pressure), present appropriate summary measures (e.g., means and standard deviations) over time, perhaps in a table or figure.
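As a toy illustration of two of the summaries just mentioned — an event rate per person-year and a cumulative incidence curve that rises from 0% rather than a survival curve falling from 100% — here is a minimal sketch with invented cohort data (one record per participant, no competing risks):

```python
# Hypothetical cohort: (follow-up time in years, event occurred) per participant.
data = [(2.5, True), (5.0, False), (1.2, True), (4.8, False),
        (3.3, True), (5.0, False), (0.9, True), (4.1, False)]

events = sum(1 for _, had_event in data if had_event)
person_years = sum(t for t, _ in data)
rate_per_100_py = 100 * events / person_years  # events per 100 person-years

# Cumulative incidence as the complement of a simple Kaplan-Meier survival
# estimate; plotting 1 - S(t) makes the curve rise from 0%.
surv = 1.0
cum_inc = []                       # (time, cumulative incidence) pairs
at_risk = len(data)
for t, had_event in sorted(data):  # process subjects in time order
    if had_event:
        surv *= (1 - 1 / at_risk)
    cum_inc.append((t, 1 - surv))
    at_risk -= 1                   # subject leaves the risk set

print(f"{events} events over {person_years:.1f} person-years "
      f"({rate_per_100_py:.1f} per 100 person-years)")
```

In a real analysis one would stratify these summaries by exposure category and present them in appropriate intervals of follow-up, as the text recommends.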

For cross-sectional studies, we recommend presenting the same type of information on prevalent outcome events or summary measures. For case-control studies, the focus will be on reporting exposures separately for cases and controls as frequencies or quantitative summaries [154]. For all designs, it may be helpful also to tabulate continuous outcomes or exposures in categories, even if the data are not analysed as such.
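The last recommendation — tabulating a continuous exposure in categories, separately for cases and controls — can be sketched as follows; the records and category bounds are invented for illustration, not taken from any cited study:

```python
# Hypothetical case-control records: (alcohol intake in g/day, is_case).
records = [(5, True), (80, True), (120, True), (10, False),
           (45, False), (70, False), (15, False), (95, True)]

# Illustrative category bounds for the continuous exposure.
bands = [(0, 30, "0-29"), (30, 60, "30-59"), (60, float("inf"), "60+")]

def band_label(value):
    """Return the label of the band containing value."""
    for lo, hi, label in bands:
        if lo <= value < hi:
            return label
    raise ValueError(f"value out of range: {value}")

# Counts per exposure category, tabulated separately for cases and controls.
counts = {}
for value, is_case in records:
    key = (band_label(value), "case" if is_case else "control")
    counts[key] = counts.get(key, 0) + 1
```

Even when the analysis keeps the exposure continuous, such a cross-tabulation gives readers a direct view of the raw data.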

  16. Main results:

16 (a). Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (e.g., 95% confidence intervals). Make clear which confounders were adjusted for and why they were included.

Example 1

‘‘We initially considered the following variables as potential confounders by Mantel-Haenszel stratified analysis: (...) The variables we included in the final logistic regression models were those (...) that produced a 10% change in the odds ratio after the Mantel-Haenszel adjustment” [155].
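A minimal sketch of the change-in-estimate approach quoted above, using the standard Mantel-Haenszel odds ratio with invented stratum counts (the 10% threshold is the one the quoted authors chose):

```python
# Hypothetical 2x2 tables (exposed cases a, exposed controls b, unexposed
# cases c, unexposed controls d), stratified by a candidate confounder.
strata = [(10, 20, 5, 40), (30, 10, 20, 15)]

# Crude odds ratio from the collapsed (unstratified) table.
a = sum(s[0] for s in strata); b = sum(s[1] for s in strata)
c = sum(s[2] for s in strata); d = sum(s[3] for s in strata)
crude_or = (a * d) / (b * c)

# Mantel-Haenszel stratum-adjusted odds ratio.
num = sum(ai * di / (ai + bi + ci + di) for ai, bi, ci, di in strata)
den = sum(bi * ci / (ai + bi + ci + di) for ai, bi, ci, di in strata)
mh_or = num / den

# Change-in-estimate rule: retain the variable as a confounder if adjustment
# moves the odds ratio by more than 10%. With these invented counts the
# change is only about 3%, so the variable would not be retained.
include = abs(mh_or - crude_or) / crude_or > 0.10
```

Reporting which variables were screened this way, and the threshold used, is exactly the kind of transparency item 16(a) asks for.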

Example 2


Table. Relative Rates of Rehospitalisation by Treatment in Patients in Community Care after First Hospitalisation due to Schizophrenia and Schizoaffective Disorder

[The table reports, for each treatment, the number of relapses and adjusted relative rates of rehospitalisation with 95% CIs; the treatment labels and point estimates were lost in extraction, so the body of the table is not reproduced here.]

Table adapted from Tiihonen et al. [156].


About the authors

Jan P. Vandenbroucke

Leiden University Medical Center

ORCID iD: 0000-0001-5668-6716

Department of Clinical Epidemiology

Netherlands, Leiden

Erik von Elm

University of Bern; University Medical Centre

ORCID iD: 0000-0002-7412-0406

Institute of Social & Preventive Medicine (ISPM) of the University of Bern; Department of Medical Biometry and Medical Informatics of the University Medical Centre

Switzerland, Bern; Freiburg, Germany

Douglas G. Altman

Cancer Research UK/NHS Centre for Statistics in Medicine

ORCID iD: 0000-0002-7183-4083
United Kingdom, Oxford

Peter C. Gotzsche

Nordic Cochrane Centre, Rigshospitalet

Denmark, Copenhagen

Cynthia D. Mulrow

University of Texas Health Science Center

ORCID iD: 0000-0002-4768-4492
United States, San Antonio

Stuart J. Pocock

London School of Hygiene and Tropical Medicine

ORCID iD: 0000-0003-2212-4007

Medical Statistics Unit

United Kingdom, London

Charles Poole

University of North Carolina School of Public Health


Department of Epidemiology

United States, Chapel Hill

James J. Schlesselman

University of Pittsburgh Graduate School of Public Health; University of Pittsburgh Cancer Institute


Department of Biostatistics

United States, Pittsburgh; Pittsburgh

Matthias Egger

University of Bern; University of Bristol

Author for correspondence.
ORCID iD: 0000-0001-7462-5132

Institute of Social & Preventive Medicine (ISPM) of the University of Bern; Department of Social Medicine of the University of Bristol

Bern, Switzerland; Bristol, United Kingdom


  1. Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ. 2004;328(7430):39–41. doi: 10.1136/bmj.328.7430.39
  2. Funai EF, Rosenbush EJ, Lee MJ, Del Priore G. Distribution of study designs in four major US journals of obstetrics and gynecology. Gynecol Obstet Invest. 2001;51(1):8–11. doi: 10.1159/000052882
  3. Scales CD, Norris RD, Peterson BL, et al. Clinical research and statistical methods in the urology literature. J Urol. 2005;174(4 Pt 1):1374–1379. doi: 10.1097/01.ju.0000173640.91654.b5
  4. Pocock SJ, Collier TJ, Dandreo KJ, et al. Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ. 2004;329(7471):883. doi: 10.1136/bmj.38250.571088.55
  5. Tooth L, Ware R, Bain C, et al. Quality of reporting of observational longitudinal research. Am J Epidemiol. 2005;161(3):280–288. doi: 10.1093/aje/kwi042
  6. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Epidemiology. 2007;18(6):800–804. doi: 10.1097/EDE.0b013e3181577654
  7. Mihailovic A, Bell CM, Urbach DR. Users’ guide to the surgical literature. Case-control studies in surgical journals. Can J Surg. 2005;48(2):148–151.
  8. Rushton L. Reporting of occupational and environmental research: use and misuse of statistical and epidemiological methods. Occup Environ Med. 2000;57(1):1–9. doi: 10.1136/oem.57.1.1
  9. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990;1(1):43–46. doi: 10.1097/00001648-199001000-00010
  10. Moonesinghe R, Khoury MJ, Janssens AC. Most published research findings are false-but a little replication goes a long way. PLoS Med. 2007;4(2):e28. doi: 10.1371/journal.pmed.0040028
  11. Jenicek M. Clinical Case Reporting. Evidence-Based Medicine. Oxford: Butterworth-Heinemann; 1999. 117 p.
  12. Vandenbroucke JP. In defense of case reports and case series. Ann Intern Med. 2001;134(4):330–334. doi: 10.7326/0003-4819-134-4-200102200-00017
  13. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative. Ann Intern Med. 2003;138(1):40–44. doi: 10.7326/0003-4819-138-1-200301070-00010
  14. McShane LM, Altman DG, Sauerbrei W, et al. REporting recommendations for tumour MARKer prognostic studies (REMARK). Br J Cancer. 2005;93(4):387–391. doi: 10.1038/sj.bjc.6602678
  15. Ioannidis JP, Gwinn M, Little J, et al. A road map for efficient and reliable human genome epidemiology. Nat Genet. 2006;38(1):3–5. doi: 10.1038/ng0106-3
  16. Rodrigues L, Kirkwood BR. Case-control designs in the study of common diseases: updates on the demise of the rare disease assumption and the choice of sampling scheme for controls. Int J Epidemiol. 1990;19(1):205–213. doi: 10.1093/ije/19.1.205
  17. Rothman KJ, Greenland S. Case-Control Studies. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Philadelphia: Lippincott Raven; 1998. Р. 93–114.
  18. Forand SP. Leukaemia incidence among workers in the shoe and boot manufacturing industry: a case-control study. Environ Health. 2004;3(1):7. doi: 10.1186/1476-069X-3-7
  19. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342(25):1878–1886. doi: 10.1056/NEJM200006223422506
  20. Gøtzsche PC, Harden A. Searching for non-randomised studies. Draft chapter 3. Cochrane Non-Randomised Studies Methods Group; 2002. Available from: Accessed 10 September 2007.
  21. Lohse N, Hansen AB, Pedersen G, et al. Survival of persons with and without HIV infection in Denmark, 1995-2005. Ann Intern Med. 2007;146(2):87–95. doi: 10.7326/0003-4819-146-2-200701160-00003
  22. American Journal of Epidemiology. 2007. Information for authors. Available from: Accessed 10 September 2007.
  23. Haynes RB, Mulrow CD, Huth EJ, et al. More informative abstracts revisited. Ann Intern Med. 1990;113(1):69–76. doi: 10.7326/0003-4819-113-1-69
  24. Taddio A, Pain T, Fassos FF, et al. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ. 1994;150(10):1611–1615.
  25. Hartley J, Sydes M. Which layout do you prefer? An analysis of readers’ preferences for different typographic layouts of structured abstracts. J Inform Sci. 1996;22(1):27–37. doi: 10.1177/016555159602200103
  26. Viner RM, Cole TJ. Adult socioeconomic, educational, social, and psychological outcomes of childhood obesity: a national birth cohort study. BMJ. 2005;330(7504):1354. doi: 10.1136/bmj.38453.422049.E0
  27. McCauley J, Kern DE, Kolodner K, et al. The “battering syndrome”: prevalence and clinical characteristics of domestic violence in primary care internal medicine practices. Ann Intern Med. 1995;123(10):737–746. doi: 10.7326/0003-4819-123-10-199511150-00001
  28. McEvoy SP, Stevenson MR, McCartt AT, et al. Role of mobile phones in motor vehicle crashes resulting in hospital attendance: a case-crossover study. BMJ. 2005;331(7514):428. doi: 10.1136/bmj.38537.397512.55
  29. Vandenbroucke JP. Prospective or retrospective: what’s in a name? BMJ. 1991;302(6771):249–250. doi: 10.1136/bmj.302.6771.249
  30. Last JM. A Dictionary of Epidemiology. New York: Oxford University Press; 2000.
  31. Miettinen OS. Theoretical Epidemiology: principles of occurrence research in medicine. New York: Wiley; 1985. Р. 64–66.
  32. Rothman KJ, Greenland S. Types of Epidemiologic Studies. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Lippincott Raven; 1998. Р. 74–75.
  33. MacMahon B, Trichopoulos D. Epidemiology, principles and methods. 2nd ed. Boston: Little, Brown; 1996. 81 p. doi: 10.1016/S0033-3506(97)00047-4
  34. Lilienfeld AM. Foundations of Epidemiology. New York: Oxford University Press; 1976.
  35. Ridker PM, Hennekens CH, Lindpaintner K, et al. Mutation in the gene coding for coagulation factor V and the risk of myocardial infarction, stroke, and venous thrombosis in apparently healthy men. N Engl J Med. 1995;332(14):912–917. doi: 10.1056/NEJM199504063321403
  36. Goodman KJ, O’Rourke K, Day RS, et al. Dynamics of Helicobacter pylori infection in a US-Mexico cohort during the first two years of life. Int J Epidemiol. 2005;34(6):1348–1355. doi: 10.1093/ije/dyi152
  37. Altman DG, De Stavola BL, Love SB, Stepniewska KA. Review of survival analyses published in cancer journals. Br J Cancer. 1995;72(2): 511–518. doi: 10.1038/bjc.1995.364
  38. Cerhan JR, Wallace RB, Folsom AR, et al. Transfusion history and cancer risk in older women. Ann Intern Med. 1993;119(1):8–15. doi: 10.7326/0003-4819-119-1-199307010-00002
  39. Freeman LE, Dennis LK, Lynch CF, et al. Toenail arsenic content and cutaneous melanoma in Iowa. Am J Epidemiol. 2004;160(7):679–687. doi: 10.1093/aje/kwh267
  40. Canto JG, Allison JJ, Kiefe C, et al. Relation of race and sex to the use of reperfusion therapy in Medicare beneficiaries with acute myocardial infarction. N Engl J Med. 2000;342(15):1094–1100. doi: 10.1056/NEJM200004133421505
  41. Metzkor-Cotter E, Kletter Y, Avidor B, et al. Long-term serological analysis and clinical follow-up of patients with cat scratch disease. Clin Infect Dis. 2003;37(9):1149–1154. doi: 10.1086/378738
  42. Johnson ES. Bias on withdrawing lost subjects from the analysis at the time of loss, in cohort mortality studies, and in follow-up methods. J Occup Med. 1990;32(3):250–254. doi: 10.1097/00043764-199003000-00013
  43. Berkson J. Limitations of the application of fourfold table analysis to hospital data. Biom Bull. 1946;2(3):47. doi: 10.2307/3002000
  44. Feinstein AR, Walter SD, Horwitz RI. An analysis of Berkson’s bias in case-control studies. J Chronic Dis. 1986;39(7):495–504. doi: 10.1016/0021-9681(86)90194-3
  45. Jick H, Vessey MP. Case-control studies in the evaluation of drug-induced illness. Am J Epidemiol. 1978;107(1):1–7. doi: 10.1093/oxfordjournals.aje.a112502
  46. Hackam DG, Mamdani M, Li P, Redelmeier DA. Statins and sepsis in patients with cardiovascular disease: a population-based cohort analysis. Lancet. 2006;367(9508):413–418. doi: 10.1016/S0140-6736(06)68041-0
  47. Smeeth L, Cook C, Fombonne E, et al. MMR vaccination and pervasive developmental disorders: a case-control study. Lancet. 2004;364(9438):963–969. doi: 10.1016/S0140-6736(04)17020-7
  48. Costanza MC. Matching. Prev Med. 1995;24(5):425–433. doi: 10.1006/pmed.1995.1069
  49. Sturmer T, Brenner H. Flexible matching strategies to increase power and efficiency to detect and estimate gene-environment interactions in case-control studies. Am J Epidemiol. 2002;155(7):593–602. doi: 10.1093/aje/155.7.593
  50. Rothman KJ, Greenland S. Matching. In: Rothman KJ, Greenland S, editors. 2nd ed. Modern epidemiology. Lippincott Raven; 1998. Р. 147–161.
  51. Szklo MF, Nieto J. Epidemiology, Beyond the Basics. Sudbury (MA): Jones and Bartlett; 2000. Р. 40–51.
  52. Cole P, MacMahon B. Attributable risk percent in case-control studies. Br J Prev Soc Med. 1971;25(4):242–244. doi: 10.1136/jech.25.4.242
  53. Gissler M, Hemminki E. The danger of overmatching in studies of the perinatal mortality and birthweight of infants born after assisted conception. Eur J Obstet Gynecol Reprod Biol. 1996;69(2):73–75. doi: 10.1016/0301-2115(95)02517-0
  54. Gefeller O, Pfahlberg A, Brenner H, Windeler J. An empirical investigation on matching in published case-control studies. Eur J Epidemiol. 1998;14(4):321–325. doi: 10.1023/A:1007497104800
  55. Artama M, Ritvanen A, Gissler M, et al. Congenital structural anomalies in offspring of women with epilepsy-a population-based cohort study in Finland. Int J Epidemiol. 2006;35(2):280–287. doi: 10.1093/ije/dyi234
  56. Ebrahim S. Cohorts, infants and children. Int J Epidemiol. 2004;33(6):1165–1166. doi: 10.1093/ije/dyh368
  57. Walker M, Whincup PH, Shaper AG. The British Regional Heart Study 1975-2004. Int J Epidemiol. 2004;33(6):1185–1192. doi: 10.1093/ije/dyh295
  58. Wieland S, Dickersin K. Selective exposure reporting and Medline indexing limited the search sensitivity for observational studies of the adverse effects of oral contraceptives. J Clin Epidemiol. 2005;58(6):560–567. doi: 10.1016/j.jclinepi.2004.11.018
  59. Anderson HR, Atkinson RW, Peacock JL, et al. Ambient particulate matter and health effects: publication bias in studies of short-term associations. Epidemiology. 2005;16(2):155–163. doi: 10.1097/01.ede.0000152528.22746.0f
  60. Winkelmayer WC, Stampfer MJ, Willett WC, Curhan GC. Habitual caffeine intake and the risk of hypertension in women. JAMA. 2005;294(18):2330–2335. doi: 10.1001/jama.294.18.2330
  61. Lukanova A, Soderberg S, Kaaks R, et al. Serum adiponectin is not associated with risk of colorectal cancer. Cancer Epidemiol Biomarkers Prev. 2006;15(2):401–402. doi: 10.1158/1055-9965.EPI-05-0836
  62. Becher H. The concept of residual confounding in regression models and some applications. Stat Med. 1992;11(13):1747–1758. doi: 10.1002/sim.4780111308
  63. Brenner H, Blettner M. Controlling for continuous confounders in epidemiologic research. Epidemiology. 1997;8(4):429–434. doi: 10.1097/00001648-199707000-00014
  64. Phillips MR, Yang G, Zhang Y, et al. Risk factors for suicide in China: a national case-control psychological autopsy study. Lancet. 2002;360(9347):1728–1736. doi: 10.1016/S0140-6736(02)11681-3
  65. Pasquale LR, Kang JH, Manson JE, et al. Prospective study of type 2 diabetes mellitus and risk of primary open-angle glaucoma in women. Ophthalmology. 2006;113(7):1081–1086. doi: 10.1016/j.ophtha.2006.01.066
  66. Craig SL, Feinstein AR. Antecedent therapy versus detection bias as causes of neoplastic multimorbidity. Am J Clin Oncol. 1999;22(1):51–56. doi: 10.1097/00000421-199902000-00013
  67. Rogler LH, Mroczek DK, Fellows M, Loftus ST. The neglect of response bias in mental health research. J Nerv Ment Dis. 2001;189(3):182–187. doi: 10.1097/00005053-200103000-00007
  68. Murphy EA. The logic of medicine. Baltimore: Johns Hopkins University Press; 1976.
  69. Sackett DL. Bias in analytic research. J Chronic Dis. 1979;32(1-2):51–63. doi: 10.1016/0021-9681(79)90012-2
  70. Johannes CB, Crawford SL, McKinlay JB. Interviewer effects in a cohort study. Results from the Massachusetts Women’s Health Study. Am J Epidemiol. 1997;146(5):429–438. doi: 10.1093/oxfordjournals.aje.a009296
  71. Bloemenkamp KW, Rosendaal FR, Buller HR, et al. Risk of venous thrombosis with use of current low-dose oral contraceptives is not explained by diagnostic suspicion and referral bias. Arch Intern Med. 1999;159(1):65–70. doi: 10.1001/archinte.159.1.65
  72. Feinstein AR. Clinical epidemiology: the architecture of clinical research. Philadelphia: W.B. Saunders; 1985.
  73. Yadon ZE, Rodrigues LC, Davies CR, Quigley MA. Indoor and peridomestic transmission of American cutaneous leishmaniasis in northwestern Argentina: a retrospective case-control study. Am J Trop Med Hyg. 2003;68(5): 519–526. doi: 10.4269/ajtmh.2003.68.519
  74. Anoop S, Saravanan B, Joseph A, et al. Maternal depression and low maternal intelligence as risk factors for malnutrition in children: a community based case-control study from South India. Arch Dis Child. 2004;89(4):325–329. doi: 10.1136/adc.2002.009738
  75. Carlin JB, Doyle LW. Sample size. J Paediatr Child Health. 2002;38(3):300–304. doi: 10.1046/j.1440-1754.2002.00855.x
  76. Rigby AS, Vail A. Statistical methods in epidemiology. II: A commonsense approach to sample size estimation. Disabil Rehabil. 1998;20(11):405–410. doi: 10.3109/09638289809166102
  77. Schulz KF, Grimes DA. Sample size calculations in randomised trials: mandatory and mystical. Lancet. 2005;365(9467):1348–1353. doi: 10.1016/S0140-6736(05)61034-3
  78. Drescher K, Timm J, Jockel KH. The design of case-control studies: the effect of confounding on sample size requirements. Stat Med. 1990;9(7):765–776. doi: 10.1002/sim.4780090706
  79. Devine OJ, Smith JM. Estimating sample size for epidemiologic studies: the impact of ignoring exposure measurement uncertainty. Stat Med. 1998;17(12):1375–1389. doi: 10.1002/(SICI)1097-0258(19980630)17:12<1375::AID-SIM857>3.0.CO;2-D
  80. Linn S, Levi L, Grunau PD, et al. Effect measure modification and confounding of severe head injury mortality by age and multiple organ injury severity. Ann Epidemiol. 2007;17(2):142–147. doi: 10.1016/j.annepidem.2006.08.004
  81. Altman DG, Lausen B, Sauerbrei W, Schumacher M. Dangers of using “optimal” cutpoints in the evaluation of prognostic factors. J Natl Cancer Inst. 1994;86(11):829–835. doi: 10.1093/jnci/86.11.829
  82. Royston P, Altman DG, Sauerbrei W. Dichotomizing continuous predictors in multiple regression: a bad idea. Stat Med. 2006;25(1):127–141. doi: 10.1002/sim.2331
  83. Greenland S. Avoiding power loss associated with categorization and ordinal scores in dose-response and trend analysis. Epidemiology. 1995;6(4):450–454. doi: 10.1097/00001648-199507000-00025
  84. Royston P, Ambler G, Sauerbrei W. The use of fractional polynomials to model continuous risk variables in epidemiology. Int J Epidemiol. 1999;28(5):964–974. doi: 10.1093/ije/28.5.964
  85. MacCallum RC, Zhang S, Preacher KJ, Rucker DD. On the practice of dichotomization of quantitative variables. Psychol Methods. 2002;7(1):19–40. doi: 10.1037/1082-989X.7.1.19
  86. Altman DG. Categorizing continuous variables. In: Armitage P, Colton T, editors. Encyclopedia of biostatistics. 2nd ed. Chichester: John Wiley; 2005. Р. 708–711. doi: 10.1002/0470011815.b2a10012
  87. Cohen J. The cost of dichotomization. Applied Psychological Measurement. 1983;7(3):249–253. doi: 10.1177/014662168300700301
  88. Zhao LP, Kolonel LN. Efficiency loss from categorizing quantitative exposures into qualitative exposures in case-control studies. Am J Epidemiol. 1992;136(4):464–474. doi: 10.1093/oxfordjournals.aje.a116520
  89. Cochran WG. The effectiveness of adjustment by subclassification in removing bias in observational studies. Biometrics. 1968;24(2):295–313. doi: 10.2307/2528036
  90. Clayton D, Hills M. Models for dose-response (Chapter 25). Statistical Models in Epidemiology. Oxford: Oxford University Press; 1993. Р. 249–260.
  91. Cox DR. Note on grouping. J Am Stat Assoc. 1957;52(280):543–547. doi: 10.1080/01621459.1957.10501411
  92. Il’yasova D, Hertz-Picciotto I, Peters U, et al. Choice of exposure scores for categorical regression in meta-analysis: a case study of a common problem. Cancer Causes Control. 2005;16(4):383–388. doi: 10.1007/s10552-004-5025-x
  93. Berglund A, Alfredsson L, Cassidy JD, et al. The association between exposure to a rear-end collision and future neck or shoulder pain: a cohort study. J Clin Epidemiol. 2000;53(11):1089–1094. doi: 10.1016/S0895-4356(00)00225-0
  94. Slama R, Werwatz A. Controlling for continuous confounding factors: non- and semiparametric approaches. Rev Epidemiol Sante Publique. 2005;53(2):2S65–80. doi: 10.1016/S0398-7620(05)84769-8
  95. Greenland S. Introduction to regression modelling (Chapter 21). In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed: Lippincott Raven; 1998. Р. 401–432.
  96. Thompson WD. Statistical analysis of case-control studies. Epidemiol Rev. 1994;16(1):33–50. doi: 10.1093/oxfordjournals.epirev.a036143
  97. Schlesselman JJ. Logistic regression for case-control studies (Chapter 8.2). Case-control studies Design, conduct, analysis. New York, Oxford: Oxford University Press; 1982. Р. 235–241.
  98. Clayton D, Hills M. Choice and interpretation of models (Chapter 27). Statistical Models in Epidemiology. Oxford: Oxford University Press; 1993. Р. 271–281.
  99. Altman DG, Gore SM, Gardner MJ, Pocock SJ. Statistical guidelines for contributors to medical journals. Br Med J. 1983;286(6376):1489–1493. doi: 10.1136/bmj.286.6376.1489
  100. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals. N Engl J Med. 1997;336:309–315. [Electronic version updated February 2006]. doi: 10.1056/NEJM199701233360422
  101. Mullner M, Matthews H, Altman DG. Reporting on statistical methods to adjust for confounding: a cross-sectional survey. Ann Intern Med. 2002;136(2):122–126. doi: 10.7326/0003-4819-136-2-200201150-00009
  102. Olsen J, Basso O. Re: Residual confounding. Am J Epidemiol. 1999;149(3):290. doi: 10.1093/oxfordjournals.aje.a009805
  103. Hallan S, de Mutsert R, Carlsen S, et al. Obesity, smoking, and physical inactivity as risk factors for CKD: are men more vulnerable? Am J Kidney Dis. 2006;47(3):396–405. doi: 10.1053/j.ajkd.2005.11.027
  104. Gotzsche PC. Believability of relative risks and odds ratios in abstracts: cross sectional study. BMJ. 2006;333(7561):231–234. doi: 10.1136/bmj.38895.410451.79
  105. Szklo MF, Nieto J. Communicating Results of Epidemiologic Studies (Chapter 9). Epidemiology, Beyond the Basics. Sudbury (MA): Jones and Bartlett; 2000. Р. 408–430.
  106. Chandola T, Brunner E, Marmot M. Chronic stress at work and the metabolic syndrome: prospective study. BMJ. 2006;332(7540):521–525. doi: 10.1136/bmj.38693.435301.80
  107. Vach W, Blettner M. Biased estimation of the odds ratio in case-control studies due to the use of ad hoc methods of correcting for missing values for confounding variables. Am J Epidemiol. 1991;134(8):895–907. doi: 10.1093/oxfordjournals.aje.a116164
  108. Little RJ, Rubin DB. A taxonomy of missing-data methods (Chapter 1.4.). Statistical Analysis with Missing Data. New York: Wiley; 2002. Р. 19–23. doi: 10.1002/9781119013563
  109. Ware JH. Interpreting incomplete data in studies of diet and weight loss. N Engl J Med. 2003;348(21):2136–2137. doi: 10.1056/NEJMe030054
  110. Rubin DB. Inference and missing data. Biometrika. 1976;63(3):581–592. doi: 10.1093/biomet/63.3.581
  111. Schafer JL. Analysis of Incomplete Multivariate Data. London: Chapman & Hall; 1997. doi: 10.1201/9781439821862
  112. Lipsitz SR, Ibrahim JG, Chen MH, Peterson H. Non-ignorable missing covariates in generalized linear models. Stat Med. 1999;18(17-18):2435–2448. doi: 10.1002/(SICI)1097-0258(19990915/30)18:17/18<2435::AID-SIM267>3.0.CO;2-B
  113. Rotnitzky A, Robins J. Analysis of semi-parametric regression models with non-ignorable non-response. Stat Med. 1997;16(1-3):81–102. doi: 10.1002/(SICI)1097-0258(19970115)16:1<81::AID-SIM473>3.0.CO;2-0
  114. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley; 1987. doi: 10.1002/9780470316696
  115. Barnard J, Meng XL. Applications of multiple imputation in medical studies: from AIDS to NHANES. Stat Methods Med Res. 1999;8(1):17–36. doi: 10.1177/096228029900800103
  116. Braitstein P, Brinkhof MW, Dabis F, et al. Mortality of HIV-1-infected patients in the first year of antiretroviral therapy: comparison between low-income and high-income countries. Lancet. 2006;367(9513):817–824. doi: 10.1016/S0140-6736(06)68337-2
  117. Purandare N, Burns A, Daly KJ, et al. Cerebral emboli as a potential cause of Alzheimer’s disease and vascular dementia: case-control study. BMJ. 2006;332(7550):1119–1124. doi: 10.1136/bmj.38814.696493.AE
  118. Steyn K, Gaziano TA, Bradshaw D, et al. Hypertension in South African adults: results from the Demographic and Health Survey, 1998. J Hypertens. 2001;19(10):1717–1725. doi: 10.1097/00004872-200110000-00004
  119. Lohr SL. Design Effects (Chapter 7.5). Sampling: Design and Analysis. Pacific Grove (CA): Duxbury Press; 1999.
  120. Dunn NR, Arscott A, Thorogood M. The relationship between use of oral contraceptives and myocardial infarction in young women with fatal outcome, compared to those who survive: results from the MICA case-control study. Contraception. 2001;63(2):65–69. doi: 10.1016/S0010-7824(01)00172-X
  121. Rothman KJ, Greenland S. Basic Methods for Sensitivity Analysis and External Adjustment. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Lippincott Raven; 1998. Р. 343–357.
  122. Custer B, Longstreth WT, Phillips LE, et al. Hormonal exposures and the risk of intracranial meningioma in women: a population-based case-control study. BMC Cancer. 2006;6:152. doi: 10.1186/1471-2407-6-152
  123. Wakefield MA, Chaloupka FJ, Kaufman NJ, et al. Effect of restrictions on smoking at home, at school, and in public places on teenage smoking: cross sectional study. BMJ. 2000;321(7257):333–337. doi: 10.1136/bmj.321.7257.333
  124. Greenland S. The impact of prior distributions for uncontrolled confounding and response bias: a case study of the relation of wire codes and magnetic fields to childhood leukemia. J Am Stat Assoc. 2003;98(461):47–54. doi: 10.1198/01621450338861905
  125. Lash TL, Fink AK. Semi-automated sensitivity analysis to assess systematic errors in observational data. Epidemiology. 2003;14(4):451–458. doi: 10.1097/
  126. Phillips CV. Quantifying and reporting uncertainty from systematic errors. Epidemiology. 2003;14(4):459–466. doi: 10.1097/
  127. Cornfield J, Haenszel W, Hammond EC, et al. Smoking and lung cancer: recent evidence and a discussion of some questions. J Natl Cancer Inst. 1959;22(1):173–203.
  128. Langholz B. Factors that explain the power line configuration wiring code-childhood leukemia association: what would they look like? Bioelectromagnetics. 2001;Suppl 5:S19–31. doi: 10.1002/1521-186x(2001)22:5+<::aid-bem1021>;2-9.
  129. Eisner MD, Smith AK, Blanc PD. Bartenders’ respiratory health after establishment of smoke-free bars and taverns. JAMA. 1998;280(22):1909–1914. doi: 10.1001/jama.280.22.1909
  130. Dunne MP, Martin NG, Bailey JM, et al. Participation bias in a sexuality survey: psychological and behavioural characteristics of responders and non-responders. Int J Epidemiol. 1997;26(4):844–854. doi: 10.1093/ije/26.4.844
  131. Schuz J, Kaatsch P, Kaletsch U, et al. Association of childhood cancer with factors related to pregnancy and birth. Int J Epidemiol. 1999;28(4):631–639. doi: 10.1093/ije/28.4.631
  132. Cnattingius S, Zack M, Ekbom A, et al. Prenatal and neonatal risk factors for childhood myeloid leukemia. Cancer Epidemiol Biomarkers Prev. 1995;4(5):441–445.
  133. Schuz J. Non-response bias as a likely cause of the association between young maternal age at the time of delivery and the risk of cancer in the offspring. Paediatr Perinat Epidemiol. 2003;17(1):106–112. doi: 10.1046/j.1365-3016.2003.00460.x
  134. Slattery ML, Edwards SL, Caan BJ, et al. Response rates among control subjects in case-control studies. Ann Epidemiol. 1995;5(3):245–249. doi: 10.1016/1047-2797(94)00113-8
  135. Schulz KF, Grimes DA. Case-control studies: research in reverse. Lancet. 2002;359(9304):431–434. doi: 10.1016/S0140-6736(02)07605-5
  136. Olson SH, Voigt LF, Begg CB, Weiss NS. Reporting participation in case-control studies. Epidemiology. 2002;13(2):123–126. doi: 10.1097/00001648-200203000-00004
  137. Morton LM, Cahill J, Hartge P. Reporting participation in epidemiologic studies: a survey of practice. Am J Epidemiol. 2006;163(3):197–203. doi: 10.1093/aje/kwj036
  138. Olson SH. Reported participation in case-control studies: changes over time. Am J Epidemiol. 2001;154(6):574–581. doi: 10.1093/aje/154.6.574
  139. Sandler DP. On revealing what we’d rather hide: the problem of describing study participation. Epidemiology. 2002;13(2):117. doi: 10.1097/00001648-200203000-00001
  140. Hepworth SJ, Schoemaker MJ, Muir KR, et al. Mobile phone use and risk of glioma in adults: case-control study. BMJ. 2006;332(7546):883–887. doi: 10.1136/bmj.38720.687975.55
  141. Hay AD, Wilson A, Fahey T, Peters TJ. The duration of acute cough in pre-school children presenting to primary care: a prospective cohort study. Fam Pract. 2003;20(6):696–705. doi: 10.1093/fampra/cmg613
  142. Egger M, Juni P, Bartlett C. Value of flow diagrams in reports of randomized controlled trials. JAMA. 2001;285(15):1996–1999. doi: 10.1001/jama.285.15.1996
  143. Osella AR, Misciagna G, Guerra VM, et al. Hepatitis C virus (HCV) infection and liver-related mortality: a population-based cohort study in southern Italy. The Association for the Study of Liver Disease in Puglia. Int J Epidemiol. 2000;29(5):922–927. doi: 10.1093/ije/29.5.922
  144. Dales LG, Ury HK. An improper use of statistical significance testing in studying covariables. Int J Epidemiol. 1978;7(4):373–375. doi: 10.1093/ije/7.4.373
  145. Maldonado G, Greenland S. Simulation study of confounder-selection strategies. Am J Epidemiol. 1993;138(11):923–936. doi: 10.1093/oxfordjournals.aje.a116813
  146. Tanis BC, van den Bosch MA, Kemmeren JM, et al. Oral contraceptives and the risk of myocardial infarction. N Engl J Med. 2001;345(25):1787–1793. doi: 10.1056/NEJMoa003216
  147. Rothman KJ, Greenland S. Precision and Validity in Epidemiologic Studies. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Lippincott Raven; 1998. P. 120–125.
  148. Clark TG, Altman DG, De Stavola BL. Quantification of the completeness of follow-up. Lancet. 2002;359(9314):1309–1310. doi: 10.1016/S0140-6736(02)08272-7
  149. Qiu C, Fratiglioni L, Karp A, et al. Occupational exposure to electromagnetic fields and risk of Alzheimer’s disease. Epidemiology. 2004;15(6):687–694. doi: 10.1097/01.ede.0000142147.49297.9d
  150. Kengeya-Kayondo JF, Kamali A, Nunn AJ, et al. Incidence of HIV-1 infection in adults and socio-demographic characteristics of seroconverters in a rural population in Uganda: 1990-1994. Int J Epidemiol. 1996;25(5):1077–1082. doi: 10.1093/ije/25.5.1077
  151. Mastrangelo G, Fedeli U, Fadda E, et al. Increased risk of hepatocellular carcinoma and liver cirrhosis in vinyl chloride workers: synergistic effect of occupational exposure with alcohol intake. Environ Health Perspect. 2004;112(11):1188–1192. doi: 10.1289/ehp.6972
  152. Salo PM, Arbes SJ, Sever M, et al. Exposure to Alternaria alternata in US homes is associated with asthma symptoms. J Allergy Clin Immunol. 2006;118(4):892–898. doi: 10.1016/j.jaci.2006.07.037
  153. Pocock SJ, Clayton TC, Altman DG. Survival plots of time-to-event outcomes in clinical trials: good practice and pitfalls. Lancet. 2002;359(9318):1686–1689. doi: 10.1016/S0140-6736(02)08594-X
  154. Sasieni P. A note on the presentation of matched case-control data. Stat Med. 1992;11(5):617–620. doi: 10.1002/sim.4780110506
  155. Lee GM, Neutra RR, Hristova L, et al. A nested case-control study of residential and personal magnetic field measures and miscarriages. Epidemiology. 2002;13(1):21–31. doi: 10.1097/00001648-200201000-00005
  156. Tiihonen J, Walhbeck K, Lonnqvist J, et al. Effectiveness of antipsychotic treatments in a nationwide cohort of patients in community care after first hospitalisation due to schizophrenia and schizoaffective disorder: observational follow-up study. BMJ. 2006;333(7561):224. doi: 10.1136/bmj.38881.382755.2F
  157. Christenfeld NJ, Sloan RP, Carroll D, Greenland S. Risk factors, confounding, and the illusion of statistical control. Psychosom Med. 2004;66(6):868–875. doi: 10.1097/01.psy.0000140008.70959.41
  158. Smith GD, Phillips A. Declaring independence: why we should be cautious. J Epidemiol Community Health. 1990;44(4):257–258. doi: 10.1136/jech.44.4.257
  159. Greenland S, Neutra R. Control of confounding in the assessment of medical technology. Int J Epidemiol. 1980;9(4):361–367. doi: 10.1093/ije/9.4.361
  160. Robins JM. Data, design, and background knowledge in etiologic inference. Epidemiology. 2001;12(3):313–320. doi: 10.1097/00001648-200105000-00011
  161. Sagiv SK, Tolbert PE, Altshul LM, Korrick SA. Organochlorine exposures during pregnancy and infant size at birth. Epidemiology. 2007;18(1):120–129. doi: 10.1097/01.ede.0000249769.15001.7c
  162. World Health Organization. Body Mass Index (BMI). 2007. Available from:
  163. Beral V. Breast cancer and hormone-replacement therapy in the Million Women Study. Lancet. 2003;362(9382):419–427. doi: 10.1016/s0140-6736(03)14065-2
  164. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965;58(5):295–300. doi: 10.1177/003591576505800503
  165. Vineis P. Causality in epidemiology. Soz Praventivmed. 2003;48(2):80–87. doi: 10.1007/s00038-003-1029-7
  166. Empana JP, Ducimetiere P, Arveiler D, et al. Are the Framingham and PROCAM coronary heart disease risk functions applicable to different European populations? The PRIME Study. Eur Heart J. 2003;24(21):1903–1911. doi: 10.1016/j.ehj.2003.09.002
  167. Tunstall-Pedoe H, Kuulasmaa K, Mahonen M, et al. Contribution of trends in survival and coronary-event rates to changes in coronary heart disease mortality: 10-year results from 37 WHO MONICA project populations. Monitoring trends and determinants in cardiovascular disease. Lancet. 1999;353(9164):1547–1557. doi: 10.1016/S0140-6736(99)04021-0
  168. Cambien F, Chretien JM, Ducimetiere P, et al. Is the relationship between blood pressure and cardiovascular risk dependent on body mass index? Am J Epidemiol. 1985;122(3):434–442. doi: 10.1093/oxfordjournals.aje.a114124
  169. Hosmer DW, Taber S, Lemeshow S. The importance of assessing the fit of logistic regression models: a case study. Am J Public Health. 1991;81(12):1630–1635. doi: 10.2105/AJPH.81.12.1630
  170. Tibshirani R. A plain man’s guide to the proportional hazards model. Clin Invest Med. 1982;5(1):63–68.
  171. Rockhill B, Newman B, Weinberg C. Use and misuse of population attributable fractions. Am J Public Health. 1998;88(1):15–19. doi: 10.2105/AJPH.88.1.15
  172. Uter W, Pfahlberg A. The application of methods to quantify attributable risk in medical practice. Stat Methods Med Res. 2001;10(3):231–237. doi: 10.1177/096228020101000305
  173. Schwartz LM, Woloshin S, Dvorin EL, Welch HG. Ratio measures in leading medical journals: structured review of accessibility of underlying absolute risks. BMJ. 2006;333(7581):1248. doi: 10.1136/bmj.38985.564317.7C
  174. Nakayama T, Zaman MM, Tanaka H. Reporting of attributable and relative risks, 1966-97. Lancet. 1998;351(9110):1179. doi: 10.1016/S0140-6736(05)79123-6
  175. Cornfield J. A method of estimating comparative rates from clinical data; applications to cancer of the lung, breast, and cervix. J Natl Cancer Inst. 1951;11(6):1269–1275.
  176. Pearce N. What does the odds ratio estimate in a case-control study? Int J Epidemiol. 1993;22(6):1189–1192. doi: 10.1093/ije/22.6.1189
  177. Rothman KJ, Greenland S. Measures of Disease Frequency. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Lippincott Raven; 1998. P. 44–45.
  178. Doll R, Hill AB. The mortality of doctors in relation to their smoking habits: a preliminary report. BMJ. 1954;1(4877):1451–1455. doi: 10.1136/bmj.1.4877.1451
  179. Ezzati M, Lopez AD. Estimates of global mortality attributable to smoking in 2000. Lancet. 2003;362(9387):847–852. doi: 10.1016/S0140-6736(03)14338-3
  180. Greenland S. Applications of Stratified Analysis Methods. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Lippincott Raven; 1998. P. 295–297.
  181. Rose G. Sick individuals and sick populations. Int J Epidemiol. 2001;30(3):427–432. doi: 10.1093/ije/30.3.427
  182. Vandenbroucke JP, Koster T, Briet E, et al. Increased risk of venous thrombosis in oral-contraceptive users who are carriers of factor V Leiden mutation. Lancet. 1994;344(8935):1453–1457. doi: 10.1016/S0140-6736(94)90286-0
  183. Botto LD, Khoury MJ. Commentary: facing the challenge of gene-environment interaction: the two-by-four table and beyond. Am J Epidemiol. 2001;153(10):1016–1020. doi: 10.1093/aje/153.10.1016
  184. Wei L, MacDonald TM, Walker BR. Taking glucocorticoids by prescription is associated with subsequent cardiovascular disease. Ann Intern Med. 2004;141(10):764–770. doi: 10.7326/0003-4819-141-10-200411160-00007
  185. Martinelli I, Taioli E, Battaglioli T, et al. Risk of venous thromboembolism after air travel: interaction with thrombophilia and oral contraceptives. Arch Intern Med. 2003;163(22):2771–2774. doi: 10.1001/archinte.163.22.2771
  186. Kyzas PA, Loizou KT, Ioannidis JP. Selective reporting biases in cancer prognostic factor studies. J Natl Cancer Inst. 2005;97(14):1043–1055. doi: 10.1093/jnci/dji184
  187. Rothman KJ, Greenland S, Walker AM. Concepts of interaction. Am J Epidemiol. 1980;112(4):467–470. doi: 10.1093/oxfordjournals.aje.a113015
  188. Saracci R. Interaction and synergism. Am J Epidemiol. 1980;112(4):465–466. doi: 10.1093/oxfordjournals.aje.a113014
  189. Rothman KJ. Epidemiology. An introduction. Oxford: Oxford University Press; 2002. P. 168–180.
  190. Rothman KJ. Interactions Between Causes. In: Modern epidemiology. Boston: Little Brown; 1986. P. 311–326.
  191. Hess DR. How to write an effective discussion. Respir Care. 2004;49(10):1238–1241.
  192. Horton R. The hidden research paper. JAMA. 2002;287(21):2775–2778. doi: 10.1001/jama.287.21.2775
  193. Horton R. The rhetoric of research. BMJ. 1995;310(6985):985–987. doi: 10.1136/bmj.310.6985.985
  194. Docherty M, Smith R. The case for structuring the discussion of scientific papers. BMJ. 1999;318(7193):1224–1225. doi: 10.1136/bmj.318.7193.1224
  195. Perneger TV, Hudelson PM. Writing a research article: advice to beginners. Int J Qual Health Care. 2004;16(3):191–192. doi: 10.1093/intqhc/mzh053
  196. Annals of Internal Medicine. Information for authors. Available from: Accessed 10 September 2007.
  197. Maldonado G, Poole C. More research is needed. Ann Epidemiol. 1999;9(1):17–18. doi: 10.1016/s1047-2797(98)00050-7
  198. Phillips CV. The economics of ‘more research is needed’. Int J Epidemiol. 2001;30(4):771–776. doi: 10.1093/ije/30.4.771
  199. Winkleby MA, Kraemer HC, Ahn DK, Varady AN. Ethnic and socioeconomic differences in cardiovascular disease risk factors: findings for women from the Third National Health and Nutrition Examination Survey, 1988-1994. JAMA. 1998;280(4):356–362. doi: 10.1001/jama.280.4.356
  200. Galuska DA, Will JC, Serdula MK, Ford ES. Are health care professionals advising obese patients to lose weight? JAMA. 1999;282(16):1576–1578. doi: 10.1001/jama.282.16.1576
  201. Spearman C. The proof and measurement of association between two things. Am J Psychol. 1904;15(1):72–101. doi: 10.2307/1412159
  202. Fuller WA, Hidiroglou MA. Regression estimates after correcting for attenuation. J Am Stat Assoc. 1978;73(361):99–104. doi: 10.1080/01621459.1978.10480011
  203. MacMahon S, Peto R, Cutler J, et al. Blood pressure, stroke, and coronary heart disease. Part 1, Prolonged differences in blood pressure: prospective observational studies corrected for the regression dilution bias. Lancet. 1990;335(8692):765–774. doi: 10.1016/0140-6736(90)90878-9
  204. Phillips AN, Smith GD. How independent are “independent” effects? Relative risk estimation when correlated exposures are measured imprecisely. J Clin Epidemiol. 1991;44(11):1223–1231. doi: 10.1016/0895-4356(91)90155-3
  205. Phillips AN, Smith GD. Bias in relative odds estimation owing to imprecise measurement of correlated exposures. Stat Med. 1992;11(7):953–961. doi: 10.1002/sim.4780110712
  206. Greenland S. The effect of misclassification in the presence of covariates. Am J Epidemiol. 1980;112(4):564–569. doi: 10.1093/oxfordjournals.aje.a113025
  207. Poole C, Peters U, Il’yasova D, Arab L. Commentary: This study failed? Int J Epidemiol. 2003;32(4):534–535. doi: 10.1093/ije/dyg197
  208. Kaufman JS, Cooper RS, McGee DL. Socioeconomic status and health in blacks and whites: the problem of residual confounding and the resiliency of race. Epidemiology. 1997;8(6):621–628. doi: 10.1097/00001648-199710000-00002
  209. Greenland S. Randomization, statistics, and causal inference. Epidemiology. 1990;1(6):421–429. doi: 10.1097/00001648-199011000-00003
  210. Taubes G. Epidemiology faces its limits. Science. 1995;269(5221):164–169. doi: 10.1126/science.7618077
  211. Temple R. Meta-analysis and epidemiologic studies in drug development and postmarketing surveillance. JAMA. 1999;281(9):841–844. doi: 10.1001/jama.281.9.841
  212. Greenberg RS, Shuster JL. Epidemiology of cancer in children. Epidemiol Rev. 1985;7:22–48. doi: 10.1093/oxfordjournals.epirev.a036284
  213. Kushi LH, Mink PJ, Folsom AR, et al. Prospective study of diet and ovarian cancer. Am J Epidemiol. 1999;149(1):21–31. doi: 10.1093/oxfordjournals.aje.a009723
  214. Kemmeren JM, Algra A, Meijers JC, et al. Effect of second- and third-generation oral contraceptives on the protein C system in the absence or presence of the factor V Leiden mutation: a randomized trial. Blood. 2004;103(3):927–933. doi: 10.1182/blood-2003-04-1285
  215. Egger M, May M, Chene G, et al. Prognosis of HIV-1-infected patients starting highly active antiretroviral therapy: a collaborative analysis of prospective studies. Lancet. 2002;360(9327):119–129. doi: 10.1016/S0140-6736(02)09411-4
  216. Campbell DT. Factors relevant to the validity of experiments in social settings. Psychol Bull. 1957;54(4):297–312. doi: 10.1037/h0040950
  217. Justice AC, Covinsky KE, Berlin JA. Assessing the generalizability of prognostic information. Ann Intern Med. 1999;130(6):515–524. doi: 10.7326/0003-4819-130-6-199903160-00016
  218. Krimsky S, Rothenberg LS. Conflict of interest policies in science and medical journals: editorial practices and author disclosures. Sci Eng Ethics. 2001;7(2):205–218. doi: 10.1007/s11948-001-0041-7
  219. Bekelman JE, Li Y, Gross C. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003;289(4):454–465. doi: 10.1001/jama.289.4.454
  220. Davidson RA. Source of funding and outcome of clinical trials. J Gen Intern Med. 1986;1(3):155–158. doi: 10.1007/BF02602327
  221. Stelfox HT, Chua G, O’Rourke K, Detsky AS. Conflict of interest in the debate over calcium-channel antagonists. N Engl J Med. 1998;338(2):101–106. doi: 10.1056/NEJM199801083380206
  222. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003;326(7400):1167–1170. doi: 10.1136/bmj.326.7400.1167
  223. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events? JAMA. 2003;290(7):921–928. doi: 10.1001/jama.290.7.921
  224. Barnes DE, Bero LA. Why review articles on the health effects of passive smoking reach different conclusions. JAMA. 1998;279(19):1566–1570. doi: 10.1001/jama.279.19.1566
  225. Barnes DE, Bero LA. Industry-funded research and conflict of interest: an analysis of research sponsored by the tobacco industry through the Center for Indoor Air Research. J Health Polit Policy Law. 1996;21(3):515–542. doi: 10.1215/03616878-21-3-515
  226. Glantz SA, Barnes DE, Bero L, et al. Looking through a keyhole at the tobacco industry. The Brown and Williamson documents. JAMA. 1995;274(3):219–224. doi: 10.1001/jama.1995.03530030039032
  227. Huss A, Egger M, Hug K, et al. Source of funding and results of studies of health effects of mobile phone use: systematic review of experimental studies. Environ Health Perspect. 2007;115(1):1–4. doi: 10.1289/ehp.9149
  228. Safer DJ. Design and reporting modifications in industry-sponsored comparative psychopharmacology trials. J Nerv Ment Dis. 2002;190(9):583–592. doi: 10.1097/00005053-200209000-00002
  229. Aspinall RL, Goodman NW. Denial of effective treatment and poor quality of clinical information in placebo controlled trials of ondansetron for postoperative nausea and vomiting: a review of published trials. BMJ. 1995;311(7009):844–846. doi: 10.1136/bmj.311.7009.844
  230. Chan AW, Hrobjartsson A, Haahr MT, et al. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–2465. doi: 10.1001/jama.291.20.2457
  231. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B. Evidence b(i)ased medicine-selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ. 2003;326(7400):1171–1173. doi: 10.1136/bmj.326.7400.1171
  232. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2005;(2):MR000005. doi: 10.1002/14651858.MR000005.pub2
  233. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357(9263):1191–1194. doi: 10.1016/S0140-6736(00)04337-3
  234. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283(15):2008–2012. doi: 10.1001/jama.283.15.2008
  235. Altman DG, Schulz KF, Moher D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134(8):663–694. doi: 10.7326/0003-4819-134-8-200104170-00012
  236. Moher D. CONSORT: an evolving tool to help improve the quality of reports of randomized controlled trials. Consolidated Standards of Reporting Trials. JAMA. 1998;279(18):1489–1491. doi: 10.1001/jama.279.18.1489
  237. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276(8):637–639. doi: 10.1001/jama.276.8.637

Supplementary Files
1. Diagram (221 KB)

Copyright (c) 2007 Vandenbroucke J.P., von Elm E., Altman D.G., Gotzsche P.C., Mulrow C.D., Pocock S.J., Poole C., Schlesselman J.J., Egger M.

This work is licensed under a Creative Commons Attribution 4.0 International License.

The mass medium is registered by the Federal Service for Supervision of Communications, Information Technology, and Mass Media (Roskomnadzor).
Registration number and date of the registration decision: series ПИ № ФС 77 - 79539, dated 9 November 2020.
