Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam

Abstract

Background

Concerns about research misbehavior in academic science have sparked interest in the factors that may explain research misbehavior. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?

Methods

From May 2017 until July 2017, we conducted a survey among academic researchers in Amsterdam. The survey included three measurement instruments whose results we previously reported individually; here we integrate those findings.

Results

One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.

Conclusions

Our results suggest that the perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.

Background

There has long been concern about research misbehavior in academic science [1,2,3,4]. Research misbehavior includes a broad array of behaviors: some may invalidate research results, some damage trust in science, and others may deny credit to those to whom credit is due, in ways that may hamper their career progression, possibly leading to their exit from the scientific workforce and the loss of highly talented individuals [5]. These behaviors range in “severity” or “seriousness” from research misconduct (fabrication, falsification and plagiarism, henceforth RM) to “lesser” forms of misbehavior usually termed questionable or detrimental research practices (henceforth: QRP) [6]. These behaviors also differ in their level of intentionality: they may be merely negligent or reckless, or they may be conscious deviations from the standards of good-quality research with a purpose other than finding true answers.

Explanations for why researchers misbehave can generally be grouped into three clusters of potentially explanatory factors: those at the level of the individual, factors arising from the organization in which researchers go about their work, and forces that may act upon individual researchers from beyond their immediate workplace - such as the commonly referenced “publish or perish” pressure [7,8,9,10].

Examples of individual-related factors are gender or academic rank. Examples of climate factors are perceptions of research-related norms and fairness of supervision, and the quality of resources available to support researchers in their work. Examples of publication system factors are the perceived publication stress among academic researchers and their attitudes towards the current publication system governing academic research.

Previous research reviewing RM reports found that male researchers were overrepresented, and junior researchers also seem more likely to report QRPs or RM. In addition, researchers are supposedly more likely to misbehave in a climate where they feel treated unjustly and perceive heavy competition. Lastly, RM and QRPs have been associated with high perceived publication pressure [11,12,13].

Objectives

In this paper, we integrate our previously published findings [14,15,16], which used measurement instruments that are at best proxies for these complex phenomena, to see what share of the variance in QRPs and RM these three groups of factors account for. We work from the assumption that in a poor-quality research climate with high publication pressure, researchers should be more likely to observe research misbehavior. Our research question is: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?

Methods

Study design

We used a cross-sectional survey design.

Participants

Participants were academic researchers employed at two universities in Amsterdam (Vrije Universiteit Amsterdam and University of Amsterdam) and two academic medical centers (i.e., Amsterdam University Medical Centers, location AMC and VUmc). In order to be eligible for participation, respondents had to be employed in research for at least 1 day per week. We included PhD candidates, as they are formally employed by Dutch institutions. A full description of our recruitment procedure can be found elsewhere [15].

Variables

The survey questionnaire consisted of three instruments (Survey of Organizational Research Climate, henceforth: SOURCE [17], the revised Publication Pressure Questionnaire, henceforth: PPQr [18], 20 randomly drawn research misbehaviors from a list of 60 QRPs and RM [5]) and three demographic items (gender, academic rank and disciplinary field). For an overview of the different subscales and items that we used as proxies for the individual, climate and publication factors, see Table 1.

Table 1 Overview of instruments used in survey questionnaire

Setting

Between May 2017 and July 2017, we conducted a survey study among academic researchers in Amsterdam. We used Qualtrics (Qualtrics, Provo, UT, USA) to design the survey. The survey started after participants indicated informed consent. It included three measurement instruments whose results we previously reported individually; here we integrate those findings.

Study size

We invited the complete population of interest; no specific sample size calculations were made prior to data collection.

Bias

The greatest source of potential bias in our design is response bias, which is why we sent multiple reminders and advertised our study in university newsletters and on the intranet. Still, the choice to participate in a study related to research integrity and misbehavior is presumably not random.

Quantitative variables

Explanatory variables are the demographic characteristics of the participant (we refer to these as individual factors, as they regard characteristics of the individual), SOURCE subscales, and PPQr subscales.

Outcome variables are (1) perceived frequency (never observed/observed; see Note 1) and (2) the product score of perceived frequency and impact on validity, which we henceforth denote as perceived impact (see Note 2). We use perceived impact because focusing on perceived frequency alone may result in a model that explains only the more trivial trespasses. We took the square root of this perceived impact score for normalization purposes.
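
In symbols, the analyzed impact score for each misbehavior item $i$, as described above, is

$$\text{perceived impact}_i = \sqrt{\text{perceived frequency}_i \times \text{impact on validity}_i}.$$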

To give the reader an indication of the overall frequency of perceived misbehavior, we calculated the percentage of responses in each of the three frequency categories. To get a sense of the reliability of our outcome measures, we calculated generalizability coefficients, based on the theory of generalizability developed by Cronbach and colleagues [19]. The generalizability coefficient is a function of variance components and can also be estimated with incomplete data.
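
For a simple persons-by-items random-effects design (a simplification of our incomplete design, in which each respondent answered a random subset of items), the generalizability coefficient for $n_i$ items takes the familiar form

$$E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi,e}/n_i},$$

where $\sigma^2_p$ is the respondent variance component and $\sigma^2_{pi,e}$ the residual component; with incomplete data, the same variance components are estimated from the observed responses.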

Statistical methods

Each participant responded to 20 items, randomly selected out of a set of 60 items. As a result, participants responded to different sets of items. We applied multilevel logistic regression analysis to the perceived frequency item scores and multilevel linear regression analysis to the perceived impact item scores, with items nested within respondents, and the characteristics of the participants as the higher-level variables. We thus treated the 60 questions about QRPs and RM as “level 1” observations, nested within respondents (“level 2”). The nesting of observations within level two means that those observations are not independent (the ICCs are 0.17 for frequency scores and 0.28 for impact scores), which is why multilevel analysis is appropriate: it is designed to take this non-independence into account and to adjust the standard errors to reflect the true “effective” sample size. This application of multilevel models is not yet as common as other applications, such as student data, where students are the level 1 observations nested within classrooms (level 2), or within-person repeated measures data, where each time point provides level 1 measures nested within persons (level 2). But just as multilevel analyses appropriately account for the non-independence of observations in such applications, we used multilevel analyses to account for the non-independence across measures of RM and QRPs (see Note 3) (level 1) within individual respondents (level 2) (for an in-depth explanation, see [20, 21]).
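
To make this model structure concrete, the following is a minimal sketch (not our analysis code; the data file and column names such as respondent_id and sqrt_impact are hypothetical stand-ins) of a random-intercept model for the impact scores in Python with statsmodels, including the ICC derived from its variance components:

```python
# Minimal sketch of a two-level random-intercept model: item responses
# (level 1) nested within respondents (level 2). Data file and column
# names are hypothetical stand-ins for long-format survey data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses_long.csv")  # one row per item response

# Multilevel linear regression for the square-root perceived-impact
# scores, with a random intercept per respondent to capture the
# non-independence of items within respondents.
fit = smf.mixedlm(
    "sqrt_impact ~ gender + rank + field",  # respondent-level explanatory variables
    data=df,
    groups=df["respondent_id"],
).fit()
print(fit.summary())

# Intraclass correlation: the share of total variance that sits at the
# respondent level (reported as 0.28 for impact scores in the text).
var_between = float(fit.cov_re.iloc[0, 0])  # respondent intercept variance
var_within = fit.scale                      # residual (item-level) variance
print("ICC:", var_between / (var_between + var_within))
```

For the dichotomized frequency scores, the analogue is a mixed-effects logistic regression (e.g., lme4::glmer in R), which we do not sketch here.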

Perceived frequency item scores were dichotomized, as the third response option was hardly used (0 = not observed, 1 = observed). The concept of explained variance is not defined in multilevel logistic regression. However, since in our application items are first-level units and respondents are second-level units, the estimated intercept variance represents between-subject variance [20]. We can therefore compare the intercept variance in the empty model with the intercept variance in models that include explanatory variables, and use the proportional reduction in intercept variance (one minus the ratio of the model’s intercept variance to the empty model’s intercept variance) as an index of explained variance.
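
In symbols, with $\sigma^2_{u,\text{empty}}$ the respondent-level intercept variance of the empty model and $\sigma^2_{u,\text{model}}$ that of a model including explanatory variables, this index is

$$R^2_{\text{between}} = 1 - \frac{\sigma^2_{u,\text{model}}}{\sigma^2_{u,\text{empty}}}.$$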

Our approach comprised four steps. First, we analyzed the influence of each explanatory variable on the two outcome variables individually. Second, we used a stepwise procedure to assess which cluster of explanatory variables explained the most variance (cluster 1, individual factors: gender, academic rank and disciplinary field; cluster 2, climate factors: the 7 SOURCE subscales; cluster 3, publication factors: the PPQr subscales). Third, we employed a hierarchical model in which we consecutively added the explanatory variables in their clusters, starting with cluster 1, to assess how much cumulative variance was explained (see the sketch below). Finally, we inspected the relationships between the different explanatory variables with Pearson’s correlation and regression analyses.
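
As an illustration of the hierarchical step, the sketch below (same hypothetical data and column names as above; the SOURCE and PPQr subscale column names are invented for the example) adds the clusters cumulatively to the linear impact model and reports explained variance as the proportional reduction in respondent-level intercept variance relative to the empty model:

```python
# Hedged sketch of the hierarchical modelling step for the linear impact
# outcome. Explained variance is read off as the proportional reduction
# in respondent-level intercept variance relative to the empty model.
import pandas as pd
import statsmodels.formula.api as smf

def intercept_variance(df, predictors):
    """Fit a random-intercept model; return the respondent-level intercept variance."""
    rhs = " + ".join(predictors) if predictors else "1"  # "1" = empty model
    fit = smf.mixedlm(f"sqrt_impact ~ {rhs}", data=df,
                      groups=df["respondent_id"]).fit()
    return float(fit.cov_re.iloc[0, 0])

df = pd.read_csv("responses_long.csv")  # hypothetical long-format data

clusters = [
    ("individual", ["gender", "rank", "field"]),
    ("climate", [f"source_{i}" for i in range(1, 8)]),    # the 7 SOURCE subscales
    ("publication", [f"ppqr_{i}" for i in range(1, 4)]),  # PPQr subscales (names invented)
]

var_empty = intercept_variance(df, [])
included = []
for name, predictors in clusters:
    included += predictors  # add this cluster on top of the previous ones
    var_model = intercept_variance(df, included)
    print(f"+ {name}: cumulative explained variance = {1 - var_model / var_empty:.0%}")
```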

Results

Response rate

We obtained 7548 e-mail addresses of active academic researchers in Amsterdam, of which 83 were no longer in use. Some researchers explicitly declined participation (n = 109), and 1298 researchers completed at least one subscale of the SOURCE, which was sufficient to use their responses in our models, yielding a response rate of 17%.
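
The rate follows directly from these counts:

$$\frac{1298}{7548 - 83} = \frac{1298}{7465} \approx 0.174 \approx 17\%.$$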

Descriptive data

Demographic information can be found in Table 2.

Table 2 Participants’ demographic informationa

Outcome data

Percentages of each frequency for all 60 QRPs and RM (as well as for the SOURCE and PPQr) can be found in the Additional file 1: appendix.

Main results

We assessed the association of each explanatory variable with both the perceived frequency measure and the perceived impact measure. An overview of these results can be found in Table 3. Note that these are all separate univariate multilevel regression analyses with a single variable in the model (not corrected for any confounders). Individual factors explain between 0 and 5% of the variance in perceived frequency of research misbehaviors, climate factors explain between 5 and 18% and publication factors explain between 1 and 15% of the variance in frequency of research misbehaviors.

Table 3 Explanatory variables for perceived research misbehaviors, univariate analyses

When using perceived impact as the outcome variable, individual factors explain 1% of the variance, climate factors between 1 and 13%, and publication factors between 2 and 12% of the variance in perceived impact of research misbehaviors.

We added the explanatory variables in their respective clusters and then followed up with a hierarchical model in which we consecutively added the clusters (see Table 4). Individual factors as a cluster explain 7% of the variance in perceived frequency of research misbehaviors, climate factors as a cluster explain 22%, and publication factors as a cluster explain 16%. For perceived impact of research misbehaviors, the cluster of individual factors explains 1% of the variance, the cluster of climate factors explains 14% and the cluster of publication factors explains 12%.

Table 4 Explained variance of clusters of factors using hierarchical mixed modelling

In the hierarchical model, the clusters of individual factors and climate factors combined explain 32% of the variance in perceived frequency of research misbehaviors. Adding all three clusters to the model, hence including publication factors, explains 34% of the variance in perceived frequency of research misbehaviors. When using perceived impact as the outcome variable, the individual and climate clusters combined explain 16% of the variance, and adding all three clusters to the model explains 18% of the variance in perceived impact of research misbehaviors.

Other analyses

Note that publication factors explain little additional variance when individual and climate factors are already in the model, which prompts questions about the relationships between the different explanatory variables. To assess why adding publication factors last to the model increased the cumulative explained variance only marginally, we calculated Pearson correlation coefficients between the individual factors and the publication factors, and between the climate factors and the publication factors (see Additional file 1: appendix). We already looked into the effects of individual factors on publication factors in another paper [14]. To see the additional effects of climate factors on publication factors, we ran further regression analyses (see Additional file 1: appendix). Overall, we found that the more positive a participant’s perception of the research climate, the less negative that participant’s perception of the publication system.

Discussion

Key results

We investigated the extent to which variance in research misbehavior can be explained by individual, climate and publication factors. Overall, individual, climate and publication factors combined explain 34% of the variance in perceived frequency of research misbehavior and 18% of the variance in perceived impact. The cluster accounting for the greatest percentage of explained variance is the research climate: 22 and 14% for perceived frequency and perceived impact of research misbehavior, respectively. Publication factors form the second greatest cluster, accounting for 16% of the variance in perceived frequency and 12% of the variance in perceived impact. Individual factors form the smallest cluster, explaining 7% of the variance in perceived frequency and 1% in perceived impact.

Interpretation

We found academic rank to play the greatest role within the cluster of individual factors. Previous research has offered explanations for the association between academic rank and research misbehavior, including the idea that junior researchers are less familiar with responsible research practices [8] or that, when under pressure to perform, they may compromise their ethics [16]. However, our results indicate that senior researchers observed significantly more research misbehavior. Hence, perhaps junior researchers are more honest in their self-reports, whereas, when asked about the behavior of others, senior researchers are just as critical of their colleagues.

We found no effect of gender, and indeed the use of individual variables (such as gender) to explain research misbehavior has received criticism. For example, Kaatz, Vogelman and Carnes [22] pointed out that the overrepresentation of males among those found guilty of misconduct, together with evidence from other areas that men are more likely to commit fraud, is insufficient to conclude that male researchers are more likely to engage in research misconduct. In addition, Dalton and Ortegren [23] found that the consistent finding that women respond more ethically than men was greatly reduced when controlling for social desirability. The authors note that this does not indicate that males and females respond equally ethically, but simply that the differences in ethical behavior may be smaller than initially assumed.

We found the cluster of climate factors to have the greatest share in explaining research misbehavior, which is similar to Crain and colleagues [24], who found in their sample of US scientists that especially the Integrity Inhibitors subscale (which measures the degree to which integrity-inhibiting factors are present, such as pressure to obtain funding and suspicion among researchers) was strongly related to engaging in research misbehavior. A high score on the Departmental Norms subscale (the extent to which researchers value norms regarding scholarly integrity in research, such as honesty) was negatively associated with engaging in research misbehavior. Reviewing the individual subscale effects in our study, these two subscale scores were most strongly associated with perceived frequency as well as with perceived impact. Bearing in mind that we focused on perceptions of engagement in research misbehavior by others in the direct environment, and not on research misbehavior by the respondents themselves, we still think it is reasonable to believe that we observed a similar pattern. In addition, using a large bibliographic sample based on retracted papers, Fanelli, Costas and Larivière [25] reported that academic culture affects research integrity, again emphasizing the importance of this cluster.

Broadly speaking, the relationship we observed aligns with the existing literature on unethical behavior in organizations [26]. A meta-analysis by Martin and Cullen [27] found that unethical behavior (under which they grouped lying, cheating and falsifying reports) was associated with what is called an instrumental climate, in which individual behavior is primarily motivated by self-interest [28]. Relatedly, Gorsira et al. [29] found that the more ethical employees perceived their work climate to be, the less likely they were to engage in corrupt behavior, and vice versa.

Maggio and colleagues [12] used the previous version of the Publication Pressure Questionnaire and found publication pressure to account for 10% of the variance in self-reported research misbehavior among researchers in health professions education. This is similar to our findings, although these authors focused on self-reported misbehaviors, whereas we focused on perceptions of engagement in research misbehavior by others in the direct environment. In addition, we used a slightly different set of research misbehaviors and we also investigated researchers from other disciplinary fields. Nevertheless, both sets of results indicate that in an environment where perceived publication pressure is high, researchers are more likely to report research misbehavior than in an environment where publication pressure is low.

Holtfreter and colleagues [29] used a list of criminological factors that have been associated with research misconduct and asked academic researchers in the US to indicate which factor they thought contributed most to research misconduct. Regardless of their disciplinary field, researchers reported the stress and strain to perform (which includes the pressure to publish) as the main cause of research misconduct. Holtfreter and colleagues distinguished only two clusters of factors: ‘bad apples’ (similar to our individual factors) and ‘bad barrels’, comprising both climate and publication factors. That said, their stress and strain items are rather similar to our publication pressure items, supporting the idea of publication pressure as a factor contributing to research misconduct.

Note that we do not claim that individual, climate and publication factors are independent. We found, for instance, that publication pressure accounts for 16% of the variance in perceived frequency when added as the first variable. However, when climate factors are already in the model, the cumulative increase in explained variance when adding publication pressure is only 2%. This seems intuitive, since publication factors could influence climate factors, for example when increased publication pressure leads to authorship disputes that in turn damage the research climate in particular research groups [13]. A related line of reasoning is that publication pressure may arise from how one’s department and its expectations for “productivity” are set up, or at a higher organizational level, to the extent that publication expectations are set or influenced by decision makers above the department level.

Generalizability

Our study’s sample included researchers from different academic disciplines and academic ranks. The findings thus bear relevance to a broad group of academic researchers. In addition, our reliance on previously validated and repeatedly employed instruments, such as the SOURCE [17] and the PPQr [18], supports the validity of our findings.

Limitations

We should acknowledge a number of weaknesses in our study. Firstly, a response rate of 17% is arguably low. That said, it is not lower than that of other recent surveys that are considered valid [30]. In addition, a low response rate in itself does not indicate response bias. In another study, we tried to estimate response bias in our sample using a wave analysis and found early responders to be similar to late responders [14]. Also, in terms of demographic characteristics such as academic rank, our responders seemed similar to the population [15], reducing the concern that our sample is biased, at least with respect to those dimensions. In conclusion, with our response rate we cannot exclude the possibility of response bias, but we have some reason to believe it does not influence our results substantially.

Secondly, our outcome variables concern perceived misbehavior by others, whereas many studies into misbehavior focus on self-reports by the respondent, including some of the literature we cited. Interestingly, whereas self-reported rates of misbehavior have decreased over time, perceptions of the frequency of misbehavior by others have remained more stable [31]. Nevertheless, measurements of perceived misbehavior may be artificially inflated in situations where several responders witnessed the same incident. Moreover, people are generally more earnest when reporting others’ misbehavior (and more lenient when it concerns their own), also known as the Muhammad Ali effect [31], which could further inflate reported perceptions. Hence, our data may overestimate the actual frequency of research misbehavior. Relatedly, as we measured all outcome and explanatory variables through subjective self-report, the correlations between these variables may be inflated by common-method bias [32]. It seems reasonable to say that perceptions carry credible evidence about the ‘true’ prevalence of research misbehavior and its explanatory variables, although surveying perceptions is by no means conclusive.

Thirdly, the assumption that is implicit in our work is that when participants reported on what research misbehaviors they observed in their field of study, they were largely reporting on what they observed in their own research setting. Although we do not think this is an unreasonable assumption, we nevertheless want to acknowledge that we could not test it explicitly in our survey.

Fourthly, it is a characteristic of multiple regression that the more explanatory variables within a cluster, the larger the explained variance. This should be kept in mind, as our clusters have different numbers of explanatory variables within them.

Finally, our results are cross-sectional in nature so we have to refrain from any causal conclusions.

Conclusions

Our results suggest that researchers’ perceptions of the research climate, as well as their perceptions of publication pressure, play a significant role in explaining research misbehavior. Especially the norms that govern research practices in a department, and the extent to which integrity-inhibiting factors such as suspicion are present, explained a large proportion of the variance. Furthermore, it was not so much researchers’ publication stress as their attitudes towards the current publication system that played a substantial role. Note that these proportions of explained variance decreased when using perceived impact as the outcome, but the pattern of results remained the same. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.

Availability of data and materials

Data cannot be shared publicly because of participants’ privacy. The pseudonymized personal data are available for research purposes only under a data transfer agreement, to ensure that persons authorized to process the personal data have committed themselves to confidentiality and that the receiving party agrees to use the pseudonymized personal data for scientific research purposes only. To ease the possible exchange of data with fellow researchers, a draft data sharing agreement can be found here: https://osf.io/8jye2. Survey materials can be found here: https://surfdrive.surf.nl/files/index.php/s/rhEAWrUap69jQ6a.

Notes

  1. This remains an imperfect measure of how often misbehavior actually occurs, as it relies on whether respondents report observing the behavior in question.

  2. We reasoned that the impact of a misbehavior increases as the behavior is perceived more frequently and is assigned a greater impact on validity. We used the term impact on the aggregate level in another paper, but believe the term perceived impact is more succinct.

  3. See Additional file 1: appendix where we juxtapose ordinary regression analysis and multilevel regression analysis.

References

  1. De Vries R, Anderson MS, Martinson BC. Normal misbehavior: scientists talk about the ethics of research. J Empir Res Hum Res Ethics. 2006;1(1):43–50. https://doi.org/10.1525/jer.2006.1.1.43.

  2. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–8. https://doi.org/10.1038/435737a.

  3. Stroebe W, Postmes T, Spears R. Scientific misconduct and the myth of self-correction in science. Perspect Psychol Sci. 2012;7(6):670–88. https://doi.org/10.1177/1745691612460687.

  4. Steneck N. Fostering integrity in research: definition, current knowledge, and future directions. Sci Eng Ethics. 2006;12(1):53–74. https://doi.org/10.1007/s11948-006-0006-y.

  5. Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four world conferences on research integrity. Res Integr Peer Rev. 2016;1(17):1–8.

  6. National Academies of Sciences, Engineering, and Medicine (NASEM). Fostering integrity in research. Washington, D.C.: The National Academies Press; 2017.

  7. Sovacool BK. Exploring scientific misconduct: isolated individuals, impure institutions, or an inevitable idiom of modern science? J Bioeth Inq. 2008;5(4):271–82. https://doi.org/10.1007/s11673-008-9113-6.

  8. Bogner A, Menz W. Science crime. The Korean cloning scandal and the role of ethics. Sci Public Policy. 2006;33(8):601–12.

  9. George SL. Research misconduct and data fraud in clinical trials: prevalence and causal factors. Int J Clin Oncol. 2016;21(1):15–21. https://doi.org/10.1007/s10147-015-0887-3.

  10. Neill US. Publish or perish, but at what cost? J Clin Invest. 2008;118(7):1–2.

  11. Tijdink JK, Verbeke R, Smulders YM. Publication pressure and scientific misconduct in medical scientists. J Empir Res Hum Res Ethics. 2014;9(5):64–71. https://doi.org/10.1177/1556264614552421.

  12. Maggio L, Dong T, Driessen E, Artino A. Factors associated with scientific misconduct and questionable research practices in health professions education. Perspect Med Educ. 2019;8(2):74–82. https://doi.org/10.1007/s40037-019-0501-x.

  13. Tijdink JK, Schipper K, Bouter LM, Pont PM, De Jonge J, Smulders YM. How do scientists perceive the current publication culture? A qualitative focus group interview study among Dutch biomedical researchers. BMJ Open. 2016;6(2):e008681.

  14. Haven TL, Bouter LM, Smulders YM, Tijdink JK. Perceived publication pressure in Amsterdam: survey of all disciplinary fields and academic ranks. PLoS One. 2019;14(6):e0217931.

  15. Haven TL, Tijdink JK, Martinson BC, Bouter LM. Perceptions of research integrity climate differ between academic ranks and disciplinary fields: results from a survey among academic researchers in Amsterdam. PLoS One. 2019;14(1):e0210599. https://doi.org/10.1371/journal.pone.0210599.

  16. Haven TL, Tijdink JK, Pasman HR, Widdershoven G, ter Riet G, Bouter LM. Researchers’ perceptions of research misbehaviours: a mixed methods study among academic researchers in Amsterdam. Res Integr Peer Rev. 2019;4(25):1–12.

  17. Martinson BC, Thrush CR, Crain AL. Development and validation of the Survey of Organizational Research Climate (SORC). Sci Eng Ethics. 2013;19(3):813–34. https://doi.org/10.1007/s11948-012-9410-7.

  18. Haven TL, Tijdink JK, De Goede MEE, Oort F. Personally perceived publication pressure: revising the Publication Pressure Questionnaire (PPQ) by using work stress models. Res Integr Peer Rev. 2019;4(7):1–9.

  19. Cronbach LJ, Rajaratnam N, Gleser GC. Theory of generalizability: a liberation of reliability theory. Br J Stat Psychol. 1963;16(2):137–63. https://doi.org/10.1111/j.2044-8317.1963.tb00206.x.

  20. Snijders TAB, Bosker RJ. Multilevel analysis: an introduction to basic and advanced multilevel modeling. London: Sage; 1999.

  21. Twisk J. Multivariate mixed model analysis. In: Applied mixed model analysis: a practical guide. Cambridge: Cambridge University Press; 2019. p. 151–65. https://doi.org/10.1017/9781108635660.011.

  22. Kaatz A, Vogelman PN, Carnes M. Are men more likely than women to commit scientific misconduct? Maybe, maybe not. mBio. 2013;4(2):3–4.

  23. Dalton D, Ortegren M. Gender differences in ethics research: the importance of controlling for the social desirability response bias. J Bus Ethics. 2011;103(1):73–93. https://doi.org/10.1007/s10551-011-0843-8.

  24. Crain AL, Martinson BC, Thrush CR. Relationships between the Survey of Organizational Research Climate (SORC) and self-reported research practices. Sci Eng Ethics. 2013;19(3):835–50. https://doi.org/10.1007/s11948-012-9409-0.

  25. Fanelli D, Costas R, Larivière V. Misconduct policies, academic culture and career stage, not gender or pressures to publish, affect scientific integrity. PLoS One. 2015;10(6):1–18.

  26. Treviño LK, den Nieuwenboer NA, Kish-Gephart JJ. (Un)ethical behavior in organizations. Annu Rev Psychol. 2014;65(1):635–60. https://doi.org/10.1146/annurev-psych-113011-143745.

  27. Martin KD, Cullen JB. Continuities and extensions of ethical climate theory: a meta-analytic review. J Bus Ethics. 2006;69(2):175–94. https://doi.org/10.1007/s10551-006-9084-7.

  28. Simha A, Cullen JB. Ethical climates and their effects on organizational outcomes: implications from the past and prophecies for the future. Acad Manag Perspect. 2012;26(4):20–34. https://doi.org/10.5465/amp.2011.0156.

  29. Gorsira M, Steg L, Denkers A, Huisman W. Corruption in organizations: ethical climate and individual motives. Adm Sci. 2018;8(1):4. https://doi.org/10.3390/admsci8010004.

  30. Groves R. Nonresponse rates and nonresponse bias in household surveys: what do we know about the linkage between nonresponse rates and nonresponse bias? Public Opin Q. 2006;70(5):646–75. https://doi.org/10.1093/poq/nfl033.

  31. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4(5):e5738. https://doi.org/10.1371/journal.pone.0005738.

  32. Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol. 2003;88(5):879–903. https://doi.org/10.1037/0021-9010.88.5.879.

Acknowledgements

We would like to acknowledge the members of the steering committee (René van Woudenberg, Gerben ter Riet, Yvo Smulders, Guy Widdershoven and Hanneke de Haes) for their continuous critical input.

Funding

LB, JT and TH were partly supported by the Templeton World Charity Foundation (https://www.templetonworldcharity.org/) under the grant #TWCF0163/AB106. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. TH was also partly supported by contributions from the Vrije Universiteit, the University of Amsterdam and the Amsterdam University Medical Centers. These institutions had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

TH wrote the draft manuscript. FO and TH conducted the analyses. FO, BM, JT and LB designed the study. FO, BM, JT and LB contributed significantly to multiple versions of the manuscript. All authors read and approved the final version.

Corresponding author

Correspondence to Tamarinde Haven.

Ethics declarations

Ethics approval and consent to participate

The Scientific and Ethical Review Board of the Faculty of Behavioural and Movement Sciences (Vrije Universiteit Amsterdam) approved our study (approval number VCWE-2017-017R1). Participants consented to take part in our survey.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
