FACTORS AFFECTING BIBLIOMETRIC INDICATORS OF SCIENTIFIC QUALITY; pp. 199–214. doi: 10.3176/tr.2013.3.01
The High Quality Science Index (HQSI) was constructed on the basis of the release of the Essential Science Indicators (Thomson Reuters) covering the period from January 1, 2002 to August 31, 2012. The HQSI was computed for each country or territory as the sum of the normalised scores of two indicators: the mean impact (citations per paper) and the percentage of papers reaching the top-1% citation ranking. As expected, countries or territories that produce a larger Gross National Income per capita and allocate a higher percentage of their economic wealth to research and development (R&D) were more likely to achieve prominence in scientific publications. The size of a country and of its population were not important factors in excelling at scientific research. Since economic and socio-demographic factors only partly predicted the quality of science in a given country or territory, considerable room remains for historical and science-policy factors that could affect the quality of science in a given country. Several countries that occupied almost identical starting positions twenty years ago have developed along completely different trajectories, depending on the policies and decisions made by their policy makers. Possibilities for improving the reliability of measures of scientific quality are discussed.
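The HQSI computation described above can be sketched in a few lines. Note that the abstract does not specify which normalisation is applied to the two component indicators; the sketch below assumes z-score (standard-score) normalisation purely for illustration, and the country data shown are hypothetical.

```python
def z_scores(values):
    """Standardise a list of values to mean 0 and standard deviation 1
    (one possible normalisation; the article may use another)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def hqsi(mean_impact, top1_share):
    """HQSI per country: sum of the normalised mean impact
    (citations per paper) and the normalised percentage of papers
    in the top-1% citation ranking."""
    z_impact = z_scores(mean_impact)
    z_top1 = z_scores(top1_share)
    return [a + b for a, b in zip(z_impact, z_top1)]

# Hypothetical data for three countries or territories
impact = [12.4, 9.8, 15.1]   # citations per paper
top1 = [1.3, 0.9, 1.8]       # % of papers in the top-1%

print(hqsi(impact, top1))
```

A country scores high on this index only if it does well on both components relative to the other countries in the comparison, which is why the index rewards quality rather than sheer publication volume.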