On Metrics and Research Assessment

Oh research where art thou?

Next Monday, 30th June 2014, at noon is the deadline to reply to the ‘Call for Evidence’ for HEFCE’s Independent review of the role of metrics in research assessment. I share here some quick notes on my position as an individual researcher. Needless to say, my personal position is based on my experience as a researcher and on my reading of research in the area. Please excuse the lack of hyperlinked references in the body of the text; I have included a bibliography at the end of the post where I link to each reference.

A combination of traditional citation metrics and ‘alternative’ article-level metrics can be used across different academic disciplines to assess the reach (in terms of academic and public ‘impact’) of excellent research undertaken in the higher education sector (Liu and Adie 2013).

If increased international public access and impact are to be key factors in 21st century research assessment, the adoption of metrics, and particularly article-level metrics, is essential. Scholarly outputs published with Digital Object Identifiers (DOIs) can be easily tracked and measured, and as scholars in different fields adopt online methods of dissemination more widely, the data we can obtain from tracking those outputs should not be ignored by assessment panels.
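As a rough illustration of what such tracking looks like in practice, the short Python sketch below queries the mention counts recorded for a single DOI. This is only a sketch: it assumes the public Altmetric API v1 endpoint and the third-party requests library, and the DOI used is a placeholder rather than a real example from this post.

    # Minimal sketch: query article-level metrics for one DOI.
    # Assumes the public Altmetric API v1 endpoint and the third-party
    # `requests` library; the DOI below is a placeholder.
    import requests

    def fetch_altmetrics(doi):
        """Return the Altmetric summary record for a DOI, or None if untracked."""
        response = requests.get("https://api.altmetric.com/v1/doi/" + doi)
        if response.status_code == 404:
            return None  # no online mentions recorded for this output
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        record = fetch_altmetrics("10.1371/journal.pone.0000000")  # placeholder DOI
        if record is None:
            print("No mentions tracked for this DOI yet.")
        else:
            # The summary record includes per-source counts, e.g. tweets and blog posts.
            print("Altmetric score:", record.get("score"))
            print("Tweets:", record.get("cited_by_tweeters_count", 0))
            print("Blog posts:", record.get("cited_by_feeds_count", 0))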

Article-level metrics on scholarly outputs are already being tested by institutional repositories and publishers across the board. The data is open, facilitates further research and provides some evidence for qualitative impact storytelling. On their own, however, metrics of any kind (understood as mostly quantitative data) cannot and should not be used to assess either impact or ‘excellence’.

However, citation metrics and online mention metrics (“altmetrics”) can provide valuable data that can and should be subject to quantitative and qualitative analyses and review. Qualitative assessment in the form of “impact stories” can be informed by quantitative data provided by alternative metrics providers and methodologies (Neylon 2010; Priego 2012).

The San Francisco Declaration on Research Assessment (DORA) made the general recommendation of not using journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual researcher’s contributions, or in hiring, promotion, or funding decisions.

DORA provides recommendations for institutions, funding agencies, publishers and organisations that supply metrics. An analysis of available data on individual DORA signers as of 24 June 2013 showed that 10,963 individuals and 484 institutions had signed, and that only 6% of individual signers were in the humanities while 94% were in scientific disciplines; this in itself reflects an important disparity across fields that should be taken into account.

The ‘gaming’ of any kind of metric is, by definition, possible. It is critical that previous efforts to develop good practice in the measurement and assessment of research are adopted, or at least taken into account. DORA makes it explicit that the gaming of metrics will not be tolerated, and altmetrics service providers are openly working towards good practice and transparent methodologies (Adie 2013).

Social media adoption by scholars for scholarly dissemination is an important aspect of academic communications. It is wide, varies across disciplines and is still fairly recent (Priem 2011; Adie and Roe 2013; Sud and Thelwall 2013). Correlations between online mentions, downloads and traditional citations are therefore expected to be low, since the citation window is still too small. Previous research, however, demonstrates positive yet still low correlations between downloads and citation counts.
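For concreteness, the correlation analyses reported in this literature are typically rank correlations (for example Spearman’s rho) computed over matched per-article counts. The sketch below, which assumes SciPy is installed and uses invented counts purely for illustration, shows the basic shape of such an analysis:

    # Minimal sketch of a rank-correlation analysis of the kind reported
    # in the altmetrics literature. Assumes SciPy; the counts below are
    # invented for illustration only, not real data.
    from scipy.stats import spearmanr

    # Matched per-article counts (same articles, same order).
    downloads = [120, 45, 300, 12, 87, 230, 5, 60]
    citations = [4, 1, 9, 0, 3, 6, 0, 2]

    rho, p_value = spearmanr(downloads, citations)
    print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")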

Recent and ongoing research shows that Open Access publications can lead to greater numbers of downloads and social media mentions. Though research looking for possible correlations between Open Access and citation counts exists, the findings vary, the citation window is still too small, and more time and research will be needed to determine whether positive correlations exist as a general rule (Alperin 2014; Bernal 2013; Costas 2014; Kousha & Thelwall 2007).

It is likely that positive correlations will be found in some cases but not all, as the scholarly, institutional, technological, economic and social variables are multiple and platform- and culture-dependent. Likewise, current business models from so-called hybrid publishers that enable Open Access via Article Processing Charges are likely to privilege the dissemination of outputs by those with existing funding schemes to cover these charges. Similarly, many academic journals, particularly in the arts and humanities, have yet to establish a significant, sustainable online presence, and many still lack DOIs to enable their automated and transparent tracking. However, institutional repositories are already embracing altmetrics as a means of both tracking and encouraging engagement with their resources (Konkiel 2013), and the ability to track and measure engagement with grey literature can be a good source of evidence of the role these outputs play in the research and publication life-cycle.

Moreover, some fields privilege the publication of multi-author outputs whilst others prefer single-author publications. This clearly puts both those without Open Access funding and single-author papers at a quantitative disadvantage. As stated above, it is crucial that research assessment employing metrics is based on qualitative analyses and takes differences in disciplinary cultures into account. Research assessment employing metrics should be conducted on a case-by-case basis, even if this is difficult, time-consuming and/or costly.

It is also critical that any assessment of article-level metrics understands how these metrics are possible in the first place and has an informed awareness of the disparities in social media adoption for scholarly purposes across different disciplinary boundaries in the Higher Education sector. Direct experience and ongoing research show that at the moment some STEM fields are over-represented online (on blogs and social media and in Open Access journals and monographs), while social sciences, arts and humanities outputs are lagging behind.

Traditional citation metrics unfairly benefit those publishing in standard channels, particularly those in the Global North, leaving scholars in developing countries at a disadvantage (Alperin 2013; Priego 2014). Alternative metrics measure the wider reach of scholarly outputs more accurately, and might better serve most scholars by fostering a research culture that supports national and international research impact objectives.

Even though there is still a bias towards North American and European publications, altmetrics can provide advantages to scholars interested in promoting their research online internationally, by addressing public needs and enabling easier discovery of and access to research outputs long underrepresented in the traditional literature and databases (Alperin 2014). Moreover, the geolocation data obtainable through altmetrics services offers evidence of both the disparities in and the international reach of the production and consumption of research online.

In the international context, some recent and ongoing research suggests that Open Access publications tracked via article-level metrics have a wider international reach and impact; there is a growing body of evidence that this is the case in both Latin America and some regions of Africa (see the OpenUCT/Scholarly Communication in Africa Programme (SCAP) reports, as well as Alperin 2014; Priego 2013, 2014; Neylon, Willmers & King 2014).

The success of automated methods to obtain quantitative indicators of the reach, reception and use of scholarly outputs depends on our ability as scholarly communities to realise and develop the potential of the Web for scholarly communications. Developers, adopters and advocates of article-level metrics do not claim that quantitative indicators should be taken at face value. Online publishing offers the unique opportunity to track, measure and evaluate what happens to scholarly outputs once they have been published on the Web. These metrics allow us to make comparisons between dissemination and access models across countries and disciplinary boundaries. More importantly, the data they provide are not static, passive quantitative data but ‘interactive’: the services that collect them work as platforms for social interaction between researchers (potentially worldwide, where conditions allow it), enabling the easier and faster discovery, collection, exchange and discussion of outputs.

Not embracing article-level metrics or alternative metrics (altmetrics) in research assessment, when the 21st century is well underway, would be a missed opportunity to push towards a scholarly culture of wider public engagement and adoption of innovative online platforms for scholarly dissemination.

Adopting purely quantitative methods, and even more so suggesting that any metric, however large, can equate to “excellence”, would be misguided and potentially catastrophic, particularly for those outside STEM areas or without the backing of elite institutions. Only the careful, professional qualitative assessment of live, transparent publishing data will be able to provide evidence of the public and scholarly, local and international reach and reception of excellent research.

References

Adie, E., & Roe, W. (2013). Enriching scholarly content with article-level discussion and metrics. Learned Publishing, 26(1), 11–17. doi:10.6084/m9.figshare.105851

Adie, E. (2013). Gaming Altmetrics. Altmetric. September 18 2013. Available from http://www.altmetric.com/blog/gaming-altmetrics/

Alperin, J. P. (2013). Ask not what altmetrics can do for you, but what altmetrics can do for developing countries. Bulletin of the American Society for Information Science and Technology, 39(4), 18–21. doi:10.1002/bult.2013.1720390407

Alperin, J. P. (2014). Exploring altmetrics in an emerging country context. figshare. http://dx.doi.org/10.6084/m9.figshare.1041797

Bernal, I. (2013). Open Access and the Changing Landscape of Research Impact Indicators: New Roles for Repositories. Publications, 1(2), 56–77. Retrieved from http://www.mdpi.com/2304-6775/1/2/56

Costas, R., Zahedi, Z., & Wouters, P. (2014). Do “altmetrics” correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective (p. 30). Leiden. Retrieved from http://www.cwts.nl/pdf/CWTS-WP-2014-001.pdf

Konkiel, S. (2013, November 5). Altmetrics in Institutional Repositories. Retrieved from https://scholarworks.iu.edu/dspace/handle/2022/17122

Kousha, K., & Thelwall, M. (2007). The Web impact of open access social science research. Library & Information Science Research, 29(4), 495–507. Retrieved from http://www.sciencedirect.com/science/article/B6W5R-4PX16VS-1/2/6c778fe766bc07c98ef39dbdd8f2b450

Liu, J., & Adie, E. (2013, May 30). Altmetric: Getting Started with Article-Level Metrics. figshare. http://figshare.com/articles/Altmetric_Getting_Started_with_Article_Level_Metrics/709018

Mohammadi, E., & Thelwall, M. (2014). Mendeley readership altmetrics for the social sciences and humanities: Research evaluation and knowledge flows. Journal of the American Society for Information Science and Technology. Retrieved from http://www.scit.wlv.ac.uk/~cm1993/papers/EhsanMendeleyAltmetrics.pdf

Neylon, C. (2011). Re-use as Impact: How re-assessing what we mean by “impact” can support improving the return on public investment, develop open research practice, and widen engagement. Altmetrics. Retrieved from http://altmetrics.org/workshop2011/neylon-v0/

Neylon, C. (2010). Beyond the Impact Factor: Building a community for more diverse measurement of research. Science in the Open. Retrieved November 29, 2010, from http://cameronneylon.net/blog/beyond-the-impact-factor-building-a-community-for-more-diverse-measurement-of-research/

Neylon, C., Willmers, M., & King, T. (2014). Rethinking Impact: Applying Altmetrics to Southern African Research. Working Paper 1, Scholarly Communication in Africa Programme. Retrieved from http://openuct.uct.ac.za/sites/default/files/media/SCAP_Paper_1_Neylon_et_al_Rethinking_Impact.pdf

OpenUCT Initiative Publications and SCAP reports. Available from http://openuct.uct.ac.za/publications

Priego, E. (2012). ‘Altmetrics’: quality of engagement matters as much as retweets. Guardian Higher Education Network, Friday 24 August 2012. Retrieved from http://www.theguardian.com/higher-education-network/blog/2012/aug/24/measuring-research-impact-altmetic

Priego, E. (2013). Fieldwork: Apples and Oranges? Online Mentions of Papers About the Humanities. Altmetric, January 11 2013. Retrieved from http://www.altmetric.com/blog/apples-oranges-online-mentions-papers-about-humanities/

Priego, E. (2013). Alt-metrics, Digital Opportunity and Africa. Impact of Social Sciences, London School of Economics. February 6 2013. Retrieved from http://blogs.lse.ac.uk/impactofsocialsciences/2013/02/06/alt-metrics-digital-opportunity-and-africa/

Priego, E. (2014). The Triple A: Africa, Access, Altmetrics. 22 February 2014. Retrieved from https://epriego.wordpress.com/2014/02/22/the-triple-a-africa-access-altmetrics/

Priem, J., Hall, M., Hill, C., Piwowar, H., & Waagmeester, A. (2011). Uncovering impacts: CitedIn and total-impact, two new tools for gathering altmetrics. iConference 2012, 9–11. Retrieved from http://jasonpriem.org/self-archived/two-altmetrics-tools.pdf

Priem, J., Piwowar, H. A., & Hemminger, B. H. (n.d.). Altmetrics in the wild: An exploratory study of impact metrics based on social media. Metrics 2011: Symposium on Informetric and Scientometric Research. New Orleans, LA, USA. Retrieved from http://jasonpriem.org/self-archived/PLoS-altmetrics-sigmetrics11-abstract.pdf

Sud, P., & Thelwall, M. (2013). Evaluating altmetrics. Scientometrics. Retrieved from http://link.springer.com/10.1007/s11192-013-1117-2