Altmetrics
Altmetrics describe a family of indicators that aim to measure the attention and engagement scholarly work receives beyond traditional citations. Rather than counting how often a paper is cited in academic journals, altmetrics look at how research is discussed, shared, saved, or used across a wide array of online platforms and audiences. Proponents argue that these measures provide a more immediate, broader sense of a work’s reach and potential real-world impact, while critics warn that they can be noisy, manipulated, and unrepresentative if used in isolation. As funders and institutions rely increasingly on metrics to demonstrate accountability, altmetrics have become part of the broader conversation about how to judge the value of scholarly work in a fast-changing information ecosystem.
The emergence of altmetrics coincided with a shift in how researchers, universities, and funders think about impact. The premise is simple: ideas that spread widely on the internet (via social networks, news outlets, policy discussions, or public-interest forums) may be shaping practice and policy even when those ideas have not yet accumulated many traditional citations. In practice, altmetrics gather signals from various sources, including social media mentions, saves and bookmarks in reference managers, coverage in news and blogs, discussions in policy documents, and downloads or views of a work. For readers navigating this topic, it is useful to think of altmetrics as a complementary lens to traditional bibliometrics rather than a replacement for it. See for example discussions around Altmetrics and the broader field of Bibliometrics.
History and development
The term altmetrics originated in the early 2010s as researchers sought to quantify online attention to scholarly work beyond citations in academic journals. A foundational premise was that online engagement could serve as a proxy for real-world influence, not just scholarly recognition. In the years since, several commercial and nonprofit data platforms emerged to collect and present these signals, each with its own methodology and focus. Prominent players in the space include Altmetric and Plum Analytics (whose metrics suite is known as PlumX), which track diverse sources and produce composite scores to summarize attention. Researchers and institutions began experimenting with how these indicators could inform everything from tenure dossiers to grant applications. See also the early discussion in the Altmetrics Manifesto.
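How such a composite score might be assembled can be illustrated with a simple weighted sum of per-source mention counts. The source names and weights in the sketch below are invented for demonstration; actual providers use their own, often proprietary, weighting and de-duplication rules, so this shows the general idea rather than any platform's formula.

```python
# Purely illustrative composite "attention" score: a weighted sum of
# per-source mention counts. The source names and weights below are
# assumptions for demonstration only, not any provider's real formula.

ILLUSTRATIVE_WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "policy_documents": 3.0,
    "twitter": 1.0,
    "reference_manager_saves": 0.5,
}

def composite_attention_score(mentions: dict[str, int]) -> float:
    """Weighted sum of mention counts across tracked sources."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: a paper with two news stories, one blog post, and 40 tweets.
print(composite_attention_score({"news": 2, "blogs": 1, "twitter": 40}))  # 61.0
```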
Data sources have diversified over time. Social media platforms such as Twitter, Facebook, and various professional networks contribute mentions and discussions; reference managers and sharing services like Mendeley and others record saves or bookmarks; news outlets, blogs, and policy documents capture coverage and real-world dialogues about research; and download or view counts provide a direct, if imperfect, signal of interest. The relative weight of each source often varies by discipline, language, and region, which makes normalization a central issue for any serious use of altmetrics. For background on how these signals are collected and interpreted, see discussions of data normalization and research evaluation practices.
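One common way to handle this normalization problem, sketched here under assumed data, is to express a paper's raw attention count as a percentile among papers from the same field and publication year. The sample records and helper function below are hypothetical and intended only to make the idea concrete.

```python
# Minimal sketch of field- and year-normalization: express each paper's raw
# attention count as a percentile among papers from the same field and year.
# The records below are invented sample data, not real altmetric counts.
from collections import defaultdict

papers = [
    {"id": "p1", "field": "oncology",   "year": 2021, "attention": 120},
    {"id": "p2", "field": "oncology",   "year": 2021, "attention": 15},
    {"id": "p3", "field": "oncology",   "year": 2021, "attention": 60},
    {"id": "p4", "field": "philosophy", "year": 2021, "attention": 4},
    {"id": "p5", "field": "philosophy", "year": 2021, "attention": 9},
]

# Group raw counts by (field, year) so each paper is compared to its peers.
groups = defaultdict(list)
for p in papers:
    groups[(p["field"], p["year"])].append(p["attention"])

def percentile_in_group(paper: dict) -> float:
    """Share of same-field, same-year papers with attention <= this paper's."""
    peers = groups[(paper["field"], paper["year"])]
    rank = sum(1 for value in peers if value <= paper["attention"])
    return 100.0 * rank / len(peers)

for p in papers:
    print(p["id"], round(percentile_in_group(p), 1))
# p2's 15 mentions rank lower within oncology (33.3) than p5's 9 mentions
# rank within philosophy (100.0), even though p2's raw count is higher.
```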
How altmetrics are used
Altmetrics are employed in several practical ways. They offer a faster read on how research themes resonate with non-academic audiences, a signal that can be relevant for outreach, media engagement, or stakeholder education. Funders and institutions may look at altmetrics as part of a holistic view of impact, particularly when traditional citations lag behind or when policy relevance is a stated goal. Publishers sometimes showcase altmetric scores alongside articles to illustrate engagement beyond the academy, and researchers may use them to demonstrate civic or industry interest in their work. In some cases, altmetrics have been used to identify timely topics that attract attention, which can help researchers plan dissemination or knowledge transfer activities. See research impact and open access for related concepts.
However, the practical use of altmetrics requires care. Not all attention is positive or credible, and high attention in certain online spaces can reflect controversy or misinterpretation. Because online engagement varies by field, language, and platform, normalization and context become essential. Some evaluators advocate combining altmetrics with more traditional measures and qualitative assessments to avoid over- or under-valuing particular kinds of attention. See discussions around the Leiden Manifesto and DORA for governance principles that apply to metrics use in research evaluation. See also peer review and academic publishing for related processes that influence scholarly reputation.
Controversies and debates
A central debate around altmetrics concerns validity and reliability. Critics argue that these metrics measure popularity, sensationalism, or access to online networks more than scholarly quality or long-term value. The risk of gaming is real: coordinated campaigns, bots, and marketing firms can inflate attention signals in ways that do not reflect intrinsic merit. There is also concern about bias: English-language content and topics with broad public appeal tend to receive more online attention, while work from smaller disciplines, non-English-speaking communities, or early-career researchers may be underrepresented. In practice, this means altmetrics can reflect differences in online visibility and platform access more than differences in scholarly contribution.
Supporters counter that altmetrics capture dimensions of impact that traditional metrics miss, especially outreach, practitioner relevance, and policy engagement. They argue that when used responsibly, altmetrics complement peer review and bibliometric indicators by highlighting real-world resonance and the accessibility of research to non-specialist audiences. Advocates emphasize transparent methodology, field- and year-normalization, and governance measures to mitigate manipulation. In this frame, altmetrics are part of a broader effort to make research assessment more open, multi-dimensional, and timely.
Some traditionalist critiques frame altmetrics as inherently biased by online culture or the prevailing political climate. Proponents respond that any metric system carries biases and that the solution is not to abandon measurement but to design better, more robust systems with guardrails, disclosures, and contextual interpretation. As for criticisms tied to ideological shifts or perceptions of online activism, many observers argue that the real issue is not a fundamental flaw in the concept of measuring attention but the need for responsible use, with clear demonstration of how metrics inform decisions and without letting them substitute for expert judgment. See also Leiden Manifesto and DORA for governance recommendations that address these concerns.
Implications for research evaluation and policy
Altmetrics influence how institutions think about research value and how funding decisions are justified. They can provide a more immediate sense of public engagement and knowledge translation, which is attractive in contexts where policy impact or industry relevance matters. Yet the same immediacy can obscure longer-term quality, reproducibility, and methodological rigor. As a result, many evaluators advocate for a balanced approach: use altmetrics alongside traditional indicators and robust qualitative assessments, and apply field- and time-based normalization to avoid penalizing certain disciplines or career stages.
Policy discussions around altmetrics often intersect with broader debates on research governance. Guidelines like the DORA principles and the Leiden Manifesto call for responsible use of metrics, transparency about data sources, and avoidance of one-size-fits-all rankings. Institutions may adopt practices such as clearly documenting how metrics contribute to an evaluation, calibrating expectations by discipline, and ensuring that metrics do not substitute for expert review. See DORA and Leiden Manifesto for detailed frameworks, and see research evaluation for a broader treatment of how metrics inform decisions about hiring, promotion, and funding.