Research Results Section – Writing Guide and Examples

Research Results

Research results refer to the findings and conclusions derived from a systematic investigation or study conducted to answer a specific question or hypothesis. These results are typically presented in a written report or paper and can include various forms of data such as numerical data, qualitative data, statistics, charts, graphs, and visual aids.

Results Section in Research

The results section of the research paper presents the findings of the study. It is the part of the paper where the researcher reports the data collected during the study and analyzes it to draw conclusions.

In the results section, the researcher should describe the data that was collected, the statistical analysis performed, and the findings of the study. It is important to be objective and not interpret the data in this section. Instead, the researcher should report the data as accurately and objectively as possible.

Structure of Research Results Section

The structure of the research results section can vary depending on the type of research conducted, but in general, it should contain the following components:

  • Introduction: The introduction should provide an overview of the study, its aims, and its research questions. It should also briefly explain the methodology used to conduct the study.
  • Data presentation: This section presents the data collected during the study. It may include tables, graphs, or other visual aids to help readers better understand the data. The data presented should be organized in a logical and coherent way, with headings and subheadings used to help guide the reader.
  • Data analysis: In this section, the data presented in the previous section are analyzed and interpreted. The statistical tests used to analyze the data should be clearly explained, and the results of the tests should be presented in a way that is easy to understand.
  • Discussion of results: This section should provide an interpretation of the results of the study, including a discussion of any unexpected findings. The discussion should also address the study’s research questions and explain how the results contribute to the field of study.
  • Limitations: This section should acknowledge any limitations of the study, such as sample size, data collection methods, or other factors that may have influenced the results.
  • Conclusions: The conclusions should summarize the main findings of the study and provide a final interpretation of the results in light of the research questions.
  • Recommendations: This section may provide recommendations for future research based on the study’s findings. It may also suggest practical applications for the study’s results in real-world settings.

Outline of Research Results Section

The following is an outline of the key components typically included in the Results section:

I. Introduction

  • A brief overview of the research objectives and hypotheses
  • A statement of the research question

II. Descriptive statistics

  • Summary statistics (e.g., mean, standard deviation) for each variable analyzed
  • Frequencies and percentages for categorical variables

III. Inferential statistics

  • Results of statistical analyses, including tests of hypotheses
  • Tables or figures to display statistical results

IV. Effect sizes and confidence intervals

  • Effect sizes (e.g., Cohen’s d, odds ratio) to quantify the strength of the relationship between variables
  • Confidence intervals to estimate the range of plausible values for the effect size
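
The effect-size and confidence-interval items above can be illustrated with a short sketch. The data below are hypothetical, and the normal critical value 1.96 is used for brevity where a t critical value would be slightly more precise at small sample sizes:

```python
import math
import statistics

# Hypothetical scores for two groups (illustration only)
group_a = [3.2, 3.6, 3.8, 3.4, 3.5, 3.7, 3.3, 3.9]
group_b = [2.8, 3.0, 2.7, 3.1, 2.9, 3.2, 2.6, 3.0]

# Descriptive statistics (Section II): mean and standard deviation
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
sd_a, sd_b = statistics.stdev(group_a), statistics.stdev(group_b)

# Cohen's d (Section IV): mean difference scaled by the pooled SD
n_a, n_b = len(group_a), len(group_b)
pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
cohens_d = (mean_a - mean_b) / pooled_sd

# Approximate 95% confidence interval for the mean difference
se_diff = pooled_sd * math.sqrt(1 / n_a + 1 / n_b)
diff = mean_a - mean_b
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)

print(f"d = {cohens_d:.2f}, 95% CI for the difference: [{ci[0]:.2f}, {ci[1]:.2f}]")
```

In a results section you would report these as, for example, "d = 1.2, 95% CI [0.4, 0.9]", alongside the test statistic and p-value.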

V. Subgroup analyses

  • Results of analyses that examined differences between subgroups (e.g., by gender, age, treatment group)

VI. Limitations and assumptions

  • Discussion of any limitations of the study and potential sources of bias
  • Assumptions made in the statistical analyses

VII. Conclusions

  • A summary of the key findings and their implications
  • A statement of whether the hypotheses were supported or not
  • Suggestions for future research

Example of Research Results Section

An example of a Research Results section:

I. Introduction

  • This study sought to examine the relationship between sleep quality and academic performance in college students.
  • Hypothesis: College students who report better sleep quality will have higher GPAs than those who report poor sleep quality.
  • Methodology: Participants completed a survey about their sleep habits and academic performance.

II. Participants

  • Participants were college students (N=200) from a mid-sized public university in the United States.
  • The sample was evenly split by gender (50% female, 50% male) and predominantly white (85%).
  • Participants were recruited through flyers and online advertisements.

III. Results

  • Participants who reported better sleep quality had significantly higher GPAs (M=3.5, SD=0.5) than those who reported poor sleep quality (M=2.9, SD=0.6).
  • See Table 1 for a summary of the results.
  • Participants who reported consistent sleep schedules had higher GPAs than those with irregular sleep schedules.
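
A group comparison like the one reported above is typically supported by a two-sample t-test. As a sketch (the data below are simulated to match the reported group summaries, not the study's actual data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated GPAs matching the reported summaries (M=3.5, SD=0.5 vs.
# M=2.9, SD=0.6); purely illustrative, not the study's data.
good_sleep = rng.normal(3.5, 0.5, size=100)
poor_sleep = rng.normal(2.9, 0.6, size=100)

# Welch's t-test: does not assume equal variances between the groups
t_stat, p_value = stats.ttest_ind(good_sleep, poor_sleep, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```

The results section would then state the means, SDs, test statistic, and p-value together, e.g. "(M=3.5, SD=0.5) vs. (M=2.9, SD=0.6), t(198) = ..., p < .05".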

IV. Discussion

  • The results support the hypothesis that better sleep quality is associated with higher academic performance in college students.
  • These findings have implications for college students, as prioritizing sleep could lead to better academic outcomes.
  • Limitations of the study include self-reported data and the lack of control for other variables that could impact academic performance.

V. Conclusion

  • College students who prioritize sleep may see a positive impact on their academic performance.
  • These findings highlight the importance of sleep in academic success.
  • Future research could explore interventions to improve sleep quality in college students.

Example of Research Results in a Research Paper:

Our study aimed to compare the performance of three different machine learning algorithms (Random Forest, Support Vector Machine, and Neural Network) in predicting customer churn in a telecommunications company. We collected a dataset of 10,000 customer records, with 20 predictor variables and a binary churn outcome variable.

Our analysis revealed that all three algorithms performed well in predicting customer churn, with an overall accuracy of 85%. However, the Random Forest algorithm showed the highest accuracy (88%), followed by the Support Vector Machine (86%) and the Neural Network (84%).

Furthermore, we found that the most important predictor variables for customer churn were monthly charges, contract type, and tenure. Random Forest identified monthly charges as the most important variable, while Support Vector Machine and Neural Network identified contract type as the most important.

Overall, our results suggest that machine learning algorithms can be effective in predicting customer churn in a telecommunications company, and that Random Forest is the most accurate algorithm for this task.
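
A comparison like this can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the study's dataset, so the accuracies it prints will not match the figures reported above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the churn data: 20 predictors, binary outcome
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Support Vector Machine": SVC(random_state=0),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
}

accuracies = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    accuracies[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {accuracies[name]:.2%}")

# Variable importance (as reported for Random Forest above) is exposed
# via the fitted model's feature_importances_ attribute.
importances = models["Random Forest"].feature_importances_
```

Ranking `importances` against the predictor names is how a statement like "monthly charges was the most important variable" would be derived for the Random Forest model.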

Example 3:

Title: The Impact of Social Media on Body Image and Self-Esteem

Abstract: This study aimed to investigate the relationship between social media use, body image, and self-esteem among young adults. A total of 200 participants were recruited from a university and completed self-report measures of social media use, body image satisfaction, and self-esteem.

Results: The results showed that social media use was significantly associated with body image dissatisfaction and lower self-esteem. Specifically, participants who reported spending more time on social media platforms had lower levels of body image satisfaction and self-esteem compared to those who reported less social media use. Moreover, the study found that comparing oneself to others on social media was a significant predictor of body image dissatisfaction and lower self-esteem.

Conclusion: These results suggest that social media use can have negative effects on body image satisfaction and self-esteem among young adults. It is important for individuals to be mindful of their social media use and to recognize the potential negative impact it can have on their mental health. Furthermore, interventions aimed at promoting positive body image and self-esteem should take into account the role of social media in shaping these attitudes and behaviors.

Importance of Research Results

Research results are important for several reasons, including:

  • Advancing knowledge: Research results can contribute to the advancement of knowledge in a particular field, whether it be in science, technology, medicine, social sciences, or humanities.
  • Developing theories: Research results can help to develop or modify existing theories and create new ones.
  • Improving practices: Research results can inform and improve practices in various fields, such as education, healthcare, business, and public policy.
  • Identifying problems and solutions: Research results can identify problems and provide solutions to complex issues in society, including issues related to health, environment, social justice, and economics.
  • Validating claims: Research results can validate or refute claims made by individuals or groups in society, such as politicians, corporations, or activists.
  • Providing evidence: Research results can provide evidence to support decision-making, policy-making, and resource allocation in various fields.

How to Write Results in A Research Paper

Here are some general guidelines on how to write results in a research paper:

  • Organize the results section: Start by organizing the results section in a logical and coherent manner. Divide the section into subsections if necessary, based on the research questions or hypotheses.
  • Present the findings: Present the findings in a clear and concise manner. Use tables, graphs, and figures to illustrate the data and make the presentation more engaging.
  • Describe the data: Describe the data in detail, including the sample size, response rate, and any missing data. Provide relevant descriptive statistics such as means, standard deviations, and ranges.
  • Interpret the findings: Interpret the findings in light of the research questions or hypotheses. Discuss the implications of the findings and the extent to which they support or contradict existing theories or previous research.
  • Discuss the limitations: Discuss the limitations of the study, including any potential sources of bias or confounding factors that may have affected the results.
  • Compare the results: Compare the results with those of previous studies or theoretical predictions. Discuss any similarities, differences, or inconsistencies.
  • Avoid redundancy: Avoid repeating information that has already been presented in the introduction or methods sections. Instead, focus on presenting new and relevant information.
  • Be objective: Be objective in presenting the results, avoiding any personal biases or interpretations.
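
The descriptive reporting in the points above (sample size, missing data, means, standard deviations, ranges) can be generated directly from the data. A minimal sketch using pandas, with a small hypothetical dataset:

```python
import numpy as np
import pandas as pd

# Small hypothetical dataset with some missing responses
df = pd.DataFrame({
    "age": [19, 21, 20, np.nan, 22, 20, 23, 21],
    "gpa": [3.1, 3.4, np.nan, 2.9, 3.8, 3.2, 3.5, 3.0],
})

n_total = len(df)          # sample size to report
missing = df.isna().sum()  # missing values per variable
summary = df.describe()    # count, mean, std, min, quartiles, max

print(f"N = {n_total}")
print(missing)
print(summary.loc[["count", "mean", "std", "min", "max"]])
```

The `count` row of `describe()` reflects non-missing observations, which is why it should be reported alongside the total sample size.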

When to Write Research Results

Here are some situations in which to write research results:

  • After conducting research on the chosen topic and obtaining relevant data, organize the findings in a structured format that accurately represents the information gathered.
  • Once the data has been analyzed and interpreted, and conclusions have been drawn, begin the writing process.
  • Before starting to write, ensure that the research results adhere to the guidelines and requirements of the intended audience, such as a scientific journal or academic conference.
  • Begin by writing an abstract that briefly summarizes the research question, methodology, findings, and conclusions.
  • Follow the abstract with an introduction that provides context for the research, explains its significance, and outlines the research question and objectives.
  • The next section should be a literature review that provides an overview of existing research on the topic and highlights the gaps in knowledge that the current research seeks to address.
  • The methodology section should provide a detailed explanation of the research design, including the sample size, data collection methods, and analytical techniques used.
  • Present the research results in a clear and concise manner, using graphs, tables, and figures to illustrate the findings.
  • Discuss the implications of the research results, including how they contribute to the existing body of knowledge on the topic and what further research is needed.
  • Conclude the paper by summarizing the main findings, reiterating the significance of the research, and offering suggestions for future research.

Purpose of Research Results

The purposes of Research Results are as follows:

  • Informing policy and practice: Research results can provide evidence-based information to inform policy decisions, such as in the fields of healthcare, education, and environmental regulation. They can also inform best practices in fields such as business, engineering, and social work.
  • Addressing societal problems: Research results can be used to help address societal problems, such as reducing poverty, improving public health, and promoting social justice.
  • Generating economic benefits: Research results can lead to the development of new products, services, and technologies that can create economic value and improve quality of life.
  • Supporting academic and professional development: Research results can be used to support academic and professional development by providing opportunities for students, researchers, and practitioners to learn about new findings and methodologies in their field.
  • Enhancing public understanding: Research results can help to educate the public about important issues and promote scientific literacy, leading to more informed decision-making and better public policy.
  • Evaluating interventions: Research results can be used to evaluate the effectiveness of interventions, such as treatments, educational programs, and social policies. This can help to identify areas where improvements are needed and guide future interventions.
  • Contributing to scientific progress: Research results can contribute to the advancement of science by providing new insights and discoveries that can lead to new theories, methods, and techniques.
  • Informing decision-making: Research results can provide decision-makers with the information they need to make informed decisions. This can include decision-making at the individual, organizational, or governmental levels.
  • Fostering collaboration: Research results can facilitate collaboration between researchers and practitioners, leading to new partnerships, interdisciplinary approaches, and innovative solutions to complex problems.

Advantages of Research Results

Some Advantages of Research Results are as follows:

  • Improved decision-making: Research results can help inform decision-making in various fields, including medicine, business, and government. For example, research on the effectiveness of different treatments for a particular disease can help doctors make informed decisions about the best course of treatment for their patients.
  • Innovation: Research results can lead to the development of new technologies, products, and services. For example, research on renewable energy sources can lead to the development of new and more efficient ways to harness renewable energy.
  • Economic benefits: Research results can stimulate economic growth by providing new opportunities for businesses and entrepreneurs. For example, research on new materials or manufacturing techniques can lead to the development of new products and processes that can create new jobs and boost economic activity.
  • Improved quality of life: Research results can contribute to improving the quality of life for individuals and society as a whole. For example, research on the causes of a particular disease can lead to the development of new treatments and cures, improving the health and well-being of millions of people.

About the author

Muhammad Hassan, Researcher, Academic Writer, Web developer

Research Outputs

Scholars circulate and share research in a variety of ways and in numerous genres. Below you'll find a few common examples. Keep in mind there are many other ways to circulate knowledge: factsheets, software, code, government publications, clinical guidelines, and exhibitions, just to name a few.

Outputs Defined

Original research article.

An article published in an academic journal can go by several names: original research, an article, a scholarly article, or a peer reviewed article. This format is an important output for many fields and disciplines. Original research articles are written by one or a number of authors who typically advance a new argument or idea to their field.

Conference Presentations or Proceedings

Conferences are organized events, usually centered on one field or topic, where researchers gather to present and discuss their work. Typically, presenters submit abstracts, or short summaries of their work, before a conference, and a group of organizers select a number of researchers who will present. Conference presentations are frequently transcribed and published in written form after they are given.
Book Chapters

Books are often composed of a collection of chapters, each written by a unique author. Usually, these kinds of books are organized by theme, with each author's chapter presenting a unique argument or perspective. Books with uniquely authored chapters are often curated and organized by one or more editors, who may contribute a chapter or foreword themselves.

Datasets

Often, when researchers perform their work, they will produce or work with large amounts of data, which they compile into datasets. Datasets can contain information about a wide variety of topics, from genetic code to demographic information. These datasets can then be published either independently or as an accompaniment to another scholarly output, such as an article. Many scientific grants and journals now require researchers to publish datasets.

Artwork

For some scholars, artwork is a primary research output. Scholars’ artwork can come in diverse forms and media, such as paintings, sculptures, musical performances, choreography, or literary works like poems.

Reports

Reports can come in many forms and may serve many functions. They can be authored by one or a number of people, and are frequently commissioned by government or private agencies. Some examples of reports are market reports, which analyze and predict a sector of an economy; technical reports, which can explain to researchers or clients how to complete a complex task; or white papers, which can inform or persuade an audience about a wide range of complex issues.

Digital Scholarship

Digital scholarship is a research output that significantly incorporates or relies on digital methodologies, authoring, and presentation. Digital scholarship often complements and adds to more traditional research outputs, and may be presented in a multimedia format. Some examples include mapping projects; multimodal projects that may be composed of text, visual, and audio elements; or digital, interactive archives.

Books

Researchers from every field and discipline produce books as a research output. Because of this, books can vary widely in content, length, form, and style, but often provide a broad overview of a topic compared to research outputs that are more limited in length, such as articles or conference proceedings. Books may be written by one or many authors, and researchers may contribute to a book in a number of ways: they could author an entire book, write a foreword, or collect and organize existing works in an anthology, among others.

Interviews

Scholars may be called upon by media outlets to share their knowledge about the topic they study. Interviews can provide an opportunity for researchers to teach a more general audience about the work that they perform.

Article in a Newspaper or Magazine

While a significant amount of researchers’ work is intended for a scholarly audience, occasionally researchers will publish in popular newspapers or magazines. Articles in these popular genres can be intended to inform a general audience of an issue in which the researcher is an expert, or they may be intended to persuade an audience about an issue.

Blogs

In addition to other scholarly outputs, many researchers also compose blogs about the work they do. Unlike books or articles, blogs are often shorter, more general, and more conversational, which makes them accessible to a wider audience. Blogs, again unlike other formats, can be published almost in real time, which can allow scholars to share current developments of their work.
  • University of Colorado Boulder Libraries
  • Research Guides
  • Research Strategies
  • Last Updated: Jul 10, 2024 10:28 AM
  • URL: https://libguides.colorado.edu/strategies/products
  • © Regents of the University of Colorado
Outputs from Research

A research output is the product of research. It can take many different forms or types. See here for a full glossary of output types.

The table below sets out the generic criteria for assessing outputs and the definitions of the starred levels, as used during the REF2021 exercise.

Definitions

Four star: Quality that is world-leading in terms of originality, rigour and significance.
Three star: Quality that is internationally excellent in terms of originality, rigour and significance but which falls short of the highest standards of excellence.
Two star: Quality that is recognised internationally in terms of originality, rigour and significance.
One star: Quality that is recognised nationally in terms of originality, rigour and significance.
Unclassified: Quality that falls below the standard of nationally recognised work. Or work which does not meet the published definition of research for the purposes of this assessment.

'World-leading', 'internationally' and 'nationally' in this context refer to quality standards. They do not refer to the nature or geographical scope of particular subjects, nor to the locus of research, nor its place of dissemination.

Definitions of Originality, Rigour and Significance

Originality will be understood as the extent to which the output makes an important and innovative contribution to understanding and knowledge in the field. Research outputs that demonstrate originality may do one or more of the following: produce and interpret new empirical findings or new material; engage with new and/or complex problems; develop innovative research methods, methodologies and analytical techniques; show imaginative and creative scope; provide new arguments and/or new forms of expression, formal innovations, interpretations and/or insights; collect and engage with novel types of data; and/or advance theory or the analysis of doctrine, policy or practice, and new forms of expression.

Rigour will be understood as the extent to which the work demonstrates intellectual coherence and integrity, and adopts robust and appropriate concepts, analyses, sources, theories and/or methodologies.

Significance will be understood as the extent to which the work has influenced, or has the capacity to influence, knowledge and scholarly thought, or the development and understanding of policy and/or practice.

Supplementary Output criteria – Understanding the thresholds:

The 'Panel criteria' explains in more detail how the sub-panels apply the assessment criteria and interpret the thresholds:

  • Main Panel A: Medicine, health and life sciences
  • Main Panel B: Physical sciences, engineering and mathematics
  • Main Panel C: Social sciences
  • Main Panel D: Arts and humanities

Definition of Research for the REF

1. For the purposes of the REF, research is defined as a process of investigation leading to new insights, effectively shared.

2. It includes work of direct relevance to the needs of commerce, industry, culture, society, and to the public and voluntary sectors; scholarship; the invention and generation of ideas, images, performances, artefacts including design, where these lead to new or substantially improved insights; and the use of existing knowledge in experimental development to produce new or substantially improved materials, devices, products and processes, including design and construction. It excludes routine testing and routine analysis of materials, components and processes such as for the maintenance of national standards, as distinct from the development of new analytical techniques.

It also excludes the development of teaching materials that do not embody original research.

3. It includes research that is published, disseminated or made publicly available in the form of assessable research outputs, and confidential reports.

​Output FAQs

Q. What is a research output?

A research output is the product of research. An underpinning principle of the REF is that all forms of research output will be assessed on a fair and equal basis. Sub-panels will not regard any particular form of output as of greater or lesser quality than another per se. You can access the full list of eligible output types here.

Q.  When is the next Research Excellence Framework?

The next exercise will be REF 2029, with results published in 2029.  It is therefore likely that we will make our submission towards the end of 2028, but the actual timetable hasn't been confirmed yet.

A sector-wide consultation is currently occurring to help refine the detail of the next exercise.  You can learn more about the emerging REF 2029 here.

Q.  Why am I being contacted now, if we don't know the final details for a future assessment?

Although we don't know all of the detail, we know that some of the core components of the previous exercise will be retained.  This will include the assessment of research outputs. 

To make the internal process more manageable and avoid a rush at the end of the REF cycle, we will conduct an output review process annually, in some shape or form, to spread the workload.

Furthermore, regardless of any external assessment frameworks, it is also important for us to understand the quality of research being produced at Edinburgh Napier University and to introduce support mechanisms that will enhance the quality of the research conducted.  This is of benefit to the University and to you and your career development.

Q. I haven't produced any REF-eligible outputs as yet, what should I do?

We recognise that not everyone contacted this year will have produced a REF-eligible output so early on in a new REF cycle.  If this is the case, you can respond with a nil return and you may be contacted again in a future annual review.

If you need additional support to help you deliver on your research objectives, please contact your line manager and/or Head of Research to discuss.

Q.  I was contacted last year to identify an output, but I have not received a notification for the 2024 annual cycle, why not?

Due to administrative capacity in RIE and the lack of detail on the REF 2029 rules relating to staff and outputs, we are restricting this year's scoring activity to a manageable volume based on a set of pre-defined, targeted criteria.

An output review process will be repeated annually.  If an output is not reviewed in the current year, we anticipate that it will be included in a future review process if it remains in your top selection.

Once we know more about the shape of future REF, we will adapt the annual process to meet the new eligibility criteria and aim to increase the volume of outputs being reviewed.

Q. I am unfamiliar with the REF criteria, and I do not feel well-enough equipped to provide a score or qualitative statement for my output/s, what should I do?

The output self-scoring field is optional.  We appreciate that some staff may not be familiar with the criteria and may therefore be unable to provide a reliable score.

The REF team has been working with Schools to develop a programme of REF awareness and output quality enhancement which aims to promote understanding of REF criteria and enable staff to score their work in future.  We aim to deliver quality enhancement training in all Schools by the end of the 2023-24 academic cycle.

Please look out for further communications on this.

For those staff who do wish to provide a score and commentary, please refer specifically to the REF main panel output criteria:

  • Main Panel A: Medicine, health and life sciences
  • Main Panel B: Physical sciences, engineering and mathematics
  • Main Panel C: Social sciences
  • Main Panel D: Arts and humanities

Q. Can I refer to Journal impact factors or other metrics as a basis of Output quality?

An underpinning principle of REF is that journal impact factors, any hierarchy of journals, or other journal-based metrics (including ABS ratings, journal rankings and total citations) should not be used in the assessment of outputs. No output is privileged or disadvantaged on the basis of the publisher, where it is published or the medium of its publication.

An output should be assessed on its content and contribution to advancing knowledge in its own right and in the context of the REF quality threshold criteria, irrespective of the ranking of the journal or publication outlet in which it appears.

You should refer only to the REF output quality criteria (please see definitions above) if you are adding the optional self-score and commentary field and you should not refer to any journal ranking sources.

Q. What is Open Access Policy and how does it affect my outputs?

Under current rules, to be eligible for future research assessment exercises, higher education institutions (HEIs) are required to implement processes and procedures to comply with the REF Open Access policy. 

It is a requirement that all journal articles and conference proceedings with an International Standard Serial Number (ISSN), accepted for publication after 1 April 2016, be made open access.  This can be achieved either by publishing the output in an open access journal or by depositing an author accepted manuscript version in the University's repository within three months of the acceptance date.

Although the current Open Access policy applies only to journal articles and conference proceedings with an ISSN, Edinburgh Napier University expects staff to deposit all forms of research output in the University research management system, subject to any publishers' restrictions.

You can read the University's Open Access Policy here.

Q. My Output is likely to form part of a portfolio of work (multi-component output), how do I collate and present this type of output for assessment?

The REF team will be working with relevant School research leadership teams to develop platforms to present multicomponent / portfolio submissions.  In the meantime, please use the commentary section to describe how your output could form part of a multicomponent submission and provide any useful contextual information about the research question your work is addressing.

Q. How will the information I provide about my outputs be used and for what purpose?

In the 2024 output cycle, at least one output identified by each contacted author will be reviewed by a panel of internal and external subject experts.

The information provided will be used to enable us to report on research quality measures as identified in the University R&I strategy.

Output quality data will be recorded centrally on the University's REF module in Worktribe.  Access to this data is restricted to a core team of REF staff based in the Research, Innovation and Enterprise Office and to key senior leaders in each School.

The data will not be used for any purpose other than monitoring REF-related preparations.

Q. Who else will be involved in reviewing my output/s?

Outputs will be reviewed by an expert panel of internal and external independent reviewers.

Q. Will I receive feedback on my Output/s?

The REF team encourages open and transparent communication relating to output review and feedback.  We will be working with senior research leaders within the School to promote this.

Q.  I have identified more than one Output, will all of my identified outputs be reviewed this year?

In the 2024 cycle, we are committed to reviewing at least one output from each contacted author via an internal, external and moderation review process.

Once we know more about the shape of a future REF, we will adapt the annual process to meet the new criteria and eligibility rules.



8 strategies to optimise your research output while creating impact


This blog post will provide you with strategies to enhance your productivity and focus, and help you get published what needs to be published. Our research needs to be published for it to become known to our fellow researchers, who will create new studies based on our findings. Publishing your results also allows others to use those findings to change policy and practice and, in that way, create an impact. This means that we have a responsibility to get our work published.

But with academic life and life in general being so busy, how do we ensure that we publish the results of the projects we are involved in so that they can inform future studies and have an impact on the world?

This blog post offers a few ideas. But before we proceed, we need to clarify a few things. Publishing your article in a peer-reviewed academic journal should not be the primary purpose or the only output of a research project. When we embark on a research journey, we need to do research that can contribute to future research or society, preferably both, as and when possible. Our research projects need to be conceptualised with impact in mind.

Therefore, the journal editor's acceptance letter should not signify a chapter's end. Translate your research into practice and make your research findings usable in the world, such as disseminating an evidence-based warm-up programme for cricketers or a screening tool for therapists working with newborns.

Disseminating your research findings means getting the results to those who can benefit from them. This includes distribution to both the academic community and the end-user. For example, research findings can be summarised in a blog post or video and distributed via social media. The possibilities are endless. How you translate and disseminate your research findings depends on the area of study and topic.

In an earlier blog post, I gave 14 top tips for getting your precious paper published, with a focus on the paper itself. In this blog post, we’ll zoom out to a few broader tactics for taking a more strategic approach to how you work, so that you ultimately publish what needs to be published.

Optimise your current assets

Before starting a new project, check what’s in your cupboard. The best place to start is an inventory check of projects that are halfway there: those that have progressed well but stalled for some reason. Can you revive them and push them past the completion mark? Do you have any existing data that you can analyse and turn into a paper? You may have completed your PhD or MSc but not analysed or written up all the data. Or maybe you still have some data from a project done for non-degree purposes. Always be mindful of “salami-slicing” when choosing which data you are going to use for which paper.

You may also have a half-written publication somewhere. This draft is precious and can be turned into a publishable gem within a few days of dedicated time. Don’t give up on it yet.

While doing your inventory check, go through the list of postgraduate projects you supervised. Are any potential papers lingering, with the student already having moved on to the next chapter of their lives? Disseminating the findings to the world out there is an ethical thing to do. Contact the student and other contributors and encourage them to help get the paper published. Always consider authorship based on the generally accepted guidance by the International Committee of Medical Journal Editors.

Manage your projects effectively

There are many moving parts to “getting it published”, and effective project management is key. The whole project gets delayed if you don’t act timeously on each step: if you delay sending the data to a biostatistician, or you receive feedback from the journal with revisions due but take no action, every day you postpone delays your paper getting accepted. Setting your deadlines and sticking to them is crucial. Life gets busy and can derail our best intentions, so put some power into the planning. You can’t control how others respond to a project’s deadlines, but if you stick to yours and ensure that any delay does not lie with you, you have the battle half won.

The Getting Organised and Productivity Top Tips playlists on the Research Masterminds YouTube channel will be very helpful here.

Tread carefully when a new project knocks on your door

Start a new project only once you have a solid plan for moving your existing publications into production. The journey to publication requires resources that include your time and expertise, as well as those from others, such as a librarian or biostatistician. Redirecting your resources into new projects without accounting for existing projects will reduce your research output as you may spread yourself too thin.

As you get better at managing multiple projects simultaneously and have surrounded yourself with a trustworthy and knowledgeable team, you can start more projects without having completed existing ones. You’ll know how many balls you can successfully keep in the air at the same time. Still, don’t let the current projects stall because of the excitement of the new projects.

Fly with the principle of “meaningful collaboration for mutual benefit”

Being part of a trustworthy and knowledgeable team is a great way to get more research done. My best advice when working with collaborators is to clarify expectations, roles and responsibilities early on. The biggest mess-ups come from mismatched expectations. Working with others is also a great way to increase accountability. It is difficult to miss a deadline if you know five others are waiting for your task to be completed. Where you are in charge, choose your collaborators according to the contribution that they can make towards the project (meaningful collaboration) and ensure that both the project and collaborator benefit from their contribution (mutual benefit).

In addition to building your own winning team, you can also join an existing research group. This will aid your development while you contribute to knowledge creation. Be aware that you need some first-author papers for career advancement or promotion. So don’t find yourself in the backseat of too many projects; jump into the driving seat where you can.

Encourage postgraduate students to publish

It is important not only to disseminate our own projects but also to encourage others to disseminate their work. Make the postgraduate students under your supervision aware, early on in their postgraduate journeys, of the expectation to publish their work. Doctoral students can publish as they complete the various phases (depending on the nature of the project). Master’s students may not have the time to publish during the course of their studies but can have a draft of a paper ready by the time they submit their research project for examination, which they can then refine while they await the examination outcome. The above, of course, also depends on your institution’s policies on publishing postgraduate projects.

Many postgraduate students are new to the publication process. This “Publishing your Research” playlist on the Research Masterminds YouTube channel will guide them step by step through putting together the first draft of a publication.

Keep writing tasks top-of-mind

Create time in your calendar for publication writing: weekly, if daily is not possible. Here, an accountability partner will come in handy. Arrange writing retreats or join existing writing retreats arranged by your institution. If you scheduled two hours for research and something unforeseen eats away an hour and a half, use the left-over 30 minutes, even if you just outline your abstract in that time. In 2020, when lockdown regulations restructured our entire work setup, I created this quote to help me move forward one small step at a time: “Marginal gains persistently lead to high impact consistently”. That half an hour, even though it feels like it is not enough, is half an hour closer to getting your paper into the hands of the editor.

Free yourself up where reasonable and possible

Many of us are appointed to academic positions with teaching, research and service components in our job descriptions. If this is your reality, you need to balance your contribution across the research, teaching and service domains. Use the resources available to relieve yourself of some of the activities that pull you away from your research writing desk, such as funding for teaching relief or other resources within your institution. You can still excel in the research journey while enjoying the variation that the teaching and service pillars bring.

Make your life easier

Some things we do over and over without putting simple systems in place to avoid future repetition. Here are a few examples:

  • Have a list of reputable and appropriate journals at hand. When you search for a journal or “home” for your research, note down important information, such as whether it is open access or charges article processing fees. Add to this list each time you come across a new journal. This will save you from repeating the search every time you want to submit a new paper to a journal in your field.
  • How often have you searched for a website URL that you recently used? You can see the home page in your mind’s eye but can’t remember what it's called. Bookmarking that website in your browser will help you access it within a few seconds when you need it again. I have put together a bookmarks folder with all kinds of useful academic bookmarks for a Chrome browser, should you want to check them out. On the  ResearchMasterminds.com  homepage, scroll down to the section “A Gift from One Academic to Another” – you’ll find the bookmarks folder as well as a few other gifts. While you are at it, there are also a whole bunch of other tools to make your life easier so that you can use your time on more tactical tasks.
  • Also, use AI where you can. There are many ways in which AI can make your life easier in a legal, ethical and moral way. This AI for Researchers playlist on the Research Masterminds YouTube channel will tell you more.

Publishing will ensure that precious research findings can potentially reach those who can use them. In order to publish, we need to work strategically with our resources, plan well, and get more done with less.

Most importantly, create impact, keep the balance, and stay happy!

 Cover photo by Pixabay



Research Impact : Outputs and Activities

  • Outputs and Activities
  • Establishing Your Author Name and Presence
  • Enhancing Your Impact
  • Tracking Your Work
  • Telling Your Story
  • Impact Frameworks

What are Scholarly Outputs and Activities?

Scholarly/research outputs and activities are the various products and contributions created or undertaken by scholars and investigators in the course of their academic and/or research efforts.

One common output is in the form of scholarly publications which are defined by Washington University as:

". . . articles, abstracts, presentations at professional meetings and grant applications, [that] provide the main vehicle to disseminate findings, thoughts, and analysis to the scientific, academic, and lay communities. For academic activities to contribute to the advancement of knowledge, they must be published in sufficient detail and accuracy to enable others to understand and elaborate the results. For the authors of such work, successful publication improves opportunities for academic funding and promotion while enhancing scientific and scholarly achievement and repute."

Examples of activities include: editorial board memberships, leadership in professional societies, meeting organization, consultative efforts, contributions to successful grant applications, invited talks and presentations, administrative roles, and contribution of service to a clinical laboratory program, to name a few. For more examples of activities, see the Washington University School of Medicine Appointments & Promotions Guidelines and Requirements or the "Examples of Outputs and Activities" box below. Also of interest is Table 1 in "Research impact: We need negative metrics too".

Tracking your research outputs and activities is key to documenting the impact of your research. One starting point for telling a story about your research impact is your publications. Advances in digital technology afford numerous avenues for scholars not only to disseminate research findings but also to document the diffusion of their research. The capacity to measure and report tangible outcomes can be used for a variety of purposes and tailored for audiences ranging from laypeople to physicians, investigators, organizations, and funding agencies. Publication data can be used to craft a compelling narrative about your impact. See Quantifying the Impact of My Publications for examples of how to tell a story using publication data.

Another tip is to utilize various means of disseminating your research. See Strategies for Enhancing Research Impact for more information.

  • Last Updated: Jun 24, 2024 7:38 AM
  • URL: https://beckerguides.wustl.edu/impact



Front Comput Neurosci

Nine Criteria for a Measure of Scientific Output

Gabriel Kreiman

1 Department of Ophthalmology, Children’s Hospital, Harvard Medical School, Boston, MA, USA

2 Department of Neurology, Children’s Hospital, Harvard Medical School, Boston, MA, USA

John H. R. Maunsell

3 Department of Neurobiology, Harvard Medical School, Boston, MA, USA

Scientific research produces new knowledge, technologies, and clinical treatments that can lead to enormous returns. Often, the path from basic research to new paradigms and direct impact on society takes time. Precise quantification of scientific output in the short-term is not an easy task but is critical for evaluating scientists, laboratories, departments, and institutions. While there have been attempts to quantify scientific output, we argue that current methods are not ideal and suffer from solvable difficulties. Here we propose criteria that a metric should have to be considered a good index of scientific output. Specifically, we argue that such an index should be quantitative, based on robust data, rapidly updated and retrospective, presented with confidence intervals, normalized by number of contributors, career stage and discipline, impractical to manipulate, and focused on quality over quantity. Such an index should be validated through empirical testing. The purpose of quantitatively evaluating scientific output is not to replace careful, rigorous review by experts but rather to complement those efforts. Because it has the potential to greatly influence the efficiency of scientific research, we have a duty to reflect upon and implement novel and rigorous ways of evaluating scientific output. The criteria proposed here provide initial steps toward the systematic development and validation of a metric to evaluate scientific output.

Introduction

Productivity is the ratio of some output value to some input value. In some enterprises productivity can be measured with high precision. A factory can easily measure how many widgets are produced per man-hour of labor. Evaluating scientific productivity, however, is trickier. The input value for scientific productivity is tractable: it might be measured in terms of years of effort by a scientist, research team, department or program, or perhaps in terms of research dollars. It is the output value for scientific productivity that is problematic.

Scientific research produces new knowledge, some fraction of which can lead to enormous returns. In the long run, science evaluates itself. History has a particularly rigorous way of revealing the value of different scientific theories and efforts. Good science leads to novel ideas and changes the way we interpret physical phenomena and the world around us. Good science influences the direction of science itself, and the development of new technologies and social policies. Poor science leads to dead ends, either because it fails to advance understanding in useful ways or because it contains important errors. Poor science produces papers that can eventually feed the fireplace, or in a more modern and ecologically friendly version, the accumulation of electronic documents.

The process of science evaluating itself is slow. Meanwhile, we need more immediate ways of evaluating scientific output. Sorting out which scientists and research directions are currently providing the most useful output is a thorny problem, but it must be done. Scientists must be evaluated for hiring and promotion, and informed decisions need to be made about how to distribute research funding. The need for evaluation goes beyond the level of individuals. It is often important to evaluate the scientific output of groups of scientists such as laboratories, departments, centers, whole institutions, and perhaps even entire fields. Similarly, funding organizations and agencies need to evaluate the output from various initiatives and funding mechanisms.

Scientific output has traditionally been assessed using peer review in the form of evaluations from a handful of experts. Expert reviewers can evaluate the rigor, value and beauty of new findings, and gauge how they advance the field. Such peer-review constitutes an important approach to evaluating scientific output and it will continue to play a critical role in many forms of evaluation. However, peer review is limited by its subjective nature and the difficulty of obtaining comments from experts that are thorough and thoughtful, and whose comments can be compared across different evaluations. These limitations have driven institutions and agencies to seek more quantitative measures that can complement and sometimes extend thorough evaluation by peers.

In the absence of good quantitative measures of scientific output, many have settled for poor ones. For example, it is often assumed, explicitly or implicitly, that a long list of publications indicates good output. Using the number of publications as a metric emphasizes quantity rather than quality, when it is the latter that is almost always the value of interest (Siegel and Baveye, 2010; Refinetti, 2011). In an attempt to measure something closer to quality, many turn to journal impact factors (Garfield, 2006). The misuse of journal impact factors in evaluating scientific output has been discussed many times (e.g., Hecht et al., 1998; Amin and Mabe, 2000; Skorka, 2003; Hirsch, 2005; Editors, 2006; Alberts et al., 2008; Castelnuovo, 2008; Petsko, 2008; Simons, 2008; Bollen et al., 2009; Dimitrov et al., 2010; Hughes et al., 2010, among many others). We will not repeat the problems with using the impact factors of journals to evaluate the output of individual scientists here, nor will we focus on the negative effects this use has had on the process of publishing scientific articles. Instead, we note that the persistent misuse of impact factors in the face of clear evidence of its inadequacies must reflect desperation for a quantitative measure of scientific output.

Many measures of scientific output have been devised or discussed. Because most scientific output takes the form of publication in peer-reviewed journals, these measures focus on articles and citations (Bollen et al., 2009). They include a broad range of approaches, such as total number of citations, journal impact factors (Garfield, 2006), h-factor (Hirsch, 2005), page ranks, article download statistics, and comments using social media (e.g., Mandavilli, 2011). While all these approaches have merit, we believe that no existing method captures all the criteria that are needed for a rigorous and comprehensive measure of scientific output. Here we discuss what we consider necessary (but not necessarily sufficient) criteria for a metric or index of scientific output. The goal of developing quantitative criteria to evaluate scientific output is not to replace examination by expert reviewers but rather to complement peer-review efforts. The criteria that we propose are aimed toward developing a quantitative metric that is appropriately normalized, emphasizes the quality of scientific output, and can be used for rigorous, reliable comparisons. We do not propose a specific measure, which should be based on extensive testing and comparison of candidate approaches, together with feedback from interested parties. Nevertheless, we believe that a discussion of properties that would make a suitable measure may help progress toward this goal.
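Of the citation-based measures just listed, Hirsch's h-factor is simple enough to sketch: an author's h is the largest number such that h of their papers have each been cited at least h times. A minimal illustration in Python (the citation counts are invented for the example):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still satisfies the h-index condition
        else:
            break     # once a paper falls below its rank, h cannot grow
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: one highly cited paper cannot raise h alone
print(h_index([]))                # 0
```

The second example shows one property the authors' criteria push back against: the h-factor deliberately discounts a single very highly cited paper, trading sensitivity to outliers for robustness.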

We propose that a good index of scientific output will need to have nine characteristics.

Data Quality and Presentation

Quantitative

Perhaps the most important requirement of a good measure of scientific output is that it be quantitative. The primary alternative, subjective ratings by experts, will continue to be important for evaluations, but nevertheless suffers from some important limitations. Ratings by a handful of invited peers, as is normally used in hiring and promoting of scientists, provide ratings of undetermined precision. Moreover, the peers providing detailed comments on different job candidates or grant applications are typically non-overlapping, making it difficult to directly compare their comments.

A further problem with subjective comments is that they put considerable demands on reviewers’ time. This makes it impractical to overcome uncertainties about comparisons between different reviewers by reaching out to a very large pool of reviewers for detailed comments. The alternative of getting brief comments from a very large pool of reviewers is also unlikely to work. Several initiatives provide frameworks for peer commentary from large sets of commenters. Most online journals provide rapid publication of comments from readers about specific articles (e.g., electronic responses for journals hosted by HighWire Press). However, few articles attract many comments, and most get none. The comments that are posted typically come from people with interest in the specific subject of the article, which means there is little overlap in the people commenting on articles in different journals. Even with comments from many peers, it remains unclear how a large set of subjective comments should be turned into a decision about scientific output.

Based on robust data

Some ventures have sought to quantify peer commentary. For example, The Faculty of 1000 maintains a large editorial board for post-publication peer review of published articles, with numerical rating being given to each rated article. Taking another approach, WebmedCentral is a journal that publishes reviewers’ comments and quantitative ratings along with published articles. However, only a small fraction of published articles are evaluated by systems like these, and many of these are rated by one or two evaluators, limiting the value of this approach as a comprehensive tool for evaluating scientific contributions. It is difficult to know how many evaluations would be needed to provide a precise evaluation of an article, but the number is clearly more than the few that are currently received for most articles. Additionally, it is difficult to assess the accuracy of the comments (should one also evaluate the comments?).

It seems very unlikely that a sufficiently broad and homogeneous set of evaluations could be obtained to achieve uniformly widespread quantitative treatment of most scientists while avoiding being dominated by people who are most vocal or who have the most free time (as opposed to people with the most expertise). There is also reason for concern that peer-rating systems could be subject to manipulation (see below). For these reasons, we believe that a reliable measure of scientific output should be based on hard data rather than subjective ratings.

One could imagine specific historical instances where subjective peer commentary could have been (and probably was) quite detrimental to scientific progress. Imagine Galileo’s statement that the Earth moves or Darwin’s Theory of Evolution being dismissed by Twitter-like commentators.

Based on data that are rapidly updated and retrospective

While other sources might be useful and should not be excluded from consideration, the obvious choice for evaluation data is the citations of peer-reviewed articles. Publication of findings in peer-reviewed journals is the sine qua non for scientific progress, so the scientific literature is the natural place to look for a measure of scientific output. Article citations fulfill several important criteria. First, because every scientist must engage in scientific publication, a measure based on citations can be used to assess any scientist or group of scientists. Second, data on article citations are readily accessible and updated regularly, so that an index of output can be up-to-date. This may be particularly important for evaluating junior scientists, who have a short track record. Finally, publication data are available for a period that spans the lives of almost all working scientists, making it possible to track trends or monitor career trajectories. Historical data are particularly important for validating any measure of scientific output (see below), and it would be impractical to obtain historical rankings using peer ratings or other subjective approaches. Because citations provide an objective, quantifiable, and available resource, different indices can be compared (see Validation below) and incremental improvements can be made based on evaluation of their relative merits.

Citations are not without weaknesses as a basis for measuring scientific output. While more-cited articles tend to correlate with important new findings, articles can also be cited more because they contain important errors. Review articles are generally cited more than original research articles, and books or chapters are generally cited less. Although articles are now identified by type in databases, how these factors should be weighted in determining an individual’s contribution would need to be carefully addressed in constructing a metric. Additionally, there will be a lag between publication and citations due to the publishing process itself and due to the time required to carry out new experiments inspired by that publication.

Citations also overlook other important components of a scientist’s contribution. Scientists mentor students and postdoctoral fellows, teach classes and give lectures, organize workshops, courses, and conferences, review manuscripts and grants, generate patents, lead clinical trials, contribute methods, algorithms, and data to shared repositories, and reach out to the public through journalists, books, or other efforts. For this reason, subjective evaluations by well-qualified experts are likely to remain an essential component of evaluating scientific output. Some aspects of scientific output not involving publication might be quantified and incorporated into an index of output, but others are difficult to quantify. Because a robust index of scientific output is likely to depend largely on citation data, in the following sections we restrict our discussion to citations, without intending to exclude other data that could contribute to an index (which might be multidimensional).

We acknowledge that there are practical issues to overcome in creating even the simplest metric based on citations. In particular, for such a metric to perform well, databases will need to assign a unique identifier to individual authors, without which it would be impossible to evaluate anyone with a common name like Smith, Martin, or Nguyen. However, this should not be a substantial obstacle, and some efforts are already underway (e.g., Author ID by PubMed or ArXiv; see Enserink, 2009).

Presented with distributions and confidence intervals

An index of scientific output must be presented together with an appropriate distribution or confidence interval. Considering variation and confidence intervals is commonplace in most areas of scientific research. There is something deeply inappropriate about scientists using a measure of performance without considering its precision. A substantial component of the misuse of impact factor is the failure to consider its lack of precision (e.g., Dimitrov et al., 2010 ).

While the confidence intervals for an index of output for prolific senior investigators or large programs might be narrow, those for junior investigators will be appreciable because they have had less time to affect their field. Yet it is junior investigators who are most frequently evaluated for hiring or promotion. For example, when comparing different postdoctoral candidates for a junior faculty position, it would be desirable to know the distribution of values for a given index across a large population of individuals in the same field and at the same career stage so that differences among candidates can be evaluated in the context of this distribution. Routinely providing a confidence interval with an index of performance will reveal when individuals are statistically indistinguishable and reduce the chances of misuse.
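To make the idea concrete, the sketch below estimates a percentile-bootstrap confidence interval for one simple index, mean citations per article. The citation counts and the choice of index are hypothetical illustrations, not data or a proposal from this article; any citation-based statistic could be plugged in instead.

```python
import random
import statistics

def bootstrap_ci(values, stat=statistics.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a summary statistic."""
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot)]
    return lo, hi

# Hypothetical citation counts for a junior investigator's articles
citations = [3, 0, 12, 5, 1, 8, 2]
low, high = bootstrap_ci(citations)
```

For a junior investigator with few articles, the resulting interval is wide, which is exactly the point: reporting it alongside the index makes statistically indistinguishable candidates visible as such.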

Normalization and Fairness

Normalized by number of contributors.

When evaluating the science reported in a manuscript, the quality and significance of the work are the main considerations, and the number of authors who contributed to the findings is almost irrelevant. The situation differs, however, when evaluating the contributions of individuals. Clearly, if a paper has only one author, that scientist deserves more credit for the work than if the same paper had been published with 10 other authors.

Defining an appropriate way to normalize for the number of contributors is not simple. Dividing credit equally among the authors is an attractive approach, but in most cases the first author listed has contributed more to an article than other individual authors. Similarly, in some disciplines the last place in the list is usually reserved for the senior investigator, and the relative credit due to a senior investigator is not well established.

Given the importance of authorship, it would not be unreasonable to require that each author be explicitly assigned a quantitative fractional contribution. However, divvying up author credit quantitatively would not only be extremely difficult but would also probably lead to authorship disputes on a scale well beyond those that currently occur when only the order of authors must be decided. Nevertheless, some disciplines have already taken steps in this direction, with an increasing number of journals requiring explicit statements of how each author contributed to an article.
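As an illustration of what normalizing by number of contributors might look like, the sketch below implements two candidate schemes: an equal split, and harmonic counting, a position-weighted scheme discussed in the bibliometrics literature. Both schemes are assumptions for illustration, not proposals from this article.

```python
def equal_credit(n_authors):
    """Split one unit of credit equally among all authors."""
    return [1 / n_authors] * n_authors

def harmonic_credit(n_authors):
    """Weight earlier author positions more heavily (harmonic counting).
    Note: this ignores the convention, raised in the text, of reserving
    the last position for the senior investigator."""
    weights = [1 / (i + 1) for i in range(n_authors)]
    total = sum(weights)
    return [w / total for w in weights]

equal = equal_credit(3)        # each author gets one third
harmonic = harmonic_credit(3)  # first author ~0.545, then ~0.273, ~0.182
```

Neither scheme resolves the senior-author question; a real metric would need a convention (or explicit contribution statements) to decide how much weight the last position deserves.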

While it seems difficult to precisely quantify how different authors contribute to a given study, if such an approach came into practice, it might not take long before disciplines established standards for assigning appropriate credit for different types of contributions. Regardless of how normalization for the number of authors is done, one likely benefit of a widely used metric normalized in this way would be the rapid elimination of honorary authorship.

Normalized by discipline

Scientists comprise overlapping but distinct communities that differ considerably in their size and publication habits. Publications in some disciplines include far more citations than others, either because the discipline is larger and produces more papers, or because it has a tradition of providing more comprehensive treatment of prior work (e.g., Jemec, 2001 ; Della Sala and Crawford, 2006 ; Bollen et al., 2009 ; Fersht, 2009 ). Other factors can affect the average number of citations in an article, such as journals that restrict the number of citations that an article may include.

A simple index based on how frequently an author is cited can make an investigator working in a large field that is generous with citations appear more productive than one working in a smaller field where people save extensive references for review articles. For example, if two fields are equivalent except that articles in one field reference twice as many articles as the other, a simple measure based on citations could make scientists in the first field appear on average twice as productive as those in the second. To have maximal value, an index of output based on citations should normalize for differences in the way citations are used in different fields (including the number of people in the field, etc.). Ideally, a measure would reflect an individual’s relative contribution within his or her field. It will be challenging to normalize for such differences between disciplines in a rigorous and automatic way, and comprehensive treatment of this issue will require simulation and experimentation. Here, we briefly mention potential approaches to illustrate a class of solutions.

There is a well-developed field of defining areas of science based on whether pairs of authors are cited in the same articles (author co-citation analysis; Griffith et al., 1986 ). More recently, these methods have been extended by automated rating of text similarity between articles (e.g., Greene et al., 2009 ). Methods like these might be adopted to define a community for any given scientist. With this approach, an investigator might self-define their community based on the literature that they consider most relevant, as reflected by the articles they cite in their own articles. For a robust definition that could not be easily manipulated (see below), an iterative process that used articles that cite cited articles, or articles that are cited by cited articles, would probably be needed. While it is difficult to anticipate what definition of a scientist’s community might be most effective, one benefit of using objective, accessible data is that alternative definitions can be tested and refined.

Once a community of articles has been defined for an investigator, the fraction of all the citations in those articles that refer to the investigator would give a measure of the investigator’s impact within that field. This might provide a much more valuable and interpretable measure than raw counts of numbers of papers or number of citations. It is conceivable that this type of analysis could also permit deeper insights. For example, it might reveal investigators who were widely cited within multiple communities, who were playing a bridging role.
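The community-share idea above can be sketched in a few lines. The author identifiers and the community definition below are entirely hypothetical; in practice the community would come from co-citation or text-similarity analysis as described in the text.

```python
# Each community article is represented by the author IDs it cites.
community_articles = [
    ["a17", "a03", "a17", "a42"],
    ["a03", "a08"],
    ["a17", "a99", "a08"],
]

def citation_share(articles, author_id):
    """Fraction of all citations in a community that point to one author."""
    all_cites = [cited for article in articles for cited in article]
    return all_cites.count(author_id) / len(all_cites)

share = citation_share(community_articles, "a17")  # 3 of 9 citations
```

Because the denominator is the community's total citation volume, the measure is automatically normalized for how generously that field cites, unlike a raw citation count.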

Normalized for career stage

A measure that incorporated the properties discussed so far would allow a meaningful assessment of an individual’s contribution to science. It would, however, rate senior investigators as more influential than junior investigators. This is a property of many existing measures, such as total number of citations or the h-index. For some purposes this is appropriate; investigators are frequently compared against others at a similar stage of their careers, and senior scientists generally have contributed more than junior scientists. However, for some decisions, such as judging which investigators are most productive per unit time, an adjustment for seniority is needed. Additionally, it might be revealing for a search committee to compare candidates for an Assistant Professor position with well-known senior investigators as they were when they entered the rank of Assistant Professor.

This type of normalization for career stage would be difficult to achieve for several reasons. The explosive growth in the number of journals and scientists will make precise normalization difficult. Additionally, data on when individuals entered particular stages (postdoctoral, Assistant Professor, Associate Professor, Full Professor) are not widely available. A workable approximation might be based on the time since an author’s first (or first n) papers were published. Because the size of different disciplines changes with time, and the rate at which articles are cited does not remain constant, these trends would need to be compensated for when making comparisons over time.

A related issue is the effect of time itself on citation rates. An earlier publication has had more time to be cited (yet scientists tend to cite more recent work). In some sense, a publication from the year 2000 with 100 citations is less notable than a publication from the year 2010 with 100 citations. A simple way to address this is to compute the number of citations per year (yet we note that this involves arguable assumptions of stationarity in citation rates).
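The citations-per-year adjustment just described is simple to compute; the sketch below does so for the two publications in the example, with the caveat the text itself raises (it assumes roughly stationary citation rates).

```python
from datetime import date

def citations_per_year(n_citations, pub_year, as_of_year=None):
    """Age-adjusted citation rate. Assumes (arguably) that citation
    rates are roughly stationary over an article's life."""
    if as_of_year is None:
        as_of_year = date.today().year
    years = max(as_of_year - pub_year, 1)  # avoid dividing by zero
    return n_citations / years

# The two publications from the text, compared as of 2011
older = citations_per_year(100, 2000, as_of_year=2011)  # ~9.1 per year
newer = citations_per_year(100, 2010, as_of_year=2011)  # 100 per year
```

The adjustment makes the 2010 publication, equally cited in absolute terms, roughly an order of magnitude more notable per year of exposure.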

Fostering Great Science

Impractical to manipulate.

If a metric can be manipulated, such that it can be changed through actions that are relatively easy compared to those that it is supposed to measure, people will undoubtedly exploit that weakness. Given an index that is based on an open algorithm (and the algorithm should be open, computable and readily available), it is inevitable that scientists whose livelihoods are affected by that index will come up with ingenious ways to game the system. A good index should be impractical to game so that it encourages scientists to do good science rather than working on tactics that distort the measure.

It is for this reason that measures such as the number of times an article is downloaded cannot be used. That approach would invite the generation of an industry that would surreptitiously download specific articles many times for a fee. For the same reason, a post-publication peer-review measure that depended on evaluations from small numbers of evaluators cannot be robust when careers are at stake.

A measure that is based on the number of times an author’s articles are cited should be relatively secure from gaming, assuming that the neighborhood of articles used to normalize by discipline is sufficiently large. Even a moderate-sized cartel of scientists who agreed to cite each other gratuitously would have little impact on their metrics unless their articles were so poorly cited that any manipulation would still leave them uncompetitive. Nevertheless, it seems likely that a measure based on citations should ignore self-citations and perhaps eliminate or discount citations from recent co-authors (Sala and Brooks, 2008 ).
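The filtering suggested above (ignoring self-citations and discounting citations from recent co-authors) can be sketched as follows. The data structure and all names are illustrative assumptions, not a specification of how citation databases represent this information.

```python
def filtered_citations(citing_records, author, recent_coauthors):
    """Count citations to `author`, ignoring self-citations and
    citations from recent co-authors.

    citing_records: list of (citing_authors, cited_author) pairs.
    """
    count = 0
    for citing_authors, cited_author in citing_records:
        if cited_author != author:
            continue
        if author in citing_authors:                # self-citation
            continue
        if recent_coauthors & set(citing_authors):  # co-author citation
            continue
        count += 1
    return count

records = [
    ({"smith", "lee"}, "doe"),  # counts
    ({"doe", "kim"}, "doe"),    # self-citation, ignored
    ({"park"}, "doe"),          # park is a recent co-author, ignored
    ({"lee"}, "smith"),         # cites someone else
]
n = filtered_citations(records, "doe", recent_coauthors={"park"})  # 1
```

A gentler variant would discount co-author citations by a weight rather than dropping them entirely, since citing one's collaborators is often legitimate.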

One would hope that a key motivation for scientific inquiry is, as Feynman put it, “the pleasure of finding things out.” Yet, any metric to evaluate scientific output establishes a certain incentive structure in the research efforts. To some extent, this is unavoidable. Ideally, the incentive structure imposed by a good metric should promote great science as opposed to incentive structures that reward (even financially in some cases) merely publishing an article in specific journals or publishing a certain number of articles. A good metric might encourage collaborative efforts, interdisciplinary efforts, and innovative approaches. It would be important to continuously monitor and evaluate the effects of incentive structures imposed by any metric to ensure that they do not discourage important scientific efforts including interdisciplinary research, collaborations, adequate training, and mentoring of students and others.

Focused on quality over quantity

Most existing metrics show a monotonic dependence on the number of publications. In other words, there are no “negative” citations (but perhaps there should be!). This monotonicity can promote quantity rather than quality. Consider the following example (real numbers but fictitious names). We compare authors Joe Doe and Jane Smith, who work in the same research field. Each published their first scientific article 12 years ago, and the most recent publication from each author was in 2011. Joe has published 45 manuscripts, which have been cited a total of 591 times (mean = 13.1 citations per article, median = 6 citations per article). Jane has published 14 manuscripts, which have been cited 1782 times (mean = 127.3 citations per article, median = 57 citations per article). We argue that Jane’s work is more impactful despite the fact that her colleague has published three times as many manuscripts in the same period. The process of publishing a manuscript has a cost in itself, including the time required for the authors to do the research and report the results and the time spent by editors, reviewers, and readers evaluating the manuscript.
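The means quoted above follow directly from the stated totals; the medians cannot be recomputed here because they require the per-article citation counts, which are not reproduced.

```python
# Totals quoted in the text for the two fictitious authors
joe_articles, joe_citations = 45, 591
jane_articles, jane_citations = 14, 1782

joe_mean = joe_citations / joe_articles      # ~13.1 citations per article
jane_mean = jane_citations / jane_articles   # ~127.3 citations per article
```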

In addressing this issue, care must be taken to avoid a measure that discourages scientists from reporting solid, but apparently unexciting, results. For example, penalizing the publication of possibly uninteresting manuscripts by using the average number of citations per article would be inappropriate, because it would discourage the publication of any results of below-average interest. The h-index (and its variants) constitutes an interesting attempt to emphasize quality (Hirsch, 2005). An extension of this notion would be to apply a threshold to the number of citations: publications that do not achieve a certain minimum number of citations would not count toward the overall measure of output. This threshold would have to be defined empirically and may itself be field-dependent. Such a threshold may encourage scientists to devote more time to thinking about and creating excellent work rather than diluting everyone’s attention with publications that few consider valuable.
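For reference, the sketch below computes the standard h-index (Hirsch, 2005) together with the thresholded variant suggested above; the example citation counts and the threshold value are arbitrary illustrations.

```python
def h_index(citations):
    """Largest h such that h articles each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def thresholded_citations(citations, min_cites):
    """Total citations counting only articles at or above a minimum;
    the threshold itself would need to be set empirically per field."""
    return sum(c for c in citations if c >= min_cites)

cites = [10, 8, 5, 4, 3, 0]
h = h_index(cites)                   # 4
t = thresholded_citations(cites, 5)  # 10 + 8 + 5 = 23
```

Unlike the mean, neither measure is lowered by publishing an additional little-cited article, so neither penalizes reporting solid but unexciting results.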

Validation

Given a metric, we must be able to ask how good it is. Intuitively, one could compare different metrics by selecting the one that provides a better assessment of excellence in scientific output. This argument, however, appears circular: it seems we need a priori information about excellence in order to compare different possible metrics. It could be argued that the scientific community will be able to evaluate whether a metric is good by assessing whether it correlates well with intuitive judgments about what constitutes good science and innovative scientists. While this is probably correct to some extent, the procedure risks drawing the problem back to subjective measures.

To circumvent these difficulties, one could attempt to develop quantitative criteria to evaluate the metrics themselves. One possibility is to compare each proposed quantitative metric against independent evaluations of scientific output (which may not be quantitative or readily available for every scientist). For example, Hirsch (2005) attempted to validate the h-index by considering Nobel laureates and showing that they typically have a relatively large h-index. In general, one would like to observe that the metric correlates with expert evaluations across a broad range of individuals with different degrees of productivity. While this approach seems intuitive and straightforward, it suffers from bringing the problem back to subjective criteria.

An alternative may be to consider historical data: a good metric should provide predictive value. Imagine a set of scientists and their corresponding productivity metric values evaluated in the year 2011. We can ask how well we can predict the 2011 metric values from their corresponding values in the year 2000 or 1990. Under the assumption that the scientific productivity of a given cohort is approximately stationary, we expect that a useful metric would show a high degree of predictive power whereas a poor metric would not. Of course, many factors influence scientific productivity over time for a given individual, so these would be only correlative and probabilistic inferences. Still, the predictive value of a given metric could help establish a quantitative validation process.
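One simple instantiation of this validation scheme is to correlate a cohort's metric values at two time points; a higher correlation indicates greater predictive power. The cohort data below are hypothetical, and correlation is only one of several reasonable prediction criteria.

```python
def pearson(xs, ys):
    """Pearson correlation, written out to avoid external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical metric values for the same cohort of five scientists
metric_2000 = [2.0, 5.0, 3.0, 8.0, 1.0]
metric_2011 = [3.0, 9.0, 4.0, 15.0, 2.0]

predictive_power = pearson(metric_2000, metric_2011)
```

A rank correlation (e.g., Spearman's) may be preferable in practice, since citation-based metrics are heavily skewed and only the ordering of individuals usually matters for the decisions being made.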

Given the importance of evaluating scientific output, the potential for a plethora of metrics and the high-dimensional parameter landscape involved, it seems worth further examining and developing different and more sophisticated ways of validating these metrics. One could consider measures of scientific influence based on the spread of citations, the number of successful trainees, etc., and compare these to different proposed metrics. Ultimately, these are empirical questions that should be evaluated with the same rigor applied to other scientific endeavors.

We describe above nine criteria that, we hope, might lead to a better way of evaluating scientific output. The development of an evaluation algorithm and metric that capture these properties is not intended to eliminate other forms of peer evaluation. Subjective peer review is valuable (both pre-publication and post-publication) despite its multiple pitfalls and occasional failures, and a combination of different assessments will provide more information than any one alone.

A metric that captured the properties discussed above could provide many benefits. It might encourage better publishing practices by discouraging publication of a large number of uneventful reports or by reducing the emphasis on publishing in journals with high impact factors. By highlighting the scientific contributions of individuals within a field, it might restore a more appropriate premium: providing important results that other scientists feel compelled to read, think about, act upon, and cite. Placing emphasis on how often other scientists cite work may have other beneficial effects. A long CV with many least-publishable papers would quickly become visibly inferior to a shorter one with fewer but more influential papers. As mentioned above, there may be other benefits, including correcting authorship practices and enabling accurate evaluation across disciplines, and such a metric may even help students choose a laboratory or institution for graduate studies or postdoctoral research.

In addition to evaluating the current value of a productivity metric, it may be of interest to compute the rate of change in this metric. This might help highlight individuals, laboratories, departments, or institutions that have recently excelled. Rates should also be normalized and presented alongside distributions as discussed above for the metric itself.

Although we have cast the discussion in terms of a single metric, an index of output does not need to be scalar. No single value can capture the complexities involved in scientific output, and different aspects of an investigator’s contributions may require different indices. Additionally, evaluating a research group, a research center, or a department may be distinct from evaluating an individual and require somewhat different metrics (e.g., Hughes et al., 2010). Once suitable measures of output are available, however, productivity can be evaluated in terms of years of effort, number of people involved, research funding, or other relevant parameters.

No calculation can take the place of a thoughtful evaluation by competent peers, and even an index that is precise and accurate can be abused. Evaluators might blindly apply an index without actually assessing papers, recommendations, and other material. Evaluators might also ignore confidence intervals and try to make unjustified distinctions between the performance of individuals or programs with different, but statistically indistinguishable, metrics.

Given current technologies, the state of information science, and the wealth of data on authors, publications and citations, useful quantification of the scientific output of individuals should be attainable. While we have avoided the challenge of defining and validating specific algorithms, there is little doubt that a superior metric could be produced. Given how much is at stake in decisions about how to allocate research support, there is no excuse for failing to try to provide a measure that could end the misdirected use of impact factor, download statistics, or similar misleading criteria for judging the contributions of individuals. While the newly developed metrics may show some degree of correlation with existing ones, we have to develop indices that are question-specific (e.g., how do we evaluate a given scientist?) as opposed to using generic indices developed for other purposes (e.g., how do we evaluate a certain web site or journal?). Because it has the potential to greatly influence the efficiency of scientific research, we have a duty to reflect upon and eventually implement novel and rigorous ways of evaluating scientific output.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank John Hogenesch, Martin Hemberg, Marlene Cohen, and Douglas Ruff for comments and discussions.

  • Alberts B., Hanson B., Kelner K. L. (2008). Reviewing peer review. Science 321, 15. doi:10.1126/science.1162115
  • Amin M., Mabe M. (2000). Impact factors: use and abuse. Perspect. Publ. 1, 1–6
  • Bollen J., Van de Sompel H., Hagberg A., Chute R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE 4, e6022. doi:10.1371/journal.pone.0006022
  • Castelnuovo G. (2008). Ditching impact factors: time for the single researcher impact factor. BMJ 336, 789. doi:10.1136/bmj.39542.610000.3A
  • Della Sala S., Crawford J. R. (2006). Impact factor as we know it handicaps neuropsychology and neuropsychologists. Cortex 42, 1–2. doi:10.1016/S0010-9452(08)70314-9
  • Dimitrov J. D., Kaveri S. V., Bayry J. (2010). Metrics: journal’s impact factor skewed by a single paper. Nature 466, 179. doi:10.1038/466179b
  • Editors (2006). The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med. 3, e291. doi:10.1371/journal.pmed.0030291
  • Enserink M. (2009). Scientific publishing. Are you ready to become a number? Science 323, 1662–1664. doi:10.1126/science.323.5912.324a
  • Fersht A. (2009). The most influential journals: impact factor and Eigenfactor. Proc. Natl. Acad. Sci. U.S.A. 106, 6883–6884. doi:10.1073/pnas.0903307106
  • Garfield E. (2006). The history and meaning of the journal impact factor. JAMA 295, 90–93. doi:10.1001/jama.295.1.90
  • Greene D., Freyne J., Smyth B., Cunningham P. (2009). An Analysis of Current Trends in CBR Research Using Multi-View Clustering. Technical Report UCD-CSI-2009-03. Dublin: University College Dublin.
  • Griffith B. C., White H. D., Drott M. C., Saye J. D. (1986). Tests of methods for evaluating bibliographic databases: an analysis of the National Library of Medicine’s handling of literatures in the medical behavioral sciences. J. Am. Soc. Inf. Sci. 37, 261–270. doi:10.1002/asi.4630370414
  • Hecht F., Hecht B. K., Sandberg A. A. (1998). The journal “impact factor”: a misnamed, misleading, misused measure. Cancer Genet. Cytogenet. 104, 77–81. doi:10.1016/S0165-4608(97)00459-7
  • Hirsch J. E. (2005). An index to quantify an individual’s scientific research output. Proc. Natl. Acad. Sci. U.S.A. 102, 16569–16572. doi:10.1073/pnas.0507655102
  • Hughes M. E., Peeler J., Hogenesch J. B. (2010). Network dynamics to evaluate performance of an academic institution. Sci. Transl. Med. 2, 53ps49. doi:10.1126/scitranslmed.3001580
  • Jemec G. B. (2001). Impact factor to assess academic output. Lancet 358, 1373. doi:10.1016/S0140-6736(01)06443-1
  • Mandavilli A. (2011). Peer review: trial by Twitter. Nature 469, 286–287. doi:10.1038/469286a
  • Petsko G. A. (2008). Having an impact (factor). Genome Biol. 9, 107. doi:10.1186/gb-2008-9-9-110
  • Refinetti R. (2011). Publish and flourish. Science 331, 29. doi:10.1126/science.331.6013.29-a
  • Sala S. D., Brooks J. (2008). Multi-authors’ self-citation: a further impact factor bias? Cortex 44, 1139–1145. doi:10.1016/j.cortex.2007.12.006
  • Siegel D., Baveye P. (2010). Battling the paper glut. Science 329, 1466. doi:10.1126/science.329.5998.1466-b
  • Simons K. (2008). The misused impact factor. Science 322, 165. doi:10.1126/science.1165316
  • Skorka P. (2003). How do impact factors relate to the real world? Nature 425, 661. doi:10.1038/425661c

How to formulate strong outputs

By Thomas Winderl · September 8, 2020


Outputs are arguably not the most important level of the results chain. It is outcomes that should be the focus of a good plan. Ultimately, that’s what counts.

However, outputs still matter.

Just to be clear: outputs refer to changes in skills or abilities, or to the availability of new products and services. In plain lingo: outputs are what we plan to do to achieve a result.

Ok, let’s be a bit more precise: outputs usually refer to a group of people or an organization that has improved capacities, abilities, skills, knowledge, systems, or policies, or to something that is built, created, or repaired as a direct result of the support provided. That’s a definition we can work with.

Language is important

When describing what you do, focus on the  change , not the  process . Language matters.

Don’t say: ‘Local organisations will support young women and men in becoming community leaders.’ This emphasises the process rather than the change.

Instead, emphasise what will be different as a result of your support. Say: ‘Young women and men have the skills and motivation to be community leaders.’

Make it time-bound

An organization’s support is typically not open-ended. You usually expect to wrap up what you do at a certain time, so emphasise that your activities are carried out within a certain time frame. It is always helpful to include a date in the formulation, for example: ‘By January 2019, …’.

A formula for describing what you do

To ensure that you accurately describe what you do, use a simple formula: describe the change for a specific group or organization, expressed as a result rather than a process, within a set time frame.


Want to learn more about how to plan for results? Check out our detailed video course on Practical Results Based Management on Udemy.


Research Output



Definition overview


An output is an outcome of research and can take many forms. Research Outputs must meet the definition of Research.

Source: Australian Research Council Excellence in Research for Australia 2018 Submission Guidelines.

Approved Date

3/6/2024

Effective Date

3/6/2024

Record No

15/2329PL




How to Write a Research Statement

Last Updated: April 25, 2024

This article was co-authored by Christopher Taylor, PhD. Christopher Taylor is an Adjunct Assistant Professor of English at Austin Community College in Texas. He received his PhD in English Literature and Medieval Studies from the University of Texas at Austin in 2014.

The research statement is a very common component of job applications in academia. The statement provides a summary of your research experience, interests, and agenda for reviewers to use to assess your candidacy for a position. Because the research statement introduces you as a researcher to the people reviewing your job application, it’s important to make the statement as impressive as possible. After you’ve planned out what you want to say, all you have to do is write your research statement with the right structure, style, and formatting!


Planning Your Research Statement

Step 1 Ask yourself what the major themes or questions in your research are.

  • For example, some of the major themes of your research might be slavery and race in the 18th century, the efficacy of cancer treatments, or the reproductive cycles of different species of crab.
  • You may have several small questions that guide specific aspects of your research. Write all of these questions out, then see if you can formulate a broader question that encapsulates all of these smaller questions.

Step 2 Identify why your research is important.

  • For example, if your work is on x-ray technology, describe how your research has filled any knowledge gaps in your field, as well as how it could be applied to x-ray machines in hospitals.
  • It’s important to be able to articulate why your research should matter to people who don’t study what you study to generate interest in your research outside your field. This is very helpful when you go to apply for grants for future research.

Step 3 Describe what your future research interests are.

  • Explain why these are the things you want to research next. Do your best to link your prior research to what you hope to study in the future. This will help give your reviewer a deeper sense of what motivates your research and why it matters.

Step 4 Think of examples of challenges or problems you’ve solved.

  • For example, if your research was historical and the documents you needed to answer your question didn’t exist, describe how you managed to pursue your research agenda using other types of documents.

Step 5 List the relevant skills you can use at the institution you’re applying to.

  • Some skills you might be able to highlight include experience working with digital archives, knowledge of a foreign language, or the ability to work collaboratively. When you're describing your skills, use specific, action-oriented words, rather than just personality traits. For example, you might write "speak Spanish" or "manage digital files."
  • Don’t be modest about describing your skills. You want your research statement to impress whoever is reading it.

Structuring and Writing the Statement

Step 1 Put an executive summary in the first section.

  • Because this section summarizes the rest of your research statement, you may want to write the executive summary after you’ve written the other sections first.
  • Write your executive summary so that if the reviewer chooses to only read this section instead of your whole statement, they will still learn everything they need to know about you as an applicant.
  • Make sure that you only include factual information that you can prove or demonstrate. Don't embellish or editorialize your experience to make it seem like it's more than it is.

Step 2 Describe your graduate research in the second section.

  • If you received a postdoctoral fellowship, describe your postdoc research in this section as well.
  • If at all possible, include research in this section that goes beyond just your thesis or dissertation. Your application will be much stronger if reviewers see you as a researcher in a more general sense than as just a student.

Step 3 Discuss your current research projects in the third section.

  • Again, as with the section on your graduate research, be sure to include a description of why this research matters and what relevant skills you bring to bear on it.
  • If you’re still in graduate school, you can omit this section.

Step 4 Write about your future research interests in the fourth section.

  • Be realistic in describing your future research projects. Don’t describe potential projects or interests that are extremely different from your current projects. If all of your research to this point has been on the American Civil War, future research projects in microbiology will sound very far-fetched.

Step 5 Acknowledge how your work complements others’ research.

  • For example, add a sentence that says “Dr. Jameson’s work on the study of slavery in colonial Georgia has served as an inspiration for my own work on slavery in South Carolina. I would welcome the opportunity to be able to collaborate with her on future research projects.”

Step 6 Discuss potential funding partners in your research statement.

  • For example, if your research focuses on the history of Philadelphia, add a sentence to the paragraph on your future research projects that says, “I believe based on my work that I would be a very strong candidate to receive a Balch Fellowship from the Historical Society of Pennsylvania.”
  • If you’ve received funding for your research in the past, mention this as well.

Step 7 Aim to keep your research statement to about 2 pages.

  • Typically, your research statement should be about 1-2 pages long if you're applying for a humanities or social sciences position. For a position in psychology or the hard sciences, your research statement may be 3-4 pages long.
  • Although you may think that having a longer research statement makes you seem more impressive, it’s more important that the reviewer actually read the statement. If it seems too long, they may just skip it, which will hurt your application.

Formatting and Editing

Step 1 Maintain a polite and formal tone throughout the statement.

  • For example, instead of saying, “This part of my research was super hard,” say, “I found this obstacle to be particularly challenging.”

Step 2 Avoid using technical jargon when writing the statement.

  • For example, if your research is primarily in anthropology, refrain from using phrases like “Gini coefficient” or “moiety.” Only use phrases that someone in a different field would probably be familiar with, such as “cultural construct,” “egalitarian,” or “social division.”
  • If you have trusted friends or colleagues in fields other than your own, ask them to read your statement for you to make sure you don’t use any words or concepts that they can’t understand.

Step 3 Write in present tense, except when you’re describing your past work.

  • For example, when describing your dissertation, say, “I hypothesized that…” When describing your future research projects, say, “I intend to…” or “My aim is to research…”

Step 4 Use single spacing and 11- or 12-point font.

  • At the same time, don’t make your font too big. If you write your research statement in a font larger than 12, you run the risk of appearing unprofessional.

Step 5 Use section headings to organize your statement.

  • For instance, if you completed a postdoc, use subheadings in the section on previous research experience to delineate the research you did in graduate school and the research you did during your fellowship.

Step 6 Proofread your research statement thoroughly before submitting it.


  • ↑ https://owl.purdue.edu/owl/general_writing/graduate_school_applications/writing_a_research_statement.html
  • ↑ https://www.cmu.edu/student-success/other-resources/handouts/comm-supp-pdfs/writing-research-statement.pdf
  • ↑ https://postdocs.cornell.edu/research-statement
  • ↑ https://gradschool.cornell.edu/academic-progress/pathways-to-success/prepare-for-your-career/take-action/research-statement/
  • ↑ https://libguides.usc.edu/writingguide/executivesummary
  • ↑ https://www.niu.edu/writingtutorial/style/formal-and-informal-style.shtml
  • ↑ https://www.unr.edu/writing-speaking-center/student-resources/writing-speaking-resources/editing-and-proofreading-techniques


What's the best measure of research output?

Nature Index, 21 March 2016

[Figure: An artist's interpretation of a paper published in Science that compared human and machine learning. Credit: Danqing Wang]

When ranking countries based on their output of high-quality research, weighted fractional count (WFC) is often used as the Nature Index's primary metric. And for good reason. WFC reflects the size of the contribution a country's researchers have made to every study published in the 68 top-tier journals included in the index. This measure also takes into account the higher proportion of astronomy papers in the index. It seems astronomers love to write papers. The index's unweighted measure of contribution is fractional count (FC).

For every paper included in the index, the FC and WFC are split among authors based on their affiliation. Take a recent paper published in Science, which created a computer model that captures humans' unique ability to learn, and had three authors from three different universities, two in the USA and one in Canada. For this paper, each affiliation received an FC of 0.33, and because it wasn't published in an astronomy journal, they received the same WFC. Adding the WFC of a country's institutions presents a picture of that nation's performance over a designated period of time.
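The equal-split arithmetic described above can be sketched in a few lines of code. This is only an illustration, not the Nature Index's actual implementation: the institution names are placeholders, and the astronomy down-weighting is modelled as a generic multiplier whose exact value the article does not give.

```python
# Sketch of splitting one paper's count of 1.0 across author affiliations.
# FC uses weight=1.0; WFC would apply a down-weighting factor (value
# unspecified here) when the paper appears in an astronomy journal.

def split_counts(affiliations, weight=1.0):
    """Divide a paper's count equally among the given affiliations."""
    share = 1.0 / len(affiliations)
    return {name: share * weight for name in affiliations}

# The Science paper above: three authors at three institutions
# (placeholder names), two in the USA and one in Canada.
fc = split_counts(["US University A", "US University B", "Canadian University"])
# Each affiliation receives an FC of roughly 0.33; since the paper was
# not in an astronomy journal, its WFC split is identical.
```

Summing these per-institution shares across all of a country's papers is what produces the national FC and WFC totals the article discusses.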

In a recent post, we published a graph of the top and bottom 10 countries ranked by change in their WFC. It revealed a compelling narrative. Since 2012 China's contribution to high-quality research has soared, while the traditional stronghold, the United States, appears to have lost its mojo. (Although it's worth noting that the USA's total WFC is miles ahead of anyone else, China included.)

But what does it mean when a country's WFC drops? Does that suggest its research performance is slipping?

Not necessarily. When assessing the output of a country's top-quality research, it is prudent to also consider article count - the total number of studies that a country's researchers have contributed to, regardless of the size of that contribution.

Consider this next graph. It shows the change in article count for the countries in the graph above. While the article counts of the United States and Japan followed a downward trajectory similar to their WFCs, the article counts of all the other countries grew between 2012 and 2015 - including the eight countries that experienced a drop in their WFC.

As article count isn't a weighted metric, it shouldn't be directly compared to WFC. When considering the change in a country's total number of papers versus the contribution it made to those papers, it is best to compare AC with FC.

An interesting trend emerges when a country's article count goes up, but its fractional count dwindles. It suggests that while the country's researchers have contributed to a larger total number of studies, the proportion of their contribution has become smaller.

The reverse can be observed in countries with an increase in their FC but a fall in their AC. In those countries, researchers contributed to fewer studies but received more of the credit for the ones that were published.

These two scenarios at least partly reflect patterns of collaboration, and their significance will depend on the broader research context. What is certain, however, is that none of WFC, FC, or AC alone can reveal the state of a country's high-quality natural science output. The three metrics should be considered together.
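The interpretive logic of the last few paragraphs can be condensed into a small rule-of-thumb function. This is my own sketch, not a Nature Index tool; the return strings are paraphrases of the article's readings of each AC-versus-FC pattern.

```python
def read_trend(ac_change, fc_change):
    """Give a rough reading of article count (AC) vs fractional count (FC) trends."""
    if ac_change > 0 and fc_change < 0:
        # More papers overall, but a smaller share of the credit for each.
        return "broader collaboration, smaller individual contribution"
    if ac_change < 0 and fc_change > 0:
        # Fewer papers, but more of the credit for those published.
        return "fewer papers, larger individual contribution"
    # Any other pattern gives no single-metric conclusion.
    return "consider AC, FC and WFC together"
```

For example, a country whose AC rose while its FC fell would map to the first case: its researchers appeared on more studies but contributed a smaller proportion of each.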



How to Create a Structured Research Paper Outline | Example

Published on August 7, 2022 by Courtney Gahan. Revised on August 15, 2023.


A research paper outline is a useful tool to aid in the writing process, providing a structure to follow with all information to be included in the paper clearly organized.

A quality outline can make writing your research paper more efficient by helping to:

  • Organize your thoughts
  • Understand the flow of information and how ideas are related
  • Ensure nothing is forgotten

A research paper outline can also give your teacher an early idea of the final product.


Table of contents

  • Research paper outline example
  • How to write a research paper outline
  • Formatting your research paper outline
  • Language in research paper outlines

Example outline: Measles and the vaccination debate

  • Definition of measles
  • Rise in cases in recent years in places the disease was previously eliminated or had very low rates of infection
  • Figures: Number of cases per year on average, number in recent years. Relate to immunization
  • Symptoms and timeframes of disease
  • Risk of fatality, including statistics
  • How measles is spread
  • Immunization procedures in different regions
  • Different regions, focusing on the arguments from those against immunization
  • Immunization figures in affected regions
  • High number of cases in non-immunizing regions
  • Illnesses that can result from measles virus
  • Fatal cases of other illnesses after patient contracted measles
  • Summary of arguments of different groups
  • Summary of figures and relationship with recent immunization debate
  • Which side of the argument appears to be correct?


Follow these steps to start your research paper outline:

  • Decide on the subject of the paper
  • Write down all the ideas you want to include or discuss
  • Organize related ideas into sub-groups
  • Arrange your ideas into a hierarchy: What should the reader learn first? What is most important? Which idea will help end your paper most effectively?
  • Create headings and subheadings that are effective
  • Format the outline in either alphanumeric, full-sentence or decimal format

There are three different kinds of research paper outline: alphanumeric, full-sentence and decimal outlines. The differences relate to formatting and style of writing.

An alphanumeric outline is the most commonly used format. It uses Roman numerals, capitalized letters, Arabic numerals, and lowercase letters, in that order, to organize the flow of information. Text is written in short notes rather than full sentences.


Essentially the same as the alphanumeric outline, but with the text written in full sentences rather than short points.


A decimal outline is similar in format to the alphanumeric outline, but with a different numbering system: 1, 1.1, 1.2, etc. Text is written as short notes rather than full sentences.

  • 1.1.1 Sub-point of first point
  • 1.1.2 Sub-point of first point
  • 1.2 Second point
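The decimal numbering scheme is mechanical enough to generate automatically. As a quick illustration (a hypothetical helper, not part of any outlining tool), this sketch numbers a nested list of headings in the 1, 1.1, 1.2 style shown above:

```python
def decimal_outline(items, prefix=""):
    """Render a nested outline of (heading, children) pairs with decimal numbering."""
    lines = []
    for i, (heading, children) in enumerate(items, start=1):
        number = f"{prefix}.{i}" if prefix else f"{i}"
        lines.append(f"{number} {heading}")
        # Recurse so each child inherits its parent's number as a prefix.
        lines.extend(decimal_outline(children, number))
    return lines

outline = [
    ("First point", [("Sub-point of first point", []),
                     ("Sub-point of first point", [])]),
    ("Second point", []),
]
print("\n".join(decimal_outline(outline)))
# 1 First point
# 1.1 Sub-point of first point
# 1.2 Sub-point of first point
# 2 Second point
```

The same recursive structure would work for the alphanumeric format; only the numbering function at each depth would change.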

To write an effective research paper outline, it is important to pay attention to language. This is especially important if it is one you will show to your teacher or be assessed on.

There are four main considerations: parallelism, coordination, subordination and division.

Parallelism: Be consistent with grammatical form

Parallel structure or parallelism is the repetition of a particular grammatical form within a sentence, or in this case, between points and sub-points. This simply means that if the first point begins with a verb, the sub-point should also begin with a verb.

Example of parallelism:

  • Include different regions, focusing on the different arguments from those against immunization

Coordination: Be aware of each point’s weight

Your chosen subheadings should hold the same significance as each other, as should all first sub-points, secondary sub-points, and so on.

Example of coordination:

  • Include immunization figures in affected regions
  • Illnesses that can result from the measles virus

Subordination: Work from general to specific

Subordination refers to the separation of general points from specific. Your main headings should be quite general, and each level of sub-point should become more specific.

Example of subordination:

Division: Break information into sub-points

Your headings should be divided into two or more subsections. There is no limit to how many subsections you can include under each heading, but keep in mind that the information will be structured into a paragraph during the writing stage, so you should not go overboard with the number of sub-points.

Ready to start writing or looking for guidance on a different step in the process? Read our step-by-step guide on how to write a research paper.



Frostpunk 2 – How to get and use the Deep Melting Drill

By Gavin Mackenzie

The Deep Melting Drill is a building in Frostpunk 2 that allows you to access deep resource deposits. There are several prerequisites for building a Deep Melting Drill, and depending on your choices in the Story, you might not be able to build one.

One of the hardest things about Frostpunk 2 is that you never feel like you have enough resources, and everything always feels like it’s about to run out. Deep deposits don’t eliminate these problems completely, but they do help. Technically, they are finite, but they contain millions of units of resources, so you can basically treat them as infinite. But to access a deep deposit, you still need a Deep Melting Drill.

How to unlock the Deep Melting Drill in Frostpunk 2

Researching the Deep Melting Drill

To build the Deep Melting Drill in Frostpunk 2, you first have to research Generator Upgrade one and then Melting Deep Deposits in the Idea Tree. If you’re playing Story mode, you won’t be able to do this until you reach Chapter Two and swear to defeat the frost. If you swear to embrace the frost, then you can’t research Melting Deep Deposits and won’t be able to access deep deposits for the rest of the Story campaign. The point of the “defeat/embrace the frost” choice is to force you to choose between access to unlimited resources at your settlements and unlimited resources in the Frostland. You can’t have both in Story mode, but you can in Utopia Builder mode.

How to build a Deep Melting Drill in Frostpunk 2

Deep Melting Drill tooltip

Once you’ve researched Melting Deep Deposits, you can build a Deep Melting Drill, but there are still a few more conditions to fulfill and steps to follow. The first thing you need is a deep deposit. These are marked by the same icons as regular deposits, only with an additional infinity symbol. If you haven’t already, Frostbreak your way to the deep deposit and build a corresponding district on top of it. Now, you can build a Deep Melting Drill in that district. It costs 80 Heatstamps, 40 Prefabs, and one Core, requires 300 Workforce to run and 60 Materials to maintain, and increases Squalor.

Bear in mind that the advantage of using a Deep Melting Drill is that deep deposits don’t run out, but they don’t supply resources at a higher rate than regular deposits, so your stockpiles can still run out if you’re using more resources than you’re extracting. In other words, deep deposits are useful, but they’re not a miracle cure for all your supply and demand woes.

