AIMOSS2022 is the fourth annual conference for the Association for Interdisciplinary Metaresearch & Open Science.

The purpose of AIMOS is to make the research process more trustworthy and efficient and to promote the study of how research is done and how it can be improved. We see our annual conference as an important collaborative space that advances this purpose.

AIMOS2022 will bring together researchers from across disciplines to talk about how research is done, and how we can improve it.

The full conference program is now live!
Location
Redmond Barry building (115), University of Melbourne
Date & Time
8:30 AM (registration opens)
28-30 November 2022
Format

In person, with virtual attendance supported
Plenary Speakers
AIMOS2022 will include a mix of plenary lectures and mini-note panels
Anne Scheel

What (psychology) researchers really want to know
Anne is an incoming assistant professor in methodology and statistics at Utrecht University. Anne completed her PhD at TU Eindhoven and worked as a postdoctoral researcher at VU Amsterdam and at the Centre for Science and Technology Studies, Leiden. With a background in psychology and meta-psychology, Anne has studied reforms of research and publication practices in psychology as part of the discipline's effort to recover from the replication crisis and improve the reliability and efficiency of published research.
Colin Camerer

Thoughts on replication & reproducibility in social sciences
Colin is a professor of behavioral economics at Caltech. Colin's research imports methods from psychology and neuroscience to improve economics. Colin's science team is interested in a range of decisions, games, and markets and uses eyetracking, lesion patients, EEG, fMRI, wearable sensors, machine learning, and animal behavior. Two major application areas are financial behavior and strategic thinking in games.
Patrick Forscher

Generalisability of research claims in the Global South
Patrick is a metaresearcher whose work is focused on making behavioral science more robust, useful, and fair. Patrick works as a Research Lead for the Busara Center for Behavioral Economics in Kenya, a non-profit research and advisory centre that uses behavioral science in service of alleviating poverty, where he manages Busara's internal research agenda focused on Culture, Research Ethics, and MEthods (CREME). Prior to Busara, Patrick served as a research scientist at Université Grenoble Alpes, Funding Lead at the Psychological Science Accelerator, and an Assistant Professor at the University of Arkansas.
Joel Mumo Wambua
Generalisability of research claims in the Global South
Joel Wambua is a Research Specialist at Busara Center for Behavioral Economics. At Busara, he leads an agenda on ethical research. The main objective of this agenda is to conduct empirical research into the preferences of research participants, close feedback loops, and strengthen participant voices. Joel's research interests lie in applied behavioral science, specifically on questions relating to economic development, public policy, reproducibility, and research ethics. Joel has a background in Economics and Sociology.
program - day 1, mon 28 Nov
Zoom webinar link: https://unimelb.zoom.us/j/81862772512 (Password required to log in: check email sent to attendees)

Click here for a Google Doc version of the program.

All sessions (unless otherwise specified) will be held in the Lyle theatre, ground floor, Redmond Barry building (115), Parkville campus, University of Melbourne.
9:30 AM
Plenary - Patrick Forscher & Joel Mumo Wambua, Understanding and Addressing the Generalizability Problem in the Behavioral Sciences
Patrick and Joel work for the Busara Center for Behavioral Economics in Kenya, a non-profit research and advisory centre that uses behavioral science in service of alleviating poverty. Patrick is a research lead who manages Busara's internal research agenda focused on Culture, Research Ethics, and MEthods (CREME). Joel is a research officer whose work includes design and monitoring of research projects, and managing the primary data analysis processes.
10:30 AM
Break ☕️
11:00 AM
Lightning talks. Theme: Quality of research
Speakers:
> Lee Jones, Lessons from post-publication statistical reviews
> Bermond Scoggins, ‘Trust Us’: Open Data and Preregistration in Political Science and International Relations
> Richard McGee, Retractions in paediatric endocrinology: are we failing to regulate the literature?

Chair: Bob Reed
11:30 AM
Debate: Open Peer Review
Join us for a lively debate, with interactive audience participation, exploring some of the challenges as peer review and publishing become more open.

Speakers:
> David Vaux AO, Honorary Fellow, WEHI
> Ginny Barbour, Director, Open Access Australasia, Incoming EiC Medical Journal of Australia
> Simine Vazire, Professor of Psychology, University of Melbourne

Chair: Daniel Hamilton
12:00 PM
Lightning talks. Theme: Open Science policies
Speakers:
> Malgorzata Lagisz, What research awards have to do with Open Science?
> Alejandra Manco, Open Science Policies seen from the perspective of researchers' communities
> Aidan Tan, Prevalence and characteristics of data sharing policies across the health research life cycle: funders, ethics committees, trial registries, journals, and data repositories
> Deborah Apthorp, The Data Badge Project - do badges encourage computational reproducibility?

Chair: Kathy Zeiler
12:30 PM
Lunch!
Lunch will be in the foyer.
1:30 - 3:00 PM
Concurrent sessions
Session 1: Discussion group, Pursuing truth in law and science - Jason Chin, Tess Neal, Simine Vazire, Kristy Martire & Alex Holcombe
Room: Oscar Oeser room (1120), Level 11, Redmond Barry building
Zoom link for this session.
Both law and science pursue truth and involve formalised exercises in organised distrust. These similarities offer fertile ground for considering how the pursuit of factual accuracy in law and science is impaired by similar forces. For example, in both science and law, a lack of transparency prevents organised distrust from working well. In science: researchers can run many statistical tests and select the most favourable results to report; studies that result in null findings are hard to publish; and errors in the published record are challenging to address. In law: it’s often hard to know what tests an expert performed and didn’t report; swaths of the investigatory process are unreported by police; and multiple experts might be consulted by an adversarial party before one is selected for testimony. These and other similarities offer crucial opportunities for law and science to learn from each other’s innovations. In this group, we will discuss these intersections between law and science, with the ultimate aim of producing a research agenda.
Session 2: Discussion group, Strengthening digital research literacy through interdisciplinary collaboration - Maria del Mar Quiroga, Nic Geard, Simon Mutch, Daniel Russo-Batterham, and Kim Doyle
Room: Alexander J Wearing room (1123), Level 11, Redmond Barry building
A recent report by RMIT and Deloitte estimates that 87% of jobs require digital skills. In the world of research, the need for digital literacy is even higher, and the skills required are much more sophisticated (for example writing analysis code, using version control, and developing research software). Despite this, most degrees don’t explicitly teach these advanced digital skills before students embark on research. This training gap raises several challenges for research institutions: How can they improve digital literacies to enhance research quality and reproducibility? How can they retain digitally skilled researchers, keep up with the latest technologies across all fields, and ensure that their digital infrastructure meets the needs of their researchers? How can they ensure that research practices meet minimum standards of management, governance, ethics, and integrity?
3:00 - 3:45 PM
Concurrent sessions
Session 3: Working paper, Can the Replication Rate Tell Us About Publication Bias? - Patrick Vu
Room: Lyle theatre (ground floor)
Selective publication is among the most-cited reasons for widespread replication failures. By contrast, I show in a simple model of selective publication that the replication rate is unresponsive to the suppression of insignificant results in the publication process. I then show that the expected replication rate falls below its intended target owing to issues with common power calculations, even in the absence of other factors such as p-hacking or heterogeneous treatment effects. To evaluate the importance of this theoretical result, I estimate an empirical model to produce out-of-sample predictions of the replication rate for large-scale replication studies. Predictions are almost identical to observed replication rates in experimental economics and social science, which suggests that issues with power are sufficient to explain the observed replication rates. In psychology, the model explains two-thirds of the gap between the replication rate and its intended target.
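For readers unfamiliar with the power-calculation issue the abstract refers to, the short simulation below (an illustrative sketch written for this program, not code or parameter values from Vu's paper) shows one way it can arise: when only significant results are published, published effect estimates are inflated, so replications powered at 90% against those published estimates have less than 90% power against the true effect, and the replication rate falls below its nominal target even with no p-hacking and no heterogeneity. The true effect size, original sample size, and power target below are arbitrary assumptions.

```python
# Illustrative sketch only (not code from Vu's paper): selective publication
# inflates published effect estimates, so replications powered at 90% against
# those estimates fall short of 90% power against the true effect.
# All parameter values are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_orig, alpha, target_power = 0.4, 30, 0.05, 0.90
n_sims = 20_000

# Original one-sample studies of a standardised effect (sd = 1)
orig_means = rng.normal(true_d, 1 / np.sqrt(n_orig), n_sims)
z_crit = stats.norm.ppf(1 - alpha / 2)
published = orig_means * np.sqrt(n_orig) > z_crit   # selective publication filter

# Replication sample size chosen to give 90% power for the *published* estimate
d_hat = orig_means[published]
n_rep = np.ceil(((z_crit + stats.norm.ppf(target_power)) / d_hat) ** 2).astype(int)

# Replications draw from the same true effect; count significant same-direction results
rep_means = rng.normal(true_d, 1 / np.sqrt(n_rep))
replication_rate = (rep_means * np.sqrt(n_rep) > z_crit).mean()

print(f"original studies published: {published.mean():.1%}")
print(f"replication rate: {replication_rate:.1%} (nominal target {target_power:.0%})")
```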

Chair/discussant: Alex Holcombe
3:30 PM
Lightning talks.
Room: Latham theatre, ground floor, Redmond Barry building

Click here for Zoom.

Speakers:
> Jill Jacobson (& David J. Hauser), Is participant non-naïveté associated with higher replication rates?
> Keyana Zahiri (& Adrienne Mueller), Evaluating Study Design Rigor in Preclinical Cardiovascular Research: A Replication Study
> Janaynne Carvalho do Amaral, Patients in peer review: the Research Involvement and Engagement journal initiative
> Dragan Okanovic, Creating scientific knowledge graphs with Unfold Research
> Kathleen Schmidt, Examining the Generalizability of Psychological Effects

Chair: Matt Page
4:00 PM
Mini-note panel: Correcting the record
Science is self-correcting, right? In this session, we'll hear from four speakers who will tell us about whether and how science self-corrects, and how this could be done better.

Speakers:
> Jana Christopher (FEBS/Springer Nature), Image integrity - correcting the published record
This talk outlines the process of detecting and retracting fraudulent research papers, and specifically paper mill material. It describes real-world challenges of correcting the literature, and suggests ways to address them.

> John Loadsman (NSW Health/University of Sydney), The experience and perspective of one journal editor
Journal editors have been justifiably criticised for failing to maintain the integrity of the scientific record, both prior to and after publication. This talk will highlight some of the experiences of one editor determined to make a better contribution, both as gatekeeper and whistleblower.

> Ben Mol (Monash University), Now we have the data but are the data true
Modern medicine is based on randomised clinical trials (RCTs). While the number of RCTs is gradually increasing, hardly anyone asks whether these RCTs are trustworthy. In recent years I have found that about 30% of the RCTs in Women's Health are fabricated. I will share my experiences in this endeavour. Medical science is the Olympics without doping checks.

> Lisa Parker (University of Sydney), How to be a research detective: Warning signs of research fraud

Chair: Jennifer Byrne
5:30 PM
ECR networking event
All students & early career researchers welcome! There will be some food, drinks & a chance to discuss all things metaresearch.
program - day 2, tue 29 Nov
Zoom webinar link: https://unimelb.zoom.us/j/81862772512 (Password required to log in: check email sent to attendees)

Click here for a Google Doc version of the program.

All sessions (unless otherwise specified) will be held in the Lyle theatre, ground floor, Redmond Barry building (115), Parkville campus, University of Melbourne.
9:00 AM *note earlier start*
Plenary - Colin Camerer, Thoughts on replication & reproducibility in social sciences 
Colin is a professor of behavioural economics at Caltech. Colin's research imports methods from psychology and neuroscience to improve economics. Colin's science team is interested in a range of decisions, games, and markets and uses eye-tracking, lesion patients, EEG, fMRI, wearable sensors, machine learning, and animal behaviour. Two major application areas are financial behaviour and strategic thinking in games.

Colin's talk will summarize his opinions about the reproducibility reboot in social science and what simple things can be done. This will include some previous results, predictability of study reproducibility, the role of prediction markets, and why funding agencies & journals are critical gatekeepers.

Chair: Kathy Zeiler
10:00 AM
Mini-note panel: Many Analysts Eco Evo
To what extent does variability in the decisions of individual data analysts drive variability in results? In this session, we'll hear about a project from the field of ecology and evolution that aims to answer this very question.

Although variation in effect sizes and predicted values among studies of similar phenomena is inevitable, such variation far exceeds what might be produced by sampling error. One possible explanation for this heterogeneity in results is differences in the decisions researchers make regarding statistical analyses. To explore the role of these analytical decisions in driving heterogeneity in results, we posted an open invitation to researchers in ecology and evolutionary biology to analyze either of two unpublished data sets to answer a corresponding pre-determined question. The volunteer analysts who responded to our invitation submitted 132 answers to the question from one data set and 80 answers to the question from the other data set. For both data sets, the answers varied substantially, although the pattern of variation among answers differed between the two. We hypothesize about the factors that may have driven the difference in distribution of effect sizes between the two data sets, and we discuss potential implications for the future of data analyses in ecology and evolutionary biology given the substantial variability driven by divergence in analytical decisions.

Speakers:
> Hannah Fraser (University of Melbourne), Heterogeneity in results among studies in ecology and evolutionary biology – the big picture
Hannah will discuss the sources of heterogeneity in ecological research, introduce the Many Eco Evo Analysts project, and outline our methods.

> Elliot Gould (University of Melbourne), Heterogeneity in results among studies in ecology and evolutionary biology – a ‘many analysts’ study
Elliot will reveal the results of the project and describe how they shed light on the sources of heterogeneity in our study.

> Tim Parker (Whitman College), Heterogeneity in results among studies in ecology and evolutionary biology – implications for the future
Tim will discuss the ramifications of our results for future research and describe some ways that these could be addressed.

Chair: Losia Lagisz
11:30 AM
Break ☕️
Tea & coffee will be available in the foyer all day.
12:00 PM
Mini-note panel: Credibility underlying decision-making
Policy makers and others often use empirical findings to guide their decisions. In this session, scholars from the fields of medicine, law and ecology will discuss research projects that assess how well empirical work is utilized in decision making and how we might address challenges.

Speakers:
> Janet Freilich (Fordham Law School), Credibility of Science in Patents
Scientific details in patents are sometimes viewed as particularly reliable. Unfortunately, this is not the case. Science in patents is often fictional or irreplicable and examiners have little ability to verify information submitted by applicants.

> Anisa Rowhani-Farid (School of Pharmacy, University of Maryland), Clinical trial integrity and transparency
Clinical trials are the gold standard of research providing evidence on the safety and efficacy of drugs and interventions. This presentation will explore the importance of clinical trials accurately reporting their study design, endpoints, and results.

> Barbara Mintzes (School of Pharmacy, University of Sydney), Conflicts of interest and bias in medical research
Pharmaceutical and device industry funding of medical research is widespread, influencing which research questions are addressed and how studies are conducted and reported, with implications for clinical care. A conflict exists between scientific objectivity and the needs of manufacturers to recoup development costs and gain market share, and both the results and conclusions of industry-sponsored research tend to be more favourable to the tested product than non-industry sponsored research. I will discuss the mechanisms contributing to this bias as well as potential solutions.

> Mark Burgman (Imperial College London), Statistics wars and their implications for journal editors
The debates continue about how scientific journals should handle issues such as questionable research practices, p-values, Bayesian inference and related matters. This presentation examines the implications of this debate for a journal directly linked to conservation decision-making, with a special focus on the potential for false positive results to affect human impacts on the environment.

Chair: Jason Chin
1:30 PM
Lunch!
Served in the foyer.
2:00 - 3:00 PM
Concurrent sessions
Session 1: Workshop, Beyond the value-free ideal of science - Rachael Brown
Room: Oscar Oeser room (1120), Level 11, Redmond Barry building
From Merton's Norm of Disinterestedness to Bacon's Idol of the Tribe, the idea that our best science is "objective" or "value-free" is pervasive. Whilst there is a lot of intuitive appeal to this idea, avoiding values in science is far less straightforward than public discourse suggests and most philosophers of science accept that the influence of values in science (of at least some forms) is unavoidable. Why do they think this? What does it mean for scientific practice? In this workshop, I will use a combination of dialogue and group exercises to unpack the philosophical discussion around values in science and explore together what we can do to improve our scientific practices in the face of the inevitable influence of values in science.
Session 2: Hackathon, Assessing criteria of “best paper” awards across disciplines - Malgorzata Lagisz & Yefeng Yang
Room: Alexander J Wearing room (1123), Level 11, Redmond Barry building
Many journals offer “best paper” awards to early career researchers recognising outstanding contributions. Such awards are an opportunity to highlight best research practices. At the same time, they may propagate existing biases in who and what is deemed worthy of recognition. The aim of this hackathon is to evaluate criteria and procedures of a sample of “best paper” awards across disciplines. We will collect data on eligibility rules and transparency of the assessment criteria. We will collect names of the past winners to quantify potential historical gender biases. We will also code which award criteria acknowledge good Open Science practices as something that is valued. As such, we will reveal which journals recognise and reward Open Science practices, fulfilling journals’ commitment to promoting robust and transparent science. Results of this survey may help improve existing awards across disciplines, promoting equity, diversity, and inclusivity in academia. To enable broad participation in this hackathon session, we plan to work fully online, using Google Sheets and Forms. Participants’ contributions to basic data collection will be acknowledged in the CRediT statement in the resulting outputs, with a potential to earn co-authorship if further significant contributions are made at the later stages of this project.

We are asking potential participants to fill in this form so we can get in touch and also learn about your preferences with respect to contributing to this project: https://forms.gle/J3WPWMTAyWJy3k1v9
3:00 PM
Lightning talks. Theme: Tools for Open Science
Speakers:
> Robert Turnbull, Crunch: A Data Processing Orchestration Tool for Open Science
> Aaron Wilcox, DevOps to ResOps: How research software engineers are using DevOps components to aid in computational reproducibility.
> Nic Geard, Introduction to the Melbourne Data Analytics Platform (MDAP)
> Alexandra Davidson, Taxonomy of interventions at academic institutions to improve research quality
3:30 PM
Mini-note panel: SCORE program
The SCORE program - Systematizing Confidence in Open Research & Evidence - aims to develop and deploy automated tools to assign "confidence scores" to published research in the social & behavioural sciences. SCORE is a research collaboration involving eight research teams. These teams work across three areas: developing a database of claims from papers in the social and behavioral sciences; generating human expert and machine-generated estimates of credibility; and generating evidence of reproducibility, robustness, and replicability to validate those estimates. The data collected from this program will be openly shared and provide an unprecedented opportunity to examine research credibility and evidence. SCORE is funded by DARPA.

Speakers:
> Timothy Errington (Center for Open Science), Creating a claims and empirical evidence database for confidence assessment
A corpus of social and behavioral science papers from over 60 journals over a 10-year period was used to extract claims for further assessment by human judgement and algorithmic approaches. From this corpus a subset of claims was investigated for objective evidence of credibility - specifically, process reproducibility (availability of data and code), reproducibility (reanalysis using original data and analytical strategy), robustness (multiple analytical strategies using original data), and replicability (same analytical strategy on new data) - with replicability serving as the ground truth for human assessment within the program.

> Martin Bush (University of Melbourne), The repliCATS project
The repliCATS project has evaluated over 4,000 published research articles from across 8 social science disciplines, eliciting group predictions, judgements and reasoning using a structured deliberation and decision protocol. Over 1200 reviewers from 46 countries have participated in our elicitation and evaluation workshops. In this talk, we will present some preliminary findings of how our replicability predictions and ‘confidence scores’ compare to the outcomes of ground truth assessments (e.g., actual replication and reanalysis studies), and outline the unique mixed methods analysis that produced these scores. We will also discuss the benefits a structured deliberation and decision protocol can bring to the peer review process, and the need for formal peer review training.

> Thomas Pfeiffer (New Zealand Institute for Advanced Study), Forecasting outcomes in scientific research
Crowd-sourced forecasting projects have been used to predict outcomes in scientific research, and in the past many projects have focused on replication outcomes. In my presentation I will discuss experiences from forecasting projects that focus on outcomes beyond replications.

> Sarah Rajtmajer (Pennsylvania State University), An artificial prediction market for estimating confidence in published work
Our presentation will detail a three-year, ongoing effort undertaken by researchers at Penn State, Old Dominion University, Texas A&M, and Rutgers to develop artificially intelligent prediction markets to estimate the replicability of published research. This effort lays groundwork for hybrid approaches integrating human wisdom and machine rationality for research assessment.

Chair: Fiona Fidler
5:00 PM - travel time to conference social
Conference social event (off-site, starts from 5.45pm)
Join us at a local venue in Brunswick for some cocktails, mocktails & nibbles. RSVP to [email protected]
All details will be emailed to in-person attendees.
program - day 3, wed 30 Nov
Zoom webinar link: https://unimelb.zoom.us/j/81862772512 (Password required to log in: check email sent to attendees)

Click here for a Google Doc version of the program.

All sessions (unless otherwise specified) will be held in the Lyle theatre, ground floor, Redmond Barry building (115), Parkville campus, University of Melbourne.
9:00 AM *note earlier start*
Plenary - Anne Scheel, What (Psychology) Researchers Really Want To Know
Anne is an assistant professor in methodology and statistics at Utrecht University. Anne completed her PhD at TU Eindhoven and worked as a postdoctoral researcher at VU Amsterdam and at the Centre for Science and Technology Studies, Leiden. With a background in psychology and meta-psychology, Anne has studied reforms of research and publication practices in psychology as part of the discipline's effort to recover from the replication crisis and improve the reliability and efficiency of published research.

Anne's talk will present findings that suggest that psychology and neighbouring fields may benefit from a closer look at what purposes hypothesis testing serves in practice and which other methods researchers may need to achieve their goals. Many of the reforms proposed in response to the replication crisis in psychology are designed to make hypothesis tests more rigorous and informative. Researchers are supposed to preregister their hypotheses and analysis plans, formulate more specific and falsifiable predictions, and increase the statistical power of their tests. In theory, these measures merely correct certain lapses that had crept into the widely used hypothesis-testing workflow. In practice, though, many psychologists have surprising difficulty implementing these measures and providing the required specifications and decisions a priori. This disconnect between theory and practice suggests that many research questions do not yet (or not at all) lend themselves to the strict, hypothetico-deductive form of hypothesis testing that some recent reforms take for granted. If this is true, higher standards for hypothesis testing will not be sufficient for increasing the knowledge gain of the field at large. Instead, helping psychologists achieve their research goals more effectively and efficiently requires a better understanding of the nature of these research goals in the first place. To further scientific progress, reforms to research practice should take into account what types of questions researchers are asking and what they need to answer them.

Chair: Simine Vazire
10:00 AM
Mini-note panel: Big Team Science
More and more researchers are engaging in big team science, creating grass-roots collaborative networks to tackle difficult research questions. In this session, we'll hear about various big team science projects across different disciplines, the challenges experienced and lessons learnt for future projects.

Speakers:
> Nicholas Coles (Stanford University), Grappling with generalizability constraints in the social sciences via big-team science
Big-team science has been leveraged to both discover and address issues about generalizability in the social sciences. In this talk, Nicholas Coles will review these developments and their implications for future small- and large-scale research in the social sciences.

> Julia Espinosa (Harvard University), Running with the Big Dogs: Building an international dog [scientist] pack as an early career researcher
In my presentation I will introduce the ManyDogs Project, an initiative that I co-founded and have been steering since 2018. Our first project is nearing the end of the data collection period when we convene for the conference, so I will be able to share preliminary results along with my insights and my ECR experiences of uniting and leading a group of established colleagues.

> Lene Seidler (University of Sydney), The TOPCHILD (Transforming Obesity Prevention for CHILDren) Collaboration – working together to address the complex quest of early childhood obesity prevention
The TOPCHILD (Transforming Obesity Prevention for CHILDren) collaboration brings together individual participant data from over 50 trials with a total of 40,000 participants to address the complex public health issue of how to prevent childhood obesity. This presentation will give a snapshot of the main challenges, solutions and lessons learnt from this major collaborative undertaking.

> Lauren Wool (UCL), Open Neuroscience in the International Brain Laboratory
We discuss the open-science aims of the IBL, a neuroscience collaboration of over 100 members across institutions worldwide. How is knowledge produced in a flexible, distributed (and mostly virtual) community? How does this inform our methods and practices as scientists?

Chair: Alex Holcombe
11:30 AM
Break ☕️
Tea & coffee will be available in the foyer all day.
12:00 PM
Mini-note panel: Metascience origins
This panel session will explore how the emergence of the field of metaresearch, or metascience, can be understood from historical, philosophical & STS perspectives.

Speakers:
> Nicole Nelson (University of Wisconsin), A history of American biomedical rigor and reproducibility reform
This talk will provide a history of the emergence of the reproducibility/replication crisis from the vantage point of American biomedicine. It will show that the National Institutes of Health’s existing commitments to translational research made it difficult to ignore reports of irreproducible research from pharmaceutical companies, and that the NIH’s eventual reforms were patterned after earlier reforms in clinical research.

> David Peterson (UCLA), Science IS Crisis: On Metascience and Crisis Diagnoses
It is widely accepted that the metascience movement emerged in reaction to the replication crisis. This talk complicates this picture in two ways. First, I argue that diagnoses of "crisis in science" have been a persistent feature of modern science and, second, that these diagnoses have always been motivated by metascientific arguments which reflect a variety of philosophical opinions about the correct role of science in society.

> Fiona Fidler (University of Melbourne), From statistical reform to the credibility revolution
The talk is about two scientific reforms. The first is the attempted statistical reform of the life and social sciences, which started in the 1960s with the work of Jacob Cohen, Paul Meehl and others, and was focused on the shortcomings of Null Hypothesis Significance Testing in practice. The second is the current credibility revolution, aimed at improving the replicability and reproducibility of those same sciences. There are many similarities in purpose and motivation between the two, and yet quite stark differences in reception and impact. I will discuss some features of the credibility revolution that were absent in previous reform efforts, including a new level of demonstrability of problems, public engagement, technology, coordination and community. All good historians of science argue there is no such thing as ‘a turning point’, or rather that there are so many that the term is somewhat meaningless in understanding the history of science. Nevertheless, 2011 seems to mark a significant point of development for the current scientific reform, if not methodological practice itself, compared to the previous six decades.

Chair: Fallon Mody
1:30 PM
Lunch!
Served in the foyer.
2:30 PM
Lightning talks. Theme: Meta-analyses
Speakers:
> Phi-Yen Nguyen, Reporting and sharing of review data in systematic reviews between 2014-2020: what changed and what drove these changes
> James Sotiropoulos, Developing guidance for outcome harmonisation in prospective meta-analysis
> Kylie Hunter, Assessment of data integrity for individual participant data meta-analyses: a case study
> Yefeng Yang, Persistent publication bias and low power in ecology and evolution

Chair: Jennifer Byrne
3:00 - 4:00 PM
Concurrent sessions
Session 1: Discussion group, Funding for meta-research: what are our options? - Adrian Barnett
Room: Latham lecture theatre, ground floor, Redmond Barry building (note room change)
Winning research funding in any field is difficult, but there are additional challenges for meta-research as it is a new field that can meet resistance from reviewers. This session will discuss the challenges of winning funding and ideally formulate strategies to help all meta-researchers. We will discuss the ARC and NHMRC, and what schemes might be most appropriate. We will discuss alternative sources, including philanthropy, partnerships with journals/funders, commercial avenues, and direct appeals to government. Previous funding for meta-research has occurred after a major scandal; should we be prepared to exploit the next research integrity scandal in Australia? One approach to broadcasting support for meta-research is to create a public list of “Australian scientists who are concerned about research quality”; this could be used to lobby for funding, and applicants could cite it to support the need for funding. What else can we do as a community to help win funding for meta-research? Please bring your own ideas for a lively discussion. This session aims to be relevant to researchers from all fields and all experience levels.
Session 2: Hackathon, Mapping the landscape of metaresearch communities - Jason Chin & Losia Lagisz
Room: Lowe lecture theatre, ground floor, Redmond Barry building (note room change)
What are the other meta-research communities beyond AIMOS? What do they do, where do they come from, and how are they similar and different? In this hackathon we aim to answer these questions by systematically mapping metaresearch communities and their activities. This hackathon will consist of three types of activities: 1) running searches for relevant communities (we will run a preliminary search in English, but need help with other languages here!), 2) screening a preliminary list of communities to find those fulfilling our inclusion criteria, and 3) coding characteristics of the included communities. We will use shared Google Forms and Sheets for easy collaboration. Collected data will be used to guide future work and development of AIMOS and potentially some more concrete outputs, such as a blog or manuscript. At this stage, your work will be acknowledged as a contribution in CRediT format on the AIMOS website and any other outputs. We hope to learn what other researchers are doing in this space and have some fun! All backgrounds and experience levels are welcome.
4:00 PM
Lightning talks. Theme: Miscellaneous metaresearch
Speakers:
> Karim Khan, Improving the quality of consensus methods and consensus statements
> Joshua Wang, Corpus linguistics for meta-research: a case study in obesity neuroscience
> Wendy Higgins, The myth of the “well-validated” measure
> Austin Mackell, Video Bibliographies and Research Transparency

Chair: Martin Bush
4:30 PM
Mini-note panel: Trust in Science
This session will explore themes around public trust in science.  What makes science trustworthy? How can science earn the public's trust?  How can members of the public evaluate how much trust to put in various scientific claims?

Speakers:
> Andy Perfors (Melbourne School of Psychological Sciences), Science as an information system: How can we know when to trust?
I'll be talking about an abstract framework for thinking about information systems in general and identifying the factors that lead to trustworthiness (of the system as well as the specific information). Then I'll discuss how this maps onto the situation we are faced with as scientists, who are embedded in the very system we wish to shape.

> Mike McGuckin (Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne), Ensuring Research Integrity:  Why Institutional Leaders Should Care a Lot
This talk will cover the importance of ensuring quality and integrity of research from a leadership perspective, risks in the research environment that promote poor culture, and the role of leadership in driving proactive programs to optimise quality and integrity.

> Sujatha Raman (ANU Centre for Public Awareness of Science), How public good matters complicate the public trust question for science
Questions of public trust in science have typically been posed in response to specific concerns about the harms that ensue from a perceived lack of such trust (e.g., rejection of vaccines or embrace of ‘alternative’ therapies ungrounded in evidence). Framed this way, solutions have ranged from increasing scientific literacy amongst the public to greater openness and transparency in scientific procedures to stemming the flow of misinformation. In this talk, I will try to reframe the public trust question by drawing from a parallel concern with the public good in science articulated most notably in recent years by the International Council for Science. From a public good perspective, trust in specific scientific propositions may become less important than the way in which science speaks to ongoing agendas for system-wide transformation. 

Chair: Simine Vazire
5:30 PM
Conference close, Matt Page - AIMOS president
Join Matt Page for some closing remarks on AIMOS2022 and what lies ahead for AIMOS
mini-note panels
big team science
More and more researchers are engaging in big team science, creating grass-roots collaborative networks to tackle difficult research questions. Hear about various big team science projects across different disciplines, the challenges experienced and lessons learnt for future projects.
Panel chair: Alex Holcombe
Nicholas Coles
Grappling with generalizability constraints in the social sciences via big-team science
Nicholas is a Research Scientist at Stanford University, the co-director of the Stanford Big Team Science Lab, and the director of the Psychological Science Accelerator. He conducts research in affective science, cross-cultural psychology, and meta-science. In metascience, Nicholas works on building research infrastructure that allows researchers to more efficiently obtain knowledge in the social sciences.
Julia Espinosa
Running with the Big Dogs: Building an international dog [scientist] pack as an early career researcher
Julia is a National Science Foundation Postdoctoral Research Fellow at Harvard University. Her current work looks at the individual differences contributing to behavior phenotypes in domestic dogs, including life history, genetics, and neuroanatomy. Julia is a co-founder of ManyDogs and the project lead of their first study, ManyDogs 1: A Multi-Lab Replication Study of Dogs’ Pointing Comprehension.
Lene Seidler
The TOPCHILD (Transforming Obesity Prevention for CHILDren) Collaboration – working together to address the complex quest of early childhood obesity prevention
Lene works as a Senior Research Fellow at the NHMRC Clinical Trials Centre (CTC), University of Sydney where she leads the NextGen Evidence Synthesis team within the Evidence Integration group. Lene’s work focuses on methods aimed at increasing collaboration and coordination in research to maximise the value of data and reduce research waste. This includes the development and application of next generation evidence synthesis approaches such as individual participant data and prospective meta-analysis.
Lauren Wool
Open Neuroscience in the International Brain Laboratory
Lauren E. Wool is a postdoctoral fellow in computational neuroscience at University College London, where she studies the large-scale population activity of motor neurons in the behaving mouse brain. She is fluent in big-data analysis, interdisciplinary team dynamics, and open science principles, and how they combine to generate high-impact resources for the scientific community. She studies knowledge transfer inside IBL, an open-science collaboration of 100+ neuroscientists worldwide.
correcting the record
Science is self-correcting, right? In this session, we'll hear from four speakers who will tell us about whether and how science self-corrects, and how this could be done better
Panel chair: Jennifer Byrne
Jana Christopher
Image integrity - correcting the published record
Jana was the first Image Integrity Analyst at EMBO Press, helping to set up the programme in 2011. She started her own business Image-Integrity in 2015.
Jana joined FEBS Press in 2017 as Image Data Integrity Analyst for their journals, and also works as a freelance consultant for various journals, publishers and institutes. She also runs training courses, and presents to students and young scientists.
John Loadsman
Correcting the scientific record: the experience and perspective of one journal editor
John is a full-time Staff Specialist anaesthetist at Royal Prince Alfred Hospital, Conjoint Associate Professor in Anaesthetics at the University of Sydney, and Editor-in-Chief of Anaesthesia and Intensive Care, journal of the Australian Society of Anaesthetists. John's interest in publication integrity was first sparked when he noticed cases of duplication and plagiarism as the journal's proofreader in the late 1990s.
Ben Mol
Now we have the data but are the data true
Ben (Willem) Mol is Professor of Obstetrics and Gynaecology at Monash University. Ben holds continuous NHMRC funding, and has been recognized as a very productive (Nature) and well cited (The Australian) author. Ben has developed extensive relations with Asian universities, resulting in large randomised clinical trials. Ben is also involved in many individual participant data meta-analyses. Recently, Ben has also worked on systems to detect data fabrication in RCTs. Ben's professional adage is ‘A day without randomisation is a day without progress.’
Lisa Parker
How to be a research detective: Warning signs of research fraud
Lisa is a researcher, bioethicist and practicing doctor. Her research focus is on critical evaluation of healthcare practice, policy and evidence. She has led recent studies on research fraud, conflicts of interest, and industry influence in health. Lisa has expertise in qualitative research methodology. Lisa works as a CMO in radiation oncology at Royal North Shore Hospital in Sydney.
The SCORE project
Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process that is essential for maintaining public trust in science. The Systematizing Confidence in Open Research and Evidence (SCORE) program was designed to address this challenge by developing and assessing algorithms' potential as a rapid, scalable, and valid method for assessing confidence in research claims.
Panel chair: Fiona Fidler
Tim Errington
Creating a claims and empirical evidence database for confidence assessment.
Tim received his Master's in Molecular and Cell Biology from the University of California at Berkeley and his PhD in Microbiology, Immunology, and Cancer Biology from the University of Virginia. He is currently the Senior Director of Research at the Center for Open Science where he collaborates with researchers and stakeholders across scientific disciplines and organizations on metascience projects aiming to understand the current research process and to evaluate initiatives designed to increase the credibility and openness of scientific research.
Martin Bush
The repliCATS project
Martin is a historian and philosopher of science at the University of Melbourne. He is a senior research fellow on the repliCATS project, and leads the reasoning analysis and qualitative work on the project. He is otherwise interested in the history of popular science and science communication and engagement.

Thomas Pfeiffer
Forecasting outcomes in scientific research
Thomas is based at the New Zealand Institute for Advanced Study. Thomas’ research interests are game theory and metascience. In particular, he is interested in mechanisms of information elicitation and aggregation, and how such mechanisms can be used for the benefit of science. Thomas was part of the team running replicationmarkets.com for large-scale replication forecasting within DARPA's SCORE project.
Sarah Michele Rajtmajer
An artificial prediction market for estimating confidence in published work
Sarah Rajtmajer is an assistant professor in the College of Information Sciences and Technology and research associate in the Rock Ethics Institute at The Pennsylvania State University. Her research uses machine learning and mathematical modeling for applications to social phenomena. Dr. Rajtmajer leads one arm of DARPA’s Systematizing Confidence in Open Research and Evidence (SCORE) program, which seeks to develop and deploy AI to assign “confidence scores” to research claims published in the social and behavioral sciences literatures.
Credibility of science underpinning decision making
Policy makers and others often use empirical findings to guide their decisions. In this session, scholars from the fields of medicine, law and ecology will discuss research projects that assess how well empirical work is utilized in decision making and how we might address challenges.
Panel chair: Jason Chin
Janet Freilich
Credibility of Science in Patents
Janet Freilich is a Professor at Fordham Law School. She writes and teaches in the areas of patent law, intellectual property, and civil procedure. She graduated magna cum laude from Harvard Law School and summa cum laude from Cornell University with a bachelor’s degree in molecular biology.
Anisa Rowhani-Farid
Clinical trial integrity and transparency
Anisa Rowhani-Farid recently completed a postdoctoral fellowship in clinical trial integrity at Restoring Invisible & Abandoned Trials (RIAT) support center (University of Maryland School of Pharmacy).  She does metaresearch to strengthen the regulation of clinical research & to promote open research, making science more reliable, trustworthy, verifiable, transparent, & robust. Her first postdoctoral fellowship was in meta-research at the Collaboration for Research Integrity and Transparency (Yale Law School, School of Medicine and School of Public Health, & Center for Outcomes Research and Evaluation - Yale School of Medicine).
Barbara Mintzes
Conflicts of interest and bias in medical research
Barbara Mintzes is an Associate Professor, School of Pharmacy and Charles Perkins Centre (CPC), at the University of Sydney. She holds a PhD in epidemiology from the University of British Columbia. Her research areas include policy analyses, impacts of conflicts of interest on research and practice, systematic reviews, and pharmacoepidemiology research. She has led international comparative studies on direct-to-consumer advertising of medicines, the quality of information provided to family doctors by pharmaceutical sales representatives, and post-market regulatory safety warnings.
Mark Burgman
Statistics wars and their implications for journal editors.
Mark Burgman is Professor of Risk Analysis and Environmental Policy at Imperial College London. Previously, he was Director of the Australian Centre of Excellence for Risk Analysis and the Adrienne Clarke Chair of Botany at the University of Melbourne. He has worked on expert judgement, decision analysis, conservation biology and risk assessment in a broad range of settings including marine fisheries, forestry, irrigation, electrical power utilities, mining, and national park planning. He has been Editor-in-Chief of the journal Conservation Biology since 2013.
Trust in Science
This session will explore themes around public trust in science.  What makes science trustworthy? How can science earn the public's trust?  How can members of the public evaluate how much trust to put in various scientific claims?
Panel chair: Simine Vazire
Mike McGuckin
Ensuring Research Integrity:  Why Institutional Leaders Should Care a Lot
Prof Mike McGuckin is Deputy Dean of the Faculty of Medicine, Dentistry and Health Sciences at the University of Melbourne.  In this role he oversees the research portfolios and has responsibility for People and Culture, Engagement and International Partnerships.  Mike is a biomedical scientist, author of over 170 publications and has been heavily engaged in scientific societies and peer review.
Andrew Perfors
Science as an information system: How can we know when to trust?
Andrew Perfors is an Associate Professor and Director of the Complex Human Data Hub at the University of Melbourne School of Psychological Sciences, where he runs the Computational Cognitive Science Lab. His research focuses on quantitative approaches to higher-order cognition: concepts; language; decision-making; information and misinformation transmission; and cultural and social evolution and change. He also teaches the undergraduate research methods subject in psychology and through it has shown thousands of students (who were initially afraid of R, coding, and statistics) that they can enjoy quantitative research in R and be good at it! Andy's passions include good data practices, especially data visualisation, not just for science communication but also as a key part of the scientific process itself. He has also thought a lot about science as a kind of information system, and thus how insights about how to design and shape effective and trustworthy information systems might apply to science itself.
Sujatha Raman
How public good matters complicate the public trust question for science
Sujatha Raman is UNESCO Chair-holder in Science Communication for the Public Good and Director of Research at the Centre for the Public Awareness of Science (CPAS), Australian National University. Trained in STS and public policy studies, she is interested in normative questions arising from efforts to bring science and technology to bear on global sustainability challenges. Her current work in the UNESCO Chair explores what is entailed by appeals to science as a public good.
metascience origins

Panel chair: Fallon Mody
Nicole Nelson
A history of American biomedical rigor & reproducibility reform
Nicole C. Nelson is an Associate Professor of Science and Technology Studies in the Department of Medical History and Bioethics at the University of Wisconsin—Madison’s School of Medicine and Public Health. Her first book, Model Behavior (2018), is an ethnographic study of how animal behavior geneticists conceptualize and enact complexity in research with mouse models. She is Co-Editor of the journal Social Studies of Science, the founding director of the Health and the Humanities program at UW Madison, and a former scholar in residence at the Radcliffe Institute for Advanced Study at Harvard University. Her current research focuses on the “reproducibility crisis” in biomedicine and its relationship to histories of biomedical and open science research reform.
David Peterson
Science IS Crisis: On Metascience and Crisis Diagnoses 
David Peterson received his PhD from Northwestern University and was a postdoc at UCLA before joining the sociology department at Purdue. He studies the nexus of scientific practice, emerging technologies, and expert authority. Currently, his work focuses on two topics. First, he studies how the organizations of science are evolving to meet a variety of threats including political pressure, intensifying global competition, new communications and machine learning technologies, and emerging regulatory and managerial bodies. Second, he investigates the production of science in areas that have had chronic legitimacy problems (like the social sciences) to shed light on the complex interactions between politics, expertise, and authority.
Fiona Fidler
 From Statistical Reform to Credibility Revolution 
Fiona is an Australian Research Council Future Fellow, co-leads the MetaMelb (meta)research group, is lead PI on the repliCATS project, holds a joint appointment in the School of Ecosystem & Forest Science, is the Head of the History and Philosophy of Science Program at the University of Melbourne, and is the founding president of AIMOS. She originally trained as a psychologist before undertaking a PhD in History and Philosophy of Science. Fiona then spent a decade in decision science research centres, developing methods for eliciting more reliable expert judgements to improve environmental & conservation decision making. Her interest is in how experts, including scientists, make decisions and change their minds, & specifically how methodological change, as distinct from theory change, occurs in different disciplines.
MANY ANALYSTS ECO-EVO
Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology
Panel chair: Losia Lagisz
Tim Parker
Heterogeneity in results among studies in ecology and evolutionary biology – implications for the future
Tim is a behavioural ecologist who conducts occasional meta-analyses, and this work with meta-analysis led (as it has in many others) to an interest in the reliability of the published literature. He has devoted much of the past decade to empirical exploration of the reliability of ecology, evolutionary biology, and other disciplines, and to improving the reliability of published work. Tim has co-authored numerous advocacy and empirical papers on these topics, is a founding member of SORTEE (the Society for Open, Reliable, and Transparent Ecology and Evolutionary biology), and served as SORTEE’s first president. He is a professor of biology at Whitman College in Washington State, USA.
Hannah Fraser
Heterogeneity in results among studies in ecology and evolutionary biology – the big picture
Hannah Fraser is a research fellow at the University of Melbourne working in Fiona Fidler’s meta-research lab, MetaMelb. She is lead author of Questionable Research Practices in Ecology and Evolution (Fraser et al. 2018), which has received widespread attention (preprint downloaded 679 times). During her PhD, Hannah also gained expert elicitation experience. In 2020, Hannah was president of the Association for Interdisciplinary Meta-research & Open Science, an association she helped found. Hannah was the research coordinator for the repliCATS project in Phase 1, and will be remaining on the project in Phase 2 in an advisory capacity.
Elliot Gould
Heterogeneity in results among studies in ecology and evolutionary biology – a ‘many analysts’ study
Elliot Gould is a PhD student at the School of BioSciences, University of Melbourne, with a background in applied ecology. Elliot is investigating the transparency and reproducibility of ecological models in conservation decision-making and ecological management.

Location & getting here
AIMOS2022 will be held at the University of Melbourne, in the Redmond Barry building (115).

Public transport is the best option to get here! If you aren’t familiar with the campus, the two closest public transport stops are:
> Stop 1, any tram that travels along Swanston Street (enter via Gate 3)
> Stop 11, tram 19 on Royal Parade (enter via Gate 12)
To plan your journey, you can use Google Maps/Apple Maps or PTV Journey Planner.

For interactive maps of Parkville campus and the Redmond Barry building, see: https://maps.unimelb.edu.au/parkville/building/115
For downloadable/printable maps, including ones with wheelchair access, see: https://maps.unimelb.edu.au/download-maps
Cycling & bicycle parks
There are bike parking facilities on the South side of Redmond Barry (the side opposite Tin Alley). For all the info you need to travel to and park your bike on campus, see:
> https://sustainablecampus.unimelb.edu.au/transport/cycling
> http://uom.maps.arcgis.com/apps/webappviewer/index.html?id=813244bb2cc548d48801ff313fc5dd3e
> https://sustainablecampus.unimelb.edu.au/__data/assets/pdf_file/0005/2080832/Map_2015_rev34_Grey_BP.pdf

Parking
Parking on campus is very limited, particularly given the Melbourne Metro tunnel works. For the most up-to-date information about parking around Parkville campus, please see: https://about.unimelb.edu.au/news-resources/campus-services-and-facilities/transport-and-parking/parking-on-campus
Join us on 
28-30 November 2022 
Registrations are now open. The "Register Now" button will take you to Eventbrite for you to complete your registration.

Registration fees (not including admin fee) for in-person attendance include catering for all three days:

> Full fee - AU$70.00
> Concession fee - AU$40.00

An in-person fee waiver is available on request. A small number of AU$1000 travel grants are available to support in-person attendance. 

Virtual attendance - free. We will send virtual attendees a Zoom webinar link!

Conference social events

We will host an ECR catch-up on Monday, 28 November and an evening networking event on Tuesday, 29 November at a restaurant close to the University.
We will email registered attendees with all the details & to RSVP closer to the conference!

For now, save the dates! 
grants
A small number of AU$1000.00 travel grants will be available to support in-person attendance at AIMOS2022.
Due to the high interest in these grants, we will give priority to:
> colleagues who are early career researchers (five years post-PhD) and/or those who have experienced career interruptions
> attendees who have submitted a proposal by 30 October 2022.

We will begin reviewing and accepting proposals on a rolling basis to facilitate fast turnaround and travel booking.
Email [email protected] to note that you wish to apply for a travel grant.
organisers
Meet this year's organising committee. We look forward to welcoming you to AIMOS2022. You can contact us via [email protected] 
Fiona Fidler
School of Ecosystem & Forest Sciences / History & Philosophy of Science, University of Melbourne
Matt Page
School of Public Health & Preventive Medicine,
Monash University
Jennifer Byrne
School of Medical Sciences, University of Sydney
Kathy Zeiler
 School of Law, Boston University
Malgorzata Lagisz
University of New South Wales
Fallon Mody
History & Philosophy of Science, University of Melbourne
Beth Clarke
Melbourne School of Psychological Sciences, University of Melbourne
Carmelina Contarino
History & Philosophy of Science/CAIDE,
University of Melbourne
Sponsors
Thank you to the University of Sydney and the University of Melbourne for their generous sponsorship of AIMOS2022.
