Program - Day 3, Wednesday 30 November
9:00 AM *note earlier start*
Plenary - Anne Scheel, What (Psychology) Researchers Really Want To Know
Anne is an assistant professor in methodology and statistics at Utrecht University. Anne completed her PhD at TU Eindhoven and worked as a postdoctoral researcher at VU Amsterdam and at the Centre for Science and Technology Studies, Leiden. With a background in psychology and meta-psychology, Anne has studied reforms of research and publication practices in psychology as part of the discipline's effort to recover from the replication crisis and improve the reliability and efficiency of published research.
Anne's talk will present findings that suggest that psychology and neighbouring fields may benefit from a closer look at what purposes hypothesis testing serves in practice and which other methods researchers may need to achieve their goals. Many of the reforms proposed in response to the replication crisis in psychology are designed to make hypothesis tests more rigorous and informative. Researchers are supposed to preregister their hypotheses and analysis plans, formulate more specific and falsifiable predictions, and increase the statistical power of their tests. In theory, these measures merely correct certain lapses that had crept into the widely used hypothesis-testing workflow. In practice, though, many psychologists have surprising difficulty implementing these measures and providing the required specifications and decisions a priori. This disconnect between theory and practice suggests that many research questions do not yet (or not at all) lend themselves to the strict, hypothetico-deductive form of hypothesis testing that some recent reforms take for granted. If this is true, higher standards for hypothesis testing will not be sufficient for increasing the knowledge gain of the field at large. Instead, helping psychologists achieve their research goals more effectively and efficiently requires a better understanding of the nature of these research goals in the first place. To further scientific progress, reforms to research practice should take into account what types of questions researchers are asking and what they need to answer them.
Chair: Simine Vazire
Mini-note panel: Big Team Science
More and more researchers are engaging in big team science, creating grass-roots collaborative networks to tackle difficult research questions. In this session, we'll hear about various big team science projects across different disciplines, the challenges experienced and lessons learnt for future projects.
> Nicholas Coles (Stanford University), Grappling with generalizability constraints in the social sciences via big-team science
Big-team science has been leveraged to both discover and address issues about generalizability in the social sciences. In this talk, Nicholas Coles will review these developments and their implications for future small- and large-scale research in the social sciences.
> Julia Espinosa (Harvard University), Running with the Big Dogs: Building an international dog [scientist] pack as an early career researcher
In my presentation I will introduce the ManyDogs Project, an initiative that I co-founded and have been steering since 2018. Our first project will be nearing the end of its data collection period when we convene for the conference, so I will be able to share preliminary results along with my insights and my ECR experiences of uniting and leading a group of established colleagues.
> Lene Seidler (University of Sydney), The TOPCHILD (Transforming Obesity Prevention for CHILDren) Collaboration – working together to address the complex quest of early childhood obesity prevention
The TOPCHILD (Transforming Obesity Prevention for CHILDren) collaboration brings together individual participant data from over 50 trials with a total of 40,000 participants to address the complex public health issue of how to prevent childhood obesity. This presentation will give a snapshot of the main challenges, solutions and lessons learnt from this major collaborative undertaking.
> Lauren Wool (UCL), Open Neuroscience in the International Brain Laboratory
We discuss the open-science aims of the IBL, a neuroscience collaboration of over 100 members across institutions worldwide. How is knowledge produced in a flexible, distributed (and mostly virtual) community? How does this inform our methods and practices as scientists?
Chair: Alex Holcombe
Tea & coffee will be available in the foyer all day.
Mini-note panel: Metascience origins
This panel session will explore the emergence of the field of metaresearch, or metascience, from historical, philosophical and STS perspectives.
> Nicole Nelson (University of Wisconsin), A history of American biomedical rigor and reproducibility reform
This talk will provide a history of the emergence of the reproducibility/replication crisis from the vantage point of American biomedicine. It will show that the National Institutes of Health's existing commitments to translational research made it difficult to ignore reports of irreproducible research from pharmaceutical companies, and that the NIH's eventual reforms were patterned after earlier reforms in clinical research.
> David Peterson (UCLA), Science IS Crisis: On Metascience and Crisis Diagnoses
It is widely accepted that the metascience movement emerged in reaction to the replication crisis. This talk complicates this picture in two ways. First, I argue that diagnoses of "crisis in science" have been a persistent feature of modern science and, second, that these diagnoses have always been motivated by metascientific arguments which reflect a variety of philosophical opinions about the correct role of science in society.
> Fiona Fidler (University of Melbourne), From statistical reform to the credibility revolution
The talk is about two scientific reforms. The first is the attempted statistical reform of the life and social sciences, which started in the 1960s with the work of Jacob Cohen, Paul Meehl and others, and focused on the shortcomings of Null Hypothesis Significance Testing in practice. The second is the current credibility revolution, aimed at improving the replicability and reproducibility of those same sciences. There are many similarities in purpose and motivation between the two, and yet quite stark differences in reception and impact. I will discuss some features of the credibility revolution—including a new level of demonstrability of problems, public engagement, technology, coordination and community—that were absent in previous reform efforts. All good historians of science argue there is no such thing as ‘a turning point’, or rather that there are so many that the term is somewhat meaningless in understanding the history of science. Nevertheless, 2011 seems to mark a significant point of development for the current scientific reform, if not methodological practice itself, compared to the previous six decades.
Chair: Fallon Mody
Served in the foyer.
Lightning talks. Theme: Meta-analyses
> Phi-Yen Nguyen, Reporting and sharing of review data in systematic reviews between 2014-2020: what changed and what drove these changes
> James Sotiropoulos, Developing guidance for outcome harmonisation in prospective meta-analysis
> Kylie Hunter, Assessment of data integrity for individual participant data meta-analyses: a case study
> Yefeng Yang, Persistent publication bias and low power in ecology and evolution
Chair: Jennifer Byrne
3:00 - 4:00 PM
Session 1: Discussion group, Funding for meta-research: what are our options? - Adrian Barnett
Room: Latham lecture theatre, ground floor, Redmond Barry building (note room change)
Winning research funding in any field is difficult, but there are additional challenges for meta-research as it is a new field that can meet resistance from reviewers. This session will discuss the challenges of winning funding and ideally formulate strategies to help all meta-researchers. We will discuss the ARC and NHMRC, and what schemes might be most appropriate. We will discuss alternative sources, including philanthropy, partnerships with journals/funders, commercial avenues, and direct appeals to government. Previous funding for meta-research has occurred after a major scandal; should we be prepared to exploit the next research integrity scandal in Australia? One approach to broadcast support for meta-research is to create a public list of “Australian scientists who are concerned about research quality”, which could be used to lobby for funding and which applicants could cite to support the need for funding. What else can we do as a community to help win funding for meta-research? Please bring your own ideas for a lively discussion. This session aims to be relevant to researchers from all fields and all experience levels.
Session 2: Hackathon, Mapping the landscape of metaresearch communities, Jason Chin & Losia Lagisz
Room: Lowe lecture theatre, ground floor, Redmond Barry building (note room change)
What other meta-research communities exist beyond AIMOS? What do they do, where do they come from, and how are they similar and different? In this hackathon we aim to answer these questions by systematically mapping metaresearch communities and their activities. This hackathon will consist of three types of activities: 1) running searches for relevant communities (we will run a preliminary search in English, but need help with other languages here!), 2) screening a preliminary list of communities to find those fulfilling our inclusion criteria, and 3) coding characteristics of the included communities. We will use shared Google Forms and Sheets for easy collaboration. Collected data will be used to guide future work and development of AIMOS, and potentially some more concrete outputs such as a blog post or manuscript. At this stage, your contribution will be acknowledged in CRediT format on the AIMOS website and in any other outputs. We hope to learn what other researchers are doing in this space and have some fun! All backgrounds and experience levels are welcome.
Lightning talks. Theme: Miscellaneous metaresearch
> Karim Khan, Improving the quality of consensus methods and consensus statements
> Joshua Wang, Corpus linguistics for meta-research: a case study in obesity neuroscience
> Wendy Higgins, The myth of the “well-validated” measure
> Austin Mackell, Video Bibliographies and Research Transparency
Chair: Martin Bush
Mini-note panel: Trust in Science
This session will explore themes around public trust in science. What makes science trustworthy? How can science earn the public's trust? How can members of the public evaluate how much trust to put in various scientific claims?
> Andy Perfors (Melbourne School of Psychological Sciences), Science as an information system: How can we know when to trust?
I'll be talking about an abstract framework for thinking about information systems in general and identifying the factors that lead to trustworthiness (of the system as well as the specific information). Then I'll discuss how this maps onto the situation we are faced with as scientists, who are embedded in the very system we wish to shape.
> Mike McGuckin (Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne), Ensuring Research Integrity: Why Institutional Leaders Should Care a Lot
The importance of ensuring quality and integrity of research from a leadership perspective, risks in the environment of researchers that promote poor culture, and the role of leadership in driving proactive programs to optimise quality and integrity.
> Sujatha Raman (ANU Centre for Public Awareness of Science), How public good matters complicate the public trust question for science
Questions of public trust in science have typically been posed in response to specific concerns about the harms that ensue from a perceived lack of such trust (e.g., rejection of vaccines or embrace of ‘alternative’ therapies ungrounded in evidence). Framed this way, solutions have ranged from increasing scientific literacy amongst the public to greater openness and transparency in scientific procedures to stemming the flow of misinformation. In this talk, I will try to reframe the public trust question by drawing from a parallel concern with the public good in science articulated most notably in recent years by the International Council for Science. From a public good perspective, trust in specific scientific propositions may become less important than the way in which science speaks to ongoing agendas for system-wide transformation.
Chair: Simine Vazire
Conference close, Matt Page - AIMOS president
Join Matt Page for some closing remarks on AIMOS2022 and what lies ahead for AIMOS.