How digital scholarly service platforms facilitate research rigor and transparency – A conversation with Nici Pfeiffer

Published by Ebuka Ezeike on


Nici Pfeiffer is Chief Product Officer at the Center for Open Science, where she works to enable researchers to share their work and advance the transparency and reproducibility of science. With Jo, she talks about the importance of open digital infrastructure, how she contributes to research rigor, and the achievements made through her work.

See all our published episodes at CONVERSATIONS.

ORCID iD: 0000-0001-8335-6018
Website: cos.io
Twitter: @nicipfeif
Linkedin: /in/nicole-pfeiffer/

Which researcher – dead or alive – do you find inspiring? – I don’t really have a specific researcher. What inspires me is the researcher persona that takes up open science practices, who is transparent about every part of their research process and really honest about the realities. I think that is really unique and special, because so much of the research ecosystem is competitive, which creates an incentive to obscure the truth.

What is your favorite animal and why? – I like sloths; they do their own thing and move at their own pace. I respect that.

TRANSCRIPT

Jo: Welcome to Access 2 Perspectives Conversations. Today we have Nici Pfeiffer in the room. Nici Pfeiffer works with the Center for Open Science, and we’ve been communicating and collaborating for a couple of years now with regard to our work with AfricArXiv and also in the realm of open science and all the beautiful products, tools, and services the Center for Open Science has developed. First of all, welcome to you, Nici. Thanks for joining us.

Nici: Thanks for having me. 

Jo: It’s a great pleasure. So, starting off, would you be willing to share some of the journey that led you to eventually work with the Center for Open Science? What’s your background? What were your research interests during your studies? What led you to open science and now to the Center for Open Science?

Nici: Sure, I’d be happy to. I don’t know if it’ll be as interesting as others, and there may be a couple of surprises in here. My background is actually mechanical engineering, so I have a degree in mechanical engineering, and I did a lot of my research in materials science and nanotechnology. Then I began my career working with the Department of Defense, with secret government clearance, working on radar signatures to support the global war on terrorism after 9/11 hit the United States. So that’s what I was doing, and then I decided to have a family. I actually took twelve years where I wasn’t working or doing research, but doing the most important work of having a family and raising kids. I have three now, teenagers. Once they were old enough to be in school and didn’t need me, I decided I needed to go back, for my own self-fulfillment, to day-to-day work life, and I couldn’t sit still even then. I did things like volunteering at the church kitchen making meals, helping with the PTO at their school, the parent-teacher organization, to support the teachers teaching all the wonderful kids, and basically any other sort of volunteer work I could do in my community. But ultimately I decided I’d go back to work, and I found the role at the Center for Open Science early on. It had only been in existence for a year or two and had just been developing the OSF with a small group of developers, but didn’t have any QA testing yet. We only had a few hundred or so users on the platform, so it wasn’t a big deal. But things were starting to ramp up: they got some funding, they were starting to add features, and more users were coming, so they wanted quality assurance testing. I joined the Center as a part-time intern doing QA testing, developed processes, ultimately added members to the team, and built that into what is now a pretty robust platform, where tremendous effort goes into QA testing with every release we push, because we have over 450,000 users now, so obviously having bugs or defects go out would not be okay. That is how I started at COS (Center for Open Science) over seven years ago, again as an intern, and I moved into product management and now into the role of Chief Product Officer.

Jo: Great. And since you mentioned 9/11 and then the role you took with national defense, is that it?

Nici: Mhm.

Jo: Were you personally affected? Was your family affected by the attack?

Nici: No, I didn’t lose anyone that I knew or that was in my family during that. But it changed the way of life here in the United States. Yeah, it definitely changed things, and it was something I just had a passion to help solve, to make sure that everyone was safe across the world, really, because the US has such a presence around the world when it comes to security and safety.

Jo: And thinking back now on the work you did then and there for that purpose, and the work you are doing now at the Center for Open Science, when it comes to career development and transferable skills, was there anything in particular you found you could apply to the new job and setting, anything that stuck out?

Nici: There are a few things. I think some of it is just an engineering mindset for problem solving, incrementalism, and process. Those are things I started my career with, definitely part of the work I did with the Department of Defense, and they have obviously expanded in this role. There are several key elements, actually, that I would consider some of our core principles at COS: build tools or workflows that support, in this case, researchers, but even when my role was to serve the servicemen and women who were overseas fighting in the war, basically meet them where they are. Whatever place you’re at, find the answer that helps you now, versus where we really want to be; just delivering to that ideal state doesn’t help the people who are trying to make a difference now. Then there is being really inclusive of all communities. That’s something we’re doing at COS with the OSF, but that was also the case when providing intelligence for the government, because again, we have lots of allies across the world. So you have to think about what tools or sophisticated instrumentation they may or may not have, making sure it’s something that supports anyone who might need that intelligence. And then, add efficiency and not burden: think about how things are delivered so that you have everything you need to interpret and understand that intelligence. It’s the same with tools for researchers practicing open science, making it more efficient, with fewer repeated steps and duplicated efforts along the way. I think that really helps, and I hope it’ll be meaningful.

Jo: Yeah, totally. Okay. I think we also mentioned the Open Science Framework. Is it still a core product of the Center?

Nici: Yes. 

Jo: Can you maybe explain briefly what it does and how it’s of service to the global research community? It’s an archive, and also more than that, for documenting the whole research workflow; it goes beyond the mere archiving of data and manuscripts. So what’s the added value compared to other repositories?

Nici: Yeah. I mean, there are a number of great repositories out there, and really COS is not in it for competitive advantage or to be the one. We’re providing a tool that helps people practice open science, but we actually advocate for lots of other solutions as well, because ultimately it’s our mission, and we want to see science be open. Wherever you do it, however you do it, we’re excited and championing that effort versus our own. But there was a point where there weren’t a lot of solutions out there, and we do feel there are a couple of unique things about how the OSF supports those open practices. One is that it is open source, which means that you can see the code, you can see the changes, you know what we’re doing, we’re not hiding it, and you can always pull out whatever you put into it at any point, so you’re not locked in. We also have a small community of open source developers, which we really like to grow, who have contributed back to what we offer with the OSF. That means there are extensions, additional capabilities, and ways to work with the OSF programmatically that support researcher goals across the large number of communities that could leverage it for their needs. It is a collaborative management tool, essentially, which is how we describe it, but it supports workflows across the research lifecycle. So really think about the different phases of research, and I’ll summarize them; there are many, and they differ depending on how you practice your research. In general, there’s a lot of effort in planning and collaboration, and one of the things the OSF provides is a step for preregistration, really that pre-commitment to what you’re planning as part of your research question or your hypothesis, how you’re going to conduct your study before you begin collecting the data, and even how you’re going to analyze that data before you begin the process. That’s one part of the OSF supporting the entire research lifecycle. As you collaborate, you can be open with just your collaborative research team, or you can keep that material private until it’s time to share more broadly, depending on the nature of your work. And we understand that, so there’s definitely not an all-or-nothing open science mantra going on. There are times and reasons why you wouldn’t necessarily do everything in the open at every point, although if you can, that’s obviously something we would encourage. But there are use cases where initially you want to work together with your research team, define all these things, and then create the preregistration. You can even embargo it for some time, if it’s important to keep it to yourselves until the later research stage when you’re ready to disseminate your research. Then the data collection and all of those things happen on a number of different platforms, and that’s not something we try to replace; we want you to do that with the best tool for you, and what we try to do is integrate. Like I said, we are open source and we have a public API. We’ve integrated several tools, I think the number is eleven at this point, but we’re always interested in having conversations with other tool makers or communities who say this tool really is necessary for me to conduct my research, and I’d love to have it be part of how the OSF ecosystem works. Oftentimes that can happen without incurring the maintenance and sustainability cost of a direct integration.
Most of the time you can just use the API to meet your needs, but that’s always a conversation we’d love to have with communities. And then beyond that, there’s the reporting. Once you finish your study, there’s the need to really report out your findings. If you’ve preregistered, it’s really critical to actually say: this is what I said I was going to do, and that is what I did, or actually real life happened, and these things are what happened instead, and to share that transparently. Then you share all of the outputs of your research so that they’re linked together, they have DOIs, and they’re archived, in such a way that if I find your data in an article and want to follow it back and trace some of the other aspects, the data dictionary or the protocol that went into that study, all of those pieces can be connected, using the OSF to connect the tools or the outputs together. And then finally there is discovery, which is really important, that is, FAIR implementation of all of the steps in the research process, making sure those things are discoverable with good metadata and can be reused and built on, and even the traceability of those steps as well.
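
To illustrate the public API Nici mentions here, the minimal sketch below lists public OSF projects through the JSON:API endpoint at api.osf.io. The endpoint, the filter[title] parameter, and the data/attributes response layout follow the OSF API v2 documentation, but treat this as an illustrative sketch rather than an official client; the "reproducibility" filter value is only an example.

```python
# Minimal sketch: list public OSF projects via the public OSF API v2 (JSON:API).
import requests

OSF_API = "https://api.osf.io/v2/nodes/"  # public nodes (projects/components) endpoint

def list_public_nodes(title_filter=None):
    """Return (id, title) pairs for publicly visible OSF nodes, optionally filtered by title."""
    params = {}
    if title_filter:
        params["filter[title]"] = title_filter  # substring filter on project titles
    resp = requests.get(OSF_API, params=params, timeout=30)
    resp.raise_for_status()
    payload = resp.json()  # JSON:API document: {"data": [...], "links": {...}}
    return [(item["id"], item["attributes"]["title"]) for item in payload["data"]]

if __name__ == "__main__":
    # Example usage: print one page of public projects whose title mentions "reproducibility".
    for node_id, title in list_public_nodes(title_filter="reproducibility"):
        print(f"{node_id}: {title}")
```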

Jo: Thinking of it technologically, that is really embracing the whole concept of open science, with open methodology and preregistration, as you pointed out, which is not often mentioned when it comes to open science, but increasingly so. I always stress it in the training that I do, to encourage researchers to preregister their planned research. And I feel we need, again or still, a mindset shift, because a lot of early career researchers feel they are failing when they have to change their route, when the workflow departs from the hypothesis and research ideas they set out with and changes along the process. Did you receive feedback from users who embraced or adopted preregistration, actually locked those in, and probably also assigned DOIs, which also serve to mark the research idea in the first place, and therefore establish not a priority of discovery but a priority of thoughtfulness about a research question? Is there some learning from the users, collected at the Center, that you can share with the listeners?

Nici: Yeah, absolutely. This is a really great question to ask, and something we’ve spent quite a bit of time focusing on, because we do understand that preregistration is a change. It’s something new that maybe not all disciplines are practicing, and it can feel a little bit tight, claustrophobic even, like once you put that down, the reality of your research will then make it seem like you didn’t go about this in the best way, or whatever the case may be. There are a number of reasons.

Jo: Would it not be good to assume that changes will come? I mean, this is research. That’s the nature of research. We are at the brink of knowledge.

Nici: Right, that’s exactly right. We’ve heard this over the years, mostly from researchers who have used the OSF to create their preregistration. Once you do that, it’s an immutable, timestamped version of what you said your research plan was, and there wasn’t a good way to make updates or changes to it. What we used to have them do was create a new one. And sometimes, obviously, if you’ve started data collection, you shouldn’t be updating your study plan. But oftentimes it’s something like: oh, I had a typo, I said I would collect 15 samples instead of 150, I need to go in and make this right. Or they planned live, in-person data collection and then Covid hit, and now all the interviews have to happen virtually, or whatever the case may be for their study. So, to let people make those updates and keep things correct and accurate, what we did was implement a new feature where you can make an update to your preregistration. It still creates the timestamps, and you have to say what you’re changing and why you’re changing it at that point. At some point we’ll iterate on this and ask: have you started data collection? Because if that’s the case, then really there shouldn’t be changes to certain parts of your preregistered plan, but there could be metadata changes and things like that which make sense for discoverability or accuracy, parts that aren’t part of the substance of the study design. Those were the changes we rolled out at the end of last year.

We’ve had tremendous feedback that this has been really helpful. And honestly, when you look at the scholarly record, it’s a much cleaner way of keeping track of changes to the research over time. Withdrawing something, creating a new one, and trying to link between the two wasn’t really the right approach, so we’ve made improvements on that front too.

Jo: So does that create a whole new version of the document, or changes within the same document, which are marked?

Nici: Yeah. So it’s just updates to the original that have a timestamp and a justification for it, but ultimately it’s the same preregistration. 

Jo: Can you assign DOIs to the preregistration, so that it’s also a scholarly record that you can use in your CV?

Nici: Exactly. And actually we’ve been working with DataCite to get an update to their schema, so that preregistrations are actually an object type. Right now they kind of get folded into the ‘other’ category, which means that when we want to trace them as part of the PID graph, the way the research gets done, we’re just tucking them into ‘other’. But they’re a really critical stage for research rigor and transparency: you say this is what I’m setting out to do, and at the end of it, here’s what I did and here’s what I found, so that we can build on that over time.

Jo: It allows for so much learning, and also for expectation management for one PI or another, like how much you can really achieve within three or five years of a PhD, and the fact that real life happens all the time, and again, the unexpected nature of research. Why wouldn’t we do that otherwise? Hypotheses are only assumptions, and they may or may not turn out to be true. That is why we do the work. Okay.

Nici: Real quick on that, just to say: I wonder, and nobody knows the answer to this question, how many times researchers have tried to answer the same research question, because we didn’t share what we were working on, and we certainly don’t share the null findings, which is another issue to solve as well. But you could imagine that when it’s preregistered and then you report out, you would share that this didn’t go where I wanted it to, I learned these things, but I didn’t answer these questions. So next, somebody could work from that point onward, or you yourself could. Making all of that much more transparent, I think, will help us all not duplicate the effort.

Jo: Absolutely. It would save so much money, and also spare hundreds and thousands and tens of thousands of PhD students so much frustration. From my own PhD journey, I learned about the repetitiveness of molecular biology work, and then just imagine: how many other people are doing the same thing just in this building, and then around the world? Seriously.

Nici: Great. Wouldn’t it have been better to collaborate, to share data, and to build on each other’s work in a new way? I think there’s a lot of promise in being transparent in this way, and in what we could all benefit from: putting your own work into this ecosystem and then seeing what you could actually get out as value. It’s just a new paradigm that we haven’t really explored, so I think it’s exciting to start to see this taking shape.

Jo: Shape.

Nici: Yeah. 

Jo: I think many people are realizing the issue, but it’s only a few who actually have the capacity, the knowledge, and the technology to build the tools that actually deliver, like the OSF.

Nici: Right. I think tools are a big piece of it. I also think some of the incentives are the other challenge that gets in the way of practicing research this way. 

Jo: Yeah. We have another episode coming up where we talk about values-linked research and, of course, transparency, for which sharing the preregistration is a key component.

Yeah. We need all of that. And then DORA is doing its part. But it’s surprising, for those of us who work at the core of the open science ecosystem, and I don’t know how many people we are, a couple of hundred maybe, that there are so many others out there who have not heard about the San Francisco Declaration on Research Assessment. What’s coming? There are a lot of signatories. The other question is how it’s going to be implemented institution-wise. Progress is being made, and we keep pushing.

Nici: No, it is. It’s exciting to see that for sure. We’re not saying we’re not making that progress. I think it’s definitely happening, but we don’t want to stop either. I think there’s further to go. 

Jo: And now, talking about the global research community and your experience of working at the Center for Open Science, how are you seeing the tools and products being developed there picked up around the globe? I keep using the term global research equity, because I have a firm belief that if all of us would increasingly or fully embrace and practice open science, meaning transparent, good research practices throughout, then within a decade we really could achieve research equity around the world. And I think the Center for Open Science and the OSF are contributing to and facilitating that. Do you also see that from the inside, and in the conversations we keep having in consortia of all sorts?

Nici: Yeah. No, I definitely think progress is happening. There’s global use of tools like the OSF for practicing open science, which is fantastic to see. I’m usually more critical when it’s my own work, so I’ll just be critical about areas where I want to keep advancing. Obviously, language and translation barriers still exist, and I think those will get in the way if we don’t start to solve for them, and access. The OSF is free for researchers; it’s a tool anybody can log into, you can discover what’s in it without having an account, and you can also create a very simple free account and start sharing your research publicly. So in that way, we’ve certainly eliminated a barrier. But then, being able to translate, and knowing that data protections and such are applied in the right ways for the content researchers are adding, those are the areas that still need to keep evolving to support this global equity you’re speaking of, and something we are constantly working on. Licensing and, again, some of the data protection work are places where we’re still going to need to spend time and build understanding. I know we together have been working to try to solve for storage on the continent of Africa, for data and researchers to use, and that’s been tricky. It sounds like it should be an easy thing to solve, but we’ve had a harder time really finding the resources.

Jo: And the partner organizations are limited for the whole continent in this case. 

Nici: Right. And so I think those are still the challenges that we’re going to have to continue to work through. And that’s one location, just because it’s the one we’re working on together, but there are many other locations with similar challenges. I know that we work with some institutional partners in Canada, and they have their own restrictions for the country, but then they have provincial and tribal restrictions on that data. And it’s not that technology can’t support them, but we haven’t worked through how we actually devise systems that meet those restrictions and requirements in a way that still creates access and discoverability but protects at the same time. Those are things the technology hasn’t quite solved for yet. I know that it can; we just haven’t solved it yet. So those are things that I think still need some time and effort to finish up.

Jo: Yeah, but you mentioned the tribal and Canadian context. Are you talking about First Nations?

Nici: That’s exactly what we are looking at. It’s where this has been coming forward differently.

Jo: So with the Indigenous communities’ understanding of ownership of data, basically referring to the CARE principles, which are more values-based and less individual, more community-oriented, with reuse handled in a way that the information is not being misappropriated.

Nici: Yes. 

Jo: And then the legal arms are there to protect the data and to ensure ownership. 

Nici: Right. And there are a lot of different roles in there that I haven’t really come across in the technology yet, ways that you can assign those protections and those compliance roles to make sure they’re not violated. So I think those are important. The OSF still hasn’t solved this, but we have been working with Canadian partners to try to solve it and work through how the technology could support it. It’s fairly simple to start with the easy part, where the data and the metadata are stored, and we’ve worked through a little bit of that solution. But then it goes beyond that, to individual data sets and their sharing and reuse.

Jo: Yeah. We had an episode with Laure Haak, the former director of ORCID, and she’s been working on a project called Local Contexts, which is also investigating these challenges technology-wise. I’m trying to frame that positively, but those are our remaining challenges. So also on that end, I think progress is being made, but it needs to be assessed not only at the country level but really at the tribal level: whether it works for the particular community concerned, and how, and to what extent, the exchange between researchers, academics, and the tribal communities occurs.

Okay, well, that’s really a lot to consider, a lot of exciting and challenging things, but positively challenging. It’s good to realize that we actually do make progress and that there is a lot of learning to share across organizations. I feel like we’re applying the FAIR principles. We’ve mentioned FAIR a few times in this episode without really explaining what it means for listeners who might be new, but we’ve covered it in previous episodes: FAIR means findable, accessible, interoperable, and reusable when it comes to research data. But I feel, and I think we can agree, it’s also applicable to research manuscripts and to broader contexts and concepts, in the sense of interoperability, technology-wise, where you said that the Center for Open Science is keen on developing APIs to make the OSF and other systems interoperable with other service providers and research management tools. For me as a biologist that makes a lot of sense, and maybe also for you with your background in mechanical engineering: diversity is really the key to success, progress, and fairness, in the double meaning of it. So, yeah, we’re on a good route. Okay. Maybe coming to an end for the episode, since we started off referring back to 9/11, and now we have some trying and challenging times, politics-wise, with the war between Ukraine and Russia, and there are also other wars and conflicts around the world.

Has the war between Russia and Ukraine affected any of the operations of the Center for Open Science or the OSF itself? Did you see a change in usage from Ukraine and Russia due to the conflict, or have there been any other effects recently?

Nici: No. I mean, we haven’t seen dramatic changes in usage, and I would say that wasn’t really a primary area where we had activity, though we would love to. What did come from it, and I thought it was a really beautiful thing to see, was the need to solve for a distributed system: things that are currently archived in Ukraine and face the risk of being damaged and lost from the scholarly record, kind of forever, could be uploaded digitally and placed on a distributed system, so that if one server no longer exists and that was the only place something was stored, we don’t lose it. I saw lots of groups jump in to offer their platforms and their services to help mitigate that really dire risk. We, of course, offered that as well, and we would want to make sure that this is the case not just for Ukraine but across the world, wherever there could be risks or wherever we’re not yet supporting the interoperable and distributed systems that certainly exist, though not everyone is taking advantage of them in a meaningful way. That could really be the ideal state for all the research out there: it can’t be lost, it can’t go away just because someone removes it here or there, it would be preserved forever.

Jo: Yeah. So many questions in my head now, let me focus. That was also my immediate concern: now that some of the universities are getting attacked, what happens with the archiving systems? Similar to when, in Cape Town, parts of the libraries burned due to an out-of-control campfire, and there was still a lot that had not been digitized. Wouldn’t it be good to have digital copies, at least, for any of these tragic events, which sooner or later might happen? You never know where. Then at least we have digital backups.

Nici: Right. But even those digital backups, if they’re not in distributed cloud systems and they’re just local, the same thing can essentially happen. Digital is definitely an improvement, but I think the further improvement is cloud and distributed systems, so we never have that risk for the work we’re all working hard to produce, invest in, reuse, and build on. That’s the real problem.

Jo: I think research needs to detach more from politics, so that sanctions wouldn’t necessarily affect storage, because with legal sanctions, as far as I understand, researchers in sanctioned countries are not allowed to make use of cloud services that are hosted in the sanctioning countries. So that’s a barrier.

Nici: Right. I would agree with you there; there are just some things that should not be part of that.

Jo: And then, where to draw the lines. We had two or three discussions around that for this episode. You think it’s easy, but when you get into the details, it’s really tricky. Okay. But, yeah, let’s focus on the fact that the majority is so collaborative and interoperable and not affected by conflict, so we can focus on making progress there. Thank you so much for your time and for sharing your experiences and some of your journey. Maybe two or three words or sentences on what’s lying ahead?

Nici: Yeah, thanks for having me. Of course, I am eager to listen to the upcoming episode you mentioned on values and research, so I’ll be keeping in touch to hear how that one goes and to listen to the following podcasts that you produce. One thing you said in our conversation that just sticks with me, so maybe this is the final point to make, is how diversity is key. Again, I think diversity really helps us make the progress we’re trying to make, so I would end on that note.

Jo: And it sounds like, jeez, that was really the essence, and I enjoy it every time. It’s challenging sometimes, and it’s also a lot of fun and big learning. Okay. So see you soon.

Nici: Alright. Thank you.
