Social Systems: Difference between revisions
Revision as of 17:24, 12 November 2022

| Social Systems | |
| --- | --- |
| Description | How do we design social structures, incentives, and programs that foster synthesis-friendly work, such as writing living literature reviews or adding metadata to experiments? |
| Related Topics | Incentive Systems |
| Discord Channel | #social-systems (https://discord.com/channels/1029514961782849607/1040386737836412928) |
| Facilitator | Sílvia Bessa |
| Members | Jay Patel, Matt Clancy, Angelina Lesnikova, Valerii Kremnev, Raphael Walker, Martin Karlsson, Nouran Soliman, Sílvia Bessa |
What
How do we design social structures, incentives, and programs that foster synthesis-friendly work, such as writing living literature reviews or adding metadata to experiments?
Resources
- SourceCred
- Golden - Reward function development
- HyperCerts
- Impact Evaluators
- An Engine of Improvement for the Social Processes of Science, by Nielsen and Qiu
Goal for the workshop:
Produce resources such as a system map/synthesis of the problem space, a synthesis/directory of tools, an essential reading list, a case study library, or a shared synthesis benchmark dataset.
- The best tools in the world mean nothing if no one adopts them. Let's pair small-scale testing with early adopters with rigorous validation.
- Create a resource that tool builders across the group can glance at so they don't lose sight of the critical point of adoption, with practical examples of how these mechanisms are designed, iterated upon, tested, and communicated to academia.
- Learnings from past and ongoing attempts (what truly motivates sustainable participation).
- "A tool builder should check their assumptions against this checklist before ..."
Identified Open Problems
- Acceptance and onboarding of scientists, even if we have a model that works in a small setting
- Value attribution:
- How do you distribute rewards?
- Opportunity side: new tools looking at ways to provide input to that distribution mechanism
- How do we connect the two sides?
- How do we test the behavior of a model as it scales?
- How do we predict the incentive structures or perverse behavior that will arise as it’s adopted by a large number of players?
- Incentive mechanisms for contributing and maintaining a collective knowledge graph
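One concrete way to frame the value-attribution questions above is proportional splitting of a reward pool across contributors, the basic mechanic behind tools like SourceCred. A minimal sketch, assuming illustrative contribution weights and pool size (the names, numbers, and function are hypothetical, not a mechanism proposed by the group):

```python
def distribute_rewards(pool, contributions):
    """Split `pool` proportionally to each contributor's weight.

    `contributions` maps contributor name -> non-negative weight
    (e.g. cred scores or review counts). Returns name -> payout.
    """
    total = sum(contributions.values())
    if total == 0:
        # No attributable contributions: nothing to distribute.
        return {name: 0.0 for name in contributions}
    return {name: pool * weight / total
            for name, weight in contributions.items()}

# Illustrative example: three contributors with unequal weights.
shares = distribute_rewards(100.0, {"alice": 3, "bob": 1, "carol": 1})
print(shares)  # {'alice': 60.0, 'bob': 20.0, 'carol': 20.0}
```

Even this trivial mechanism exposes the open problems listed above: where the weights come from (the "opportunity side" tooling), and how the split behaves as the number of players grows.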
Potential Next Steps
On the "organize thought and writing on a central theme/problem to facilitate future work" track, if there is interest, I would be keen to bring together those who would like to map out:
- The structural options for rewarding contribution.
- Learnings from past and ongoing attempts (what truly motivates sustainable participation).
- New funding sources that could sustain those reward mechanisms.
Then map these to the enabling effects of the tooling and model initiatives occurring across the workshop. One goal would be to connect the thread from these new funding sources and methods to the attribution and reward mechanisms offered by discourse graph and synthesis tooling.