* [https://scienceplusplus.org/metascience/ An Engine of Improvement for the Social Processes of Science], by Nielsen and Qiu
== Goal for the workshop ==
'''Resources''', such as a system map/synthesis of the problem space, synthesis/directory of tools, essential reading list, case study library, or shared synthesis benchmark dataset
* '''The best tools in the world mean nothing if no one adopts them.''' Let’s combine small-scale testing with early adopters and rigorous validation.
* Create a resource that tool builders across the group can glance at so they don’t lose sight of the critical point of adoption, with practical examples of how these mechanisms are designed, iterated on, tested, and communicated to academia.
** "A tool builder should check their assumptions agains this checklist before ..." | |||
=== Identified Open Problems ===
# Acceptance and onboarding of scientists, even if we have a model that works in a small setting | |||
# Value attribution: | |||
#* How do you distribute rewards? (See the attribution sketch after this list.)
#* Opportunity side: new tools could provide input to that distribution mechanism
#* How do we connect the two sides? | |||
# How do we test the behavior of a model as it scales? | |||
#* How do we predict the incentive structures or perverse behavior that will arise as it’s adopted by a large number of players?
# Incentive mechanisms for contributing and maintaining a collective knowledge graph | |||
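As a toy illustration of the reward-distribution question in item 2, the sketch below computes exact Shapley values, one standard way to split credit among contributors by their average marginal contribution. Everything in it is hypothetical: the contributor names and the <code>value</code> function stand in for whatever scoring a real mechanism would use; this is a minimal sketch, not a proposed design.

<syntaxhighlight lang="python">
from itertools import permutations
from math import factorial

def shapley_values(contributors, value):
    """Split total value by each contributor's average marginal contribution."""
    totals = {c: 0.0 for c in contributors}
    for order in permutations(contributors):
        coalition = frozenset()
        prev = value(coalition)
        for c in order:
            coalition = coalition | {c}
            cur = value(coalition)
            totals[c] += cur - prev  # marginal contribution of c in this ordering
            prev = cur
    n_orderings = factorial(len(contributors))
    return {c: t / n_orderings for c, t in totals.items()}

# Hypothetical value function: data + analysis together yield a result
# worth 10; a review adds 2 more, but only on top of an actual result.
def value(coalition):
    v = 10.0 if {"data", "analysis"} <= coalition else 0.0
    if v > 0 and "review" in coalition:
        v += 2.0
    return v

print(shapley_values(["data", "analysis", "review"], value))
# -> {'data': 5.67, 'analysis': 5.67, 'review': 0.67} (rounded); sums to 12
</syntaxhighlight>

Exact enumeration is factorial in the number of contributors, so at realistic scale one would sample orderings instead; that approximation step is one place where the scaling question in item 3 shows up.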