Social Systems

* [https://www.youtube.com/watch?v=1soPQ31ZHkQ Impact Evaluators]
* [https://scienceplusplus.org/metascience/ An Engine of Improvement for the Social Processes of Science], by Nielsen and Qiu
== Identified Open Problems ==
* Acceptance and onboarding of scientists, even once a model is proven to work in a small setting
* Value attribution (see the Shapley sketch after this list):
** How do you distribute rewards?
** Opportunity side: new tools are exploring ways to provide input to that distribution mechanism
** How do we connect the two sides?
* How do we test the behavior of a model as it scales?
** How do we predict the incentive structures or perverse behaviors that will arise as it is adopted by a large number of players? (see the simulation sketch after this list)
* Incentive mechanisms for contributing to and maintaining living literature reviews in both domains
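As a starting point for the value-attribution question, here is a minimal sketch of one candidate distribution rule, the Shapley value, which splits rewards by each contributor's average marginal contribution. The contributor names and the value() function are invented placeholders for whatever impact measurements real tools would supply.
<syntaxhighlight lang="python">
# A minimal sketch of one candidate reward-distribution rule: exact Shapley
# values over a small set of contributors. All names and numbers here are
# hypothetical stand-ins for real impact measurements.
from itertools import permutations

def value(coalition: frozenset) -> float:
    # Hypothetical impact scores; a real system would plug in tool-derived
    # measurements (citations, replication outcomes, usage data, ...).
    scores = {"alice": 3.0, "bob": 2.0, "carol": 1.0}
    base = sum(scores[m] for m in coalition)
    # Toy synergy term: joint work is worth more than the sum of its parts.
    return base * (1 + 0.1 * max(len(coalition) - 1, 0))

def shapley(players: list[str]) -> dict[str, float]:
    """Average each player's marginal contribution over all join orders."""
    payoff = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition: frozenset = frozenset()
        for p in order:
            payoff[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in payoff.items()}

print(shapley(["alice", "bob", "carol"]))
</syntaxhighlight>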
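And a minimal sketch of scale-testing a mechanism by simulation, assuming a toy two-strategy population game (honest contribution vs. gaming the metric) with invented payoff parameters. The point is the experimental loop: re-run the same mechanism at growing population sizes and watch for the perverse equilibrium.
<syntaxhighlight lang="python">
# Toy scale test: does the share of honest contributors collapse as the
# population grows? Payoff and detection parameters are invented.
import random

def simulate(n_players: int, seed: int = 0) -> float:
    """Final share of honest contributors under pairwise imitation dynamics."""
    rng = random.Random(seed)
    # Start with roughly 90% honest contributors and 10% metric-gamers.
    honest = [rng.random() < 0.9 for _ in range(n_players)]

    def payoff(is_honest: bool, honest_share: float) -> float:
        # Hypothetical payoffs: honest work earns a fixed reward, while gaming
        # pays more when there are many honest players to free-ride on, and we
        # assume detection weakens as the population grows.
        detection = 1.0 / (1.0 + 0.01 * n_players)
        return 1.0 if is_honest else 1.5 * honest_share + (1.0 - detection)

    for _ in range(50 * n_players):
        share = sum(honest) / n_players
        i, j = rng.randrange(n_players), rng.randrange(n_players)
        # Imitation step: player i copies player j's strategy if j earns more.
        if payoff(honest[j], share) > payoff(honest[i], share):
            honest[i] = honest[j]
    return sum(honest) / n_players

for n in (10, 100, 1000):
    print(f"n={n}: honest share = {simulate(n):.2f}")
</syntaxhighlight>
In this toy model the honest share shrinks as n grows because detection is assumed to weaken with scale; a real test would swap in the mechanism's actual reward and detection rules.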
== Goal for the Workshop ==
'''Resources''', such as a system map/synthesis of the problem space, a synthesis/directory of tools, an essential reading list, a case study library, or a shared synthesis benchmark dataset
* The best tools in the world mean nothing if no one adopts them. Let's combine small-scale testing with early adopters and rigorous validation, and create a resource that tool builders across the group can glance at so they never lose sight of the critical point of adoption, with practical examples of how these mechanisms are designed, iterated upon, tested, and communicated to academia.
* A tool builder should check their assumptions against this checklist before ...
== Overlap with Other Groups? ==


== Potential Next Steps ==