Discord Messages

== 22-11-07 ==
{{Message
|Text=Hello Pooja and welcome 🙂 I certainly share your concerns here, and would love to read any writing or work you've done on the topic! I'm curious if you had any initial inklings of [[Discovery]] systems that go beyond the [[Search#Black Box Model]] ? I have my own ideas but as you say, everyone has a unique standpoint and experience that structures their ideas so I would love to hear yours!
|Link=https://discord.com/channels/1029514961782849607/1038594946791387276/1038984020047962212
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-07 01:33:13
|Channel=general-brainstorming
|Text=For everyone that is embarking on a project, how about setting up a page under [[Projects]] where we can start organizing people that are interested in them, and setting up any prerequisite infra/tools so we don't have to be struggling with stuff like provisioning servers and getting permissions setup during our limited time this weekend 🙂
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1038989470290153472
}}
== 22-11-08 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-08 23:32:39
|Channel=linked-data-activitypub
|Text=To add to the [[Reading List#Linked Data]] on [[Linked Data]], [[Standards]], and [[Collaboration]]: a piece from one of the authors of [[ActivityPub]] on the merger of the distributed messaging and linked data communities that I think puts into context what a massive achievement AP was
http://dustycloud.org/blog/on-standards-divisions-collaboration/
|Link=https://discord.com/channels/1029514961782849607/1038983225348993184/1039683903864189020
}}
== 22-11-09 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-09 23:25:43
|Channel=SEPIO + ActivityStreams via JSON-LD
|Text=Haven't finished n-back thread capture yet but this rocks and let's keep track of it on the wiki. Scroll up in this thread for [[SEPIO]] + [[ActivityStreams]]/[[ActivityPub]] + [[JSON-LD]]. On a train now and having to work on some other stuff but this is making me unreasonably excited to check out later
|Link=https://discord.com/channels/1029514961782849607/1040042059916120094/1040044550611284018
}}
== 22-11-10 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-10 00:15:39
|Channel=mod-requests
|Text=Reminder, as the conversations start thickening (which has been great to read, looking forward to jumping in more later when I have a few minutes) and thus become a bit harder to keep track of, that you should feel free to make liberal use of [[Wikilinks]] in your posts to archive them in the wiki and make them more discoverable by people outside of your table/project. (For example this message will appear here https://synthesis-infrastructures.wiki/Wikilinks ). This would be especially useful because it looks like some folks are interested in doing some <#1038988750677606432> on the wiki!
|Link=https://discord.com/channels/1029514961782849607/1032530251944833054/1040057112891494451
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-10 08:58:52
|Channel=discourse-modeling
|Text=I am about to go to bed but personally I favor the model of the federated wiki, that the same "term" or page title in the case of the wiki has many possible realizations, and what's useful is their multiplicity. I think everything2 was an early model of this, but basically it cuts to the core of the history of early wikis, to the initial fork of ward's wiki into meatball. the singularity of meaning as implied by Wikipedia is imo an artifact of wikis having been adopted by encyclopedists, with all the diderot-like enlightenment-era philosophy that entails. this seems exceptionally apt today and yesterday given Aaron Swartz's telling of that history, particularly his "[[Who Writes Wikipedia?]]". Everyone can contribute in a linked context, and that's what the synthesis of wikilike thinking, linked data, and distributed messaging gives us :). I write about this idea more completely here: https://jon-e.net/infrastructure/#the-wiki-way
after my take on the critical/ethical need for forking in information systems as given by the case study of NIH's biomedical translator (link to most relevant part in the middle of the argument, the justification and motivation precedes it): https://jon-e.net/infrastructure/#problematizing-the-need-for-a-system-intended-to-link-all-or-eve
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1040188785499054100
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-10 15:51:29
|Channel=discourse graphs
|Text=the idea [[DiscourseGraphs]] is rooted in a bunch of models like [[SEPIO]] (h/t <@602622661125996545>) and [[ScholOnto]] that have been around for various amounts of time, though not yet with (to my knowledge) serious widespread adoption.
|Link=https://discord.com/channels/1029514961782849607/1040214388554084372/1040292623891582996
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-10 15:55:39
|Channel=discourse graphs
|Text=we think the problem now is user-friendly tools and workflows that can create discourse graph structures, and have seen some exciting progress across a bunch of new user-facing "personal wikis". but bridging from personal to communal is still a challenge, partially bc of tooling.
this is why i'm excited about the [[Discourse Modeling]] idea, which i sort of understand as a way to try to instantiate something like [[Discourse Graphs]] into a wiki (bc wikis have a lot more in-built affordances for collaboration, such as edit histories, talk pages, etc.), which may hopefully lead to a lower barrier to entry for collaborative discourse graphing.
a high hope is that we can develop a process that is easy enough to understand and implement that it can then be applied to discourse graphing the IPCC or a similarly large body of research on a focused, contentious, interdisciplinary topic.
other examples include:
- effects of masks on community transmission (can't do decisive RCTs, need to synthesize)
- effects of social media on political (dys)function: (existing crowdsourced lit review here, in traditional narrative form: https://docs.google.com/document/d/1vVAtMCQnz8WVxtSNQev_e1cGmY9rnY96ecYuAj6C548/edit#)
|Link=https://discord.com/channels/1029514961782849607/1040214388554084372/1040293673851691059
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-10 20:40:46
|Channel=Thanks sneakers the rat2880 Your site is
|Text=I don't know of any either! The closest I know of is ward's [[Fedwiki]]: but i plan on making one (probably more related to <#1038983225348993184> than this channel, which i am trying hard not to derail lol)
|Link=https://discord.com/channels/1029514961782849607/1040360259413348432/1040365424338026539
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-10 20:45:36
|Channel=Thanks sneakers the rat2880 Your site is
|Text=Looking forward to your work in this space! I do know about [[Fedwiki]] but only as a spectator. I tried to convince a few colleagues to set up a network of Fedwikis in our research domain, but nobody was keen on becoming a sysadmin to run their own Wiki instance.
|Link=https://discord.com/channels/1029514961782849607/1040360259413348432/1040366642665902142
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-10 20:58:52
|Channel=Thanks sneakers the rat2880 Your site is
|Text=yes [[anagora]] does have a rough kind of federation! it's a very very permissive model which I love, markdown and plaintext with wikilinks, a lot of the wikis that it federates with are just git repositories of .md files 🙂
|Link=https://discord.com/channels/1029514961782849607/1040360259413348432/1040369980648198155
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-10 21:01:50
|Channel=anagora
|Text=Maybe [[Synthesis Infrastructures 2022]] or something? but we haven't made one yet no lol
|Link=https://discord.com/channels/1029514961782849607/1040362284125524008/1040370727456604261
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-10 21:38:12
|Channel=semantic-climate
|Text=another group ( <#1038988750677606432> ) will i believe be analyzing the semantic information on the wiki ( https://synthesis-infrastructures.wiki/Main_Page ), and you can archive the text of any message onto a wiki page by using [[Wikilinks]]: ( so eg. this message will go to https://synthesis-infrastructures.wiki/Wikilinks )
|Link=https://discord.com/channels/1029514961782849607/1040057721044598788/1040379879843180544
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-10 21:38:25
|Channel=discourse graphs
|Text=in human-computer interaction we have a similar problem of trying to think about and synthesize across many genres of contributions/research. one map (adapted for information studies) breaks things out into "empirical" contributions (these most often follow the standard intro/methods/results/discussion format), "conceptual" contributions (which are often more amorphous theory papers), and "constructive" contributions (making a new system/method)
from here: HCI Research as Problem-Solving | Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
https://dl.acm.org/doi/10.1145/2858036.2858283
cc [[Reading List]]
|Link=https://discord.com/channels/1029514961782849607/1040214388554084372/1040379933115043912
}}
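The messages above describe how putting [[Wikilinks]] in a Discord post gets it archived to the matching wiki page (e.g. https://synthesis-infrastructures.wiki/Wikilinks ). A minimal Python sketch of that extraction step, assuming only the double-bracket syntax shown in this archive; it is not the actual [[WikiBot]] code:

<syntaxhighlight lang="python">
import re

# Matches [[Page Title]], [[Page Title#Section]], and [[Page Title|label]].
WIKILINK = re.compile(r"\[\[([^\[\]|#]+)(?:#[^\[\]|]*)?(?:\|[^\[\]]*)?\]\]")

def extract_wikilinks(text: str) -> list[str]:
    """Return the page titles referenced by [[...]] links in a message."""
    return [match.group(1).strip() for match in WIKILINK.finditer(text)]

def page_url(title: str, base: str = "https://synthesis-infrastructures.wiki") -> str:
    """Build the wiki URL a linked title would be archived to."""
    return f"{base}/{title.replace(' ', '_')}"

message = "feel free to make liberal use of [[Wikilinks]] in your posts"
for title in extract_wikilinks(message):
    print(title, "->", page_url(title))
</syntaxhighlight>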
== 22-11-11 ==
{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-11 05:26:42
|Channel=WikiFunctions
|Text=That said, the more abstract idea of defining a data model plus execution semantics that any programming language can plug into looks very promising. That aspect of WikiLambda was in fact one of my inspirations for developing [[Digital Scientific Notations]].
|Link=https://discord.com/channels/1029514961782849607/1040456437022859324/1040497782882062336
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-11 05:45:14
|Channel=Thanks sneakers the rat2880 Your site is
|Text=I'll try to turn this thread into [[Project Ideas#Federated knowledge synthesis]]: identify protocols, data models, tools, practices, etc. that can support the process of synthesizing and formalizing scientific knowledge, then build on these ingredients. One dimension is going from narratives via discourse graphs to knowledge graphs. Another dimension is going from conceptual ideas to formal systems.
|Link=https://discord.com/channels/1029514961782849607/1040360259413348432/1040502446943899668
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-11 06:50:07
|Channel=Thanks sneakers the rat2880 Your site is
|Text=we're in the process of consolidating the ideas into group pages, so far the group pages are incomplete, but tomorrow (I'm on Pacific time, US) will work on that and take whatever ya write and move it over there 🙂 <@322545403876868096> got this started here: https://synthesis-infrastructures.wiki/Workshop_Working_Groups and then we'll split those up into pages in [[:Category:Group]]
|Link=https://discord.com/channels/1029514961782849607/1040360259413348432/1040518772064260096
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-11 14:05:36
|Channel=what is obsidian-logseq-roam
|Text=I think of all of these tools as "personal hypertext notebooks" - basically taking what is possible in wikis (organizing by means of linking, hypertext) and lowering the barrier to entry (no need to spin up a server, can just download an app and go).
The common thread across these notebooks then is allowing for organizing and exploring by means of bidirectional hyperlinks between "notes":
- In [[Obsidian]] each linkable note is a markdown file and can be as short or long as you like
- in [[Logseq]]/[[Roam]] and other outliner-style notebooks, you can link "pages", and also individual bullets in the outlines on each page.
In this way, the core functionality of these tools is similar to a wiki, but they do leave out a lot of the collaborative functionality that makes wikis work well (granular versioning and edit histories, talk pages, etc.). So for folks like <@305044217393053697> who are comfortable with wikis already, they add marginal value IMO.
Their technical predecessors in the "personal (vs. collaborative) wiki" space include [[TiddlyWiki]] and [[emacs org-mode]] (and inherit their technical extensibility: many users create their own extensions of the notebooks' functionality. an example is the [[Roam Discourse Graph extension]] that <@824740026575355906> is using).
These tools also tend to trace their idea lineage back to vannevar bush's [[Memex]] and ted nelson's [[Xanadu]].
|Link=https://discord.com/channels/1029514961782849607/1040600256485797889/1040628364735680572
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-11 14:08:30
|Channel=what is obsidian-logseq-roam
|Text=These tools are still not entirely mainstream compared to tools like [[Notion]], which is related to your experience trying to learn more about the tools - so they tend to have a steep learning curve!
IMO the best way to get a feel for what they are is to see some examples/videos.
I like this video for an overview of [[Logseq]]: https://www.youtube.com/watch?v=ZtRozP8hfEY&t=6s
I describe [[Roam]] and the [[Roam Discourse Graph extension]] in this portion of a talk I recently gave: https://youtu.be/jH-QF7rVSeo?t=1417
|Link=https://discord.com/channels/1029514961782849607/1040600256485797889/1040629097929375784
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-11 19:01:10
|Channel=what is obsidian-logseq-roam
|Text=i agree it's not universal! my feeling is that [[Claim]]: a statement (claim or evidence) might be the more universal element:
- empirical work also consists of statements about the world (this is less controversial)
- design/technological innovation rests in part on claims about a) what is needed in the world, what is hard to do, constraints, and b) what is needed to succeed: examples here: https://deepscienceventures.com/content/the-outcomes-graph-2 (h/t <@559775193242009610>)
- theories often consist of systems of core claims (e.g., in models like what <@824740026575355906> and <@734802666441408532> are working with, where we can think of the claims as subgraphs of the overall knowledge graph)
see, e.g., [[Evidence]] from this review of models of scientific knowledge https://publish.obsidian.md/joelchan-notes/discourse-graph/evidence/EVD+-+Four+positivist+epistemological+models+from+philosophy+of+science%2C+including+Popper%2C+emphasiz...+statements+as+a+core+component+of+scientific+knowledge+-+%40harsDesigningScientificKnowledge2001
and [[Evidence]] of convergence/contrasts across users of the [[Roam Discourse Graph extension]] in terms of building blocks: the common thread across all was Evidence
|Link=https://discord.com/channels/1029514961782849607/1040600256485797889/1040702747659489391
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-11 23:05:02
|Channel=graphdb
|Text=super glad to hear that the endpoint worked btw, i've never used SPARQL and am more used to just making my own data models that generate API queries & parse etc. so I would love to see what you've been doing and how you've been using it - I'll make a [[SPARQL]] page linked off the wiki page that gives the URL and maybe we can embed sample queries and etc. there
|Link=https://discord.com/channels/1029514961782849607/1040116311952470026/1040764119294410773
}}
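On the [[SPARQL]] endpoint mentioned above: the endpoint URL below is a placeholder (the real one is documented on the wiki's [[SPARQL]] page), but the request follows the standard SPARQL-over-HTTP protocol, so a sample query could look roughly like this Python sketch:

<syntaxhighlight lang="python">
import requests

# Placeholder endpoint; the real URL is documented on the wiki's SPARQL page.
ENDPOINT = "https://example.org/sparql"

QUERY = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
"""

def run_query(query: str) -> list[dict]:
    """Send a SPARQL query over HTTP and return the JSON result bindings."""
    response = requests.get(
        ENDPOINT,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

for row in run_query(QUERY):
    print({var: cell["value"] for var, cell in row.items()})
</syntaxhighlight>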
== 22-11-12 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 03:30:30
|Channel=discourse-modeling
|Text=I am definitely on team "scruffy" per Lindsay Poirier's typology (BTW "[[A Turn for the Scruffy]]" should be on the collective [[Reading List]] for anyone who hasn't come across it) and so yes definitely "Own-terminology" iterating into something shared, part of why i love the semwiki model of building them. On the other end of things for tomorrow - Is there any particular existing ontology/schema/etc. anyone in this group would like to have imported into the wiki for discourse modeling?
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1040830926629896212
}}{{Message
|Author=Wutbot
|Avatar=https://cdn.discordapp.com/avatars/709165833888464966/d959819a9a72aa307c6ef1b91d7f94a2.png?size=1024
|Date Sent=22-11-12 11:22:37
|Channel=general-brainstorming
|Text=those brackets cue the [[WikiBot]] to link the message to the wiki page containing the mentioned terms
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1040949739933401148
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 16:02:25
|Channel=synthesizing-social-media
|Text=[[A System for Interleaving Discussion and Summarization in Online Collaboration#Evidence]] This section of the document references some [[Other Work]]
|Link=https://discord.com/channels/1029514961782849607/1038983225348993184/1041020151421734952
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-12 17:35:29
|Channel=general-brainstorming
|Text=let's dump into a page! [[Discourse graphs within survey reading course]]
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041043571408654486
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-12 17:53:19
|Channel=what is obsidian-logseq-roam
|Text=hi peter, yes, the `[[...]]` (wikilinks) syntax has been quite widely adopted, spread from wikis!
|Link=https://discord.com/channels/1029514961782849607/1040600256485797889/1041048059393626194
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 18:36:37
|Channel=discourse-modeling
|Text=Info on using [[Page Schemas]]:
So you would only need to make schemas for the different types of nodes that you'd want, so if i'm reading right then yes you would have several hundred pages but only 4-5 schemas.
A schema is defined (using page schemas) from a Category Page
A page is only ever loosely connected to a schema (rather than strictly, ie. can only have/requires the schema's fields) through its categ
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041058958053486742
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 18:39:00
|Channel=discourse-modeling
|Text=Info on using [[Page Schemas]]:
So you would only need to make schemas for the different types of nodes that you'd want, so if i'm reading right then yes you would have several hundred pages but only 4-5 schemas.
A schema is defined (using page schemas) from a Category Page
A page is only ever loosely connected to a schema (rather than strictly, ie. can only have/requires the schema's fields) through its category. Page schemas then generates a template for the category. Typically templates will add a page to a category anyway ([ [Category:CategoryName] ]). So a page can have multiple schemas - that would just look like using multiple templates on the same page.
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041059559541841940
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 18:40:38
|Channel=discourse-modeling
|Text=[[Semantic MediaWiki]] vs [[WikiBase]]: you're right! Semantic mediawiki is more for being an interface that can support unstructured and structured information in the same place, it's a lot more freeform and gestural, but at the cost of predictability/strictness/performance as a database. Definitely different tools with different applications, albeit with a decent amount of overlap in philosophy and etc.
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041059968234831872
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 19:32:48
|Channel=discourse-modeling
|Text=[[Page Schemas#Creating a new Schema]]
Page schemas is mostly a handy way to generate boilerplate templates and link them to semantic properties. A Form (using [[Page Forms]]) is an interface for filling in values for a template.
For an example of how this shakes out, see
[[:Category:Participant]]
[[Template:Participant]]
[[Form:Participant]]
* go to a `Category:CategoryName` page, creating it if it doesn't already exist.
* Click "Create schema" in top right
* If you want a form, check the "Form" box. it is possible to make a schema without a form. The schema just defines what pages will be generated, and the generated pages can be further edited afterwards (note that this might make them inconsistent with the schema)
* Click "add template" If you are only planning on having one template per category, name the template the same thing as the category.
* Add fields! Each field can have a corresponding form input (with a type, eg. a textbox, token input, date selector, etc.) and a semantic property.
* Once you're finished, save the schema
* Click "Generate pages" on the category page. Typically you want to uncheck any pages that are already bluelinks so you don't overwrite them. You might have to do the 'generate pages' step a few times, and it can take a few minutes, bc it's pretty buggy.
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041073096687370250
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 20:53:17
|Channel=mod-requests
|Text=OK we have a testing [[Mastodon#Test Instance]] server up and running at https://masto.synthesis-infrastructures.wiki
- since I am not going to bother setting up sending emails from the test instance, I need to manually bypass the email verification step for any accounts that are registered. If you want to make an account just for funzies, send me a DM here with the email you used to sign up with and i'll bypass it for you.
- this is not secure! at all! I did nothing to secure it! seriously this is just used for testing purposes! When the workshop ends I'll shut it down and archive the toots as static pages!
|Link=https://discord.com/channels/1029514961782849607/1032530251944833054/1041093349760839761
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-12 23:01:53
|Channel=wikibot
|Text=<@771783584105234462> [[WikiBot#Bugfixes]] just pushed an update to the wikibot that might fix the red X's you're getting - likely an error from when there isn't an avatar set, but the logs aren't being kept long enough back for me to see for sure.
|Link=https://discord.com/channels/1029514961782849607/1036778158122344448/1041125715636125696
}}
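Once a [[Page Schemas]] schema has populated semantic properties, [[Semantic MediaWiki]] can hand them back out through its <code>ask</code> API module. A rough Python sketch, assuming the wiki's API lives at the usual api.php path and using a hypothetical property name (the real properties are whatever is defined on [[:Category:Participant]]):

<syntaxhighlight lang="python">
import requests

# Assumed location of the wiki's MediaWiki API; adjust if api.php lives elsewhere.
API = "https://synthesis-infrastructures.wiki/api.php"

def ask(query: str) -> dict:
    """Run a Semantic MediaWiki #ask query through the 'ask' API module."""
    response = requests.get(
        API,
        params={"action": "ask", "query": query, "format": "json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# "Has affiliation" is a hypothetical property name; use whatever the
# Page Schemas schema on Category:Participant actually defines.
data = ask("[[Category:Participant]]|?Has affiliation|limit=50")
for title, page in data.get("query", {}).get("results", {}).items():
    print(title, page.get("printouts", {}))
</syntaxhighlight>

The same query syntax works in an inline <code>{{#ask: ...}}</code> on a wiki page; the API route is just handier for scripts and bots.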
== 22-11-13 ==
{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 03:34:58
|Channel=incentive-mechanisms
|Text=my examples are more the latter.
there are also strong roots in this idea of [[Infrastructure]] in CSCW, studying lots of attempts to get scientists to adopt new infrastructure and why they... didn't work.
one challenge is the [[Claim]] that "infrastructures often fail because of the inertia of the installed base" (existing software, workflows, norms, institutions, legal codes, etc.)
one decent entry point [[Source]] on this:
Information Infrastructures and the Challenge of the Installed Base | SpringerLink
https://link.springer.com/chapter/10.1007/978-3-319-51020-0_3
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041194438329901137
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 03:36:24
|Channel=incentive-mechanisms
|Text=another classic [[Source]] on [[Infrastructure]] is Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces | Information Systems Research
https://pubsonline.informs.org/doi/abs/10.1287/isre.7.1.111 -
Which describes the [[Worm Community System (WCS)]], an early attempt at building a new synthesis / knowledge-sharing infrastructure for worm scientists.
an evocative quote from the paper:
> Consider the set of tasks associated with getting the system up and running. WCS runs on a Sun Workstation as a standalone or remotely, or on a Mac with an ethernet connection remotely over the NSFnet, or, with less functionality, on a PC over the net. Prior to using WCS, one must buy the appropriate computer; identify and buy the appropriate windows-based interface; use communications protocols such as telnet and / or FTP; and locate the remote address where you "get" or operate the system. Each of these tasks requires that people trained in biology acquire skills taken for granted by systems developers. The latter have interpersonal and organizational networks that help them obtain necessary technical information, and also possess a wealth of tacit knowledge about systems, software, and configurations. For instance, identifying which version of X Windows to use on a workstation means understanding what class of software product X Windows is, installing it, and then linking its configuration properly with the immediate or remote link. Following instructions to "download the system via FTP" requires an understanding of file transfer protocols across the Internet, knowing which issue of the Worm Breeder's Gazette lists the appropriate electronic address, and knowing how FTP and X Windows work together.
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041194797551063151
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 03:37:44
|Channel=incentive-mechanisms
|Text=<@690574739785121815> can probably point to others, including his own work with the [[GLOBE system]] 🙂
http://globe.umbc.edu/
[[Source]]: Infrastructuring for Cross-Disciplinary Synthetic Science: Meta-Study Research in Land System Science | SpringerLink
https://link.springer.com/article/10.1007/s10606-017-9267-z
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041195133028274206
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 03:43:07
|Channel=incentive-mechanisms
|Text=a bit further afield, i'd point to the [[Open Science Framework]] as a thoughtful case study in incentive mechanism design focused on integration into *intrinsic* benefits (i'm more thoughtful about my science, i can easily document things so i don't forget them)
this podcast interview is a decent look into how he thinks about things: https://everythinghertz.com/69
if i read him right, i sort of agree that infrastructure (possibility) and usability and communities (norms) are prior to / foundational to incentives and policy. top-down incentives and policies that don't align with existing norms and usable practices may risk incentivizing 'just comply with it' practices or just fall flat, like some data sharing mandates.
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041196489214537728
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 03:44:39
|Channel=incentive-mechanisms
|Text=[[Source]] for the figure in the previous msg: https://assets.pubpub.org/5nv701md/01521405455055.pdf
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041196875237298318
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 08:26:56
|Channel=A request from the Discourse Modeling
|Text=Thanks <@322545403876868096> ! Added to https://synthesis-infrastructures.wiki/Discourse_Modeling. I guess I could have used [[Wikibot]] for that, but it was easier to do it by hand than figuring out the intricacies of Wikibot.
|Link=https://discord.com/channels/1029514961782849607/1041046303804772402/1041267914017357825
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-13 09:06:34
|Channel=incentive-mechanisms
|Text=[[Blue Obelisk]] is (i.e. still active) a remote asynchronous collaboration with no central management or funding. A large part consists of nodes representing software packages. See [[https://en.wikipedia.org/wiki/Blue_Obelisk]]. It works because several of the authors knew/know each other and agreed at the outset to adopt an interoperability mantra "Open Data, Open Standards, Open Source" (ODOSOS).
Because everyone agrees on the same approach to interoperability, the nodes can develop independently! The management is informal - a mailing list and occasional back channels. So there is a collaborative network - see WP article.
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041277888764325938
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-13 09:08:25
|Channel=computable-graphs
|Text=Yes, as a scientist I also made this assumption. For example the [[IPCC report]] is 10,000 pages of scientific discourse. Hmm!
|Link=https://discord.com/channels/1029514961782849607/1038983137222467604/1041278352809537637
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 10:25:58
|Channel=discourse-modeling
|Text=Just added a "proposal" tag to our discussion. In scientific discourse, that would be a category used in opinion papers etc. Is this already part of the [[Discourse Graph]] repertory?
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041297868809584680
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 10:36:54
|Channel=off-topic
|Text=Note to <@305044217393053697> about [[Wikibot]]: it doesn't pick up edits on messages that it has already added to the Wiki. The version in the Wiki ends up being obsolete. Could be important when someone edits to add "not", for example. Discord users are used to having this possibility.
|Link=https://discord.com/channels/1029514961782849607/1035691728356790322/1041300620109426699
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 13:02:42
|Channel=general-brainstorming
|Text=I think this is probably covered by [[Glamorous Toolkit]] (cc <@499904513038090240> who is a core user)!
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041337312648376430
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 13:07:02
|Channel=general-brainstorming
|Text=Yes, that's a prominent use case for [[Glamorous Toolkit]].
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041338405390401536
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 13:22:21
|Channel=general-brainstorming
|Text=Note that [[Glamorous Toolkit]] is not (yet) a development environment for Python. What is described here is "data science" on a Python codebase. You analyze the code, but you cannot change it. For Pharo Smalltalk, there is excellent code refactoring support in addition to analysis features.
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041342259930603592
}}{{Message
|Author=Flancian
|Avatar=https://cdn.discordapp.com/avatars/708787219992805407/3552e578a664f2e66d7bccad375e589d.png?size=1024
|Date Sent=22-11-13 15:09:48
|Channel=front-door
|Text=[[joel chan]] -> [[decentralized discourse graph]]
|Link=https://discord.com/channels/1029514961782849607/1029514961782849610/1041369298817536121
}}{{Message
|Author=Flancian
|Avatar=https://cdn.discordapp.com/avatars/708787219992805407/3552e578a664f2e66d7bccad375e589d.png?size=1024
|Date Sent=22-11-13 15:10:56
|Channel=oh ya sorry there s a zoom link in
|Text=thank you! [[meta]] why zoom instead of something like [[jitsi]]
|Link=https://discord.com/channels/1029514961782849607/1041369253011542146/1041369585477226587
}}{{Message
|Author=Flancian
|Avatar=https://cdn.discordapp.com/avatars/708787219992805407/3552e578a664f2e66d7bccad375e589d.png?size=1024
|Date Sent=22-11-13 16:06:41
|Channel=front-door
|Text=apologies I didn't make it to [[discourse modeling]]!
|Link=https://discord.com/channels/1029514961782849607/1029514961782849610/1041383613943513098
}}{{Message
|Author=Wutbot
|Avatar=https://cdn.discordapp.com/avatars/709165833888464966/d959819a9a72aa307c6ef1b91d7f94a2.png?size=1024
|Date Sent=22-11-13 18:04:14
|Channel=discourse-modeling
|Text=From the Gutenberg city of Mainz, the [[CLAIM]] home of modern intellectual synthesis and dissemination - thank you for your participation! I've enjoyed our discussions and look forward to their continuation!
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041413196940066936
}}
== 22-11-14 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 04:10:29
|Channel=general
|Text=I've got a question that seems appropriate for this group, if anyone is interested in sticking around in this discord :).
So I spend a decent amount of time talking to [[Librarians]] [[Libraries]], and it always strikes me that they are a group of people with a ton of training and experience specifically in synthesis-like work but seem often stymied by their tools, often for lack of resources. I should have asked earlier, are there any other libraries-adjacent people in this chat?
Here's a question for whoever is interested: what would you do (what tools, what would your workflow look like) for [[Manual Curation]] of thousands of papers from structured queries across multiple databases, with curation criteria that include
a) reasonably specific/computable **minimum standards** (peer-reviewed, word count, etc.) and
b) **topic standards** that are a series of keywords, but rely on someone doing manual curation to be able to recognize an intuitive n-depth similarity to the specific keywords
|Link=https://discord.com/channels/1029514961782849607/1041005559954022471/1041565762546053190
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 04:21:58
|Channel=libraries-and-manual-curation
|Text=encouraging the use of the thread for the sake of people's notifications as we enter slow-mode.
sidebar: this to me is one of the more interesting uses of this kind of wiki-bot, in a more long-lived chat and communication medium (glad 2 have <@708787219992805407> here for the long-timescales perspective btw). in both this and any future workshops, being able to plug in something like a wikibot that can let different threads get tagged to common concepts through time to different/overlapping discord servers and output to potentially multiple overlapping wikis is v interesting to me.
I'm gonna continue to make it easier to deploy because i feel like the [[Garden and Stream]] metaphor is one that can unfold on multiple timescales, and it would be cool to build out the ability to make that easier: how cool would it be if you didn't have to decide on a chat/document medium or have to make a new set at the start of an organizing project since it was arbitrary anyway and your infra supported use and crossposting across many media.
Eg. the very understanding surfacing of [[The Google Docs Problem]] because of [[Mediawiki]]'s lack of [[Synchronous Editing]] [[Live Editing]] and the need to remember to link out to external services rather than that being a natural expectation of a multimodal group and having systems that explicitly support that is illustrative to me. Maybe one description is being able to deploy a [[Context of Interoperability]] [[Interoperability]]: during this time period I am intending these documents/discord servers/hashtags/social media accounts/etc. to be able to crosspost between each other so that everyone needs to do as little as possible to make their workflows align
|Link=https://discord.com/channels/1029514961782849607/1041565762546053190/1041568655948922910
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 04:23:28
|Channel=libraries-and-manual-curation
|Text=Also I am doing another [[Sorry Anagora]] (https://anagora.org/sorry-anagora) by speculating about the overlay syntax in-medium, but the need for repeated wikilinks above there revives my interest in recursive wikilinks that can be used in overlapping terms
|Link=https://discord.com/channels/1029514961782849607/1041565762546053190/1041569029841764373
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 20:37:50
|Channel=general-brainstorming
|Text=in thinking about some of the problems from this weekend like the (affectionately titled) [[The Google Docs Problem]] and various other interface problems with the wiki, where it'll always be easier for people to interact with a system from something they're more used to using, I've been thinking about a more generalized kind of bridging where one can set a [[Context of Interoperability]] where for a given workshop, time period, project, etc. people can plug their tools together and work in a shared space without needing to make all of them anew - so for the simple example of this discord and this wiki, it should be possible to reuse this space to eg. connect to a different (or multiple) wikis, and vice versa to have a different discord connect to it. Along those lines, being able to have a synchronizing eg. git repository of the pages on the wiki so that people could edit them in obsidian or logseq or whatever their tool of choice is... this feels like an incredibly generic idea, so I feel like there must already be a ton of work on it, but it feels like it starts by just making a framework for bridging where the n-to-n problem is simplified by having a set of tools for auth and format translation and modeling documents and messages... I'm going to start sketching one piece of that with the [[Mediawiki-Git Bridge]], but I'm curious to hear if anyone either has any ideas, prior experience, or unmet needs that I might be orbiting around here
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041814238362079242
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 23:15:29
|Channel=bridges
|Text=This project, [[Git-Mediawiki]] looks pretty good: https://github.com/Git-Mediawiki/Git-Mediawiki
I'm gonna see if i can get a further translating layer between wiki markup and markdown going, thank god for [[Pandoc]]
|Link=https://discord.com/channels/1029514961782849607/1041814238362079242/1041853913479000245
}}
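On the [[Mediawiki-Git Bridge]] idea: the wiki-markup-to-Markdown leg can be delegated to [[Pandoc]]'s mediawiki reader and markdown writer. A small Python sketch of that translation layer (it shells out to a locally installed pandoc and is not part of [[Git-Mediawiki]] itself):

<syntaxhighlight lang="python">
import subprocess

def mediawiki_to_markdown(wikitext: str) -> str:
    """Convert MediaWiki markup to Markdown by shelling out to pandoc."""
    result = subprocess.run(
        ["pandoc", "--from=mediawiki", "--to=markdown"],
        input=wikitext,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

print(mediawiki_to_markdown("== Heading ==\n[[Wikilinks]] become ''regular'' links."))
</syntaxhighlight>

Swapping the reader and writer gives the reverse direction, so the same wrapper could round-trip pages between a wiki and a git repository of markdown files.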
== 22-11-15 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-15 08:38:56
|Channel=synthbots
|Text=<@743886679554654299> brilliant idea for a [[Local Algorithm]] [[Parametrization]] along the lines of using the [[Medium as Storage]] and parametrization from a conversation I was having just now
|Link=https://discord.com/channels/1029514961782849607/1041519468121161798/1041995710901526608
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-15 10:44:45
|Channel=semantic-climate
|Text=We have been developing code for extraction of "claims" from IPCC [[executive summary]]s. <@322545403876868096> <@499904513038090240> So far we have the following design:
* exec summary for chapter => 15-20 paras
* bold leading sentence for each para => leading_claim
* subsequent sentences => supporting_claims
* annotation (high|medium|robust|low) (evidence|agreement|confidence)
I will continue
|Link=https://discord.com/channels/1029514961782849607/1040060354161557574/1042027374377709578
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-15 10:57:35
|Channel=general-brainstorming
|Text=Thanks for [[Glamorous Toolkit]]. Watched the video and understood most of it. Impressive, and maybe the future, but not quite what I wanted now - it requires a fluency with creating new types of object on the fly and so a change in orientation. I want something that I can tag the methods with (say) 'PDF conversion', 'prototype', etc. I don't mind dumping that as static docs and navigating with Obsidian.
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1042030600359510026
}}
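A rough Python sketch of the [[executive summary]] parsing design outlined above (bold leading sentence as the leading claim, subsequent sentences as supporting claims, plus the confidence/evidence annotations). The naive sentence splitting and the plain-text input are simplifying assumptions, not the group's actual code:

<syntaxhighlight lang="python">
import re
from dataclasses import dataclass, field

# Calibrated-language annotations as listed in the design above.
ANNOTATION = re.compile(r"\b(high|medium|robust|low)\s+(evidence|agreement|confidence)\b", re.I)

@dataclass
class ParaClaims:
    leading_claim: str
    supporting_claims: list[str] = field(default_factory=list)
    annotations: list[tuple[str, str]] = field(default_factory=list)

def parse_paragraph(text: str) -> ParaClaims:
    """Split one executive-summary paragraph into a leading claim (the bold
    first sentence in the source) and supporting claims, collecting annotations."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    return ParaClaims(
        leading_claim=sentences[0] if sentences else "",
        supporting_claims=sentences[1:],
        annotations=ANNOTATION.findall(text),
    )

# Illustrative paragraph, not quoted from the report.
para = ("Global surface temperature will continue to increase (high confidence). "
        "Each increment of warming intensifies multiple hazards (medium evidence, high agreement).")
print(parse_paragraph(para))
</syntaxhighlight>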
== 22-11-23 ==
{{Message
|Author=Wutbot
|Avatar=https://cdn.discordapp.com/avatars/709165833888464966/d959819a9a72aa307c6ef1b91d7f94a2.png?size=1024
|Date Sent=22-11-23 18:59:16
|Channel=discourse-modeling
|Text=[[claim]] claims and questions dominate in natural conversation; the imbalance of sources & evidence is quite stark. This aligns with my mental model of *conversational charity*, where we assume our interlocutors *could* ground their statements in evidence if pressed, but skip this step in the interest of time.
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1045050924466458725
}}
== 22-12-20 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-12-20 10:34:44
|Channel=synthesizing-social-media
|Text=check this out. [[DIY Algorithms]]. instead of adding accounts to lists and autopopulating, you can directly add posts themselves. so then you can rig up whatever the frick algorithm you want to masto:
https://social.coop/@jonny/109545449455062668
https://github.com/sneakers-the-rat/mastodon/tree/feature/postlists
|Link=https://discord.com/channels/1029514961782849607/1038983225348993184/1054708427399626872
}}
== 23-01-31 ==
{{Message
|Author=bengo
|Avatar=https://cdn.discordapp.com/avatars/602622661125996545/f01c2d17587b5d9b1542dcf40c7c2e33.png?size=1024
|Date Sent=23-01-31 16:02:30
|Channel=computable-graphs
|Text=I've also recently been using logseq. I like how it just writes to markdown. I've been wanting to parse that markdown, look for well-known #hashtags and [[wikitags]], and build an rdf dataset. It looks like SBML is kinda like XML, so maybe something similar is possible there. Have you done anything more with logseq since this post in November?
|Link=https://discord.com/channels/1029514961782849607/1038983137222467604/1070011203939749958
}}
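A minimal Python sketch of the Logseq-markdown-to-RDF idea in the message above, using rdflib with placeholder namespaces; the regexes only cover plain [[wikitags]] and #hashtags, not Logseq's full block syntax:

<syntaxhighlight lang="python">
import re
from urllib.parse import quote

from rdflib import Graph, Literal, Namespace

# Placeholder namespaces; swap in whatever vocabulary the graph should use.
NOTE = Namespace("https://example.org/note/")
TAG = Namespace("https://example.org/tag/")

WIKILINK = re.compile(r"\[\[([^\[\]]+)\]\]")
HASHTAG = re.compile(r"(?<!\S)#([\w-]+)")

def block_to_rdf(block_id: str, text: str, graph: Graph) -> None:
    """Add triples linking a Logseq block to the wikitags and hashtags it mentions."""
    subject = NOTE[quote(block_id)]
    graph.add((subject, NOTE.text, Literal(text)))
    for tag in WIKILINK.findall(text) + HASHTAG.findall(text):
        graph.add((subject, NOTE.mentions, TAG[quote(tag.strip())]))

g = Graph()
block_to_rdf("2023-01-31-1", "Parsing [[SBML]] models, see #rdf and [[wikitags]]", g)
print(g.serialize(format="turtle"))
</syntaxhighlight>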