Discord Messages: Difference between revisions

From Synthesis Infrastructures
if i read him right, i sort of agree that infrastructure (possibility) and usability and communities (norms) are prior to / foundational to incentives and policy. top-down incentives and policies that don't align with existing norms and usable practices may risk incentivizing 'just comply with it' practices or just fall flat, like some data sharing mandates.
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041196489214537728
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 03:44:39
|Channel=incentive-mechanisms
|Text=[[Source]] for the figure in the previous msg: https://assets.pubpub.org/5nv701md/01521405455055.pdf
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041196875237298318
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 08:26:56
|Channel=A request from the Discourse Modeling
|Text=Thanks <@322545403876868096> ! Added to https://synthesis-infrastructures.wiki/Discourse_Modeling. I guess I could have used [[Wikibot]] for that, but it was easier to do it by hand than figuring out the intricacies of Wikibot.
|Link=https://discord.com/channels/1029514961782849607/1041046303804772402/1041267914017357825
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-13 09:06:34
|Channel=incentive-mechanisms
|Text=[[Blue Obelisk]] is (i.e. still active) a remote asynchronous collaboration with no central management or funding. A large part consists of nodes representing software packages. See [https://en.wikipedia.org/wiki/Blue_Obelisk]. It works because several of the authors knew/know each other and agreed at the outset to adopt an interoperability mantra "Open Data, Open Standards, Open Source" (ODOSOS).
Because everyone agrees on the same approach to interoperability, the nodes can develop independently! The management is informal - a mailing list and occasional back channels. So there is a collaborative network - see the WP article.
|Link=https://discord.com/channels/1029514961782849607/1041061650977009704/1041277888764325938
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-13 09:08:25
|Channel=computable-graphs
|Text=Yes, as a scientist I also made this assumption. For example the [[IPCC report]] is 10,000 pages of scientific discourse. Hmm!
|Link=https://discord.com/channels/1029514961782849607/1038983137222467604/1041278352809537637
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 10:25:58
|Channel=discourse-modeling
|Text=Just added a "proposal" tag to our discussion. In scientific discourse, that would be a category used in opinion papers etc. Is this already part of the [[Discourse Graph]] repertoire?
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041297868809584680
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 10:36:54
|Channel=off-topic
|Text=Note to <@305044217393053697> about [[Wikibot]]: it doesn't pick up edits on messages that it has already added to the Wiki. The version in the Wiki ends up being obsolete. Could be important when someone edits to add "not", for example. Discord users are used to having this possibility.
|Link=https://discord.com/channels/1029514961782849607/1035691728356790322/1041300620109426699
}}{{Message
|Author=joelchan86
|Avatar=https://cdn.discordapp.com/avatars/322545403876868096/6dd171845a7a4e30603d98ae510c77b8.png?size=1024
|Date Sent=22-11-13 13:02:42
|Channel=general-brainstorming
|Text=I think this is probably covered by [[Glamorous Toolkit]] (cc <@499904513038090240> who is a core user)!
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041337312648376430
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 13:07:02
|Channel=general-brainstorming
|Text=Yes, that's a prominent use case for [[Glamorous Toolkit]].
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041338405390401536
}}{{Message
|Author=Konrad Hinsen
|Avatar=https://cdn.discordapp.com/avatars/499904513038090240/343ae17c322fa09b3260f95e58bc4f29.png?size=1024
|Date Sent=22-11-13 13:22:21
|Channel=general-brainstorming
|Text=Note that [[Glamorous Toolkit]] is not (yet) a development environment for Python. What is described here is "data science" on a Python codebase. You analyze the code, but you cannot change it. For Pharo Smalltalk, there is excellent code refactoring support in addition to analysis features.
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041342259930603592
}}{{Message
|Author=Flancian
|Avatar=https://cdn.discordapp.com/avatars/708787219992805407/3552e578a664f2e66d7bccad375e589d.png?size=1024
|Date Sent=22-11-13 15:09:48
|Channel=front-door
|Text=[[joel chan]] -> [[decentralized discourse graph]]
|Link=https://discord.com/channels/1029514961782849607/1029514961782849610/1041369298817536121
}}{{Message
|Author=Flancian
|Avatar=https://cdn.discordapp.com/avatars/708787219992805407/3552e578a664f2e66d7bccad375e589d.png?size=1024
|Date Sent=22-11-13 15:10:56
|Channel=oh ya sorry there s a zoom link in
|Text=thank you! [[meta]] why zoom instead of something like [[jitsi]]
|Link=https://discord.com/channels/1029514961782849607/1041369253011542146/1041369585477226587
}}{{Message
|Author=Flancian
|Avatar=https://cdn.discordapp.com/avatars/708787219992805407/3552e578a664f2e66d7bccad375e589d.png?size=1024
|Date Sent=22-11-13 16:06:41
|Channel=front-door
|Text=apologies I didn't make it to [[discourse modeling]]!
|Link=https://discord.com/channels/1029514961782849607/1029514961782849610/1041383613943513098
}}{{Message
|Author=Wutbot
|Avatar=https://cdn.discordapp.com/avatars/709165833888464966/d959819a9a72aa307c6ef1b91d7f94a2.png?size=1024
|Date Sent=22-11-13 18:04:14
|Channel=discourse-modeling
|Text=From the Gutenberg city of Mainz, the [[CLAIM]] home of modern intellectual synthesis and dissemination - thank you for your participation! I've enjoyed our discussions and look forward to their continuation!
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1041413196940066936
}}
== 22-11-14 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 04:10:29
|Channel=general
|Text=I've got a question that seems appropriate for this group, if anyone is interested in sticking around in this discord :).
So I spend a decent amount of time talking to [[Librarians]] [[Libraries]], and it always strikes me that they are a group of people with a ton of training and experience specifically in synthesis-like work but seem often stymied by their tools, often for lack of resources. I should have asked earlier, are there any other libraries-adjacent people in this chat?
Here's a question for whoever is interested: what would you do (what tools, what would your workflow look like) for [[Manual Curation]] of thousands of papers from structured queries across multiple databases, with curation criteria that include
a) reasonably specific/computable **minimum standards** (peer-reviewed, word count, etc.) and
b) **topic standards** that are a series of keywords, but rely on someone doing manual curation to be able to recognize an intuitive n-depth similarity to the specific keywords
|Link=https://discord.com/channels/1029514961782849607/1041005559954022471/1041565762546053190
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 04:21:58
|Channel=libraries-and-manual-curation
|Text=encouraging the use of the thread for the sake of people's notifications as we enter slow-mode.
sidebar: this to me is one of the more interesting uses of this kind of wiki-bot, in a more long-lived chat and communication medium (glad 2 have <@708787219992805407> here for the long-timescales perspective btw). in both this and any future workshops, being able to plug in something like a wikibot that can let different threads get tagged to common concepts through time to different/overlapping discord servers and output to potentially multiple overlapping wikis is v interesting to me.
I'm gonna continue to make it easier to deploy because i feel like the [[Garden and Stream]] metaphor is one that can unfold on multiple timescales, and it would be cool to build out the ability to make that easier: how cool would it be if you didn't have to decide on a chat/document medium or have to make a new set at the start of an organizing project since it was arbitrary anyway and your infra supported use and crossposting across many media.
Eg. the very understanding surfacing of [[The Google Docs Problem]] because of [[Mediawiki]]'s lack of [[Synchronous Editing]] [[Live Editing]] and the need to remember to link out to external services rather than that being a natural expectation of a multimodal group and having systems that explicitly support that is illustrative to me. Maybe one description is being able to deploy a [[Context of Interoperability]] [[Interoperability]]: during this time period I am intending these documents/discord servers/hashtags/social media accounts/etc. to be able to crosspost between each other so that everyone needs to do as little as possible to make their workflows align
|Link=https://discord.com/channels/1029514961782849607/1041565762546053190/1041568655948922910
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 04:23:28
|Channel=libraries-and-manual-curation
|Text=Also I am doing another [[Sorry Anagora]] (https://anagora.org/sorry-anagora) by speculating about the overlay syntax in-medium, but the need for repeated wikilinks above there revives my interest in recursive wikilinks that can be used in overlapping terms
|Link=https://discord.com/channels/1029514961782849607/1041565762546053190/1041569029841764373
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 20:37:50
|Channel=general-brainstorming
|Text=in thinking about some of the problems from this weekend like the (affectionately titled) [[The Google Docs Problem]] and various other interface problems with the wiki, where it'll always be easier for people to interact with a system from something they're more used to using, I've been thinking about a more generalized kind of bridging where one can set a [[Context of Interoperability]] where for a given workshop, time period, project, etc. people can plug their tools together and work in a shared space without needing to make all of them anew - so for the simple example of this discord and this wiki, it should be possible to reuse this space to eg. connect to a different (or multiple) wikis, and vice versa to have a different discord connect to it. Along those lines, being able to have a synchronizing eg. git repository of the pages on the wiki so that people could edit them in obsidian or logseq or whatever their tool of choice is... this feels like an incredibly generic idea, so I feel like there must already be a ton of work on it, but it feels like it starts by just making a framework for bridging where the n-to-n problem is simplified by having a set of tools for auth and format translation and modeling documents and messages... I'm going to start sketching one piece of that with the [[Mediawiki-Git Bridge]], but I'm curious to hear if anyone either has any ideas, prior experience, or unmet needs that I might be orbiting around here
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1041814238362079242
}}{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-14 23:15:29
|Channel=bridges
|Text=This project, [[Git-Mediawiki]] looks pretty good: https://github.com/Git-Mediawiki/Git-Mediawiki
I'm gonna see if i can get a further translating layer between wiki markup and markdown going, thank god for [[Pandoc]]
|Link=https://discord.com/channels/1029514961782849607/1041814238362079242/1041853913479000245
}}
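The wiki-markup-to-Markdown translation layer discussed in #bridges above can be sketched in miniature. This is an illustrative stand-in only: Pandoc (`pandoc -f mediawiki -t markdown`) handles the full grammar, while this toy version covers just two constructs, and the function name is invented here.

```python
import re

def mediawiki_to_markdown(text: str) -> str:
    """Translate a small subset of MediaWiki markup to Markdown.

    Illustrative only -- Pandoc handles the full grammar; this covers
    just bold text and plain wikilinks.
    """
    text = re.sub(r"'''(.+?)'''", r"**\1**", text)          # '''bold''' -> **bold**
    text = re.sub(r"\[\[([^|\]]+)\]\]", r"[\1](\1)", text)  # [[Page]] -> [Page](Page)
    return text
```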
== 22-11-15 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-11-15 08:38:56
|Channel=synthbots
|Text=<@743886679554654299> brilliant idea for a [[Local Algorithm]] [[Parametrization]] along the lines of using the [[Medium as Storage]] and parametrization from a conversation I was having just now
|Link=https://discord.com/channels/1029514961782849607/1041519468121161798/1041995710901526608
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-15 10:44:45
|Channel=semantic-climate
|Text=We have been developing code for extraction of "claims" from IPCC [[executive summary]]s. <@322545403876868096> <@499904513038090240> So far we have the following design:
* exec summary for chapter => 15-20 paras
* bold leading sentence for each para => leading_claim
* subsequent sentences => supporting_claims
* annotation (high|medium|robust|low) (evidence|agreement|confidence)
I will continue
|Link=https://discord.com/channels/1029514961782849607/1040060354161557574/1042027374377709578
}}{{Message
|Author=petermr
|Avatar=
|Date Sent=22-11-15 10:57:35
|Channel=general-brainstorming
|Text=Thanks for [[Glamorous Toolkit]]. Watched the video and understood most of it. Impressive, and maybe the future, but not quite what I wanted now - it requires a fluency with creating new types of object on the fly and so a change in orientation. I want something that I can tag the methods with (say) 'PDF conversion', 'prototype', etc. I don't mind dumping that as static docs and navigating with Obsidian.
|Link=https://discord.com/channels/1029514961782849607/1034992937391632444/1042030600359510026
}}
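The claim-extraction design sketched in #semantic-climate above (bold leading sentence becomes the leading claim, subsequent sentences become supporting claims, plus the IPCC-style evidence/agreement/confidence annotations) could look roughly like this. The function and field names are hypothetical, not the group's actual code.

```python
import re

# IPCC-style calibrated-language annotations, e.g. "(high confidence)".
ANNOTATION = re.compile(r"\((high|medium|robust|low|limited) (evidence|agreement|confidence)\)")
# A MediaWiki-bold leading sentence at the start of a paragraph.
BOLD = re.compile(r"^'''(.+?)'''\s*")

def extract_claims(paragraph: str) -> dict:
    """Split one executive-summary paragraph into a leading claim,
    supporting claims, and any confidence annotations."""
    text = paragraph.strip()
    m = BOLD.match(text)
    leading = m.group(1) if m else ""
    rest = text[m.end():] if m else text
    return {
        "leading_claim": leading,
        "supporting_claims": [s for s in re.split(r"(?<=\.)\s+", rest) if s],
        "annotations": ANNOTATION.findall(paragraph),
    }
```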
== 22-11-23 ==
{{Message
|Author=Wutbot
|Avatar=https://cdn.discordapp.com/avatars/709165833888464966/d959819a9a72aa307c6ef1b91d7f94a2.png?size=1024
|Date Sent=22-11-23 18:59:16
|Channel=discourse-modeling
|Text=[[claim]] claims and questions dominate in natural conversation; the imbalance of sources & evidence is quite stark. This aligns with my mental model of *conversational charity*, where we assume our interlocutors *could* ground their statements in evidence if pressed, but skip this step in the interest of time.
|Link=https://discord.com/channels/1029514961782849607/1038988750677606432/1045050924466458725
}}
== 22-12-20 ==
{{Message
|Author=sneakers-the-rat
|Avatar=https://cdn.discordapp.com/avatars/305044217393053697/2970b22bd769d0cd0ee1de79be500e85.png?size=1024
|Date Sent=22-12-20 10:34:44
|Channel=synthesizing-social-media
|Text=check this out. [[DIY Algorithms]]. instead of adding accounts to lists and autopopulating, you can directly add posts themselves. so then you can rig up whatever the frick algorithm you want to masto:
https://social.coop/@jonny/109545449455062668
https://github.com/sneakers-the-rat/mastodon/tree/feature/postlists
|Link=https://discord.com/channels/1029514961782849607/1038983225348993184/1054708427399626872
}}
== 23-01-31 ==
{{Message
|Author=bengo
|Avatar=https://cdn.discordapp.com/avatars/602622661125996545/f01c2d17587b5d9b1542dcf40c7c2e33.png?size=1024
|Date Sent=23-01-31 16:02:30
|Channel=computable-graphs
|Text=I've also recently been using logseq. I like how it just writes to markdown. I've been wanting to parse that markdown, look for well-known #hashtags and [[wikitags]], and build an rdf dataset. It looks like SBML is kinda like XML, so maybe something similar is possible there. Have you done anything more with logseq since this post in November?
|Link=https://discord.com/channels/1029514961782849607/1038983137222467604/1070011203939749958
}}
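The markdown-to-RDF idea in the last message can be sketched minimally. Everything specific here is an assumption: the base URI and the `tagged`/`links` predicate names are placeholders for illustration, not an established vocabulary.

```python
import re

TAG = re.compile(r"(?<!\S)#([\w-]+)")       # #hashtags
WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")  # [[wikitags]]

def markdown_to_ntriples(page: str, text: str, base="https://example.org/") -> list[str]:
    """Emit N-Triples linking a page to the #hashtags and [[wikilinks]]
    found in its markdown body. Base URI and predicates are placeholders."""
    def uri(name):
        return f"<{base}{name.replace(' ', '_')}>"
    triples = [f"{uri(page)} {uri('tagged')} {uri(t)} ." for t in TAG.findall(text)]
    triples += [f"{uri(page)} {uri('links')} {uri(w)} ." for w in WIKILINK.findall(text)]
    return triples
```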

Latest revision as of 16:03, 31 January 2023

22-10-16

sneakers-the-rat#testing-wikibot 22-10-17 00:57:56

Once again i am talking about my Test Topic that's a part of my Test Project and i want to see some links in the reply

sneakers-the-rat#testing-wikibot 22-10-17 01:10:41

Now hopefully without the uncaught exceptions for Test Topic

sneakers-the-rat#testing-wikibot 22-10-17 01:15:09

Now I am going to link a message into a specific section in my Test Topic#New Section and also see if we propagate that through to the wikilinks in Test Project


sneakers-the-rat#testing-wikibot 22-10-17 01:38:21

i'll leave the bot running for a lil bit but yeah it's just running on my laptop for now, will move it over to the linode running the wiki when i go to switch the url. made a page to document the WikiBot#Status Updates




22-10-31

sneakers-the-rat#testing-wikibot 22-10-31 23:55:22
sneakers-the-rat#wikibot 22-11-01 00:03:06

I would suggest turning Discord#Notifications off for this channel. on mobile click the person looking icon in the top right and then the notification options are near the top. on desktop there should be a bell-looking icon along the top row of icons


22-11-01

sneakers-the-rat#testing-wikibot 22-11-01 02:02:02


22-11-02

sneakers-the-rat#testing-wikibot 22-11-02 07:43:01
sneakers-the-rat#testing-wikibot 22-11-02 07:43:07
sneakers-the-rat#fedi 22-11-02 08:12:57

Excuse me let me be a good role model on continuous archiving. One of the reasons I am excited about academics adopting Mastodon is because ActivityPub is built on Linked Data, which i think inspires the possibility for fundamentally new modes of scholarly communication. I have written about this in the past (10.48550/arXiv.2209.07493), but will do my best to decenter my own ideas except for when I am using them as a demonstration of the technology developed for the workshop

sneakers-the-rat#wikibot 22-11-02 08:14:45

omg lmao WikiBot#TODO Don't make a separate page using semantic wikilinks lol

sneakers-the-rat#fedi 22-11-02 08:17:07

Then i just made a page to link to the pages. There's not really a well defined way to do meta-categorization like that in-medium as far as I'm aware, but am happy to receive WikiBot#Feature Requests about it

joelchan86#table-2 22-11-02 12:27:51

Konrad, are you familiar with Chemical Markup Language (CML)? I stumbled across it on Twitter a few weeks ago via discussions about open publishing, and was surprised at the longevity of the project. I don’t love XML, but it seems to have gained some traction in its day, though I am not sure how active it is these days. https://en.m.wikipedia.org/wiki/Chemical_Markup_Language


22-11-03

joelchan86#table-2 22-11-03 02:56:36

ah, that is both informative and sad to hear. i think ahead of its time is a reasonable diagnosis.

ScholOnto I think was also ahead of its time: had a working prototype integration into a Word processor for directly authoring discourse-graph like things while drafting a manuscript (described here: https://onlinelibrary.wiley.com/doi/abs/10.1002/int.20188)

sneakers-the-rat#off-topic 22-11-03 11:27:24

this is almost exactly the idea with the WikiBot that pushes to a Semantic Wiki, and good to have a name in Gradual Enrichment. looking forward to digging through the references and finishing that piece^ tomorrow. (and finishing the n-back linking syntax so I can just directly include the piece in the annotation that is this message). thanks for sharing 🙂

sneakers-the-rat#In terms of overlap with my own 22-11-03 22:51:30

Project Ideas#Linked Data Publishing On Activitypub

ooh I'm very interested in this. so are you thinking a Twitter#Bridge -> ActivityPub#Bridge where one could use markup within the twitter post to declare Linked Data#Markup Syntax and then post to AP? I have thought about this kind of thing before, like using a bot command syntax to declare prefixes by doing something like ``` @ bot prefix foaf: https:// (ontology URL) ``` or ``` @ bot alias term: foaf.LongerNameForTerm ``` so that one could do maybe a semantic wikilink like `[ [term::value] ]` either within the tweet or as a reply to it (so the tweet itself doesn't become cluttered/it can become organized post-hoc?).

I've also thought about a bridge (I called Threadodo) that implements that kind of command syntax to be able to directly archive threads to Zenodo along with structured information about the author, but this seems more interesting.

I can help try and clear some of the groundwork out of the way to make it easier for you and other interested participants to experiment. I have asked around fedi a bunch for a very minimal AP server implementation, and I could try and find one (or we could try and prototype one) if you want to experiment with that :), and I can also document and show you a tweepy-based bot that has an extensible command/parsing system too
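The prefix-declaration plus semantic-wikilink idea above can be sketched roughly. The command syntax here (a `prefix.term::value` form inside the wikilink) is invented for illustration; the message itself leaves the exact syntax open.

```python
import re

# Hypothetical bot-command syntax: "@ bot prefix foaf: <url>"
PREFIX = re.compile(r"@\s*bot\s+prefix\s+(\w+):\s+(\S+)")
# Hypothetical semantic wikilink: [[prefix.term::value]]
SEMLINK = re.compile(r"\[\[(\w+)\.(\w+)::([^\]]+)\]\]")

def parse(posts):
    """Collect prefix declarations, then expand semantic wikilinks in the
    same stream of posts into (full predicate URI, value) pairs."""
    prefixes, statements = {}, []
    for post in posts:
        for name, url in PREFIX.findall(post):
            prefixes[name] = url
        for pfx, term, value in SEMLINK.findall(post):
            if pfx in prefixes:
                statements.append((prefixes[pfx] + term, value))
    return statements
```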

sneakers-the-rat#table-3 22-11-03 23:11:44

hello Matthew! very curious about this. As someone not familiar with materials science, I'm curious if you could say more about what OPTIMADE does in this case? Is the idea that the zenodo plugin parses some paper, and then sends it to other listening clients that the parsed data comes from the paper? is it a vocabulary, or communication protocol, or both? and what kind of information would it be parsing/do materials scientists want to be able to analyze in an automated way? sorry if I am being dense, just curious because I've always admired materials but have had very little exposure.


22-11-04

Konrad Hinsen#Wikibot 22-11-04 16:21:19

Nice idea, that Wikibot! Do I understand correctly that it grabs all messages that contain a page name in double brackets, and adds them to the Wiki page with that name? (this message being as much a test as a question of course)

sneakers-the-rat#Wikibot 22-11-04 21:01:30

the idea is exactly to merge the Garden and Stream we have here, or as olde wiki culture called it, DocumentMode and ThreadMode in a process of Gradual Enrichment http://meatballwiki.org/wiki/DocumentMode http://meatballwiki.org/wiki/ThreadMode
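Konrad's description of the Wikibot above (grab messages containing double-bracketed page names, append them to the page with that name) can be sketched minimally; the function name and in-memory routing are illustrative, not the bot's actual implementation.

```python
import re
from collections import defaultdict

# Page name from a wikilink, stopping at "|" (alias) or "#" (section).
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def route_messages(messages):
    """Group messages by the wiki pages their [[wikilinks]] name, the way
    the bot appends each message to every page it mentions."""
    pages = defaultdict(list)
    for msg in messages:
        for page in WIKILINK.findall(msg):
            pages[page.strip()].append(msg)
    return dict(pages)
```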


22-11-05

sneakers-the-rat#mod-requests 22-11-05 01:23:44

Wiki#Organization As we get towards proposing projects and organizing ideas, I've added a set of pages for the different concepts that y'all indicated either here or in your applications: https://synthesis-infrastructures.wiki/Concepts

Each page should give a list of participants that have an `Interested In` property on their participant page (or you can declare interest on the page using the template, see example at https://synthesis-infrastructures.wiki/Template:Concept) as another way of finding people with similar interests. Feel free to add additional interests from your own page, and add new pages by using the `Concept` template (which provides the "Discord Messages" and "Interested Participants" sections) on any new page. The pages are all stubs at the moment, but I have made links between related concepts/subconcepts/etc. These will also help us catch any wikilinks made from within the discord 🙂


22-11-07

sneakers-the-rat#black-boxes 22-11-07 01:11:33

Hello Pooja and welcome 🙂 I certainly share your concerns here, and would love to read any writing or work you've done on the topic! I'm curious if you had any initial inklings of Discovery systems that go beyond the Search#Black Box Model? I have my own ideas but as you say, everyone has a unique standpoint and experience that structures their ideas so I would love to hear yours!

sneakers-the-rat#general-brainstorming 22-11-07 01:33:13

For everyone that is embarking on a project, how about setting up a page under Projects where we can start organizing people that are interested in them, and setting up any prerequisite infra/tools so we don't have to be struggling with stuff like provisioning servers and getting permissions setup during our limited time this weekend 🙂


22-11-08

sneakers-the-rat#linked-data-activitypub 22-11-08 23:32:39

To add to the Reading List#Linked Data on Linked Data, Standards, and Collaboration: a piece from one of the authors of ActivityPub on the merger of the distributed messaging and linked data communities that I think puts into context what a massive achievement AP was http://dustycloud.org/blog/on-standards-divisions-collaboration/


22-11-09

sneakers-the-rat#SEPIO + ActivityStreams via JSON-LD 22-11-09 23:25:43

Haven't finished n-back thread capture yet but this rocks and let's keep track of it on the wiki. Scroll up in this thread for SEPIO + ActivityStreams/ActivityPub + JSON-LD. On a train now and having to work on some other stuff but this is making me unreasonably excited to check out later


22-11-10

sneakers-the-rat#mod-requests 22-11-10 00:15:39

Reminder as the conversations start thickening (which has been great to read, looking forward to jumping in more later when I have a few minutes) and thus become a bit harder to keep track of that you should feel free to make liberal use of Wikilinks in your posts to archive them in the wiki and make them more discoverable by people outside of your table/project. (For example this message will appear here https://synthesis-infrastructures.wiki/Wikilinks ). This would be especially useful because it looks like some folks are interested in doing some <#1038988750677606432> on the wiki!

sneakers-the-rat#discourse-modeling 22-11-10 08:58:52

I am about to go to bed but personally I favor the model of the federated wiki, that the same "term" or page title in the case of the wiki has many possible realizations, and what's useful is their multiplicity. I think everything2 was an early model of this, but basically it cuts to the core of the history of early wikis, to the initial fork of ward's wiki into meatball. the singularity of meaning as implied by Wikipedia is imo an artifact of wikis having been adopted by encyclopedists, with all the diderot-like enlightenment-era philosophy that entails. this seems exceptionally apt today and yesterday given Aaron Swartz's telling of that history, particularly his "Who Writes Wikipedia?" Everyone can contribute in a linked context, and that's what the synthesis of wikilike thinking, linked data, and distributed messaging gives us :). I write about this idea more completely here: https://jon-e.net/infrastructure/#the-wiki-way after my take on the critical/ethical need for forking in information systems as given by the case study of NIH's biomedical translator (link to most relevant part in the middle of the argument, the justification and motivation precedes it): https://jon-e.net/infrastructure/#problematizing-the-need-for-a-system-intended-to-link-all-or-eve

joelchan86#discourse graphs 22-11-10 15:51:29

the DiscourseGraphs idea is rooted in a bunch of models like SEPIO (h/t <@602622661125996545>) and ScholOnto that have been around for various amounts of time, though not yet with (to my knowledge) serious widespread adoption.

joelchan86#discourse graphs 22-11-10 15:55:39

we think the problem now is user-friendly tools and workflows that can create discourse graph structures, and have seen some exciting progress across a bunch of new user-facing "personal wikis". but bridging from personal to communal is still a challenge, partially bc of tooling.

this is why i'm excited about the Discourse Modeling idea, which i sort of understand as a way to try to instantiate something like Discourse Graphs into a wiki (bc wikis have a lot more in-built affordances for collaboration, such as edit histories, talk pages, etc.), which may hopefully lead to a lower barrier to entry for collaborative discourse graphing.

a high hope is that we can develop a process that is easy enough to understand and implement that can then be applied to discourse graphing the IPCC or similarly large body of research on a focused, contentious, interdisciplinary topic.

other examples include:
- effects of masks on community transmission (can't do decisive RCTs, need to synthesize)
- effects of social media on political (dys)function (existing crowdsourced lit review here, in traditional narrative form: https://docs.google.com/document/d/1vVAtMCQnz8WVxtSNQev_e1cGmY9rnY96ecYuAj6C548/edit#)

sneakers-the-rat#Thanks sneakers the rat2880 Your site is 22-11-10 20:40:46

I don't know of any either! The closest I know of is ward's Fedwiki: but i plan on making one (probably more related to <#1038983225348993184> than this channel, which i am trying hard not to derail lol)

Konrad Hinsen#Thanks sneakers the rat2880 Your site is 22-11-10 20:45:36

Looking forward to your work in this space! I do know about Fedwiki but only as a spectator. I tried to convince a few colleagues to set up a network of Fedwikis in our research domain, but nobody was keen on becoming a sysadmin to run their own Wiki instance.

sneakers-the-rat#Thanks sneakers the rat2880 Your site is 22-11-10 20:58:52

yes anagora does have a rough kind of federation! it's a very very permissive model which I love, markdown and plaintext with wikilinks, a lot of the wikis that it federates with are just git repositories of .md files 🙂

sneakers-the-rat#anagora 22-11-10 21:01:50

Maybe Synthesis Infrastructures 2022 or something? but we haven't made one yet no lol

sneakers-the-rat#semantic-climate 22-11-10 21:38:12

another group ( <#1038988750677606432> ) will i believe be analyzing the semantic information on the wiki ( https://synthesis-infrastructures.wiki/Main_Page ), and you can archive the text of any message onto a wiki page by using Wikilinks: ( so eg. this message will go to https://synthesis-infrastructures.wiki/Wikilinks )

joelchan86#discourse graphs 22-11-10 21:38:25

in human-computer interaction we have a similar problem of trying to think about and synthesize across many genres of contributions/research. one map (adapted for information studies) breaks things out into "empirical" contributions (these most often follow the standard intro/methods/results/discussion format), "conceptual" contributions (which are often more amorphous theory papers), and "constructive" contributions (making a new system/method)

from here: HCI Research as Problem-Solving


22-11-11

Konrad Hinsen#WikiFunctions 22-11-11 05:26:42

That said, the more abstract idea of defining a data model plus execution semantics that any programming language can plug into looks very promising. That aspect of WikiLambda was in fact one of my inspirations for developing Digital Scientific Notations.

Konrad Hinsen#Thanks sneakers the rat2880 Your site is 22-11-11 05:45:14

I'll try to turn this thread into Project Ideas#Federated knowledge synthesis: identify protocols, data models, tools, practices, etc. that can support the process of synthesizing and formalizing scientific knowledge, then build on these ingredients. One dimension is going from narratives via discourse graphs to knowledge graphs. Another dimension is going from conceptual ideas to formal systems.

sneakers-the-rat#Thanks sneakers the rat2880 Your site is22-11-11 06:50:07

we're in the process of consolidating the ideas into group pages, so far the group pages are incomplete, but tomorrow (I'm on Pacific time, US) will work on that and take whatever ya write and move it over there 🙂 <@322545403876868096> got this started here: https://synthesis-infrastructures.wiki/Workshop_Working_Groups and then we'll split those up into pages in Category:Group

joelchan86#what is obsidian-logseq-roam22-11-11 14:05:36

I think of all of these tools as "personal hypertext notebooks" - basically taking what is possible in wikis (organizing by means of linking, hypertext) and lowering the barrier to entry (no need to spin up a server, can just download an app and go).

The common thread across these notebooks then is allowing for organizing and exploring by means of bidirectional hyperlinks between "notes": - In Obsidian each linkable note is a markdown file and can be as short or long as you like - in Logseq/Roam and other outliner-style notebooks, you can link "pages", and also individual bullets in the outlines on each page.

In this way, the core functionality of these tools is similar to a wiki, but they do leave out a lot of the collaborative functionality that makes wikis work well (granular versioning and edit histories, talk pages, etc.). So for folks like <@305044217393053697> who are comfortable with wikis already, they add marginal value IMO.

Their technical predecessors in the "personal (vs. collaborative) wiki" space include TiddlyWiki and emacs org-mode (and inherit their technical extensibility: many users create their own extensions of the notebooks' functionality. an example is the Roam Discourse Graph extension that <@824740026575355906> is using).

These tools also tend to trace their idea lineage back to vannevar bush's Memex and ted nelson's Xanadu.

joelchan86#what is obsidian-logseq-roam22-11-11 14:08:30

These tools are still not entirely mainstream compared to tools like Notion, which is related to your experience trying to learn more about the tools - so they tend to have a steep learning curve!

IMO the best way to get a feel for what they are is to see some examples/videos.

I like this video for an overview of Logseq: https://www.youtube.com/watch?v=ZtRozP8hfEY&t=6s

I describe Roam and the Roam Discourse Graph extension in this portion of a talk I recently gave: https://youtu.be/jH-QF7rVSeo?t=1417

joelchan86#what is obsidian-logseq-roam22-11-11 19:01:10

i agree it's not universal! my feeling is that Claim: a statement (claim or evidence) might be the more universal element: - empirical work also consists of statements about the world (this is less controversial) - design/technological innovation rests in part on claims about a) what is needed in the world, what is hard to do, constraints, and b) what is needed to succeed: examples here: https://deepscienceventures.com/content/the-outcomes-graph-2 (h/t <@559775193242009610>) - theories often consist of systems of core claims (e.g., in models like what <@824740026575355906> and <@734802666441408532> are working with, where we can think of the claims as subgraphs of the overall knowledge graph)

see, e.g., Evidence from this review of models of scientific knowledge https://publish.obsidian.md/joelchan-notes/discourse-graph/evidence/EVD+-+Four+positivist+epistemological+models+from+philosophy+of+science%2C+including+Popper%2C+emphasiz...+statements+as+a+core+component+of+scientific+knowledge+-+%40harsDesigningScientificKnowledge2001

and Evidence convergence/contrasts across users of the Roam Discourse Graph extension in terms of building blocks: common thread across all was Evidence

sneakers-the-rat#graphdb22-11-11 23:05:02

super glad to hear that the endpoint worked btw, i've never used SPARQL and am more used to just making my own data models that generate API queries & parse etc. so I would love to see what you've been doing and how you've been using it - I'll make a SPARQL page linked off the wiki page that gives the URL and maybe we can embed sample queries and etc. there


22-11-12

sneakers-the-rat#discourse-modeling22-11-12 03:30:30

I am definitely on team "scruffy" per Lindsay Poirier's typology (BTW "A Turn for the Scruffy" should be on the collective Reading List for anyone who hasn't come across it) and so yes definitely "Own-terminology" iterating into something shared, part of why i love the semwiki model of building them. On the other end of things for tomorrow - Is there any particular existing ontology/schema/etc. anyone in this group would like to have imported into the wiki for discourse modeling?

Wutbot#general-brainstorming22-11-12 11:22:37

those brackets cue the WikiBot to link the message to the wiki page containing the mentioned terms

sneakers-the-rat#synthesizing-social-media22-11-12 16:02:25
joelchan86#general-brainstorming22-11-12 17:35:29
joelchan86#what is obsidian-logseq-roam22-11-12 17:53:19

hi peter, yes, the `...` (wikilinks) syntax has been quite widely adopted, spread from wikis!

sneakers-the-rat#discourse-modeling22-11-12 18:39:00

Info on using Page Schemas: So you could only need to make schemas for the different types of nodes that you'd want, so if i'm reading right then yes you would have several hundred pages but only 4-5 schemas.

A schema is defined (using page schemas) from a Category Page

A page is only ever loosely connected to a schema (rather than strictly, ie. can only have/requires the schema's fields) through its category. Page schemas then generates a template for the category. Typically templates will add a page to a category anyway ([ [Category:CategoryName] ]). So a page can have multiple schemas - that would just look like using multiple templates on the same page.
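
For instance, the template that Page Schemas generates for a Participant category might look roughly like this (a hypothetical sketch; the property name is made up, not copied from the wiki):

```wikitext
<!-- sketch of what Page Schemas might generate for Template:Participant -->
[[Has name::{{{Name|}}}]]
[[Category:Participant]]
```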

sneakers-the-rat#discourse-modeling22-11-12 18:40:38

Semantic MediaWiki vs WikiBase: you're right! Semantic mediawiki is more for being an interface that can support unstructured and structured information in the same place, it's a lot more freeform and gestural, but at the cost of predictability/strictness/performance as a database. Definitely different tools with different applications, albeit with a decent amount of overlap in philosophy and etc.

sneakers-the-rat#discourse-modeling22-11-12 19:32:48

Page Schemas#Creating a new Schema Page schemas is mostly a handy way to generate boilerplate templates and link them to semantic properties. A Form (using Page Forms) is an interface for filling in values for a template.

For an example of how this shakes out, see Category:Participant Template:Participant Form:Participant

  • go to a `Category:CategoryName` page, creating it if it doesn't already exist.
  • Click "Create schema" in top right
  • If you want a form, check the "Form" box. it is possible to make a schema without a form. The schema just defines what pages will be generated, and the generated pages can be further edited afterwards (note that this might make them inconsistent with the schema)
  • Click "add template" If you are only planning on having one template per category, name the template the same thing as the category.
  • Add fields! Each field can have a corresponding form input (with a type, eg. a textbox, token input, date selector, etc.) and a semantic property.
  • Once you're finished, save the schema
  • Click "Generate pages" on the category page. Typically you want to uncheck any pages that are already bluelinks so you don't overwrite them. You might have to do the 'generate pages' step a few times, and it can take a few minutes, bc it's pretty buggy.
sneakers-the-rat#mod-requests22-11-12 20:53:17

OK we have a testing Mastodon#Test Instance server up and running at https://masto.synthesis-infrastructures.wiki - since I am not going to bother setting up sending emails from the test instance, I need to manually bypass the email verification step for any accounts that are registered. If you want to make an account just for funzies, send me a DM here with the email you used to sign up with and i'll bypass it for you. - this is not secure! at all! I did nothing to secure it! seriously this is just used for testing purposes! When the workshop ends I'll shut it down and archive the toots as static pages!

sneakers-the-rat#wikibot22-11-12 23:01:53

<@771783584105234462> WikiBot#Bugfixes just pushed an update to the wikibot that might fix the red X's you're getting - likely an error from when there isn't an avatar set, but the logs aren't being kept long enough back for me to see for sure.
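
The fix is presumably a guard of this general shape (a hypothetical sketch with made-up field names, not the actual wikibot code): when no avatar hash is set, fall back to one of Discord's default embed avatars instead of building a broken URL.

```python
# Hypothetical sketch: avoid broken avatar images ("red X's") when a
# Discord author has no custom avatar set.
DEFAULT_AVATAR = "https://cdn.discordapp.com/embed/avatars/0.png"

def avatar_url(author: dict) -> str:
    """Return the author's avatar URL, falling back to a default."""
    avatar = author.get("avatar")
    if not avatar:
        return DEFAULT_AVATAR
    return f"https://cdn.discordapp.com/avatars/{author['id']}/{avatar}.png?size=1024"
```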


22-11-13

joelchan86#incentive-mechanisms22-11-13 03:34:58

my examples are more the latter.

there are also strong roots in this idea of Infrastructure in CSCW, studying lots of attempts to get scientists to adopt new infrastructure and why they... didn't work.

one challenge is the Claim that "infrastructures often fail because of the inertia of the installed base" (existing software, workflows, norms, institutions, legal codes, etc.)

one decent entry point Source on this: Information Infrastructures and the Challenge of the Installed Base

joelchan86#incentive-mechanisms22-11-13 03:36:24

another classic Source on Infrastructure is Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces

joelchan86#incentive-mechanisms22-11-13 03:37:44

<@690574739785121815> can probably point to others, including his own work with the GLOBE system 🙂

http://globe.umbc.edu/

Source: Infrastructuring for Cross-Disciplinary Synthetic Science: Meta-Study Research in Land System Science

joelchan86#incentive-mechanisms22-11-13 03:43:07

a bit further afield, i'd point to the Open Science Framework as a thoughtful case study in incentive mechanism design focused on integration into *intrinsic* benefits (i'm more thoughtful about my science, i can easily document things so i don't forget them)

this podcast interview is a decent look into how he thinks about things: https://everythinghertz.com/69

if i read him right, i sort of agree that infrastructure (possibility) and usability and communities (norms) are prior to / foundational to incentives and policy. top-down incentives and policies that don't align with existing norms and usable practices may risk incentivizing 'just comply with it' practices, or just fall flat, like some data sharing mandates.

joelchan86#incentive-mechanisms22-11-13 03:44:39

Source for the figure in the previous msg: https://assets.pubpub.org/5nv701md/01521405455055.pdf

Konrad Hinsen#A request from the Discourse Modeling22-11-13 08:26:56

Thanks <@322545403876868096> ! Added to https://synthesis-infrastructures.wiki/Discourse_Modeling. I guess I could have used Wikibot for that, but it was easier to do it by hand than figuring out the intricacies of Wikibot.

petermr#incentive-mechanisms22-11-13 09:06:34

Blue Obelisk is (i.e. is still active as) a remote asynchronous collaboration with no central management or funding. A large part consists of nodes representing software packages. See [[1]]. It works because several of the authors knew/know each other and agreed at the outset to adopt an interoperability mantra: "Open Data, Open Standards, Open Source" (ODOSOS). Because everyone agrees on the same approach to interoperability, the nodes can develop independently! The management is informal - a mailing list and occasional back channels. So there is a collaborative network - see WP article.

petermr#computable-graphs22-11-13 09:08:25

Yes, as a scientist I also made this assumption. For example the IPCC report is 10,000 pages of scientific discourse. Hmm!

Konrad Hinsen#discourse-modeling22-11-13 10:25:58

Just added a "proposal" tag to our discussion. In scientific discourse, that would be a category used in opinion papers etc. Is this already part of the Discourse Graph repertoire?

Konrad Hinsen#off-topic22-11-13 10:36:54

Note to <@305044217393053697> about Wikibot: it doesn't pick up edits on messages that it has already added to the Wiki. The version in the Wiki ends up being obsolete. Could be important when someone edits to add "not", for example. Discord users are used to having this possibility.

joelchan86#general-brainstorming22-11-13 13:02:42

I think this is probably covered by Glamorous Toolkit (cc <@499904513038090240> who is a core user)!

Konrad Hinsen#general-brainstorming22-11-13 13:07:02

Yes, that's a prominent use case for Glamorous Toolkit.

Konrad Hinsen#general-brainstorming22-11-13 13:22:21

Note that Glamorous Toolkit is not (yet) a development environment for Python. What is described here is "data science" on a Python codebase. You analyze the code, but you cannot change it. For Pharo Smalltalk, there is excellent code refactoring support in addition to analysis features.

Flancian#oh ya sorry there s a zoom link in22-11-13 15:10:56

thank you! meta why zoom instead of something like jitsi

Flancian#front-door22-11-13 16:06:41

apologies I didn't make it to discourse modeling!

Wutbot#discourse-modeling22-11-13 18:04:14

From the Gutenberg city of Mainz, the CLAIM home of modern intellectual synthesis and dissemination - thank you for your participation! I've enjoyed our discussions and look forward to their continuation!


22-11-14

sneakers-the-rat#general22-11-14 04:10:29

I've got a question that seems appropriate for this group, if anyone is interested in sticking around in this discord :).

So I spend a decent amount of time talking to Librarians, and it always strikes me that they are a group of people with a ton of training and experience specifically in synthesis-like work, but who often seem stymied by their tools, often for lack of resources. I should have asked earlier: are there any other libraries-adjacent people in this chat?

Here's a question for whoever is interested: what would you do (what tools, what would your workflow look like) for Manual Curation of thousands of papers from structured queries across multiple databases, with curation criteria that include a) reasonably specific/computable **minimum standards** (peer-reviewed, word count, etc.) and b) **topic standards** that are a series of keywords, but rely on someone doing manual curation to be able to recognize an intuitive n-depth similarity to the specific keywords
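
To make (a) concrete: the "computable" criteria reduce to a simple predicate, while (b) can at best feed a queue for human review. A minimal sketch, with made-up record fields:

```python
# Hypothetical sketch: first-pass screening of paper records before
# manual topic curation. Field names are illustrative, not a real schema.
def passes_minimum_standards(paper: dict, min_words: int = 2000) -> bool:
    """Apply the computable criteria: peer review, word count, etc."""
    return paper.get("peer_reviewed", False) and paper.get("word_count", 0) >= min_words

def keyword_candidates(papers: list, keywords: set) -> list:
    """Queue papers mentioning any topic keyword for manual review."""
    queue = []
    for p in papers:
        text = (p.get("title", "") + " " + p.get("abstract", "")).lower()
        if any(k.lower() in text for k in keywords):
            queue.append(p)
    return queue
```

The intuitive "n-depth similarity" judgment stays with the human curator; the code only narrows the pile.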

sneakers-the-rat#libraries-and-manual-curation22-11-14 04:21:58

encouraging the use of the thread for the sake of people's notifications as we enter slow-mode.

sidebar: this to me is one of the more interesting uses of this kind of wiki-bot, in a more long-lived chat and communication medium (glad 2 have <@708787219992805407> here for the long-timescales perspective btw). in both this and any future workshops, being able to plug in something like a wikibot that can let different threads get tagged to common concepts through time to different/overlapping discord servers and output to potentially multiple overlapping wikis is v interesting to me.

I'm gonna continue to make it easier to deploy because i feel like the Garden and Stream metaphor is one that can unfold on multiple timescales, and it would be cool to build out the ability to make that easier: how cool would it be if you didn't have to decide on a chat/document medium or have to make a new set at the start of an organizing project since it was arbitrary anyway and your infra supported use and crossposting across many media. 

Eg. the very understandable surfacing of The Google Docs Problem because of Mediawiki's lack of Synchronous Editing, and the need to remember to link out to external services rather than that being a natural expectation of a multimodal group with systems that explicitly support it, is illustrative to me. Maybe one description is being able to deploy a Context of Interoperability: during this time period I am intending these documents/discord servers/hashtags/social media accounts/etc. to be able to crosspost between each other, so that everyone needs to do as little as possible to make their workflows align

sneakers-the-rat#libraries-and-manual-curation22-11-14 04:23:28

Also I am doing another Sorry Anagora (https://anagora.org/sorry-anagora) by speculating about the overlay syntax in-medium, but the need for repeated wikilinks above there revives my interest in recursive wikilinks that can be used in overlapping terms
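
For the record, a purely speculative sketch of what recursive wikilinks might parse to — nested `[[...]]` spans yielding both the inner term and the enclosing outer term (invented behavior, not something WikiBot currently does):

```python
def nested_wikilinks(text: str) -> list:
    """Return all wikilink terms, including terms nested inside other
    wikilinks (speculative sketch of an invented syntax)."""
    terms, stack = [], []
    i = 0
    while i < len(text) - 1:
        pair = text[i:i + 2]
        if pair == "[[":
            stack.append(i + 2)  # remember where this term's text starts
            i += 2
        elif pair == "]]" and stack:
            start = stack.pop()
            # strip any inner brackets so the outer term reads cleanly
            terms.append(text[start:i].replace("[[", "").replace("]]", ""))
            i += 2
        else:
            i += 1
    return terms
```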

sneakers-the-rat#general-brainstorming22-11-14 20:37:50

in thinking about some of the problems from this weekend like the (affectionately titled) The Google Docs Problem and various other interface problems with the wiki, where it'll always be easier for people to interact with a system from something they're more used to using, I've been thinking about a more generalized kind of bridging where one can set a Context of Interoperability where for a given workshop, time period, project, etc. people can plug their tools together and work in a shared space without needing to make all of them anew - so for the simple example of this discord and this wiki, it should be possible to reuse this space to eg. connect to a different (or multiple) wikis, and vice versa to have a different discord connect to it. Along those lines, being able to have a synchronizing eg. git repository of the pages on the wiki so that people could edit them in obsidian or logseq or whatever their tool of choice is... this feels like an incredibly generic idea, so I feel like there must already be a ton of work on it, but it feels like it starts by just making a framework for bridging where the n-to-n problem is simplified by having a set of tools for auth and format translation and modeling documents and messages... I'm going to start sketching one piece of that with the Mediawiki-Git Bridge, but I'm curious to hear if anyone either has any ideas, prior experience, or unmet needs that I might be orbiting around here

sneakers-the-rat#bridges22-11-14 23:15:29

This project, Git-Mediawiki looks pretty good: https://github.com/Git-Mediawiki/Git-Mediawiki I'm gonna see if i can get a further translating layer between wiki markup and markdown going, thank god for Pandoc
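
Pandoc can do the heavy lifting (`pandoc -f mediawiki -t markdown`), so the extra translating layer may only need to cover conventions the target tools treat differently, like bare wikilinks. A toy regex sketch of that piece (non-nested links only; the link-target scheme is made up):

```python
import re

def wikilinks_to_md(text: str, base: str = "/wiki/") -> str:
    """Rewrite [[Page]] and [[Page|label]] as markdown links (toy sketch)."""
    def repl(m):
        page, _, label = m.group(1).partition("|")
        target = base + page.strip().replace(" ", "_")
        return f"[{label or page.strip()}]({target})"
    return re.sub(r"\[\[([^\[\]]+)\]\]", repl, text)
```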


22-11-15

sneakers-the-rat#synthbots22-11-15 08:38:56

<@743886679554654299> brilliant idea for a Local Algorithm Parametrization along the lines of using the Medium as Storage and parametrization from a conversation I was having just now

petermr#semantic-climate22-11-15 10:44:45

We have been developing code for extraction of "claims" from IPCC executive summaries. <@322545403876868096> <@499904513038090240> So far we have the following design:

  • exec summary for chapter => 15-20 paras
  • bold leading sentence for each para => leading_claim
  • subsequent sentences => supporting_claims
  • annotation (high
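
Minus the truncated annotation step, the design above might be sketched like this (assuming each paragraph arrives with its bold lead sentence marked as `**...**`):

```python
import re

def extract_claims(paragraph: str) -> dict:
    """Split a summary paragraph into a bold leading claim and
    supporting claims (hypothetical sketch of the design above)."""
    m = re.match(r"\*\*(.+?)\*\*\s*(.*)", paragraph, re.DOTALL)
    if not m:
        return {"leading_claim": None, "supporting_claims": [paragraph.strip()]}
    leading, rest = m.group(1).strip(), m.group(2)
    # naive sentence split on terminal punctuation followed by whitespace
    supporting = [s.strip() for s in re.split(r"(?<=[.!?])\s+", rest) if s.strip()]
    return {"leading_claim": leading, "supporting_claims": supporting}
```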
petermr#general-brainstorming22-11-15 10:57:35

Thanks for Glamorous Toolkit. Watched the video and understood most of it. Impressive, and maybe the future, but not quite what I want now - it requires fluency with creating new types of object on the fly, and so a change in orientation. I want something that I can tag the methods with, say, 'PDF conversion', 'prototype', etc. I don't mind dumping that as static docs and navigating with Obsidian.


22-11-23

Wutbot#discourse-modeling22-11-23 18:59:16

Claims and questions dominate in natural conversation; the imbalance of sources & evidence is quite stark. This aligns with my mental model of *conversational charity*, where we assume our interlocutors *could* ground their statements in evidence if pressed, but skip this step in the interest of time.


22-12-20

sneakers-the-rat#synthesizing-social-media22-12-20 10:34:44

check this out. DIY Algorithms. instead of adding accounts to lists and autopopulating, you can directly add posts themselves. so then you can rig up whatever the frick algorithm you want to masto:

https://social.coop/@jonny/109545449455062668

https://github.com/sneakers-the-rat/mastodon/tree/feature/postlists


23-01-31

bengo#computable-graphs23-01-31 16:02:30

I've also recently been using logseq. I like how it just writes to markdown. I've been wanting to parse that markdown, look for well-known #hashtags and wikitags, and build an RDF dataset. It looks like SBML is kinda like XML, so maybe something similar is possible there. Have you done anything more with logseq since this post in November?
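
As a starting point for that pipeline: tags and wikilinks fall out of two regexes, and N-Triples is simple enough to emit by hand. A hedged sketch (serious RDF work would want rdflib, and the URI scheme here is a placeholder, not a real vocabulary):

```python
import re

TAG_RE = re.compile(r"(?<!\S)#([\w/-]+)")
WIKILINK_RE = re.compile(r"\[\[([^\[\]]+)\]\]")

def md_to_ntriples(page_name: str, markdown: str) -> list:
    """Emit N-Triples linking a logseq page to its #tags and [[wikilinks]]."""
    def uri(name):
        return f"<https://example.org/page/{name.replace(' ', '_')}>"
    subject = uri(page_name)
    triples = []
    for tag in TAG_RE.findall(markdown):
        triples.append(f'{subject} <https://example.org/prop/tag> "{tag}" .')
    for target in WIKILINK_RE.findall(markdown):
        triples.append(f"{subject} <https://example.org/prop/linksTo> {uri(target)} .")
    return triples
```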