I looked around for a tool that could help me author triple-stores, but didn’t find anything. The most compelling tool, Protégé, is designed for writing vocabularies, not the actual statements. So I asked a mailing list. To summarize:
- Agreement that the tooling is lacking, with theorizing that this was caused by the Semantic Web Winter (2010–2018)
- A few tools were pitched:
  - Semantic Forms: promising, but clearly a prototype (running on a high-numbered port, fairly slow). Nevertheless, it’s definitely on the right path, and the creator was very nice in trying to help out! It’s definitely the right UX; I just didn’t want to have to sign up to use it.
  - Sewelis: also promising. It conforms to my default UX expectation: the ability to import vocabularies and to construct subject/predicate/object statements.
  - Sarven Capadisli crushes it. He suggested a really sensible editor for writing statements and showed off a cool tool for storing annotations. I’m pretty sure these two pieces could be mushed together to create an implementation of my goal.
  - The OpenLink Structured Data Editor (OSDE): definitely the best software I’ve seen, or at least the one that matches my expectations most closely.
  - Martynas Jusevičius provided a demonstration of AtomGraph. It seems very powerful, but I simply don’t think my problem is the right scale for the horsepower of this solution.
I was struck, because it seemed odd to me that this avenue of research, though probably burdened by its academic pedigree, had once seemed so vibrant. What happened to the idea of data interchange, of unified vocabularies for describing data?
I mean, sure, maybe the idea wasn’t going to catch on like sharing all your personal details and photos with the internet, but this level of desolation was really surprising. I went back to the Two-Bit History article and perused the footnotes and links.
In several of those places I had seen JSON-LD mentioned, along with its author, Manu Sporny. He also wrote a post called “JSON-LD and Why I Hate the Semantic Web”.
I read this like Zola’s “J’accuse.” Sporny was merciless, tearing into exactly what had so stymied me about SemWeb 9 years ago:
> If you want to make the Semantic Web a reality, stop making the case for it and spend your time doing something more useful, like actually making machines smarter or helping people publish data in a way that’s useful to them.
I’m certainly not aiming to make “the Semantic Web a reality.” I just want a data format that allows observations to tie into other hooks out here on the internet. I’d like my breathless research of a book to live on, to fuel further research, to yield interesting insights through queries. I’m not educated in the right way to make machines smarter, so the best I can do, I think, is help people publish data in a useful way.
To my mind, that would mean something like: data-editing tools and data that lets people collaborate. Sporny was speaking my language. What, O Sporny, can JSON-LD do for me that the decade-old Linked Data tech could not (cannot?) do?
Sporny’s video made a few key points:
- Linked Data: we know how to embed it in HTML, we use RDFa
- Linked Data in JSON: we don’t know how to link, ergo JSON-LD
- When we try to mix and match JSON data, because there’s no standard, we don’t know whether the blobs are compatible
- What if we made the keys of the blob URLs (or IRIs)? We could then be sure that we mean the same thing! OK, that’d be great, but devs would hate this; we want simple terms.
- JSON-LD defines a key called `@context` which points to another JSON file (`*.jsonld`) that explains the context in which the JSON blob is to be understood.
- The JSON blob should also feature an `@id` key that points to a globally unique identifier, so that disparate systems can handshake that when `@id` is equal, we’re talking about the same thing.
- Additional keys are expanded to full IRIs via the context, so developers get to keep writing simple terms.
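A minimal sketch of what such a blob might look like. The context entries, the schema.org IRIs, and the identifier here are illustrative choices of mine, not taken from Sporny’s talk:

```python
import json

# A hypothetical JSON-LD blob: @context maps the short keys "name" and
# "author" to full IRIs, and @id gives the blob a globally unique identifier.
doc = json.loads("""
{
  "@context": {
    "name": "http://schema.org/name",
    "author": "http://schema.org/author"
  },
  "@id": "http://example.com/books/moby-dick",
  "name": "Moby-Dick",
  "author": "Herman Melville"
}
""")

# Developers keep writing plain keys like "name"; a consumer can expand
# them through the context into unambiguous IRIs.
expanded = {doc["@context"][k]: v for k, v in doc.items()
            if k not in ("@context", "@id")}

print(doc["@id"])    # the shared, global identifier
print(expanded)      # plain keys replaced by IRIs
```

Two systems that see the same `@id` know they are describing the same book, even if their local key names differ, because the context resolves both to the same IRIs.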
OK, this seems to have some real horsepower. I’m going to investigate JSON-LD now.