<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Semiotics on Clark Seanor</title>
    <link>/tags/semiotics/</link>
    <description>Recent content in Semiotics on Clark Seanor</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sat, 09 May 2026 20:56:04 +0100</lastBuildDate>
    <atom:link href="/tags/semiotics/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Representational Limits of Ontologies</title>
      <link>/posts/the-representational-limits-of-ontologies/</link>
      <pubDate>Sat, 09 May 2026 20:56:04 +0100</pubDate>
      <guid>/posts/the-representational-limits-of-ontologies/</guid>
      <description>This post gives a high-level overview of some of the ways falsehoods can be created and obscured within ontologies, and of how things that can be evidenced sometimes cannot be represented within them. My aim is to demonstrate that ontologies are a lossy format, and that we can identify specific things we lose by using them.&#xA;I want to start by defining some terms, including what &amp;lsquo;an ontology&amp;rsquo; is.</description>
    </item>
  </channel>
</rss>
