
Permissive Commons Tech

Permissive Commons is thought of as a form of technology, given the requirement for a specification to be defined in order for it to work harmoniously online.

Figure: webizen_diagram_1-2.jpg

About PCT

SemanticWeb Technologies are part of the foundational components used to produce Artificial Intelligence systems. Authoring content with semantic web technologies such as linked-data (RDF) provides an array of abilities; however, the content being authored is not only the content provided by the individual themselves. In a linked-data document, the terms used are given meaning via ontologies. A document discussing flora or fauna, statistics, law or a particular circumstance relies both upon the writings of the author and, in turn, upon external resources; those resources are generally intended to be understood in the present tense of when the document was authored.
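As a small illustration (not part of any Permissive Commons specification), the Python sketch below uses the rdflib library to author a tiny linked-data document. The document's own statements only carry meaning through the external FOAF and Dublin Core ontologies referenced by namespace, which is exactly the dependency described above; the example namespace, resource name and author are made up.

```python
# A minimal sketch (not part of the Permissive Commons specification) of a
# linked-data document whose terms draw their meaning from external
# ontologies (FOAF, Dublin Core) referenced by namespace.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF, RDF

EX = Namespace("http://example.org/notes/")  # hypothetical local namespace

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

doc = EX["flora-report"]                     # made-up example resource
g.add((doc, RDF.type, FOAF.Document))
g.add((doc, DCTERMS.created, Literal("2022-06-01")))
g.add((doc, DCTERMS.creator, Literal("An example author")))

# The serialised Turtle only means what it meant at authoring time while the
# dcterms / foaf ontology documents remain resolvable and unchanged.
print(g.serialize(format="turtle"))
```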

Yet with the way HTTP / WWW currently works, it is difficult to maintain TemporalSemantics or semantic resolution / comprehension, because websites update or change their content, or at times change their names and no longer exist. In other circumstances, the context of the time is different to the context of a future time; so a link in a webpage describing something may in turn refer to an external source that later changes.

There are many examples of how these sorts of issues result in an inability to maintain knowledge artifacts via current-day web sources (HTTP / WWW). Whilst Archive.org and other sites like it do exist, they become a centralised archive rather than part of how the internet is made to work; the responsibility to support these sorts of services should be decentralised, as a component that is as important as others, such as DNS.

Historically there has also been an 'open data' movement; however, these resources are often not made available in a linked-data format, nor are they reliably available and/or verifiable, alongside an array of other qualities that the open-data movement has not seemingly achieved. In response to these issues, PermissiveCommons ecosystem concepts have grown over time to respond to various problems that otherwise poorly impact FreedomOfThought and other values and semantics of importance overall.

Some earlier notes include:

(other records available)

Permissive Commons Technology Outline

Permissive Commons refers to the use of non-HTTP URIs and protocols to support the management of 'commons' information in a manner that employs linked-data to structure consumable assets relating to a particular topic or subject, and that supports the following features:

  1. The ability to decentralise the use of the information artifacts.
  2. The ability to independently assert read, append, write and delete permissions to different agents based on the context of the information category / type / stakeholder framework.
  3. The ability to support TemporalSemantics (inclusive of version control).
  4. The ability to make use of different types of protocols, depending on the needs of the producer of the permissive commons assets / content.
  5. The ability to make notations in RDF (linked data) that can point to a particular version of an artifact; and a protocol method that can support the discovery and use of that particular version, even if the original publisher no longer makes that data available themselves (pending types / methods, etc.); see the sketch after this list.
  6. The ability to support 'rules' about content management (read / append) in a way that is different to read permissions.
  7. The ability to support privacy (/ human dignity).
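As referenced in item 5, the following is a minimal sketch of one way such a version-pinning notation could work: the artifact is identified by a content-derived (non-HTTP) identifier, so any peer or archive holding a matching copy can satisfy the reference, and the bytes can be verified independently of the original publisher. The key names, the "urn:sha256" scheme and the hash choice here are illustrative assumptions, not the Permissive Commons specification.

```python
# Illustrative assumptions only: the key names, "urn:sha256" scheme and
# SHA-256 choice are examples, not the Permissive Commons specification.
import hashlib
from pathlib import Path

def pin_artifact(path: str, original_url: str) -> dict:
    """Produce a notation that points at one exact version of an artifact.

    The digest identifies the content itself, so any peer or archive that
    holds a matching copy can satisfy the reference even if the original
    publisher no longer makes the data available.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "@id": f"urn:sha256:{digest}",   # non-HTTP, content-derived identifier
        "ex:originalLocation": original_url,
    }

def verify_artifact(data: bytes, pinned_id: str) -> bool:
    """Check that retrieved bytes match the pinned version."""
    return pinned_id == f"urn:sha256:{hashlib.sha256(data).hexdigest()}"
```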

Linked-data ontologies are often difficult to find online; all too often the original location of the namespace has moved or is no longer available. Semantic web systems are built using namespaces that point at a particular location where the description of that ontology is intended to exist; when those links break, the consequences have a network effect.
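One common mitigation, sketched below under assumed details (the cache path handling and the use of the requests library are illustrative, not prescribed by this project), is to dereference the namespace while it is reachable and fall back to a locally cached copy when it is not, so that dependent graphs keep resolving:

```python
# Hedged sketch: fetch an ontology from its namespace URI, refresh a local
# cache on success, and fall back to the cached copy if the original
# location has moved or disappeared.
import requests

def fetch_ontology(namespace_uri: str, cache_path: str) -> str:
    try:
        resp = requests.get(
            namespace_uri,
            headers={"Accept": "text/turtle"},
            timeout=10,
        )
        resp.raise_for_status()
        with open(cache_path, "w", encoding="utf-8") as fh:
            fh.write(resp.text)          # refresh the local copy
        return resp.text
    except requests.RequestException:
        # Original location unreachable: use the previously cached copy.
        with open(cache_path, "r", encoding="utf-8") as fh:
            return fh.read()
```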

Web 2.0 APIs provide information, whether permissioned or public; yet this method requires users to make specific queries for the information they are looking for at the time of the query, which has an array of privacy implications in addition to the problems noted above about the linked-data ontology space.

The concept of 'commons' basically refers to some kind of 'group'. It could refer to a group of people who live in a particular jurisdiction, whose governments publish their legislation in a permissive-commons format; that format provides others the ability to refer to the specific law that existed at the specific time a contract or other legal document is intended to be associated with. In this case, the general public have no ability to modify the records, but they are able to read / consume the records with their Local AI agent.
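To make that permission distinction concrete, here is a deliberately simplified sketch; the role names and rule structure are assumptions made for illustration, not part of any specification. A publishing authority may append new records to the legislation commons, while the public may only read them:

```python
# Illustrative only: roles and rules are assumptions, not the specification.
PERMISSIONS = {
    "legislation-commons": {
        "publishing-authority": {"read", "append"},  # new acts are appended
        "public":              {"read"},             # consume, e.g. via a Local AI agent
    }
}

def is_allowed(commons: str, agent_role: str, action: str) -> bool:
    """Return whether an agent class may perform an action on a commons."""
    return action in PERMISSIONS.get(commons, {}).get(agent_role, set())

assert is_allowed("legislation-commons", "public", "read")
assert not is_allowed("legislation-commons", "public", "append")
```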

Another form of permissive commons may be an array of information about biosphere-related things (i.e. Wikipedia-like assets) or machine-learning models; or indeed works that relate to a group project, which leads to an ability to support different sorts of business models (note: ValueAccountingInitiatives).

The concept of SemWebOntologies is fairly important to understand; one of the greatest purposes of Permissive Commons is to support means to decentralise 'group' ontologies (as distinct from personal ontologies).

The Development Folder for ontology-related work is currently located here on GitHub.
