Safety Protocols
There is a balance to be struck between absolute security, secrecy and privacy on one hand, and social needs on the other, in order to uphold basic values. The method considered most feasible for addressing this difficult balance is embodied by the concept of creating safety protocols.
The Safety Protocols requirement will end up being defined in full, in terms that are interoperable with TheWebizenCharter.
The effect of developing 'safety protocols' is instrumental in providing a means to develop a decentralised social-web framework that supports values-based networking between agents (people), in a way that supports the recognition and use of declared values; this in turn provides agents with the social protections needed to create safe, decentralised online environments.
To the greatest capacity available, the intention is to make as many of the safety protocols as possible optional. However, the idea is that there will be notifications about whether or not another agent is running a particular type of safety protocol, so as to alert others to their status or ideology. This gives others the ability to decide how, and/or if, they want to communicate with that agent, by taking into account whether, and/or which, safety protocols the agent is operating. A minimal sketch of this idea follows.
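To illustrate the notification idea, the following is a minimal sketch in TypeScript of how an agent might advertise which safety protocols it is running, and how a peer might use that notice to decide whether to communicate. All type names, fields and the `willCommunicate` helper are illustrative assumptions, not defined parts of any Webizen specification.

```typescript
// Minimal sketch: an agent advertises which safety protocols it runs, so that
// peers can decide how (or whether) to communicate. All names are illustrative.

type ProtocolStatus = "active" | "inactive" | "unsupported";

interface SafetyProtocolDeclaration {
  protocolId: string;          // e.g. an ontology term identifying the protocol
  mandatory: boolean;          // mandatory protocols cannot be switched off
  status: ProtocolStatus;
}

interface AgentStatusNotice {
  agentId: string;             // identifier of the declaring agent
  declarations: SafetyProtocolDeclaration[];
  issuedAt: string;            // ISO-8601 timestamp
}

// A peer inspects a notice before deciding whether to open a channel:
// communication proceeds only if every required protocol is declared active.
function willCommunicate(notice: AgentStatusNotice, required: string[]): boolean {
  return required.every((id) =>
    notice.declarations.some((d) => d.protocolId === id && d.status === "active")
  );
}
```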
Safety protocols may relate to providing insights as to whether or not an agent is who they claim to be; or whether they operate particular protocols in order to protect themselves and others from materials that are blatantly criminal and/or related to abuses of the human rights of others.
Safety protocols can be employed at different levels; some will be designed to operate at the systems level, while others will be designed to operate in connection with the functionality of an app.
Safety protocols will employ ValuesCredentials and PermissiveCommonsTech, alongside other tooling as required, to make an environment that is both able to be made highly secure and, simultaneously, able to support safety for the user and those they interact with, based upon the values they decide are important for themselves, as individuals and groups.
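As a hedged illustration of how ValuesCredentials might be used in such an environment, the sketch below checks an agent's declared values against another user's values policy before permitting interaction. The `ValuesCredential` and `ValuesPolicy` shapes are assumptions made for illustration; the real definitions belong to the webizen.org works, and signature verification is deliberately out of scope here.

```typescript
// Illustrative sketch only: evaluating a ValuesCredential against a user's own
// values policy before interaction is permitted. Shapes are assumptions.

interface ValuesCredential {
  issuer: string;              // who attested to the declared values
  subject: string;             // the agent the values belong to
  declaredValues: string[];    // e.g. terms drawn from a values ontology
  signature: string;           // verification is out of scope for this sketch
}

interface ValuesPolicy {
  requiredValues: string[];    // values the counterparty must declare
  blockedValues: string[];     // values that rule out interaction
}

function permitsInteraction(cred: ValuesCredential, policy: ValuesPolicy): boolean {
  const declared = new Set(cred.declaredValues);
  const hasRequired = policy.requiredValues.every((v) => declared.has(v));
  const hasBlocked = policy.blockedValues.some((v) => declared.has(v));
  return hasRequired && !hasBlocked;
}
```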
Defining the Webizen Rules - Safety Protocols
When defining Webizen, I want to consider: The Code of Chivalry - forming an AI Lore of Chivalry.
The Three Laws of Robotics
First published in 1942 by Isaac Asimov; the concept therefore pre-dates Vannevar Bush's 1945 post-war article 'As We May Think'.
First Law:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What other Rules need to be defined?
Considerations include:
It has an owner: it serves its owner(s).
They probably need to be defined in a way that supports the ability for a family to have many different webizen running on the same 'webizen box', such that if there's a change (family separation, or a child becoming an adult), their 'webizen' can be migrated.
If the owner is a child, or is otherwise similarly incapable of 'personhood', then it shall be subject to the rules defined by that person's guardian and the 'webizen rules'.
It cannot fabricate 'evidence'; there must be an ability to audit the logical basis of bot-asserted facts (see the sketch after this list).
Defining the rules of ‘Battle Bots’; what is allowed in the civilian domain and what is not allowed?
How are rules defined to ensure the people whom webizen serve are protected from acts of violence, harm, threats, etc.?
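On the 'cannot fabricate evidence' consideration above, one plausible mechanism is an append-only, hash-chained audit log of the facts a webizen asserts, so that their logical basis can be inspected and tampering detected. This is a minimal sketch under those assumptions; all field names are illustrative.

```typescript
// Minimal sketch: an append-only, hash-chained record of facts a webizen
// asserts, so the "logical basis of bot facts" can be audited.
// All field names are illustrative assumptions.
import { createHash } from "node:crypto";

interface FactRecord {
  statement: string;           // the asserted fact
  derivedFrom: string[];       // references to source evidence or prior facts
  method: string;              // how it was derived (rule, model output, human input)
  recordedAt: string;          // ISO-8601 timestamp
  hash: string;                // chains this record to the previous one
}

// Each record's hash covers its content plus the prior record's hash,
// so tampering with earlier entries becomes detectable.
function appendFact(log: FactRecord[], fact: Omit<FactRecord, "hash">): FactRecord[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "";
  const hash = createHash("sha256")
    .update(JSON.stringify(fact) + prevHash)
    .digest("hex");
  return [...log, { ...fact, hash }];
}
```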
Webizen Ethics (cont).
There are also significant ideological considerations to be made about the difference between designing a 'thing' that is a tool, and the responsibility over how it is designed to operate resting with its owner. This is a morality-related method that seeks to support 'personal responsibility' rather than 'big brother' or 'system-defined rules' of conduct. The methodology is, in turn, a bit like providing other types of tools, whether desktop computers, the applications that run on them (like spreadsheet programs), or physical artefacts like cars or hammers: all can be used for harmful or wrongful purposes, yet there are either no controls, or only limited controls, placed upon the sale of these 'products'.
In effect, it is a design method that supports the notion that "the primary person responsible for one's behaviour is oneself". Design standards would, in turn, seek to ensure that the natural person who owns their 'webizen' (environment) is supported in being genuinely responsible for the actions of their webizen. This would include an array of design qualities and requirements to ensure this form of ideology is indeed supported, whilst also offering a capacity for innovators to innovate.
But the consequence, much like with cars or hammers, is that people might seek to use them as weapons. As such, there are various considerations to be made about how best to address this problem; perhaps not so much in relation to the 'webizen' tooling itself, but moreover in how it is designed to be supported by broader ecosystems, including but not limited to law and legal processes.
Yet these considerations are not intended to entirely absolve responsibility for ensuring good design. Part of the 'webizen.org' initiative is to figure out how to form a multi-stakeholder approach to forming 'open standards' that support, and are in turn supported by, the participants who get involved.
There are various methods by which to provide hygiene and ensure that 'webizen systems' are safe, and indeed also very secure (technically).
It is considered that a 'webizen' would be a far more complicated form of 'identity apparatus' or tooling; very different from the present-day mainstream 'beliefs' about the benefits to mankind of issuing people 'wallets' powered by 'web3', with keys/credentials that can be reissued in the case that the wallet is lost.
This isn't just about 'wallets'; it is moreover about human agency, and the need to produce a solution despite the consequences of decisions made by others not to provide online infrastructure that supports human agency (i.e., my old 'knowledge banking' works). This means refactoring the designs to better consider and take into account the circumstances where the primary goals relate to 'property' (inclusive of AI / information infrastructure) and contract law.
The outcome of producing Webizen should result in significant 'safety / dignity' improvements for the people who are able to buy one and have it help them with their life as influenced via technology. Part of how this will be achieved is via 'safety protocols' (per below; but think of the Star Trek holodeck, which has safety protocols). Therein, some of what needs to be done is a body of work on the various values frameworks that need to be produced as ontologies, so that owners can decide which 'optional' ones they want to employ, whilst the product may not function if the mandatory ones are not 'turned on' or are in some way corrupted (aka safety protocols). The sketch below illustrates that enforcement idea.
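As a sketch of that enforcement idea (assuming hypothetical names throughout): the system refuses to start unless every mandatory safety protocol is both enabled and passes an integrity check, while optional protocols remain the owner's choice.

```typescript
// Hedged sketch: the product refuses to operate unless every mandatory safety
// protocol is enabled and passes an integrity check; optional protocols are
// left to the owner. Names and the integrity check are illustrative.

interface InstalledProtocol {
  id: string;
  mandatory: boolean;
  enabled: boolean;
  integrityOk: boolean;        // e.g. verified against a signed manifest
}

function systemMayStart(protocols: InstalledProtocol[]): boolean {
  // Every mandatory protocol must be both turned on and uncorrupted;
  // optional protocols do not affect whether the system starts.
  return protocols
    .filter((p) => p.mandatory)
    .every((p) => p.enabled && p.integrityOk);
}
```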
Necessary Protections - SafetyProtocols
In Star Trek: Voyager, the 'holodeck' systems have 'safety protocols' to protect people; so should webizen. Designing these systems is important, so that those safety outcomes are achieved by design, rather than forced in ways that are likely to result in other sorts of consequences.
Whilst I'm not sure what they will be yet, there is a need to ensure these boxes cannot be used in a federated manner to mount an attack.
The types of attacks will need to be defined (mis-use cases), and in turn a framework to attend to these threats developed and applied, in a manner that does not breach 'first principles'; these are also yet to be defined, but are essentially about the protection and support of human rights.
These 'codes' should be produced as part of the webizen.org works broadly, leveraging broad-ranging discussions about 'AI ethics' through a different lens: a lens focused upon a concept where people may, in future, own their own robots. As a new and innovative form of 'artificial species' or 'artificial agent', this calls for 'common-sense' approaches to what sorts of things should be discouraged and what sorts of governance principles should apply. Fundamentally, when a person gets a 'webizen', perhaps they sign an oath; or, if they decide not to, they'll need to install their own software/firmware as a consequence. Part of the necessary protections is also that people should be able to transfer their webizen environment (the data / software environment) and operate it on compatible systems; this portability is itself a form of protection, as sketched below.
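As a rough sketch of that portability protection, a webizen environment could be described by an export manifest that a target system checks for compatibility before the environment is operated on it. The manifest shape below is entirely an assumption made for illustration.

```typescript
// Sketch (assumptions throughout): a portable description of a webizen
// environment, so the data/software environment can be exported and operated
// on any compatible system, as a protection for owners.

interface WebizenEnvironmentManifest {
  owner: string;
  createdAt: string;           // ISO-8601 timestamp
  dataStores: string[];        // locations/identifiers of the owner's data
  protocolIds: string[];       // safety protocols the environment runs
  softwareVersion: string;     // for compatibility checks on the target system
}

function isCompatible(manifest: WebizenEnvironmentManifest, supported: string[]): boolean {
  // A target system is compatible if it supports every protocol the
  // environment runs, including any mandatory safety protocols.
  return manifest.protocolIds.every((id) => supported.includes(id));
}
```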
#socialfabric #ValuesFrameworks #SafetyProtocols