ohad (Hero Member; Activity: 897; Merit: 1000; http://idni.org)
Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network
November 17, 2015, 12:49:51 PM, #321

Quote from: klosure on November 17, 2015, 10:52:14 AM

@HMC Thanks for the excellent answer, that clears up pretty much all my questions so far. I guess the next step to understanding Tau better is to get familiar with Idris and read Per Martin-Löf's Type Theory (BTW, for anyone else interested, there is a copy online; AFAICT this book is out of print, so you will likely not find it in the normal distribution circuit). That's going to keep me busy for a while. I'll come back with more questions once I have digested the underlying theory.



@Ohad

I've been playing a bit with CWM lately, and I'm underwhelmed by how negation as failure under a closed world assumption is handled. Unless I've missed something, it seems that the only way to express a negation using basic CWM builtin semantics is to parse an N3 document into a formula with log:semantics and test for the absence of a specific clause in that formula using log:notIncludes. That seems to be confirmed by a few sources: Norman Walsh's blog and some chat he had with Dan Connolly (one of the authors of CWM). These sources are old, but CWM hasn't evolved much since, and I haven't been able to find any other way by looking at the documentation and test files. If that's still the case, it means that in CWM negation as failure can only be achieved on static documents included in a file and cannot be done in the general context of the reasoner, which is always considered to be open world. So much for the so-called "Closed World" machine...



Anyway, since your code examples so far have been compatible with CWM, I was wondering how you planned to handle negation as failure in Tau from a syntactic perspective. If you already have some idea of what the syntax will be, can you please give an example? I'm thinking of a simple case: build an rdf:List of all objects that do NOT have a specific property set. Let's say, for instance, all objects that are not a fish, to continue the example from your (btw very convincing, congrats!) LetsTalkBitcoin interview.



I'm starting to build an application that I will port to Tau as soon as it's functional, and I need a way to express a negation in a closed world. I could move up the food chain to something like OWL and use set semantics to get my way, but I'm concerned that this is like going in the wrong direction on the completeness-vs-consistency spectrum and is going to make it more painful to backport the application to Tau in the future. Can you recommend some existing RDF-flavored language that is close to the target in Tau, and that allows deriving negation as failure in a closed-world KB without having to rely on static files to do so?



Of course the ideal would be to start directly with Tau if the reasoner is already functional. But it would also need builtins... Maybe I can help you with that if you already have a clear idea of what builtins will be available. If you are planning to adopt a subset of the CWM builtins, among other things, that could be a good start.


Thanks,

Later on I'll dig further, but for now I'll mention negation in tau in Martin-Löf's fashion:

Besides implication (=>) as rules, we can also imply false: "{ x y z } => false." When tau loads a KB, it first tries to see if it can prove "false" from it. If so, it rejects the KB as inconsistent. So if a program contains "{ ?x a cat. ?x a fish } => false", the reasoner will indeed assert that cats aren't fish, or otherwise reject the KB. Moreover, it can be used as a condition, something of the form "{ ?x a animal. { ?x a cat. ?x a fish } => false } => { x y z }", though I don't know yet how it will look syntactically in tau.
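A toy sketch (in Python; the triple and rule encodings here are my own illustration, not tau's actual representation) of how a reasoner can treat a rule whose head is false as a consistency constraint:

```python
# Sketch: treating "{ ... } => false" as a consistency constraint.
# Triples are (subject, predicate, object) tuples; a rule maps a body
# (patterns with "?x"-style variables) to a head, where the head False
# marks an inconsistency rule. All names here are illustrative.

from itertools import product

def matches(pattern, triple, binding):
    """Try to extend `binding` so that `pattern` matches `triple`."""
    b = dict(binding)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if b.get(p, t) != t:
                return None
            b[p] = t
        elif p != t:
            return None
    return b

def body_satisfied(body, facts):
    """True if some variable binding satisfies every pattern in `body`."""
    bindings = [{}]
    for pattern in body:
        bindings = [b2 for b, f in product(bindings, facts)
                    for b2 in [matches(pattern, f, b)] if b2 is not None]
        if not bindings:
            return False
    return True

def consistent(facts, rules):
    """A KB is rejected if any rule with head False fires."""
    return not any(head is False and body_satisfied(body, facts)
                   for body, head in rules)

# "{ ?x a cat. ?x a fish } => false" as an inconsistency rule:
rules = [([("?x", "a", "cat"), ("?x", "a", "fish")], False)]
print(consistent({("tom", "a", "cat"), ("tom", "a", "fish")}, rules))  # False
print(consistent({("tom", "a", "cat")}, rules))                        # True
```

A KB asserting that tom is both a cat and a fish lets the rule fire, so the whole KB is rejected, which is exactly the "prove false, then reject" behavior described above.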



About builtins, yes, we'd like to have cwm/eye builtins, at least string/int etc. We tend to mimic eye more than we tend to mimic cwm.

HunterMinerCrafter (Sr. Member; Activity: 434; Merit: 250)
Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network
November 18, 2015, 06:50:33 PM, #322



I'd just like to jump in and add some more precise and technical detail to Ohad's answer. I get the impression that you have the background for an appreciation of these details, and you've raised an excellent question that speaks to the very heart of tau's design.

The short answer is that tau provides for both open world reasoning under standard RDF semantics and fully closed reasoning under an extended type system. The tau inference is a staged, cyclical reasoning process that alternates between what we refer to as "open reasoning" (which is OWA and polymorphic) and "closed reasoning" (which is CWA and dependently typed) in rounds, per block of a chain.



Quote from: ohad on November 17, 2015, 12:49:51 PM
Quote from: klosure on November 17, 2015, 10:52:14 AM

@Ohad

I've been playing a bit with CWM lately, and I'm underwhelmed by how negation as failure under a closed world assumption is handled. Unless I've missed something, it seems that the only way to express a negation using basic CWM builtin semantics is to parse an N3 document into a formula with log:semantics and test for the absence of a specific clause in that formula using log:notIncludes. That seems to be confirmed by a few sources: Norman Walsh's blog and some chat he had with Dan Connolly (one of the authors of CWM). These sources are old, but CWM hasn't evolved much since, and I haven't been able to find any other way by looking at the documentation and test files. If that's still the case, it means that in CWM negation as failure can only be achieved on static documents included in a file and cannot be done in the general context of the reasoner, which is always considered to be open world. So much for the so-called "Closed World" machine...

Yes, the name is something of a (confusing) joke. The CWM reasoner employs an open world assumption in general reasoning, as is standard for RDF inference. It provides this limited facility of scoped negation as failure as a nonstandard extension in its built-ins. Unfortunately, as you discovered, this limited ability to reason over a closure often ends up being less than convenient, due to this sort of interplay with other cwm semantics. (In this case, its consideration of the whole of the file:// and http:// URI spaces as being subsumed by the KB.)



Quote from: ohad on November 17, 2015, 12:49:51 PM
Quote from: klosure on November 17, 2015, 10:52:14 AM

I'm starting to build an application that I will port to Tau as soon as it's functional, and I need a way to express a negation in a closed world. I could move up the food chain to something like OWL and use set semantics to get my way, but I'm concerned that this is like going in the wrong direction on the completeness-vs-consistency spectrum and is going to make it more painful to backport the application to Tau in the future. Can you recommend some existing RDF-flavored language that is close to the target in Tau, and that allows deriving negation as failure in a closed-world KB without having to rely on static files to do so?



As Ohad pointed out, we support the slightly more general (but still scoped) negation-as-failure in the style introduced in Eulersharp. Unlike CWM, this does allow a complement to be calculated all the way up to the background graph, so you can constrain over a whole KB. (Here tau considers a KB as a parent root context in the type system's universe hierarchy, which basically means either the root chain or a sub-context of it. Tau considers only a specific, finite subset of the URI space as being in the KB scope at any given time, unlike cwm/euler.)
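Operationally, scoped NAF over a finite KB means that a query which fails to match licenses the negation. A small Python sketch (the data and names are invented for illustration) of the "all objects that are not a fish" list from the earlier question:

```python
# Sketch: scoped negation-as-failure over a finite KB. Under a closed
# world, "not a fish" holds for exactly those subjects for which the
# query "?x a fish" fails against the KB's finite triple set.
# KB contents are illustrative, echoing the cat/fish example.

kb = {
    ("tom",  "a", "cat"),
    ("nemo", "a", "fish"),
    ("rex",  "a", "dog"),
}

def subjects(kb):
    return {s for s, _, _ in kb}

def naf(kb, s, p, o):
    """Negation as failure: succeed iff the triple is absent from the KB."""
    return (s, p, o) not in kb

# The list of all objects that are NOT a fish (klosure's rdf:List case):
non_fish = sorted(s for s in subjects(kb) if naf(kb, s, "a", "fish"))
print(non_fish)  # ['rex', 'tom']
```

The key point from the passage above is the scope: here the negation is checked against the whole finite KB, whereas CWM's log:notIncludes only checks against a single static document's formula.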



Quote from: ohad on November 17, 2015, 12:49:51 PM
Quote from: klosure on November 17, 2015, 10:52:14 AM

Of course the ideal would be to start directly with Tau if the reasoner is already functional. But it would also need builtins... Maybe I can help you with that if you already have a clear idea of what builtins will be available. If you are planning to adopt a subset of the CWM builtins, among other things, that could be a good start.



The reasoner is largely functional and already passing much of the cwm/euler swap test suite, but we are still working on testing, debugging, and verification. We are also currently working on implementing built-ins. We plan to support most CWM built-ins, as well as some of RIF, and possibly some Prolog predicates, as Eulersharp does. We will also have quite a few tau-specific ones, of course.



Quote from: ohad on November 17, 2015, 12:49:51 PM

Thanks,

Later on I'll dig further, but for now I'll mention negation in tau in Martin-Löf's fashion:

Besides implication (=>) as rules, we can also imply false: "{ x y z } => false." When tau loads a KB, it first tries to see if it can prove "false" from it. If so, it rejects the KB as inconsistent. So if a program contains "{ ?x a cat. ?x a fish } => false", the reasoner will indeed assert that cats aren't fish, or otherwise reject the KB. Moreover, it can be used as a condition, something of the form "{ ?x a animal. { ?x a cat. ?x a fish } => false } => { x y z }", though I don't know yet how it will look syntactically in tau.

About builtins, yes, we'd like to have cwm/eye builtins, at least string/int etc. We tend to mimic eye more than we tend to mimic cwm.



OK, to get down to some real details... what Ohad describes initially in his first example is a negation constraint at the most primitive level: scoped negation as failure in the open reasoning representation. This is, as he points out, generally used to assert a consistency constraint on the KB. When a reasoning context is assembled, the reasoner first queries to see if "false" can be proved directly, and rejects the KB as inconsistent if it can.



This is a rather *critical* aspect of tau to understand, as it is central both to how tau is able to "reject things" as they come into the system and to how it is able to retract or "delete" knowledge that it has previously accepted. When a new block arrives at a node, potentially containing some new facts/rules, it is run through a series of checks. The block and its contents are first speculatively added to the KB, and then the reasoner performs a consistency check. If the block itself is found to be invalid, then it is rejected and ignored, and the speculative addition is rolled back. If the block itself is consistent, then its contents are each checked for consistency in turn. If some rule in the incoming block contradicts what is in the KB (meaning some rule addition allows the KB to derive false), then the offending fact/rule that already existed in the KB is considered deleted (retracted) and is removed from the KB for any subsequent reasoning.



For a (trivial and contrived) example, let's say in block 5 it is asserted (by whatever means are established in the existing rules; this is an unimportant detail) that {<HMC> a tau:SuperAdmin; social:likes food:cheese.}, asserting that I am a dairy-loving user in a special role, and then in block 10 we decide that this special role is no good and assert the rule {?x a tau:SuperAdmin} => false. During processing of block 10, when it is accepted as a valid next block, the conflict between the new rule and the existing fact is discovered. The source of the conflict on the KB side is traced to the offending fact from block 5, and this fact is removed from the KB. The fact that I like cheese remains, block 5 is still considered otherwise valid, and the new rule from block 10 is inserted into the KB. If block 11 tries to re-assert that I am a super admin, it will fail its initial consistency check and will not be accepted. Now, if in a later block we introduce a rule of {{?x a tau:SuperAdmin} => false} => false, we could then re-assert me as a super admin. (These could even both happen within the same block, as long as the rule was in an earlier transaction in the block than the fact.)
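The retract-on-contradiction cycle in this example can be sketched in Python. This is only an illustrative model (ground triples and a single ground stand-in for the {?x a tau:SuperAdmin} => false rule; variables and the full block format are omitted), not tau's implementation:

```python
# Sketch of the speculative-add / consistency-check / retract cycle.
# Triples are tuples; inconsistency rules are (body, False) pairs whose
# bodies are ground for brevity. All encodings are illustrative.

def fires(rule_body, facts):
    """A ground rule fires if every triple in its body is present."""
    return all(t in facts for t in rule_body)

def offending_facts(facts, rules):
    """Facts that let some inconsistency rule derive false."""
    bad = set()
    for body, head in rules:
        if head is False and fires(body, facts):
            bad |= set(body) & facts
    return bad

def apply_block(kb_facts, kb_rules, block_facts, block_rules):
    """Speculatively add a block; reject it if its own facts are the
    source of a contradiction, otherwise retract the pre-existing
    facts that the (possibly extended) rule set now forbids."""
    facts = set(kb_facts) | set(block_facts)
    rules = list(kb_rules) + list(block_rules)
    bad = offending_facts(facts, rules)
    if bad & set(block_facts):
        return set(kb_facts), list(kb_rules)  # reject block, roll back
    return facts - bad, rules                 # retract offending KB facts

SA = ("HMC", "a", "tau:SuperAdmin")
CHEESE = ("HMC", "social:likes", "food:cheese")
ban = ([SA], False)  # ground stand-in for {?x a tau:SuperAdmin} => false

kb, rules = apply_block(set(), [], {SA, CHEESE}, [])  # block 5
kb, rules = apply_block(kb, rules, set(), [ban])      # block 10
print(sorted(kb))    # cheese fact survives; SuperAdmin fact retracted
kb, rules = apply_block(kb, rules, {SA}, [])          # block 11
print(SA in kb)      # False: the re-assertion is rejected
```

Running the three blocks reproduces the narrative: block 10's rule retracts the block-5 role fact while the cheese fact survives, and block 11's re-assertion fails its consistency check and is not accepted.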



This is a very low-level view of our constraint system, however, as it applies "directly" at the "base" RDF open reasoning semantics. In tau, we additionally define (within the open logic) a more constrained type system by representing dependent type signatures (to be exact, we will use dependent record types) on top of the RDFS semantics. Further, we will implement a type-check pass via a set of constraint rules in the open logic that derive false from ill-formed expressions in this type theory. As such, any KB formed from a context during these "closed" reasoning passes must entirely fit this structuring and must be total in its rule definitions. (Any well-formed closed-reasoning expressions are implicitly also well-formed in the open RDF semantics, btw.) By this mechanism, we establish reasoning over a full closure and "deny" at run-time the open world assumption during this higher-level type check. This allows the underlying "open" scoped negation as failure to sit "hidden" under this extended type theory, in the form of an uninhabitable Bottom type arising from taking the full closure of a type-checked theory KB.
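As a rough illustration (far simpler than dependent record types), a type-check pass can itself be viewed as constraint rules that derive false from ill-formed triples. The property signatures and class assignments below are invented for the sketch:

```python
# Sketch: a type-check pass as constraint rules deriving false.
# A (hypothetical) signature declares the class required of a
# property's object; a triple whose object belongs to a different
# class is "ill-formed" and trips the consistency check.

ranges = {"social:likes": "food:Food"}  # property -> required object class
types = {"food:cheese": "food:Food", "HMC": "tau:User"}  # term -> class

def well_formed(triple):
    s, p, o = triple
    required = ranges.get(p)
    return required is None or types.get(o) == required

def type_check(kb):
    """Derive false (return False) if any triple is ill-formed."""
    return all(well_formed(t) for t in kb)

print(type_check({("HMC", "social:likes", "food:cheese")}))  # True
print(type_check({("HMC", "social:likes", "HMC")}))          # False
```

The point is only structural: the "closed" layer's rejection of ill-formed expressions reduces, at the open layer, to the same derive-false-and-reject mechanism already described.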



This still brings us only to a "mid-level" view of complement within the tau design! Going a step further, we intend to implement over this closed and typed logic (which in turn is implemented over the open and polymorphic logic) the logic of a tractable OWL-DL subset. (Unlike the closed reasoning logic, which is enforced during any "on chain" reasoning, this OWL layer is an optional extension for the user to choose to employ, or not, at any level.) This OWL implementation will "hide" the Bottom type of the closed logic in the same way that the closed logic "hides" the SNAF of the underlying open logic. This allows a very natural and high-level constraint system over acceptable values, cardinality, equivalences, etc., for the user, without much thought needed on their part about the mechanical details of the complement reasoning that actually occurs at the lower layers.



Whew, that was a lot of information to burn through so quickly! Long story short, tau will provide any level of closure that you are looking for, with some caveats regarding network operation and state change over time, depending on which level of closure we are talking about. (For example, fully open world reasoning cannot also be "authed" on chain...) Further, don't fear a jump to a consistent OWL-DL ontology too much, as we do intend to support it.



(EDIT: It seems worth noting here that something like OWL-Full is, of course, "right out." HEH.)



Of course I have only scratched the surface of these details. I highly suggest checking out some of the reading materials (a couple of them in particular hit directly on these topics) linked from the #zennet IRC channel's topic. Also, you should feel encouraged to jump into the discussions there, as this will be the best way to learn about these design decisions and implementation details of the system. Never hesitate to ask us such questions, we actually love re-discussing this stuff. (It helps to cement our own understanding, as well!)




NILIcoin (Hero Member; Activity: 638; Merit: 500)
Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network
November 18, 2015, 09:51:23 PM, #323

Quote from: HunterMinerCrafter on November 18, 2015, 06:50:33 PM





This is a rather *critical* aspect of tau to understand, as it is central both to how tau is able to "reject things" as they come into the system and to how it is able to retract or "delete" knowledge that it has previously accepted. When a new block arrives at a node, potentially containing some new facts/rules, it is run through a series of checks. The block and its contents are first speculatively added to the KB, and then the reasoner performs a consistency check. If the block itself is found to be invalid, then it is rejected and ignored, and the speculative addition is rolled back. If the block itself is consistent, then its contents are each checked for consistency in turn. If some rule in the incoming block contradicts what is in the KB (meaning some rule addition allows the KB to derive false), then the offending fact/rule that already existed in the KB is considered deleted (retracted) and is removed from the KB for any subsequent reasoning.



For a (trivial and contrived) example, let's say in block 5 it is asserted (by whatever means are established in the existing rules; this is an unimportant detail) that {<HMC> a tau:SuperAdmin; social:likes food:cheese.}, asserting that I am a dairy-loving user in a special role, and then in block 10 we decide that this special role is no good and assert the rule {?x a tau:SuperAdmin} => false. During processing of block 10, when it is accepted as a valid next block, the conflict between the new rule and the existing fact is discovered. The source of the conflict on the KB side is traced to the offending fact from block 5, and this fact is removed from the KB. The fact that I like cheese remains, block 5 is still considered otherwise valid, and the new rule from block 10 is inserted into the KB. If block 11 tries to re-assert that I am a super admin, it will fail its initial consistency check and will not be accepted. Now, if in a later block we introduce a rule of {{?x a tau:SuperAdmin} => false} => false, we could then re-assert me as a super admin. (These could even both happen within the same block, as long as the rule was in an earlier transaction in the block than the fact.)



I am not a computer person, but this specific issue is so central on the most fundamental level (I actually spent an hour arguing with Ohad on this point, having him disagree with me, yet I got up the next day more convinced that I am right, so I'm trying to argue it again). I insist that tau, by its design as a clean slate, is thus an ever-changing rule machine, and as such it will be able to reverse any rule all the way back to the first rule that was established. And... that is eventually all up to computing power, not just logic! Logic will enforce removing one or the other contradicting fact once it arises, but what will determine which rule wins is computing power, not seniority. If more power chooses to run the code with one contradicting rule over the other, the next rule is set by the "stronger", not the "earlier".



As long as rules are not contradicting, it all goes smoothly; also, if a change only contradicts the logic, it will be fixed on the next block. But if my interest is one thing and it contradicts another party's interest, then the computing power becomes a voting power, and this is where tau begins to be really interesting.



In a way, tau is designed for "attacks": its design gives the answer to any attack, since one will need to invest power in keeping a rule from changing at each block if there is a contradicting interest. Thus, if most users are offended by a new rule, they will reject it on the next block, and the attacker will have to spend again the same power as in the previous block to keep the change from being reversed. Basically, such a system is immune to attacks.



But far greater than this, such a system is the key to the evolution of a true intelligence. Intelligence is not the ability to learn new rules but to change and exchange existing rules. It is also the most essential character allowing the most economic system of life itself, which is the freedom to make mistakes: assume a rule according to a prediction, and change it according to feedback on the results.



So I am yet to understand whether tau is such, but again, having tau start as a clean slate, it must be such, and that is because the building blocks are not rules, but rather logic carried on by computing power.



And if I am wrong and tau's rules cannot be reversed based on computing power, then it will dry up very fast and will have to restart every time, since it will never be able to fix itself or adjust to meet the best possible results for all participants (game-theory style), and will never have the incentive to put computing power into maintaining a voting power.

NILIcoin



Offline



Activity: 638

Merit: 500







Hero MemberActivity: 638Merit: 500 Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network November 19, 2015, 09:25:06 AM #324 Quote from: NILIcoin on November 18, 2015, 09:51:23 PM Quote from: HunterMinerCrafter on November 18, 2015, 06:50:33 PM





This is a rather *critical* aspect of tau to understand, as it is central both to how tau is able to "reject things" as they come into the system, and to how it is able to retract or "delete" knowledge that it has previously accepted. When a new block arrives at a node, potentially containing some new facts/rules, it is run through a series of checks. The block and its contents are first speculatively added to the KB, and then the reasoner performs a consistency check. If the block itself is found to be invalid, it is rejected and ignored, and the speculative addition is rolled back. If the block itself is consistent, then its contents are each checked for consistency in turn. If some rule in the incoming block contradicts what is in the KB (meaning the rule addition allows the KB to derive false), then the offending fact/rule that already existed in the KB is considered deleted (retracted) and is removed from the KB for any subsequent reasoning.



For a (trivial and contrived) example, let's say in block 5 it is asserted (by whatever means are established in the existing rules; the detail is unimportant) that {<HMC> a tau:SuperAdmin; social:likes food:cheese.}, asserting that I am a dairy-loving user in a special role, and then in block 10 we decide that this special role is no good and assert the rule {?x a tau:SuperAdmin}=>false. During the processing of block 10, when it is accepted as a valid next block, the conflict between the new rule and the existing fact is discovered. The source of the conflict on the KB side is traced to the offending fact from block 5, and this fact is removed from the KB. The fact that I like cheese remains, block 5 is still considered otherwise valid, and the new rule from block 10 is inserted into the KB. If block 11 tries to re-assert that I am a super admin, it will fail its initial consistency check and will not be accepted. Now, if in a later block we introduce a rule {{?x a tau:SuperAdmin}=>false}=>false, we could then re-assert me as a super admin. (These could even happen both within the same block, as long as the rule came in an earlier transaction in the block than the fact.)
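The acceptance/retraction flow described above can be sketched as a toy model. Everything here (the `KB` class, the triple and wildcard-pattern encoding, `apply_block`) is an illustrative invention, not tau's actual data model or reasoner:

```python
# Toy model of the block-processing flow: facts are (subject, predicate,
# object) triples, and a "prohibition" rule bans every fact matching a
# pattern, where None is a wildcard. Illustrative names only.

def matches(pattern, fact):
    return all(p is None or p == f for p, f in zip(pattern, fact))

class KB:
    def __init__(self):
        self.facts = set()
        self.prohibitions = []  # patterns that must match no fact

    def consistent(self, facts):
        return not any(matches(p, f)
                       for p in self.prohibitions for f in facts)

    def apply_block(self, new_facts=(), new_prohibitions=()):
        # Reject the block outright if its own contents contradict.
        if any(matches(p, f) for p in new_prohibitions for f in new_facts):
            return False
        # A newly accepted prohibition retracts any already-accepted
        # offending facts (the block-5 fact is removed, not block 5).
        for p in new_prohibitions:
            self.facts = {f for f in self.facts if not matches(p, f)}
            self.prohibitions.append(p)
        # New facts contradicting the (updated) rules reject the block.
        accepted = {f for f in new_facts if self.consistent([f])}
        if accepted != set(new_facts):
            return False
        self.facts |= accepted
        return True
```

In this sketch, a block carrying {?x a tau:SuperAdmin}=>false is modeled as the prohibition pattern (None, "a", "tau:SuperAdmin"): applying it retracts the block-5 role fact while the cheese fact survives, and a later block trying to re-assert the role fails its consistency check.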



I am not a computer person, but this specific issue is so central on a fundamental level (I actually spent an hour arguing with Ohad on this point, with him disagreeing, yet I got up the next day more convinced that I am right, so I am trying to argue it again). I insist that tau, by its design as a clean slate, is an ever-changing rule machine, and as such it will be able to reverse any rule all the way back to the first rule that was established. And that, eventually, is all up to computing power and not just logic! Logic will enforce removing one of two contradicting facts once a contradiction arises, but what determines which rule survives is computing power, not seniority. If more power chooses to run the code with one contradicting rule over the other, the next rule is set by the "stronger", not the "earlier".



As long as rules do not contradict, it all goes smoothly; if a change only contradicts the logic, it will be fixed in the next block. But if my interest contradicts another party's interest, then computing power becomes voting power, and this is where tau begins to be really interesting.



In a way, tau is designed for "attacks": its design answers any attack, since one will need to invest power in keeping a rule from changing at each block whenever there is a contradicting interest. Thus, if most users are offended by a new rule, they will reject it in the next block, and the attacker will have to spend the same power again as in the previous block to keep the change from being reversed. Basically, such a system is immune to attacks.



But far greater than this, such a system is the key to the evolution of true intelligence. Intelligence is not the ability to learn new rules but to change and exchange existing rules. It is also the character most essential to the most economic system of life itself: the freedom to make mistakes. Assume a rule according to prediction, and change it according to feedback on the results.



So I have yet to understand whether tau is really such, but having tau start as a clean slate, it must be. That is because the building blocks are not rules, but rather logic carried out by computing power.



And if I am wrong, and tau's rules cannot be reversed by computing power, then it will dry up very fast. It will have to restart every time, since it will never be able to fix itself or adjust to reach the best possible results for all participants (game-theory style), and there will never be an incentive to put computing power into maintaining voting power.


Let's take, for example, a rule about making rules, which will most likely be one of the first rules to be tested. The rule is: "In order to change a rule, over 70% have to vote for it." Now let's take it step by step. This voting process itself has many layers, but the most fundamental layer is for someone who runs the code to submit a change; thus only if 70% submit the change to the code can it be changed. Now that is all nice and dandy, and from now on, even to change the 70% constituting rule, we will need 70%. Well... wrong!!! Our intuition asserts that, but it is not true. If the 70% rule had been part of the genesis code of tauchain, then to overrule it you would have to rewrite the tau code. However, since it is a rule that is not part of the tau code, it can be changed within the framework of the chain which constituted it!

If anything, this is the most important feature of tau: the fact that the code itself is not bound to any rule, but only to a logic framework and the computing power that runs it.

The 70% rule can be overruled by an offer, say 35%, via a 51% computing-power "attack". But unlike in bitcoin, such an "attack" is part of the fair game. It is in fact the most important factor of the game. Unlike in bitcoin, it is not devastating to the entire ecosystem, but rather the very incentive that props up the entire ecosystem.

So to recap: just like in life itself, tau will end up running on the verge of 50% agreement at all times. While power will tend to concentrate to reach that benchmark, it will be decentralized down to the last user. But more so...
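To make the point concrete, here is a deliberately tiny, hypothetical sketch (nothing here is tau's actual voting or consensus mechanics) of a rule set whose amendment threshold is itself an amendable rule rather than genesis code:

```python
# Toy, hypothetical sketch: the amendment threshold is itself a rule,
# stored as ordinary data, so the procedure it defines can change it.
# Nothing here reflects tau's actual voting or consensus mechanics.

class Chain:
    def __init__(self):
        # The "rule about making rules": changes need support meeting
        # the current threshold, initially 70%.
        self.rules = {"amend_threshold": 0.70}

    def propose(self, key, value, support):
        # 'support' is the fraction of voting (computing) power backing
        # the proposal; it passes only if it meets the threshold.
        if support >= self.rules["amend_threshold"]:
            self.rules[key] = value
            return True
        return False
```

Because the 70% threshold is stored as ordinary rule data, a 71%-supported proposal can lower it to 35%, after which a 40%-supported proposal passes; only rules baked into genesis code would be immune to this.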



This brings us to the business model of tauchain: the Agoras tokens of the decentralized supercomputer!

This Agoras-fueled mechanism actually allows each user on the tauchain to back his/her vote with computing power.

As explained before, computing power is eventually the ruling power. Thus, by using the Agoras platform, I, as a user, will be able to direct all my power to support my vote.



Yes, there is one concern here: a concern that all my resources will go to "fight" for my voice. But I believe that this will happen only at times of chaos. At most times, the entire ecosystem will operate in some balance, and most users will be able to allocate their resources to different tasks on the network, tasks which meet their broader goals. However, at a time of need, they will be able to pull them out in order to support an essential change in the network's constituting rules. The simplicity of reallocating power on the tauchain network through the Agoras layer will make the use of that supercomputer most appealing to all tauchain participants, thus a good investment in the token.

HunterMinerCrafter



Offline



Activity: 434

Merit: 250







Sr. MemberActivity: 434Merit: 250 Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network November 19, 2015, 05:25:28 PM #325 Quote from: NILIcoin on November 19, 2015, 09:25:06 AM The 70% rule can be overruled by an offer, say 35%, via a 51% computing-power "attack". But unlike in bitcoin, such an "attack" is part of the fair game. It is in fact the most important factor of the game. Unlike in bitcoin, it is not devastating to the entire ecosystem, but rather the very incentive that props up the entire ecosystem.

So to recap: just like in life itself, tau will end up running on the verge of 50% agreement at all times. While power will tend to concentrate to reach that benchmark, it will be decentralized down to the last user. But more so...



I'll keep my reply to this brief(ish) here. Ultimately this is part of the (much bigger) discussion from the channel about the "implicit vote" aspect of the system. You are not exactly wrong in anything that you've put, but these are a bit "deeper" details than what Klosure was really looking for, I think.



We generally like to keep the discussion "simple" in our hypothetical scenarios about post-genesis operation, by assuming that consensus proceeds normally by only "on chain mechanics." You are correct in that such mechanics are not really the totality of the story - it is not actually as simple as "vote per the rules" alone. There are, of course, actually two additional ways besides "per the rules" that rules can be manipulated!



First, a sufficient number of users can simply choose to diverge their rule selection from that of the rest of the network. In other words, if a majority of users decide to stray from the protocol as decided upon mechanically, and instead follow the protocol as decided upon personally, then the network will "follow" the users, despite the new protocol direction contradicting the systematically accepted rules. This is the scenario that you describe, and it is what we call the "implicit vote" mechanism. Technically this same mechanism exists, verbatim, in bitcoin: if suddenly a majority of users decided to block the addresses of a dominant miner, it wouldn't matter how much hash-rate that miner ramped up; they could not continue to produce btc blocks. This is a form of soft fork.



Second, there is a (quite related) aspect of tau which we refer to as "going mao." This is where the structuring of the rules (at some context) reaches a point at which the semantics of that context become such that it is no longer the case that all participants in that context are allowed to know (have divulged to them) all rules within the context. By this mechanism, also, the system can be structured such that rule-selection decisions are no longer necessarily entirely "on chain" and per protocol. This one does not have a direct parallel in bitcoin, except as a loose comparison to a special sort of hard fork. For tau, this is generally considered an "ultimate fail-safe" mechanism for controlling contexts, but is usually seen as something of a "nuclear devastation" option. It allows for the ability to "revert" a context's control no matter what scenario has occurred within the rules, but it also leaves the knowledge base of that context (after the event) in a state in which the system can no longer offer the same level of assurances to its users about the safety and security of their operation within the network. This is generally regarded as a last-resort option for context survival, and the general consensus among our team is that users should be quite wary of participating in contexts which have gone this route.



(Of course, as in bitcoin, there is also always the third option of straight up hard forks of the core! This would also be the only way to change *genesis* rules, and starting from an alternate genesis hash is considered a fork of the core. However, this is a bit different from what we are really talking about here... which is rule manipulation "on chain.")



However, it is much easier to conceptualize and discuss the basic mechanics of tau without going into the details of these two "context panic button" aspects, particularly for new people who do not have the same background/exposure to our discussions. It is generally assumed that these two aspects of the mechanics will only rarely be of concern, and will not be the normal course of operation for tau contexts. (We might be wrong. It might turn out, somehow, that users decide that in their post-genesis world it is actually better, for some unknown-to-us-now reason, to leverage the implicit vote and/or changing from nomic to mao as a common matter of course. I can't imagine that this will be the outcome, as there are a lot of caveats and potential "dangers" to users that arise out of both of these mechanisms, so personally I suspect that these will always be regarded as nothing more than "fail-safes" reserved for only the most extreme circumstances within a context, and will rarely be employed.)



Quote from: NILIcoin on November 19, 2015, 09:25:06 AM Yes, there is one concern here: a concern that all my resources will go to "fight" for my voice. But I believe that this will happen only at times of chaos. At most times, the entire ecosystem will operate in some balance, and most users will be able to allocate their resources to different tasks on the network, tasks which meet their broader goals. However, at a time of need, they will be able to pull them out in order to support an essential change in the network's constituting rules. The simplicity of reallocating power on the tauchain network through the Agoras layer will make the use of that supercomputer most appealing to all tauchain participants, thus a good investment in the token.



This speaks quite directly to another point that comes up in the IRC from time to time: allocation of resource. Unlike in btc, where mining is "easy" in that there is only one task to direct all resource toward so mining rigs are basically "set it and forget it," on tau there will always be a decision point when volunteering resources to the network. The contributor always has to make some choice of "where and how" to apply their contributed resource. The canonical example is whether to put work toward deepening the root chain with your context pegged, versus work toward deepening your context's chain itself. Indeed, as you point out, it will be the balance of these trade-offs in resource contribution selections that will keep tau well stabilized and "directed" over time. (At least that is the hope!)



However, I am not sure that I agree entirely with your analysis of how this affects the Agoras token. Specifically, I don't think there is ever a realistic circumstance (assuming all actors are rational) where the agoras resources could be meaningfully "redirected" as resource contribution back into tau itself in order to influence root rules. The problem that I see, here, is that if there is some decision point to be made on root, the *original* contributors of the resource into Agoras will divert all of this resource toward operation on root themselves, in hopes of influencing this root chain decision point toward *their* favored outcome, when possible. This means that such resource will not be available to Agoras users at that time, for them to leverage toward directing root. In other words, the person providing the resource into Agoras will have to decide if it is ultimately better for them to continue to rent that resource into agoras, effectively "selling" their influence over the rule direction, or if it is better to divert that resource themselves and ensure that any influence it can have is directed in the way that they desire. I would imagine that for any really meaningful change to tau, itself, at root (which we expect will be an extremely rare event anyway) the decision would almost always be to direct that resource at root themselves instead of continuing to rent. As such, I'm not sure that it is safe to assume any direct impact from such occurrences on the Agoras token's market value at all.



But then again, I could be entirely wrong on this point. This is not investment advice, and everyone should do their own considerations of this, and make their own decisions about such a matter!



Trying to speculate heavily, now, on the post-genesis nature of tau is, IMO, likely a fool's errand. It is very difficult to say what the dynamics will actually end up being in any longer term, and what a post-tau-genesis world will actually begin to look like, beyond simply "different."



(Woops, that did not end up as being very brief at all...)

lujunqing7890



Offline



Activity: 164

Merit: 100









Full MemberActivity: 164Merit: 100 Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network November 22, 2015, 02:14:09 PM #327 I do not understand the Agoras token: why is it already being sold and traded on Bittrex? We do not know how much BTC the IPO raised in total, whether it has already closed, how many Agoras tokens were sent out, how much of that money is ready to go into tau-chain, or how many tokens are in the hands of the team or reserved for the community. Starting trading like this does not look very good. https://bitcointalk.org/index.php?topic=1733140.msg17340945#msg17340945

ohad



Offline



Activity: 897

Merit: 1000



http://idni.org







Hero MemberActivity: 897Merit: 1000http://idni.org Re: Tau-Chain and Agoras Official Thread: Generalized P2P Network November 26, 2015, 03:23:46 PM

Last edit: November 27, 2015, 10:22:47 AM by ohad #330 Yesterday I gave a talk at a Bitcoin conference in Tel Aviv; it will go up on YouTube soon.



I encountered a common question to which I'd like to dedicate a post here: does tau really promise efficient autoproving? Do we have a P-time algorithm for NP-complete problems?

Unfortunately, no. Proof search is a hard problem, exponential-time in general, no matter which logic we pick, be it Turing machines, Martin-Löf type theory, or Euclidean geometry.



Yet, tau is not about such uses. It is not about software being able to prove everything automatically. Proof in general is indeed infeasible. We do not expect the autoprover to prove fully automatically that, for example, any given code meets (or does not meet) any given requirements, at least not in a lifetime (though theoretically it is always possible given long enough time). Otherwise we could just solve all unsolved math problems.



Nevertheless, the programmer has to "lead" the prover to the proof, and "miss" only "small enough" details. If you know that your code meets the requirements, you can write it in a way that lets the prover find the proof quickly. One cannot expect to write code that depends on very deep insights (like the Goldbach conjecture) while hiding them from the prover. At one extreme, code can be as explicit as in Turing-machine languages, leaving almost no room for inference. In practice, automatic proving is meant to be used in ways computers can handle; provers are very useful even though proof search is a hard task.

Conversely, you can write a proof yourself, or add some lemmas to make life easier for the autoprover.



The writer of the code should check how many steps it takes to prove the claimed assertions about it, keeping in mind that users won't use that code if their autoprover is exhausted after some number of iterations.
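A step budget of this kind can be illustrated with a toy forward-chaining prover over Horn-style rules (purely a sketch; nothing here reflects tau's actual prover or rule format):

```python
# Toy step-budgeted prover: forward chaining over Horn-style rules.
# Each rule is (premises, conclusion). The prover gives up once it has
# spent its budget of rule checks, so a claim is only accepted if a
# derivation is found in time. Illustrative only; not tau's prover.

def prove(facts, rules, goal, max_steps):
    known = set(facts)
    steps = 0
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            steps += 1
            if steps > max_steps:
                return False  # budget exhausted before a proof was found
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return goal in known
```

With a chain of ten implications a0 => a1 => ... => a10, a budget of five rule checks gives up, while a larger budget finds the derivation; an author who wants the claim accepted must keep the derivation within the budget users are willing to spend.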



Verification of a proof, though, is linear: it takes time proportional to the length of the proof.
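By contrast with proof search, checking a given proof needs only one pass over its lines. Here is a minimal sketch for a Hilbert-style system, where each line is either an assumed axiom or follows by modus ponens from two earlier lines; the tuple encoding is purely illustrative:

```python
# Minimal linear-time proof checker. Implications are encoded as
# ("->", A, B); atoms as strings. Each proof line is (formula,
# justification), where the justification is "axiom" or ("mp", i, j),
# meaning modus ponens from earlier lines i (A) and j (A -> B).

def check_proof(axioms, proof):
    derived = []
    for formula, justification in proof:
        if justification == "axiom":
            if formula not in axioms:
                return False
        else:
            _, i, j = justification
            # Both cited lines must already have been derived.
            if not (i < len(derived) and j < len(derived)):
                return False
            a, imp = derived[i], derived[j]
            # Line j must be exactly (a -> formula).
            if imp != ("->", a, formula):
                return False
        derived.append(formula)
    return True
```

The checker does constant work per proof line, so trusting a supplied proof costs time linear in its length, even though finding that proof may have required exponential search.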



The difference between a consistent and an inconsistent logic is this: given a proof, can I trust that the proved statement is indeed true? Under Turing-machine logic, both true and false claims can be proved. Tau-Chain & Agoras