Video / Re: The Action Lab - I Broke ChatGPT With This Paradox
« Last post by ivan.moony on Today at 01:54:59 pm »
The problem with NNs is that they don't distinguish lies from the truth. They just learn all the input->output pairs without critical judgment, possibly with some good generalization magic on top.
To detect lies, one approach may be to build a symbolic model of the stories told. Feeding statements one by one, we can detect whether each new statement contradicts the already accepted ones. Of course, any combination of statements may turn out to hold the truth, but the accepted combination should be mutually non-contradicting (in the sense of theorem proving).
When a contradicting statement is detected, another problem arises: deciding whether to keep the current theory and reject the new statement, or to start building a new theory based on the new statement.
So, I believe that's a missing piece required to build an AGI: a non-contradicting model of the world.
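The statement-by-statement check described above can be pictured as a toy consistency filter over propositional literals. This is only a minimal sketch with made-up names; a real system would need a richer logic and an actual theorem prover, as noted above:

```python
# Toy incremental contradiction detector over propositional literals.
# A literal is a string like "socrates_is_mortal" or its negation
# "~socrates_is_mortal" (representation is illustrative only).

def negate(lit):
    """Return the negation of a literal."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def feed(kb, statement):
    """Accept the statement if it doesn't contradict the kb, else reject it.

    This implements the 'keep the current theory, reject the new
    statement' policy; the alternative would be to branch off a new
    theory seeded with the rejected statement.
    """
    if negate(statement) in kb:
        return False  # contradiction detected: statement rejected
    kb.add(statement)
    return True

kb = set()
print(feed(kb, "socrates_is_mortal"))   # True: accepted
print(feed(kb, "~socrates_is_mortal"))  # False: contradicts the kb
```

Note this only catches direct negations; detecting contradictions that follow from chains of inference is where full theorem proving comes in, and that is also where the hard choice between competing theories shows up.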