I'm about to read Stephen Wolfram's 2002 book A New Kind of Science (https://www.wolframscience.com/nks/). Has anyone read it yet?
top -> sum
sum -> sum + fact
sum -> fact
fact -> fact * primary
fact -> primary
primary -> [0-9]+
(
(
(
(
[0-9]+ -> primary
) | (
fact * primary
)
) -> fact
) | (
sum + fact
)
) -> sum
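Either way, it is the same arithmetic grammar. As a quick illustration (my own sketch, not part of any project mentioned here), a recursive-descent evaluator for it fits in a few lines of JavaScript, with the left-recursive rules turned into loops:

```javascript
// Minimal evaluator for:  top -> sum,  sum -> sum + fact | fact,
// fact -> fact * primary | primary,  primary -> [0-9]+
// The left recursion in sum and fact becomes a while loop.
function evalTop(input) {
  let pos = 0;
  function primary() {
    const m = /^[0-9]+/.exec(input.slice(pos));
    if (!m) throw new Error("digit expected at " + pos);
    pos += m[0].length;
    return parseInt(m[0], 10);
  }
  function fact() {
    let v = primary();
    while (input[pos] === '*') { pos++; v *= primary(); }
    return v;
  }
  function sum() {
    let v = fact();
    while (input[pos] === '+') { pos++; v += fact(); }
    return v;
  }
  const v = sum();
  if (pos !== input.length) throw new Error("trailing input at " + pos);
  return v;
}
```

For example, `evalTop("1+2*3")` evaluates the factor `2*3` first, so precedence falls out of the grammar shape itself.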
(twice <x>) -> (<x> + <x>)
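And the `twice` rule can be sketched as a one-rule rewriter over s-expressions modeled as nested arrays (again, a toy illustration of my own, not anyone's actual code):

```javascript
// One-rule rewriter: (twice X) -> (X + X), applied bottom-up.
// S-expressions are modeled as nested arrays, e.g. ["twice", "a"].
function rewrite(expr) {
  if (!Array.isArray(expr)) return expr;
  const e = expr.map(rewrite);          // rewrite children first
  if (e.length === 2 && e[0] === "twice") {
    return [e[1], "+", e[1]];           // (twice X) -> (X + X)
  }
  return e;
}
```

Because children are rewritten first, nested occurrences such as `(twice (twice b))` expand fully in one call.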
Are you suggesting we put trees in cells? :)
I'm not against, but I don't see how.
↑ e = o
// if the north neighbor contains 'e', take its value with 'e' replaced by 'o'
3 f , e = o
// if 3 'f' are in the neighborhood, replace 'e' by 'o' here
e = o
// replace 'e' by 'o' here
3 f , ↑ e = o
// if 3 'f' are in the neighborhood and the north neighbor contains 'e', take its value with 'e' replaced by 'o'
3 f , + e
// if 3 'f' are in the neighborhood, create 'e' here
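To make the semantics concrete, here is a toy interpreter (my own sketch; the function name and the grid-of-strings representation are assumptions, not part of the proposal) for a single rule of the `↑ e = o` kind:

```javascript
// Toy step for one rule of the form "↑ from = to":
// if a cell's north neighbor contains `from`, the cell becomes the
// neighbor's value with every `from` replaced by `to`; otherwise it
// keeps its own value. The grid is an array of rows of strings.
function stepNorthRule(grid, from, to) {
  return grid.map((row, r) =>
    row.map((cell, c) => {
      const north = r > 0 ? grid[r - 1][c] : null;
      if (north !== null && north.includes(from)) {
        return north.split(from).join(to);  // copy with substitution
      }
      return cell;                          // rule does not fire
    })
  );
}
```

So starting from `[["e"], ["x"]]`, one step of `↑ e = o` leaves the top row alone and turns the bottom cell into `"o"`.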
I'm not against, but I don't see how.
I just brought this out of mothballs a few hours ago. Coincidentally, rewriting then came up in this thread, so I felt like sharing it. I'm giving it a second thought now, wanting to replace the whole logic behind the esperas (v-parser) (https://aidreams.co.uk/forum/index.php?topic=12348.msg62179#new) project.
I'll give it a week or two to be sure I really like it.
The whole idea is about defining a basic s-expression-based alternative output format (a kind of HTML in my case). Then the grammar is extended by the user, with functions that transpile to this base output. Because there is a parser under the hood, functions may have any syntax we want. It seems that functions can be fully typechecked using only the parser.
so I'd have the following:
</
</ node1 </ using grammar1 /> />
...
content1
...
</
</ node1-1 </ using grammar1-1 /> />
...
content1-1
...
</
... tree goes on ...
/>
/>
</
</ node1-2 </ using grammar1-2 /> />
...
content1-2
...
</
... tree goes on ...
/>
/>
...
/>
Grammars are cumulative, which means grammars `1-1` and `1-2` are stacked on top of grammar `1`. In other words, grammars are applied cumulatively to all child nodes. And since the grammars are full term rewriting systems, they are Turing-complete, capable of doing any computation needed for the e-teoria project (https://aidreams.co.uk/forum/index.php?topic=14404.msg62724#new).
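That stacking can be sketched with grammars modeled as plain JavaScript rule maps, where a child grammar inherits from its parent via the prototype chain (a rough illustration of the cumulative lookup, not the actual esperas mechanism):

```javascript
// Grammars as rule maps; a child grammar stacks on its parent via
// prototype inheritance, so rule lookups fall through to ancestors.
function extendGrammar(parent, newRules) {
  return Object.assign(Object.create(parent), newRules);
}

// grammar1-1 adds a rule but still sees everything grammar1 defines.
const grammar1  = { greet: () => "hello" };
const grammar11 = extendGrammar(grammar1, { shout: () => "HEY" });
```

A node using `grammar1-1` can call both `shout` (its own rule) and `greet` (inherited from `grammar1`), which is exactly the "applied cumulatively to all child nodes" behavior.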
3 f, ↑ bar = baz
[
{
"proximity": [
{
"number": "3",
"content": "f"
}
],
"action": {
"direction": "↑",
"original": "bar ",
"replacement": "baz"
}
}
]
rulebook
= _ r:rule* { return r; }
_
= [ \t]*
rule
= p:proximity* _ a:action _ [\r\n]* _ {
return {
proximity: p,
action: a
};
}
proximity
= n:number _ c:content _ ',' _ {
return {
number: n,
content: c
};
}
action
= replace / create
create
= '+' _ c:content _ {
return {
type: "create",
content: c
};
}
replace
= d:direction? _ original:content _ '=' _ replacement:content _ {
return {
direction: d,
original: original,
replacement: replacement
};
}
number
= d:[0-9]+ { return d.join(''); }
content
= c:[^↑↓→←,=+\r\n\(\)]+ { return c.join(''); }
direction
= ('↑' _ / '(up)' _ / '(north)' _) { return '↑'; }
/ ('↓' _ / '(down)' _ / '(south)' _) { return '↓'; }
/ ('→' _ / '(right)' _ / '(east)' _) { return '→'; }
/ ('←' _ / '(left)' _ / '(west)' _) { return '←'; }
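For readers without PEG.js at hand, here is a hand-rolled sketch that accepts the same one-line rule syntax and produces the same JSON shape (a rough equivalent of the generated parser, not the real thing; unlike the PEG version it trims the whitespace around `original`):

```javascript
// Hand-rolled equivalent of the PEG rulebook above: parses a line like
// "3 f, ↑ bar = baz" into { proximity: [...], action: {...} }.
function parseRule(line) {
  const parts = line.split(',').map(s => s.trim());
  const actionSrc = parts.pop();                 // last segment is the action
  const proximity = parts.map(p => {
    const m = /^(\d+)\s+(.+)$/.exec(p);          // "3 f" -> number + content
    return { number: m[1], content: m[2] };
  });
  let action;
  if (actionSrc.startsWith('+')) {               // "+ e" -> create action
    action = { type: "create", content: actionSrc.slice(1).trim() };
  } else {                                       // "↑ bar = baz" -> replace
    const m = /^([↑↓→←])?\s*([^=]+)=\s*(.+)$/.exec(actionSrc);
    action = {
      direction: m[1] || null,
      original: m[2].trim(),
      replacement: m[3].trim()
    };
  }
  return { proximity, action };
}
```

Calling `parseRule("3 f, ↑ bar = baz")` yields the same structure as the JSON example above, modulo the trimmed trailing space in `original`.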
do you think this toy of mine has any chance of being relevant?
Yes, you're doing great Zero. The only thing that really matters though is whether or not it interests you. Don't ever do things to impress anyone else; just do them to improve yourself. As long as there are enough people looking under every rock, again and again, someone somewhere is going to make the next useful discovery. It's a joy to be part of the process.
If you don't get it, then why don't you ask? And am I supposed to sell things? I don't think so. In fact, you're not answering my question.
And I can also say, while I'm at it, that I'm almost ready to take you off my ignore list. Maybe the village madman is still part of the village; he's one of its components. And your constant meaningless blah blah still has value, after all. It's an extreme creativity layer in our proceedings, annoying sometimes, since you're everywhere at any time, but still a source of inspiration, just like random shapes of clouds can be inspiring to the man who looks at them during a quiet moment. Your inputs are valuable; I'm starting to understand that.
But my question was a real, serious one: does this direction, an SRS/CA mix, look like a path that deserves to be explored, in the hope that it could prove fertile in terms of algorithms related to AI? And I was asking this question of the people here who create implemented tools, namely ivan.moony, infurl, korrelan, Art, Fred, and of course our elusive female engineer :)
Hey big guys, do you think this toy of mine has any chance of being relevant?
Oh, one of these big silent moments again. It's ok.
Am I crazy? And more importantly, does that disqualify me as a good input provider?
Life being short, I jump and jump again, like an enfant terrible.
People like you, korrelan and infurl, have incredibly advanced pieces of software on your computers. Why don't you share them?
I, too, am aware that life is short, but somehow I draw the opposite conclusion. I don't think I have time to dissipate myself by sampling everything. I need to focus. Because, whichever path I choose, it's likely to demand a long input of hard work.
1. Infurl beat me to the first reason. Maintaining a functional open-source project is more work than writing code for yourself ... especially when the project is nowhere near finished, and the code base is in a state of constant churn. Trying to, for instance, preserve backward-compatibility between different versions of all the modules would not be fun right now. And if anybody saw how messy and incomplete the code actually is, I'd be embarrassed.
A day may come when everything is tidied up and stable and has a documentation package. But it is not this day.
2. Acuitas has never really been intended as a tool. Not that it would be impossible for the software to do practical work, but that's not what it's primarily for. If I were merely inventing a new type of wrench, then I suppose I wouldn't mind stamping out hundreds of copies and handing them around. But Acuitas has aspects of ... an art piece, maybe. Releasing the code under present circumstances would be kind of like releasing the first half of my unpublished novel, and inviting other people to write the ending. No thanks.
Some projects just aren't meant to be collaborative, and this is one of them. I prefer to keep creative control.
3. IF my work ever does manage to grow into something innovative and great, then I would be concerned about the possibility of its being misused (or maybe even mistreated). I love humanity, but I don't trust it! So in that case, I'd want to be cautious about who got to see or expand upon the code. I'd pick people whose philosophical/moral alignment and personal character I admired, not just people of adequate skill.
Zero, I see your work as mildly interesting: at least you're trying something productive. I saw your Levenshtein distance algorithm the other day and thought it was an interesting idea to apply it to word sequences instead of letters, but it would still have very limited uses (due to Levenshtein algorithms being what they are). I almost commented on it but did not, because 1: I prefer to make things rather than talk about making things, and 2: every word I type exacerbates the RSI in my fingers, so I have to pick my battles.
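For context, moving Levenshtein distance from characters to word sequences is a tiny change to the standard dynamic program: the cells compare whole tokens instead of letters. A minimal sketch (my own illustration, not Zero's actual code):

```javascript
// Levenshtein distance over arrays of words instead of characters:
// the usual DP table, but each cell compares whole tokens.
function wordLevenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0)      // first row/column: edit from empty
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;  // token match or substitute
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                 // delete a word
        dp[i][j - 1] + 1,                 // insert a word
        dp[i - 1][j - 1] + cost           // substitute / keep a word
      );
    }
  }
  return dp[a.length][b.length];
}
```

So `["the","cat","sat"]` vs `["the","dog","sat"]` has distance 1: one word substitution.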
It's true that you don't seem to break out of an endless cycle of experiments, but at every experiment you do gain something. I'm also an artist and know a lot of creatives. Whenever I get close to finishing a drawing, I stop, the challenge is over, and it just sits there for 5 years until I decide to just get it over with and draw the last three lines. Many creatives get new ideas faster than they can finish the old ones; it's a common problem, but they get better while doing it nonetheless.

Every piece of code you type becomes another tool that might solve a later problem. I once wrote a stupid piece of code to detect insults from Loebner Prize judges; it was a waste of time in my eyes. But now an expansion of that code's principles runs my AI's ethical subroutine.

It's still too crude, the kind of crude that might make you stop and try something else, but you could also think of it as a placeholder: I know it's not good enough, and I have an idea for a better system to replace it with, but until then it does a reasonable job, and provides practical experience that will help design that better system later. It doesn't have to be perfect from the get-go; you can always change parts that don't work or redo the whole system if you want. I've overhauled my AI's knowledge structure five times. Every time took me two months, but I would not have figured it out without the insights I gained from using the earlier versions.
As to the question of sharing, the effort doesn't gain me much. It could take months to explain everything I've programmed, longer if people are going to ask questions, and I'd rather use that time to work. Secondly, the field of AI attracts a lot of crazy people, and I've had my fill of them when I shared my progress in the past. I don't need that kind of attention.
My reasons are all of the above.
With regards to your input into the site and multiple personal projects… You are looking for something, trying to work something out, and you’re not sure what it is.
Any ‘thought’ is composed of sub-fragments/facets; a base set of general bits/tools is recombined to create other thoughts. Each project you start will have bits in common with previous projects but be combined differently. You stop a project when you have satisfied your curiosity, when you have gained insight. You then use what you have learned from all your experience so far to think through the next iteration.
You might not be consciously aware of the process, or you may be misinterpreting it, but your subconscious knows exactly what it’s doing… it’s working towards the goal… keep it up.
I'm skeptical of any approach that relies too heavily on "emergence," because to me it reeks strongly of wishful thinking, or "magic." "Let me just get these simple processes going and hope that something interesting and complex falls out!" But this is a mere personal intuition; it's not as if I've tried this sort of approach and found out it didn't work. So if you really want to know, the thing to do is finish it and find out. AGI doesn't exist yet, so any advice that anyone gives you about how to reach it will be highly speculative.
A lot of this work does not involve random initial conditions but rather precise initial layouts, designed to produce specifically one cause-effect sequence. These simple patterns are then combined in very precise ways to produce more complex patterns. It's another way to code, really, nothing like "wishful thinking".
You can start simple and make interesting, complex things appear by iteratively keeping the best, mutating a little, keeping the best again, mutating again, and so on.
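That loop, in its most minimal form, is just greedy mutation and selection (a toy illustration of the idea, not anyone's actual system):

```javascript
// Minimal "keep the best, mutate, repeat" loop: evolve an integer
// toward a target by random ±1 mutations and greedy selection.
function evolve(target, steps) {
  let best = 0;
  const fitness = x => -Math.abs(target - x);    // closer is better
  for (let i = 0; i < steps; i++) {
    const mutant = best + (Math.random() < 0.5 ? 1 : -1);  // mutate a little
    if (fitness(mutant) > fitness(best)) best = mutant;    // keep the best
  }
  return best;
}
```

With enough steps this climbs straight to the target; the interesting question in the thread is what happens when the fitness function itself is too simple.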
I read an article once which claimed that genetic algorithms only work out well if you put as much complexity into the environment, fitness function, survival challenges, etc. (the thing that performs your "which one is the best?" evaluation) as you want to see appear in the "organism" that is being optimized. The point being that there's no way to get something for nothing, even from evolution.
The free energy principle tries to explain how (biological) systems maintain their order (non-equilibrium steady-state) by restricting themselves to a limited number of states.[1] It says that biological systems minimise a free energy function of their internal states, which entail beliefs about hidden states in their environment. The implicit minimisation of variational free energy is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception in neuroscience,[2] where it is also known as active inference.
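For reference, the variational free energy being minimised can be written in its standard form (from the general literature, not specific to this thread):

```latex
% F is the expected energy under the recognition density q(s)
% minus its entropy; equivalently a KL divergence plus surprise.
F = \mathbb{E}_{q(s)}\!\left[-\ln p(o, s)\right] - \mathbb{H}\!\left[q(s)\right]
  = D_{\mathrm{KL}}\!\left[q(s) \,\middle\|\, p(s \mid o)\right] - \ln p(o)
```

Since the KL term is non-negative, minimising $F$ over $q$ both fits $q(s)$ to the posterior $p(s \mid o)$ and upper-bounds the surprise $-\ln p(o)$, which is the sense in which the system "restricts itself to a limited number of states."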
It's a very interesting point; however, I believe there's a hole in this theory. In true randomness, you necessarily have maximal complexity somewhere. For instance, in the universe you have Earth, which is a very complex spot. The hole I'm talking about is a matter of scope.
You have two elements: on the one hand the mutating thing, and on the other, the "what's best" function. I think what they say in this article is that if you remove randomness from one of these elements, the whole becomes only as complex as the less complex element. Since the experiments include hand-made material, so to speak, they obtain nothing more than this material's complexity. But if everything were random, like in the universe, then you'd have maximal complexity, and complex animals like humans.
What do you think?
Funny thing: when my 5 yo daughter saw your avatar this morning, she said: "it's mommy's eye"... like she knew it was the eye of a female. To me it looks more like the eye of a T-Rex :)