start ::= e //s-expression
e ::= a // atom
| a e // sequencing
| (e) // grouping
a ::= c // constant
| {q} // query
q ::= a q // querying to the right of a
| q a // querying to the left of a
| ? // return query result
| ! // return query complement
| (q) // grouping
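// example (with hypothetical constants c1, c2, c3): c1 {c2 ?} c3
// parses as atom, query block, atom; the query {c2 ?} asks for the
// result standing to the right of c2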
(equal ?x ?x)
(<=> (equal ?x ?y) (equal ?y ?x))
(<=> (and (equal ?x ?y) (equal ?y ?z)) (equal ?x ?z))
(not (parent ?x ?x))
(not (grandparent ?x ?x))
(<=> (or (father ?x ?y) (mother ?x ?y)) (parent ?x ?y))
(<=> (or (grandfather ?x ?y) (grandmother ?x ?y)) (grandparent ?x ?y))
(<=> (and (parent ?x ?y) (mother ?y ?z)) (grandmother ?x ?z))
(<=> (and (parent ?x ?y) (father ?y ?z)) (grandfather ?x ?z))
(father carol robert)
(mother carol alice)
(father alice bill)
(mother alice mary)
(father robert tom)
(mother robert sally)
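// who are carol's grandparents? (by the rules above, ?z should bind
// to tom, sally, bill and mary)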
(grandparent carol ?z)
// integer definition
(
Int {
Zero |
(One {Int})
}
)
// increment and decrement by one
{
Use (
{Int}
(x {Int})
) in (
((++ {x}) {One {x}})
((-- {One {x}}) {x})
)
}
// addition and subtraction
{
Use (
(x {Int})
(y {Int})
{
Use (
(incremented {Int})
(decremented {Int})
) in (
Loop {incremented} {decremented} {
iff {{decremented} in Zero}
{Loop {++{incremented}} {--{decremented}}}
{incremented}
}
)
}
) in (
(
({x} + {y})
{Loop {x} {y}}
)
(
({x} - {y})
{(? + {y}) {x}}
)
)
}
// multiplication and division
{
Use (
{Int}
(x {Int})
(y {Int})
{
Use (
(step {Int})
(accumulator {Int})
(decremented {Int})
) in (
Step {step} Loop {accumulator} {decremented} {
iff {{decremented} in Zero}
{Loop {{accumulator} + {step}} {--{decremented}}}
{accumulator}
}
)
}
) in (
(
({x} * {y})
{Step {x} Loop {Zero} {y}}
)
(
({x} / {y})
{(? * {y}) {x}}
)
)
}
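For a concrete reading of the definitions above, here is a minimal C sketch of my own (not part of the specification), treating Int as a Peano-style unary numeral (Zero, or One{n}) and reading the addition Loop as "increment x and decrement y until y reaches Zero". The names one, dec, add and toC are all hypothetical.
#include <stdio.h>
#include <stdlib.h>
/* Zero is NULL; One{n} is a cell pointing at its predecessor n. */
typedef struct Int { struct Int *pred; } Int;
static Int *one(Int *n)  /* ++ */
{
    Int *r = malloc(sizeof *r);
    r->pred = n;
    return r;
}
static Int *dec(Int *n)  /* --, precondition: n is not Zero */
{
    return n->pred;
}
/* Addition as the Loop above: step x up while stepping y down. */
static Int *add(Int *x, Int *y)
{
    while (y) { x = one(x); y = dec(y); }
    return x;
}
static int toC(Int *n)   /* count the Ones, for printing */
{
    int k = 0;
    while (n) { k++; n = dec(n); }
    return k;
}
int main(void)
{
    Int *two = one(one(NULL));
    Int *three = one(two);
    printf("%d\n", toC(add(two, three))); /* prints 5 */
    return 0;
}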
This is the essence of extracting, for example, a function's parameters from a function's result: we substitute the whole expression just to extract pieces from it. For this process to be functional, we need to define all the calculations in terms of our metatheoretical framework. Only then, since we use no external, hard-coded definitions, but only definitions in the form of queryable s-expressions, do we cover the complete range of computable expressions. Once an expression is calculated, we are able to extract any fragment from the process of its possible construction, thus reducing the problem to pure pattern matching. However, a low-level implementation will be a bit of a challenge to grapple with where recursive functions are concerned, but the implementation is possible as long as we restrict the number of possible recursions to a finite number.
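To illustrate that last point, here is a tiny C sketch (mine, purely illustrative; the reduce function and its halving rule are made up) of bounding recursion with a fuel counter: each recursive call consumes one unit, so evaluation always terminates, even when unrestricted rewriting would diverge.
#include <stdio.h>
/* Hypothetical reduction: repeatedly halve the term toward its normal form. */
static int reduce(int term, int fuel)
{
    if (fuel <= 0) return -1;          /* recursion budget exhausted: give up */
    if (term <= 1) return term;        /* normal form reached */
    return reduce(term / 2, fuel - 1); /* one rewrite step costs one unit */
}
int main(void)
{
    printf("%d\n", reduce(1024, 100)); /* 1: enough fuel to finish */
    printf("%d\n", reduce(1024, 3));   /* -1: capped after three steps */
    return 0;
}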
#define mKeyWords \
mKeyWord(Aliases, "aliases") \
mKeyWord(Amount, "amount") \
mKeyWord(Badges, "badges") \
mKeyWord(CalendarModel, "calendarmodel") \
mKeyWord(Claims, "claims") \
mKeyWord(DataType, "datatype") \
mKeyWord(DataValue, "datavalue") \
mKeyWord(Descriptions, "descriptions") \
mKeyWord(Hash, "hash") \
mKeyWord(Id, "id") \
mKeyWord(Labels, "labels") \
mKeyWord(Language, "language") \
mKeyWord(LastRevId, "lastrevid") \
mKeyWord(LowerBound, "lowerBound") \
mKeyWord(MainSnak, "mainsnak") \
mKeyWord(Modified, "modified") \
mKeyWord(Property, "property") \
mKeyWord(Qualifiers, "qualifiers") \
mKeyWord(QualifiersOrder,"qualifiers-order") \
mKeyWord(Rank, "rank") \
mKeyWord(References, "references") \
mKeyWord(Site, "site") \
mKeyWord(SiteLinks, "sitelinks") \
mKeyWord(SnakType, "snaktype") \
mKeyWord(SnaksOrder, "snaks-order") \
mKeyWord(Time, "time") \
mKeyWord(Title, "title") \
mKeyWord(Type, "type") \
mKeyWord(Unit, "unit") \
mKeyWord(UpperBound, "upperBound") \
mKeyWord(Value, "value") \
static const char* gKeyWords[] =
{
#define mKeyWord(Symbol,String) String,
mKeyWords
#undef mKeyWord
};
enum tKeyWordIndex
{
#define mKeyWord(Symbol,String) k##Symbol##KeyWordIndex,
mKeyWords
#undef mKeyWord
};
enum tKeyWord
{
#define mKeyWord(Symbol,String) k##Symbol##KeyWord = (-1 - k##Symbol##KeyWordIndex),
mKeyWords
#undef mKeyWord
};
#define kKeyWordLimit (sizeof(gKeyWords) / sizeof(char*))
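To make the payoff concrete, here is a small usage sketch; the FindKeyWord helper is hypothetical, not part of the original source. It maps a JSON key string to the negative code defined by the tKeyWord enum, using the generated gKeyWords table, and returns 0 when the string is not a keyword.
#include <string.h>
static int FindKeyWord(const char* s)
{
    for (size_t i = 0; i < kKeyWordLimit; i++)
        if (strcmp(s, gKeyWords[i]) == 0)
            return -1 - (int)i;  /* same encoding as the tKeyWord enum */
    return 0;                    /* not a keyword */
}
/* e.g. FindKeyWord("mainsnak") == kMainSnakKeyWord */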
The only question is how expressive the macro system is... I'm sure there is a ton of similar material out there, but unless a possible use is recognized, we can't really do much with it, simply because we don't see the real potential. We just don't know how to use it.
I didn't work with such advanced database tools. What, for example, can we do with those?
Well you see, here's the thing. X-macros are just a way of implementing a simple relational database at compile time in a C program, for code generation. Relational databases are a well-known concept with lots of applications, and if you've used an enterprise-grade RDBMS like Oracle or PostgreSQL then you might have some idea of their real power. (I should point out that SQL/Server, MySql and MS-Access are just toy implementations of SQL, not worth wasting your time with, although apparently a lot of people do. Haha, fools.)
Please don't...
So I have many gigabytes of data comprising different kinds of knowledge bases aligned and stored in my database server, and I use SQL to extract the information and recombine it in different ways to generate the code for my artificial intelligence software. I'm still ramping up my builds as I add more knowledge sources, but we're already talking about millions of lines of code in a very high-level language of my own design, which ultimately compiles down to C and machine code. I like to think that I'm doing AI on an industrial scale. :D
Sounds like a big project.
If we put data entanglements right, then all the calculations could be done by simply querying data structures. I believe the same thing happens when you're constructing a processor with its assembler, and I'm trying to extract a higher-level version of it. It has to be able to do anything that a human mind can conceive (in the sense of reconstructing a thought process).
Have you devoted much thought to the use of S-expressions for building ontologies or do you have something else in mind for them?
Actually, I'm building a language in which any other language (programming, DSL, scientific, natural) could be syntactically and semantically described. It is a knowledge-base theory language. For reference, take a look to the left of this post and click on the blue-green globe; that is a link to my year-and-a-half-old specification of such a language.
S-expressions are underrated these days. More complex formats took the place that could be filled by simpler s-expressions. For example, I'd also like to see an s-expression version of HTML with some macro support.
I agree. About HTML in s-expressions, there's SEML (http://loganbraga.github.io/seml/) (appending classes and ids to the tag names feels a bit like cheating to me, though).
Hi,
Are you still working on this, ivan?
Looking for the simplest way to organize discrete data, I thought yesterday that cons cells are really the lowest level one could reach: conceptually, a memory cell holding two pointers, so it's like a digraph where every vertex has exactly two outgoing edges. What I'd like to make is an algorithm that finds the best way to compress a network of cons cells, so that it is expressed by another (smaller) network of cons cells. Is it related to your work?
(
(1 . 2 . 3) &
(2 . 2 . 4) &
(1 . 4 . 5) &
(3 . 4 . 7) &
...
) => (a . b . (a + b))
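One reading of "compressing a network of cons cells" is reusing duplicate cells, and the classic technique for that is hash-consing. A minimal C sketch with made-up names (Cell, Entry, cons): every (car, cdr) pair is interned in a table, so structurally identical subnetworks are stored exactly once.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
/* A cons cell: conceptually a memory cell holding two pointers. */
typedef struct Cell { struct Cell *car, *cdr; } Cell;
/* Intern table entry: chains cells whose (car, cdr) hash collides. */
typedef struct Entry { Cell *cell; struct Entry *next; } Entry;
#define BUCKETS 1024
static Entry *table[BUCKETS];
static size_t hash(Cell *car, Cell *cdr)
{
    return (((uintptr_t)car * 31u) ^ (uintptr_t)cdr) % BUCKETS;
}
/* Return the unique cell with this car and cdr, allocating it only if
   no structurally identical cell exists yet (hash-consing). */
static Cell *cons(Cell *car, Cell *cdr)
{
    size_t h = hash(car, cdr);
    for (Entry *e = table[h]; e; e = e->next)
        if (e->cell->car == car && e->cell->cdr == cdr)
            return e->cell;              /* reuse the duplicate */
    Cell *c = malloc(sizeof *c);
    c->car = car;
    c->cdr = cdr;
    Entry *e = malloc(sizeof *e);
    e->cell = c;
    e->next = table[h];
    table[h] = e;
    return c;
}
int main(void)
{
    /* Two structurally identical networks collapse into one cell. */
    Cell *a = cons(cons(NULL, NULL), NULL);
    Cell *b = cons(cons(NULL, NULL), NULL);
    printf("%s\n", a == b ? "shared" : "distinct"); /* prints "shared" */
    return 0;
}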
Wait a minute here lol :) ... Is this project of yours, ivan, all about:
x does b
g does z
....therefore h is w !
"discovery forming"? And what else is it all about? If you can list a 2nd thing, that'd be big, besides discovery forming being 1st.
Lock, I'm building a functional programming language. Functional programming languages have the nice property of representing term rewriting systems (https://en.wikipedia.org/wiki/Rewriting). Term rewriting systems, in turn, can describe conclusions in the form of deduction, abduction and induction (as thinking processes). The intention of this programming language is mainly to support automated thinking processes.
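As a toy illustration of computing by rewriting (my own C sketch, not ivan's language): a generic engine replaces the leftmost occurrence of the first matching rule's left-hand side until no rule applies, and with the single rule "11" -> "" a unary string reduces to its normal form, which decides parity.
#include <stdio.h>
#include <string.h>
/* A rewrite rule: lhs is replaced by rhs (here rhs is never longer
   than lhs, so rewriting in place is safe). */
typedef struct { const char *lhs, *rhs; } Rule;
/* Apply the first applicable rule once, at the leftmost occurrence;
   return 1 if the term changed. */
static int step(char *term, const Rule *rules, int nrules)
{
    for (int i = 0; i < nrules; i++) {
        char *at = strstr(term, rules[i].lhs);
        if (at) {
            size_t llen = strlen(rules[i].lhs);
            size_t rlen = strlen(rules[i].rhs);
            memmove(at + rlen, at + llen, strlen(at + llen) + 1);
            memcpy(at, rules[i].rhs, rlen);
            return 1;
        }
    }
    return 0;
}
int main(void)
{
    Rule rules[] = { { "11", "" } };    /* erase pairs of 1s */
    char term[64] = "1111111";          /* unary 7 */
    while (step(term, rules, 1))
        ;                               /* rewrite to normal form */
    printf("\"%s\"\n", term);           /* "1": seven is odd */
    return 0;
}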
A field I'm particularly interested in supporting is science. I hope to connect all the sciences by a unified theory-description (programming) language. I would need to build a crowdsourced site for this because I need scientists to enter their theories in my language. I hope to provide new automated conclusions once enough science fields are connected by the language.
I know it is a long shot, but I'll try. I have an idea of helping pupils and students solve their everyday school tasks. When they grow up, some of them would become the scientists I need to populate all the wonderful data on my (or even a third-party, distributed) site.
To me it seems as though programming technology is going backwards... (maybe people are just catching up)... It seems like the REPL environment is the way to go!
Hi, it's been a long time... Hope everything is ok. :-\
Yes, I'm on it these days. I still have to finish and document the specification.
I don't understand what compression there is to make. Do you mean something like reusing duplicate elements?
However, I'm counting on getting to a level where I would be able to make inferences by induction, like:
(
(1 . 2 . 3) &
(2 . 2 . 4) &
(1 . 4 . 5) &
(3 . 4 . 7) &
...
) => (a . b . (a + b))
address = car, cdr

objectA = null, null
propA1 = objectA, has
whatA1 = propA1, wheels
howmanyA1 = whatA1, 4
propA2 = objectA, has
whatA2 = propA2, engine

objectB = null, null
propB1 = objectB, has
whatB1 = propB1, wheels
howmanyB1 = whatB1, 4
propB2 = objectB, has
whatB2 = propB2, engine

objectA = null, null
whatisA = objectA, car
objectB = null, null
whatisB = objectB, car

objectA = null, null
propA1 = objectA, has
whatA1 = propA1, wheels
howmanyA1 = whatA1, 4
propA2 = objectA, has
whatA2 = propA2, engine

objectC = null, null
propC1 = objectC, has
whatC1 = propC1, wheels
howmanyC1 = whatC1, 2
propC2 = objectC, has
whatC2 = propC2, engine

objectA = null, null
whatisA = objectA, vehicle
nwheelsA = objectA, 4
objectC = null, null
whatisC = objectC, vehicle
nwheelsC = objectC, 2
I'd like to avoid brute-forcing it.
I didn't understand, LOCKSUIT.
Can someone understand this (https://arxiv.org/pdf/1304.7392.pdf)?
I think I just solved the NP problem.......... I instantly realized the answer to an NP question on the wiki page in polynomial time. I'm NOT telling yous how just yet haha! That was fast!
Above it, it said:
There are many important NP problems that people don't know how to solve in a way that is faster than testing every possible answer. Here are some examples:
EDIT: This is the easiest thing everrrrrrrrrrrrrrrr
I have read a similar paper (for DNA genome encoding and searching); it's the same technique. They create trees first (trie trees), then reduce them with compression algorithms by combining pathways and reducing the pathways, then apply referencing to the nodes, essentially looking up the required node rather than parsing the tree. The tree stores the locations, therefore the tree becomes compressed: fewer nodes. Yet each combined end leaf node may have pointers to lower end nodes which contain the previous end node (nodes are considered to be end nodes although having sub-nodes), with combined prefixes leading to the final end nodes. It seems very complex in construction terms, as first an optimised tree needs to be created, then optimised, then locations applied, combining locations which point to sub-locations to complete a suffix or "piece of data".
I have recently been playing with trees/tries/binary trees, and am currently meditating on the same compression methods. It's the look-up process I'm trying to get straight. The tree reduction process depends on the data being stored and how you're storing it in the tree, so that's tailored to your own project. Videos on suffix tries and suffix arrays are helping me along. I also keep getting sidetracked with understanding the lambda functions in Lisp and how they might be useful for me; I like the idea of being able to rewrite functions with the lambda function.
That paper would take me a few days to read and understand...
Are you aware of this paper (http://www.vldb.org/pvldb/1/1453965.pdf)? Basically it's a space/time tradeoff technique you might like to use in your project, ivan. There's also an implementation (http://crubier.github.io/Hexastore/).
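For readers who haven't met the hexastore idea: the same set of triples is kept sorted in several orders, so that any query pattern has an index in which its answers sit in one contiguous run. A rough C sketch of my own (only two of the six orders, with a linear scan standing in for the binary search a real hexastore would use):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
typedef struct { const char *s, *p, *o; } Triple;
static int cmp_spo(const void *a, const void *b)  /* subject-predicate-object */
{
    const Triple *x = a, *y = b; int c;
    if ((c = strcmp(x->s, y->s))) return c;
    if ((c = strcmp(x->p, y->p))) return c;
    return strcmp(x->o, y->o);
}
static int cmp_pos(const void *a, const void *b)  /* predicate-object-subject */
{
    const Triple *x = a, *y = b; int c;
    if ((c = strcmp(x->p, y->p))) return c;
    if ((c = strcmp(x->o, y->o))) return c;
    return strcmp(x->s, y->s);
}
int main(void)
{
    Triple spo[] = {
        { "carol",  "father", "robert" },
        { "carol",  "mother", "alice"  },
        { "alice",  "father", "bill"   },
        { "robert", "father", "tom"    },
    };
    size_t n = sizeof spo / sizeof *spo;
    Triple pos[4];
    memcpy(pos, spo, sizeof spo);
    qsort(spo, n, sizeof *spo, cmp_spo);  /* answers (s ? ?) and (s p ?) */
    qsort(pos, n, sizeof *pos, cmp_pos);  /* answers (? p ?) and (? p o) */
    /* Query (? father ?): in the POS index all "father" triples are
       adjacent; a real hexastore would binary-search to the run. */
    for (size_t i = 0; i < n; i++)
        if (strcmp(pos[i].p, "father") == 0)
            printf("(%s father %s)\n", pos[i].s, pos[i].o);
    return 0;
}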
Also, is there a relation between your project and minikanren (http://minikanren.org/)?
I like what you're doing here, if there's anything I can do to help, don't hesitate.
What logic often lacks is temporality. Things seem to exist in a land where there's no notion of time. A logic programming paradigm that would take transformation into account would be really nice.