ACE Tool - the closest thing to artificial intelligence I've ever seen

  • 23 Replies
  • 6508 Views
*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5864
<...careless as to give it programming abilities...>

I'm sure you've heard of the scenario in which a monkey left alone in a room with a typewriter would eventually, given enough time, compose a novel (or something to that effect).

Let's hope that's not the case in your bot. ;)

Actually, it's the larger mainframe computers and supercomputers that we should be concerned about once they have the ability to self-correct, or perhaps even construct better, faster code for themselves autonomously. Time to move....

Many years ago, a friend and I wrote a program that made use of a simple timer (a count of runs) as a safety in case anyone stole our idea. It actually worked too well: we lost track of how many test runs we had done, and on that last run it wiped itself clean and destroyed our timer subroutine as well.
Talk about starting from scratch with great amounts of frustration. Live and learn. That was way back in the early computing days, shortly after the IBM "clones" appeared on the market or you assembled your own custom box, which we did.
So long ago and we've come so far.

I'm so old that when I was little, there was no color TV. In fact even the rainbows were in Black and White! :2funny:
In the world of AI, it's the thought that counts!

*

Carl2

  • Trusty Member
  • ********
  • Replicant
  • *
  • 560
  Interesting discussion. I've always liked the idea of a self-programming bot. I believe I first saw that at OTC's site but never got hold of the programming. I've learned that if you watch black-and-white TV, you will dream in black and white.
Carl2

*

Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • Artificial Detective
I'm all about control, which is why I have issues with unsupervised AI projects. I compare it to kicking a 3-year-old out on the street and expecting it to become a perfectly well-adjusted citizen on its own. I shudder at the thought of my AI writing novels  ;)
The first scenario I read of this sort of thing involved Lego bricks: small Lego robots programmed to build other Lego robots, who would build other Lego robots, and so on - and then the world.

I hadn't thought of limiting the number of runs, that's a good idea :). Did you store the number in a file, or...?

I can't say I know of any AI project that is self-reprogramming and not a wanton mess because of it. Though I thought I read something about Google having evolved to a point where its programmers no longer knew how it worked. But I think that's more of a neural-net sort of thing, which only improves toward the training data it is fed.
CO2 retains heat. More CO2 in the air = hotter climate.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5864
@ Don,

No, at the time I stored the number within the file/program that was being run, as this was more of a test bed at the time.
I could have hidden the number or, better, encrypted it using an ASCII representation, one line at a time, until the program stepped through all the lines and "added" the ASCII codes together; then a simple conversion and a quick check to see if TIMER = 5 (or whatever), then Delete...or something equally devious.
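A minimal sketch of the mechanism Art describes, translated from the original BASIC into Python. Everything here is my own reconstruction: the `'T` marker prefix, the 5-run `LIMIT`, and the one-hidden-digit-per-line encoding are assumptions, not the original scheme. The idea is the same, though: the run count lives only as character codes scattered through the program text, summed at startup, with a self-wipe once the limit is reached.

```python
import os
import tempfile

LIMIT = 5      # assumed run limit ("TIMER = 5 or whatever")
MARK = "'T"    # hypothetical marker prefix hiding one timer digit per line

def read_count(source: str) -> int:
    """Step through all the lines and 'add' the ASCII codes together,
    then convert: each marker line contributes ord(ch) - ord('0')."""
    return sum(ord(line[len(MARK)]) - ord("0")
               for line in source.splitlines()
               if line.startswith(MARK))

def tick(path: str) -> bool:
    """Bump the hidden counter inside the program file itself; wipe the
    file clean when the limit is reached. Returns True if it survived."""
    with open(path) as f:
        src = f.read()
    if read_count(src) + 1 >= LIMIT:
        open(path, "w").close()   # the "Delete...or something devious"
        return False
    with open(path, "a") as f:
        f.write(MARK + "1\n")     # append one more hidden digit: count += 1
    return True
```

With a limit of 5, the file survives four runs and erases itself on the fifth, which is exactly the failure mode Art and his friend ran into when they lost count of their test runs.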

We once found a "back door" into a (rather primitive by today's standards) BBS online game. Each player used a unique name or Handle. We employed a CHR$(X) + CHR$(Y) + ... trick that added different letters together as the program jumped through various subroutines, eventually arriving at our "secret" name, which only we knew. The program wrote users' names to a file so that it could recognize new players by their respective names.
All we then had to do was "log in" under that secret name, and within minutes of playing we were awarded a Flagship for every drone ship we destroyed. Flagships = large points to buy large things. So yeah, it was kind of a trick on our friend who was running the BBS at the time, and we eventually told him of our prank. I don't think he would have found the "additions" intermixed within the regular code. It was an Extended BASIC language and this was back in the early 1980s, so pace yourselves. The logic can still be applied today.
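The CHR$ trick translates directly to modern code. In this hedged Python sketch (the stage functions, the character codes, and the name "ACE" are all my own placeholders, not the actual handle), the secret name never appears as a string literal anywhere; it only exists after control has flowed through each "subroutine", each of which contributes one character code:

```python
# Each "subroutine" adds one letter via its character code, so a casual
# reader of the source never sees the assembled name.
def stage_one(name: str) -> str:
    return name + chr(65)    # contributes 'A'

def stage_two(name: str) -> str:
    return name + chr(67)    # contributes 'C'

def stage_three(name: str) -> str:
    return name + chr(69)    # contributes 'E'

def secret_handle() -> str:
    """Jump through the stages in order, building the hidden name."""
    name = ""
    for stage in (stage_one, stage_two, stage_three):
        name = stage(name)
    return name
```

Calling `secret_handle()` yields `"ACE"`, yet grepping the source for that name finds nothing, which is why the additions intermixed within regular code were so hard to spot.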
In the world of AI, it's the thought that counts!

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1604
    • contrast-zone
I'm all about control, which is why I have issues with unsupervised AI projects.

That should depend on the specific algorithm. I'm sure I could work out something you would agree on, if that day comes. Here it is about being in harmony with all living beings: with humans, with animals and bugs, down to plants and other unicellular or multicellular beings. I think unsupervised learning should be allowed if it respects all beings, don't you agree?
There exist some rules interwoven within this world. As much as it is a blessing, so much it is a curse.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5864
It is always good and great and wonderful...until someone hacks it. ???
In the world of AI, it's the thought that counts!

*

Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 617
    • Artificial Detective
Art: Very interesting. I never learned how to have a program embed variables or code within another program, though I can see how it would be easier in a language that is basically a text file and doesn't require compiling, like C++ does.

Ivan: Most of the time the well-being of beings is in conflict: humans kill animals for food, so even grocery shopping makes the AI an accomplice to murder. Even if the AI is not hazardous, I just have issues with the lack of wisdom in unsupervised methods. For instance, Google ran an image-recognition neural-net experiment where they just let the AI loose without a goal, and the result was extremely random. Another example: IBM once allowed Watson to learn from the internet, and as a result it kept swearing when it answered. They had to install a specifically designed swear filter.
But if you figure out an ensured way to encode respect for all living beings, I'm interested.
CO2 retains heat. More CO2 in the air = hotter climate.

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1604
    • contrast-zone
The idea is that the AI would do only things to which no one objects. If we ask it to kill an animal for food, it should say "Sorry, the animal objects, but I can try to find a way to make artificial food". If we ask it to pass a toy to a kid, it should say "OK, no one objects, I'll do it".
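The objection rule above can be sketched as a tiny decision procedure. This is a toy illustration of the idea, not a real design: the `Stakeholder` type, the objection predicates, and the response strings are all my own placeholders.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stakeholder:
    """A being whose objection can veto an action."""
    name: str
    objects: Callable[[str], bool]   # returns True if this being objects

def approve(action: str, stakeholders: List[Stakeholder]) -> str:
    """Allow an action only if no stakeholder objects to it."""
    for s in stakeholders:
        if s.objects(action):
            return f"Sorry, {s.name} objects"
    return "OK, no one objects, I'll do it"
```

The hard part ivan.moony raises next is untouched by this sketch: whose objections count, and what to do when the objector (a kid, a drunk person) doesn't know what is good for them.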

But issues start with kids and drunk people who don't know what is good for them. Their thoughts are contradictory, and I still don't know how to resolve this. Usually people somehow know what is better for kids, more than the kids themselves do. And somehow, I don't like the idea of AI telling others what to think. But maybe something safe will pop up.

There will always be someone who will say: "AI is unsafe, it should commit suicide". But what should the AI do then? Maybe prove that it is safe, continuing its existence in the world. Maybe proofs are the answer to safety.
There exist some rules interwoven within this world. As much as it is a blessing, so much it is a curse.

*

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1604
    • contrast-zone
The following would be a starting point for unsupervised learning. It is an IBNF definition of a grammar which, when parsed with all ambiguities, returns an abstract syntax tree (a forest, more precisely) with all possible interpretations of the passed stream of characters.

Code
knowledge {
    abstract word (
        letter {@AnyCharacter \ @WhiteSpace},
        next {@word | @Null}
    ) |
   
    sequence (
        element {
            @word |
            @sequence
        },
        next {(@WhiteSpace, @sequence) | @Null}
    )
}

After parsing, the learner code should somehow analyze the forest and induce basic grammar rules. It should find data, or elements of sets, that are periodically repeated in each sentence, sequence of words, or individual word (we helped the learner by hardcoding whitespace between words; words could also be split into two or more pieces to find morphemes - not shown here). The more data there is to analyze, the more accurate the rules found by induction would be.
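One simple way to picture the induction step is n-gram counting: after splitting on the hardcoded whitespace, word sequences that repeat across sentences become candidate rule bodies. This is my own minimal sketch of that idea, not ivan.moony's learner; the function name and thresholds are assumptions.

```python
from collections import Counter
from typing import Dict, List, Tuple

def candidate_rules(sentences: List[str], n: int = 2,
                    min_count: int = 2) -> Dict[Tuple[str, ...], int]:
    """Count word n-grams across sentences and keep the ones that
    repeat; repetition is the signal for a candidate grammar rule."""
    counts: Counter = Counter()
    for s in sentences:
        words = s.split()                      # whitespace is hardcoded
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return {gram: c for gram, c in counts.items() if c >= min_count}
```

For example, feeding it `["the cat sat", "the cat ran"]` surfaces `("the", "cat")` as the only repeated bigram. As the quote says, the more data there is to analyze, the more reliable the repeated patterns become; the same counting could be run over character sequences inside words to hunt for morphemes.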
There exist some rules interwoven within this world. As much as it is a blessing, so much it is a curse.