Perkun

  • 23 Replies
  • 7656 Views

pawel.biernacki

Re: Perkun
« Reply #15 on: February 25, 2018, 06:58:11 am »
Do you think your algorithm may be useful to optimize writing short stories?

Yes. It is possible. The game Perkun Wars would be an example (take a look at "chat" - all the NPCs you meet tell you a simple story).

"use Perkun PL to make optimal decisions"
i.e.
Perkun helps you figure out how to code something you don't know how to code.

Yeah, but... if I'm coding something I want to code, only I (me, an AGI) know what must be coded, what comes next, and how the whole program works. Perkun *cannot* give me optimal suggestions like "oh, time to use a for loop!" or "oh, time to put x = (7>3)*3>1 on line 45!" or "oh, time to delete lines 6-98 because it doesn't work after all!", because Perkun isn't thinking. It isn't an AGI, and it doesn't know what must be put in the code to create said application. Only I can reason about what to code to make my program.

However if you mean, it makes the code more efficient but remains the same "code" / "application"....then that's another story. That would help our civilization. Who knows how small one's program could be if coded professionally.

I would address the problem as follows:

I introduce input variables representing the program and output variables that allow modifying it.

variables
{
    input variable program_command_1:{instruction_if_then, instruction_print, instruction_loop};
    input variable program_command_2:{instruction_if_then, instruction_print, instruction_loop};
    input variable program_command_3:{instruction_if_then, instruction_print, instruction_loop};
    input variable it_works:{false, true, none};
    ...
    output variable action:{change_instruction_to_if_then, change_instruction_to_print, change_instruction_to_loop, execute_the_program};
    output variable which_instruction_to_change:{one, two, three};
    hidden variable does_it_work:{false, true};
}


We would also need input variables denoting the parameters for the instructions. Then we need a model. This is a bit problematic because Perkun does not learn models (they are taken for granted). It is quite obvious how the "change_instruction_" actions should work, but "execute_the_program" is a mystery. The payoff would depend only on the variable "it_works".

In this simple example we would end up with a Perkun specification that would try to build this three-command program. It is an interesting idea; I had not thought about using Perkun for programming.


pawel.biernacki
Re: Perkun
« Reply #16 on: February 25, 2018, 10:48:15 am »
LOCKSUIT - I have done a small experiment.

You may download it from http://www.pawelbiernacki.net/programmer.zip. It contains three files:
programmer_initial.perkun
programmer.prolog
programmer_final.perkun

Now build Perkun and execute:

perkun programmer_final.perkun


It will enter the interactive mode:

loop with depth 3
I expect the values of the variables: program_command_1 program_command_2 it_works
perkun>


Now type "instruction_if_then instruction_if_then none" and press Enter. Perkun will choose action:

action=execute_the_program which_instruction_to_change=one

Let us assume the program is wrong. We must type "instruction_if_then instruction_if_then false".

Now Perkun wishes to modify the first command to "print":


action=change_instruction_to_print which_instruction_to_change=one


So we must type: "instruction_print instruction_if_then none". Now Perkun chooses:

action=execute_the_program which_instruction_to_change=one

Let us assume the program was wrong again. Type "instruction_print instruction_if_then false". Now it will say:

action=change_instruction_to_print which_instruction_to_change=two


So it wants to change the second instruction to "print". Now the program is "print print". Let us confirm it: "instruction_print instruction_print none". It responds with:

action=execute_the_program which_instruction_to_change=one

Now let us assume this program is correct. Let us type "instruction_print instruction_print true". Now Perkun will want it executed again!

action=execute_the_program which_instruction_to_change=one

Now maybe it is time to take a look at programmer_initial.perkun. In the "variables" section it contains the following variables:

variables
{
    input variable program_command_1:{instruction_if_then, instruction_print};
    input variable program_command_2:{instruction_if_then, instruction_print};
    input variable it_works:{false, true, none};
    output variable action:{change_instruction_to_if_then, change_instruction_to_print, execute_the_program};
    output variable which_instruction_to_change:{one, two};
   
    hidden variable does_if_then_if_then_work:{false, true};
    hidden variable does_if_then_print_work:{false, true};

    hidden variable does_print_if_then_work:{false, true};
    hidden variable does_print_print_work:{false, true};
}


And the model (the one you will see in programmer_final.perkun) has been created automatically, with a little help from Prolog. In short, I have made a simple Perkun session in which it searches through all four combinations:

"if_then if_then"
"if_then print"
"print if_then"
"print print"

and tries to figure out which one works. This is very primitive programming, I admit. But even for this simple case the model is huge (just take a look at programmer_final.perkun).
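The exhaustive search over these four candidate programs can be written out as a simple loop. The sketch below is purely illustrative (it is not the real Prolog/Perkun pipeline); the works() oracle is a hypothetical stand-in for actually executing the program and observing "it_works":

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative sketch of the exhaustive search: enumerate every two-command
// program and return the first one a (hypothetical) oracle accepts. The
// works() oracle stands in for actually executing the program.
std::string find_working_program(bool (*works)(const std::string&)) {
    const std::vector<std::string> commands = {
        "instruction_if_then", "instruction_print"};
    for (const auto& c1 : commands)
        for (const auto& c2 : commands) {
            const std::string program = c1 + " " + c2;
            if (works(program)) return program;
        }
    return "";  // no two-command program works
}
```

In the real setup this enumeration is driven by Perkun's optimizer, with the observed "it_works" values playing the role of the oracle.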

8pla.net
Re: Perkun
« Reply #17 on: February 25, 2018, 05:10:41 pm »
Quote from: http://www.pawelbiernacki.net
% This is a code in Prolog
% created automatically by perkun 0.1.7
% It can be used to generate Perkun code.


if Perkun writes Prolog
And Prolog is A.I.
Then Perkun writes A.I.

My Very Enormous Monster Just Stopped Using Nine

pawel.biernacki
Re: Perkun
« Reply #18 on: February 25, 2018, 06:18:54 pm »
Quote from: http://www.pawelbiernacki.net
% This is a code in Prolog
% created automatically by perkun 0.1.7
% It can be used to generate Perkun code.


if Perkun writes Prolog
And Prolog is A.I.
Then Perkun writes A.I.


Yes, indeed Perkun can generate code in Prolog. You can do it with the Perkun command:

cout << prolog generator << eol;


And this generated code creates a Perkun specification. But it needs to be enhanced a little: the rules in programmer.prolog following "% PLEASE INSERT YOUR CODE HERE" are written manually. It is still much easier than writing the model manually.

The result (programmer_final.perkun) has been created by this enhanced Prolog code (programmer.prolog).

8pla.net
Re: Perkun
« Reply #19 on: February 25, 2018, 09:33:35 pm »
The more I look...

Code
#!/usr/local/bin/perkun

The more I like it.

LOCKSUIT
Re: Perkun
« Reply #20 on: February 26, 2018, 08:28:24 am »
Is it just me? I'm still lost... I can't seem to understand why I would use Perkun over C++ once Perkun is finished being developed.

Hmmm. well... answer these simple questions:
Does Perkun make your coding session faster?
Less error-prone?
Smaller code yet same program?

pawel.biernacki
Re: Perkun
« Reply #21 on: February 26, 2018, 08:57:56 am »
Is it just me? I'm still lost... I can't seem to understand why I would use Perkun over C++ once Perkun is finished being developed.

You have two options: using Perkun directly (from command line) or using Perkun as a library - which means you write in C++ but use my algorithm. The point is the algorithm - C++ as such does not contain it (unless you use Perkun).

BTW. The Perkun package contains a tool called "zubr" (meaning a European bison in Polish) which generates code in Java. It is based on the same algorithm.

Hmmm. well... answer these simple questions:
Does Perkun make your coding session faster?
Less error-prone?
Smaller code yet same program?

Perkun makes my coding session possible. It is not about the performance, it is about possibility.

AgentSmith
Re: Perkun
« Reply #22 on: February 27, 2018, 01:55:59 pm »
So, similar to minimax, the leaves of your search tree have specific payoffs that are propagated back to higher layers, right?
How is your search tree created?
Why did you choose a tree for your model? If you allowed cycles with respect to your transitions, the model representation could be much more compact.

Right. I must say it is not only the payoff that is propagated but also the belief.

The search tree is created as follows (see perkun-0.1.7/src/optimizer.cc):

- get_optimal_action(belief b, int n)
this searches over all actions and returns the argmax (just as in minimax) of the function get_payoff_expected_value_for_consequences

- get_payoff_expected_value_for_consequences(belief b1, int n, action a)
this iterates over all visible states (observations), calculates the probability of each visible state given the current belief and the selected action, and if it is > 0 it recurses one level deeper. It is important that it calculates the next belief using populate_belief_for_consequence. So Perkun thinks: "I believe in b0; let us assume I will do action a and observe the outcome i as a result. What will I believe then?"
This function returns the expected value of the payoff.

- get_consequence_probability(belief b1, action a, visible_state vs)
This function is used to calculate the prior probability of the visible state given the current belief (which contains the current visible state) and action.

- populate_belief_for_consequence(belief b1, action a, visible_state vs, belief & target)
This function calculates the interpretation (the new belief): how Perkun interprets the observation vs once it believes b1 and performs action a.

These four functions form the core of the algorithm. The most important thing is propagating the belief throughout the game tree.

Now, the question why I chose a tree: it was just natural for me. I do not quite understand how to apply cycles. I wanted an algorithm that would work with uncertain information in a stochastic environment.

Thanks for the clarification. I have to admit that I don't understand why propagating the beliefs is necessary. Propagating the final payoffs should be sufficient for optimal decision-making. Does each node in your tree correspond to a distribution over all states (a belief) or just the probability of a single state?

This is not AGI.

That's true, but the approach is very interesting.
An AGI should be able to deal with complex environments containing a huge number of possible states. In my experience, algorithms based on tree search are very time- and memory-consuming in this case unless specific heuristics are used for pruning, etc.
« Last Edit: February 27, 2018, 02:24:00 pm by AgentSmith »

pawel.biernacki
Re: Perkun
« Reply #23 on: February 27, 2018, 06:33:05 pm »
Thanks for the clarification. I have to admit that I don't understand why propagating the beliefs is necessary. Propagating the final payoffs should be sufficient for optimal decision-making. Does each node in your tree correspond to a distribution over all states (a belief) or just the probability of a single state?

Each node in my tree corresponds to a distribution over all states. More precisely, the visible states (corresponding to the vector of input variable values) are used as the domain of the probability distribution; each visible state is at the same time a collection of possible states (each state corresponds to a vector of hidden variable values).

It is easier to express it in C++. In the file perkun-0.1.7/inc/perkun.h in the class visible_state you will see:

   std::list<state*> list_of_states; // owned
   std::map<variable*,value*> map_input_variable_to_value;


The list means that we have a collection of states here, and the map maps the input variables to their values. And each belief has a visible state as the domain of its probability distribution. In the class state you will see:

   std::map<variable*, value*> map_hidden_variable_to_value;

Which is a map of hidden variables to values. Finally in the class belief you will see:

   visible_state & my_visible_state;
   std::map<state *, float> map_state_to_probability;


I will try to explain why it is necessary to propagate the beliefs. Let us take the example I made for LOCKSUIT (the programmer). Let us pretend we input "instruction_print instruction_print none". Perkun does not initially know whether the program "print print" is correct. It therefore tries to execute the program. After executing it, Perkun expects the variable "it_works" to be either "true" or "false" (but not "none"). Depending on the result, Perkun builds its interpretation. If it gets "true", then the interpretation will be that this is the correct program, and the optimal action will be to execute it again and again! But if it gets "false", it will know for sure that "print print" is not the correct program. It will therefore make some change to it and try out a different program.

Just before the first "execute_the_program" Perkun thinks: I will execute it once, and if it succeeds I will keep executing it (because the payoff says that success is good). But if it fails I will know for sure it would never succeed so I will give this program up.

Propagating the beliefs is necessary because the function get_optimal_action depends on the belief, and what Perkun believes NOW is something different than what he would believe after executing the program for the first time. Now he believes that the program "print print" can be good (50%) or bad (50%). But after he executes the program he will know for sure whether it is good or bad, so his belief will be either {good(100%) and bad (0%)} or {good(0%) and bad (100%)}. Consequently the plan is to keep executing the program if it is good (which we will know after the first attempt) or switch to a different program if it is bad.
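The belief update described here is a single Bayes step. A minimal sketch, assuming the deterministic observation model of this example (illustrative code, not Perkun's actual implementation):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>

// Illustrative sketch (not Perkun's code): one Bayes step for the
// "print print" program. Prior: good 50%, bad 50%. Assumed observation
// model for execute_the_program: a good program always yields
// it_works=true, a bad one always yields it_works=false.
std::map<std::string, double> update_belief(const std::map<std::string, double>& prior,
                                            bool it_works) {
    std::map<std::string, double> likelihood = {
        {"good", it_works ? 1.0 : 0.0},
        {"bad",  it_works ? 0.0 : 1.0}};
    std::map<std::string, double> posterior;
    double z = 0.0;
    for (const auto& kv : prior) {
        posterior[kv.first] = kv.second * likelihood[kv.first];
        z += posterior[kv.first];
    }
    for (auto& kv : posterior) kv.second /= z;  // normalize
    return posterior;
}
```

Starting from {good: 50%, bad: 50%}, observing it_works=true drives the belief to {good: 100%}, and it_works=false drives it to {bad: 100%}, exactly as described above.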
« Last Edit: February 27, 2018, 07:11:23 pm by pawel.biernacki »

 

