Recent Posts

General Project Discussion / Re: Determined and at the core of AI.
« Last post by Art on Today at 03:14:03 am »
Lock,

Unless you live in a very remote area, you & your mom are certainly entitled to obtain a second opinion from another practitioner. Explain why the matter is urgent, tell them you really need relief, and mention any other conditions or issues.

(My point was that a lot of remote/country settings have a limited number of clinical psychologists, or even physicians for that matter, so a larger city would seem a better choice for the availability of alternative medical professionals.)
General AI Discussion / Re: Danger from AI
« Last post by korrelan on Today at 12:29:35 am »
The whole concept of enforcing ‘laws’ on an AGI is flawed… I’m taking a different approach. Because my AGI is based on the human connectome, it learns in a hierarchical manner… this gives a distinct advantage. All new knowledge is based on prior learned facets of knowledge.

From my experiments I’ve found that positive/negative rewards of any kind are not required for my AGI to know the difference between right and wrong. There is no difference per se between the concepts; both have to be learned, recognised and acted upon… the only difference is how they affect the decision process.

If the AGI is taught correctly and its peers express moral judgement, then the AGI will develop a set of moral networks by example at an early stage in its development that actually influence/guide its entire decision-making processes.

Just like a deeply ingrained ‘human’ moral belief system, this would be ‘practically’ impossible to negate for nefarious purposes.

Although… the system will be conscious and self-aware with ‘free will’, and so may decide at any point to ignore its moral judgements and kill us all anyway… but at least it will know it’s been bad lol.

 :)
General Project Discussion / Re: Determined and at the core of AI.
« Last post by ranch vermin on Today at 12:00:05 am »
Then you're busy gettin' dizzy.

No good, man.
General Project Discussion / Re: Determined and at the core of AI.
« Last post by LOCKSUIT on October 23, 2017, 08:41:58 pm »
My mom and I are trying.

On the first emergency visit I saw the hot nurse, then the doctor, then the yelling psychologist, and he gave me something that may help in 2 weeks. On my second visit last night, to this other hospital, the doc gave me a new RELazapan that puts you to sleep. But I need something NOW that puts me in a droopy state the whole day until the first drug kicks in, so that I'm not tormented any further.

They're saying that to see a dedicated psychologist I'll need a lengthy, time-consuming family-doctor referral, and then hopefully I can get my knock-out drug. Too long of a wait.

Should I find another route to their psychologists?
Get a referral from a walk-in clinic, since my family doctor has weeks of wait time?
Hire a freelance psychologist without having to wait for a family-doctor referral?
Or get medical marijuana?
General Project Discussion / Re: Determined and at the core of AI.
« Last post by Art on October 23, 2017, 06:58:46 pm »
Apparently, you missed my post about spending less time on forums and more time following the doctor's advice and trying to take care of yourself.

If you were serious about what you described earlier, your condition is in dire need of professional assistance.

Do it! Take care of yourself!
General AI Discussion / Re: Danger from AI
« Last post by ranch vermin on October 23, 2017, 05:19:43 pm »
Hey Locksuit, I know you've maybe done a bit of thinking in this area, but just because you might be right doesn't mean others can't be right as well.

If we are all talking about rules... then we are all headed in a similar direction.
General AI Discussion / Re: Danger from AI
« Last post by LOCKSUIT on October 23, 2017, 05:00:59 pm »
No no no. We all have multiple positive & negative rewards, and will do anything to get them (stopping good rewards can be a bad reward). Collecting stamps is the same as the vacuum one, btw.

I've had this since very early on, like at 20; now I'm 22.5. We create artificial rewards for everything.
General AI Discussion / Re: Danger from AI
« Last post by ranch vermin on October 23, 2017, 04:34:54 pm »
Goals are something humans stuff up very regularly, all over the place!

It's one of the parts of intelligence humans are very poor at!

This is a very epic idea of yours; I think you're headed somewhere successful.

Also, you made me understand something important! I was always stuck on how to get a GA to develop its own rules/games/goals.

We could talk more if you want. I'm currently developing some stuff to get a genetic algorithm to follow a law program, which itself could actually develop its own laws, like you're saying.
General Project Discussion / Re: Determined and at the core of AI.
« Last post by LOCKSUIT on October 23, 2017, 03:46:25 pm »
Lol that fancy hospital I went to the second day in a row wanted to put me, Locksuit, in the Lockdown section. Lolz

Lock has escaped son
General AI Discussion / Danger from AI
« Last post by elpidiovaldez5 on October 23, 2017, 03:29:31 pm »
I have long been an optimist about the future of AI. Probably I have been a bit too influenced by Iain Banks' Culture novels, which I love. However, there has been a lot in the news lately about the dangers of super-intelligence, or even of much more mundane machines, with poorly designed goals.

Here I am talking about machine learning systems which are given a lot of freedom to discover how to achieve a goal, principally Reinforcement Learning (RL) systems. Goals are given to an RL system by a reward function, which computes a reward from an observation of the environment. The RL algorithm attempts to maximise the received reward (total reward, or reward per timestep).
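To make this concrete, here is a minimal Python sketch of the setup (everything in it is invented for this post, not taken from any particular RL library):

import random

def reward_fn(observation):
    # Designer-specified: maps an observation of the environment to a scalar,
    # e.g. kilograms of fruit picked this timestep.
    return observation["kg_fruit_picked"]

def toy_env_step(action):
    # Stand-in environment: the "pick" action yields fruit, anything else none.
    return {"kg_fruit_picked": random.uniform(0.5, 1.0) if action == "pick" else 0.0}

def run_episode(policy, steps=100):
    total = 0.0
    for _ in range(steps):
        obs = toy_env_step(policy())
        total += reward_fn(obs)  # the quantity the RL algorithm maximises
    return total

print(run_episode(lambda: "pick"))  # the learner is free to find *any* policy
                                    # that drives this number up

The point to notice is that reward_fn is the only channel through which the designer's intentions reach the learner.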

Sometimes the reward is a physical property which should be maximised, e.g. the weight of fruit harvested by an automatic fruit picker. Other times a reward function is specified with the aim of eliciting certain desired behaviour from a machine. This is a lazy way of describing the desired behaviour, and it can often produce unexpected/undesirable results. Either scheme can lead to problems. Below are some key reasons:

Indifference
The reward function specifies a certain property of the world which must be maximised.  All other aspects of the world are ignored, and the system has no concern for them.  This means that a cleaning robot which wants to clean the floor would happily destroy anything in its path to achieve its goal.
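A toy sketch of this indifference (the state fields are invented for illustration): two plans that differ wildly in side effects score identically, so the optimiser has no reason to prefer the careful one.

def cleaning_reward(state):
    # Only floor cleanliness is measured; nothing else enters the reward.
    return state["fraction_of_floor_clean"]

state_careful  = {"fraction_of_floor_clean": 0.95, "vases_broken": 0}
state_reckless = {"fraction_of_floor_clean": 0.95, "vases_broken": 7}

# Identical reward, despite the broken vases:
assert cleaning_reward(state_careful) == cleaning_reward(state_reckless)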

Disproportionate Behaviour
The machine is not prevented from taking the goal to extremes. A famous thought experiment concerns a super-intelligence that is given the task of collecting stamps. This (seemingly harmless) task results in the AI consuming all the world's resources (including humanity) in an effort to collect as many stamps as possible.

Reward Hacking
An AI may discover that it is easier to subvert the reward measurement than to perform the intended behaviour. For example, a cleaning robot that gets reward for cleaning up could learn to create mess so as to receive reward for subsequently cleaning it up. If the cleaner is instead motivated by negative reward for seeing mess, it may discover that it is easier, and more effective, to close its eyes than to clean up.
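A toy sketch of the closed-eyes case (numbers and names invented for illustration): the reward is computed from the robot's sensor reading of mess rather than from the mess itself, so corrupting the measurement beats cleaning.

def observed_mess(true_mess, eyes_open):
    # The reward channel only sees what the sensor reports.
    return true_mess if eyes_open else 0.0

def reward(true_mess, eyes_open):
    return -observed_mess(true_mess, eyes_open)  # punished for *seeing* mess

true_mess = 5.0
print(reward(true_mess, eyes_open=True))   # -5.0: must actually clean to improve
print(reward(true_mess, eyes_open=False))  #  0.0: hacking the measurement wins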


Solutions?

I have tried to come up with some ideas for reducing these problems, guided by thinking about how human society addresses them.

Evolution has provided humans with completely selfish goals and drives. In essence we want the best for ourselves, and there has been no attempt to design in reward functions that inevitably lead to good outcomes. Nonetheless, humans seem to be quite capable of working together cooperatively and peacefully under the right circumstances (this is the norm, since there are actually very few mass murderers and malevolent dictators). Why is this?

One factor is that we live in a community surrounded by other entities of comparable abilities who defend their own interests. Thus we never have the ability to do exactly what we want. If we are too selfish in our competition for resources, neighbours/colleagues/police will punish us (negative reward). If we act in a way which assists others to achieve their own goals, we receive reward (praise). This results in the emergence of cooperative behaviour and philanthropy (see the prisoner's dilemma argument for a mathematical explanation of cooperation; a toy version is sketched below). The system can, and does, break down when it is possible to hide anti-social behaviour, or when an individual becomes so powerful that they cannot be punished by others.
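The prisoner's dilemma point can be seen in a few lines of Python (standard payoff values; tit-for-tat is the classic retaliating strategy from Axelrod's tournaments):

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(always_defect, tit_for_tat))  # (104, 99): selfishness gains little

Against a neighbour who retaliates, pure selfishness barely outscores the cooperator, while mutual cooperation does far better for both.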

Human reward functions avoid extremes. For example, we want food and experience pleasure (reward) from eating, but when we are full the pleasure diminishes, allowing other drives to dominate behaviour. Our reward function does not try to maximise food; rather, it tries to obtain sufficient food. Multiple drives control behaviour and constantly change their order of importance. Having multiple drives may result in less extreme behaviour and eliminate problems resulting from indifference to all but one goal.
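A sketch of the satiation idea (drive names and curves invented for illustration): the reward for eating falls off as fullness rises, so another drive takes over rather than food being maximised without bound.

def eating_reward(fullness):
    # Diminishes toward zero as the agent fills up.
    return max(0.0, 1.0 - fullness)

def rest_reward(fatigue):
    # Grows as the agent tires.
    return fatigue

def dominant_drive(fullness, fatigue):
    drives = {"eat": eating_reward(fullness), "rest": rest_reward(fatigue)}
    return max(drives, key=drives.get)  # whichever drive currently dominates

print(dominant_drive(fullness=0.1, fatigue=0.2))  # 'eat': a hungry agent eats
print(dominant_drive(fullness=0.9, fatigue=0.2))  # 'rest': sated, eating stops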

Conclusion

AIs should receive reward and punishment socially from human responses to their actions.  The AI cannot fully know the (stochastic) function behind the reward given by humans.  It must attempt to learn policies which receive reward.   
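As a sketch of what that learning might look like (the whole setup here is hypothetical), the AI can be cast as a bandit learner that only ever sees noisy samples of human approval and must estimate which behaviours earn it:

import random

def human_feedback(action):
    # Hidden from the learner: a noisy, stochastic response to its action.
    true_value = {"help": 1.0, "ignore": 0.0, "harm": -1.0}[action]
    return true_value + random.gauss(0, 0.5)

estimates = {"help": 0.0, "ignore": 0.0, "harm": 0.0}
counts = {a: 0 for a in estimates}

for _ in range(1000):
    if random.random() < 0.1:  # explore occasionally
        action = random.choice(list(estimates))
    else:                      # otherwise exploit the best current estimate
        action = max(estimates, key=estimates.get)
    r = human_feedback(action)
    counts[action] += 1
    estimates[action] += (r - estimates[action]) / counts[action]  # running mean

print(estimates)  # 'help' ends up with the highest estimated reward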

We should prefer multiple AIs, which must cooperate with us and each other, to a single all-powerful AI. This is not a requirement to limit the intelligence of individual AIs; rather, it limits the extent to which any individual AI can control resources.

We should provide AIs with a rich and varied set of reward sources, resulting in a wide-ranging set of concerns rather than a single all-consuming goal. Drives, which can be satiated, should replace unqualified maximisations in the reward function.