Ideas for Alternatives to Logic

  • 46 Replies
  • 6102 Views
*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Ideas for Alternatives to Logic
« Reply #30 on: November 03, 2019, 05:42:24 pm »
Ok, so you simulate a neural structure and THAT gives you the results / computes the result, but how? All that could happen is that a neuron with 8 or 1,000 connections receives a vote on each one and combines them at that location; it's just a weighing-in vote after all that. A number based on signals. Exactly what all nets do. Collect votes/energy...
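For concreteness, the weighing-in vote at a single node can be sketched in a few lines of Python (a toy illustration only, not anyone's actual project code):

Code:
# Toy sketch of the "weighing-in vote" at one node: each incoming connection
# contributes a signal scaled by its weight, the node sums the votes and
# fires if the total crosses a threshold. Illustrative only.
def neuron_vote(inputs, weights, threshold=1.0):
    total = sum(x * w for x, w in zip(inputs, weights))  # collect votes/energy
    return 1 if total >= threshold else 0                # fire or stay silent

# A node with 8 incoming connections:
print(neuron_vote([1, 0, 1, 1, 0, 0, 1, 1],
                  [0.3, 0.5, 0.2, 0.4, 0.1, 0.9, 0.2, 0.3]))  # -> 1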

Or does yours do it in a more realistic manner? Like signals driving down axons and crashing together at the end of the tunnel in a detailed expression...?

Your peers want to know.
Emergent          https://openai.com/blog/

*

Korrelan

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: Ideas for Alternatives to Logic
« Reply #31 on: November 03, 2019, 06:19:59 pm »
Quote
Or does yours do it in a more realistic manner?

Yes, I simulate a biochemical/ electrical neural processor that runs a 'program' (GTP) that processes 'data' and gives the 'results', well... technically it cycles the 'results'.

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Ideas for Alternatives to Logic
« Reply #32 on: November 03, 2019, 06:43:44 pm »
Lol stop saying GTP, we get that, that is an algorithm.....we're talking about the 3D structure - what is being simulated - cell walls? Axon walls, reaction physics when the signals unite....and why is that important also.

I want to know about this new simulated processor - the physics behind it. Imagine 2 bouncing balls in a 5D space. I don't want to know about their direction or what controls them. I want to know about the environment, the 5D space - the walls to be specific....why simulate physics of the walls of the structure, or friction etc of the balls while travelling as 'signals'...
Emergent          https://openai.com/blog/

*

Korrelan

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: Ideas for Alternatives to Logic
« Reply #33 on: November 03, 2019, 07:29:54 pm »
Quote
Lol stop saying GTP, we get that, that is an algorithm

No it’s not… it’s a complex pattern of activation, a wave, a cyclic electro-chemical manifestation/ phenomenon lol.

Quote
we're talking about the 3D structure - what is being simulated

Everything, well… everything I deem required… many types of neurons, glial cells, transmitter compounds, axons, dendrites, synaptic junctions, electrical activity, etc

Quote
and why is that important also.

Because it wouldn’t work otherwise lol.

Quote
I want to know about the environment, the 5D space - the walls to be specific....why simulate physics of the walls of the structure, or friction etc of the balls while travelling as 'signals'

I believe I’ve figured out what creates/ generates intelligence/ self awareness/ consciousness in nature, be it in slime mould, ants, octopuses, dolphins, monkeys, apes or humans.  The ‘mechanism’ can’t be computed in a standard schema, so I have to simulate the whole machine.

I can do this all night lock….I’m not giving up my tech… just yet.

 :)

Anyway... I thought GPT-2 was the answer/ solution to AGI?

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Ideas for Alternatives to Logic
« Reply #34 on: November 03, 2019, 07:39:36 pm »
Or maybe you mean the simulated blob isn't making the result, but rather it is COMPUTING the result.

However, I can't help but imagine that chemo-electrical cars crashing at the cross-section joint are both the result AND the computation. I'll stick to my understanding of intelligence to grasp AGI for now.


Edit:
'Cause whether you make a real robot tinman or simulate his tincan butt in a computer world, there are always going to be nodes voting and activating and summing up some sort of number, you see...
« Last Edit: November 03, 2019, 08:33:39 pm by LOCKSUIT »
Emergent          https://openai.com/blog/

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Ideas for Alternatives to Logic
« Reply #35 on: November 03, 2019, 10:02:33 pm »
"So, instead of looking for something to replace computation, let's look for the right kind of computation."

"BROOKS: Let me give you an example that fits your model there. We went from the Turing machine to the RAM model, and current computational complexity is really built on the RAM model of computation. It’s how space and time trade off in computation."

Yes! Here they speed up 'computation' (and 'do computation') by just storing larger, or more refined, data... that way you compute less and rely on stored experience instead. You can have fast speed but big memory, or small memory but slow speed.
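A quick, hedged sketch of that trade-off in Python (a memo table standing in for stored "experience"): the cached version spends memory to answer repeat questions almost instantly, while the uncached one recomputes everything.

Code:
from functools import lru_cache

# Space/time trade-off sketch: spend memory on stored "experience" so you
# compute less. Same answers either way; only the cost profile differs.
@lru_cache(maxsize=None)      # big memory, fast repeats
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

def fib_slow(n):              # small memory, slow repeats
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

print(fib_cached(30))   # fills a lookup table once, then it's nearly free
print(fib_slow(30))     # re-derives the same sub-answers over and over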
Emergent          https://openai.com/blog/

*

goaty

  • Trusty Member
  • ********
  • Replicant
  • *
  • 552
Re: Ideas for Alternatives to Logic
« Reply #36 on: November 04, 2019, 03:34:16 pm »
Goaties DYK -> (did u know)

*  If you've got an input and an output, you don't need an oscillator to run the system; you just supply a battery, and when you change the input the output naturally changes. The form of a feedforward neural network is the same as that: an I->O machine needn't have an oscillator, it just has to conduct its way across for its result.

So no osc necessarily required for logic!!!  Yay, ezy!
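In code terms (a hedged sketch, not goaty's design): a feedforward pass is just a pure input-to-output function. There is no clock inside it, and the output only changes when the input does.

Code:
# Sketch of an I->O machine with no oscillator: a tiny feedforward pass.
# The function has no internal clock or state; it just "conducts" the input
# across to the output. Illustrative weights only.
def feedforward(x, w1=0.8, w2=-0.5, bias=0.1):
    hidden = max(0.0, w1 * x + bias)   # one ReLU conduction step
    return w2 * hidden                 # output settles from the input alone

print(feedforward(1.0))   # same input -> same output, every time
print(feedforward(2.0))   # change the input and the output changes with it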

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1175
Re: Ideas for Alternatives to Logic
« Reply #37 on: November 09, 2019, 07:42:18 am »
Alright, weekendly update! Here are my attempts at further developing this non-rigorous perception idea. Also, I'm trying out a GCRERAC format; who knows, maybe it will guide me to the light.

Goal:

My primary assumption here is that nature has refined the art of making effective divisions in experience to quite a high degree. These divisions are units of thought; they are shaped to fit together, thus having the benefit of being easily unified, and held at arm's length when necessary. Seeing these "thinks" in terms of others, in a web of interaction, could be a powerful tool for AGI. I will discuss the pros and cons of logical "Single File" perception vs my theoretical "Wholeistic" perception, then try to discover and convey ways of representing them.

"Diagram":

[Image: pressure cracks in lake ice]

Conflict:

There are some foreseeable consequences to using just a linear thought process. For example, after careful reasoning, a one-at-a-time logical machine (or a person taught to use their mind in this way) might conclude they should devote their time to the pursuit of happiness. Then, not having access to the big picture as a cohesive whole, they're liable to overlook that pursuing happiness directly won't be a fulfilling use of their time; the linear logical pursuit can block itself. Like inadvertently walking in large circles by focusing exclusively on foot placement.

When we consider robots, we want to give them our best characteristics. Science and technology are in right now, so we focus on those, the things we consider most civilized, the quantitative.

To discover what something is, we perform an experiment, take the results, the numerable facts, and associate them backwards with the object of the experiment. Which, objectively, is backwards thinking. For a purpose, yes, but you don't want to be doing it all the time.

It's mostly useful because it contrasts well against our wholeistic perception. A(~G)I's are on track to lack this property, and giving them solely analytic, linear processing will limit their mental depth perception. How might that manifest? Well, humans are susceptible to the misapplication of thought types as well.

Our instincts probably take their cues from the greater world model, giving us advice which we may not immediately understand, nor need to understand. That's helpful, to a point. Continued running on reason without sufficient referral to perception can result in errors, which can corrupt much of our world view. Perception, assumptions/conclusions, continuous verification. That's how it should go, with teamwork.

When properly balanced, the wholeistic perception should keep the single file perception from doing anything laughably unwise, while the single file perception should greatly enhance the wholeistic system's ability to alter their environment.

In conclusion, I think the most effective way of experiencing the universe is like a traditional beam pattern: a cone of diffuse light with a bright spot at its center. The larger cone reveals where to apply the smaller one. A traditional robot would perceive with a focused beam, and a dog would perceive with a diffuse beam. Combine them, and you get human-like perception. We've got the logic down; the question is, how do we give robots diffuse perception?

Diagram:

[Image: flashlight beam diagrams]

Result:

Firstly, since the wide beam will cover A LOT, we should make an attempt to simplify its job. It may be more efficient to interpret perception in terms of verbs rather than nouns. It makes sense to use verbs because of their properties: there are fewer of them, they add a dimension, and they've got fuzzy boundaries! Maybe that's why memory and understanding have fuzzy edges. Maybe integration pays better than precision when handling large data. Maybe something to do with loose tolerances as well, so processes have lower odds of jamming up (and higher odds of jamming ;).

Secondly, the random bits of perception intended for the background mosaic should be fitted together symmetrically (like the first "Diagram"). (Fitting things together using symmetry is also efficient; balance conserves energy. But I suspect that perfect symmetry is too stable, too nonreactive; that's probably why it paradoxically misses the mark on humans.) Matching the pieces isn't easy. Our senses are not all-knowing; they only pick disjointed fragments out of the universal tapestry. We have to invent our own context for each of these fragments.

So it's not like an AGI's "cohesive" world view would make exact sense, or possess much inherent symmetry. What we perceive can point to a greater order, but isn't itself ordered. So we need a fitting method to make use of the jumbled info we do get. It's like chess, only with fuzzy rules and an unknown number of invisible pieces. You can't brute force all the possible moves, and can't ensure victory by only using the basic principles of the visible pieces. The invisible pieces will create exceptions to these rules.

Diagram:

[Image: dice]

The losses from unsymmetrical interactions: (Bell curve?)
5x5=25         9x9 = ((8+10)/2)^2 = 81
4x6=24         8x10 = 80
3x7=21...      7x11 = 77...

[Image: dis-symmetry losses plot]
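For reference, the numbers above follow the difference-of-squares identity:

(n - d)(n + d) = n^2 - d^2

so splitting a total unevenly by d costs exactly d^2 off the balanced product (25 - 24 = 1, 25 - 21 = 4; 81 - 80 = 1, 81 - 77 = 4). The loss curve is flat near d = 0, which is why the effect is forgiving near the apex.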

Yup, why is it always bell curves!?!?!? But fortunately this means the effect is forgiving near the apex. Therefore you can do this:

[Image: symmetry diagram]


Now you might ask: "HS? Why not pure symmetry? What exactly is wrong with that?" Yes, it is the easiest to move, but its movement has no novelty. If you take a double pendulum which is unbalanced, vs balanced, vs nearly balanced, and give them all a swing, the nearly balanced one will probably cross the most points in its natural reactive movement. This would correspond to the most general type of intelligence.

[Image: double pendulums]


Emotion:

The wide beam communicates in large bandwidth to the small beam using emotion. It's a ground-up approach. Acute intelligence is more suited to a top-down approach. Eg, [you are my/I am your] [wife/husband], which means we should interact like [ ex ]. Then you get a divorce... It's binary, on or off. Going with the bottom-up route, the most comfortable natural relationship indicates appropriate adjectives. It's basically listening to reality. This way behavior is less forced, and personal/interpersonal discord is reduced.

If it's true that perception is made from verbs, then that's the source of the phenomenon where people are miserable in apparently luxurious environments. Like the typical paradox of smiling peasants and sulky businessmen. Material (nounal) quality of life ≠ actual quality of life. Rather, actual quality of life = the quality of personal narrative. And the quality of personal narrative = (material quality of life)*(perceived verb sequences). The brain requires experiencing the correct verbs in the correct orders for proper informational nutrition. But verbs require nouns, the discrete things created by the strong beam, in order to exist in an according abundance.

Far be it from me to say (I feel like a complete waste of food when one of them shows up), but I think this may partially explain why highly intelligent people are more susceptible to depression and the like. We use what's available, right? High-IQ people have a better narrow beam, so they apply it more often, to a greater range of things. Conversely, it doesn't make sense for someone like me to make my way through the world using primarily logical analysis; I'd never get anywhere!

Point is, too great an imbalance between the apparent usefulness of the broad and the narrow beams of perception can lead to a disproportionate use of one or the other. And this would do what? Remember "5*5=25"? And "(material quality of life)*(perceived verb sequences) = (quality of personal narrative)"? It could be that: (single file logic)*(wholistic perception) = (a maximized self).

Diagram:

[Image: diagram screenshot]

Reason:

What life forms are having the best go of it? In general, probably retrievers, labs, beagles... Now, linear, a-to-b, cause-and-effect reasoning will tell you that dogs have good lives thanks to humans, ergo let's pat ourselves on the back, we are very generous. But when we step back and see the interactions of everything, the other perspective reveals that credit is due to their skill at selecting/creating the best niche. They can be viewed as winning at life more skillfully than any of us intellectually advanced lifeforms. This is indicative of really effective divisions in their broad beams of perception. We should learn from them to see how it's done, and convey these skills to AGI.

So, it follows that the focused dissective intelligence we're most proud of has little to do with acting rightly in the grand scheme of things. All it does is increase the influence of a given entity on the world. It is more a tool of being, than the core of being.

That being said, focused intelligence can still multiply the awesomeness of your creature by a brsdfgkjsillion. It could sort of juggle the various facets illuminated by the wide beam. I think you'd need several main methods of arranging this data into useful internal discoveries: a basic mental toolbox, a collection of useful algorithms. Each GI develops a unique bag of tricks, though each member of a species probably starts out with a similar "essentials" pack.

We should think carefully about these tools because they should be designed to handle, and pick apart whatever the broad beam presents. Verb handling tools...

"Diagram:"

[Image: cartoon toolbox character]

Anticipation:

How could the pieces of wholistic perception be fitted together? Melted edges? Imagined framework? Bookshelf? Fitting into personal narrative like bricks and mortar? I like the last one; it fits with my "primary pathways and processes of intelligence" diagram and accidentally incorporates the verb idea. Maybe that's not how we do it at all, but there could be several ways of getting this party started. I mean, just getting cohesion between my ideas could be worth something, even if nature is doing it another way.

Alright. So let's represent it by showing the potentials of a chess board all at once, with verbs indicated by positive and negative shading like magnetic fields, and then applying logic to the critical paths. They can be represented by magnetic fields because the invisible pieces reduce the visible pieces' probability of influence with distance (x,y,t).

The other adjustment to regular chess is giving rules soft boundaries to introduce some room for error. I believe neural nets are able to think quickly and repetitively because they use approximations.

There are sure to be fewer glitches if thinking involves collisions of large numbers of signals. Given the brain's seeming "what the hell" attitude towards the fine points of internal structuring (respect), clashes of armies instead of duels look like an effective strategy for dependable, repeatable results. Like running a simulation multiple times, only in parallel.

Therefore, to get those types of forgiving mechanics, I'll represent each chess piece with 81 pieces of its type. For now, here's the basic idea.
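A rough sketch of that "armies instead of duels" idea in Python (illustrative only, with made-up numbers): evaluate the same thing with many noisy copies and let the average carry the result.

Code:
import random

# "Clashes of armies instead of duels": one noisy evaluation can mislead,
# but the average over many copies is dependable and repeatable.
def noisy_evaluation(true_value=0.3, noise=0.5):
    return true_value + random.uniform(-noise, noise)

duel = noisy_evaluation()                                  # a single duel
army = sum(noisy_evaluation() for _ in range(81)) / 81     # 81 copies, as above
print(round(duel, 3), round(army, 3))   # the ensemble stays close to 0.3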

Diagram:

[Image: diagram screenshot]

The big external perceptual chess board becomes more and more internal with time, which helps to identify present situations from context noticed in past situations. Parts of patterns evoke larger memorized ones from past experiences. Therefore broad perception will draw increasingly from these memorized big-data slides. It could keep optimizing itself indefinitely. What's next? No clue. Maybe someone else has some ideas.
« Last Edit: November 09, 2019, 09:21:15 am by Hopefully Something »

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 606
    • WriterOfMinds Blog
Re: Ideas for Alternatives to Logic
« Reply #38 on: November 09, 2019, 06:01:01 pm »
This is the first long-form theory piece someone's posted here since ... I don't know when.  Respect.

The complaint I've had in the back of my mind since the beginning of this thread, has been that I think you're being a little unfair to the concept of "logic."  There are forms of logic that incorporate uncertainty and imprecision, and to imply that logic is always linear or sequential might also be overly narrow.  But that is probably just a semantic quibble.

I like the general idea of "broad and shallow PLUS focused and deep."  Seems like it could apply not just to perception, but to any form of information access or search.  (Though maybe when you said "perception" you weren't just referring to the data stream from sense organs.)

I'm less convinced by "verbs over nouns."  There's some experimental evidence that human children learn nouns first/faster, which could suggest that nouns are easier or more fundamental (though there seems to be uncertainty about how much the child's native language and specifically speech-related [as opposed to conceptualizing] skills play into this).  In any case, you end up needing both, and I'm not sure that categorically excluding either one from the diffuse zone is what I'd do.

It is true that change in the "wide beam" area tends to draw our focus.  For example, unexpected motion in your peripheral vision will probably prompt you to turn your head and get details.  But this seems less about "verbs" (putting a label on what happened) than the mere concept of change: "something moved."

Your discussion of the smiling peasant and sulky rich man also comes across as a bit weak, because nouns aren't strictly about material objects.  "Joy," "justice," "friendship," and "beauty" are all nouns, for instance.  "Personal narrative" strikes me as a worthwhile thing to explore, though.

Quote
What we perceive can point to a greater order, but isn't itself ordered. So we need a fitting method to make use of the jumbled info we do get.

The flood of information we receive from our perceptive organs is scarcely useful to us in its raw form, so we make sense of it by finding patterns, learning categories, and building up successive layers of abstraction.  Yes.

When you talk about symmetry being more efficient, are you reaching for ideas about data compression?  (Symmetry implies a degree of redundancy, so you can get away with storing and processing less info.)

Quote
What life forms are having the best go of it? In general, probably retrievers, labs, beagles...

That's very subjective.  Some beagles are being brutalized in testing labs.  Some retrievers get dumped at an animal shelter and euthanized when their human family decides they're too old or too inconvenient.  In general, an individual dog is no match for an individual human who wants to do him harm ... they're only "successful" compared to humans at the species level.  Even if we only consider the lucky dogs, I wouldn't trade my life for theirs.  I value my "higher" reasoning powers even when they bring me anguish.

"I envy not in any moods
The captive void of noble rage,
The linnet born within the cage,
That never knew the summer woods:

I envy not the beast that takes
His license in the field of time,
Unfetter'd by the sense of crime,
To whom a conscience never wakes ...

I hold it true, whate'er befall;
I feel it, when I sorrow most;
'Tis better to have loved and lost
Than never to have loved at all."  (Alfred, Lord Tennyson)

Some of what dogs have achieved could also be an accident of history.  Is their mutualism with humans a product of superior intelligence on their part, or did it form due to circumstances outside their control?  Would they have fared equally well if the available niches had been different?  (In short, how "general" is the type of intelligence that dogs excel in?)

Quote
The wide beam communicates in large bandwidth to the small beam using emotion. ... It's basically listening to reality.

Careful.  I agree that emotions provide information.  Sometimes it's valuable.  But it's also prone to inaccuracies, bias, etc. ... which means it generally needs to be audited by the narrow beam before you use it, in my opinion.  I would not consider emotional information to be a more reliable representation of reality than any other.  Plus, your top-level executive functions feed back into how your "wide beam" interprets the world.  You can groom your emotions over time.  Don't just follow your heart; lead your heart.

In the example you gave of relationships and marriage ... I suspect that choosing to love someone (in the operational sense, where you work toward their best interest) can help manufacture love-the-emotion for you, creating positive feedback loops.  If you only love (action) the people you experience love-the-emotion for, you're doing things backwards.

Quote
So, it follows that the focused dissective intelligence we're most proud of has little to do with acting rightly in the grand scheme of things. All it does is increase the influence of a given entity on the world. It is more a tool of being, than the core of being.

I like this, but I would identify the "core of being" or the source of "acting rightly" as goals and values -- not a more diffuse variant of intelligence, nor emotions.  All forms of intelligence are tools for reaching our goals, and emotions are just more things that happen to us.  The choices of the will are what ultimately make for a good or bad life.

The "chess with invisible pieces" experiment is nice as a metaphor, to show how the skills you're reaching for differ from those needed for regular chess ... but for a demo, I'd rather see the concepts applied to a real-world scenario.  Like "navigate through this crowd of people who will move in only semi-predictable ways," or "given a sentence that might have missing words, construct the sentence most-likely-intended by the speaker."  That would better help me connect your ideas to their practical realization.
« Last Edit: November 09, 2019, 06:45:56 pm by WriterOfMinds »

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1175
Re: Ideas for Alternatives to Logic
« Reply #39 on: November 11, 2019, 11:23:16 pm »
I appreciate the extensive feedback.

Quote
The complaint I've had in the back of my mind since the beginning of this thread, has been that I think you're being a little unfair to the concept of "logic."  There are forms of logic that incorporate uncertainty and imprecision, and to imply that logic is always linear or sequential might also be overly narrow.

Yep, darn logic seems to be everywhere. I should probably just look for a variety which incorporates room for error.

Quote
I'm less convinced by "verbs over nouns." 

I was going for the concept of "static things" VS "changing things", and yes, nouns/verbs don't describe that perfectly. Same with the contentment/discontentment in people. That could be dependent on the static/dynamic elements found by their perception, as well as their ability to turn those elements into something worthwhile, eg, a fulfilling personal narrative. I believe this is the predominant ultimate goal for people.

The idea is, that without sufficient dynamic elements, no amount of static elements will improve your narrative. They would become just so much irrelevant exposition. I think we know things in terms of objects, and understand them in terms of processes. So, the broad beam understands, while the narrow beam knows. We require a daily understanding of our own progression through our story.

Quote
There's some experimental evidence that human children learn nouns first/faster, which could suggest that nouns are easier or more fundamental (though there seems to be uncertainty about how much the child's native language and specifically speech-related [as opposed to conceptualizing] skills play into this).

Learning to represent dynamic things in a language might also be more challenging because the symbols used for actions usually have less in common with the things they are meant to represent. For example, a glyph for crocodile which resembles a crocodile is something like geometric onomatopoeia, so learning is simplified. Conversely, "to bounce", as written, is further removed, requiring more points of verification, and therefore time, to be integrated by the brain.


Quote
When you talk about symmetry being more efficient, are you reaching for ideas about data compression?  (Symmetry implies a degree of redundancy, so you can get away with storing and processing less info.)

Symmetry seems like it could help with data placement and reduce storage/retrieval/calculation requirements. I wasn't exactly reaching for that use of it, I just noticed a pattern and wondered if it could be useful in some way. I was more leaning towards the idea that the system should detect and link opposites, up-down, forwards-backwards, light-dark, light-heavy. Because these things appear together in our perception. To recognize something as heavy requires a lighter thing to compare it to, so those concepts end up linked in our minds. Probably in both beams even, we know things are light or heavy, and we understand the difference in terms of processes.

Quote
In general, an individual dog is no match for an individual human who wants to do him harm ... they're only "successful" compared to humans at the species level.  Even if we only consider the lucky dogs, I wouldn't trade my life for theirs.  I value my "higher" reasoning powers even when they bring me anguish.

Ok, dogs don't top the food chain, or power pyramid, but I still think they are living the most... admirably, maybe. Both in relation to others, and themselves. Firstly, they refrain from making themselves miserable, while humans have a talent for it. Secondly, dogs don't abandon humans if they are old or inconvenient; neither do they perform "ends justify the means" lab experiments on us, or anything. That stuff rebounds on us. If you do something wrong, your psyche doesn't accept intellectual excuses. So I'm not convinced it's the higher reasoning directly which brings you anguish, it could be just the "lower" reason's protests at being interfered with.

Anyways, consequences spring up; people are harboring increasing resentments towards their own species. See, even our will to remedy things often starts off on the wrong foot; it would be funny if it didn't turn out so serious. I think it's safe to say that we probably don't have all the best ideas. It could be worthwhile to study/teach how other life forms interact. Those are the best non-theoretical examples we have, and we don't have a good track record with the institution of purely theoretical methods.


Quote
(In short, how "general" is the type of intelligence that dogs excel in?)

I think they excel in adaptability, which is even better, but can lead to very general intelligences. I think their genetics are very Swiss-army-knife, so they can adjust their behaviors to the changing times rather well. Evolutionarily speaking, we probably have a strong reaction to what we see as kindness/goodness because of its usefulness. I think dogs have discovered the same small-group mentality as humans, only they have honed it to an even broader degree. If true, this should allow them to secure better niches in a broader range of environments.

Quote
I like this, but I would identify the "core of being" or the source of "acting rightly" as goals and values -- not a more diffuse variant of intelligence, nor emotions.  All forms of intelligence are tools for reaching our goals, and emotions are just more things that happen to us.  The choices of the will are what ultimately make for a good or bad life.

I agree, personal pivot points hinge on your basic values, but those seem to be noticed, chosen, and strengthened by emotional responses. The will judges importance by sensing emotion and takes an appropriate stance. A depressed person has some emotions reduced, and their will seems reduced accordingly. A passionate person has some emotions increased, and their will seems increased accordingly. I think emotions are like air: they are always there, sustaining you, but so ever-present that you only notice them when you have "weather."


Great theory testing ideas! I'll save them for when I feel like continuing this "background of thought" idea.

*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Ideas for Alternatives to Logic
« Reply #40 on: November 12, 2019, 01:24:05 am »
I'm really confused by this thread, can we summarize... Say we want to build an AGI, and we focus on the computer running it (including the logic of the AGI).

Logic, to me, means:
1) AND/OR/NOR gates in transistors.
2) Prediction (aka reasoning/planning) using big data.
3) Binary data - 1s & 0s in computers.

To me, to build AGI, we need binary data (everything is binary in electronics...). We also need prediction modeling (AGI basically is a decision maker based on past data...). And we need computer transistors to carry out logic on 1s & 0s.
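As a toy illustration of point 1 (a sketch, not a claim about how AGI must be built): NOR on its own is functionally complete, so AND, OR, and NOT can all be composed from it, which is why a pile of transistor gates is enough to carry out any logic on 1s & 0s.

Code:
# Toy illustration: NOR is functionally complete, so the other gates
# can be wired up from it alone.
def NOR(a, b): return int(not (a or b))

def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOR:", NOR(a, b))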

How, or on what, do you differ or ponder, HS? What is it you are deviating away from? Why, lol? I mean, when we decide how AGI works, we need not worry about 1s & 0s theory, nor worry about carrying out logic on transistors, because binary data and transistors allow us to simulate any possible movie in a 3D simulator, any kind of physics, anything in the universe. My computer is a universal computer; it can run a 3D sim that processes, in parallel, a prime number on a quantum computer... As the CPU runs sequentially, it updates each part of the 3D sim, so my PC is universal and can run any 3D sim, as long as the project file has the right codes :-) (for running a mini universe lol). Anyhow, if we run an AGI algorithm, my point is we can use transistors.

But to consider an alternative to transistors in an AGI brain to act as its predictive model, hmm, well, here we are trying to discover how AGI works. The reasoning in an AGI brain is like RAM + processor logic; it is self-attention. Gates DO open in relation to related words, and even tally up as votes like an AND or OR gate. So really we've got the idea there already, well, at least I do lol.
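For what it's worth, here is a bare-bones sketch of that "gates open for related words and tally up votes" picture, i.e. scaled dot-product self-attention with no learned projections (a simplification; real transformer layers add learned query/key/value matrices):

Code:
import numpy as np

# Bare-bones self-attention: each word scores every other word by similarity,
# the scores become weights (the "votes"), and each output is the weighted
# sum of the word vectors. No learned projections here; illustration only.
def self_attention(X):
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                              # relatedness
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ X                                         # tally the votes

X = np.array([[1.0, 0.0],    # three toy word vectors
              [0.9, 0.1],
              [0.0, 1.0]])
print(self_attention(X))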
Emergent          https://openai.com/blog/

*

WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 606
    • WriterOfMinds Blog
Re: Ideas for Alternatives to Logic
« Reply #41 on: November 12, 2019, 01:56:15 am »
Quote
The idea is, that without sufficient dynamic elements, no amount of static elements will improve your narrative.

But without sufficient static elements, the dynamic elements have nothing to operate on. :)

Making a distinction between "things" (the nouns), "processes" (the verbs), and "states" (we haven't talked about those yet ... they'd be the adjectives), and making separate provision for each of them, is an idea that I appreciate.  I just don't know if there's a good basis for preferring one over another, or trying to improve efficiency by choosing only one to look at.

Quote
Quote
There's some experimental evidence that human children learn nouns first/faster, which could suggest that nouns are easier or more fundamental (though there seems to be uncertainty about how much the child's native language and specifically speech-related [as opposed to conceptualizing] skills play into this).

Learning to represent dynamic things in a language might also be more challenging because the symbols used for actions usually have less in common with the things they are meant to represent. For example, a glyph for crocodile which resembles a crocodile is something like geometric onomatopoeia, so learning is simplified. Conversely, "to bounce", as written, is further removed, requiring more points of verification, and therefore time, to be integrated by the brain.

The studies I was thinking of were testing little kids' ability to learn spoken language, not ideograms.  The researchers used made-up words for both the objects and the actions to avoid biasing the results.

Quote
Ok, dogs don't top the food chain, or power pyramid, but I still think they are living the most... admirably, maybe.

Maybe so; but now we're not talking about intelligence any more.  Have you heard of the Orthogonality Thesis?  It posits that an entity could have an arbitrarily high level of intelligence, while holding values and carrying out behaviors that the average person would find horrific.  (See https://wiki.lesswrong.com/wiki/Orthogonality_thesis and https://arbital.com/p/orthogonality/)

I have seen some people on Reddit who were stubbornly committed to the idea that "intelligence" is the same thing as "enlightenment," and insisted that an ASI couldn't possibly have values that would be evil or stupid by human lights; even if it started out with bad values, once it got smart enough it would change them. I think this is ridiculous. The only way to reason out (or emote out) that your values are bad is to judge them against a higher standard. Humans sometimes change low-level values when we reason out that they are in conflict with our top-level values -- but we never reason our way to a change in our top-level values. If you have a higher standard to compare your top-level values against, then they're not really top-level. An ASI with bad top-level values would never change them, and would be a terror.

I would also caution against relying too much on the optimistic maxim "kind behavior is the most useful."  I think this is generally true if you are dealing with an agent who is embedded in a society of similarly powerful agents.  But it starts to break down when you introduce entities who are less powerful and aren't considered part of the society ... in short, entities who are easy for the agent to exploit.  It definitely does not apply to an AGI-turned-ASI faced with a bunch of cowering humans that are in its way.

In the context of "how do we build an artificial mind, and what should it be like," values are certainly an important thing to consider.  And I would contend that Good AI is better than Smart AI.  But Good is not part of Smart.  Good is its own thing.

Quote
If you do something wrong, your psyche doesn't accept intellectual excuses. So I'm not convinced it's the higher reasoning directly which brings you anguish, it could be just the "lower" reason's protests at being interfered with.

Guilt isn't necessarily the anguish I was thinking of.  I was thinking of the bad feelings that arise not from doing something bad, but from simply knowing troubling things: like existential dread and weltschmerz.  I don't suppose that dogs have to deal with weltschmerz, though I guess I can't ask them.

*

HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1175
Re: Ideas for Alternatives to Logic
« Reply #42 on: November 12, 2019, 02:20:35 am »
Quote
I'm really confused by this thread, can we summarize... Say we want to build an AGI, and we focus on the computer running it (including the logic of the AGI).

I'm just looking for a way to get the world into a mind, and then have it reason logically based on that. We feel we are looking out into the world, but we are seeing our brain. It channels portions of the base universe into itself and creates a functional secondary universe for us to contemplate; this is what I think of as perception, or the broad beam.

The contemplation takes place in a third representation of the universe, where we can make changes and see what happens. This third universe is what I think of as intelligence, or the narrow beam.

Quote
Have you heard of the Orthogonality Thesis?

Nope. Seems interesting; I'll take a look.

Quote
I would also caution against relying too much on the optimistic maxim "kind behavior is the most useful."

Yeah, I see your point. You need all kinds of tools and methods to deal effectively with the world. One approach will eventually run into an obstacle it is not suited to tackle.

Quote
I was thinking of the bad feelings that arise not from doing something bad, but from simply knowing troubling things: like existential dread and weltschmerz.

Oh, I haven't really had to deal with that. Hope I don't acquire it with time.


*

goaty

  • Trusty Member
  • ********
  • Replicant
  • *
  • 552
Re: Ideas for Alternatives to Logic
« Reply #43 on: November 12, 2019, 07:34:36 pm »
The Weltschmerz comes out as jokes in music and art, sometimes. U taught me a word I think I'm going to attach to, because it might as well be my middle name. You're Weltschmerz-phobic if you are illogically scared of there being not much in front of us, as humans in our pathetic boring existences where there is nothing to be proud of, even if it was right to be proud of it. The truth of there being nothing at the bottom of the big blue sea sea sea actually needn't be, because we are all dying left, right and centre; I don't even know how ppl get old, it's so dangerous here.
The healthy personal narrative thing, that I just appropriated in a mongrelization of what u meant probably: I could take it as having a story you could be proud of in life, or, something perhaps better, just simply describing your surroundings more correctly, understanding that things are more than they seem to the surface peepers.

That thing about the robot being evil, you reaffirmed me then; you could only get something that [***narrow and shallow***] to do the right thing via artificial ramifications it doesn't even understand properly.



*

AndyGoode

  • Guest
Re: Ideas for Alternatives to Logic
« Reply #44 on: November 12, 2019, 07:52:54 pm »
Quote
For example, a glyph for crocodile which resembles a crocodile is something like geometric onomatopoeia, so learning is simplified. Conversely, "to bounce", as written, is further removed, requiring more points of verification, and therefore time, to be integrated by the brain.

When I tried to merge your two examples into a single mental image I came up with bouncing crocodiles. I like that. Sounds like either a child's toy or a rock band.

You beat me to the statement about the drawbacks of time--however much information an object contains, as soon as you move that object continuously in time you suddenly need an uncountably infinite amount of information to describe it. As soon as databases, neural networks, logic, or any other computer constructs attempt to incorporate time, we find the required memory space shoots out of control, as well as the complexity of the description.

As for LOCKSUIT's later comment on logic... There are many types of little-known logics, such as temporal logics, multi-valued logic (https://en.wikipedia.org/wiki/Many-valued_logic), description logics, etc., not just boolean logic. Also, speed and efficiency are crucial to intelligence, and I even include 'efficiency' in my own definition of intelligence, so even though in theory a digital computer or Turing machine can run any algorithm, if the data structure used is not efficient for the application then an answer/response from the computer may not happen fast enough to ever be useful, which by my definition would not be a demonstration of intelligence of any appreciable degree.
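For a concrete taste of one of those little-known logics (a sketch of Kleene's strong three-valued connectives, not anything from the post above): add a third value U for "unknown", order the values F < U < T, and AND/OR become min/max.

Code:
# Kleene's strong three-valued logic: F < U (unknown) < T.
# Conjunction takes the minimum, disjunction the maximum.
ORDER = {"F": 0, "U": 1, "T": 2}

def k_and(a, b): return min(a, b, key=ORDER.get)
def k_or(a, b):  return max(a, b, key=ORDER.get)

print(k_and("T", "U"))  # U: the result is only as certain as its weakest part
print(k_or("T", "U"))   # T: one definite truth settles the disjunction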

If I weren't so busy on my article this week I'd comment more at length on this and other threads.
« Last Edit: November 12, 2019, 08:22:18 pm by AndyGoode »

 

