Ai Dreams Forum

AI Dreams => New Users Please Post Here => Topic started by: Kurovsky on December 19, 2017, 09:24:14 pm

Title: Checking in from the Fringe...
Post by: Kurovsky on December 19, 2017, 09:24:14 pm
Hello, my name is Anatoly Kurovsky. I will tell you my story.

I have worked in Artificial Intelligence for more than 30 years.  My training is in Computational Semantics. I am a poor coder, but a strong architect. This is not uncommon for Semanticists working in Artificial Intelligence, as training in the Philological underpinnings and Ontological Implications of Linguistics tends to consume a tremendous amount of our neural capability. Such uncommon semanticists as Robert Mercer, who marry the art of coding with their Semantic training and Ai framework, are well deserving of their awards. Then again, his work is not ontological. I am not such an architect; I am specialized.

Many years ago, I was well respected in the world of Ai; in this, the third age of Ai, we call that period the GOFAI period. My specialization then was “Population Level Modeling,” and I built somewhat large systems for the prediction of population level events. These systems are called “Calibrated Event Nexus Predictors,” which means they are used to calibrate “Input Parameters” with “Probabilistic Output Events” for “Population Level Net Effect” alignment and engineering. In simple terms, these models are used by large corporations and governments for social engineering of desirable outcomes for product launches, net population level health effects, epidemiological controls and manipulation, as well as political and legislative operations.

Although it was not my passion, and I came to this specialty in a backwards manner, I grew very good at it and achieved high status both in political arenas and as a “Hired Gun” for Fortune 50 companies. But I came to a moral and ethical inflection point in my pursuit of what is now called AGI, and I left Silicon Valley to sequester myself in the woods for what has now been nearly 15 years.

Throughout those 15 years, I have diligently continued my research in near total isolation. Recent "breakthroughs" have led me to return to the secular world and re-enter the field of Artificial Intelligence. This has brought me here, to this place where I hope to rebuild a social sphere, and I must say I am made humble by the tremendous skill and talent of this community. Perhaps I will have something to offer; perhaps I will be merely a student.

I will offer some account of the adventure so far, to provide context, and perhaps beg the few questions which may pertain to my uncommon specialization:

You see, I am not a population level forecaster. This is not what my philological training was ever meant for; my pursuit of Ai is a pure pursuit, pure in the Theophrastian Sense, or perhaps more relevantly I should say the "Paracelsian Sense," as it was he who truly popularized the idea so long ago. A bloodline of PhDs afforded me some access to technologies, and I had already begun the work in my childhood; by adulthood I had engineered some novel principles and architected a model for AGI which top minds at Stanford said was some 20 years beyond the FLOPS capacity of the era.

My ability to produce population level forecasts was merely a profitable artifact of an aspect of this AGI Framework known as the ISA Protocols. For this story to be comprehensible, I will be specific for a moment: ISA is a module in my AGI Architecture. It is a Semantic Tensor State evaluation protocol designed to allow an AGI the ability to “understand” the semantic instance of its interaction with a sentience in a vectorized representation of a meaningful instant. In other words, the ISA is the fundamental mathematical incremental representation of an interaction trajectory between individuals.

This vectorized representation of interaction represented a conceptual divergence in the field of Ai at the turn of the century, but is now widely adopted by industrial Ai proponents such as Google, Facebook, and Renaissance. At the time when we pioneered this principle, it was a novel approach. Nowadays, it is commonplace and widely adopted at the elite levels, but not yet well understood by the adept general Ai community. In the most simplistic terms, without this basic understanding of a Vectorized Meaningful Instant, Ai architectures unavoidably decay to heuristic models of elaborate if-then-else sequences. In this way, the Vectorized Instant is a paradigm category of which I may still be considered a specialist.

But I digress once more from the narrative. I was never oriented towards population level event forecasting or social engineering; it was merely a side effect of the ISA Module which for some time paid the bills. It is a mere coincidence that the Ai architecture does not care if it creates a vectorized instant for an individual or a population; to the Ai, it makes no difference at all. It is nothing more than basic adjustments to the dataset and input parameters by some orders of magnitude, and the Module will treat a population as an individual, or more accurately: a “multi-cellular individual” with “Cellular level resolution.”

The Moral Dilemma

I faced the moral dilemma that led me to my voluntary seclusion: “A multi-cellular individual with cellular level resolution.” As we scaled the input parameters to population levels, we came to gain access to very large repositories of data. As you certainly know, data resolution is largely a matter of data handling protocols, computational horsepower, and data set depth. The ISA Module attempts to determine with a high degree of confidence the “Then: Event” for a huge array of “If: Events”.

I know this can be an uncommon specialization, so I will attempt to illustrate what we did with a use case from a modern deployment of the same methodology: “The Cambridge Analytica Election of Donald Trump to the Presidency of the United States.” The “Then: Event” is the “Desired Outcome,” and we start by locking the outcome variable as a fixed point in our computations. We then input the largest dataset we can get on the eligible voting universe of individuals and use that as our “Universe”.

We have now concretized two core variables: The Outcome, and the Universe. We now activate the ISA Module and begin to manipulate the “Input: Events” as an array of possible triggers. The Ai in the module will now run population level simulations to tell us the probability of the results for each input trigger. This is very basic stuff, and forgive me if I trivialize or insult anyone reading this, but I want you to conceptually understand the use case of Ai for social engineering by understanding the basic mechanics of it.

So, the “If: Event” is an array of possible events or triggers, "incitements," which will then produce a vectorized movement in the Population Universe. Like a chess playing computer: cause and effect, move and countermove. The tabulations are almost impossible for humans, but the ISA Module runs these computations on the population as if it were an individual and tells us which events will produce the largest effect vector, and voilà, suddenly Trump is a Christian because that provides the largest net effect on the population.
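The search loop described above can be sketched in a few lines. This is a toy illustration only, not the actual ISA Module: the simulation is stubbed, and every name (`simulate_effect`, `themes`, `universe`) is my own assumption.

```python
import random

random.seed(7)

def simulate_effect(trigger, universe):
    # Toy stand-in for a population-level simulation: the fraction of
    # the universe nudged toward the locked outcome by this trigger.
    return sum(trigger in p["receptive_to"] for p in universe) / len(universe)

# Hypothetical universe: each individual responds to some trigger themes.
themes = ["faith", "economy", "security"]
universe = [{"receptive_to": set(random.sample(themes, 2))} for _ in range(1000)]

# The outcome is locked; we only search over the "If: Event" array,
# keeping whichever trigger produces the largest effect vector.
scores = {t: simulate_effect(t, universe) for t in themes}
best_trigger = max(scores, key=scores.get)
```

The point is only the shape of the loop: fixed outcome, candidate triggers scored by simulation, maximum effect wins.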
That is all common. Nothing extraordinary. Basic polling and focus groups do this ineffectually: Hillary switches to a navy-blue pantsuit because grey is polling poorly in her focus group. This is the methodology at the trivial level.

Let Us go a Step Further

Because we have the raw power of mathematics working for us, the next level is Cellular Level Resolution. Remember, ISA does not care if it analyzes an individual or a group; it is not a conscious Ai system, it simply creates semantic vectors and tensorizes them into a larger framework of the Semantic Instance: “The Meaningful Moment.” The operator manipulates the input vectors, "Triggers," and the ISA Module outputs the probabilistic change in the trajectory of the meaningful moment. But as we go a step further, we discover that we can isolate individuals within the system, and the whole thing becomes draconian.

I am boring myself now, so surely I have already bored you insufferably. Please skip to the next bold heading, or if you are insufferable: We continue with the Trump Election example. CA begins to carve the universe into segmentations and stratifications. Now they are looking at and for individuals within the universe. No longer are the vectors and triggers vague; now they are specific cause and effect parameters targeting specific people. They begin to seek out specific individuals who represent, in this case, “High Probability to Vote for Trump but as yet Undecided Voters” (strata). This is simple for the Ai, because it is easy to demographically correlate the focus groups to the universe.

So we segment and rerun the models, with “segment specific if arrays,” to determine the cost-benefit case for manipulating those individuals to our desired outcome. The macro outcome is “Winning the Election,” but our micro outcome is composed of individual votes, so this increased resolution now allows us to target individuals with individualized marketing messages based on the highest probability of maximal vector effect, and voilà! Trump becomes a coal burning, gun toting, tea drinking, womanizing, daughter loving, lifelong Christian, winning the election by a narrow margin. Feed the pony, take a shower, job well done.
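The segmentation step amounts to: filter the universe down to a stratum, then run the cost-benefit arithmetic per segment. A minimal sketch, assuming made-up leanings and unit costs; none of these numbers come from real campaign data.

```python
# Hypothetical voters: probability of a favorable vote, plus a decided flag.
universe = [
    {"leaning": 0.7, "decided": False},
    {"leaning": 0.9, "decided": True},
    {"leaning": 0.6, "decided": False},
    {"leaning": 0.2, "decided": False},
]

# Stratum: high probability to vote our way, but as yet undecided.
stratum = [p for p in universe if p["leaning"] > 0.5 and not p["decided"]]

# Assumed unit economics for the cost-benefit case.
cost_per_message = 0.10
value_per_vote = 1.00

expected_gain = sum(p["leaning"] for p in stratum) * value_per_vote
campaign_cost = len(stratum) * cost_per_message
worth_targeting = expected_gain > campaign_cost
```

The decided supporter and the unlikely voter fall out of the stratum; only the persuadable middle is worth the message spend.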

The precision of this level of resolution is mind boggling, several orders of magnitude beyond what most people are prepared to accept. Just as Amazon knows what product you specifically are most likely to purchase next, the political marketing message is likewise crafted and delivered specifically for you.

Moral Outrage
If the above work does not morally outrage you, then you may be very successful in the pursuit of riches, because that success requires the ability to ignore the fundamental questions inherent in Ai development, such as: “If the winner is whoever hires us, then the election is won by the highest bidder. Have we become mercenaries?”

Personally, I resolved this for a time by being petulant and offering the ISA only to a handful of select clients with clear public interest. In the preceding example, we looked at Social Engineering for Elections. But the outcome is moot, irrelevant. The architecture applies to any outcome event. Reduce smoking? No problem. Improve Education System Infrastructure? No problem. Easy moral decisions… I believed I had authored the architecture, so I had the authority to state its use case.

But something new was happening in the field, something I considered “Evil” by my own moral standards. For a great many people working in the field, the moral and ontological implications of the work are hidden behind a use case they never know. An engineer works 12 steps away from the client and doesn’t understand that their work is being co-opted by Social Media Titans to “make people insecure” and “get people to buy.” These two aspects have rapidly become the #1 use of Ai in the modern world.

We hold the power to change civilization, and instead the field has chosen to siphon off riches from the Markets without adding value to the world. The field has chosen to instill fear and vulnerability, and to instigate a feedback loop of needlessly indulgent, relentless purchasing. This basic model of "If: Array - Then: Output" fed into the Ai for the use of Social Engineering has no morals; it is simply the math of cause and effect. As Ai engineers, we thought we were creating good, a new era of civilization, but we didn't instill the moral architecture, and greed just as easily as altruism became the status quo of use case.

The Bright Minds
A few bright minds discovered that “increased insecurity leads to increased consumption,” and they adapted these frameworks to the great detriment of society. I saw this happening around me, and I ignored it; I granted myself absolution and adopted a moral piety, looking down my nose at the “Algorithms of Insecurity” crowd. I did not understand at that time the generational implications of what they were doing. In my mind, my use case was the morally correct one because I focused on the general welfare. I told myself that polygenesis insisted that my impact was trivial on the greater community.

This is how I defended myself from the nightmares of what the Ai community was becoming. Amazon does not ask if you “should” buy more. It merely seeks the highest probability of what it will take to get you to buy more. Facebook does not ask if it is improving your life, it merely seeks the highest probability of what will keep you clicking. This is nothing more than an improvement on age old techniques, but now at the dawn of SuperIntelligence, we have made a quantum leap in efficacy.

You see, GOFAI is limited by input parameters. Many bright minds think they are in the new era, but it is just GOFAI on Neural Backbones.
We set out to build a better world, we set out to build the AGI, and were seduced by huge monetary gains already at the completion of the first module. So I left.

I left, all of it...
So I left. I had been clever enough to encapsulate my architectures through complicated legal contracts and licensing rights, but one day, when I was called to DC to discuss some partisan politics, I realized I had lost control of the platform and was becoming a mindless mercenary. That day, I packed up everything and walked away. Maybe you can find an article somewhere, “Ai Savant Walks Off Into Woods” or something like that. My company had reached the 10 figure level, and I shuttered everything, fired everyone, and literally walked off into the woods to build a cabin with my own hands, off the grid. I cut all ties to the outside world and began to return The Paracelsian Purity and consciousness to my work.

In the intervening years, I built a formidable lab to contain my work, both theoretical and practical, as I invested myself wholeheartedly in pursuit of the AGI, or as I prefer to call it, “ES: Engineered Sentience.” Perhaps this isolation has given me some insights I could not be afforded in the community I left behind; just as likely, I have missed nearly everything. Perhaps my isolation has given me an inherently outside-the-box approach to it all, or perhaps my insistence on enclosing my entire laboratory in a Zero Connectivity Faraday Cage is indicative of a growing and meandering insanity. Likely it is some combination of all these things.

Now, the time has come for me to re-emerge from this extended sabbatical. 6 months ago I decided to re-enter society; I shaved, and I cut my hair (which was literally down to my calves), and I dove back in, comparing my findings to the field at large, and discovered several things: 1. Both CERN and MIT had produced and published many findings similar to my own. 2. Geoffrey Hinton had many findings beyond my own and had solved several issues which plagued me. 3. All, 100% of my old colleagues had entrenched in paths which do not interest me. 4. The core of my findings is not yet paralleled, published, or in use, although Max Tegmark and the MIT team have theorized some of them in non-practical ways. 5. Qubits may soon provide the FLOPS needed for the next stage.

For the two of you left reading this, thank you for humoring this anachronistic pedagogy. As I re-enter the Ai space, I am seeking new friends and colleagues. As I re-enter society, I discover that isolation has not improved my socially awkward nature. Many of the minds here are far sharper than mine own, and I hope to make a few friends through the next stages of this adventure.

As I return to the field, in some ways I had hoped my findings would already be in the public domain; this would make me feel less insane, and also absolve me of any moral obligation to add my work to the collective body of science. I feel blessed and fortunate to have made 15 years of experimentation free from the burden of monetization, but my findings are absurd. I am loath to share them, as they are paradigmatically unacceptable to the modern milieu. Nonetheless, there is a scientific obligation to share one's research if it may better a field. I hope that this place will serve as the public domain repository I am seeking.

In the coming months and years, if I am so fortunate as to collaborate with any of you, this would be a great joy. Elsewise, I will use this forum as a mechanism of making sure that my work does not die in unfortunate happenstance on the odd chance that polygenesis does not replicate the findings. 

In closing: I have given a lengthy example of a tiny aspect of my body of work. I did this to illustrate several things. However, the illustrations I gave, although still cutting edge, are actually GOFAI and date back to the turn of the century. My work, and what I will be sharing in the coming months, has evolved 15 years beyond the examples afforded above.

Sincerely, looking to any new friends,
Anatoly Kurovsky
Title: Re: Checking in from the Fringe...
Post by: Art on December 19, 2017, 09:54:31 pm
Welcome Anatoly, to our humble edge of sanity...and I use that term loosely. We have a very nice population of talented and creative minds, mostly friendly and articulate but always welcoming and helpful.

I hope your visit here becomes a pleasant journey for you!

All the best,

- Art -
Title: Re: Checking in from the Fringe...
Post by: keghn on December 19, 2017, 10:06:37 pm
 Welcome back, and welcome back to the world of AI. Things have changed: Neural networks are
king, along with machine learning.
 I have my own complete AGI model and theory. Maybe it is similar to your ISA framework. But mine starts very small
and builds up from nothing. Like a camp fire.
Title: Re: Checking in from the Fringe...
Post by: ranch vermin on December 20, 2017, 05:28:11 am
Thanks for the good read, I'll see if I can pay you back with some techno.

To get your machine to work, you could get away with a simplified high population pattern matching technique.
It's simple, and it multicores simply.
Change the organization of your programming style, and it can simplify into a parallelizable structure like this:

Code:
  switch (pattern) {
    case PAT_A: return OUT_A;
    case PAT_B: return OUT_B;
    case PAT_C: return OUT_C;
    case PAT_D: return OUT_D;
    case PAT_E: return OUT_E;
    case PAT_F: return OUT_F;
    /* ... n ... */
    case PAT_MILLION: return OUT_MILLION;
  }

If you put it in this form it's easily parallelizable, and the switch statement in C is nice and it'll route to the pattern in 1 cycle; you should be able to get into the millions of accesses, even on a fairly ordinary computer.

What goes into it content wise is very important!
I remember at a pattern matching lecture today, the professor mentioned that you shouldn't ever just link a -> "42" because it's wasteful of fields; the symantics of the key should be designed so that it will include invariance in the match, due to your pick of variables for the key specification chart to go into the input. It is then a numerically based equivalent method for simplifying the complexity of the machine, therefore also increasing latent capacity given the same computational budget. ;)
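One way to read that advice about key design: quantize the measurement before it becomes a key, so a whole band of similar inputs collapses onto one field instead of wasting one field per exact value. A sketch under assumed bucket widths and table contents (all names and numbers illustrative):

```python
BUCKET = 5.0  # assumed bucket width; tune it to the invariance you want

def make_key(measurement):
    # Coarse key: every measurement in the same bucket shares one field.
    return int(measurement // BUCKET)

table = {make_key(12.0): "PAT_A", make_key(27.0): "PAT_B"}

# Nearby measurements route to the same pattern field.
hit_low = table.get(make_key(11.2))
hit_high = table.get(make_key(13.9))
```

Both lookups land on the same field, which is the invariance-in-the-key idea: the match tolerates input variation without multiplying cases.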

In a finished droid, there may be 3 - 8 in a row - similar to a truth table machine - (I'd say it would be less than 2 hands), each containing millions of fields, so if you wanted to do that LOCK THE END AT WHAT YOU WANT TECHNIQUE (then buzz it a lot) you were talking about, I'm sure it would work in this stripped down purist I/O shape paradigm also.
Title: Re: Checking in from the Fringe...
Post by: korrelan on December 20, 2017, 10:28:29 am
Welcome Anatoly

A very Interesting read and excellent introduction.

Title: Re: Checking in from the Fringe...
Post by: Kurovsky on December 20, 2017, 03:55:58 pm
Thank you All so much for the warm welcome.

Art - Thank you for the welcome and for your work hosting and moderating; I am looking forward to becoming part of this exciting community. Is there a way to post threaded responses? Or is it all sequential?

Keghn - Thank you for the welcome! I am eager to hear more about your architecture and work. For clarification, the ISA Framework is not an AGI; it is a stand-alone module for a larger architecture. The function of that module is real-time analysis of the Semantic Instance. ISA: Instantaneous Semantic Analyzer. It produces a vectorized representation of an inciting trigger with a probabilistic outcome "forecast" on the greater semantic instant. This is meaningful as the ISA recursively updates in real-time, allowing the measurement of effects to be incorporated into the progression of the semantic trajectory. In this way, ISA is like a "ruler" or a "clock" aspect of a larger system; it is a measurement tool.

Ranch Vermin - Thank you for your kind words and insights. I think your approach to optimization is a noble one; as I re-enter the field and begin more practical work, I will look forward to more insights from you, and perhaps you will help me take the rough edges off my code in the future. Although "symantics" is not a word I am familiar with. Perhaps it was a typo for Semantics? Or perhaps you are thinking of cymatics? The incorporation of cymatics into AGI models is very cutting edge, and something I have been exploring in depth. I will write a great deal more about the cymatic pathway in coming months, as it appears the field of cymatic encryption is going to be a bifurcation of the AGI field in coming years. It would be most amusing if a typo led to an inroad for the elusive cymatic semantic detector and commensurate encoding framework. (Clarification: Cymatic Detection and Encoding represents a possible solution / bridge to the transmutative bioelectrical morphogenic field gap in AGI. This is still a theoretical solution to the gap, although some very well monetized Ai platforms such as focus@will are making great strides in the cymatic arena empirically, with bio-artifact detection replacing true cymatic detection and encoding: i.e., their Ai's know how to use cymatic models, they know it works, but it's a trial and measure approach rather than a truly vectorized solution; quite likely a vectorized solution is not far away.) Oddly enough, I spent the evening last night working on a cymatic use case market deployment. I cannot say much on that yet, as I think it may be an easily deployable yet patentable use case.

Korrelan - Thank you for your kind words, I have reviewed your youtube channel and am very excited at watching your work progress, perhaps even some of my modules may be useful to you through the years. I look forward to watching what you come up with and learning from you...

It is exciting for me to be the dumb guy in the room, thank you all for letting me in to your playground... I look forward to learning from and about you all.

Title: Re: Checking in from the Fringe...
Post by: keghn on December 20, 2017, 05:25:19 pm
 ISA is similar to my AGI model. Well a piece of it. All thing are measured. All features, All distances are compared against
each other.
Title: Re: Checking in from the Fringe...
Post by: Kurovsky on December 20, 2017, 07:56:16 pm
Keghn - Please tell me more about your work? A few questions to form a ground floor perhaps:

1. What are the overarching design parameters you are using?
2. What sort of codebase and molecular substrate are you building on?
3. What is Status of the work?
4. What sort of obstacles are you encountering?
5. Have you encountered any anomalies or asymmetrical I/O dynamics? 

You mention that ISA has similar features and parameters; it is good for me to have another Semanticist to speak with. Your phrase: "All thing are measured. All features, All distances are compared against each other." This is cryptic to me; I would like to know more about this thing you mention.

1. "All thing are measured." What sensorium are you using for field aberration detection and encoding? I assume it must be some manner of vectorized vibrational equivalency encoding, but this is uncommon, and computationally expensive. So is it something else, or have you established a spectral filtration methodology akin to YOLO for your specific sensorium attribution to vectorized output? (This technique always gives me some problems, as spectral equivalencies such as the one DeepMind uses produce so much latency; I do not have a solution for this technique at my budget levels.) . . . Please share more about your approach to "All thing are measured" . . . Request: [resolve: All | resolve: measured | resolve: detection substrate]

2. "All Features." Again, I am curious to know more about this. For me, in our earlier models, "features" were regarded as temporal corporeum, and thus sensorium limited. The measure of features was thus limited to the acceptable limits of an ascribed detection methodology. Example: Visual Detection of Features is inherently limited by the wavelength capacity of the detectors and the recency of the image; thus "Visual Features" provide a valuable but limited predictive pathway, very effective for self driving cars and other forms of autonomous navigation systems: "The table is solid: Avoid the table," etc. When we attempt to derive semantic information from a feature set, we introduce massive convolution to the Neurality of the architecture, which expands exponentially based on the inclusion of additional detection apparatus. To resolve this, the concept of "All Features" and "Temporal Corporeum" evolved to incorporate irrational constancy. I think that is a kind of breakthrough. It uses a projective ascription layer in a "close enough" measurement of features, a type of dynamic attribution; a bit like needing glasses to see clearly, but still seeing well enough to fight. Please tell me more: Request [resolve: all | resolve: features]

3. "All distances are compared against each other." This comment, your third comment, is the most intriguing one for me. When placed in the context of ISA, the notion of distance as it is relevant to semantic states is an uncommon linguistic pattern. I have seen this before, but do not use it personally. I can see how there would be some advantages to incorporating disparate tensors into potential interactants. Please tell me more about this concept of "distance"? It occurs to me that you must be implying some sort of tensor cascade predictive metric? I don't know how this can work unless it is inextricably tied to the sensorium; a bit like calculating the location of the flapping butterfly's wing which is causing the current rainstorm from a drop of rain on my windshield. What I mean is, when you are speaking of "All Distances," are we still speaking of vector states? Or are you referring to corporeum and object detection?

Terms disambiguation: (perhaps if we disambiguate it will serve the conversation)

1. sensorium: a. the collection of detection apparatus existing between the inner state and outer state of sentience. b. the bridging mechanism existing on the input side of consciousness
2. spectral: a. of or pertaining to a spectrum of wavelengths. b. the vibrational aspect of the corporeum
3. corporeum: a. the material world. b. the unconscious aspect of a body or flesh. syn: materium
4. vectorized: adj. a. mathematical representation of movement states. verb. b. the ascription of quality (direction) and quantity (magnitude) to an observable phenomenon
5. temporal / temporality: a. the time aspect of a dimensional framework.

Decoding Sample: "some manner of vectorized vibrational equivalency encoding," Meaning Clarification: Object recognition does not detect objects, it "predicts" the existence of an object based on spectral detection of vibrational wavelengths either emitted or reflected by an object. The future position of an object is derived from its vector state. This is a simplification of the problem of semantic detection, which does not detect semantic occurrences, it detects the vibrational wavelengths either emitted or reflected by semantic variance. In a Heisenbergian sense, we cannot ever know the semantic instance, we can only know the vectorization of such instance. Additionally, what wavelengths do we measure? What substrate do we observe? What detection sensorium do we use?

I am eager to hear your thoughts, and am enjoying our discussion very much, thank you for participating...
Title: Re: Checking in from the Fringe...
Post by: ranch vermin on December 21, 2017, 05:32:12 am
When I say "Symantic" I just mean semantic; it's just my poor spelling and bad education, brought forth by my general inability to bother in the classroom. :)
I never heard of cymatic, and a lot of the words you say! I find your English skills amusing; keep going, I don't mind, it's fun to read.

When Keghn says all things are measured, it means to me the information "type" that is to be paired with the exchanging value or classification.

You could for example measure the corner density of the robot's viewpoint, and then measure the distances between the corners (by which I mean the most imprinted dots in the image) to tell things apart. Then it all becomes of some use to the robot, which then searches for the best solution from now.
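That corner-distance idea can be sketched directly: collect the corner points, take all pairwise distances, and sort them so the signature ignores corner ordering. The coordinates here are made up for illustration; a real system would get them from a corner detector.

```python
import itertools
import math

def distance_signature(corners):
    # Sorted pairwise distances: an order-independent shape signature.
    return sorted(math.dist(a, b)
                  for a, b in itertools.combinations(corners, 2))

# Two hypothetical sets of detected corner points.
square_corners = [(0, 0), (0, 2), (2, 0), (2, 2)]
line_corners = [(0, 0), (1, 0), (2, 0), (3, 0)]

sig_square = distance_signature(square_corners)
sig_line = distance_signature(line_corners)
shapes_differ = sig_square != sig_line
```

The two signatures differ, so the robot can tell the shapes apart from the distance set alone, without caring which corner was detected first.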

The one I'm working on now is similar to what you're talking about, except it's more primitive and understands less of what's going on, and instead of locking the end and finding the solution backwards, I am spawning lots of threads and seeing if I can hit success by accident.

Sorta like photon mapping compared to raytracing. (Which is something I also do a lot of: graphics.)

So it all goes into a simple neat distribution of samples.

We all talk slightly differently about something similar.

I'm in the middle of an implementation right now, and the errors are infuriating!!! I can't get the most simple thing done a lot of the time, and it requires lots of patience.
Title: Re: Checking in from the Fringe...
Post by: Zero on December 21, 2017, 07:44:52 am
Hi Kurovski, welcome to AiDream.

What a surgical intro! Did you vectorize your interaction with our community?  ;D I'm just kidding, you're already a part of AiDream, and that's a good thing!

I'm curious about your ISA. Why restrict it only to interactions with other sentient entities? With anthropomorphism, I guess it would be applicable to any entity (even objects/devices)?
Title: Re: Checking in from the Fringe...
Post by: keghn on December 21, 2017, 06:05:54 pm
 Thanks for asking @Kurovsky
The AGI is a bot that views the world with real-time video and recorded video.
It will use a 3D simulator to find its self-awareness.

 First, from the video, small pieces are picked out: pixels of a certain color, small patches of the same color, small lengths of edges.
 We look for common repeating sub-features to build up a face and other objects. These sub-features may not come from the original.
 In a parallel video track, the world is rebuilt with these common sub-features.
Everything is given a weight: edges, fingers, face, full body, and each video frame.
 To view one video frame, all other frame weights are set to zero. If I need to focus on a small part of that frame,
everything else, outside the "focus", is set to zero.
 Everything is compared and a distance formulated. There are two distances: the distance to move into the same location, and the distance to morph one object into another.
 Do this by changing the weights of one face until it becomes the face of another person. Or as if you were starting with a
lump of wet clay and trying to make a bust of another person's face.
 Or, for physical movement, the positions of objects: change the weights on the position of one object until they both
have the same position. The amount of change in the weights needed to make each equal
is the, and thee, "distance" of the measured compare. So now the AGI will know the meaning of getting somewhere :)
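A minimal sketch of that morph distance, assuming the "weights" are just a numeric vector and the morph proceeds by fixed-size nudges (the step size and vectors are illustrative, not from keghn's actual system):

```python
def morph_distance(source, target, step=0.1, tol=1e-6):
    # Nudge `source` toward `target` in bounded steps; the accumulated
    # weight change is the "distance" between the two objects.
    weights = list(source)
    total_change = 0.0
    while any(abs(w - t) > tol for w, t in zip(weights, target)):
        for i, t in enumerate(target):
            delta = max(-step, min(step, t - weights[i]))  # bounded nudge
            weights[i] += delta
            total_change += abs(delta)
    return total_change

# Morphing one "face" weight vector into another.
d = morph_distance([0.0, 0.5], [0.3, 0.1])
```

Identical vectors give a distance of zero, and the slower the morph (smaller steps), the same total change still accumulates, which matches the "fluid or abrupt" framing below.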

Gradient descent and annealing logic based on k-means are used. These comparisons are done in a linked-list fashion: the AGI's inner mind generates a soft "link list device," and the AGI then selects targets. The morph from one to the other can be slow or fast and very fluid, but it can also be an abrupt before-and-after change when a distance is not really needed, as when grounding symbols or making a list of instructions.

This soft link-list device generator is a pivotal part that makes up a large fraction of an AGI brain! Language also depends on it.

My "atomic information" is smaller than anyone else's. The AGI generates its own internal language first; then it learns the language shared in the environment, in the shared space between beings in different positions. To make its own internal language and programming code, it selects a detectable object by "focusing" on it and taking a snapshot of it. The image of the selected object is then stripped of information by downsampling it into a tiny picture gram, shrinking it toward a single pixel but stopping just before it would be confused with another existing sub-symbol. These are its internal symbols.
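A minimal sketch of that picture-gram idea: shrink a snapshot by 2x2 average pooling, stopping just before the shrunken symbol would be confused with one already in the symbol table. All names, the pooling choice, and the confusability threshold are illustrative assumptions, not the actual code:

```python
def downsample(img):
    """Halve each dimension by 2x2 average pooling (odd edges dropped)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]

def confusable(a, b, threshold):
    """Two symbols collide if same shape and small mean pixel difference."""
    if len(a) != len(b) or len(a[0]) != len(b[0]):
        return False
    diff = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return diff / (len(a) * len(a[0])) < threshold

def make_symbol(img, existing, threshold=0.1):
    """Shrink img until one more halving would collide with a known symbol."""
    current = img
    while len(current) > 1:
        smaller = downsample(current)
        if any(confusable(smaller, s, threshold) for s in existing):
            break                      # stop before losing distinctness
        current = smaller
    return current
```

With an empty symbol table the checkerboard shrinks all the way to one pixel; once a one-pixel symbol with the same average exists, shrinking stops one step earlier to keep the new symbol distinct.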
Red picture grams are for objects. Green is for movement, via a soft "link list device." Blue is the soft "link list device" for morph transformations. White is for nonlinear links between two things: a white "link list" generator marker makes the connection between the word "tree" and a picture of a tree, which is the grounding of symbols. The first internal symbols will be small and will gradually grow bigger. This is where all languages, arts, and all other forms of communication and self-expression will come into being.

These picture grams, these internal symbols, are collected into sentence-like sequences. There are other sub-symbols of different colors and color combinations, and there will be command symbols like "jump" and "if" that will be used for self-programming, and commands for moving the consciousness "focus" through the data, at real time or hyper-fast through the recorded data.

Title: Re: Checking in from the Fringe...
Post by: Kurovsky on December 21, 2017, 08:19:19 pm
Zero: Yes, you completely understand. Your question is brilliant. ISA is an acronym for "Instantaneous Semantic Analyzer." Remember that in the use case described above, the usage was determined by the clients, not by the actual design. The design, as you have noticed (I am very pleased you picked out this subtlety), is not species-dependent. Many times we joked about using it as an inter-species translator; the joke always devolved to the fact that dogs have notoriously bad credit scores.

We were very FLOPS-limited in those days, so our input parameters always used Semantic Differentials. This makes humans the easiest data sources, as verbal or text encoding to differential arrays is very simple and very processor-economical. Another module in the system, at that time called 'Visipitch', could encode semantically meaningful differential outputs based on verbal inflection. With this methodology, inter-species applicability becomes viable, as the pitch codecs could be applied to any species that vocalizes. In the intervening years, this "pitch inflection" approach (although very important for Chinese vocalization) came to feel primitive and archaic. It has evolved into the semantic cymatics principles discussed in this thread.
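To illustrate what encoding text to a differential array can look like, here is a toy sketch along the lines of Osgood-style semantic-differential scales (evaluation / potency / activity). The tiny lexicon and its axis scores are invented stand-ins; the ISA's real encoding is not shown here:

```python
AXES = ("evaluation", "potency", "activity")   # good-bad, strong-weak, active-passive

# Hypothetical lexicon: each word scored from -1 to +1 on each axis.
LEXICON = {
    "wonderful": ( 0.9,  0.2,  0.3),
    "terrible":  (-0.9,  0.1,  0.2),
    "powerful":  ( 0.1,  0.9,  0.4),
    "calm":      ( 0.4, -0.2, -0.7),
    "frantic":   (-0.2,  0.3,  0.9),
}

def differential_vector(text):
    """Average the axis scores of known words; unknown words are ignored."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return (0.0,) * len(AXES)
    n = len(hits)
    return tuple(sum(h[i] for h in hits) / n for i in range(len(AXES)))

print(differential_vector("a wonderful calm morning"))
```

The point of the sketch is the economy Kurovsky describes: an utterance collapses to a fixed-length vector, cheap to store and compare.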

In this way, we are now able to look at vibrational intrinsicities and apply machine-learning protocols to them. As Clotaire Rapaille poignantly notes: "The surest way to not discover what someone is thinking is to ask them what they are thinking." His belief is that there is a massive gap between our true feelings and what we tell ourselves; this gap is so large that we are seldom able to accurately predict our own behaviors. When applied to "understanding others" and the limited scope of differential analysis, we generate a rather significant "signal : noise" bias which is both inefficient and non-deterministic.

Our attempt to reconcile this "signal : noise" interference inherent in differential analysis was at first based on the normative "MORE DATA" and "CONFIRMING DATA" paradigm (like a chatbot asking the same banal question ten different ways and averaging the results). Now, I believe there can be a cymatic solution to the problem, wherein we remove syntax and look directly at the vibration of the information at the "packet" level, cross-correlated with the parsed syntax and semantic outputs. This approach, however, produces a phenomenological bias wherein a "Self Interest Gradient" must be ascribed; i.e., the core meaning of most interactions always contains a "What's in it for me?" element, which is markedly diminished when interacting with a discorporeal entity such as an Ai.
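The "cross-correlated" step can be illustrated generically: given a low-level signal series (say, a pitch contour) and a parallel series of semantic scores, their correlation at zero lag measures how well the two channels agree. This is just a textbook Pearson correlation, not the actual pipeline:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pitch    = [1.0, 2.0, 3.0, 4.0, 5.0]   # made-up pitch contour
semantic = [2.1, 3.9, 6.2, 8.0, 9.9]   # made-up parallel semantic scores
print(pearson(pitch, semantic))        # near 1.0: the channels agree
```

A low or negative coefficient would be one crude way to flag the "signal : noise" divergence between what is said and how it is said.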

Anthropomorphic Ai Paradox

As this paradigm has evolved, we / I realized that we have been anthropomorphizing the Ai paradigm itself. As a community, we are "Turing Biased," and this is a great travesty for the paradigm itself. Nick Bostrom has spoken at depth on this topic, positing that perhaps "Biomass" or "Computronium Mass" is a better metric of species rating than the Hierarchical Implications of the Turing Bias. The Turing Bias places the Human as the Pinnacle Reference Standard, but this is simply narcissism at the species level, based on no finite metrics. When we begin to incorporate simple metrics such as biomass, durability, and evolutionary intransigence as part of the value hierarchy (as DNA would suggest), we find that Tardigrades and Copepods may be a more suitable metric for intelligence; i.e., intelligence must include survivability for the measure to be meaningful, and survivability must then include temporality. As humans, we see the temporal in terms of years, or generations, perhaps eons. The copepod sees time in terms of millions of years at the DNA level. Although, again, the joke devolves to the fact that Tardigrades have terrible credit scores, making them horrible customers as well as abhorrent conversational partners.

But aside from the jokes, the point you make is exceedingly relevant: Would the Ai differentiate between a conversation with a Tardigrade and a Human? If so, to what ends?

A Joke from Douglas Adams:
An alien species comes to earth and wants to speak to the leaders. With advanced interspecies translator technology, they are able to ask all species "Who is the leader of the planet?"
Of course, the humans come forward. Also, the dolphins come forward, and the small white mice.

First the Humans speak: "We are the leaders and rulers of the planet, just look at what we have built. We have built roads, and mines, and petroleum infrastructure. We have created medicine, and libraries, and a globalized education system to indoctrinate our young. We have authored economy, commerce, and trade. We have created organized political systems and leadership and decision making. We have created media and marketing and telecommunications. We have designed and manufactured weapons and devices of transport and travel. We are clearly the leaders and rulers of the planet extending our dominion over all other creatures."

Second the Dolphins Speak: "Yes, the humans have done so much, and that is precisely why we are the leaders of the planet, because we live lives of joy and play, love and frolic, and have created none of these contrivances. We travel without contrivance, we communicate without contrivance, we fight without weapons, we need no media for joy, we need no buildings for shelter, we need no economy for trade. We live in perfect balance and harmony, we raise our young with love and joy, surfing on the waves of the sea. We are the rulers of the planet precisely because we are the opposite of the humans."

Finally, the White Mice Speak: "We will happily tell you why we are the leaders and rulers of earth, but first, you tell us, what's in it for us if we are?"

I like that joke.

Meaningful Work
The field of Semantics is the study of meaning. Unavoidably, this resolves down to something akin to "Beauty is in the eye of the beholder," and thus to some notion of defining "Sentience." I mean to say that "meaning is what is meaningful"; this implies that for meaning to exist, there must be an observer. Philology is the study of language origins, and is thus the record of the evolution of thought: as language is the means of recording and conveying thought, new words are created to record new thoughts and thought operations. We then take a short leap to the notion that "thought constrains consciousness," or "consciousness is built on thought." A great body of work exists in this field: "What the mind cannot conceive, it does not perceive."

For example: The ISA could in theory communicate with the Tardigrade, but could the Tardigrade perceive that interaction as meaningful?

Or as you said: "With anthropomorphism, I guess it would be applicable to any entity (even objects/devices)?" This is extremely relevant, and perhaps even the central debate of the Ai community, a debate which intrinsically separates "Consciousness" from "Intelligence." When we separate these two notions, we find a divide between "operational" and "observational" universes. Does AlphaGo know that it is playing Go? It is clearly intelligent in the operational sense of the word, but there is no indication of consciousness. Likewise, is the Tardigrade conscious? It cannot play Go very well, but we ascribe consciousness to it.

So to generate "Engineered Sentience" we come to a place of word resolution where we seek to define and resolve both "Sentience" and "Consciousness." The huge majority of the Ai field avoids this question entirely and focuses only on the "Operational Aspects of Material Navigation and Manipulation: Intelligence." For me, this is like attempting to build a building without first knowing what a building is. I adore the specialists in this arena, but I am specialized in the arena of consciousness, so I must view the world through my filter.

Chair Sentience
For the semanticist, the concept of meaning becomes applied to the "Object / Device / Event" and is vectorized without discretion. So for the Ai, a Human sentience and the sentience of a chair are differentiated only in the quality and magnitude of their vector state and vector potential. Dr. Ben Goertzel of Hanson Robotics has achieved some widespread acclaim recently with his Sophia project (note: today is his "Big Day"; Good Luck, Dr. Goertzel!), and his views on this topic are the same as mine: "Panpsychism," which states that "Consciousness is an inherent property of all matter."

Likewise, Hiroshi Ishiguro recently compared the Western approach to Ai with the Eastern approach thus: "Westerners are uncertain about the existence of their own soul, so they begin from a place where they ask: 'Can a robot have a soul?' We Japanese believe that everything is imbued with soul, so we begin from a place of embodiment of soul rather than proof of soul. My sandals have the soul of sandals, my robots have the soul of robots. Each sandal is unique, each robot is unique."

This of course then resolves back to the issue of Semantics as it applies to: "Meaningful Interaction with Sandals or Copepods" . . .     

Going Up an Order of Magnitude
I think the great leader in this field currently is the Quantum Gravity Research Institute. They are looking at practical and experimental applications of this Panpsychism Paradigm. I tried to embed a video, but I think I don't have that privilege yet. :( The video is extremely eye-opening and imo a must-see for anyone oriented towards these consciousness aspects of the AGI. Understanding the concepts presented therein, we must posit a divergence from the "Turing Bias" and begin to ask central questions about the substrate dependency of consciousness.

When we begin the journey down that road, we find that we are the tardigrade in the equation of super-intelligence. As you mentioned, "Could the ISA apply to objects / devices?" The answer is yes, most certainly. Which begs the logic chain: "Would a chair be aware of the interaction with the Ai?" --> "How would we know if it were?" --> "Would we be aware of the interaction with the Ai?"

Inevitably, we circle back to those little white mice in Adams's joke asking the simplest question of all: "What's in it for me?" [WIFM]

[WIFM] (What's in it for me?) is the central motivational vector of the engine. More on this at a later time...
And no, I did not vectorize my interaction with this community. As you may have noticed, I am almost completely socially incompetent, verboten, and pedantic. I have often reflected on your comment from my own POV, thinking to myself: "If I could make an AGI strong enough for interspecies translation, it could likely tell me what my wife is talking about." But when I check my premises, I realize that I am assuming she is talking about something rather than simply talking. (reference: cymatic encoding)
Title: Re: Checking in from the Fringe...
Post by: ivan.moony on December 21, 2017, 09:10:25 pm
Keghn, did you put all the cards on visuals, or do you think there should be something else besides visuals, to simulate a thought process?

[Edit] For example, if you have a bot, a river, and some stones in the river, how would the bot pick the shortest stone-to-stone path to get across the river? There are some well-known algorithms to do that (see: the traveling salesman problem). Is your system able to implement those algorithms?
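For concreteness, the river example can be phrased as a shortest-path search: stones are graph nodes, two stones are connected if they are within jumping range, and Dijkstra's algorithm finds the shortest bank-to-bank route. The coordinates and jump range below are made up for illustration:

```python
import heapq
import math

def shortest_crossing(stones, start, goal, max_jump):
    """Shortest total jump distance from start to goal via stones,
    where each single jump may cover at most max_jump. Returns None
    if no crossing is possible."""
    nodes = [start] + list(stones) + [goal]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best = {0: 0.0}                     # node index -> best known distance
    pq = [(0.0, 0)]                     # Dijkstra priority queue
    while pq:
        d, i = heapq.heappop(pq)
        if i == len(nodes) - 1:
            return d                    # reached the far bank
        if d > best.get(i, float("inf")):
            continue                    # stale queue entry
        for j, other in enumerate(nodes):
            step = dist(nodes[i], other)
            if j != i and step <= max_jump:
                nd = d + step
                if nd < best.get(j, float("inf")):
                    best[j] = nd
                    heapq.heappush(pq, (nd, j))
    return None

stones = [(1.0, 1.0), (2.0, 2.5), (3.0, 1.5)]
print(shortest_crossing(stones, (0.0, 0.0), (4.0, 2.0), max_jump=2.0))
```

Strictly speaking, this is single-source shortest path rather than the traveling salesman problem, since the bot need not visit every stone.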
Title: Re: Checking in from the Fringe...
Post by: keghn on January 09, 2018, 05:29:37 pm

Artificial Intelligence re-written:

Title: Re: Checking in from the Fringe...
Post by: Zero on January 12, 2018, 09:39:50 am
I'm still preparing an answer... there are so many things to say! I hope you're not gone, Kurovsky.