Hi Writer, let me reveal some of my insights from investigating knowledge about knowledge.
About humanness and groundings
While syntax is the study of how sentences are correctly composed, semantics is the study of what those sentences mean. This isn't restricted to textual data; it applies to any sensory input/output (you may think of the grammar of a photo of a face as a composition of brows, eyes, cheeks, nose, ...). To explain my view of semantics, imagine we understand only one language - our mother tongue - and we want to learn another one. During the learning process, we ground the new language in the words of our mother tongue. Once we have learned this new language, we may recursively learn yet another one, perhaps grounding it not in our primary language but in any language we picked up in the meanwhile. We get onion-like layers of languages that (guess what) each compile to the level below, until we reach the primary language we finally understand directly.
Now, let's go in the other direction, downward from our mother tongue. What's beneath that? I guess it could be further layers of symbolic languages: culturally learned responses, responses we are taught in school, our home upbringing, all the way down to the deepest instincts we were born with. These instincts are our groundings. No one taught them to us; we are born with them, and we simply feel them. Pure emotions, some of which we like, some of which we don't, and some of which we have learned how to achieve by using the semantic levels above the basic one.
So, how deep does a machine need to go? We may teach it any semantic level, and that's it. That may be its basic instinct, so to speak. It may then form new semantic levels above it (to learn new languages, etc.). The thing is, if its deepest semantic level is somewhat aligned with ours, it will make sense to us when the machine, say, sees an injured man and calls a doctor on the phone. It may seem to exhibit intelligence even though it doesn't share our deepest grounding in emotions - any level above will do the trick if it is complete enough.
About computing algebra and groundings
I happen to have explored lambda calculus, a functional paradigm used to calculate things. It is based on functions that take parameters and apply them within function bodies composed of other functions, going as deep as we need. Here I draw a parallel to the semantic levels above. In theory, as I learned from various sources, there is a special kind of function called a combinator. Combinators are functions whose bodies don't introduce anything new; they only recombine the order and presence of their parameters. For example, the constant TRUE is defined as `λx.λy.x`, which means: given parameters x and y, return x. FALSE is defined as `λx.λy.y`. Similarly, the operators AND, OR, and NOT are defined as `λp.λq.p q p`, `λp.λq.p p q`, and `λp.p FALSE TRUE`, respectively. The operator IF-THEN-ELSE is defined as `λp.λa.λb.p a b`.
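To make the encodings above concrete, here is a minimal sketch in Python (my own translation into lambdas, not something taken from a source); the `to_bool` helper is only a convenience I added for printing results and is not part of the encoding itself.

```python
# Church booleans written as plain Python lambdas, mirroring the lambda-calculus terms above
TRUE  = lambda x: lambda y: x      # λx.λy.x
FALSE = lambda x: lambda y: y      # λx.λy.y

AND = lambda p: lambda q: p(q)(p)  # λp.λq.p q p
OR  = lambda p: lambda q: p(p)(q)  # λp.λq.p p q
NOT = lambda p: p(FALSE)(TRUE)     # λp.p FALSE TRUE

IF_THEN_ELSE = lambda p: lambda a: lambda b: p(a)(b)  # λp.λa.λb.p a b

# convert a Church boolean to a native Python bool (for demonstration only)
def to_bool(church):
    return church(True)(False)

print(to_bool(AND(TRUE)(FALSE)))        # False
print(to_bool(OR(TRUE)(FALSE)))         # True
print(to_bool(NOT(FALSE)))              # True
print(IF_THEN_ELSE(TRUE)("yes")("no"))  # yes
```

Notice that nothing in these definitions refers to a built-in boolean; the behaviour comes entirely from how the parameters are shuffled around.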
As we can see, combinators describe constant values and operations we'd expect to be grounded or hard-coded, yet they refer only to one another, forming a system we may use as if it were really grounded in something while it is not. With a powerful enough set of combinators, we can describe any operation and algorithm known to us (this is a well-known academic result, I believe).
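As a small illustration of that claim (my own sketch, using the standard S and K combinator names rather than anything from this text): S and K form a classic basis, and even the identity function can be derived from them alone as S K K.

```python
# the two basis combinators; together they can express any lambda term
K = lambda x: lambda y: x                     # λx.λy.x
S = lambda x: lambda y: lambda z: x(z)(y(z))  # λx.λy.λz.x z (y z)

# the identity combinator derived purely from S and K: I = S K K
I = S(K)(K)
print(I(42))  # 42
```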
Why am I mentioning combinators? Well, I see them as a concrete semantic layer that doesn't need to pass meaning any further down, yet is able to interpret and calculate anything we can imagine without referencing anything below it.
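One more sketch in the same spirit, assuming the standard Church-numeral encoding (ZERO, SUCC, ADD and the `to_int` helper are names I chose for illustration): numbers and arithmetic built out of nothing but functions, with no primitive integers underneath.

```python
# Church numerals: the number n is "apply f to x, n times"
ZERO = lambda f: lambda x: x                                  # λf.λx.x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))               # λn.λf.λx.f (n f x)
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # λm.λn.λf.λx.m f (n f x)

# view a Church numeral as a native int (for demonstration only)
to_int = lambda n: n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)
print(to_int(ADD(TWO)(TWO)))  # 4
```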
So, can combinators be considered grounding constants without actually being grounded in anything outside themselves?