My free energy spent reading that was wasted. Friends:
minimize free energy
Compression (in hierarchy)
Generalization in wide domains. Inference. Context.
Cost (amount of X needed)
Unsupervised Learning stages
Conversational intelligence / Knowledge about the world & desires
Discover dependent and independent relationships between nodes (like the man-to-queen word analogies). Causation.
Find Patterns (Cells differentiate like Brains do in Culture, to populate/save on free energy)
Prediction, surprise, probabilistic reasoning.
Go deep and sum it up. Go shallow and expand it.
Like I've implied:
"brains compute and perceive in a probabilistic manner, constantly making predictions and adjusting beliefs based on what the senses contribute. According to the most popular modern Bayesian account, the brain is an “inference engine” that seeks to minimize “prediction error.”"
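That Bayesian updating step can be sketched in a few lines. This is an illustrative toy, not the article's or Friston's actual formalism: assume the brain holds a Gaussian prior over some hidden state, and each noisy sensation pulls the posterior toward the evidence, shrinking prediction error.

```python
# Toy Bayesian belief update (conjugate Gaussian case).
# All names and numbers are illustrative assumptions, not a brain model.

def bayes_update(prior_mean, prior_var, sensation, noise_var):
    """Posterior for a Gaussian prior combined with a Gaussian sensation."""
    k = prior_var / (prior_var + noise_var)   # how much to trust the senses
    post_mean = prior_mean + k * (sensation - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

mean, var = 0.0, 1.0                          # prior belief about the state
for sensation in [0.9, 1.1, 1.0, 0.95]:      # stream of noisy sense data
    mean, var = bayes_update(mean, var, sensation, noise_var=0.5)
print(round(mean, 2), round(var, 3))          # belief drifts toward ~1, uncertainty shrinks
```

Each update is exactly the "adjusting beliefs based on what the senses contribute" move: the gain `k` weighs prior confidence against sensor noise.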
"Free energy is the difference between the states you expect to be in and the states your sensors tell you that you are in. Or, to put it another way, when you are minimizing free energy, you are minimizing surprise."
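As a minimal sketch of that definition (my own illustration, assuming a trivial one-dimensional Gaussian setup where free energy collapses to squared prediction error): an agent can drive surprise down by nudging its expectation toward what its sensors report.

```python
# Toy sketch: free energy as (precision-weighted) squared prediction error.
# Everything here is an illustrative assumption, not Friston's full math.

def free_energy(belief, sensation, precision=1.0):
    """Gap between the state you expect and the state your sensors report."""
    error = sensation - belief
    return 0.5 * precision * error ** 2

def perceive(belief, sensation, learning_rate=0.1, steps=50):
    """Minimize free energy by gradient descent on the belief."""
    for _ in range(steps):
        belief += learning_rate * (sensation - belief)  # move expectation toward evidence
    return belief

belief, sensation = 0.0, 1.0
print(free_energy(belief, sensation))   # large surprise at first
belief = perceive(belief, sensation)
print(free_energy(belief, sensation))   # surprise shrinks as the belief updates
```

Minimizing this quantity and minimizing surprise are the same operation, which is the point of the quote.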
Making knowledge discoveries about the real world:
"The only difference is that, as self-organizing biological systems go, the human brain is inordinately complex: It soaks in information from billions of sense receptors, and it needs to organize that information efficiently into an accurate model of the world. “It’s literally a fantastic organ in the sense that it generates hypotheses or fantasies that are appropriate for trying to explain these myriad patterns, this flux of sensory information that it is in receipt of,” Friston says. In seeking to predict what the next wave of sensations is going to tell it—and the next, and the next—the brain is constantly making inferences and updating its beliefs based on what the senses relay back, and trying to minimize prediction-error signals."
"This isn’t enough for Friston, who uses the term “active inference” to describe the way organisms minimize surprise while moving about the world. When the brain makes a prediction that isn’t immediately borne out by what the senses relay back, Friston believes, it can minimize free energy in one of two ways: It can revise its prediction—absorb the surprise, concede the error, update its model of the world—or it can act to make the prediction true. If I infer that I am touching my nose with my left index finger, but my proprioceptors tell me my arm is hanging at my side, I can minimize my brain’s raging prediction-error signals by raising that arm up and pressing a digit to the middle of my face."
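The two routes in that quote can be caricatured in code. This is a hedged toy, not Friston's formalism: one function revises the belief toward the world, the other acts on the world until it matches the belief (the finger-to-nose move).

```python
# Toy active inference: prediction error can shrink by changing the belief
# OR by acting on the world. Names and rates are illustrative assumptions.

def prediction_error(belief, world):
    return belief - world

def update_belief(belief, world, rate=0.5):
    """Route 1: concede the error, revise the model of the world."""
    return belief - rate * prediction_error(belief, world)

def act(belief, world, rate=0.5):
    """Route 2: change the world to make the prediction true
    (raise the arm until the finger reaches the nose)."""
    return world + rate * prediction_error(belief, world)

belief, world = 1.0, 0.0   # "finger on nose" predicted; arm hanging at the side
for _ in range(10):
    world = act(belief, world)       # acting closes the gap
print(round(prediction_error(belief, world), 3))
```

Either loop drives the same error signal toward zero; which one fires is the perception/action choice the article describes.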
Yep:
"And in fact, this is how the free energy principle accounts for everything we do: perception, action, planning, problem solving. When I get into the car to run an errand, I am minimizing free energy by confirming my hypothesis—my fantasy—through action."
Exactly: tomorrow's world starts with the simulation that uses knowledge:
"But the limitation of the Bayesian model, for Friston, is that it only accounts for the interaction between beliefs and perceptions; it has nothing to say about the body or action. It can’t get you out of your chair."
"fundamental drive of human thought isn’t to seek some arbitrary external reward. It’s to minimize prediction error."
"But a free energy agent always generates its own intrinsic reward: the minimization of surprise. And that reward, Pitt says, includes an imperative to go out and explore."
Exactly. Beliefs, desires, goals, innovation, generations, verification, implementation:
"“It’s not sufficient to understand which synapses, which brain connections, are working improperly,” he says. “You need to have a calculus that talks about beliefs.”"
Often victory in a Game you say? Try switching RL to knowledge-describing RL:
"Reinforcement learning doesn’t require humans to label lots of training data; it just requires telling a neural network to seek a certain reward, often victory in a game. The neural network learns by playing the game over and over, optimizing for whatever moves might get it to the final screen, the way a dog might learn to perform certain tasks for a treat."
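A minimal sketch of that reward-seeking loop, as I'd illustrate it with a two-armed bandit and a tiny Q-learning-style update. All payouts and parameters are made-up assumptions; the point is that there are no labels, just repeated play chasing reward, like the dog working for a treat.

```python
import random

# Minimal reinforcement-learning sketch: a two-armed bandit.
# No labeled training data; the agent just plays over and over,
# nudging value estimates toward whatever reward it receives.
# All parameters below are illustrative assumptions.

random.seed(0)
true_payout = [0.2, 0.8]          # arm 1 secretly pays off more often
q = [0.0, 0.0]                    # estimated value of each arm
epsilon, alpha = 0.1, 0.1         # exploration rate, learning rate

for _ in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(2)                 # explore
    else:
        arm = max(range(2), key=lambda a: q[a])   # exploit current estimates
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    q[arm] += alpha * (reward - q[arm])           # move estimate toward reward

print(max(range(2), key=lambda a: q[a]))          # arm the agent has come to prefer
```

This is the "optimizing for whatever moves might get it to the final screen" loop in miniature, and it also exposes the limitation the next quote raises: the whole setup assumes one stable game with one scalar reward.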
Like I said, bump RL up a notch: a conversational knowledge-net is your answer:
"But reinforcement learning, too, has pretty major limitations. In the real world, most situations are not organized around a single, narrowly defined goal. (Sometimes you have to stop playing Breakout to go to the bathroom, put out a fire, or talk to your boss.) And most environments aren’t as stable and rule-bound as a game is. The conceit behind neural networks is that they are supposed to think the way we do; but reinforcement learning doesn’t really get us there."
HAHA not like that. Use knowledge net.