Ooh, nice find. I'd never heard of the Tsetlin Automaton or even its more general field of learning automata...
https://en.wikipedia.org/wiki/Learning_automaton
...but with my recent interest in and study of graph theory, that's right up my alley. Here's how I understand it so far:
The Tsetlin Automaton is another type of automaton, like DFAs, except that few people have heard of Tsetlin Automata: they date back to the early 1960s and were never mainstream the way DFAs became. It's therefore not a neural network, since its nodes are not neurons; rather, the nodes are states, as in DFAs and other types of automata...
https://www.comp.nus.edu.sg/~gem1501/year1314sem2/turing.pdf
It's based on probability and competitive learning. Competitive learning is familiar to those in the field of neural networks, since it is one of the types of learning used in some neural networks, where neurons compete against each other...
https://en.wikipedia.org/wiki/Competitive_learning
The Tsetlin Automaton looks like the diagram in Figure 1 of that article: sort of like a segmented worm, where each segment represents one state. Whereas a typical DFA's state diagram branches like a tree or graph, the Tsetlin Automaton is a simple chain of states that doesn't branch. Branching isn't needed because the automaton is so simple in operation.
Learning is done with the F function, which increments or decrements the single integer that represents the state of the entire system. That single integer is what keeps the system so simple and so light on memory. If the system guesses the right action, the active cell shifts one state toward the nearest end to signal reward; if the system guesses the wrong action, the active cell shifts one state toward the middle to signal penalty. You can think of the automaton as a segmented worm that always has exactly one lit-up segment on its body. Over time, a lit-up cell near one of the ends means the system guessed consistently, whereas a lit-up cell near the middle means the system guessed inconsistently, which averaged out any strong hypothesis. At one end is one action, like turning left, and at the other end is another action, like turning right. My guess is that this linear structure is what makes its learning competitive: one action is competing against the other, which visually equates to the left half of the worm competing against the right half.
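To make that description concrete, here's a minimal sketch of a two-action Tsetlin Automaton in Python. The class name, the parameter n (memory depth per action), and the method names are my own choices for illustration; the update rule is just the reward/penalty shifting described above, with states 1..n choosing action 0 and states n+1..2n choosing action 1.

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin Automaton with n memory states per action.

    States 1..n select action 0; states n+1..2n select action 1.
    States 1 and 2n are the confident "ends" of the worm; states
    n and n+1 sit at the undecided middle boundary.
    """

    def __init__(self, n=3):
        self.n = n
        # Start in one of the two middle (least confident) states.
        self.state = random.choice([n, n + 1])

    def action(self):
        # Left half of the worm picks action 0, right half action 1.
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Shift one state toward the nearest end (more confident).
        if self.state <= self.n:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Shift one state toward the middle (less confident);
        # crossing the middle flips the chosen action.
        if self.state <= self.n:
            self.state += 1
        else:
            self.state -= 1


# Example run: an environment where action 1 is usually the right guess.
ta = TsetlinAutomaton(n=5)
for _ in range(200):
    a = ta.action()
    correct = random.random() < (0.9 if a == 1 else 0.1)
    ta.reward() if correct else ta.penalize()
# After enough steps the lit-up cell tends to sit near the
# action-1 end of the chain, i.e. ta.action() is usually 1.
```

Note how the whole "memory" of the automaton really is that one integer, ta.state, which is what the passage above means about taking up so little memory.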
I'd have to see an example of the automaton in operation to really understand it, but hopefully my understanding is accurate and hopefully my comments made it a little more understandable to others in a hurry.
P.S.--[9-26-2019]
I found a book that mentions learning automata that anyone can download for free at...
https://cs.nyu.edu/~mohri/mlbook/
Foundations of Machine Learning
Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar
MIT Press, Second Edition, 2018.
The book has a chapter on the topic of learning automata, but the Tsetlin Machine isn't mentioned, and most of that chapter is very technical, so it isn't good for a beginner's overview. Still, the download is free, whereas the hardcover version can cost $57 on Amazon.