The Tsetlin Machine

*

LOCKSUIT

The Tsetlin Machine
« on: September 25, 2019, 10:47:16 am »
https://arxiv.org/pdf/1804.01508.pdf

OK... what is this in English?

My understanding is that it's a simple, explainable online learner that stores its state in bits for a small memory footprint, evaluates propositions with fast bitwise logic operations (AND, OR, NOT), and uses a game-theoretic method to steer those propositional clauses toward a global optimum. How to draw all that in one image I've yet to try, but I bet I'll take away very little here without some guidance.
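If I'm reading the paper's gist right, the "bits and logic ops" part means each learned clause is a conjunction of input bits and/or their negations, so it can be evaluated with a couple of bitwise operations on packed integers. A minimal sketch (my own illustration, not code from the paper):

```python
def clause(x, include_pos, include_neg):
    """x, include_pos, include_neg are bit masks (Python ints).
    The clause fires (returns 1) iff every included positive literal
    is 1 in x and every included negated literal is 0 in x."""
    pos_ok = (x & include_pos) == include_pos   # all required bits set
    neg_ok = (x & include_neg) == 0             # no forbidden bits set
    return int(pos_ok and neg_ok)

# Example: clause = x0 AND NOT x2, over a 3-bit input (bit i = feature xi)
print(clause(0b001, include_pos=0b001, include_neg=0b100))  # 1
print(clause(0b101, include_pos=0b001, include_neg=0b100))  # 0
```

The learning part, which the paper handles with teams of Tsetlin automata, would be what decides which bits end up in `include_pos` and `include_neg`.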
Emergent          https://openai.com/blog/

*

AndyGoode

  • Guest
Re: The Tsetlin Machine
« Reply #1 on: September 25, 2019, 11:45:13 pm »
Ooh, nice find. I'd never heard of the Tsetlin Automaton or even its more general field of learning automata...

https://en.wikipedia.org/wiki/Learning_automaton

...but with my recent interest and learning of graph theory, that's right up my alley. Here's how I understand it so far:

The Tsetlin Automaton is another type of automaton, like a DFA, except that few people have heard of it: it dates back to the 1960s and never became mainstream the way DFAs did. It's also not a neural network, since its nodes are not neurons; rather, its nodes are states, as in DFAs and other types of automata...

https://www.comp.nus.edu.sg/~gem1501/year1314sem2/turing.pdf

It's based on probability and competitive learning. Competitive learning is familiar to those in the field of neural networks, since that is one of the types of learning used in some neural networks, where neurons compete against each other...

https://en.wikipedia.org/wiki/Competitive_learning

The Tsetlin Automaton looks like the diagram in Figure 1 of that article, sort of like a segmented worm where each segment represents one state. Whereas most DFAs look roughly like a tree, the Tsetlin Automaton looks roughly like a list that doesn't branch. Branching isn't needed because it's so simple in operation.
Learning is done with the transition function F, which increments or decrements the single integer that represents the state of the entire system. That single integer is what makes the system so simple and so memory-frugal. If the system guesses the right action, the active cell shifts one step toward the nearest end to register the reward; if it guesses the wrong action, the active cell shifts one step toward the middle to register the penalty. You can think of the automaton as a segmented worm with exactly one lit-up segment on its body. Over time, a lit-up cell near one of the ends means the system guessed consistently, whereas a lit-up cell near the middle means the guesses were inconsistent and averaged out any strong hypothesis. One end corresponds to one action, like turning left, and the other end to another action, like turning right. My guess is that this linear structure is what makes the learning competitive: one action competes against the other, which visually equates to the left end of the worm competing against the right.
I'd have to see an example of the automaton in operation to really understand it, but hopefully my understanding is accurate and hopefully my comments made it a little more understandable to others in a hurry.
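Here's a minimal sketch of such an example, as I understand the worm picture above (the class and state layout are my own guesses, not the paper's code): a two-action automaton with 2N states, where reward moves the lit cell toward the nearest end and penalty moves it toward the middle.

```python
class TsetlinAutomaton:
    """Two-action automaton with 2N states: 0..N-1 -> action 0,
    N..2N-1 -> action 1 (the 'segmented worm' with one lit cell)."""

    def __init__(self, n_states_per_action=6):
        self.n = n_states_per_action
        self.state = self.n - 1  # start at the boundary, i.e. undecided

    def action(self):
        return 0 if self.state < self.n else 1

    def reward(self):
        # right guess: shift one cell toward the nearest end
        if self.action() == 0:
            self.state = max(0, self.state - 1)
        else:
            self.state = min(2 * self.n - 1, self.state + 1)

    def penalize(self):
        # wrong guess: shift one cell toward the middle
        self.state += 1 if self.action() == 0 else -1

# If the environment keeps rewarding action 1, the lit cell migrates
# to the far end and the automaton commits to that action.
ta = TsetlinAutomaton()
for _ in range(10):
    ta.reward() if ta.action() == 1 else ta.penalize()
print(ta.action())  # 1
```

The deeper the state sits in one half, the more consistent penalties it takes to flip the action, which is presumably where the noise tolerance comes from.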

P.S.--[9-26-2019]
I found a book that mentions learning automata that anyone can download for free at...

https://cs.nyu.edu/~mohri/mlbook/
Foundations of Machine Learning
Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar
MIT Press, Second Edition, 2018.

The book has a chapter on learning automata, but the Tsetlin Machine isn't mentioned, and most of that chapter is very technical, so it isn't a good beginner's overview. Still, the free download is a bargain: the hardcover version of that book can cost $57 on Amazon.
« Last Edit: September 27, 2019, 11:01:18 pm by AndyGoode »

*

goaty

Re: The Tsetlin Machine
« Reply #2 on: September 27, 2019, 08:21:51 am »
Hey, you explained it nicely; I think I've got an idea of what you're talking about.
That sounds interesting. I'd rather have a DFA myself, for security reasons: I wouldn't want my robot to work out its own state-transfer system, but that's what this would be, as far as I understood it. Interesting, but I think I'd rather not let the computer be in charge of this part of its actions (watch out for all the spills it would cause!). But I guess the computer would be a lot less repetitive, say, if it were playing soccer. If you just dictated the state transfers, it would be quite predictable and redundant, but this would let the computer decide when it wants to defend and when to attack.

It may be worth a look, though: if you ever get a DFA successfully running, you can employ the Tsetlin state-transfer system and make it less predictable.
« Last Edit: September 27, 2019, 08:50:18 am by goaty »

*

AndyGoode

  • Guest
Re: The Tsetlin Machine
« Reply #3 on: September 27, 2019, 11:33:08 pm »
Thanks, goaty.

Part of what makes this topic so interesting to me is that I never even considered the possibility of an automaton learning. Everything I ever read or was taught about them was always about how you design an automaton to do a certain chore you want--basically you program it by giving it the states it needs, then make it move to those states when it sees certain inputs--never that it could learn on its own. Yet in retrospect, learning seems such an obvious extension to automata, since they already look like neural networks, which we all know learn on their own.

Another thing that is a little odd about this Tsetlin Machine is that it doesn't learn by adding nodes or links, or by adjusting weights on its links, as various neural networks do; rather, it works like a linear gauge, like the bars on a stereo display that light up farther as the volume in a certain frequency range increases. It's a somewhat humorous way to make such a machine work, in fact: put enough nodes together in a row that the viewer forgets the nodes are discrete rather than a continuous slider, then make the nodes light up in sequence so that they have the same overall effect as a continuous linear gauge. It's a bit like using nodes to draw a smiley face--an abuse of a serious architecture in an attempt to make a picture.


*

LOCKSUIT

Re: The Tsetlin Machine
« Reply #4 on: September 28, 2019, 10:10:57 am »
That's cool that it's not adding nodes or adjusting weights, just sliding a potential along the bar of nodes. But what is this automaton for... what can it do for me? I don't yet see any hierarchy, or cat=dog relational discovery, or use for the scientific method... is there an interesting point to it?

*

goaty

Re: The Tsetlin Machine
« Reply #5 on: September 28, 2019, 02:17:51 pm »
Nothing wrong with simplicity if it makes the machine smaller. Sometimes it's a good thing, especially if you were to make a micro-bot--it can help a lot.

*

LOCKSUIT

Re: The Tsetlin Machine
« Reply #6 on: September 28, 2019, 04:07:30 pm »
Maybe we haven't looked deeply enough into the Tsetlin theory. I mean, it could be just as useful as Congress:

https://www.youtube.com/watch?v=jVZFRAncTgA

*

AndyGoode

  • Guest
Re: The Tsetlin Machine
« Reply #7 on: September 28, 2019, 06:53:43 pm »
But what is this automaton for... what can it do for me? I don't yet see any hierarchy, or cat=dog relational discovery, or use for the scientific method... is there an interesting point to it?

Honestly, in my opinion it's just another case of what I've come to call 'idiot level machine learning' [ILML], which means the system is learning in the most idiotic, uninformed, blinders-on manner. There is no future in it, as far as I can see. It does have two claimed advantages--low memory requirements and optimal learning in some sense--but minor improvements in speed or memory are not very important when dealing with the ability of a machine to think. Ultimately what's important is making the machine intelligent, and this ain't it.

I happen to be interested only because it's so different from the automata I've heard of, and it ties into graph theory, which I happen to be studying now, but I couldn't recommend it to anyone. It would be interesting to learn more about the learning function F, and to see it in action in a simulation, but I can't justify the time to study it much more, except for those novel engineering solutions I mentioned that I might use someday in a different context when designing some other machine of my own.

P.S.--The F function must be adjusting something equivalent to weights, but that's the part I don't understand yet.
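For what it's worth, my reading of the paper (sketch below; the variable names are mine, not the paper's) is that the weight-equivalent is exactly the automaton's integer state: each literal of each clause gets its own two-action automaton whose actions are "include" vs. "exclude" that literal, and F is just a +/-1 nudge on those integers.

```python
N = 3                    # states per action; each state runs 1..2N
states = [3, 4, 3, 4]    # one automaton per literal (4 literals here)

def include(i):
    # upper half of the state range means "include literal i in the clause"
    return states[i] > N

def F(i, rewarded):
    """Reward reinforces the automaton's current action; penalty moves
    its state one step toward flipping that action."""
    step = 1 if include(i) else -1
    if not rewarded:
        step = -step
    states[i] = min(2 * N, max(1, states[i] + step))

F(1, rewarded=True)   # literal 1 was included; reward deepens that choice
F(0, rewarded=False)  # literal 0 was excluded; one penalty flips it to included
```

Because deeper states need several consistent nudges to cross the middle, a literal only enters or leaves a clause after repeated feedback, which would be the noise-tolerance part.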
« Last Edit: September 28, 2019, 07:27:20 pm by AndyGoode »

 

