Ideas/opinions for troubleshooting exploding output values (DQN)

  • 37 Replies
  • 11885 Views
*

Marco

Hello folks!

While introducing myself, I mentioned that I'm working on adding a Deep Reinforcement Learning (DQN) implementation to the library ConvNetSharp. I've been facing one particular issue for days now: during training, the output values grow exponentially until they reach negative or positive infinity. For this particular issue I could use some fresh ideas or opinions to help me track down its cause.

So here is some information about the project itself; afterwards (i.e. the next paragraph) I'll list the steps I've taken so far to troubleshoot. For C#, there are not that many promising neural network libraries out there. I've already worked with Encog, which is quite convenient, but it provides neither GPU support nor convolutional neural nets. The alternative I chose is ConvNetSharp. The drawback of that library is the lack of documentation and in-code comments, but it supports CUDA (using managed CUDA). Another option would be to implement some interface between C# and Python, but I don't have a good idea for such an approach, e.g. TCP would most likely turn out to be a bottleneck.

The DQN implementation is adapted from ConvNetJS's deepqlearn.js and a former ConvNetSharp port. For testing my implementation, I created a slot machine simulation using 3 reels, which are stopped individually by the agent. The agent receives the current 9 slots as input. The available actions are Wait and Stop Reel. A reward is handed out according to the score as soon as all reels are stopped; the best score is 1. If I use the old ConvNetSharp port with the DQN demo, the action values (the output values of the neural net) stay below 1. The same scenario on my implementation, using the most recent version of ConvNetSharp, shows exponential growth during training.

Here is what I checked so far.
  • Logged inputs, outputs, rewards, experiences (all, except the growing outputs, look fine)
  • Used former DQN ConvNetSharp Demo for testing the slot machine simulation (well, the agent does not come up with a suitable solution, but the outputs do not explode)
  • Varied hyperparameters, such as a very low learning rate and both big and small training batches
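One more thing that could be added to that checklist: the original DQN setup clips the TD error before backpropagation, which bounds the gradient contribution of any single sample. A minimal Python sketch (all names here are illustrative, not ConvNetSharp API):

```python
# Illustrative sketch (not ConvNetSharp API): Bellman target plus
# TD-error clipping, as used in the original DQN setup.

def td_target(reward, q_next_max, gamma=0.9, terminal=False):
    """Bellman target for the action that was taken."""
    return reward if terminal else reward + gamma * q_next_max

def clipped_td_error(q_predicted, target, clip=1.0):
    """Clip the residual to [-clip, clip] so that one bad sample
    cannot produce an arbitrarily large gradient step."""
    err = q_predicted - target
    return max(-clip, min(clip, err))
```

If the outputs stop exploding once the error is clipped, the bug is likely in the magnitude of the regression gradient rather than in the Bellman target itself.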

There are two components which are vague to me. The regression layer of ConvNetSharp was introduced recently, and I'm not sure if I'm using the Volume (i.e. tensor) object as intended by its author. As I'm not familiar with the actual implementation details of neural nets, I cannot figure out whether the issue is caused by ConvNetSharp or not. I've been in touch with the author of ConvNetSharp a few times, but still haven't been able to make progress on this issue. Parts of this issue are tracked on GitHub.


It would be great if someone has some fresh ideas for gaining new insights into the underlying issue.

*

keghn

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #1 on: August 08, 2017, 03:37:53 pm »
Exploding values? Do your neurons use a squashing function?

*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #2 on: August 08, 2017, 05:50:43 pm »
I'm not sure if that is desired for regression. To my understanding of the ConvNetSharp code, there is no squashing function applied to the regression layers during training. Regardless, the values are expected to be in the range 0 to maybe 2. Growing beyond that range for the slot machine is just wrong.
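Since a regression output is unbounded, here is a small demonstration of why a bug there explodes: each training step should move Q toward the bootstrapped target r + gamma * Q. With a correct update the values settle at a fixed point; with a wrong gradient sign or scale, the very same loop amplifies them every step. An illustrative Python sketch, not ConvNetSharp code:

```python
# Illustrative only: why a bug in the regression update makes
# Q-values explode. A correct update converges to the fixed point
# r / (1 - gamma); a wrong gradient sign diverges.

def iterate(q0, reward=1.0, gamma=0.9, steps=100, broken=False):
    q = q0
    for _ in range(steps):
        target = reward + gamma * q
        if broken:
            q = q + (q - target)        # wrong sign: moves away from target
        else:
            q = q + 0.5 * (target - q)  # moves toward the fixed point
    return q

# iterate(0.0) approaches r / (1 - gamma) = 10
# iterate(0.0, broken=True) grows without bound, like my outputs do
```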

*

Korrelan

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #3 on: August 08, 2017, 07:22:51 pm »
I'm not familiar with this setup, but I would check the Sigmoid function in the new version of ConvNetSharp.

*

keghn

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #4 on: August 08, 2017, 07:46:16 pm »
Tanh will keep the output between -1 and 1, which is about the same range:

http://mathworld.wolfram.com/HyperbolicTangent.html


Or it could be a sigmoid or ReLU function times two:
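To illustrate: tanh squashes any input into (-1, 1), and doubling a logistic sigmoid squashes it into (0, 2), which happens to match the roughly 0-to-2 range the action values are expected to have. A quick Python sketch (illustrative only, not ConvNetSharp code):

```python
import math

# Illustrative comparison of squashing functions.

def squash_tanh(x):
    return math.tanh(x)          # output in [-1, 1]

def squash_sigmoid2(x):
    # 2 * logistic sigmoid: output in (0, 2), matching the
    # roughly 0..2 range expected for the slot machine values
    return 2.0 / (1.0 + math.exp(-x))
```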




If, early in a deep neural network, the signal coming out of a neuron gets pinned to its maximum or minimum, then that information is lost to the following layer. If the signal bounces in and out of saturation and spends 50 percent of the time there, then the information is only available to the following layer 50 percent of the time. And if a signal isn't needed at all, but gets pulled within detection range and the following layers use it anyway, that would not be good either.

So I guess you would be normalizing your data between 0 and 2, or -1 and 1?





*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #5 on: August 08, 2017, 07:51:42 pm »
This is how the neural net is set up:

Input Layer (9 nodes)
Hidden Layer, ReLU Activation (10 nodes)
Output Layer, Regression (2 nodes)

The Sigmoid activation function is not used. The loss of the regression layer is computed similarly to this (this is the ConvNetJS version):

var i = y.dim;         // index of the target dimension
var yi = y.val;        // target value for that dimension
var dy = x.w[i] - yi;  // residual (prediction - target)
x.dw[i] = dy;          // gradient passed back into the net
loss += 0.5 * dy * dy; // L2 loss

I could use Softmax for the last layer, but that would be for classification, not regression. The inputs are normalized.
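For reference, here is the snippet above transliterated into runnable Python and generalized to all output dimensions. This is only my reading of the ConvNetJS L2 regression loss, not ConvNetSharp's actual implementation:

```python
# My reading of the ConvNetJS L2 regression loss, transliterated to
# Python (illustrative, not the actual ConvNetSharp code).

def regression_loss(outputs, targets):
    loss = 0.0
    grads = []
    for out, target in zip(outputs, targets):
        dy = out - target        # residual, backpropagated as-is
        grads.append(dy)
        loss += 0.5 * dy * dy    # L2 loss term
    return loss, grads
```

Note that the gradient is just the raw residual; nothing bounds it, so an oversized target from a buggy Bellman update feeds straight back into the weights.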

*

keghn

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #6 on: August 08, 2017, 08:08:10 pm »
A slot machine is the worst thing for this example. What the "F" is your input data? The input from the last spin, or data logged from the past hundred?

*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #7 on: August 08, 2017, 08:24:46 pm »
Single-arm and contextual bandits have been used for reinforcement learning tasks before.

About the slot machine:
Each reel has 3 slots. Each slot is occupied by an item (Peach, Cherry, ..., Seven). As soon as the slot machine starts, each reel starts spinning (pulling new items from a custom probability distribution). The first stop action stops the left reel, the second stop stops the middle reel, and of course the last stop event stops the right reel. So this is not a typical single-armed bandit where all reels stop at the same time or automatically. It works like in the games Digimon World or Pokemon:

Each tick of the main loop updates each reel. The agent has to decide on each tick (wait or stop a reel).
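The mechanics could be sketched like this (the item set and the uniform probabilities here are placeholders of mine, not the actual values from my simulation):

```python
import random

# Minimal sketch of the slot machine environment as described:
# 3 reels x 3 visible slots, two actions (WAIT, STOP_REEL), where
# STOP_REEL always freezes the leftmost reel that is still spinning.
# Item set and probabilities are illustrative placeholders.

ITEMS = ["Seven", "Cherry", "Peach", "Bell", "Lemon", "Bar"]
WAIT, STOP_REEL = 0, 1

class SlotMachine:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        # reels[r] holds the 3 visible slots of reel r
        self.reels = [[self.rng.choice(ITEMS) for _ in range(3)]
                      for _ in range(3)]
        self.stopped = 0  # number of reels frozen so far, left to right

    def observe(self):
        """Flat view of all 9 visible slots - the agent's input."""
        return [item for reel in self.reels for item in reel]

    def step(self, action):
        if action == STOP_REEL and self.stopped < 3:
            self.stopped += 1
        # spinning reels shift their items down; a new item enters on top
        for r in range(self.stopped, 3):
            self.reels[r] = [self.rng.choice(ITEMS)] + self.reels[r][:2]
        return self.observe(), self.done()

    def done(self):
        return self.stopped == 3
```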

Maybe the slot machine is not the right fit for Deep Reinforcement Learning, but it already reveals issues in the DQN implementation. The old implementation does not suffer from the exploding value issue, but the newer one does. I could implement the apples and poison example of ConvNetJS for additional tests.

*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #8 on: August 14, 2017, 09:33:34 am »
Here is a small update:

The author of ConvNetSharp fixed a major bug in the regression layer, so the outputs no longer grow exponentially.

The next step is to keep testing the implementation. Concerning the slot machine example, I haven't achieved a reasonable result yet. I'm going to try out different reward signals soon. For the Apples&Poison Demo, performance drops severely as soon as training starts.

Does anybody know of a good demo for verification? The goal is to successfully train a model within minutes.

*

keghn

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #9 on: August 14, 2017, 02:43:49 pm »
So the operator of this slot machine can stop a spinning wheel or reel when they see the desired symbols showing? And can then do the same with the two other spinning reels? Or when the first is stopped, do the other reels stop in a chain reaction? What about the order in which the wheels are selected? What about the time between selections?

*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #10 on: August 14, 2017, 03:09:29 pm »
When the slot machine starts, all 3 reels start spinning. The available actions are StopReel and Wait. StopReel stops the first reel, while the other two reels keep spinning. Triggering StopReel again stops the second reel, so only the third is still spinning. And of course the last reel is stopped by executing StopReel once more.

It's not a typical single-arm bandit that just works on a single probability. The agent looks at the state of the whole slot machine: it can observe all the slots' items and can then decide to stop one reel. After executing one action (either Wait or StopReel), the slot machine is updated, so the spinning reels shift the slots' items down. For each change of the slots, the agent is asked to stop or wait.


For this example I came up with new reward signals. Before, the reward was based on the total outcome, i.e. the score achieved after stopping all reels.
Now each reel provides a reward or punishment. The first reel rewards the agent based on the item (e.g. a Seven is worth 1 and a Cherry is worth 0.01). The second reel is about whether the agent managed to score a match or not: scoring a match grants a reward of 1, and a miss punishes the agent with -0.5. The third reel behaves the same as the second concerning the reward.
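A sketch of that reward scheme in Python. Only the Seven (1) and Cherry (0.01) values are stated above; the remaining item values and the exact definition of a "match" are placeholders:

```python
# Sketch of the per-reel reward signal. Only the Seven (1.0) and
# Cherry (0.01) values are fixed; the remaining item values and the
# "match" rule (same item as the previously stopped reel) are
# illustrative placeholders.

ITEM_VALUE = {"Seven": 1.0, "Bar": 0.5, "Bell": 0.25,
              "Peach": 0.1, "Lemon": 0.05, "Cherry": 0.01}

def reel_reward(reel_index, item, previous_item=None):
    if reel_index == 0:
        # first reel: reward scales with the item's worth
        return ITEM_VALUE.get(item, 0.0)
    # second and third reel: reward a match, punish a miss
    return 1.0 if item == previous_item else -0.5
```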


Edit:
One more piece of info: the slot machine has 6^9 (about 10 million) states.
« Last Edit: August 22, 2017, 02:00:05 pm by Marco »

*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #11 on: August 22, 2017, 02:07:45 pm »
Progress is still sparse. I still haven't got a good result for the slot machine, so I switched to the Apples&Poison Demo. Well, this revealed a huge performance problem, because ConvNetSharp seems to be single-threaded (there are no imports for using threads). The old port of ConvNetJS, which comes with the Apples&Poison Demo, runs much faster, but is single-threaded as well.

As ConvNetSharp features CUDA, I wanted to work around this problem for now by letting the GPU do the job, but from that point on I've been pushed from one exception to the next. The first one was about setting the project to 64-bit only, and now I'm stuck on a CUDA exception concerning memory allocation.

ConvNetSharp does not come with any documentation or comments in the code. What do you think? Should I stick with ConvNetSharp? The alternative would be to build an interface to Python, or to implement the neural networks myself. Of course my own implementation would take much more effort, but it has educational advantages. And I'd approach it using compute shaders in Unity, which wouldn't limit usage to Nvidia GPUs.

*

Zero

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #12 on: August 22, 2017, 02:36:18 pm »
Hi,
You did the pros & cons of implementing a NN yourself, which I understand. But what are the pros of sticking with ConvNetSharp? (I mean, since it doesn't work nicely.)

*

Marco

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #13 on: August 22, 2017, 02:57:16 pm »
« Last Edit: August 23, 2017, 08:37:43 pm by Marco »

*

Zero

Re: Ideas/opinions for troubleshooting exploding output values (DQN)
« Reply #14 on: August 23, 2017, 08:27:28 pm »
The "GPU exceptions" cons is shadowing two pros out of four, isn't it?

Also, don't you think the educational benefits of DIY weigh heavily in the balance?

 

