Ai Dreams Forum
Artificial Intelligence => General AI Discussion => Topic started by: keghn on January 22, 2018, 05:38:13 pm
-
Researchers in Japan are showing way to decode thoughts:
https://techxplore.com/news/2018-01-japan-decode-thoughts.html
-
Just don't look in my head.
You won't want to see what's in my head.
-
I am amazed by and grateful for yet another advancement.
We could use this for coma patients, court testimony, studying, movie directing and creation, saving and sharing our thoughts, modelling a person for immortality, communication, controlling the internet and programs, adding metadata, playing video games, etc.
-
But what about this one?
https://medicalxpress.com/news/2013-04-scientists-peek.html#nRlv
Are both really just retrieving images from a database, or is the system in keghn's article actually re-creating what it's scanning?
Also, how do we know they aren't just using backpropagation to iteratively make the output look closer to a known target? How do we know they tried it on unseen, un-error-corrected images?
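To make that worry concrete: here is a toy sketch (entirely hypothetical, not the method from either article) of how iterative gradient descent can drive a reconstruction to match a *known* measurement arbitrarily well, without that saying anything about unseen images. The linear "encoding model" `W` and all names here are made up for illustration.

```python
import numpy as np

# Toy setup: a fixed linear map from image pixels to a simulated brain signal.
rng = np.random.default_rng(0)
n_pixels, n_voxels = 16, 8
W = rng.normal(size=(n_voxels, n_pixels))   # hypothetical encoding model
true_image = rng.normal(size=n_pixels)
brain_signal = W @ true_image               # simulated measurement

# "Reconstruct" by iteratively nudging a candidate image so that
# W @ candidate matches the measured signal (gradient of 0.5*||residual||^2).
candidate = np.zeros(n_pixels)
lr = 0.01
for _ in range(2000):
    residual = W @ candidate - brain_signal
    candidate -= lr * (W.T @ residual)

# The fit to the *measured* signal becomes tiny after enough iterations...
print(np.linalg.norm(W @ candidate - brain_signal))
# ...but the problem is underdetermined (8 voxels, 16 pixels), so the
# candidate need not resemble the true image at all.
print(np.linalg.norm(candidate - true_image))
```

The point of the sketch: the first number shrinks just because we optimized against it, so a convincing-looking output on images the system was tuned against is weak evidence. Only a held-out test on unseen images would show real decoding.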
Also, speech perception to text:
https://medicalxpress.com/news/2016-10-brain-computer-interface-thoughts-text.html#nRlv
-
?