Recent Posts

1
General Chat / Re: NETFLIX in the UK
« Last post by OllieGee on Today at 06:04:34 PM »
I think Netflix here in the UK is pretty good value, especially sharing my sister-in-law's subscription. :-X
2
General Chat / Re: Has anyone read Wolfram's "A New Kind of Science"?
« Last post by Zero on Today at 04:56:13 PM »
I was exhausted, and I think I needed to think all of this over properly. I can try a reply now.

Quote
I, too, am aware that life is short, but somehow I draw the opposite conclusion.  I don't think I have time to dissipate myself by sampling everything.  I need to focus.  Because, whichever path I choose, it's likely to demand a long input of hard work.

I understand that. I noticed you're creating content while I just create engines, and that's what's fascinating about your work on Acuitas. I don't know if it's related, though. But I often have the sensation that the thing I'm working on won't "explode", and is therefore useless. I usually know it from the beginning, but I still feel interested in the thing. Then at some point, the interest vanishes. Recently something different happened, with a project named "Dejavu". I thought I would be able to keep this one, that it would be the good one. But no. There was something wrong.

Quote
1. Infurl beat me to the first reason.  Maintaining a functional open-source project is more work than writing code for yourself ... especially when the project is nowhere near finished, and the code base is in a state of constant churn.  Trying to, for instance, preserve backward-compatibility between different versions of all the modules would not be fun right now.  And if anybody saw how messy and incomplete the code actually is, I'd be embarrassed.

A day may come when everything is tidied up and stable and has a documentation package.  But it is not this day.

2. Acuitas has never really been intended as a tool.  Not that it would be impossible for the software to do practical work, but that's not what it's primarily for.  If I were merely inventing a new type of wrench, then I suppose I wouldn't mind stamping out hundreds of copies and handing them around.  But Acuitas has aspects of ... an art piece, maybe.  Releasing the code under present circumstances would be kind of like releasing the first half of my unpublished novel, and inviting other people to write the ending.  No thanks.

Some projects just aren't meant to be collaborative, and this is one of them.  I prefer to keep creative control.

3. IF my work ever does manage to grow into something innovative and great, then I would be concerned about the possibility of its being misused (or maybe even mistreated).  I love humanity, but I don't trust it!  So in that case, I'd want to be cautious about who got to see or expand upon the code.  I'd pick people whose philosophical/moral alignment and personal character I admired, not just people of adequate skill.

All good, understandable reasons. Thanks for sharing. It was a real mystery to me.

Quote
Zero, I see your work as mildly interesting: At least you're trying something productive. I saw your Levenshtein distance algorithm the other day and thought it was an interesting idea to apply it to word sequences instead of letters, but would still have very limited uses (Due to levenshtein algorithms being what they are). I almost commented on it but did not, because 1: I prefer to make things rather than talk about making things, and 2: Every word I type exacerbates the RSI in my fingers, I have to pick my battles.

I didn't know the meaning of the English word "mildly", and I'm not sure there's a direct equivalent in French. But I think I got it. I'm not a genius, but every now and then, I can have a good idea. More importantly: I do things. Fine!

It's OK not to comment on everything, especially when it's physically painful. No problem.
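As a side note on the word-level Levenshtein idea mentioned in the quote above, here is a minimal, hypothetical sketch of what an edit distance over word sequences (instead of characters) might look like. It is only an illustration of the concept, not Zero's actual code, and all names in it are made up for the sketch.

Module WordLevenshteinSketch
    ' Edit distance over word sequences instead of characters:
    ' d(i, j) = cost of turning the first i words of a into the first j words of b.
    Function WordDistance(ByVal a As String(), ByVal b As String()) As Integer
        Dim d(a.Length, b.Length) As Integer
        For i As Integer = 0 To a.Length : d(i, 0) = i : Next
        For j As Integer = 0 To b.Length : d(0, j) = j : Next
        For i As Integer = 1 To a.Length
            For j As Integer = 1 To b.Length
                Dim cost As Integer = If(a(i - 1) = b(j - 1), 0, 1)
                d(i, j) = Math.Min(Math.Min(d(i - 1, j) + 1, d(i, j - 1) + 1), d(i - 1, j - 1) + cost)
            Next
        Next
        Return d(a.Length, b.Length)
    End Function

    Sub Main()
        Dim x As String() = "the cat sat on the mat".Split(" "c)
        Dim y As String() = "the cat lay on a mat".Split(" "c)
        Console.WriteLine(WordDistance(x, y)) ' 2: two word substitutions
    End Sub
End Module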

Quote
It's true that you don't seem to break out of an endless cycle of experiments, but at every experiment you do gain something. I'm also an artist and know a lot of creatives. Whenever I get close to finishing a drawing, I stop, the challenge is over, and it just sits there for 5 years until I decide to just get it over with and draw the last three lines. Many creatives get new ideas faster than they can finish the old ones, it's a common problem, but they get better while doing it nonetheless.

Every piece of code you type becomes another tool that might solve a later problem. I once wrote a stupid piece of code to detect insults from Loebner Prize judges, it was a waste of time in my eyes. But now an expansion of that code's principles runs my AI's ethical subroutine. It's still too crude, the kind of crude that might make you stop and try something else, but you could also think of it as a placeholder: I know it's not good enough, and I have an idea for a better system to replace it with, but until then it does a reasonable job, and provides practical experiences that will help design that better system later.

It doesn't have to be perfect from the get-go, you can always change parts that don't work or redo the whole system if you want. I've overhauled my AI's knowledge structure five times. Every time took me two months, but I would not have figured it out without the insights I gained from using the earlier versions.

You really get the gist of it. You've given an accurate description of what I experience. There's one difference, though, with, say, painting. It's like I know I have my friend Mona Lisa who lives next door, and it would be great to paint a portrait of her, but for some reason, I keep painting dumb apples on a table. Don't know why. Maybe I'm not ready yet and, as you said, I need to gain insights from painting these apples again and again. But it also feels like not daring. Don't know.

Quote
As to the question of sharing, the effort doesn't gain me much. It could take months to explain everything I've programmed, longer if people are going to ask questions, and I'd rather use that time to work. Secondly, the field of AI attracts a lot of crazy people, and I've had my fill of them when I shared my progress in the past. I don't need that kind of attention.

Thanks for sharing, I understand your reasons. I'm still afraid of the prospect of a huge quantity of work being lost if you never take the time to share it one day, at least with a few selected people.
About crazy people, well, what can I say :)

Quote
My reasons are all of the above.

With regards to your input into the site and multiple personal projects… You are looking for something, trying to work something out, and you’re not sure what it is.

Any ‘thought’ is composed of sub-fragments/facets; a base set of general bits/tools is recombined to create other thoughts. Each project you start will have bits in common with previous projects but be combined differently. You stop the project when you have satisfied your curiosity, when you have gained insight. You then use what you have learned from all your experience so far to think through the next iteration.

You might not be consciously aware of the process, or you might be misinterpreting it, but your subconscious knows exactly what it's doing… it's working towards the goal… keep it up.

Thank you so much for this very optimistic and plausible way of seeing the situation. It gives me strength. It's true that these projects always rearrange some previously explored bits while adding new stuff. I see patterns coming back again and again; I know they're important, but I still don't know exactly how they fit into the global solution.

I will now try to trust my instinct and be true to what I am. I'll keep it up.

 O0
3
General Chat / Re: would emp gunning a robot make you feel sad?
« Last post by LOCKSUIT on Today at 03:48:54 PM »
:) The atoms and particles of rocks all make decisions. Some calcify or age, or change colors; it is a very active world inside a rock, you just don't see it. Heat is constantly moving particles, allowing decisions to be made.



Oh, here is the autonomy! Look, it explains they have AI!
https://medium.com/udacity/bringing-autonomy-to-combat-robotics-78db5173caa3

And these, wow!!
https://www.youtube.com/watch?v=QCqxOzKNFks&feature=emb_logo

This, this, THIS!!! This is an extremely funny video and quite intelligent.
https://www.youtube.com/watch?v=yjP6C26-D3M

More on them:
https://spectrum.ieee.org/automaton/robotics/diy/roboone-hosts-first-ever-autonomous-biped-fighting-tournament
https://interestingengineering.com/robo-one-battles-fully-autonomous-robots-fighting
https://www.youtube.com/watch?v=TKE-7c1Cjaw
https://www.youtube.com/watch?v=H8R4IMn1HLg
https://www.youtube.com/watch?v=TCR1VsUEn-c

What about virtual hamsters!? Is it wrong?
https://www.youtube.com/watch?v=lydz-1o2EdY
4
General AI Discussion / Re: outline from gadient mask
« Last post by yotamarker on Today at 03:10:21 PM »
OpenCV has no documentation that I can understand or use.
YT video tutorials: I tried them and they don't work; they're all made by guys from Pakistan for some reason.
I also tried the OpenCV Android book from Amazon and it doesn't work either.

In other words, I am about to undergo the torment of translating my A.Eye from VB to Java.

If you have a working eye class that takes a bitmap and outputs strings of words representing the recognized objects and characters in it, please show me.

At any rate, here are the methods and vars so far (a speculative sketch of two of the helpers follows the list):

Vars:

Public outlinePixel As Integer = 30       ' limit on how dark an outline pixel can be
Private clusterPercent As Double = 0.05   ' fatness of the outline of shapes
Private minObjectSize As Integer = 40     ' how small a detected object can be
Private x As Integer                      ' coordinates of the grid area to work on
Private y As Integer
Private eyeObj As EyeClasifire = New EyeClasifire()

' eye grid
Private shiberArray() As Boolean = New Boolean() {False, False, False, False, False, False, False, False, False}
' part of the eye grid to process
Private shiberCounter As Integer = 0

Methods:

Public Sub setXY(ByVal x1 As Integer, ByVal y1 As Integer)
Private Sub setXY(ByVal n As Integer)
Public Sub shiberVision(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle), ByRef ec1 As EyeClasifire)
            ' detect image objects in active eye grid area
Public Sub setMinObjectSize(ByVal newVal As Integer)
Public Shared Function overlappingRectangles(ByVal R1 As Rectangle, ByVal R2 As Rectangle) As Boolean

Public Sub setClusterPercent(ByVal newVal As Double)
Public Shared Function is_pixel_dark_at(ByVal xPos As Integer, ByVal Ypos As Integer, ByVal image As Bitmap, ByVal DarkPixel As Integer) As Boolean
Public Shared Function isOutLine(ByVal xPos As Integer, ByVal Ypos As Integer, ByVal image As Bitmap, ByVal bias As Integer) As Boolean
Public Shared Function mark_dark_pixel(ByVal x As Integer, ByVal y As Integer, ByVal bmp1 As Bitmap, ByVal marker As Byte)
Public Shared Function mark_dark_pixel_white(ByVal x As Integer, ByVal y As Integer, ByVal bmp1 As Bitmap, ByVal marker As Byte)
Public Shared Function mark_dark_pixel_black(ByVal x As Integer, ByVal y As Integer, ByVal bmp1 As Bitmap, ByVal marker As Byte)
Public Shared Function getPixelColor(ByVal r As Integer, ByVal g As Integer, ByVal b As Integer) As Char
Public Shared Function mark_dark_pixelRED(ByVal x As Integer, ByVal y As Integer, ByVal bmp1 As Bitmap, ByVal marker As Byte)
Public Shared Function mark_dark_pixelBlue(ByVal x As Integer, ByVal y As Integer, ByVal bmp1 As Bitmap, ByVal marker As Byte)
Public Function shiberActivator(ByRef bmp1 As Bitmap) As String
            ' activate grid areas where motion was detected and detect motion
Public Shared Function DirectionGetterX(ByRef bmp1 As Bitmap) As String
Public Shared Function DirectionGetterY(ByRef bmp1 As Bitmap) As String
Public Shared Function graphicContour(ByVal bmp As Bitmap, ByVal xmin As Integer, ByVal xmax As Integer, ByVal ymin As Integer, ByVal ymax As Integer) As Bitmap
Public Shared Function graphicContourBlack(ByVal bmp As Bitmap, ByVal xmin As Integer, ByVal xmax As Integer, ByVal ymin As Integer, ByVal ymax As Integer) As Bitmap
Public Shared Function graphicContourBlue(ByVal bmp As Bitmap, ByVal xmin As Integer, ByVal xmax As Integer, ByVal ymin As Integer, ByVal ymax As Integer) As Bitmap
Public Shared Function graphicContourRed(ByVal bmp As Bitmap, ByVal xmin As Integer, ByVal xmax As Integer, ByVal ymin As Integer, ByVal ymax As Integer) As Bitmap
Deprecated:
'Private Sub tracer(ByVal bmp1 As Bitmap, ByVal w1 As Integer, ByVal h1 As Integer, ByVal x As Integer, ByVal y As Integer)
Public Shared Sub maxer(ByVal a As Integer, ByRef b As Integer)
Public Shared Sub miner(ByVal a As Integer, ByRef b As Integer)
Public Sub PopulateEyeData(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle))
Private Function markObjects(ByVal bmp As Bitmap, ByVal objectList As List(Of Rectangle)) As Bitmap
Private Function markPixelMatrix(ByVal bmp As Bitmap, ByVal objectList As Object) As Bitmap
Public Sub shiber(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle), ByRef ec1 As EyeClasifire) 'lv2,3 analysis for biggest object only
Public Sub PopulateEyeData2(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle)) 'lv2,3 analysis for biggest object only
Public Shared Sub simpleImagesDetecter(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle))
Public Shared Sub ImagesDetecter(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle))
Public Shared Sub ImagesDetecterLv2(ByVal bmp As Bitmap, ByRef ImageList As List(Of Rectangle), ByRef rec1 As Rectangle)
Public Shared Function DirectionGetter(ByRef bmp1 As Bitmap) As String
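
To make the listing a bit more concrete, here is a minimal sketch of what two of the helpers above might look like. The bodies are only guesses from the signatures and comments, not yotamarker's actual implementation; they assume the System.Drawing Rectangle, Bitmap and Color types, and the class name is made up for the sketch.

Imports System.Drawing

Public Class EyeHelpersSketch
    ' Guess: two rectangles "overlap" when their areas intersect.
    Public Shared Function overlappingRectangles(ByVal R1 As Rectangle, ByVal R2 As Rectangle) As Boolean
        Return R1.IntersectsWith(R2)
    End Function

    ' Guess: a pixel counts as "dark" when all three color channels fall below the DarkPixel threshold.
    Public Shared Function is_pixel_dark_at(ByVal xPos As Integer, ByVal Ypos As Integer, ByVal image As Bitmap, ByVal DarkPixel As Integer) As Boolean
        Dim c As Color = image.GetPixel(xPos, Ypos)
        Return c.R < DarkPixel AndAlso c.G < DarkPixel AndAlso c.B < DarkPixel
    End Function
End Class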
5
General Chat / Re: would emp gunning a robot make you feel sad?
« Last post by Art on Today at 01:44:25 PM »
Uhh... Did you read what you just typed? ROCKS? Rocks make decisions? In what world? How?
6
Researching from home: Science stays social, even at a distance
7 April 2020, 7:00 pm

With all but a skeleton crew staying home from each lab to minimize the spread of Covid-19, scores of Picower Institute researchers are immersing themselves in the considerable amount of scientific work that can be done away from the bench. With piles of data to analyze; plenty of manuscripts to write; new skills to acquire; and fresh ideas to conceive, share, and refine for the future, neuroscientists have full plates, even when they are away from their, well, plates. They are proving that science can remain social, even if socially distant.

Ever since the mandatory ramp-down of on-campus research took hold March 20, for example, teams of researchers in the lab of Troy Littleton, the Menicon Professor of Neuroscience, have sharpened their focus on two data-analysis projects that are every bit as essential to their science as acquiring the data in the lab in the first place. Research scientist Yulia Akbergenova and graduate student Karen Cunningham, for example, are poring over a huge amount of imaging data showing how the strength of connections between neurons, or synapses, matures and how that depends on the molecular components at the site. Another team, composed of Picower postdoc Suresh Jetti and graduate students Andres Crane and Nicole Aponte-Santiago, is analyzing another large dataset, this time of gene transcription, to learn what distinguishes two subclasses of motor neurons that form synapses of characteristically different strength.

Work is similarly continuing among researchers in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience. Since heading home, Senior Research Support Associate Kendyll Burnell has been looking at microscope images tracking how inhibitory interneurons innervate the visual cortex of mice throughout their development. By studying the maturation of inhibition, the lab hopes to improve understanding of the role of inhibitory circuitry in the experience-dependent changes, or plasticity, and development of the visual cortex, she says. As she’s worked, her poodle Soma (named for the central body structure of a neuron) has been by her side.

Despite extra time with comforts of home, though, it’s clear that nobody wanted this current mode of socially distant science. For every lab, it’s tremendously disruptive and costly. But labs are finding many ways to make progress nonetheless.

“Although we are certainly hurting because our lab work is at a standstill, the Miller lab is fortunate to have a large library of multiple-electrode neurophysiological data,” says Picower Professor Earl Miller. “The datasets are very rich. As our hypotheses and analytical tools develop, we can keep going back to old data to ask new questions. We are taking advantage of the wet lab downtime to analyze data and write papers. We have three under review and are writing at least three more right now.”

Miller is inviting new collaborations regardless of the physical impediment of social distancing. A recent lab meeting held via the videoconferencing app Zoom included MIT Department of Brain and Cognitive Sciences Associate Professor Ila Fiete and her graduate student, Mikail Khona. The Miller lab has begun studying how neural rhythms move around the cortex and what that means for brain function. Khona presented models of how timing relationships affect those waves. While this kind of an interaction between labs of the Picower Institute and the McGovern Institute for Brain Research would normally have taken place in person in MIT’s Building 46, neither lab let the pandemic get in the way.

Similarly, the lab of Li-Huei Tsai, Picower Professor and director of the Picower Institute, has teamed up with that of Manolis Kellis, professor in the MIT Computer Science and Artificial Intelligence Laboratory. They’re forming several small squads of experimenters and computational experts to launch analyses of gene expression and other data to illuminate the fate of individual cell types like interneurons or microglia in the context of the Alzheimer’s disease-afflicted brain. Other teams are focusing on analyses of questions such as how pathology varies in brain samples carrying different degrees of genetic risk factors. These analyses will prove useful for stages all along the scientific process, Tsai says, from forming new hypotheses to wrapping up papers that are well underway.

Remote collaboration and communication are proving crucial to researchers in other ways, too, proving that online interactions, though distant, can be quite personally fulfilling.

Nicholas DiNapoli, a research engineer in the lab of Associate Professor Kwanghun Chung, is making the best of time away from the bench by learning about the lab’s computational pipeline for processing the enormous amounts of imaging data it generates. He’s also taking advantage of a new program within the lab in which Senior Computer Scientist Lee Kamentsky is teaching Python computer programming principles to anyone in the lab who wants to learn. The training occurs via Zoom two days a week.

As part of a crowded calendar of Zoom meetings, or “Zeetings” as the lab has begun to call them, Newton Professor Mriganka Sur says he makes sure to have one-to-one meetings with everyone in the lab. The team also has organized into small subgroups around different themes of the lab’s research.

But the lab has also continued to maintain its cohesion by banding together informally, creating novel work and social experiences.

Graduate student Ning Leow, for example, used Zoom to create a co-working session in which participants kept a video connection open for hours at a time, just to be in each other’s virtual presence while they worked. Among a group of Sur lab friends, she read a paper related to her thesis and did a substantial amount of data analysis. She also advised a colleague on an analysis technique via the connection.

“I’ve got to say that it worked out really well for me personally because I managed to get whatever I wanted to complete on my list done,” she says, “and there was also a sense of healthy accountability along with the sense of community.”

Whether in person or via an officially imposed distance, science is social. In that spirit, graduate student K. Guadalupe "Lupe" Cruz organized a collaborative art event via Zoom for female scientists in brain and cognitive sciences at MIT. She took a photo of Rosalind Franklin, the scientist whose work was essential for resolving the structure of DNA, and divided it into nine squares to distribute to the event attendees. Without knowing the full picture, everyone drew just their section, talking all the while about how the strange circumstances of Covid-19 have changed their lives. At the end, they stitched their squares together to reconstruct the image.

Examples abound of how Picower scientists, though mostly separate and apart, are still coming together to advance their research and to maintain the fabric of their shared experiences.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.
7
Bot Conversations / Re: check out this anime bot design
« Last post by yotamarker on Today at 09:58:44 AM »
maybe I should buy skate blades to figure it out
8
Bot Conversations / Re: check out this anime bot design
« Last post by yotamarker on Today at 09:55:47 AM »
It relies on gyro data for balance.
It is simply a motorbike with 2 arms.
It has 2 modes: bike and skater.

I don't know if the skater mode requires Ackermann steering tho  >:(
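
For reference, Ackermann steering only matters when two steered wheels share an axle and need slightly different angles in a turn, so a pure single-track (motorbike) layout wouldn't need it. Below is a rough, hypothetical sketch of the geometry, assuming a wheelbase, a track width, and a turn radius measured at the rear axle; it is not code from this bot.

' Hypothetical Ackermann geometry sketch: steering angles (radians) for the inner and outer front wheels.
Public Shared Sub ackermannAngles(ByVal wheelbase As Double, ByVal track As Double, ByVal turnRadius As Double, ByRef innerAngle As Double, ByRef outerAngle As Double)
    ' the inner wheel follows a tighter circle, so it needs a larger steering angle
    innerAngle = Math.Atan(wheelbase / (turnRadius - track / 2.0))
    outerAngle = Math.Atan(wheelbase / (turnRadius + track / 2.0))
End Sub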
9
General Chat / Re: would emp gunning a robot make you feel sad?
« Last post by LOCKSUIT on Today at 04:08:28 AM »
Art, that's getting closer to "throwing 2 hamsters in a ring and seeing whose is smarter".

See, hamsters are just more advanced machines. Primitive AI has short/small direct context; hamsters have big, deep context. Everything makes decisions based on context, even rocks.
10
General Chat / Re: would emp gunning a robot make you feel sad?
« Last post by Art on Today at 03:31:43 AM »
And those Battle Bots are totally controlled by HUMANS! It's a scaled-down version of Demolition Derby, where drivers pit their cars and skill against each other in a contest to destroy each other's vehicles first! Quite a show for the amusement of the modern gladiatorial crowd of citizens!

What would be really cool and way more interesting would be for the Battle Bots to be programmed to "see" and react to where the "enemy" bots were inside the arena. Let them use onboard AI to figure out a strategy for winning! No human intervention except for an emergency or to shut it down at the end of a match as needed.

