Recent Posts

Pages: [1] 2 3 ... 10
1
General AI Discussion / Re: Eva Progress, from AGI to ASI
« Last post by AgentSmith on Today at 03:03:43 pm »
@unreality

Is your AI able to improve itself on the basis of past experiences? If so, what sort of optimization techniques/heuristics are you using for your AI? Are they based on evolutionary algorithms? Could you roughly describe how the self-improvement takes place?
2
General Chatbots and Software / Re: KorrBot
« Last post by LOCKSUIT on Today at 12:49:56 pm »
In the first paragraph I say "Lastly vision" at the end. WHAT lol, what is that for? Never mind for now.

Also a typo: "dining table *of". I meant "or". It's fixed now, but I'm sure you understood it correctly either way.
3
Robotics News / Building AI systems that make fair decisions
« Last post by Tyler on Today at 12:00:14 pm »
Building AI systems that make fair decisions
24 April 2018, 9:30 pm

A growing body of research has demonstrated that algorithms and other types of software can be discriminatory, yet the vague nature of these tools makes it difficult to implement specific regulations. Determining the existing legal, ethical and philosophical implications of these powerful decision-making aides, while still obtaining answers and information, is a complex challenge.

Harini Suresh, a PhD student at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), is investigating this multilayered puzzle: how to create fair and accurate machine learning algorithms that let users obtain the data they need. Suresh studies the societal implications of automated systems in MIT Professor John Guttag’s Data-Driven Inference Group, which uses machine learning and computer vision to improve outcomes in medicine, finance, and sports. Here, she discusses her research motivations, how a food allergy led her to MIT, and teaching students about deep learning.

Q: What led you to MIT?

A: When I was in eighth grade, my mom developed an allergy to spicy food, which, coming from India, was truly bewildering to me. I wanted to discover the underlying reason. Luckily, I grew up next to Purdue University in Indiana, and I met with a professor there who eventually let me test my allergy-related hypotheses. I was fascinated with being able to ask and answer my own questions, and continued to explore this realm throughout high school.

When I came to MIT as an undergraduate, I intended to focus solely on biology, until I took my first computer science class. I learned how computational tools could profoundly affect biology and medicine, since humans can’t process massive amounts of data in the way that machines can.

Towards the end of my undergrad, I started doing research with [professor of computer science and engineering] Peter Szolovits, who focuses on utilizing big medical data and machine learning to come up with new insights. I stayed to get my master’s degree in computer science, and now I’m in my first year as a PhD student studying personalized medicine and societal implications of machine learning.

Q: What are you currently working on?

A: I’m studying how to make machine learning algorithms more understandable and easier to use responsibly. In machine learning, we typically use historical data and train a model to detect patterns in the data and make new predictions.

If the data we use is biased in a particular way, such as “women tend to receive less pain treatment”, then the model will learn that. Even if the data isn’t biased, if we just have way less data on a certain group, predictions for that group will be worse. If that model is then integrated into a hospital (or any other real-world system), it’s not going to perform equally across all groups of people, which is problematic.
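The underrepresentation problem described above can be illustrated with a minimal, deliberately simplified sketch (this is not Suresh's actual work, just a hypothetical toy model): a classifier that picks a single decision threshold to maximize accuracy on pooled training data will tend to fit the majority group's decision boundary, leaving the underrepresented group with worse predictions.

```python
# Toy illustration: a one-threshold classifier trained on pooled data
# dominated by one group performs worse on the minority group.

def make_group(n, boundary, lo=0.0, hi=10.0):
    """Evenly spaced points, labelled by the group's true decision boundary."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return [(x, x >= boundary) for x in xs]

majority = make_group(90, boundary=5.0)   # 90 samples, true boundary at 5
minority = make_group(10, boundary=7.0)   # 10 samples, true boundary at 7
train = majority + minority

def accuracy(threshold, data):
    return sum((x >= threshold) == y for x, y in data) / len(data)

# Pick the threshold that maximizes accuracy on the pooled training data.
candidates = [x for x, _ in train]
best = max(candidates, key=lambda t: accuracy(t, train))

print(f"learned threshold: {best:.2f}")            # lands near 5, not 7
print(f"majority accuracy: {accuracy(best, majority):.2f}")
print(f"minority accuracy: {accuracy(best, minority):.2f}")
```

Because the majority group contributes nine times as many training points, the learned threshold tracks its boundary (5) rather than the minority's (7), so overall accuracy looks high while the minority group is systematically misclassified.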

I’m working on creating algorithms that utilize data effectively but fairly. This involves both detecting bias or underrepresentation in the data as well as figuring out how to mitigate it at different points in the machine learning pipeline. I’ve also worked on using predictive models to improve patient care.

Q: What effect do you think your area of work will have in the next decade?

A: Machine learning is everywhere. Companies are going to use these algorithms and integrate them into their products, whether they’re fair or not. We need to make it easier for people to use these tools responsibly so that our predictions on data are made in a way that we as a society are okay with.

Q: What is your favorite thing about doing research at CSAIL?

A: When I ask for help, whether it's related to a technical detail, a high-level problem, or general life advice, people are genuinely willing to lend support, discuss problems, and find solutions, even if it takes a long time.

Q: What is the biggest challenge you face in your work?

A: When we think about machine learning problems with real-world applications, and the goal of eventually getting our work into the hands of real people, there are a lot of legal, ethical, and philosophical considerations that arise. There’s variability in the definition of “fair,” and it’s important not to reduce our research down to a simple equation, because it’s much more than that. It's definitely challenging to balance thinking about how my work fits in with these broader frameworks while also carving out a doable computer science problem to work on.

Q: What is something most people would be surprised to learn about you?

A: I love creative writing, and for most of my life before I came to MIT I thought I would be an author. I really enjoy art and creativity. Along those lines, I painted a full-wall mural in my room a while ago, I frequently spend hours at MIT's pottery studio, and I love making up recipes and taking photos.

Q: If you could tell your younger self one thing what would it be?

A: If you spend time on something, and it doesn't directly contribute to a paper or thesis, don't think of it as a waste of time. Accept the things that don't work out as a part of the learning process and be honest about when to move on to something new without feeling guilty.

If you’d rather be doing something else, the sooner you go and do it, the better. Things that seem like huge deals at the time, like taking an extra class or graduating slightly later, aren't actually an issue when the time rolls around, and a lot of people do them. Honestly, my future self could probably use this advice too!

Q: What else have you been involved with at MIT?

A: During Independent Activities Period 2017, I organized a class called Intro to Deep Learning. I think machine learning gets a reputation for being a very difficult, expert-only endeavor, which scares people away and creates a pretty homogeneous group of “experts.”

I wanted to create a low-commitment introduction to an area of machine learning that might help ease the initial barrier to entry. My co-organizer and I tried to keep our goals of accessibility and inclusivity at the forefront when making decisions about the course. Communicating complex ideas in an accessible way was a challenge, but a very fun one.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



4
AI Programming / Re: A Tensorflow manual
« Last post by Kaeldric on Today at 10:30:59 am »
I had a lot of problems finding good books too. Maybe the online tutorials and guides are good enough.

I will watch the videos you suggested for sure.
Thank you very much.

5
General AI Discussion / Re: Eva Progress, from AGI to ASI
« Last post by Don Patrick on Today at 09:26:08 am »
Makes me wonder about all the things I could claim about my program. At the end of the day you can call it whatever you want, but the only thing that matters is what it actually does.
We should have a forum bot that can function as a sounding board or something, because there's no use in posting projects like this without any concrete information. All you'll get is scepticism, and for a project that's seen only a few months of development, I'd say that's fair.
6
General Chat / Re: Stand out
« Last post by Art on Today at 02:58:58 am »
Yes, I think it was in the creator/artist's mind to go a step beyond the mask used in the 2005 movie V for Vendetta, in which hundreds, if not more, Guy Fawkes masks were used as a protest.
https://en.wikipedia.org/wiki/Guy_Fawkes_mask

Just a thought.
7
General AI Discussion / Re: Eva Progress, from AGI to ASI
« Last post by unreality on Today at 02:33:48 am »
Most humans I know these days are nearly 100% database driven. Not much life there.
8
General Chat / Re: Westworld season 2 in 2 days
« Last post by unreality on Today at 02:31:57 am »
They're playing Blue Bloods on the Syfy channel? LOL

I'm looking forward to the next season of Dark Matter. Hopefully there'll be one. Cool characters. Android is cool, but a bit annoying at times.
9
General AI Discussion / Re: Eva Progress, from AGI to ASI
« Last post by Freddy on Today at 02:25:36 am »
Actually I need to make a clearer distinction between alive and human level.

 :juggle-new:
10
General AI Discussion / Re: Eva Progress, from AGI to ASI
« Last post by Freddy on Today at 02:20:34 am »
She has put thought into how she could obtain more abilities.
She has put thought into how she compares to other lifeforms.

Code: [Select]
for i in range(1,10):
    print("I'm thinking about how to obtain more abilities.")
print("I got nothing.")
for i in range(1,10):
    print("I'm thinking about other lifeforms.")
print("I got nothing.")

Consuming electricity doesn't count as thinking.

This is the trouble. Things can be dressed up to look human. Even my Jess will say how she feels according to what has already been said; things she chats about during the day affect her artificial mood. That doesn't make her alive though, any more than a calculator is a mathematician or a book about physics is Einstein.

Hmm. Well thanks for answering my questions Unreality, it will be interesting to see how your project evolves.