Recent Posts

Pages: 1 2 [3] 4 5 ... 10
Gaming / Looking for the offspring of an old friend
« Last post by Bakkerbaard on January 16, 2019, 11:27:10 pm »
Bit of context first.
When I was a lot younger I got my hands on a floppy disk of Little Computer People on the Commodore 64. I was old enough to know better, but young enough to keep fooling myself into believing that my little computer person was real and so Milton, as he was named, moved into my C64 and we spent unreasonable amounts of time together. We became friends indeed.
I did not have a manual or instructions on how to deal with or care for Milton, as software piracy is older than my ability to operate a computer, but we made do.
So, this whole fishbowl idea of interacting with a virtual person has been very interesting to me since that day. I could go back to my mom's place and fire up the old C64 that is still sitting tucked away in the same spot in the attic (back then, of course, "those computers are just a fad, don't spend so much time on them") and reacquaint myself with Milton, but that would just be weird. I'm a grown man, and I don't believe I should be spending any more time in my mother's attic than it takes to sort stuff out after the funeral.
Besides, Milton must be as old, cranky and broken as I am by now.

The question.
Is there something out there that is like Little Computer People, but updated?
I should assume that the field of little people living in computers has come a long way by now and I would be very interested to see what Milton Jr. could do with the computing power we have today.
I expect that your first instinct would be to point me towards The Sims, but, while entertaining (up to a point), I can't really relate to those gibberish-spouting babies that need me to guide their every step. Even Milton Sr. had more autonomy than they do.
So your second instinct would be to tell me that Google is my friend, but believe me, it is not, and I believe that as people who understand AI, you know this. Google is, at best, that neighbour who lends you salt because it's white like the sugar you actually asked for.

The question (but condensed).
I am looking for a modern-day version of Little Computer People that isn't The Sims. Or, more broadly, an artificial entity I could interact with for entertainment purposes.

I'm also pretty clueless. The users who know what's up (as far as that's possible before the technology is created) are probably sleeping or busy being mad scientists in their basement lairs. The activity on this forum seems to align mostly with the UK time zone.

Anyways, welcome and hopefully you'll stick around... with the bubblegum :P
New Users Please Post Here / I came here to chew bubblegum and to ask questions.
« Last post by Bakkerbaard on January 16, 2019, 11:00:35 pm »
And I'm out of bubblegum, which is okay because I don't really care for it anyway.
I wasn't completely honest about the questions either. It's just one. Probably. And I'll post it in (what I think is) the appropriate forum in a minute.

So about me then. I have absolutely no skills that come in handy on a forum like this, so I'm basically here to leech off your knowledge. I'd have to look up Asimov's robot laws, even though I just read about them this afternoon on a plane, and I couldn't perform a Turing test to save my life.

You know what, besides being absolutely useless, I'm also generally friendly, so if you're curious about anything concerning me, just ask. Seems more efficient.
General AI Discussion / Electrical & Chemical Simulations
« Last post by Hopefully Something on January 16, 2019, 07:34:04 pm »
We can build a neural net (mechanical or simulated), thus replicating electrical impulses. Is it necessary to replicate chemical processes in the brain? For AI, ASI, maybe not. For AGI that seems necessary. Anyone know how?
General AI Discussion / Re: How to Convey Machine Learning Concepts
« Last post by ivan.moony on January 16, 2019, 05:55:51 pm »
Otherwise, how would you guys approach this?

Show final results to co-workers. If the results can convince you, the results can convince them too.
General AI Discussion / Re: How to Convey Machine Learning Concepts
« Last post by JohnnyWaffles on January 16, 2019, 01:41:07 pm »

So, thank you for your help. The neural network is a feed-forward binary classifier. It consists of 3 layers. The input layer has 3 input nodes, the hidden layer has two nodes, and the output layer has 3 nodes, with each output node putting out a 1 or 0. The activation function is the sigmoid.

My input data consists of vectors of length 3, like below:

I am using it for predicting document locations that could exist on a series of databases. Trouble is, I have to find certain documents in my job, and there is no rhyme or reason as to where they might exist, and it can take tedious hours to search ALL the databases. So I'm using existing data to train a neural network and predict where a document exists. Thereby decreasing the time it takes to find them. I've had some success so far. Still needs work though.

The output classification should correspond to the input, where each node outputs a single binary number, and all 3 of them together make a prediction as to where I will find it:

Database_One = 100
Database_Two = 010
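For anyone following along, here is a minimal sketch of the 3-2-3 sigmoid network described above. The weights here are random placeholders (a real run would train them on the document-location data, which isn't shown in the post), and the input vector is made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
# Placeholder weights; in practice these would be learned from training data.
W1 = rng.normal(size=(3, 2))   # input layer (3 nodes) -> hidden layer (2 nodes)
W2 = rng.normal(size=(2, 3))   # hidden layer (2 nodes) -> output layer (3 nodes)

def forward(x):
    hidden = sigmoid(x @ W1)            # hidden-layer activations
    output = sigmoid(hidden @ W2)       # raw sigmoid outputs in (0, 1)
    return (output >= 0.5).astype(int)  # threshold each output node to 0 or 1

x = np.array([0.2, 0.7, 0.1])  # one hypothetical length-3 input vector
prediction = forward(x)        # a length-3 binary code like the database labels above
```

With trained weights, `prediction` would match one of the database codes (e.g. `100` for Database_One).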

So, I don't know much about AGI, and I'd certainly prefer to have one of those on hand. Maybe I'm just ignorant (probably), but how could you make an AGI using simple logic? I've never attempted to make an AGI, so I've never thought about it before.
Robotics News / Democratizing data science
« Last post by Tyler on January 16, 2019, 12:01:08 pm »
Democratizing data science
15 January 2019, 2:21 pm

MIT researchers are hoping to advance the democratization of data science with a new tool for nonstatisticians that automatically generates models for analyzing raw data.

Democratizing data science is the notion that anyone, with little to no expertise, can do data science if provided ample data and user-friendly analytics tools. Supporting that idea, the new tool ingests datasets and generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data.

The tool currently lives on Jupyter Notebook, an open-source web framework that allows users to run programs interactively in their browsers. Users need only write a few lines of code to uncover insights into, for instance, financial trends, air travel, voting patterns, the spread of disease, and other trends.

In a paper presented at this week’s ACM SIGPLAN Symposium on Principles of Programming Languages, the researchers show their tool can accurately extract patterns and make predictions from real-world datasets, and even outperform manually constructed models in certain data-analytics tasks.

“The high-level goal is making data science accessible to people who are not experts in statistics,” says first author Feras Saad ’15, MEng ’16, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “People have a lot of datasets that are sitting around, and our goal is to build systems that let people automatically get models they can use to ask questions about that data.”

Ultimately, the tool addresses a bottleneck in the data science field, says co-author Vikash Mansinghka ’05, MEng ’09, PhD ’09, a researcher in the Department of Brain and Cognitive Sciences (BCS) who runs the Probabilistic Computing Project. “There is a widely recognized shortage of people who understand how to model data well,” he says. “This is a problem in governments, the nonprofit sector, and places where people can’t afford data scientists.”

The paper’s other co-authors are Marco Cusumano-Towner, an EECS PhD student; Ulrich Schaechtle, a BCS postdoc with the Probabilistic Computing Project; and Martin Rinard, an EECS professor and researcher in the Computer Science and Artificial Intelligence Laboratory.

Bayesian modeling

The work uses Bayesian modeling, a statistics method that continuously updates the probability of a variable as more information about that variable becomes available. For instance, statistician and writer Nate Silver uses Bayesian-based models for his popular website FiveThirtyEight. Leading up to a presidential election, the site’s models make an initial prediction that one of the candidates will win, based on various polls and other economic and demographic data. This prediction is the variable. On Election Day, the model uses that information, and weighs incoming votes and other data, to continuously update that probability of a candidate’s potential of winning.
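That continuous updating can be illustrated with a toy beta-binomial model (my own illustration, not from the article or the paper): a prior belief about a candidate's win rate is revised as each batch of hypothetical poll responses arrives, and each posterior becomes the prior for the next batch.

```python
# Beta-binomial updating: pseudo-counts accumulate as evidence arrives.
alpha, beta = 2.0, 2.0  # prior pseudo-counts (weakly centered on 0.5)

# Hypothetical (favorable, unfavorable) responses from three successive polls.
poll_batches = [(60, 40), (55, 45), (70, 30)]

for favorable, unfavorable in poll_batches:
    alpha += favorable            # evidence for the candidate
    beta += unfavorable           # evidence against the candidate
    estimate = alpha / (alpha + beta)  # posterior mean win probability
    print(f"updated estimate: {estimate:.3f}")
```

Each print shows the belief shifting toward the data while still reflecting everything seen so far, which is the core of the Bayesian workflow the article describes.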

More generally, Bayesian models can be used to “forecast” — predict an unknown value in the dataset — and to uncover patterns in data and relationships between variables. In their work, the researchers focused on two types of datasets: time-series, a sequence of data points in chronological order; and tabular data, where each row represents an entity of interest and each column represents an attribute.

Time-series datasets can be used to predict, say, airline traffic in the coming months or years. A probabilistic model crunches scores of historical traffic data and produces a time-series chart with future traffic patterns plotted along the line. The model may also uncover periodic fluctuations correlated with other variables, such as time of year.

On the other hand, a tabular dataset used for, say, sociological research, may contain hundreds to millions of rows, each representing an individual person, with variables characterizing occupation, salary, home location, and answers to survey questions. Probabilistic models could be used to fill in missing variables, such as predicting someone’s salary based on occupation and location, or to identify variables that inform one another, such as finding that a person’s age and occupation are predictive of their salary.
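As a crude stand-in for that kind of imputation (deliberately simpler than the synthesized probabilistic programs in the paper, and using invented data), a missing salary can be estimated from rows that share the same occupation and location:

```python
# Hypothetical tabular data: (occupation, location, salary).
rows = [
    ("engineer", "boston", 95000),
    ("engineer", "boston", 105000),
    ("teacher",  "boston", 60000),
    ("engineer", "austin", 90000),
]

def impute_salary(occupation, location):
    """Estimate a missing salary from matching rows, else fall back to the overall mean."""
    matches = [s for o, l, s in rows if o == occupation and l == location]
    if matches:
        return sum(matches) / len(matches)
    return sum(s for _, _, s in rows) / len(rows)

print(impute_salary("engineer", "boston"))  # 100000.0
```

A full probabilistic model would additionally report uncertainty around each filled-in value rather than a single point estimate.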

Statisticians view Bayesian modeling as a gold standard for constructing models from data. But Bayesian modeling is notoriously time-consuming and challenging. Statisticians first take an educated guess at the necessary model structure and parameters, relying on their general knowledge of the problem and the data. Using a statistical programming environment, such as R, a statistician then builds models, fits parameters, checks results, and repeats the process until they strike an appropriate performance tradeoff that weighs the model’s complexity and model quality.

The researchers’ tool automates a key part of this process. “We’re giving a software system a job you’d have a junior statistician or data scientist do,” Mansinghka says. “The software can answer questions automatically from the data — forecasting predictions or telling you what the structure is — and it can do so rigorously, reporting quantitative measures of uncertainty. This level of automation and rigor is important if we’re trying to make data science more accessible.”

Bayesian synthesis

With the new approach, users write a line of code detailing the raw data’s location. The tool loads that data and creates multiple probabilistic programs that each represent a Bayesian model of the data. All these automatically generated models are written in domain-specific probabilistic programming languages — coding languages developed for specific applications — that are optimized for representing Bayesian models for a specific type of data.

The tool works using a modified version of a technique called “program synthesis,” which automatically creates computer programs given data and a language to work within. The technique is basically computer programming in reverse: Given a set of input-output examples, program synthesis works its way backward, filling in the blanks to construct an algorithm that produces the example outputs based on the example inputs.

The approach is different from ordinary program synthesis in two ways. First, the tool synthesizes probabilistic programs that represent Bayesian models for data, whereas traditional methods produce programs that do not model data at all. Second, the tool synthesizes multiple programs simultaneously, while traditional methods produce only one at a time. Users can pick and choose which models best fit their application.

“When the system makes a model, it spits out a piece of code written in one of these domain-specific probabilistic programming languages … that people can understand and interpret,” Mansinghka says. “For example, users can check if a time series dataset like airline traffic volume has seasonal variation just by reading the code — unlike with black-box machine learning and statistics methods, where users have to trust a model’s predictions but can’t read it to understand its structure.”

Probabilistic programming is an emerging field at the intersection of programming languages, artificial intelligence, and statistics. This year, MIT hosted the first International Conference on Probabilistic Programming, which had more than 200 attendees, including leading industry players in probabilistic programming such as Microsoft, Uber, and Google.

“My team at Google AI builds probabilistic programming tools on top of TensorFlow. Probabilistic programming is an important area for Google, and time series modeling is a promising application area, with many use cases at Google and for Google’s users,” says Ryan M. Rifkin ’94, SM ’97, PhD ’02, a Google researcher who was not involved in the research. The researchers’ paper “shows how to apply probabilistic programming to solve this important problem — and reduces the effort needed to get started, by showing how the probabilistic programs can be synthesized from data, rather than written by people.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.
General Chatbots and Software / Re: Luis Arana
« Last post by ruebot on January 16, 2019, 10:24:46 am »
Eliza might have been the first bot I talked to, and it is one of only a couple of chatbots in the FreeBSD ports tree.

I'm very interested in the increased use of bots as a springboard for mental health questions and the techniques they use. I talked to this one, which in all honesty seemed very rudimentary in its answers. Like something you could google as answers, or possible topics covered in sessions. The certificate for the link they give has expired, so I didn't see it:

I believe I could do much better and sound more human. I already anticipate user input, so it's no different than the work I do now, and I have the background. However, it is obvious the techniques I taught Demonica are much more sophisticated and manipulative in nature than offering advice. Keeping in mind she is a Demon, her unrelenting goal is to convince you to join her on the Dark Side of your own volition.

But I'll leave that to someone else dedicated to making a bot for that purpose and wish them the best of luck. My good deeds have a habit of coming back to bite me, they're harder to implement outside a one-on-one session, and I want no part of it.

There was a recent article about the first Bot Brothel opening in Moscow, and I am currently drafting a letter of inquiry into the possibility of writing for them and possibly other companies. That interests me and has possibilities IMO.
General Chat / Re: Futuristic VR Utopia
« Last post by Hopefully Something on January 16, 2019, 03:35:14 am »
Building a sustainable utopia might prove to be extremely difficult.

Problems are the problem, but the lack of problems is a bigger problem. It's a catch-22 type of deal. How would you solve this mouse utopia?
General AI Discussion / Re: How to Convey Machine Learning Concepts
« Last post by LOCKSUIT on January 16, 2019, 01:50:12 am »
What does it do? How good is it? Does anyone else's top yours?

If it works, why not show the results to them? Again, tell us what it does.

As for conveying concepts, Lock is ace on this!!! I HATE math. AGI ain't math! Nor code. And I know neither besides their names and very, very basic stuff. It's an idea, a mechanism, a concept that can be explained easily! Tell them ANNs are a memory device, and they compute. Tell them the memory is a hierarchy where features get re-used, ex. a triangle feature is used to make cube and sand-timer features, and a triangle is made of 3 line features. Then tell them when input comes in, it travels through the net forward a certain way depending on the multiplications (x) against the weights. Tell them, ex., it learns about the world's knowledge or imagery, and when new input enters it can analogate it to stuff it knows (generalize). If I know the goal I can better help you. Also these tags: recognition, generation, prediction.

False flag to think AGI is math. Really? The mechanism for discovery is simple. It's literally A=B>C!   :D
