
Maviarab

Friday Funny in General Chat

Share your jokes here to bring joy to the world  :)

1781 Comments | Started February 13, 2009, 01:52:35 pm

ivan.moony

Ten Commandments of Logic in General Chat

5 Comments | Started July 19, 2018, 06:42:13 pm

Tyler

More efficient security for cloud-based machine learning in Robotics News

More efficient security for cloud-based machine learning
17 August 2018, 5:00 am

A novel encryption method devised by MIT researchers secures data used in online neural networks, without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data.

Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks, such as, say, running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.

But what if there are leaks of private data? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validating) sluggish — sometimes as much as a million times slower — limiting their wider adoption.

In a paper presented at this week’s USENIX Security Conference, MIT researchers describe a system that blends two conventional techniques — homomorphic encryption and garbled circuits — in a way that helps the networks run orders of magnitude faster than they do with conventional approaches.

The researchers tested the system, called GAZELLE, on two-party image-classification tasks. A user sends encrypted image data to an online server evaluating a CNN running on GAZELLE. After this, both parties share encrypted information back and forth in order to classify the user’s image. Throughout the process, the system ensures that the server never learns any uploaded data, while the user never learns anything about the network parameters. Compared to traditional systems, GAZELLE ran 20 to 30 times faster than state-of-the-art models, while reducing the required network bandwidth by an order of magnitude.

One promising application for the system is training CNNs to diagnose diseases. Hospitals could, for instance, train a CNN to learn characteristics of certain medical conditions from magnetic resonance images (MRI) and identify those characteristics in uploaded MRIs. The hospital could make the model available in the cloud for other hospitals. But the model is trained on, and further relies on, private patient data. Because there are no efficient encryption models, this application isn’t quite ready for prime time.

“In this work, we show how to efficiently do this kind of secure two-party communication by combining these two techniques in a clever way,” says first author Chiraag Juvekar, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “The next step is to take real medical data and show that, even when we scale it for applications real users care about, it still provides acceptable performance.”

Co-authors on the paper are Vinod Vaikuntanathan, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory, and Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Maximizing performance

CNNs process image data through multiple linear and nonlinear layers of computation. Linear layers do the complex math, called linear algebra, and assign some values to the data. At a certain threshold, the data is outputted to nonlinear layers that do some simpler computation, make decisions (such as identifying image features), and send the data to the next linear layer. The end result is an image with an assigned class, such as vehicle, animal, person, or anatomical feature.
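
To make the linear/nonlinear alternation concrete, here is a minimal NumPy sketch of the idea; the shapes, weights and class labels are purely illustrative and are not taken from the paper.

Code:
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                    # toy input features for one "image"

W1, b1 = rng.standard_normal((4, 8)), np.zeros(4)
W2, b2 = rng.standard_normal((3, 4)), np.zeros(3)

h = W1 @ x + b1                               # linear layer: the heavy linear algebra
h = np.maximum(h, 0.0)                        # nonlinear layer: a simple ReLU "decision"
logits = W2 @ h + b2                          # next linear layer
print(int(np.argmax(logits)))                 # assigned class index (e.g. vehicle / animal / person)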

Recent approaches to securing CNNs have involved applying homomorphic encryption or garbled circuits to process data throughout an entire network. These techniques are effective at securing data. “On paper, this looks like it solves the problem,” Juvekar says. But they render complex neural networks inefficient, “so you wouldn’t use them for any real-world application.”

Homomorphic encryption, used in cloud computing, performs computation entirely on encrypted data, called ciphertext, and generates an encrypted result that can then be decrypted by a user. When applied to neural networks, this technique is particularly fast and efficient at computing linear algebra. However, it must introduce a little noise into the data at each layer. Over multiple layers, noise accumulates, and the computation needed to filter that noise grows increasingly complex, slowing computation speeds.
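
To see what "computing on ciphertext" means in practice, here is a minimal Paillier sketch in Python. Paillier stands in purely to illustrate the additive property and is not the scheme used in the paper; the tiny primes are, of course, insecure.

Code:
import math, secrets

p, q = 1000003, 1000033            # toy primes (illustration only)
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)               # valid because g = n + 1

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))     # 42: multiplying ciphertexts adds the plaintexts
print(decrypt(pow(c1, 3, n2)))     # 60: exponentiating scales the plaintext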

Garbled circuits are a form of secure two-party computation. The technique takes an input from both parties, does some computation, and sends a separate output to each party. In that way, the parties send data to one another, but they never see the other party’s data, only the relevant output on their side. The bandwidth needed to communicate data between parties, however, scales with computation complexity, not with the size of the input. In an online neural network, this technique works well in the nonlinear layers, where computation is minimal, but the bandwidth becomes unwieldy in math-heavy linear layers.
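
For readers who have not met garbled circuits before, the toy sketch below garbles a single AND gate: the garbler picks two random labels per wire and encrypts the output labels under pairs of input labels, so an evaluator holding one label per input wire recovers only the output label, never the underlying bits. This is an illustration only; practical constructions add point-and-permute, free-XOR and proper authenticated encryption.

Code:
import os, hashlib, random

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(i ^ j for i, j in zip(x, y))

def garble_and_gate():
    # Two random 16-byte labels per wire: index 0 encodes bit 0, index 1 encodes bit 1.
    wa, wb, wc = ([os.urandom(16), os.urandom(16)] for _ in range(3))
    table = []
    for a in (0, 1):
        for b in (0, 1):
            out = wc[a & b] + b"\x00" * 16            # output label plus a zero tag
            table.append(xor(H(wa[a], wb[b]), out))   # "encrypt" under the input labels
    random.shuffle(table)                             # hide which row is which
    return wa, wb, wc, table

def evaluate(label_a, label_b, table):
    for row in table:
        plain = xor(H(label_a, label_b), row)
        if plain[16:] == b"\x00" * 16:                # the zero tag marks the right row
            return plain[:16]
    raise ValueError("no row decrypted")

wa, wb, wc, table = garble_and_gate()
print(evaluate(wa[1], wb[1], table) == wc[1])         # True: 1 AND 1 = 1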

The MIT researchers, instead, combined the two techniques in a way that gets around their inefficiencies.

In their system, a user will upload ciphertext to a cloud-based CNN. The user must have the garbled-circuits technique running on their own computer. The CNN does all the computation in the linear layer, then sends the data to the nonlinear layer. At that point, the CNN and user share the data. The user does some computation on garbled circuits, and sends the data back to the CNN. By splitting and sharing the workload, the system restricts the homomorphic encryption to doing complex math one layer at a time, so data doesn’t become too noisy. It also limits the communication of the garbled circuits to just the nonlinear layers, where it performs optimally.

“We’re only using the techniques for where they’re most efficient,” Juvekar says.

Secret sharing

The final step was ensuring both homomorphic and garbled circuit layers maintained a common randomization scheme, called “secret sharing.” In this scheme, data is divided into separate parts that are given to separate parties. All parties sync their parts to reconstruct the full data.

In GAZELLE, when a user sends encrypted data to the cloud-based service, it’s split between both parties. Added to each share is a secret key (random numbers) that only the owning party knows. Throughout computation, each party will always have some portion of the data, plus random numbers, so it appears fully random. At the end of computation, the two parties synch their data. Only then does the user ask the cloud-based service for its secret key. The user can then subtract the secret key from all the data to get the result.
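
The additive masking described above is easy to see in a few lines of Python. This is a minimal sketch of two-party additive secret sharing; the modulus and names are illustrative, not GAZELLE's actual parameters.

Code:
import secrets

Q = 2 ** 32                          # toy modulus for the shares

def share(x):
    """Split x into two shares that each look uniformly random on their own."""
    r = secrets.randbelow(Q)         # the random mask (the "secret key")
    return (x - r) % Q, r            # one party holds x - r, the other holds r

def reconstruct(share_a, share_b):
    return (share_a + share_b) % Q

a, b = share(42)
print(a, b)                          # neither share alone reveals anything about 42
print(reconstruct(a, b))             # 42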

“At the end of the computation, we want the first party to get the classification results and the second party to get absolutely nothing,” Juvekar says. Additionally, “the first party learns nothing about the parameters of the model.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started August 17, 2018, 12:00:16 pm

Tyler

XKCD Comic : Equations in XKCD Comic

Equations
17 August 2018, 5:00 am

All electromagnetic equations: The same as all fluid dynamics equations, but with the 8 and 23 replaced with the permittivity and permeability of free space, respectively.

Source: xkcd.com

Started August 17, 2018, 12:00:14 pm

Tyler

Design tool reveals a product’s many possible performance tradeoffs in Robotics News

Design tool reveals a product’s many possible performance tradeoffs
15 August 2018, 3:00 pm

MIT researchers have developed a tool that makes it much easier and more efficient to explore the many compromises that come with designing new products.

Designing any product — from complex car parts down to workaday objects such as wrenches and lamp stands — is a balancing act with conflicting performance tradeoffs. Making something lightweight, for instance, may compromise its durability.

To navigate these tradeoffs, engineers use computer-aided design (CAD) programs to iteratively modify design parameters — say, height, length, and radius of a product — and simulate the results for performance objectives to meet specific needs, such as weight, balance, and durability.

But these programs require users to modify designs and simulate the results for only one performance objective at a time. As products usually must meet multiple, conflicting performance objectives, this process becomes very time-consuming.

In a paper presented at this week’s SIGGRAPH conference, researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a visualization tool for CAD that, for the first time, lets users instead interactively explore all designs that best fit multiple, often-conflicting performance tradeoffs, in real time.

The tool first calculates optimal designs for three performance objectives in a precomputation step. It then maps all those designs as color-coded patches on a triangular graph. Users can move a cursor in and around the patches to prioritize one performance objective or another. As the cursor moves, 3-D designs appear that are optimized for that exact spot on the graph.
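
One plausible way to turn a cursor position inside such a triangle into objective priorities is via barycentric coordinates, then to pick the precomputed design whose objectives best match those weights. The sketch below assumes exactly that interaction scheme for illustration; the paper's actual selection method may differ.

Code:
import numpy as np

# One triangle corner per performance objective (e.g. focal distance, stability, mass).
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def barycentric_weights(cursor):
    """Weights of the three corners for a cursor position inside the triangle."""
    a, b, c = corners
    T = np.column_stack([b - a, c - a])
    wb, wc = np.linalg.solve(T, np.asarray(cursor, float) - a)
    return np.array([1.0 - wb - wc, wb, wc])

def pick_design(cursor, pareto_objectives):
    """Index of the Pareto design minimizing the weighted sum of (normalized) objectives."""
    scores = pareto_objectives @ barycentric_weights(cursor)
    return int(np.argmin(scores))

# Three precomputed Pareto designs scored on three objectives (lower is better).
objs = np.array([[0.1, 0.9, 0.5], [0.5, 0.5, 0.5], [0.9, 0.1, 0.4]])
print(pick_design([0.1, 0.05], objs))    # a cursor near corner 0 favours design 0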

“Now you can explore the landscape of multiple performance compromises efficiently and interactively, which is something that didn’t exist before,” says Adriana Schulz, a CSAIL postdoc and first author on the paper.

Co-authors on the paper are Harrison Wang, a graduate student in mechanical engineering; Eitan Grinspun, an associate professor of computer science at Columbia University; Justin Solomon, an assistant professor in electrical engineering and computer science; and Wojciech Matusik, an associate professor in electrical engineering and computer science.

The new work builds off a tool, InstantCAD, developed last year by Schulz, Matusik, Grinspun, and other researchers. That tool let users interactively modify product designs and get real-time information on performance. The researchers estimated that tool could reduce the time of some steps in designing complex products to seconds or minutes, instead of hours.

However, a user still had to explore all designs to find one that satisfied all performance tradeoffs, which was time-consuming. This new tool represents “an inverse,” Schulz says: “We’re directly editing the performance space and providing real-time feedback on the designs that give you the best performance. A product may have 100 design parameters … but we really only care about how it behaves in the physical world.”

In the new paper, the researchers home in on a critical aspect of performance called the “Pareto front,” a set of designs optimized for all given performance objectives, where any design change that improves one objective worsens another objective. This front is usually represented in CAD and other software as a point cloud (dozens or hundreds of dots in a multidimensional graph), where each point is a separate design. For instance, one point may represent a wrench optimized for greater torque and less mass, while a nearby point will represent a design with slightly less torque, but more mass.
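
For a point-cloud representation like the one described, the Pareto front can be extracted with a simple dominance filter. Below is a minimal NumPy sketch, assuming every objective is expressed so that lower is better (e.g. mass, and negative torque for the wrench example).

Code:
import numpy as np

def pareto_front(points):
    """Return the rows of `points` that no other row dominates."""
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # A row dominates p if it is <= p in every objective and < p in at least one.
        dominators = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominators.any():
            keep[i] = False
    return points[keep]

designs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(designs))    # keeps [1,5], [2,3], [4,1]; drops [3,4], dominated by [2,3]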

Engineers laboriously modify designs in CAD to find these Pareto-optimized designs, using a fair amount of guesswork. Then they use the front’s visual representation as a guideline to find a product that meets specific performance requirements, given the various compromises.

The researchers’ tool, instead, rapidly finds the entire Pareto front and turns it into an interactive map. Inputted into the model is a product with design parameters, and information about how those parameters correspond to specific performance objectives.

The model first quickly uncovers one design on the Pareto front. Then, it uses some approximation calculations to discover tiny variations in that design. After doing that a few times, it captures all designs on the Pareto front. Those designs are mapped as colored patches on a triangular graph, where each patch represents one Pareto-optimal design, surrounded by its slight variations. Each edge of the graph is labeled with a separate performance objective based on the input data.

In their paper, the researchers tested their tool on various products, including a wrench, bike frame component, and brake hub, each with three or four design parameters, as well as a standing lamp with 21 design parameters.

With the lamp, for example, all 21 parameters relate to the thickness of the lamp’s base, height and orientation of its stand, and length and orientation of three elbowed beams attached to the top that hold the light bulbs. The system generated designs and variations corresponding to more than 50 colored patches reflecting a combination of three performance objectives: focal distance, stability, and mass. Placing the cursor on a patch closer to, say, focal distance and stability generates a design with a taller, straighter stand and longer beams oriented for balance. Moving the cursor farther from focal distance and toward mass and stability generates a design with a thicker base and a shorter stand and beams, tilted at different angles.

Some designs change quite dramatically around the same region of performance tradeoffs and even within the same cluster. This is important from an engineer’s perspective, Schulz says. “You’re finding two designs that, even though they’re very different, they behave in similar ways,” she says. Engineers can use that information “to find designs that are actually better to meet specific use cases.”

The work was supported by the Defense Advanced Research Projects Agency, the Army Research Office, the Skoltech-MIT Next Generation Program, and the National Science Foundation.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started August 16, 2018, 12:01:06 pm

Tyler

XKCD Comic : Repair or Replace in XKCD Comic

Repair or Replace
15 August 2018, 5:00 am

Just make sure all your friends and family are out of the car, or that you've made backup friends and family at home.

Source: xkcd.com

Started August 16, 2018, 12:01:05 pm

alfred

Simple AI Website to let you know what is AI in General AI Discussion

Artificial Intelligence for Everyone

http://bit.ly/ailatte
We have a dream… to make everyone enjoy artificial intelligence as easy as savouring a cup of latte.

25 Comments | Started May 10, 2018, 05:11:29 pm

Art

Soul Machines in AI News

Can A.I. have a soul?



http://www.bbc.com/future/story/20180615-can-artificial-intelligence-have-a-soul-and-religion

https://www.soulmachines.com/news/2018/7/9/hot-off-the-press-introducing-jamie-anzs-new-digital-assistant


Started August 16, 2018, 01:16:03 am

korrelan

The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end. I think most of you have seen some of my work but I thought I’d give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I’m fifty and a family man. I’ve had a fairly varied career so far: yacht/cabinet builder, vehicle mechanic, electronics design engineer, precision machine/design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/software technical background and now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years.  I want the full Monty: a self-aware intelligent machine that at least rivals us, preferably surpassing our intellect, eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were. (Yeah, I'm slightly nuts!)

I first started with heuristics/ databases, recurrent neural nets, liquid/ echo state machines, etc but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, Slime Mould, Birds, Octopuses, etc all exhibit a certain level of intelligence.  They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/ mechanism or trick that they all have in common across their very different neural structures.  I needed to find the ‘trick’ or the essence of intelligence.  I think I’ve found it.

I also needed a new approach, and decided to literally reverse engineer the human brain.  If I could figure out how the structure, connectome, neurons, synapses, action potentials etc. would ‘have’ to function in order to produce similar results to what we were producing on binary/digital machines, it would be a start.

I have designed and written a 3D CAD suite, on which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computers per se (the brain is definitely not digital), it’s running on the emulation/wetware/middleware. The AGI is a closed system; it can only experience its world/environment through its own senses: stereo cameras, microphones etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system…  I'm also building a sensory/motorised torso (in my other spare time lol) for it to reside in, and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex consists of repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.



The required result and actual mammal visual cortex map.



This is real-time development of a mammal-like visual cortex map generated from a random neuron sheet using my neuron/connectome design.

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/written speech, recognise objects/faces and learn at extremely accelerated rates (compared to us, anyway).



Recognising written words; notice the system can still read the words even when jumbled. This is because it's recognising the individual letters as well as the whole word.



Same network recognising objects.



And automatically mapping speech phonemes from the audio data streams, the overlaid colours show areas sensitive to each frequency.



The system is self-learning and automatically categorizes data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.



I’ve done some work on emotions but this is still very much work in progress and extremely unpredictable.



Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’.  This is a ‘young’ starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.



Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.



And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/synapse properties in real time... and it's fun.

Phew! Ok that gives a very rough history of progress. There are a few more vids on my Youtube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model, experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex, self-organised by their properties. As the cortex learns and develops there is usually just one small area of cortex that will respond to/recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produces/recognises an experience or knowledge.

Through experience, areas arise that even encode/include the temporal aspects of an experience, simply because a temporal element was present in the experience, as well as the order in which the temporal elements were received.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down) like the conductor of an orchestra.  Mid-range frequency networks supply attention points/areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past.  

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.  

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

357 Comments | Started June 18, 2016, 10:11:04 pm

SofS

Hi there in New Users Please Post Here

Hello everyone,

I'm a fellow AI enthusiast, although I'm not good enough at maths and programming to make my own AI! lol! But I'm very fond of the whole topic in itself: I love watching videos and reading articles about AI development, deep learning and similar subjects. I also have a deep passion for fictional AI, as I began reading Asimov's robot stories at 15 and I've been interested in fictional AI characters and plots ever since :)
I even created a forum about fictional AI, even if it's still very young and empty (you can find the link in my profile).

Seeya soon !

12 Comments | Started August 14, 2018, 09:10:58 am
Bot Development Frameworks - Getting Started

Bot Development Frameworks - Getting Started in Articles

What Are Bot Frameworks ?

Simply explained, a bot framework is where bots are built and where their behavior is defined. Developing and targeting so many messaging platforms and SDKs for chatbot development can be overwhelming. Bot development frameworks abstract away much of the manual work that's involved in building chatbots. A bot development framework consists of a Bot Builder SDK, Bot Connector, Developer Portal, and Bot Directory. There’s also an emulator that you can use to test the developed bot.

Mar 23, 2018, 20:00:23 pm
A Guide to Chatbot Architecture

A Guide to Chatbot Architecture in Articles

Humans have always been fascinated by self-operating devices, and today it is software called “chatbots” that is becoming more human-like and automated. The combination of immediate response and constant connectivity makes them an enticing way to extend or replace web applications. But how do these automated programs work? Let’s have a look.

Mar 13, 2018, 14:47:09 pm
Sing for Fame

Sing for Fame in Chatbots - English

Sing for Fame is a bot that hosts a singing competition. 

Users can show their skills by singing their favorite songs. 

If someone needs inspiration the bot provides suggestions including song lyrics and videos.

The bot then plays it to other users who can rate the song.

Based on the ratings the bot generates a top ten.

Jan 30, 2018, 22:17:57 pm
ConciergeBot

ConciergeBot in Assistants

A concierge service bot that handles guest requests and FAQs, as well as recommends restaurants and local attractions.

Messenger Link : messenger.com/t/rthhotel

Jan 30, 2018, 22:11:55 pm
What are the main techniques for the development of a good chatbot ?

What are the main techniques for the development of a good chatbot ? in Articles

Chatbots are among the most useful and reliable technological helpers for those who own e-commerce websites and similar resources. However, an important problem is that people might not know which technologies are best to use in order to achieve their goals. In today’s article you can become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am
Kweri

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!

Links:

FB Messenger
https://www.messenger.com/t/kweri.chat

Telegram
https://telegram.me/kweribot

Slack
https://slack.com/apps/A5JKP5TND-kweri

Kik
http://taell.me/kweri-kik

Line
http://taell.me/kweri-line/

Skype
http://taell.me/kweri-skype/

Oct 12, 2017, 01:24:37 am
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discusses the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions. 

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people that contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm