Help creating a name for a fake PDA from the 80's in General Project Discussion

Hi guys,

You know that English is not my mother tongue. I'm trying to create a name for a fake (emulated) PDA. The name should sound vintage, which is a bit challenging for me (only native English speakers can feel every nuance of a name).

Could you suggest some names please?

7 Comments | Started October 17, 2017, 09:49:11 am

ranch vermin

The future of ai is me of course. in Future of AI

a beetle robot with a power plug shooting out and soldering irons for legs. :)

Here's my hopeful brain algorithm->

Top down schematic (different bot)->

robo-Fly concept->

top down schematic put together corners->

7 Comments | Started October 13, 2017, 10:05:08 pm


But what *is* a Neural Network? | Deep learning, Part 1 in General AI Discussion

But what *is* a Neural Network? | Deep learning, Part 1:

4 Comments | Started October 05, 2017, 04:35:58 pm


What you need to know about Krack WiFi Exploit in General AI Discussion

What you need to know about Krack WiFi Exploit:


Started October 17, 2017, 03:35:47 pm


The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but of our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end. I think most of you have seen some of my work, but I thought I’d give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I’m fifty and a family man. I’ve had a fairly varied career so far: yacht/ cabinet builder, vehicle mechanic, electronics design engineer, precision machine/ design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/ software technical background and now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years.  I want the full Monty: a self-aware intelligent machine that at least rivals us, preferably surpasses our intellect, and is eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were (yeah, I'm slightly nuts!).

I first started with heuristics/ databases, recurrent neural nets, liquid/ echo state machines, etc but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, Slime Mould, Birds, Octopuses, etc all exhibit a certain level of intelligence.  They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/ mechanism or trick that they all have in common across their very different neural structures.  I needed to find the ‘trick’ or the essence of intelligence.  I think I’ve found it.

I also needed a new approach, and decided to literally reverse engineer the human brain.  If I could figure out how the structure, connectome, neurons, synapses, action potentials etc. would ‘have’ to function in order to produce results similar to what we were producing on binary/ digital machines, it would be a start.

I have designed and written a 3D CAD suite, on which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computers per se (the brain is definitely not digital), it’s running on the emulation/ wetware/ middleware. The AGI is a closed system; it can only experience its world/ environment through its own senses: stereo cameras, microphones etc.

I have all the bits figured out and working individually, and I've just started to combine them into a coherent system…  I'm also building a sensory/ motorised torso (in my other spare time lol) for it to reside in, and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex comprises repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.

The required result and actual mammal visual cortex map.

This is real-time development of a mammal-like visual cortex map generated from a random neuron sheet using my neuron/ connectome design.

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/ written speech, recognise objects/ faces and learn at extremely accelerated rates (compared to us, anyway).

Recognising written words. Notice the system can still read the words even when they are jumbled; this is because it's recognising the individual letters as well as the whole word.
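
As a toy illustration of why letter-level recognition makes jumbled words readable (again purely illustrative, not the author's network): if the reader matches on the letters themselves, the multiset of letters survives the jumbling.

```python
# Toy illustration: a system that recognises individual letters as well as
# whole words can still match a jumbled word to its vocabulary entry,
# because the "bag of letters" is unchanged by the jumbling.
VOCAB = ["brain", "cortex", "neuron", "synapse"]

def letter_signature(word):
    """Order-free 'bag of letters' for a word."""
    return sorted(word.lower())

def read_jumbled(jumbled):
    """Return the vocabulary word with the same letters, if any."""
    sig = letter_signature(jumbled)
    for word in VOCAB:
        if letter_signature(word) == sig:
            return word
    return None

print(read_jumbled("retxoc"))   # matches "cortex" despite the jumbling
```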

Same network recognising objects.

And automatically mapping speech phonemes from the audio data streams, the overlaid colours show areas sensitive to each frequency.

The system is self learning and automatically categorizes data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.

I’ve done some work on emotions but this is still very much work in progress and extremely unpredictable.

Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’.  This is a ‘young’ starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.

Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.

And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/ synapse properties in real time... and it's fun.

Phew! OK, that gives a very rough history of progress. There are a few more vids on my YouTube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model, experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex, self-organised by their properties. As the cortex learns and develops there is usually just one small area of cortex that will respond to/ recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions, and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produces/ recognises an experience or knowledge.
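
A crude sketch of the alphabet idea (hypothetical names, nothing like the author's actual implementation): each facet gets its own "area", an experience is the set of areas it activates, and recognition is maximal overlap of active areas.

```python
# Crude sketch: each facet of an experience gets its own "area"; an
# experience is the set of areas it activates, and recognition picks the
# stored experience sharing the most active areas.
facet_areas = {}   # facet -> area id, assigned on first encounter

def encode(facets):
    """Return the set of 'areas' an experience activates (new facets get new areas)."""
    for f in facets:
        facet_areas.setdefault(f, len(facet_areas))
    return frozenset(facet_areas[f] for f in facets)

memory = {}        # encoded pattern -> experience label

def learn(label, facets):
    memory[encode(facets)] = label

def recognise(facets):
    """Best match = stored experience sharing the most active areas."""
    active = encode(facets)
    if not memory:
        return None
    return memory[max(memory, key=lambda pattern: len(pattern & active))]

learn("red ball", {"red", "round", "small"})
learn("blue cube", {"blue", "square", "small"})
print(recognise({"red", "round"}))   # a partial cue still recalls "red ball"
```

Because areas recombine freely, a small pool of them can cover a combinatorially large space of experiences, which is the point of the alphabet analogy.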

Through experience, areas arise that even encode/ include the temporal aspects of an experience, simply because a temporal element was present in the experience, as well as the order/ sequence in which the temporal elements were received.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down), like the conductor of an orchestra.  Mid-range frequency networks supply attention points/ areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/ internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place, so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past.
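
The general flavour of such a schema can be sketched as a first-order frequency count over frame transitions, certainly far simpler than the author's system, but it shows the shape of "predict the next frame from what followed before":

```python
from collections import Counter, defaultdict

# Toy frame predictor: count which "thought" frame followed which in past
# experience, then predict the most frequent successor of the current frame.
transitions = defaultdict(Counter)

def experience(frames):
    """Learn successor counts from an ordered sequence of frames."""
    for current, nxt in zip(frames, frames[1:]):
        transitions[current][nxt] += 1

def predict(frame):
    """Most frequently observed next frame, or None if never seen."""
    successors = transitions[frame]
    return successors.most_common(1)[0][0] if successors else None

experience(["wake", "coffee", "work", "coffee", "work", "sleep"])
print(predict("coffee"))   # "work" followed "coffee" most often
```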

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.  

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

238 Comments | Started June 18, 2016, 10:11:04 pm


Using artificial intelligence to improve early breast cancer detection in Robotics News

Using artificial intelligence to improve early breast cancer detection
17 October 2017, 4:59 am

Every year 40,000 women die from breast cancer in the U.S. alone. When cancers are found early, they can often be cured. Mammograms are the best test available, but they’re still imperfect and often result in false positive results that can lead to unnecessary biopsies and surgeries.

One common cause of false positives is so-called “high-risk” lesions that appear suspicious on mammograms and have abnormal cells when tested by needle biopsy. In this case, the patient typically undergoes surgery to have the lesion removed; however, the lesions turn out to be benign at surgery 90 percent of the time. This means that every year thousands of women go through painful, expensive, scar-inducing surgeries that weren’t even necessary.

How, then, can unnecessary surgeries be eliminated while still maintaining the important role of mammography in cancer detection? Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital, and Harvard Medical School believe that the answer is to turn to artificial intelligence (AI).

As a first project to apply AI to improving detection and diagnosis, the teams collaborated to develop an AI system that uses machine learning to predict if a high-risk lesion identified on needle biopsy after a mammogram will upgrade to cancer at surgery.

When tested on 335 high-risk lesions, the model correctly diagnosed 97 percent of the breast cancers as malignant and reduced the number of benign surgeries by more than 30 percent compared to existing approaches.

“Because diagnostic tools are so inexact, there is an understandable tendency for doctors to over-screen for breast cancer,” says Regina Barzilay, MIT’s Delta Electronics Professor of Electrical Engineering and Computer Science and a breast cancer survivor herself. “When there’s this much uncertainty in data, machine learning is exactly the tool that we need to improve detection and prevent over-treatment.”

Trained on information about more than 600 existing high-risk lesions, the model looks for patterns among many different data elements that include demographics, family history, past biopsies, and pathology reports.

“To our knowledge, this is the first study to apply machine learning to the task of distinguishing high-risk lesions that need surgery from those that don’t,” says collaborator Constance Lehman, professor at Harvard Medical School and chief of the Breast Imaging Division at MGH’s Department of Radiology. “We believe this could support women to make more informed decisions about their treatment, and that we could provide more targeted approaches to health care in general.”

A recent MacArthur “genius grant” recipient, Barzilay is a co-author of a new journal article describing the results, co-written with Lehman and Manisha Bahl of MGH, as well as CSAIL graduate students Nicholas Locascio, Adam Yedidia, and Lili Yu. The article was published today in the medical journal Radiology.

How it works

When a mammogram detects a suspicious lesion, a needle biopsy is performed to determine if it is cancer. Roughly 70 percent of the lesions are benign, 20 percent are malignant, and 10 percent are high-risk lesions.

Doctors manage high-risk lesions in different ways. Some do surgery in all cases, while others perform surgery only for lesions that have higher cancer rates, such as “atypical ductal hyperplasia” (ADH) or a “lobular carcinoma in situ” (LCIS).

The first approach requires that the patient undergo a painful, time-consuming, and expensive surgery that is usually unnecessary; the second approach is imprecise and could result in missing cancers in high-risk lesions other than ADH and LCIS.

“The vast majority of patients with high-risk lesions do not have cancer, and we’re trying to find the few that do,” says Bahl, a fellow doctor at MGH’s Department of Radiology. “In a scenario like this there’s always a risk that when you try to increase the number of cancers you can identify, you’ll also increase the number of false positives you find.”

Using a method known as a “random-forest classifier,” the team's model resulted in fewer unnecessary surgeries compared to the strategy of always doing surgery, while also being able to diagnose more cancerous lesions than the strategy of only doing surgery on traditional “high-risk lesions.” (Specifically, the new model diagnosed 97 percent of cancers compared to 79 percent.)
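
The article doesn't publish the model itself, but the general shape of a random-forest classifier (bootstrap samples, randomised single-feature "trees", majority vote) can be sketched in a few lines. The data and features below are invented purely for illustration:

```python
import random

random.seed(0)

# Toy sketch of a random-forest classifier: bootstrap samples + randomised
# one-feature "stump" trees + majority vote. Features and labels are
# invented; label 1 stands for "lesion upgrades to cancer at surgery".
DATA = [
    ((4, 0, 0), 0), ((5, 1, 1), 1), ((6, 2, 1), 1), ((3, 0, 0), 0),
    ((7, 1, 1), 1), ((4, 1, 0), 0), ((6, 0, 1), 1), ((5, 0, 0), 0),
]

def majority(labels):
    """Majority label of a list (ties and empty lists fall back simply)."""
    return int(sum(labels) * 2 >= len(labels)) if labels else 0

def fit_stump(train):
    """One-feature threshold tree fitted on a bootstrap sample."""
    f = random.randrange(3)                            # random feature choice
    thr = sum(x[f] for x, _ in train) / len(train)     # split at the mean
    hi = majority([y for x, y in train if x[f] > thr])
    lo = majority([y for x, y in train if x[f] <= thr])
    return lambda x, f=f, thr=thr, hi=hi, lo=lo: hi if x[f] > thr else lo

def fit_forest(train, n_trees=25):
    trees = []
    for _ in range(n_trees):
        boot = [random.choice(train) for _ in train]   # bootstrap sample
        trees.append(fit_stump(boot))
    return lambda x: majority([t(x) for t in trees])   # majority vote

clf = fit_forest(DATA)
correct = sum(clf(x) == y for x, y in DATA)
print(f"{correct}/{len(DATA)} toy lesions classified correctly")
```

The appeal of the method for this kind of problem is that it handles mixed tabular features (demographics, biopsy history, pathology terms) without much tuning, and the vote margin gives a crude confidence that clinicians can threshold.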

“This work highlights an example of using cutting-edge machine learning technology to avoid unnecessary surgery,” says Marc Kohli, director of clinical informatics in the Department of Radiology and Biomedical Imaging at the University of California at San Francisco. “This is the first step toward the medical community embracing machine learning as a way to identify patterns and trends that are otherwise invisible to humans.”

Lehman says that MGH radiologists will begin incorporating the model into their clinical practice over the next year.

“In the past we might have recommended that all high-risk lesions be surgically excised,” Lehman says. “But now, if the model determines that the lesion has a very low chance of being cancerous in a specific patient, we can have a more informed discussion with our patient about her options. It may be reasonable for some patients to have their lesions followed with imaging rather than surgically excised.”

The team says that they are still working to further hone the model.

“In future work we hope to incorporate the actual images from the mammograms and images of the pathology slides, as well as more extensive patient information from medical records,” says Bahl.

Moving forward, the model could also easily be tweaked to be applied to other kinds of cancer and even other diseases entirely.

“A model like this will work anytime you have lots of different factors that correlate with a specific outcome,” says Barzilay. “It hopefully will enable us to start to go beyond a one-size-fits-all approach to medical diagnosis.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

Started October 17, 2017, 12:00:28 pm


XKCD Comic : Bun Trend in XKCD Comic

Bun Trend
16 October 2017, 5:00 am

Our experts have characterized the ecological impact of this trend as

Source: xkcd.com

Started October 17, 2017, 12:00:27 pm


XKCD Comic : State Borders in XKCD Comic

State Borders
13 October 2017, 5:00 am

A schism between the pro-panhandle and anti-panhandle factions eventually led to war, but both sides spent too much time working on their flag designs to actually do much fighting.

Source: xkcd.com

8 Comments | Started October 14, 2017, 12:00:40 pm


Microwave breakthrough helps boost hard drive sizes in General Hardware Talk

Is your hard drive still not big enough?

The data-storing abilities of hard drives could soon swell to 40 terabytes (TB) and beyond, says Western Digital.
Currently the largest hard disk drive (HDD) that stores data on spinning disks can hold about 14TB of information.
Western Digital said the bigger drives were made possible by finding a way to use microwaves to write data on 3.5in drives.
The first bigger-capacity drives should go on sale in 2019.


Started October 16, 2017, 06:37:42 pm


Haptek and the new, Spontanimation in Avatar Talk

Their creator is Mr. Robert S. Shaw. It seems that he really made his mark with the Haptek program People Putty, which gave us so many great 3D characters and the possibility of creating many new ones as well.

Mr. Shaw is not your ordinary basement hacker...on the contrary, he is quite a learned fellow. Read about him here:


4 Comments | Started May 31, 2017, 07:04:34 pm

What are the main techniques for the development of a good chatbot? in Articles

Chatbots act as some of the most useful and reliable technological helpers for those who own e-commerce websites and other similar resources. However, a pretty important problem here is that people might not know which technologies are best to use in order to achieve their goals. Thus, in today’s article you may get an opportunity to become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!


FB Messenger

Oct 12, 2017, 01:24:37 am

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people that contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am

Black Eyed Peas - Imma Be Rocking That Body in Video

For the robots of course...

Jul 05, 2017, 22:02:31 pm

Winnie in Assistants

[Messenger] The Chatbot That Helps You Launch Your Website.

Jul 04, 2017, 23:56:00 pm