
Claude

The sun turns "white" twice this month in Video


The sun turned "white" twice during the month of June: episodes in which no sunspot was detected on the surface of our solar system's star. Is this a disaster scenario in the making?

Don't panic! The end of days has not arrived... well, probably not.

The sun is just much quieter than usual.

No sunspot was observed on the surface of our favourite star, resulting in the phenomenon of a "white" sun: a blank, spotless disc. "For the second time this month, the sun went completely white," said Paul Dorian, a meteorologist at Vencore Weather. The same event had already occurred on June 4.

Sunspots are dark areas on the surface of the sun that indicate intense magnetic activity. This activity inhibits solar convection and lowers the local surface temperature where the spot is located.

For scientists, the observation of sunspots is crucial: their presence or absence has consequences for the environment and for technology here on Earth. So what might these episodes of a "white" sun mean? The good news is that, a priori, there is nothing serious about them; all of this is perfectly normal. The sun is approaching its solar minimum, an event expected in 2019 or 2020 according to Paul Dorian. In the meantime, episodes of a "white" sun should gradually become longer and more frequent, eventually lasting for days, weeks, even months. This will mark the end of solar cycle number 24, which began in 2008.
The bad news, on the other hand, is that when cycle 24 reached its solar maximum in 2013, the number of observed sunspots was the lowest since 1755. According to the US news site Elite Daily, this could herald a mini ice age: what scientists call a "Maunder Minimum", referring to the period from 1645 to 1715. During those years, activity on the sun's surface was low and temperatures on Earth were almost glacial; the Thames in London, for example, froze over during that period.
Another nightmare scenario is raised by the Washington Post. The American newspaper points to the "Carrington Event", a massive solar eruption in 1859. That event took place during a cycle in which sunspots were as scarce as they are today, yet it produced the biggest geomagnetic storm ever recorded: auroras were observed as far south as the Caribbean. At the time, the consequences were mostly cosmetic, but if a similar event occurred today, it could knock out the electrical equipment of an entire continent for several months.

3 Comments | Started June 29, 2016, 03:29:38 PM

8pla.net

Nancy the Automaton in AI in Film and Literature.



Automatons are a lost art.  Here is a 19th-century stage automaton, driven by a hand crank. This automaton, named Nancy, is made of papier-mâché, and moves her eyes, eyelids, head, chest, waist, legs, and arms while sewing!

1 Comment | Started June 29, 2016, 08:32:47 PM

korrelan

The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end goal. I think most of you have seen some of my work, but I thought I'd give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I'm fifty and a family man. I've had a fairly varied career so far: yacht/ cabinet builder, vehicle mechanic, electronics design engineer, precision machine/ design engineer, Web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/ software technical background, and now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty-odd years.  I want the full Monty: a self-aware intelligent machine that at least rivals us, preferably surpasses our intellect, and is eventually more intelligent than the culmination of all humans that have ever lived… the last invention, as it were. (Yeah, I'm slightly nuts!)

I first started with heuristics/ databases, recurrent neural nets, liquid/ echo state machines, etc but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, Slime Mould, Birds, Octopuses, etc all exhibit a certain level of intelligence.  They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/ mechanism or trick that they all have in common across their very different neural structures.  I needed to find the ‘trick’ or the essence of intelligence.  I think I’ve found it.

I also needed a new approach, and decided to literally reverse engineer the human brain.  If I could figure out how the structure, connectome, neurons, synapses, action potentials, etc. would ‘have’ to function in order to produce similar results to what we were producing on binary/ digital machines, it would be a start.

I have designed and written a 3D CAD suite, in which I can easily build and edit the 3D neural structures I’m testing. My AGI is based on biological systems; the AGI is not running on the digital computer per se (the brain is definitely not digital), it’s running on the emulation/ wetware/ middleware. The AGI is a closed system; it can only experience its world/ environment through its own senses: stereo cameras, microphones, etc.

I have all the bits figured out and working individually, just started to combine them into a coherent system…  also building a sensory/ motorised torso (In my other spare time lol) for it to reside in, and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex comprises repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I’d have a good starting point for the rest.



The required result and actual mammal visual cortex map.



This is real time development of a mammal like visual cortex map generated from a random neuron sheet using my neuron/ connectome design.
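As a rough illustration only (the actual system uses a custom neuron/ connectome design, not this), a Kohonen-style self-organising map is the textbook way to get an ordered feature map out of a randomly initialised neuron sheet; every name and parameter below is a hypothetical simplification:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Kohonen self-organising map: a sheet of 'neurons' starts with
    random weights and gradually orders itself so that nearby nodes
    respond to similar inputs, a toy analogue of a feature map forming
    from a random neuron sheet."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))  # random starting sheet
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(float)  # node positions
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1 - frac)               # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5   # shrinking neighbourhood
        for x in data:
            # best-matching unit: the node whose weights are closest to x
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the BMU and its sheet neighbours toward the input
            dist2 = np.sum((coords - np.array(bmu, dtype=float)) ** 2, axis=2)
            g = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * g * (x - weights)
    return weights

def quantisation_error(weights, data):
    """Mean distance from each input to its best-matching node."""
    d = np.linalg.norm(weights[None] - data[:, None, None], axis=3)
    return d.min(axis=(1, 2)).mean()
```

After training, nearby nodes end up tuned to similar inputs, which is the same qualitative effect as the map forming in the video, albeit in miniature.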

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/ written speech, recognise objects/ faces and learn at extremely accelerated rates (compared to us, anyway).



Recognising written words. Notice the system can still read the words even when jumbled; this is because it's recognising the individual letters as well as the whole word.
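That jumbled-word effect is easy to show in miniature: if recognition is driven by the individual letters rather than their exact order, a word can be matched by its letter multiset alone. A toy sketch with a made-up vocabulary (nothing from the actual network):

```python
from collections import Counter

VOCAB = ["cortex", "neuron", "synapse", "vision", "pattern"]

def recognise(jumbled, vocab=VOCAB):
    """Match a possibly letter-jumbled word against a small vocabulary
    by comparing letter multisets: recognition by the letters present,
    ignoring their order."""
    bag = Counter(jumbled.lower())
    for word in vocab:
        if Counter(word) == bag:
            return word
    return None
```

So `recognise("noeurn")` still finds "neuron", because both contain exactly the same letters.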



Same network recognising objects.



And automatically mapping speech phonemes from the audio data streams, the overlaid colours show areas sensitive to each frequency.



The system is self learning and automatically categorizes data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.



I’ve done some work on emotions but this is still very much work in progress and extremely unpredictable.



Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole ‘brain’.  This is a ‘young’ starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.



Resting frontal cortex - The machine is ‘sleeping’ but the high level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.



And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/ synapse properties in real time... and it's fun.

Phew! Ok that gives a very rough history of progress. There are a few more vids on my Youtube pages.

Edit: Oh yeah my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch, there is no software; but complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex self organised by their properties. As the cortex learns and develops there is usually just one small area of cortex that will respond/ recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions and eventually all elements of experiences are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It’s the recombining of these millions of areas that produce/ recognise an experience or knowledge.
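A crude software analogy of that facet idea, assuming a toy decomposition into character bigrams standing in for the cortical facet areas (an illustration only, not how the real system stores anything):

```python
def facets(experience):
    """Toy 'facet' decomposition: break an experience (here, a string)
    into its constituent features, character bigrams, the way the post
    describes experiences being broken into base facets."""
    return {experience[i:i + 2] for i in range(len(experience) - 1)}

def best_match(query, memory):
    """Recognise a new experience by recombination: the remembered
    experience sharing the most facets with the query wins."""
    return max(memory, key=lambda m: len(facets(m) & facets(query)))
```

A small facet alphabet can cover a large space of experiences, just as 26 letters cover all words; recognition then reduces to finding which combination of stored facets lights up most strongly.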

Through experience, areas arise that even encode the temporal aspects of an experience, simply because a temporal element was present in the experience, along with the order in which those temporal elements were received.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down), like the conductor of an orchestra.  Mid-range frequency networks supply attention points/ areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/ internal data streams or in its frames of ‘thought’; at the simplest level they help guide the overall ‘mental’ pattern (subconscious); at the highest level they force the machine to focus on a particular salient ‘thought’.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place so the system recognises ‘thought’ frames and then predicts which frame should come next according to what it’s experienced in the past.  
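That prediction schema can be caricatured as a first-order frequency model over discrete frames; the real system works on cortical patterns, so this is only a sketch with made-up names:

```python
from collections import defaultdict, Counter

class FramePredictor:
    """First-order predictor over a stream of discrete 'thought' frames:
    counts which frame followed which in past experience and predicts
    the most frequent successor."""

    def __init__(self):
        self.successors = defaultdict(Counter)

    def observe(self, frames):
        # record every (frame, next frame) pair in the experience
        for a, b in zip(frames, frames[1:]):
            self.successors[a][b] += 1

    def predict(self, frame):
        # most frequently seen successor, or None if never seen
        nxt = self.successors.get(frame)
        if not nxt:
            return None
        return nxt.most_common(1)[0][0]
```

Having seen the sequence "abcabcabd", such a predictor expects "b" after "a", because that is what experience has most often delivered.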

I think consciousness is the overall ‘thought’ pattern phasing from one state of situation awareness to the next, guided by both the overall internal ‘personality’ pattern or ‘state of mind’ and the incoming sensory streams.  

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

22 Comments | Started June 18, 2016, 09:11:04 PM

Snowman

The Athena Project in General Project Discussion

I suppose this thread will be devoted to the Athena Project. I've noticed by reading a few of the other threads that there are currently lots of minds devoted to the task of making a chatbot, ai, or something similar. Athena is intended to be my study into ai architecture, and perhaps I can add something to my own understanding of the human condition in general also. Basically, I look at ai from every angle and endeavor to make something practical.

     What kind of ai do I intend Athena to be?

I think Athena should be a database of knowledge that mimics as much as possible a real companion. I don't intend for her to be an actual living thing. In order to make an actual living thing, you would first need to encode behavioral knowledge into a physical structural medium that not only processes information but also interacts with its environment. All Athena will be doing is sitting dormant in code upon the undynamic nature of a hard drive. In other words, she can't truly interact with the real world, except for the miniature world of the input window of a console.

     What am I focused primarily on when developing Athena?

I think my focus has mainly been on the actual ai engine. True, I have also worked on a platform of sorts, a user-interface that will make coding Athena fun and exciting. However, what's the point in making a terrific user-interface, or even an avatar, if you don't actually have a clue as to how to make the ai in the first place? Don't get me wrong, I think that graphics and beautiful ai girls have their place in the ai community; it's just that I want to use my particular talents on the engine itself.

     Do I intend to sell Athena when she finally gets finished?

Well, I've almost decided to make it a donation type of service. If people appreciate Athena, then they will donate. If they would rather donate time in coding and graphics, and they feel they've added something to the Athena community, then they shouldn't feel obliged to give money as well. The ai community is a sincere one. Most of the people I've seen are in it for curiosity's sake. They are dreamers (as the forum name Ai Dreams suggests). Also, many people are getting good at cracking software nowadays so... Anyway, I think you can always use other means of earning money, like selling custom packages or some other related sales. I'm sure I can think of some other way of earning money. (e.g. Haptek has a free player, but earns money by selling different types of editors.)

     What about customization?

Like I've already mentioned, I think people will help develop Athena. That is intended. I want to make it as easy as possible for any person to learn coding and edit Athena's behaviors. I want it to be highly customizable. I even made the user-interface very customizable. So yes, I want the community involved.


I hope to add many more ideas and thoughts about Athena soon. I intend to present an overview of the coding that I've created thus far and give some details as to why it matters: from language processing, search features, and database creation to various utilities and algorithms, I hope to present these in this thread. (Now don't fall asleep yet, it will get a lot more boring from here on out.  :P ) I think by making Athena a bit more open source I will probably gain some insight into other people's ideas and perhaps learn a thing or two about the ai community.

I don't want to get so greedy that I rob my support community. (That would be like shooting yourself in the foot.)

166 Comments | Started December 27, 2013, 09:12:38 AM

yotamarker

mini a.i puzzles in General AI Discussion

This thread will be about mini a.i puzzles: the way the brain solves problems and paradoxes.

1st puzzle: is it sacrificeable?
If you have old, very used shoes, you don't care if it is raining when you walk to work in them,
or if you use them as brakes while biking. BUT if they are new and expensive you would be
careful.
What makes the brain classify an object as high value, and what makes it be extra careful with it?

53 Comments | Started April 26, 2016, 06:12:33 PM

yotamarker

detective algorithms in General AI Discussion

very rough draft:

|-------------------------------|
|--blood stain---tool--culprit---------|
side alg : story leading to blood stain : |------tool x1----blood stain--|
spra lvl 2 recent story alg leading to tool x1
phase 2 : try to mutate culprit to person in lvl 2 story alg

Started June 29, 2016, 02:57:54 PM

Tyler

What new technologies will improve customer service? in AI News

What new technologies will improve customer service?
28 June 2016, 12:00 am

                    New technologies are raising consumer expectations when it comes to customer service, and companies will pay the price if they don't respond -- just ask Comcast. The company was very publicly called out last year for failing to deliver on its promised internet speeds by a customer equipped with a Raspberry Pi and a Twitter account.

ZDNet - Top HeadlinesLink

Source: AI in the News

To visit any links mentioned please view the original article, the link is at the top of this post.

Started June 29, 2016, 11:00:37 AM

Tyler

Microsoft's Nadella says 'A.I. must guard against bias' in AI News

Microsoft's Nadella says 'A.I. must guard against bias'
28 June 2016, 12:00 am

                    Satya Nadella is a believer in the vast promise of artificial intelligence. But the Microsoft CEO says humans and machines need to work together to solve the world's great societal challenges, including issues of diversity and inequality.

USA Today - Tech HeadlinesLink

Source: AI in the News

To visit any links mentioned please view the original article, the link is at the top of this post.

Started June 29, 2016, 05:00:09 AM

yotamarker

recognition of grids in General Project Discussion

How should I go about making an algorithm that uses a camera to recognize grids like game boards
and floor tiles?
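One simple way to start, before reaching for a full computer-vision library: if the grid lines are darker (or lighter) than the cells, projecting the image onto each axis makes the lines show up as dips in the row/ column means. A numpy-only sketch under that assumption; for real camera images, with perspective and noise, you would more likely use OpenCV's Hough line transform or, for game boards, `cv2.findChessboardCorners`:

```python
import numpy as np

def find_grid_lines(img, thresh=0.5):
    """Locate the rows and columns of grid lines in a grayscale image
    (values in [0, 1], lines darker than cells) by projecting intensity
    onto each axis: grid lines appear as dips in the row/column means."""
    row_means = img.mean(axis=1)
    col_means = img.mean(axis=0)
    cut_r = row_means.min() + thresh * (row_means.max() - row_means.min())
    cut_c = col_means.min() + thresh * (col_means.max() - col_means.min())
    rows = np.flatnonzero(row_means < cut_r)
    cols = np.flatnonzero(col_means < cut_c)

    def collapse(idx):
        # merge runs of adjacent indices into single line positions
        if idx.size == 0:
            return []
        groups, start = [], idx[0]
        for a, b in zip(idx, idx[1:]):
            if b != a + 1:
                groups.append((start + a) // 2)
                start = b
        groups.append((start + idx[-1]) // 2)
        return groups

    return collapse(rows), collapse(cols)
```

This only works for an axis-aligned, fronto-parallel view; once the camera is at an angle you need a perspective correction step first, which is exactly what the OpenCV routines above handle.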

5 Comments | Started June 24, 2016, 08:44:55 PM

yotamarker

stories in General AI Discussion

Do you think long stories are collections of algorithms?

2 Comments | Started June 28, 2016, 02:11:10 PM
The World's End

The World's End in Robots in Movies

The World's End is a 2013 British comic science fiction film directed by Edgar Wright, written by Wright and Simon Pegg, and starring Pegg, Nick Frost, Paddy Considine, Martin Freeman, Rosamund Pike and Eddie Marsan. The film follows a group of friends who discover an alien invasion during an epic pub crawl in their home town.

Gary King (Simon Pegg), a middle-aged alcoholic, tracks down his estranged schoolfriends and persuades them to complete "the Golden Mile", a pub crawl encompassing the 12 pubs of their hometown of Newton Haven. The group had previously attempted the crawl as teenagers in 1990 but failed to reach the final pub, The World's End.

Gary picks a fight with a teenager and knocks his head off, exposing a blue blood-like liquid and subsequently exposing him as an alien android. Gary's friends join him and fight more androids, whom they refer to as "blanks" to disguise what they are talking about.

May 31, 2016, 09:28:32 am
Botwiki.org Monthly Bot Challenge

Botwiki.org Monthly Bot Challenge in Websites

Botwiki.org is a site for showcasing friendly, useful, artistic online bots, and our Monthly Bot Challenge is a recurring community event dedicated to making these kinds of bots.

Feb 25, 2016, 19:46:54 pm
From Movies to Reality: How Robots Are Revolutionizing Our World

From Movies to Reality: How Robots Are Revolutionizing Our World in Articles

Robots were once just a work of human imagination. Found only in books and movies, not once did we think a time would come when we would be able to interact with robots in the real world. Eventually, in fact rapidly, the innovations we only dreamt of are now becoming a reality. To quote the great Stephen Hawking: "This is a glorious time to be alive for scientists." It is indeed the best of times, for technology has become so sophisticated that its growing power might even endanger humanity.

Jan 26, 2016, 10:12:00 am
Uncanny

Uncanny in Robots in Movies

Uncanny is a 2015 American science fiction film directed by Matthew Leutwyler and based on a screenplay by Shahin Chandrasoma. It is about the world's first "perfect" artificial intelligence (David Clayton Rogers) that begins to exhibit startling and unnerving emergent behavior when a reporter (Lucy Griffiths) begins a relationship with the scientist (Mark Webber) who created it.

Jan 20, 2016, 13:09:41 pm
AI Virtual Pets

AI Virtual Pets in Other

Artificial life, also called ALife, is simply the simulation of any aspect of life, whether through computers, robotics, or biochemistry (taken from The Free Dictionary). This site focuses on the software aspect of it.

Oct 03, 2015, 09:21:09 am
Why did HAL sing ‘Daisy’?

Why did HAL sing ‘Daisy’? in Articles

...a burning question posed by most people who have watched or read “2001: A Space Odyssey”: that is, why does the computer HAL-9000 sing the song ‘Daisy Bell’ as the astronaut Dave Bowman takes him apart?

Sep 04, 2015, 09:28:55 am
Humans

Humans in Robots on TV

Humans is a British-American science fiction television series. Written by the British team Sam Vincent and Jonathan Brackley, based on the award-winning Swedish science fiction drama Real Humans, the series explores the emotional impact of the blurring of the lines between humans and machines.

Aug 28, 2015, 09:13:37 am
Virtual Talk

Virtual Talk in Chatbots - English

[iTunes app] Virtual Talk is an AI chatting app that lets you talk with whomever you want. It remembers what you say and learns new dialogs. This app is one of the smartest chatbots in the world.

Aug 17, 2015, 13:33:09 pm
Robot Overlords

Robot Overlords in Robots in Movies

Not long after the invasion and occupation of Earth by a race of powerful robots wanting human knowledge and ingenuity, humans are confined to their homes. Leaving without permission would be to risk their lives. Through the electronic implants in their necks, the robot sentries are able to track the movements of humans in order to control them. If any person comes out of their home, they are warned by the robot sentries to get back inside. If they do not comply, they are shot immediately.

Long article on the making of here...

Aug 15, 2015, 14:42:25 pm
Iniaes

Iniaes in Chatbots - English

Original and humorous chat bot with many features.

Aug 06, 2008, 18:02:42 pm