Monitoring neural activity with Optical Electrophysiology in General Chat

Reading up on how some researchers are monitoring brain activity at a neural level in living animals, I ran into optical electrophysiology. It's literally monitoring the actual firings of individual neurons, and they can trace the innervation between neurons to watch them interact as gated switches, and even interact with the neurons to make them fire or inhibit them!  :D

And here's a video abstract of a paper describing experiments with a mouse. Don't worry, the mouse wasn't killed.

2 Comments | Started July 30, 2020, 02:33:35 AM

Cool little robotics kit with lots of features in AI Programming

I was a little blown away by how much is included with this little system. It looks like AI software is getting mature enough to be more public-domain-ish than before.
https://www.youtube.com/watch?v=KFSgkjNqxLM

Started August 12, 2020, 05:26:11 PM

Data systems that learn to be better in Robotics News

Data systems that learn to be better
10 August 2020, 9:00 pm

Big data has gotten really, really big: By 2025, all the world’s data will add up to an estimated 175 trillion gigabytes. For a visual, if you stored that amount of data on DVDs, it would stack up tall enough to circle the Earth 222 times.

One of the biggest challenges in computing is handling this onslaught of information while still being able to efficiently store and process it. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believes that the answer rests with something called “instance-optimized systems.”

Traditional storage and database systems are designed to work for a wide range of applications because of how long it can take to build them — months or, often, several years. As a result, for any given workload such systems provide performance that is good, but usually not the best. Even worse, they sometimes require administrators to painstakingly tune the system by hand to provide even reasonable performance.

In contrast, the goal of instance-optimized systems is to build systems that optimize and partially re-organize themselves for the data they store and the workload they serve.

“It’s like building a database system for every application from scratch, which is not economically feasible with traditional system designs,” says MIT Professor Tim Kraska.

As a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. Tsunami uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. What’s more, its datasets can be organized via a series of "learned indexes" that are up to 100 times smaller than the indexes used in traditional systems.
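To make “learned indexes” concrete, here is a minimal sketch of the core idea from that line of work, using only the Python standard library (an illustration of the general technique, not the MIT code): a simple model learns to predict a key's position in a sorted array, and a search bounded by the model's worst-case error corrects the prediction.

```python
# Minimal sketch of a "learned index" (illustrative only, not the
# authors' code): a model predicts where a key sits in a sorted array,
# and a bounded local search corrects the prediction.
import bisect

class LearnedIndex:
    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # Fit a least-squares line mapping key -> position (the key CDF).
        mean_k = sum(sorted_keys) / n
        mean_p = (n - 1) / 2
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(sorted_keys))
        var = sum((k - mean_k) ** 2 for k in sorted_keys)
        self.slope = cov / var if var else 0.0
        self.intercept = mean_p - self.slope * mean_k
        # Record the worst-case prediction error to bound the search window.
        self.max_err = max(abs(self._predict(k) - i)
                           for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        pos = self._predict(key)
        lo = max(0, pos - self.max_err)
        hi = min(len(self.keys), pos + self.max_err + 1)
        # Binary search only inside the error-bounded window.
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LearnedIndex(list(range(0, 1_000_000, 7)))  # synthetic sorted keys
print(idx.lookup(700))  # -> 100, the position of key 700
```

The “index” here is just a slope, an intercept, and an error bound, which hints at why learned indexes can be so much smaller than traditional tree structures.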

Kraska has been exploring the topic of learned indexes for several years, going back to his influential work with colleagues at Google in 2017.

Harvard University Professor Stratos Idreos, who was not involved in the Tsunami project, says that a unique advantage of learned indexes is their small size, which, in addition to space savings, brings substantial performance improvements.

“I think this line of work is a paradigm shift that’s going to impact system design long-term,” says Idreos. “I expect approaches based on models will be one of the core components at the heart of a new wave of adaptive systems.”

Bao, meanwhile, focuses on improving the efficiency of query optimization through machine learning. A query optimizer rewrites a high-level declarative query into a query plan, which can actually be executed over the data to compute the query's result. However, there is often more than one query plan that could answer a given query; picking the wrong one can cause a query to take days to compute the answer, rather than seconds.

Traditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.

By fusing the two systems together, Kraska hopes to build the first instance-optimized database system that can provide the best possible performance for each individual application without any manual tuning.

The goal is not only to relieve developers of the daunting and laborious process of tuning database systems, but also to provide performance and cost benefits that are not possible with traditional systems.

Traditionally, the systems we use to store data offer only a few storage options and, because of this, they cannot provide the best possible performance for a given application. Tsunami can dynamically change the structure of the data storage based on the kinds of queries it receives, and create new ways to store data that are not feasible with more traditional approaches.

Johannes Gehrke, a managing director at Microsoft Research who also heads up machine learning efforts for Microsoft Teams, says that this work opens up many interesting applications, such as doing so-called “multidimensional queries” in main-memory data warehouses. Harvard’s Idreos also expects the project to spur further work on how to maintain the good performance of such systems when new data and new kinds of queries arrive.

Bao is short for “bandit optimizer,” a play on words related to the so-called “multi-armed bandit” analogy where a gambler tries to maximize their winnings at multiple slot machines that have different rates of return. The multi-armed bandit problem is commonly found in any situation that has tradeoffs between exploring multiple different options, versus exploiting a single option — from risk optimization to A/B testing.
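For readers who want to see the exploration/exploitation tradeoff in code, here is a minimal epsilon-greedy bandit sketch. This is just an illustration of the general idea, not Bao's actual learned optimizer; the arms and reward function are hypothetical. Each arm could stand for a candidate query plan, with the reward being, say, negative execution time.

```python
# Minimal epsilon-greedy multi-armed bandit (illustrative; Bao itself
# uses a more sophisticated learned model). Each arm could stand for a
# candidate query plan; "reward" could be negative execution time.
import random

def epsilon_greedy(reward_of_arm, n_arms, n_rounds=1000, epsilon=0.1):
    counts = [0] * n_arms    # pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                     # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        r = reward_of_arm(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return values

# Three hypothetical "plans" with different average payoffs.
true_means = [0.2, 0.5, 0.8]
values = epsilon_greedy(lambda a: random.gauss(true_means[a], 0.1), 3)
print(values)  # estimates should approach [0.2, 0.5, 0.8]
```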

“Query optimizers have been around for years, but they often make mistakes, and usually they don’t learn from them,” says Kraska. “That’s where we feel that our system can make key breakthroughs, as it can quickly learn for the given data and workload what query plans to use and which ones to avoid.”

Kraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time. In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.

“Our hope is that a system like this will enable much faster query times, and that people will be able to answer questions they hadn’t been able to answer before,” says Kraska.

A related paper about Tsunami was co-written by Kraska, PhD students Jialin Ding and Vikram Nathan, and MIT Professor Mohammad Alizadeh. A paper about Bao was co-written by Kraska, Marcus, PhD students Parimarjan Negi and Hongzi Mao, visiting scientist Nesime Tatbul, and Alizadeh.

The work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started August 11, 2020, 12:01:14 PM

XKCD Comic : 26-Second Pulse in XKCD Comic

26-Second Pulse
10 August 2020, 5:00 am

There are some papers arguing that there's a volcanic component, but I personally think they're just feeling guilty and trying to cover the trail.

Source: xkcd.com

Started August 11, 2020, 12:01:14 PM

utilization of others in General AI Discussion

some AI skills require help from others.
like gaming, you need a partner. or maybe you are injured, you need someone to carry you.

what triggers summoning others ?

2 Comments | Started August 08, 2020, 10:33:50 AM

AI Top Gun August 18 in AI News

https://www.engadget.com/darpa-alphadogfight-championship-043711508.html

Quote
Due to the coronavirus pandemic, DARPA will no longer hold an in-person event for its third and final AlphaDogfight Trial that’s scheduled to take place from August 18th to the 20th. It’ll be held virtually instead, and both participants and viewers alike can watch online as artificial intelligence (AI) algorithms control simulated F-16 fighter planes in aerial combat. By the end of the three-day event, viewers will even get to witness a matchup between the top AI and an experienced Air Force fighter pilot who’ll also be controlling a virtual F-16.

https://events.jhuapl.edu/event/ab896ac7-6eef-4676-975c-74c6a2670dcd/summary

Quote
Interested viewers will have to register beforehand to be able to tune in: US citizens have until August 17th to sign up, while everyone else has until August 11th.

That's today, folks, so get in quick.

Started August 11, 2020, 02:10:43 AM

Releasing full AGI/evolution research in General Project Discussion

This is most of my notes, images, algorithms, etc., summarized/unified in their recent forms. I am 24 and started all this work at 18. I make discoveries in my brain using mostly vision (visual language: shapes, cats, etc., which have context and explain each other like entries in a dictionary, a small-world network of friendly connections).

https://www.youtube.com/watch?v=Us6gqYOMHuU

I have 2 more videos to share but they're not yet uploaded, and 2 more notes. The last note will have some very recent good data, but the ones not yet given are of less immediate importance. The long file still has a lot of recent knowledge in it, though. It's better you know all of it, at least the movie.

notes
https://paste.ee/p/mcnEk
https://paste.ee/p/kQLCx
https://paste.ee/p/CvSsB
https://paste.ee/p/lJYMP
https://paste.ee/p/EmCZt

Code-only of my advanced-ngram 'gpt2':
https://paste.ee/p/7DG3M
https://paste.ee/p/XvVp5
result:
The software was made on a
The software was made on a wide variety of devices, and operating apps and applications that users can easily read as an app for android. It is a bit of a difference, but i was able to get it. The developers are not going to make it through a web applications, and devices i have seen in the running for the mobile apps. Applications allows users to access applications development tools, and allow applications of the app store. A multimedia entertainment entertainment device, and allows platforms enabled access to hardware interfaces. Using a bit of html application app developers can enable users to access applications to investors, and provide a more thorough and use of development. The other a little entertainment media, and user development systems integration technology. Applications allows users to automatically provide access to modify, optimize capability allows users to easily enable. Both users and software systems, solutions allowing owners software solutions solutions to integrate widgets customers a day. And if you are accessing services product, and mobile applications remotely access to the software companies can easily automate application access to hardware devices hardware systems creators and technologies. Builders and developers are able to access the desktop applications, allowing users access allows users to
((I checked against the 400MB dataset: there are no long copy-pastes, only 3-5 words at most.))
https://www.youtube.com/watch?v=Mah0Bxyu-UI&t=2s
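For anyone who wants to experiment along these lines without wading through the pastes, here is a minimal word-level n-gram generator. This is my own sketch of the general technique, not the poster's code, and the toy corpus is made up:

```python
# Minimal word-level n-gram text generator (an illustration of the
# general technique, not the code in the pastes above).
import random
from collections import defaultdict

def train(text, n=2):
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        context = tuple(words[i:i + n])
        model[context].append(words[i + n])  # record observed next word
    return model

def generate(model, seed, n=2, length=30):
    out = seed.split()
    for _ in range(length):
        candidates = model.get(tuple(out[-n:]))
        if not candidates:
            break  # unseen context; real systems back off to shorter n-grams
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("the software was made on a wide variety of devices and "
          "the software was made for users who access applications on devices")
model = train(corpus)
print(generate(model, "the software"))
```

A real generator would back off to shorter contexts instead of stopping, and a 400MB corpus would want compact hashed context tables rather than Python lists.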

I almost have GPT-2 understood, as shown at the end of the movie, but I need help. Does anyone understand its inner workings? Looking to collaborate.

I recommend you do this as well and mentor each other.

more data by my top-of-mind pick:
AGI is an intelligent Turing Tape. It has an internal memory tape and an external memory tape: the notepad, the desktop, the internet. Like a Turing Tape, it decides where to look/pay attention, what state to be in now, and what to write, based on what it reads and what state it is in (the what/where brain paths). It will internally translate and change state by staying still or moving forwards/backwards in spacetime. It'll decide whether to look at the external desktop and where to look: notepad? where on the notepad? internet? where on the internet?
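To ground the analogy, here is the classical construct being extended: a minimal Turing machine loop (a textbook sketch, not the poster's system). A transition table maps (state, symbol) to (write, move, next state), exactly the read/state/write cycle described above.

```python
# Minimal Turing machine loop: the classical construct behind the
# analogy above. delta maps (state, symbol) -> (write, move, state).
def run(tape, delta, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))           # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")       # "_" is the blank symbol
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example: flip every bit, then halt at the first blank.
delta = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run("1011", delta))  # -> "0100_"
```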

It's given Big Diverse Data and is trying to remove Big Diverse Data (Dropout/Death) so it can compress the network to lower Cost/Error and hence learn the general facets/patterns of the universe exponentially better, while still being able to re-generate missing data despite having a small-world network (all quantized dictionary words explain each other). It uses backpropagation to adjust the weights so the input will activate the correct node at the end. That node can be activated by a few different sorts of images (side view of a cat, front view of a paw, a cat ear, a mountain); it's a multi-dimensional representation space. Since the network learns patterns (look up word2vec/GloVe; it's the same as seq2seq) by lowering error cost through self-attention evolution/self-recursion of data augmentation (self-imitation in your brain using quantized visual features/nodes), it doesn't modify its structure by adjusting existing connections (using weights/strengths) to remove nodes; rather, it adjusts connection weights to remove error and ignores node-count cost.
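The backpropagation mechanism mentioned here is standard. As a refresher, this tiny sketch trains a single linear neuron by gradient descent (a textbook example, unrelated to the poster's architecture):

```python
# Tiny backpropagation example (standard textbook technique, not the
# poster's system): one linear neuron learns weights by gradient descent.
def train_neuron(samples, lr=0.1, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = w[0] * x1 + w[1] * x2 + b  # forward pass
            err = y - target               # error at the output node
            w[0] -= lr * err * x1          # backpropagate: dE/dw = err * x
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Learn y = 2*x1 - x2 from a few examples.
data = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3)]
print(train_neuron(data))  # weights approach [2, -1], bias near 0
```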

Intelligence is defined as being flexible/general using little data, but walker bots can only solve what is in front of them; we need a thinker like GPT-2. You can see the mind has evolved to simulate/forecast/predict the future using evolutionary mental RL self-imitation/self-recursion of data. And intelligence is for survival and immortality, trying to find food and breed to sustain life systems; it's just an infection/evolution of matter/energy, a data evolution.

130 Comments | Started December 25, 2019, 09:53:06 PM

sensible design principles for complex systems in General AI Discussion

https://www.youtube.com/watch?v=jhiAmSLOymc

Since errors and accidents cannot be avoided, we might as well make systems which can deal with, and even benefit from, them. If you can’t beat them, join them.

2 Comments | Started August 08, 2020, 03:30:50 AM

Shrinking deep learning’s carbon footprint in Robotics News

Shrinking deep learning’s carbon footprint
7 August 2020, 8:45 pm

In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.

But it came at a hefty price: at least $4.6 million and 355 years in computing time, assuming the model was trained on a standard neural network chip, or GPU. The model’s colossal size — 1,000 times larger than a typical language model — is the main factor in its high cost.

“You have to throw a lot more computation at something to get a little improvement in performance,” says Neil Thompson, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”

Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.

“We need to rethink the entire stack — from software to hardware,” says Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence. “Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”

Computational limits have dogged neural networks from their earliest incarnation — the perceptron — in the 1950s. As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters — the learned weights that tie the model together — making it 100 times bigger than its predecessor, itself just a year old.

In work posted on the pre-print server arXiv, Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.

Toward leaner, greener algorithms

The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea for recognizing actions in video and in real life to make models more compact. In a paper at the European Conference on Computer Vision (ECCV) in August, researchers at the MIT-IBM Watson AI Lab describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.

Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.
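A rough sketch of that selection idea, heavily simplified and not the paper's actual policy network (the scoring and classification functions below are stand-ins):

```python
# Rough sketch of adaptive frame selection (a simplification of the
# idea, not the paper's model): a cheap policy scores frames, and only
# the top-scoring ones get expensive high-resolution processing.
def classify_video(frames, cheap_score, expensive_model, budget=4):
    ranked = sorted(range(len(frames)), key=lambda i: cheap_score(frames[i]),
                    reverse=True)
    keep = sorted(ranked[:budget])              # most relevant frames, in order
    outputs = [expensive_model(frames[i]) for i in keep]
    return sum(outputs) / len(outputs)          # aggregate into a clip score

frames = list(range(16))                        # stand-in for 16 video frames
score = classify_video(frames,
                       cheap_score=lambda f: f % 5,        # toy relevance policy
                       expensive_model=lambda f: f * 0.1)  # toy per-frame logit
print(score)
```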

“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author, Rogerio Feris, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”

In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. Song Han, an assistant professor at MIT, has used automated search to design models with fewer weights, for language understanding and scene recognition, where picking out looming obstacles quickly is acutely important in driving applications.

In a paper at ECCV, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.

In another recent paper, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say.
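As a toy illustration of evolutionary search in general, not the authors' system, the loop below mutates candidate "architectures" (here just a depth/width pair with a made-up fitness) and keeps the fittest:

```python
# Toy evolutionary search (illustrative of the general method, not the
# authors' system): mutate candidate "architectures", keep the fittest.
import random

def evolve(fitness, mutate, seed, population=20, generations=30):
    pop = [mutate(seed) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:population // 4]                  # selection
        pop = parents + [mutate(random.choice(parents))  # mutation
                         for _ in range(population - len(parents))]
    return max(pop, key=fitness)

# Toy search space: (depth, width) pairs with a made-up fitness whose
# optimum sits at depth 6, width 128.
mutate = lambda a: (max(1, a[0] + random.choice((-1, 0, 1))),
                    max(8, a[1] + random.choice((-8, 0, 8))))
fitness = lambda a: -((a[0] - 6) ** 2) - ((a[1] - 128) ** 2) / 64.0
print(evolve(fitness, mutate, (4, 64)))  # should approach (6, 128)
```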

In a third approach, researchers are probing the essence of deep nets to see if it might be possible to train just a small part of even hyper-efficient networks like those above. Under their lottery ticket hypothesis, PhD student Jonathan Frankle and MIT Professor Michael Carbin propose that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”

They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, in a paper at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome.
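The procedure is easy to sketch on a toy problem. The snippet below runs the prune-and-rewind loop on a tiny linear regression, assuming NumPy is available; real lottery-ticket experiments use full deep networks, so this only illustrates the mechanics:

```python
# Compact sketch of the lottery-ticket procedure with weight rewinding
# (illustrative; the papers use deep nets, not this toy regression).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
true_w = np.zeros(20)
true_w[:5] = rng.normal(size=5)              # sparse ground truth
y = X @ true_w

def train(w, mask, steps=500, lr=0.05):
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(X)
        w = w - lr * grad * mask             # pruned weights stay frozen
    return w

w0 = rng.normal(size=20) * 0.1               # initial weights
w_early = train(w0, np.ones(20), steps=10)   # "early point" to rewind to
w_full = train(w_early, np.ones(20))         # train the full network

# Prune: keep the top 25% of weights by magnitude -> the "winning ticket".
mask = np.zeros(20)
mask[np.argsort(np.abs(w_full))[-5:]] = 1.0

# Rewind surviving weights to their early values, retrain only the ticket.
w_ticket = train(w_early * mask, mask)
print("full loss  :", float(np.mean((X @ w_full - y) ** 2)))
print("ticket loss:", float(np.mean((X @ (w_ticket * mask) - y) ** 2)))
```

Note the downside Morcos describes: finding the mask still required training the full model first.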

In less than two years, the lottery ticket idea has been cited more than 400 times, including by Facebook researcher Ari Morcos, who has shown that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.

“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it's all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”

Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.

Hardware designed for efficient deep net algorithms

As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.

Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with the discovery that video-game graphical chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.

Much of this work hinges on finding ways to store and reuse data locally, across the chip’s processing cores, rather than waste time and energy shuttling data to and from a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.

Vivienne Sze, a professor at MIT, has literally written the book on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely-varying shapes of both large and small deep learning models. Called Eyeriss 2, the chip uses 10 times less energy than a mobile GPU.

Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.

“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”

Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost his title to a computer, but his performance was fueled by a mere 20 watts of power. AlphaGo, by contrast, burned an estimated megawatt of energy, or 500,000 times more.

Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.

An electrochemical device, developed at MIT and recently published in Nature Communications, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them. The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.

“Even though it is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says the study’s senior author, Bilge Yildiz, a professor at MIT.

Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.

“For all of these reasons, we need to embrace efficient AI,” she says.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started August 08, 2020, 12:00:06 PM

XKCD Comic : Mathematical Symbol Fight in XKCD Comic

Mathematical Symbol Fight
7 August 2020, 5:00 am

Oh no, a musician just burst in through the door confidently twirling a treble clef.

Source: xkcd.com

Started August 08, 2020, 12:00:05 PM
Real Steel

Real Steel in Robots in Movies

Real Steel is a 2011 American science-fiction sports film starring Hugh Jackman and Dakota Goyo.

In 2020, human boxers have been replaced by robots. Charlie Kenton, a former boxer, owns the robot "Ambush" but loses it in a fight against a bull belonging to promoter and carnival owner Ricky, who rigged the fight to mess with Charlie, whom he sees as a joke, partly because he beat Charlie up the last time they competed, after Charlie bailed on the bet. Having bet that Ambush would win, Charlie now owes Ricky a debt he can't pay, and he runs out on it.

Apr 20, 2020, 14:25:24 pm
Singularity

Singularity in Robots in Movies

Singularity is a Swiss/American science fiction film. It was written and directed by Robert Kouba, and its first shoot, in 2013, starred Julian Schaffner, Jeannine Wacker and Carmen Argenziano. The film was first released in 2017, after further scenes with John Cusack were added.

In 2020, robotics company C.E.O. Elias VanDorne reveals Kronos, the supercomputer he has invented to end all war. Kronos decides that mankind is responsible for all war, and it tries to use robots to kill all humans. VanDorne and Damien Walsh, a colleague, upload themselves into Kronos and watch the destruction. Ninety-seven years later, Andrew, a kind-hearted young man, wakes up in a ruined world. VanDorne and Walsh, still in Kronos, watch Andrew meet Calia, a teenage girl who seeks the last human settlement, the Aurora. Though Calia is at first reluctant to let Andrew accompany her, the two later fall in love.

Apr 18, 2020, 13:37:25 pm
Star Wars: Rogue One

Star Wars: Rogue One in Robots in Movies

Rogue One follows a group of rebels on a mission to steal the plans for the Death Star, the Galactic Empire's super weapon, just before the events of A New Hope.

Former scientist Galen Erso lives on a farm with his wife and young daughter, Jyn. His peaceful existence comes crashing down when the evil Orson Krennic takes him away from his beloved family. Many years later, Galen becomes the Empire's lead engineer for the most powerful weapon in the galaxy, the Death Star. Knowing that her father holds the key to its destruction, Jyn joins forces with a spy and other resistance fighters to steal the space station's plans for the Rebel Alliance.

One of the resistance fighters is K-2SO, a droid. He is a CGI character voiced and performed through motion capture by Alan Tudyk. In the film, K-2SO is a KX-series security droid originally created by the Empire.

Feb 25, 2020, 18:50:48 pm
Practical Artificial Intelligence: Machine Learning, Bots, and Agent Solutions Using C#

Practical Artificial Intelligence: Machine Learning, Bots, and Agent Solutions Using C# in Books

Discover how Artificial Intelligence (AI) at all levels can be present in the most unimaginable scenarios of ordinary life. This book explores subjects such as neural networks, agents, multi-agent systems, supervised learning, and unsupervised learning. These and other topics will be addressed with real-world examples, so you can learn fundamental concepts with AI solutions and apply them to your own projects.

People tend to talk about AI as something mystical and unrelated to their ordinary lives. Practical Artificial Intelligence provides simple explanations and hands-on instructions. Rather than focusing on theory and overly scientific language, this book will enable practitioners of all levels to not only learn about AI but implement its practical uses.

Feb 10, 2020, 00:14:42 am
Robot Awakening (OMG I'm a Robot!)

Robot Awakening (OMG I'm a Robot!) in Robots in Movies

Danny discovers he is not human, he is a robot - an indestructible war machine. His girlfriend was kidnapped by a mysterious organization of spies who are after him & now he must go on a journey to save his girl and find out why the hell he is a robot?!

Feb 09, 2020, 23:55:45 pm
Program Y

Program Y in AIML / Pandorabots

Program Y is a fully compliant AIML 2.1 chatbot framework written in Python 3. It includes an entire platform for building your own chat bots using Artificial Intelligence Markup Language, or AIML for short. 

Feb 01, 2020, 15:37:24 pm
The AvatarBot

The AvatarBot in Tools

The AvatarBot helps you in finding an Avatar for your Chatbot. Answer a few questions and get a match. Keep trying to get the one you really like.

Dec 18, 2019, 14:51:56 pm
Eva

Eva in Chatbots - English

Our chatbot - Eva - was created by Stanusch Technologies SA. Eva, just 4 weeks after launch, competed in Swansea (UK) for the Loebner Prize 2019 with programs such as Mitsuku and Uberbot! Now, she is in the top 10 most-humanlike bots in the world! :)

Is it possible for Eva to pass the Turing test? Its creators believe it is.

Eva has her own personality: she is 23 years old, a student from the Academy of Physical Education in Katowice (Lower Silesia district, Poland). She is a very charming and nice young woman who loves to play volleyball and read books.

Dec 14, 2019, 13:10:13 pm
Star Wars: Episode IX – The Rise of Skywalker

Star Wars: Episode IX – The Rise of Skywalker in Robots in Movies

Star Wars: The Rise of Skywalker (also known as Star Wars: Episode IX – The Rise of Skywalker) is an American epic space opera film produced, co-written, and directed by J. J. Abrams.

A year after the events of The Last Jedi, the remnants of the Resistance face the First Order once again—while reckoning with the past and their own inner turmoil. Meanwhile, the ancient conflict between the Jedi and the Sith reaches its climax, altogether bringing the Skywalker saga to a definitive end.

Nov 15, 2019, 22:31:39 pm
Terminator: Dark Fate

Terminator: Dark Fate in Robots in Movies

Terminator: Dark Fate is a 2019 American science fiction action film directed by Tim Miller and created from a story by James Cameron. Cameron considers the film a direct sequel to his films The Terminator (1984) and Terminator 2: Judgment Day. The film stars Linda Hamilton and Arnold Schwarzenegger returning in their roles of Sarah Connor and the T-800 "Terminator", respectively, reuniting after 28 years.

SPOILERS:

In 1998, three years after defeating the T-1000 and averting the rise of the malevolent artificial intelligence (AI) Skynet, Sarah Connor and her teenage son John are relaxing on a beach in Guatemala. A T-800 Terminator, sent from the future before Skynet's erasure, arrives and shoots John, killing him.

Mackenzie Davis stars as Grace: a soldier from the year 2042, adopted by Resistance leader Daniella Ramos, converted into a cyborg, and sent back by her adoptive mother to protect Ramos's younger self from a new advanced Terminator prototype.

Oct 29, 2019, 21:27:46 pm
Life Like

Life Like in Robots in Movies

A couple, James and Sophie, buy an android called Henry to help around the house.

In the beginning, this is perfect for both James and Sophie, as Henry does housework and makes a good companion for Sophie. But when Henry's childlike brain adapts by developing emotions, complications begin to arise.

Oct 29, 2019, 21:14:49 pm
