avatar

Freddy

Japanese Robot Evolves Based on Its Surroundings in General Robotics Talk

Quote
Japan has a unique fascination with androids and the quest to make robots more like humans. One of the country’s most original thinkers in this area is Professor Takashi Ikegami of the University of Tokyo. He has created androids filled with sensors and artificial intelligence software. The technology allows them to perceive the outside world and react to it as they see fit. Hello World host Ashlee Vance traveled to Tokyo to meet with Professor Ikegami and see his latest android creation. The robot they encounter flails about and makes strange gurgling noises as it responds to their movements and conversation. While it all looks rudimentary today, the technology is the precursor of what Ikegami predicts will be a new robotic life form that has its own culture, language, and desires. What could go wrong?

Started Today at 08:26:25 pm
avatar

Art

Replika in General Chatbots and Software

A chatbot that learns so much about you over time that it sort of...becomes you...or an entity very much like you.

Check it out at:
https://replika.ai/

 




My question is, what is being done with all the information being presented to it while one is sharing all those tidbits? Hmm....

1 Comment | Started July 25, 2017, 09:49:19 pm
avatar

LOCKSUIT

Determined and at the core of AI. in General Project Discussion

Hello machine.

This is my project thread.

The reason no one is more determined to create AI than I am is that only I collect information from everywhere and build a precise hierarchy 24/7. After initialization, it took me only one year to discover the area of AI that is actually well developed. And I noticed it instantly. I noticed the core of AI on my first read. That's how fast my Hierarchy self-corrects me. It has now been 1.5 years since then, and I am here to tell you that I have empirical knowledge that I have the core of AI, and ASI! 100% guaranteed!

All of my posts on the forum sit in separate threads, mine and yours, but this thread is going to try to hold my next posts together so you can quickly and easily find, follow, and understand all of my work. Anything important I've said elsewhere is on my desktop, so you will hear about it again here. You don't currently have access to my desktop, only to my website, which stands in for it, while this thread is an extension of it. But this thread won't drift away from my desktop/website, since anything new in this thread will be copied there. Currently my website (and this extension thread) is awaiting my recent work, which I really shouldn't show you all of.

- Immortal Discoveries

90 Comments | Started March 12, 2017, 04:12:26 am
avatar

Tyler

Danielle Olson: Building empathy through computer science and art in Robotics News

Danielle Olson: Building empathy through computer science and art
31 May 2017, 4:59 am

Communicating through computers has become an extension of our daily reality. But as speaking via screens has become commonplace, our exchanges are losing inflection, body language, and empathy.

Danielle Olson ’14, a first-year PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), believes we can make digital information-sharing more natural and interpersonal, by creating immersive media to better understand each other’s feelings and backgrounds.

Olson’s research focuses on inventing and analyzing new forms of media, from gaming experiences to interactive narratives. Through a course last fall, she contributed to “The Enemy,” a virtual reality experience that lets users stand “face-to-face” with soldiers from opposing sides of global conflicts.

The project is the brainchild of photojournalist Karim Ben Khelifa, who worked on it with Fox Harrell, an associate professor of digital media with appointments in the MIT Comparative Media Studies/Writing Program and CSAIL. Khelifa traveled to places such as Israel, Palestine, and El Salvador to interview soldiers from different sides of conflicts. Under the guidance of Harrell, Olson helped work on algorithms that analyzed users’ body language in different scenarios. That information was then incorporated into the live experience: As the user listens to the soldiers, they can dynamically respond based on the user’s behavior.

Khelifa describes “The Enemy” as an effort to enable the public to develop more meaningful relationships to world events than they would simply by reading news articles.

“You’re looking someone in the eye as they describe death and war conflicts, and seeing their facial expressions and body language,” Olson says. “There’s a different level of empathy that you can cultivate with these sorts of technologies.”

Her other areas of research follow a similar thread of building empathy by examining different cultures. As part of Harrell’s Imagination, Computation, and Expression Laboratory, she’s working on developing interactive narrative experiences to help kids practice dealing with social identity issues. For example, one game involves an elf trying to get past a gatekeeper from a different clan, who may try managing the impressions others have of their identity to get past the gate. This work has already gained attention from notable artists like rapper Lupe Fiasco, who came into Harrell’s lab at MIT and offered feedback.  

Growing up, Olson got a late start to coding. As a kid she wasn't one to play video games or pull apart computers, and didn't even know what MIT was until she watched “Iron Man” as a high-schooler. At 17 she was accepted to MIT's Minority Introduction to Engineering and Science Program (MITES) program, and she returned the following year as an undergraduate.

She says that her passion for education comes from her mother, who came to the U.S. from Cameroon with only an eighth-grade education before going on to earn her master’s degree.

“I always hear my mom’s voice saying that education is the one thing nobody can take away from you,” Olson says.

As an MIT senior she founded Gique, a nonprofit focused on teaching local students skills in STEAM — science, technology, engineering, arts, and math — embracing the intersection of art and technology. Her team creates hands-on curricula, experiments, and activities to help students develop more holistic viewpoints of the world.

“A 2008 study on ‘No Child Left Behind’ showed that half of the nation's districts decreased class time for art, drama, history, and science, which left students with a narrow learning environment,” she says. “We need to fight back against policies that discourage interdisciplinary education.”

Olson says that it’s vital for people in power to use their influence to help give underrepresented groups more access to resources that can level the playing field.

“I had access to programs like FIRST Robotics and MITES because I didn’t have to pay for them,” she says. “They’re sponsored by people who put their money where their mouth is and who aren’t just acknowledging the need for workplace diversity: They’re actually taking steps to invest directly in people of color."

Outside of her research and educational work, Olson feeds her creative pursuits, whether it’s cooking, reading comic books, or taking care of her pet rabbit and cat.

“I see my place as raising the next generation of computer science warriors who ingrain their culture into the fabric of computing,” she says. “I think it’s important to build systems that aren’t catered only to certain populations, but actually represent many values and bolster our political capital as developers, engineers, and makers.”

Fast Facts

Favorite place for news: Twitter

One thing people would be surprised to know about her: She was an MIT cheerleader. The year she started and served as co-captain was the first time in MIT history that the cheerleading team went to nationals.

Advice to incoming students: “You’re going to have failures. The master has failed more times than the beginner has even tried. Make sure you have an identity outside of research, so it’s not threatened when you hit a bump in the road.”

Her tech role model: Stacie LeSure Gregory, a postdoc at the American Association of University Women (AAUW). “She’s dedicated her career to empowering women and underrepresented groups in STEM.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News: MIT News homepage

Started Today at 12:02:50 pm
avatar

Zero

Is there a "real time" chatbot engine? in General Chatbots and Software

Hi guys,

Yeah, "real time" isn't the right way to say it.

What I mean is... The chatbot engines I know about, like RiveScript and AIML, give you an answer instantly, as soon as you press Enter. But when you're in a chatroom, people don't answer instantly: you wait a few seconds, then someone says something, then a few seconds later someone else says something, etc.

I know we can easily simulate a delay before the answer of the chatbot, so it feels like someone is typing on a keyboard, but that's not my question.

The chatbot engines I know of work like a REPL. But is there a chatbot engine that would work like some sort of TCP server?

I imagine an engine that's permanently looping, thinking. Sometimes it receives a message from its botmaster, sometimes it sends a message. The messages it receives modify its thinking process. It's "real time". Is there such an engine somewhere?
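
To make the idea concrete, here is a minimal Python sketch of what I imagine (all the names are invented for illustration; as far as I know, no existing engine works this way). Incoming messages only perturb the bot's internal state, and the bot decides on its own tick when, or whether, to speak:

import asyncio
import random

class RealTimeBot:
    def __init__(self):
        self.inbox = asyncio.Queue()  # messages from the botmaster arrive here
        self.excitement = 0.0         # internal state that messages perturb

    async def think_forever(self):
        while True:                   # the permanent "thinking" loop
            try:
                msg = self.inbox.get_nowait()
                print(f"[heard] {msg}")
                self.excitement += 1.0           # input stirs the bot up
            except asyncio.QueueEmpty:
                pass
            self.excitement *= 0.9               # excitement decays over time
            if random.random() < self.excitement / 5:
                print("[bot] I've been thinking about what you said...")
                self.excitement -= 0.5           # speaking releases some energy
            await asyncio.sleep(0.5)             # one "tick" of thought

async def main():
    bot = RealTimeBot()
    thinker = asyncio.create_task(bot.think_forever())
    await asyncio.sleep(1)
    await bot.inbox.put("Hello there")
    await asyncio.sleep(5)            # the bot answers when it feels like it
    thinker.cancel()

asyncio.run(main())

A real engine would of course replace the excitement counter with an actual thinking process, and the inbox could just as well be fed by a TCP socket or a chatroom connection.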

Am I being understandable, at least?

24 Comments | Started June 24, 2017, 09:15:43 am
avatar

LOCKSUIT

What do you guys think about these new video games? in General Chat

What do you guys think about these new video games these days? I'm really interested in the views of Art, korrelan, and the rest of you.

I don't really mean the shooter video games; I mean the ones that are, like, fun or something.

For example take a look at this new Yoshi video game:



Do you guys like this video game?....

I like the older video games, from roughly 1980-2008; I have never found one I like after that. The newer ones all have the same feel: no increase in difficulty, made for children, no authenticity, too short, etc. I have a list about this, and a game-list of the best (that I like, at least), but I just wanted to quickly ask you guys, because it's interesting. Of course it comes down to born-with rewards versus the artificial rewards you make, and I'd think we all have somewhat similar ones with games, maybe.

I really, really like games and food and so on; I enjoy all fields. However, I'm currently abstaining, because I have a mission to fulfill.

5 Comments | Started July 24, 2017, 05:40:16 am
avatar

Tyler

Armando Solar-Lezama: Academic success despite an inauspicious start in Robotics News

Armando Solar-Lezama: Academic success despite an inauspicious start
26 May 2017, 4:59 am

When Armando Solar-Lezama was a third grader in Mexico City, his science class did a unit on electrical circuits. The students were divided into teams of three, and each team member had to bring in a light bulb, a battery, or a switch.

Solar-Lezama, whose father worked for an electronics company, volunteered to provide the switch. Using electrical components his father had brought home from work, Solar-Lezama built a “flip-flop” circuit and attached it to a touch-sensitive field effect transistor. When the circuit was off, touching the transistor turned it on, and when it was on, touching the transistor turned it off. “I was pretty proud of my circuit,” says Solar-Lezama, now an MIT professor of electrical engineering and computer science.

By the time he got to school, however, one of his soldered connections had come loose, and the circuit’s performance was erratic. “They failed the whole group,” Solar-Lezama says. “And everybody was like, ‘Why couldn’t you just go to the store and get a switch like normal people do?’”

The next year, in an introductory computer science class, Solar-Lezama was assigned to write a simple program that would send a few lines of text to a printer. Instead, he wrote a program that asked the user a series of questions, each question predicated on the response to the one before. The answer to the final question determined the text that would be sent to the printer.

This time, the program worked perfectly. But “the teacher failed me because that’s not what the assignment was supposed to be,” Solar-Lezama says. “The educational system was not particularly flexible.”

At that point, Solar-Lezama abandoned trying to import his extracurricular interests into the classroom. “I sort of brushed it off,” he recalls. “I was doing my own thing. As long as school didn’t take too much of my time, it was fine.”

So, in 1997, when Solar-Lezama’s father moved the family to College Station, Texas — the Mexican economy was still in the throes of the three-year-old Mexican peso crisis — the 15-year-old Armando began to teach himself calculus and linear algebra.

Accustomed to the autonomy of living in a huge city with a subway he could take anywhere, Solar-Lezama bridled at having to depend on rides from his parents to so much as go to the library. “For the first three years that I was in Texas, I was convinced that as soon as I turned 18, I was going to go back to Mexico,” he says. “Because what was I doing in this place in the middle of nowhere?” He began systematically educating himself in everything he would need to ace the Mexican college entrance exams.

At his Texan high school, however, he was placed by default in the lowest of the school’s three academic tracks, which is where most immigrants with imperfect English found themselves. Against the recommendations of the school administrators, he insisted on taking physics; within two weeks, his physics teacher had moved him up to a higher-level class.

By his junior year, Solar-Lezama was enrolled in the most demanding math and science classes the school offered, in most of which his classmates were seniors. But in the humanities, where he still struggled with the language — and, he admits, his own lack of interest — he remained on the lower track.

“In the time I was there, I got to move from one track to the other,” Solar-Lezama says. “It was really shocking to realize how different these tracks were.”

Outside the classroom, Solar-Lezama was a member of a team that finished second in the nation in the Department of Energy’s Science Bowl competition. He also won a regional science fair held at Texas A&M with a computer simulation he’d whipped up in an afternoon, when he and some friends realized that they wouldn’t be able to get a scrap-built hovercraft working by the fair deadline. And he started working for a local software startup, doing database coding.

But inside the classroom, “my record was very bimodal,” he says. Though he excelled in math and science, he ended his senior year ranked only about 100th in a class of 400.

Still, he decided to put his return to Mexico on hold. “By the time I was a senior in high school, I sort of found my place,” he says. “I was learning lots of things that I was interested in, and I decided that, ‘Okay, maybe I’ll stay here for college, and then I’ll go back.’”

His spotty academic performance, however, was an obstacle. MIT was one of several universities that denied him undergraduate admission. But the father of one of his Science Bowl teammates taught nuclear engineering at Texas A&M and, recognizing Solar-Lezama’s talent, encouraged him to apply for a generous scholarship offered through the department.

To ensure that international students could navigate the transition to a new educational system and, often, a new language, the university restricted the number of units they could carry as freshmen, and Solar-Lezama, his three years of American high school notwithstanding, counted as an international student. So to keep himself busy, he audited several courses outside the nuclear-engineering curriculum.

One of these was Introduction to Algorithms. Although he wasn’t formally enrolled at the time, Solar-Lezama did all the homework and took all the exams, and he ended up with the highest grade in the class.

“Before that point, I thought of programming as a useful skill,” Solar-Lezama says. “One of the things that really excited me about this class was that you could prove things about algorithms and get some guarantees about how something is going to work, and I found that extremely appealing. So I decided to switch majors to computer science.”

Graduating in three years, Solar-Lezama decided to postpone his return to Mexico a little longer, applying to graduate programs at MIT, Carnegie Mellon University, and the University of California at Berkeley. “I thought, if I don’t get in to any of them, fine, I’ll go back to Mexico,” he says. Once again, MIT turned him down, as did CMU. But he got into Berkeley.

Solar-Lezama arrived at Berkeley planning to continue his work on large parallel computing systems, but his conversations with his advisor, Ras Bodik, quickly took a different turn. Different types of simulations generally required different computational strategies. But implementing those strategies often required reshuffling the same low-level processes. Was it possible, Bodik and Solar-Lezama wondered, to devise a way to formulate the strategies broadly and automate the reshuffling?

Solar-Lezama thus found himself part of a small community of researchers working on “program synthesis,” or the automatic generation of computer programs. His  thesis project was a language called Sketch, which lets programmers describe program functionality in general terms and automatically fills in the computational details.

Sketch treats program synthesis as a search problem: The task is to search the space of all possible programs for one that can meet the requirements imposed by the general description. The chief innovation behind Sketch was a set of algorithms for rapidly paring down the search space, so that a satisfactory program could be found in real time.
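
As a toy illustration of the search view (a simplification of the idea, not Sketch's actual machinery), a synthesizer can simply enumerate candidate programs over a tiny expression grammar and return the first one that agrees with a specification given as input-output examples:

from itertools import product

def candidates(depth):
    """Yield (source, function) pairs for a tiny expression grammar."""
    yield "x", lambda x: x                       # the input variable
    for c in range(4):                           # small integer constants
        yield str(c), lambda x, c=c: c
    if depth > 0:                                # combine smaller expressions
        for (sa, fa), (sb, fb) in product(candidates(depth - 1), repeat=2):
            yield f"({sa} + {sb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x)
            yield f"({sa} * {sb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x)

def synthesize(examples, max_depth=2):
    """Search the program space for one matching every (input, output) pair."""
    for depth in range(max_depth + 1):
        for source, f in candidates(depth):
            if all(f(x) == y for x, y in examples):
                return source
    return None

# Ask for a program with f(0) == 1, f(1) == 3, f(5) == 11, i.e. 2*x + 1.
print(synthesize([(0, 1), (1, 3), (5, 11)]))     # e.g. "(x + (x + 1))"

Even this toy search grows exponentially with expression depth, which is why pruning algorithms like Sketch's are the crux of making synthesis practical.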

“There were three or four of us who were pushing this area and telling everybody who would listen that this was the right direction for programming systems research, and for a long time there was a lot of hostility toward these kinds of ideas,” Solar-Lezama says. “Little by little, we started converting a few more people, and all of a sudden they reached a critical mass, and now it’s an extremely active area of research.”

After graduating from Berkeley, Solar-Lezama went on the job market, and MIT finally made him an offer. In his seven years at the Institute, where he recently earned tenure, Sketch has remained the foundation of his research, which has developed along three parallel tracks.

The first track is the extension of Sketch, so that it can handle more diverse and complex computations. The second is the application of Sketch and its underlying machinery to particular problems — such as orienting new members of large programming teams toward the existing code base, automatically grading programming homework, and parallelizing code for faster execution on multicore chips.

Recently, Solar-Lezama’s group has also begun investigating the application of program synthesis to machine learning. Machine learning involves teaching a computer system to perform some classification task by presenting it with training examples. But suppose that the training data consists of a row of three squares and a row of three circles. Which image belongs to the same class, a row of three stars or four circles arranged in a square?

Existing machine-learning systems are good at learning to recognize circles from examples of circles, but they’re not as good at the kind of abstract pattern matching that humans do intuitively. A program synthesizer, however, is much more likely to converge on a program for producing three-object rows than one that sometimes produces rows and sometimes produces squares.

Having finally made it to a city with a good subway system, Solar-Lezama no longer has any plans to move back to Mexico. His wife has a Mexican father and spent much of her childhood in Mexico, but her mother is from Minnesota, and she had planned on settling in the U.S. when she and Solar-Lezama met in Berkeley. Their children, ages 6 and 3, might also find it hard to adjust to life in Mexico. Although they speak Spanish exclusively at home, they speak English at their school in Medford, Massachusetts, and, says Solar-Lezama, “they’re developing a Boston accent.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News: MIT News homepage

Started July 25, 2017, 12:00:49 pm
avatar

Zero

Robots should be made of wood! in General Robotics Talk

Imagine a billion robots on the face of the earth, in a few decades...

Robots today, from Asimo to Buddy, are made of plastic.

I believe robots should be made of wood, and their look should be inspired by beautiful sailboats, with flowing shapes, slim-line forms, and noble materials. Their presence would then be warm. Also, since some people like to play with CNC wood routers, wooden bots could be DIY, open-source, community-driven projects, instead of cost-prohibitive plug-n-play toys...

EDIT: Sorry for my poor English... if I could, I would use poetry to make you feel how beautiful wooden bots could be.

17 Comments | Started July 22, 2017, 02:00:02 pm
avatar

8pla.net

Using a forum to store a chatbot brain in Bot Conversations

Take a look. The idea is to use a forum in place of what AIML or ChatScript is normally used for, if you can imagine that. Forum members teach the chatbot simply by making forum posts. The chatbot matches the subject line of a post to the user's input, then chooses a response from one of the messages posted to that forum thread. So please make as many replies to your own posts as you like: start a new thread, then reply to it several times, or as many times as you like.

For Example:

New Forum Thread:

Subject   -> HELLO
Message 1 -> Hi there!
Message 2 -> Hello you.
Message 3 -> Greetings

Then later...

User says "Hello there" to Chatbot
Chatbot matches: Subject   -> HELLO
Chatbot chooses 2nd message posted and responds: Hello you.
------------------------------------------
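
In code, the lookup might look something like this rough Python sketch (hypothetical names and data; the real bot reads the live forum database, and whether it picks the second message, a random one, or something smarter is an implementation choice):

import random

# Each forum thread: subject line -> list of messages posted in that thread.
threads = {
    "HELLO": ["Hi there!", "Hello you.", "Greetings"],
}

def respond(user_input, threads):
    """Match the input against thread subjects, then pick a posted message."""
    words = set(user_input.upper().split())
    for subject, messages in threads.items():
        if subject in words:              # crude subject-line match
            return random.choice(messages)
    return "I haven't been taught about that yet."

print(respond("Hello there", threads))    # e.g. "Hello you."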

I temporarily turned off activation, so it is faster and easier to create a new account at http://chatbot.altervista.org. As soon as you create an account, you will have immediate access to the forum. No need to authenticate by email. This is to save you time.

Activation will be turned on again after this alpha test, but you can keep your forum account. Some here have already created an account, and that account is still good. The point is, I want to make it easy for you to log in to the forum to teach the chatbot. Remember, the chatbot will use your subject line to match what other people say to it, and then use your posted forum messages to pick a response to give to other people.

You are teaching the chatbot what to say simply by using the forum.  Other forum members have already started teaching the chatbot, so you can see how it works from those examples.  As soon as you post to the forum, you can try to ask the chatbot something related to what you just posted to the forum.

Thank you for your support.  This is an experiment for me.  Please let me know what you think about using a forum to store a chatbot brain.

Reference: http://chatbot.altervista.org




Started July 24, 2017, 11:18:17 pm
avatar

Tyler

Faster, more nimble drones on the horizon in Robotics News

Faster, more nimble drones on the horizon
25 May 2017, 8:30 pm

There’s a limit to how fast autonomous vehicles can fly while safely avoiding obstacles. That’s because the cameras used on today’s drones can only process images so fast, frame by individual frame. Beyond roughly 30 miles per hour, a drone is likely to crash simply because its cameras can’t keep up.

Recently, researchers in Zurich invented a new type of camera, known as the Dynamic Vision Sensor (DVS), that continuously visualizes a scene in terms of changes in brightness, at extremely short, microsecond intervals. But this deluge of data can overwhelm a system, making it difficult for a drone to distinguish an oncoming obstacle through the noise.

Now engineers at MIT have come up with an algorithm to tune a DVS camera to detect only specific changes in brightness that matter for a particular system, vastly simplifying a scene to its most essential visual elements.

The results, which they presented this week at the IEEE American Control Conference in Seattle, can be applied to any linear system that directs a robot to move from point A to point B as a response to high-speed visual data. Eventually, the results could also help to increase the speeds for more complex systems such as drones and other autonomous robots.

“There is a new family of vision sensors that has the capacity to bring high-speed autonomous flight to reality, but researchers have not developed algorithms that are suitable to process the output data,” says lead author Prince Singh, a graduate student in MIT’s Department of Aeronautics and Astronautics. “We present a first approach for making sense of the DVS’ ambiguous data, by reformulating the inherently noisy system into an amenable form.”

Singh’s co-authors are MIT visiting professor Emilio Frazzoli of the Swiss Federal Institute of Technology in Zurich, and Sze Zheng Yong of Arizona State University.

Taking a visual cue from biology

The DVS camera is the first commercially available “neuromorphic” sensor — a class of sensors that is modeled after the vision systems in animals and humans. In the very early stages of processing a scene, photosensitive cells in the human retina, for example, are activated in response to changes in luminosity, in real time.

Neuromorphic sensors are designed with multiple circuits arranged in parallel, similarly to photosensitive cells, that activate and produce blue or red pixels on a computer screen in response to either a drop or spike in brightness.

Instead of a typical video feed, a drone with a DVS camera would “see” a grainy depiction of pixels that switch between two colors, depending on whether that point in space has brightened or darkened at any given moment. The sensor requires no image processing and is designed to enable, among other applications, high-speed autonomous flight.

Researchers have used DVS cameras to enable simple linear systems to see and react to high-speed events, and they have designed controllers, or algorithms, to quickly translate DVS data and carry out appropriate responses. For example, engineers have designed controllers that interpret pixel changes in order to control the movements of a robotic goalie to block an incoming soccer ball, as well as to direct a motorized platform to keep a pencil standing upright.

But for any given DVS system, researchers have had to start from scratch in designing a controller to translate DVS data in a meaningful way for that particular system.

“The pencil and goalie examples are very geometrically constrained, meaning if you give me those specific scenarios, I can design a controller,” Singh says. “But the question becomes, what if I want to do something more complicated?”

Cutting through the noise

In the team’s new paper, the researchers report developing a sort of universal controller that can translate DVS data in a meaningful way for any simple linear, robotic system. The key to the controller is that it identifies the ideal value for a parameter Singh calls “H,” or the event-threshold value, signifying the minimum change in brightness that the system can detect.

Setting the H value for a particular system can essentially determine that system’s visual sensitivity: A system with a low H value would be programmed to take in and interpret changes in luminosity that range from very small to relatively large, while a high H value would exclude small changes, and only “see” and react to large variations in brightness.
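
As a rough numerical illustration (a simplification of the idea, not the authors' formulation), a DVS pixel can be modeled as firing an event whenever the brightness has drifted more than H away from its level at the last event; sweeping H then shows the sensitivity trade-off directly:

import random

def dvs_events(brightness, H):
    """Turn a brightness trace into (time, +1/-1) events with threshold H."""
    events = []
    ref = brightness[0]                   # level at the last event
    for t, b in enumerate(brightness):
        if abs(b - ref) >= H:
            events.append((t, 1 if b > ref else -1))
            ref = b                       # reset the reference level
    return events

# A slow brightening ramp with sensor noise sprinkled on top.
random.seed(0)
trace = [0.01 * t + random.gauss(0, 0.02) for t in range(200)]

print(len(dvs_events(trace, H=0.05)))     # low H: floods of events, noise included
print(len(dvs_events(trace, H=0.5)))      # high H: only the big trend survives

Choosing H well is exactly the trade-off the controller formalizes: too low and the system drowns in spurious events, too high and it misses real changes.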

The researchers formulated an algorithm first by taking into account the possibility that a change in brightness would occur for every “event,” or pixel activated in a particular system. They also estimated the probability for “spurious events,” such as a pixel randomly misfiring, creating false noise in the data.

Once they derived a formula with these variables in mind, they were able to work it into a well-known algorithm known as an H-infinity robust controller, to determine the H value for that system.

The team’s algorithm can now be used to set a DVS camera’s sensitivity to detect the most essential changes in brightness for any given linear system, while excluding extraneous signals. The researchers performed a numerical simulation to test the algorithm, identifying an H value for a theoretical linear system, which they found was able to remain stable and carry out its function without being disrupted by extraneous pixel events.

“We found that this H threshold serves as a ‘sweet-spot,’ so that a system doesn’t become overwhelmed with too many events,” Singh says. He adds that the new results “unify control of many systems,” and represent a first step toward faster, more stable autonomous flying robots, such as the Robobee, developed by researchers at Harvard University.

“We want to break that speed limit of 20 to 30 miles per hour, and go faster without colliding,” Singh says. “The next step may be to combine DVS with a regular camera, which can tell you, based on the DVS rendering, that an object is a couch versus a car, in real time.”

This research was supported in part by the Singapore National Research Foundation through the SMART Future Urban Mobility project.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News: MIT News homepage

Started July 24, 2017, 12:01:07 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app; messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27
Star Wars: The Last Jedi

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am
Alien: Covenant

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am
Black Eyed Peas - Imma Be Rocking That Body

Black Eyed Peas - Imma Be Rocking That Body in Video

For the robots of course...

Jul 05, 2017, 22:02:31
Winnie

Winnie in Assistants

[Messenger] The Chatbot That Helps You Launch Your Website.

Jul 04, 2017, 23:56:00
Conversation, Deception and Intelligence

Conversation, Deception and Intelligence in Articles

A blog dedicated to science, technology, and my interests in music, art, film and especially to Alan Turing for his Imitation Game: a measure for machine intelligence through text-based dialogue.

Jul 04, 2017, 22:29:29
Transformers: The Last Knight

Transformers: The Last Knight in Robots in Movies

Transformers: The Last Knight is a 2017 American science fiction action film based on the toy line of the same name created by Hasbro. It is the fifth installment of the live-action Transformers film series and a direct sequel to 2014's Transformers: Age of Extinction. Directed by Michael Bay, the film features Mark Wahlberg returning from Age of Extinction, along with Josh Duhamel and John Turturro reprising their roles from the first three films, with Anthony Hopkins joining the cast.

Humans and Transformers are at war, Optimus Prime is gone. The key to saving our future lies buried in the secrets of the past, in the hidden history of Transformers on Earth.

Jun 26, 2017, 03:20:32 am
Octane AI

Octane AI in Tools

Our pre-built features make it easy for you to add content, messages, discussions, form filling, merchandise showcases, and more to your bot.

Convos are conversational stories that you can share with your audience. It's as easy as writing a blog post, and it's the best way to increase distribution for your bot.

Jun 25, 2017, 02:57:50 am
Chatfuel

Chatfuel in Tools

Chatfuel was born in the summer of 2015 with the goal to make bot-building easy for anyone. We started on Telegram and quickly grew to millions of users. Today we're focusing mainly on making it easy for everyone to build chatbots on Facebook Messenger, where our users include NFL and NBA teams, publishers like TechCrunch and Forbes, and millions of others.

We believe in the power of chatbots to strengthen your connection to your audience—whether that's your customers, readers, fans, or others. And we're committed to making that as easy as we can.

Jun 24, 2017, 01:10:12 am