Language feat. in General AI Discussion

In your opinion, what does a language need to sustain intelligence? That language could be formal (or not) and be a tool for social interactions, inner discourse, and so on. What's the minimal set of features needed?

For example, distinguishing past/present/future seems useful. Some kind of "syntactic sugar", so to speak, for saying "me" also seems useful. These are examples, and I'm looking for a common subset.

I know a language named Toki Pona. I love it, but I don't know if it's rich enough.

14 Comments | Started January 09, 2022, 05:58:50 pm

Human trials begin in 2022! in General Chat

Getting a wired interface to your brain that can directly connect to your PC, phone, or the internet in general is happening! Knowing that firms tend to work out the kinks on early adopters, I think I'm going to wait until at least version 2.0 comes out. That way there will probably be more bandwidth and fewer complications than the first versions will have. But I will eventually get one implanted.  8)

Wired

5 Comments | Started December 29, 2021, 04:23:19 pm

Where we are at with AI so far in General Chat

So I know lots of people are working on different parts of AI, but let's look at the systems below: they are just a few AIs, but they pack a lot of power. As you know, I have tried JUKEBOX and GPT-3, which feel very close to human-level text completion and audio completion (e.g. techno), in that you can give them any context and they will react/predict what to do next like we would. I have not yet tried NUWA, which does text-to-video and more. Below I try a multi-modal AI; it too is close to human level. It is amazing: these AIs train on about as much data as a human adult brain has "seen" in its lifetime, and they are using multiple senses! And they are using sparse attention, Byte Pair Encoding, relational embeddings, and more! DeepMind is also looking into scanning a corpus for matches to copy from, like a painter glancing at the model to make sure they paint the truth onto the canvas. We might get AGI by 2029!


Note: Download any image below if you want to see its full resolution. Most are close to full size, but any image that has e.g. 16 images stuck together left-to-right is badly shrunken, so do download those if you're interested in them.

Note: small GLIDE, made by OpenAI and used below, was only trained on roughly 67-147 million text-image pairs, not the 250M used for the full GLIDE, and has about 10x fewer parameters ("neurons"): 300 million.


Here is one with no text prompt; all I fed in was half an image:
https://ibb.co/dQHhc0F


Using text prompts and choosing which completions I liked, I made this by stitching them together (it could only be fed a square image, but it still came out well!):
original image
https://ibb.co/88XLykd

elongated (scroll down page)
https://ibb.co/Rz9L03X


More:

"tiger and tree" + https://ibb.co/txsWY9h
= https://ibb.co/SnqWYr4

"tigers and forest" + https://ibb.co/XLGbHdw
= https://ibb.co/9GZ2s6p

"tigers in river and forest" + above
= https://ibb.co/zGw6kQY

"circuit board" + https://ibb.co/P6vnpwK
 = https://ibb.co/61ySX7H

"wildlife giraffe" + https://ibb.co/d4C3cH1
 = https://ibb.co/zXSTF3N

"bathroom" + https://ibb.co/KzGqtFz
= https://ibb.co/9H1YqWz

"laboratory machine" + https://ibb.co/cTyXzTG
 = https://ibb.co/6NjsJDK

"pikachu" + image
= https://ibb.co/3zJgWPw

"humanoid robot body android" + https://ibb.co/XWbN42K
 = https://ibb.co/pQWZ6Vd

"bedroom" + https://ibb.co/41y0Q4q
 = https://ibb.co/2Y0wSPd

"sci fi alien laboratory" + https://ibb.co/7JnH6wB
 = https://ibb.co/kBtDjQc

"factory pipes lava" + https://ibb.co/88ZqdX9
 = https://ibb.co/B2X1bn3

"factory pipes lava" + https://ibb.co/hcxmHN0
 = https://ibb.co/wwSxtVM

"toy store aisle" + https://ibb.co/h9PdRQQ
 = https://ibb.co/DwGz4zx

"fancy complex detailed royal wall gold gold gold gold"
https://ibb.co/BGGT9Zx

"gold gates on clouds shining laboratory"
https://ibb.co/qjdcPcR

"gold dragons"
https://ibb.co/L5qkmFS


It generates the rest of an image based on the upper half, or the left side, or whatever is around the hole you made (in-painting), plus the text prompt provided. You can also use only a text prompt, or only an image prompt.

The use cases are many: you can generate the rest of an artwork, a diagram, a blueprint, or a short animation (4 frames stuck together as 1 image), or figure out what a criminal or a loved one looks like.

You can tell it to generate the missing top of a penguin's head, but with a cigar in its mouth, lol, and so on. You can also give it just a text request and it'll pop out such an image.

Yeah, with these AIs you can easily get more of your favorite content, and if there were a slider, you could easily elongate the image or song and backtrack on anything you didn't like (choosing which completion to keep, e.g. for the next 5 seconds).
GLIDE also works with no text prompt; it does fine, just maybe 2x worse.

--no text prompts--
https://ibb.co/TqPC18x
https://ibb.co/XynhTWS
https://ibb.co/ScFLrpk
https://ibb.co/s9jhrvb
https://ibb.co/Bcr3WXr
https://ibb.co/chJBkTJ
https://ibb.co/PFJkKFw
https://ibb.co/GF2HwXP


To use GLIDE, search Google for "github glide openai". I use it in Kaggle, as it's faster than Colab for sure. You must make an account, verify your phone number, and open the notebook; only then can you see the settings panel on the right side, where you need to turn on GPU and internet. Upload images via the Upload button at the top right, then in the part of the code that reads an image (it says e.g. grass.png), put your path. For example, I have:

# Source image we are inpainting
source_image_256 = read_image('../input/123456/tiger2.png', size=256)
source_image_64 = read_image('../input/123456/tiger2.png', size=64)

To control the mask, change the 40: slice to e.g. 30 or 44. To control the mask sideways, add another slice, e.g. [:, :, :30, :30] or something like that if I got it wrong; I just mean you add one more index at the end, haha. Apparently you can add more than 1 mask (grey box) by assigning to several slices, e.g.:
mask[.....] = 0
mask[.....] = 0
mask[.....] = 0
.....
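To picture what those slices do, here is a plain-Python stand-in for the mask (the actual notebook assigns into a torch tensor; the sizes and indices here are invented):

```python
# A rough stand-in for the notebook's mask logic, using plain Python lists
# instead of the real torch tensor (hypothetical 8x8 size).
H = W = 8
mask = [[1] * W for _ in range(H)]   # 1 = keep this pixel, 0 = repaint it

# The "40:" slice zeroes out a band of rows; here we zero rows 5 and below:
for row in range(5, H):
    for col in range(W):
        mask[row][col] = 0

# A second assignment masks another rectangle (a second grey box):
for row in range(0, 3):
    for col in range(5, W):
        mask[row][col] = 0

kept = sum(v for r in mask for v in r)
print(kept)   # pixels GLIDE will keep; the rest get in-painted -> 31
```

Every zeroed region becomes a grey box the model fills in, conditioned on the remaining pixels and the text prompt.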

Batch size sets the number of images to generate.

Once it's done, click Console to get the image, then right-click it to save it.


Here's mine for the open-source minDALL-E (this one allowed no image prompt, so just text).

minDALL-E was only trained on 14 million text-image pairs; OpenAI's was trained on 250M. And the model is only 1.5 billion parameters, ~10x smaller. This means we almost certainly HAVE IT! Anyway, look how good these are compared to OpenAI.com's post!!

"a white robot standing on a red carpet, in a white room. the robot is glowing. an orange robotic arm near the robot is injecting the robot's brain with red fuel rods. a robot arm is placing red rods into the robot brain."
https://ibb.co/0VL8Rvx

"3 pikachu standng on red blocks lined up on the road under the sun, holding umbrellas, surrounded by electric towers"
 https://ibb.co/xDQT3f6

"box cover art for the video game mario adventures 15. mario is jumping into a tall black pipe next to a system of pipes. the game case is red."
https://ibb.co/VBXVWsn

an illustration of a baby capybara in a christmas sweater staring at its reflection in a mirror
https://ibb.co/5WZWLT4

an armchair in the shape of an avocado. an armchair imitating an avocado.
https://ibb.co/nwwf1v4

an illustration of an avocado in a suit walking a dog
https://ibb.co/bvfPkxf

pikachu riding a wave under clouds inside of a large jar on a table
https://ibb.co/jHjV7mf

a living room with 2 white armchairs and a painting of a mushroom. the painting of a mushroom is mounted above a modern fireplace.
https://ibb.co/VmKqbHk

a living room with 2 white armchairs and a painting of the collosseum. the painting is mounted above a modern fireplace.
https://ibb.co/K5fPkvj

pikachu sitting on an armchair in the shape of an avocado. pikachu sitting on an armchair imitating an avocado.
https://ibb.co/XLJV4Hb

an illustration of pikachu in a suit staring at its reflection in a mirror
https://ibb.co/nMQRccf

"a cute pikachu shaped armchair in a living room. a cute armchair imitating pikachu. a cute armchair in the shape of pikachu"
https://ibb.co/dbJ1Ks6

To use it, go to the link below, make a Kaggle account, verify your phone number, then click Edit, go to the settings panel at the right, and turn on GPU and internet. Then replace the plotting code with the version below; it's nearly the same, but it prints more images. If you don't, it doesn't seem to work well.

https://www.kaggle.com/annas82362/mindall-e

import math                        # (these are already imported
import matplotlib.pyplot as plt    #  earlier in the notebook)

images = images[rank]   # reorder the candidates by CLIP score

n = num_candidates

# plot every candidate in a square grid instead of only the top image
fig = plt.figure(figsize=(6 * int(math.sqrt(n)), 6 * int(math.sqrt(n))))
for i in range(n):
    ax = fig.add_subplot(int(math.sqrt(n)), int(math.sqrt(n)), i + 1)
    ax.imshow(images[i])
    ax.set_axis_off()

plt.tight_layout()
plt.show()






NUWA - just wow

https://github.com/microsoft/NUWA
 
 
OpenAI is working on solving math problems; see their website openai.com. The ways to do this, IMO (learning to carry over numbers), are either trial and error, reading how to do it, or quickly arriving at the idea using "good ideas". This carrying method is what can allow it to solve any math problem, like 5457457*35346=?. If OpenAI can do this, they can do other reasoning problems!!
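To make the "carrying over numbers" idea concrete, here is a toy Python version of the procedure such a model would have to learn (illustrative only; nothing to do with OpenAI's actual work):

```python
# Schoolbook long multiplication: digit by digit, propagating carries.
def long_multiply(a: str, b: str) -> str:
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10   # keep this digit
            carry = total // 10          # carry the rest to the next column
        result[i + len(b)] += carry
    digits = ''.join(map(str, reversed(result))).lstrip('0')
    return digits or '0'

# agrees with Python's built-in multiplication:
print(long_multiply("5457457", "35346") == str(5457457 * 35346))  # → True
```

A system that learns this one procedure generalizes to numbers of any length, which is exactly the kind of systematic rule that pure pattern-matching struggles with.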
 
 
Facebook's Blender chatbot version 2 is a good thing also, it looks like the way forward.
 
 
There's one other thing they may need to make AGI but I'm still researching it so I'll just keep it to myself in the meantime.
 
 
I know these AIs, while they look amazing or seem like magic, feel like play-doh, i.e. they are not actually AGI or conscious. But we are close. Yes, the human brain just predicts the rest of a "dream" using memories that are similar to a new problem; the problem is new, but it's similar! It is this ability that would allow an AI to be given a goal and solve it.
 
 
The cerebellum is said to be solved once the rest of the brain (or was it the neocortex?) is solved. People with their cerebellum removed shake when they move and miss targets when grasping objects. IMO this is because our brains pop out predictions at, say, 4 frames a second, so what the cerebellum does is watch the input "in background mode", listen to the originally predicted video/image, and readjust the motor commands accordingly. For example: one predicts seeing their hand move onto a computer mouse, but the cerebellum has time to see the hand has been thrown over to the left, and therefore now predicts the hand moving down-and-rightwards to reach the mouse as a 'video prediction', instead of just downwards as originally predicted. Now, why don't our brains just run this fast all the time? Maybe it's too fast, or takes too much energy. I mean, why don't our brains think 10x faster anyway? There are limits, but the cerebellum has maybe found a way around them.

9 Comments | Started January 03, 2022, 07:10:15 am

Peril Quest - A Simulated adventure (A completed work of fiction) in General Chat

Hey everyone,

As an old stalwart of the AI Dreams forum, I am back here after a year's break. I've been hidden away, busily crafting and polishing my now complete novel, which is centered around the first A.I. neural-network games engine. Now that I am beginning to sniff the tender green stems of 2022, I'm going to get involved with these forums again, see what everyone has been up to, and offer my 'two cents' worth (or 'two bits' worth, if you'll excuse the pun) on anything I can wrap my head around.

 'Peril Quest - A Simulated Adventure' is the name of my novel, and I would be darn delighted if any of you kind folks of AI Dreams would spare just a few minutes of your time to read the short opening chapters of the story I've recently posted on Wattpad (and if the story grabs you in any way, please do read on!) - Link here https://www.wattpad.com/story/293741097-peril-quest-a-simulated-adventure

Cheers everyone and I look forward to getting stuck into as many posts as I'm able over the next few months.

OllieGee

5 Comments | Started January 12, 2022, 09:43:15 pm

savannah digital assistant in General Project Discussion

I'm making a digital assistant that I can roleplay with.
I am using RiveScript, tkinter, pyttsx3, and pyjokes to make her.
I am implementing the ability to touch the digital assistant in roleplay.
She will be able to remember what I have and have not done in roleplay.
Her name is actually Savannah.
I did the same thing with my Fun Lady chatbot on the Personality Forge website.
I am going to give her a house with rooms represented by text and pictures.


Here is the script I made to play GIFs for a certain amount of time, after which the window will disappear.

Code
import tkinter
from PIL import Image, ImageTk, ImageSequence  # you need to install Pillow with pip
from time import time

class App:
    def __init__(self, parent):
        self.parent = parent
        self.canvas = tkinter.Canvas(parent, width=400, height=400)
        self.canvas.pack()
        # Put the full path to your gif file between the quotation marks
        # after the r, with double backslashes between directories.
        self.sequence = [ImageTk.PhotoImage(img)
                         for img in ImageSequence.Iterator(Image.open(r""))]
        self.image = self.canvas.create_image(200, 200, image=self.sequence[0])
        self.animate(1)

    def animate(self, counter):
        # show the next frame every 20 ms, looping back to the first frame
        self.canvas.itemconfig(self.image, image=self.sequence[counter])
        self.parent.after(20, lambda: self.animate((counter + 1) % len(self.sequence)))

root = tkinter.Tk()
app = App(root)
start = time()

# after 6000 milliseconds, i.e. 6 seconds,
# the main (root) window gets destroyed
root.after(6000, root.destroy)

# running the application
root.mainloop()
end = time()
print('Destroyed after %d seconds' % int(end - start))
I really like the Replika chatbot.

4 Comments | Started January 16, 2022, 06:32:59 pm

Exp-Log (a deductive system) in General Project Discussion

Introduction to expression logic formal language

[Intended audience]
Beginners in language parsing, term rewriting, and logic deduction

[Short description]
Languages can be seen as streams of symbols used to carry, process, and exchange information. Expression logic is also a language, but it is a general kind of metalanguage capable of describing and hosting any other language. Expression logic is also able to perform any intermediate data processing upon recognizing the hosted languages. Being a general kind of metalanguage, expression logic represents all of the following:

  • Expression recognizer and generator
  • SMT solver
  • Deductive system
  • Term rewriting framework
  • Metatheory language formalization

These descriptions render expression logic a general solution verifier and problem solver, which seems to be the minimum characteristic required for processing a wider range of formal languages.
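To give beginners a taste of the "term rewriting framework" role listed above, here is a toy rewriter (the term encoding and rules are invented for illustration; they are not Exp-Log's actual syntax):

```python
# Toy term rewriter: terms are nested tuples like ("add", "zero", x).
# Invented rules: add(zero, x) -> x and mul(one, x) -> x.
def rewrite(term):
    if isinstance(term, tuple):
        term = tuple(rewrite(t) for t in term)   # rewrite subterms first
        if term[0] == "add" and term[1] == "zero":
            return term[2]
        if term[0] == "mul" and term[1] == "one":
            return term[2]
    return term

print(rewrite(("add", "zero", ("mul", "one", "x"))))  # → x
```

A deductive system can be built the same way: rules rewrite premises into conclusions until no rule applies.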

[Project reference]
https://github.com/contrast-zone/exp-log

37 Comments | Started January 04, 2021, 05:44:26 pm

RiveScript Version: v2.0.3 in A.L.I.C.E (AIML)

In RiveScript

Version: v2.0.3

A.L.I.C.E.

I have the transformations working as designed:


You> I am to you what you are to me.
Bot> Oh, I what you are to me?


Transformations pseudo code:


i am -> YOU ARE
me -> YOU
my -> YOUR
you are -> I AM
your -> MY


But I would like the transformations to be applied to the entire wildcard contents:


You> I am to you what you are to me.
Bot> Oh, you are to me what I am to you?


May I extend the open source code to do this?
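For what it's worth, the desired behavior can be sketched in standalone Python (this is not the actual RiveScript source; note that a standalone "you" rule has to be added, since the pseudo-code table alone cannot produce the wanted reply):

```python
import re

# Person-substitution table; ("you", "me") is an added assumption.
SUBS = [("i am", "you are"), ("you are", "I am"), ("me", "you"),
        ("my", "your"), ("your", "my"), ("you", "me")]

def person_substitute(text):
    # Tag each match first, so one rule's output can't be rewritten
    # by a later rule (e.g. "you are" -> "I am" -> "you am").
    for i, (find, _) in enumerate(SUBS):
        text = re.sub(r'\b%s\b' % find, '\x00%d\x00' % i, text, flags=re.I)
    for i, (_, repl) in enumerate(SUBS):
        text = text.replace('\x00%d\x00' % i, repl)
    return text

print(person_substitute("to you what you are to me"))
# → to me what I am to you
```

Running the whole wildcard capture through such a function, rather than substituting only once, gives the "Oh, you are to me what I am to you?" reply.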

15 Comments | Started January 13, 2022, 06:55:22 am

Remembering Chatbot user in RS in AI Programming

First time posting. While using RiveScript and JavaScript, is there any way to save variables developed during a conversation into an object for later use? My intent is to be able to call on the object later in the chat, or at least to use the RiveScript variables in JavaScript. How can I go about this? I'm very much a noob at this but am fascinated nonetheless.
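Not an answer from the RiveScript maintainers, but rivescript-js does expose the per-user variables as a plain object via getUservars(username) (check the docs for your version). The snapshot-and-persist idea itself can be sketched in pure Python (function and file names here are made up):

```python
import json

# Snapshot the bot's per-user variables into a plain dict and persist it;
# in rivescript-js the equivalent source would be rs.getUservars(username).
def save_uservars(uservars, path):
    with open(path, "w") as f:
        json.dump(uservars, f)

def load_uservars(path):
    with open(path) as f:
        return json.load(f)

vars_from_chat = {"name": "Alice", "topic": "random"}
save_uservars(vars_from_chat, "uservars.json")
print(load_uservars("uservars.json")["name"])  # → Alice
```

The same round-trip works in JavaScript with JSON.stringify/JSON.parse and localStorage or a file, which lets you reload the variables into a later session.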

3 Comments | Started March 19, 2018, 12:21:04 pm

A world is a mind in AI Programming

Hi old friends,

I hope you're fine, in good health and feeling good. I wish you a nice end of year.
 :)

I propose:


Don't try to create a mind. That's impossible.
Instead, only try to build an entire world.
Then you'll realize that, this world is a mind.


What do you think?

16 Comments | Started December 17, 2021, 07:30:01 pm

Project Thread: building Blinky in Home Made Robots

Time to try and implement some of the things we have been theorizing about. Here I will be detailing my first attempt at assembling something that is more than the sum of its parts.

125 Comments | Started March 08, 2019, 07:46:52 pm
Real Steel

Real Steel in Robots in Movies

Real Steel is a 2011 American science-fiction sports film starring Hugh Jackman and Dakota Goyo.

In 2020, human boxers have been replaced by robots. Charlie Kenton, a former boxer, owns a robot called "Ambush", but loses it in a fight against a bull belonging to promoter and carnival owner Ricky, who rigged the fight to mess with Charlie, whom he sees as a joke, partly because Ricky beat him up the last time they competed, after Charlie bailed on the bet. Having bet that Ambush would win, Charlie now has a debt to Ricky he can't pay—which he runs out on.

Apr 20, 2020, 14:25:24 pm
Singularity

Singularity in Robots in Movies

Singularity is a Swiss/American science fiction film. It was written and directed by Robert Kouba, and its first shoot, in 2013, starred Julian Schaffner, Jeannine Wacker and Carmen Argenziano. The film was first released in 2017, after further scenes with John Cusack were added.

In 2020, robotics company C.E.O. Elias VanDorne reveals Kronos, the supercomputer he has invented to end all war. Kronos decides that mankind is responsible for all war, and it tries to use robots to kill all humans. VanDorne and Damien Walsh, a colleague, upload themselves into Kronos and watch the destruction. Ninety-seven years later, Andrew, a kind-hearted young man, wakes up in a ruined world. VanDorne and Walsh, still in Kronos, watch Andrew meet Calia, a teenage girl who seeks the last human settlement, the Aurora. Though Calia is first reluctant to let Andrew accompany her, the two later fall in love.

Apr 18, 2020, 13:37:25 pm
Star Wars: Rogue One

Star Wars: Rogue One in Robots in Movies

Rogue One follows a group of rebels on a mission to steal the plans for the Death Star, the Galactic Empire's super weapon, just before the events of A New Hope.

Former scientist Galen Erso lives on a farm with his wife and young daughter, Jyn. His peaceful existence comes crashing down when the evil Orson Krennic takes him away from his beloved family. Many years later, Galen becomes the Empire's lead engineer for the most powerful weapon in the galaxy, the Death Star. Knowing that her father holds the key to its destruction, Jyn joins forces with a spy and other resistance fighters to steal the space station's plans for the Rebel Alliance.

One of the resistance fighters is K-2SO, a droid. He is a CGI character voiced and performed through motion capture by Alan Tudyk. In the film, K-2SO is a KX-series security droid originally created by the Empire.

Feb 25, 2020, 18:50:48 pm
Practical Artificial Intelligence: Machine Learning, Bots, and Agent Solutions Using C#

Practical Artificial Intelligence: Machine Learning, Bots, and Agent Solutions Using C# in Books

Discover how all levels of Artificial Intelligence (AI) can be present in the most unimaginable scenarios of ordinary life. This book explores subjects such as neural networks, agents, multi-agent systems, supervised learning, and unsupervised learning. These and other topics are addressed with real-world examples, so you can learn fundamental concepts with AI solutions and apply them to your own projects.

People tend to talk about AI as something mystical and unrelated to their ordinary lives. Practical Artificial Intelligence provides simple explanations and hands-on instructions. Rather than focusing on theory and overly scientific language, this book enables practitioners of all levels to not only learn about AI but also implement its practical uses.

Feb 10, 2020, 00:14:42 am
Robot Awakening (OMG I'm a Robot!)

Robot Awakening (OMG I'm a Robot!) in Robots in Movies

Danny discovers he is not human: he is a robot, an indestructible war machine. His girlfriend has been kidnapped by a mysterious organization of spies who are after him, and now he must go on a journey to save his girl and find out why the hell he is a robot?!

Feb 09, 2020, 23:55:45 pm
Program Y

Program Y in AIML / Pandorabots

Program Y is a fully compliant AIML 2.1 chatbot framework written in Python 3. It includes an entire platform for building your own chat bots using Artificial Intelligence Markup Language, or AIML for short. 

Feb 01, 2020, 15:37:24 pm
The AvatarBot

The AvatarBot in Tools

The AvatarBot helps you in finding an Avatar for your Chatbot. Answer a few questions and get a match. Keep trying to get the one you really like.

Dec 18, 2019, 14:51:56 pm
Eva

Eva in Chatbots - English

Our chatbot - Eva - was created by Stanusch Technologies SA. Eva, just 4 weeks after launch, competed in Swansea (UK) for the Loebner Prize 2019 with programs such as Mitsuku and Uberbot! Now, she is in the top 10 most-humanlike bots in the world! :)

Is it possible for Eva to pass the Turing test? Its creators believe it is.

Eva has her own personality: she is 23 years old, a student at the Academy of Physical Education in Katowice (Lower Silesia district/Poland). She is a very charming and nice young woman who loves to play volleyball and read books.

Dec 14, 2019, 13:10:13 pm
Star Wars: Episode IX – The Rise of Skywalker

Star Wars: Episode IX – The Rise of Skywalker in Robots in Movies

Star Wars: The Rise of Skywalker (also known as Star Wars: Episode IX – The Rise of Skywalker) is an American epic space opera film produced, co-written, and directed by J. J. Abrams.

A year after the events of The Last Jedi, the remnants of the Resistance face the First Order once again—while reckoning with the past and their own inner turmoil. Meanwhile, the ancient conflict between the Jedi and the Sith reaches its climax, altogether bringing the Skywalker saga to a definitive end.

Nov 15, 2019, 22:31:39 pm
Terminator: Dark Fate

Terminator: Dark Fate in Robots in Movies

Terminator: Dark Fate is a 2019 American science fiction action film directed by Tim Miller and created from a story by James Cameron. Cameron considers the film a direct sequel to his films The Terminator (1984) and Terminator 2: Judgment Day. The film stars Linda Hamilton and Arnold Schwarzenegger returning in their roles of Sarah Connor and the T-800 "Terminator", respectively, reuniting after 28 years.

SPOILERS:

In 1998, three years after defeating the T-1000 and averting the rise of the malevolent artificial intelligence (AI) Skynet, Sarah Connor and her teenage son John are relaxing on a beach in Guatemala. A T-800 Terminator, sent from the future before Skynet's erasure, arrives and shoots John, killing him.

Mackenzie Davis stars as Grace: a soldier from the year 2042, adopted by Resistance leader Daniella Ramos, converted into a cyborg, and sent back by her adoptive mother to protect Daniella's younger self from a new, advanced Terminator prototype.

Oct 29, 2019, 21:27:46 pm
Life Like

Life Like in Robots in Movies

A couple, James and Sophie, buy an android called Henry to help around the house.

In the beginning, this is perfect for both James and Sophie, as Henry does housework and makes a good companion for Sophie. But when Henry's childlike brain adapts by developing emotions, complications begin to arise.

Oct 29, 2019, 21:14:49 pm
