Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: infurl on March 08, 2021, 11:33:37 am

Title: No-code AI documentary preview.
Post by: infurl on March 08, 2021, 11:33:37 am

Here's a teaser for a documentary about "no-code AI" that is in the works. I can't help rolling my eyes whenever I hear terms like "no code" and "NoSQL", because the folks who think they can get by without the kinds of technical skills I take for granted will never be able to achieve the marvellous things that I can achieve. Or will they?

The nature of technology is that things which only worked in the lab last year, and were no more than ideas the year before, get packaged up and made available next year for people to use on their phones with nothing but their eyes and a finger. This is especially obvious with the media tools on display in this video, but I'm not so sure about other areas.

Can anyone suggest anything else that is suddenly no longer out of reach of those who are unable to program a computer?
Title: Re: No-code AI documentary preview.
Post by: ivan.moony on March 08, 2021, 06:01:10 pm
It depends on what the AI composers pack behind which interface. NN technology may be as universal as the complete set of CPU instructions: you can do anything with it if you expose the right wires.
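To make the "universal as CPU instructions" idea concrete, here is a minimal sketch of a tiny two-layer network with hand-picked weights (an illustration, not trained) that computes XOR, the classic function a single neuron cannot express. The weights and threshold activation are my own assumptions for the example:

```python
def step(x):
    """Hard threshold activation: fires (1) when input is non-negative."""
    return 1 if x >= 0 else 0

def xor_net(a, b):
    """Two hidden neurons ('wires' for OR and AND) combined to give XOR."""
    h_or  = step(1.0 * a + 1.0 * b - 0.5)   # fires if a OR b
    h_and = step(1.0 * a + 1.0 * b - 1.5)   # fires if a AND b
    # Output neuron: "OR but not AND", i.e. exactly one input is on.
    return step(1.0 * h_or - 2.0 * h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

With the right wiring, the same handful of neurons could implement any other boolean gate, which is the sense in which networks are universal building blocks.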

I'm waiting for an empty GPT equivalent, so that people can raise it over the years like a real child, through conversation. The code is already open source; someone just has to package it the right way.
Title: Re: No-code AI documentary preview.
Post by: MagnusWootton on March 08, 2021, 09:12:35 pm
We all know how to walk, but robots can't do it so well, even though the idea is basic. If you need a robot to take out the trash, the concept is quite braindead, easy for anyone to understand, yet we can't seem to manage it. I wonder why...
How come I can't point my finger at something to teach the computer what it is? What a braindead act that is!

Well, pixel brightnesses aren't good enough at separating objects from each other, and reading the vector of the finger requires a 2D-to-3D conversion. And what if the commander is out of view? So many problems, you might as well give up.
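The 2D-to-3D conversion mentioned above is worth spelling out: a single camera image only tells you the *ray* along which the fingertip lies, not its 3D position. A minimal sketch using an assumed pinhole camera model (the focal lengths and image centre below are illustrative values, not from any real calibration):

```python
def pixel_to_ray(u, v, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) to a unit direction ray in camera space.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    x = (u - cx) / fx
    y = (v - cy) / fy
    z = 1.0
    n = (x * x + y * y + z * z) ** 0.5
    return (x / n, y / n, z / n)

# Every 3D point along this ray projects to the same pixel, so depth is
# lost: recovering where the finger actually points needs a second view,
# a depth sensor, or an assumption about the scene.
ray = pixel_to_ray(400, 300)
```

This is why the pointing gesture, so braindead for a human, is genuinely hard for a machine with one eye.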

After all the methods are written to make the robot function, using the robot afterwards is the braindead bit anyone can do. Unless of course it's AGI, in which case it'll be thinking you're the braindead one.