Recent Posts

This is my analogy; the question is how to activate the specific immune system instead.
General AI Discussion / Re: AI and the Ruination of the World
« Last post by LOCKSUIT on Today at 12:04:05 AM »

Some of these gangs in the U.S. are probably Anarchists. As we can see at the bottom of this link, they are violent, inconsistent, and Utopian. I can see exactly why. They don't want higher connections / larger features. They think they are the final Utopian answer, with no need for higher layers of a neural network / government. So they are violent; they see themselves as the top layer. The inconsistency comes from not being cooperative in a world where cooperation would help: today it's better to work together, and we know how, a bit. The inconsistency is not adding up with all the data / truth; they should work together as agents / nodes in the network, but they don't. They are not normal humans; they are more dangerous than cooperative / useful. They are the worse kind of mutants. Good mutants are advanced engineers whom no one can understand, but who are still somewhat recognized.

So basically they got trapped in a local optimum, thinking they are the best and refusing to work together; hence they are more vulnerable to poverty and therefore more likely to be violent / rob. But it's inconsistent: they "can" work together in [this] world.
General AI Discussion / Re: GPT-3
« Last post by Korrelan on June 01, 2020, 11:00:21 PM »
What the authors are saying is that building a neural network that just predicts probabilities of the next word in any sentence or phrase may have its limits. Just making it ever-more-powerful and stuffing it with ever-more-text may not yield better results. That's a significant acknowledgement within a paper that is mostly celebrating the achievement of throwing more computing horsepower at a problem.
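The "predicting probabilities of the next word" objective described above can be illustrated with a toy bigram model. This is only a minimal sketch of the idea, not GPT-3's actual architecture; the corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word follows it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequently observed next word after `word`."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this idea up (more context, more parameters, more text) is essentially the progression the paper is celebrating, and the quoted passage questions where that progression tops out.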

General AI Discussion / Re: GPT-3
« Last post by LOCKSUIT on June 01, 2020, 09:19:57 PM »
Ivan, you're talking about a [learning] or [growing] algorithm/system. Even Brute Force is one.

The "actual" data or army force will not fit in 20 lines of code, lol. Trillions of images / facts, or nanobots and Dyson spheres, are really, really big things and don't fit in 20 lines of code. It takes time to attain that size.

The seed can be awfully small, I guess, ya. But growing still takes at least some time! My point was that when you have more data and a bigger army, you are much more capable.

While the most advanced DNA seed can be incredibly small and still grow into a nanolord in a day, a seed that is too small won't be able to do much and will evolve much more slowly.

Only when the seed has grown will it show its prettiness :)

edit: yes, I knew algorithms have a trade-off between time / memory / complexity. E.g. fast but big memory. Smaller code or RAM again makes it slower to learn / grow.

Bigger cities get bigger faster. Big companies pool / suck in cash, etc., lol.
General AI Discussion / Re: GPT-3
« Last post by ivan.moony on June 01, 2020, 09:10:20 PM »
@L, did you know that the same program may be written in 20 lines and do the same thing as one written in 1000 lines? Also, the 20-line one may be many, many times faster than the 1000-line one. We would say the 20-line version is an optimized version of the 1000-line one, and it does the same thing. Further, algorithms may be optimized for speed, for size, or for both.
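A toy sketch of this point (not from the thread itself): two implementations that compute the same answer, where the second trades a little memory (a cache) for an enormous speedup.

```python
import functools

def fib_slow(n):
    # Short and simple, but exponential time: it recomputes
    # the same subproblems over and over.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@functools.lru_cache(maxsize=None)
def fib_fast(n):
    # Same answer, but each subproblem is computed once and
    # cached, so the runtime is linear in n. Speed is bought
    # with extra memory, illustrating the speed/size trade-off.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

assert fib_slow(20) == fib_fast(20) == 6765
```

Both functions are "the same program" in ivan.moony's sense; only the resource profile differs.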
General AI Discussion / Re: GPT-3
« Last post by LOCKSUIT on June 01, 2020, 08:53:12 PM »
Ye Ye,

OpenAI's work on language has been part of a steady progression of one kind of approach, with increasing success as the technology was made bigger and bigger and bigger.

The original GPT and GPT-2 are both adaptations of what's known as a Transformer, an architecture pioneered at Google in 2017.

I've known this for at least a few months. More data does increase intelligence; the dictionary literally defines information as "intelligence". Bigger systems can beat smaller versions of themselves: a device made of only 100 atoms, even if it's the highest technology possible (made by aliens 99,999 years from now), still can't do much (unless it grows, at least).

However, there are multiple ways to get "loads" of "free" [information] from the same-sized dataset. You need a better data-insight extractor / pattern finder to get more virtual data; data isn't random 1s and 0s. And then, throwing more non-virtual dataset size at it will also inflate the virtual data, so throwing 10x more data at it may give you 100x free data inside, and with a better extractor 10x the dataset may feel like 10,000,000x more data inside instead of 100x. You won't necessarily have to extract that much; certain information is simply that powerful. Predict what follows "cat cat cat cat _". You don't need a bigger dataset to find information here. Attention is drawn to active nodes.
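The "cat cat cat cat _" example above can be sketched as a trivial pattern finder that needs no extra data at all; the prompt itself contains the pattern. A minimal sketch, with the function name invented for illustration:

```python
from collections import Counter

def predict_blank(prompt):
    """Guess the missing token from the most frequent token
    already present in the prompt (the 'most active node')."""
    tokens = [t for t in prompt.split() if t != "_"]
    return Counter(tokens).most_common(1)[0][0]

print(predict_blank("cat cat cat cat _"))  # "cat"
```

No bigger dataset was needed; the repetition alone carried the prediction, which is the "free information" being described.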

General AI Discussion / Re: GPT-3
« Last post by yotamarker on June 01, 2020, 07:05:53 PM »
how do I use it with Java tho?
AI News / Re: Meet Sophia, World's First AI Humanoid Robot | Tony Robbins
« Last post by on June 01, 2020, 07:04:01 PM »

I respect and recognize your expert opinion.  As someone who has been in the news for A.I. (like many of us) I appreciate the attention your popularity brings to this topic.

Let's discuss the human-simulation aspect of social humanoid robots. Humans use teleprompters to show (pre)scripted panels, hidden from view, all part of a robotic TV camera system. Think about that for a moment: humans are (pre)scripted by an articulated newsroom robot.

For the science of A.I., why wouldn't social humanoid robots simulate a teleprompter? Isn't Sophia simply simulating what humans in the news do?

Thank you, squarebear.

Robotic Camera goes crazy!
General Chatbots and Software / Microsoft's A.I. tools
« Last post by frankinstien on June 01, 2020, 06:47:42 PM »
Does anybody on this forum use any of Microsoft's tools, like Infer.NET, ML.NET, the Microsoft Cognitive Toolkit, and/or botframework-sdk?
