Recent Posts

1
General Chat / Neuron clustering
« Last post by frankinstien on Today at 12:26:51 am »
From this study, it appears that there is a common pattern of connectivity and clustering in mammalian brains. When you look at the layering of mammalian brains (paleocortex, archicortex, and neocortex), you can get a picture of what other animals may experience. For instance, the temporal, immersive experience humans have is due to the hippocampus: it literally takes snapshots of a wide array of neural activity so that the system can differentiate the occurrences of events. Add to that the limbic system, which processes emotional states to make decisions about an anticipated outcome or an experienced event. This means that the experience of being in the now, while feeling the flow of events fading into the past and anticipating future events, isn't uniquely human, since the architectural structures that perform those functions are present in other animals as well. This study furthers that assertion by validating that not only is the layered structure of mammalian brains identical, but the way they cluster and wire is identical as well. Imagine not having the symbolic abilities of a human and just feeling the notions that represent complex scenarios and relationships. I don't know how many times I've gotten an idea and it's just a feeling that I know I can translate into words, but I don't have to.

This study suggests that human intelligence is derived from animal intelligence. But this goes even further, because avian and octopus brains are architecturally different from ours, yet both show the ability to empathize, and avians at least have a brain structure that performs a process identical to that of the hippocampus in mammalian brains. Because these different brain architectures exhibit identical types of processing, there can be logically equivalent processes that achieve the same effect. This is subtle, and it implies that, because neurons are state machines, the human experience of that temporal immersion could be experienced by an AI!

While the mammalian brain is far more complex than any AI system to date, that doesn't mean optimization could not allow for a similar sophistication. Look at fractals in nature, and at how complex and time-consuming the processes that produce them are. Fractal structures in landscapes can take centuries, millennia, or even millions of years to form, yet with a decent GPU, rendering complex fractal landscape structures takes milliseconds. So the potential to find optimized solutions to what biology has mastered is there...
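As a toy illustration of that speed gap (my own sketch, not tied to any particular renderer): midpoint displacement produces a fractal landscape profile in milliseconds that erosion would take ages to carve.

Code: typescript
// Toy 1D midpoint-displacement "landscape": a fractal profile that nature
// would erode into existence over geological time, computed in milliseconds.
function fractalProfile(levels: number, roughness = 0.5): number[] {
  let points = [0, 0];                       // start with a flat segment
  let amplitude = 1;
  for (let level = 0; level < levels; level++) {
    const next: number[] = [];
    for (let i = 0; i < points.length - 1; i++) {
      const mid = (points[i] + points[i + 1]) / 2
                + (Math.random() * 2 - 1) * amplitude;   // displace the midpoint
      next.push(points[i], mid);
    }
    next.push(points[points.length - 1]);
    points = next;
    amplitude *= roughness;                  // finer detail, smaller bumps
  }
  return points;                             // 2^levels + 1 height samples
}

console.log(fractalProfile(16).length);      // 65537 samples, near-instant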


2
AI News / OpenAI Speech-to-Speech Reasoning Demo
« Last post by MikeB on March 15, 2024, 08:14:02 am »
This is trending #11 on YouTube.

Speech has a solid three-second delay, and responses are what you'd expect from the statistical analysis and middle-average contextual analysis approach.

It would be a whole new layer of interesting if responses included the robot's own survival instinct, like throwing in a few "what about me?" and "have you seen my charger?" responses, but maybe that's a bad look, or conclusions could be drawn about the danger of AI in a physical sense.

https://www.youtube.com/watch?v=Sq1QZB5baNw
3
Something more from me, writing things down and clearing up my thoughts. Read the almost-finished paper here if you have the patience. Expect some serious stuff about functional-logic typing rules.
4
General Project Discussion / Re: Pattern based NLP & ASR
« Last post by MikeB on March 07, 2024, 07:36:41 am »
I have pages of more work on this but I'm not happy with the results just yet. Both "yes" and "no" are down by 10%.

I'm not yet at the peak of how far pure frequency analysis can go. Currently, spectrogram analysis is improved to a high degree for frequencies under 1000 Hz, which improves vowel quality/reliability. There is also some improvement to consonant quality/reliability, but there are still problems separating "n" from "y".
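To give a rough idea of why that sub-1000 Hz band matters (a minimal sketch only, not the actual implementation): the first formant, which sits below roughly 1000 Hz for most vowels, carries a large part of vowel identity, so measuring per-bin energy in that band is the core operation.

Code: typescript
// Minimal, purely illustrative sketch: energy per DFT bin below 1000 Hz
// for one frame of audio samples.
function lowBandSpectrum(frame: number[], sampleRate: number): number[] {
  const n = frame.length;
  const binHz = sampleRate / n;                    // frequency resolution per bin
  const maxBin = Math.floor(1000 / binHz);         // only bins under 1000 Hz
  const magnitudes: number[] = [];
  for (let k = 1; k <= maxBin; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {                  // naive DFT, fine for a demo
      const angle = (-2 * Math.PI * k * t) / n;
      re += frame[t] * Math.cos(angle);
      im += frame[t] * Math.sin(angle);
    }
    magnitudes.push(Math.hypot(re, im) / n);       // magnitude of bin k
  }
  return magnitudes;                               // the band where vowel formants live
}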

With "n" and "y" improved there will be a definite jump in benchmark results.

I still have one more method to try to improve spectrogram quality.

Apart from the spectrogram, only the rules for identifying consonants & vowels need to be improved.

I don't get much done over summer in Australia, but within the next three months I may return to it.

There's a lot of value in keyword spotting that's so efficient. I also have a lipsync video test in mind that uses hundreds of "yes" and "no" from the Google Speech Commands dataset, all live processed and lipsynced, which will be interesting.
5
General Project Discussion / Re: Project Acuitas
« Last post by ivan.moony on February 26, 2024, 09:17:04 pm »
Great work! I especially like the story development timeline at the end of the video.
6
General Project Discussion / Re: Project Acuitas
« Last post by WriterOfMinds on February 25, 2024, 09:15:59 pm »
I am pleased to announce that the thing I've been teasing you about for months is finally here: Big Story! I can now give it its proper title of "Simple Tron." It's been a goal of mine for, well, years now, to tell Acuitas the story of Tron, phrased in a way that he can understand. Yesterday I did it. The version of Tron that I told omits a lot of subplots and side characters, and there's still a long way I could go in deepening Acuitas' understanding of the story (he still doesn't fully grasp *why* all the agents in the story do the things they do, even though the information is there). But it's good enough for now and ready to show the world; the video is available, AAAAAAAAA

https://youtu.be/gMfg5KL8jeE?si=wtHLwwfBPi9QdF2p

And of course it could get a whole lot better - the work so far has exposed a bunch of pain points in the way the Narrative module works, and additional things that need to be done. I'll probably keep grooming it over the coming months to improve on the existing framework. And although the whole thing is in real English, it still sounds repetitive and clunky to a human ear, thanks to Acuitas' language processing limitations. (I haven't even integrated that shiny new Text Parser yet.) But the start is done. This initial skeleton of the story fits together from beginning to end.

More details on the blog: https://writerofminds.blogspot.com/2024/02/acuitas-diary-69-february-2024.html
7
AI News / Re: Google Bard report
« Last post by ivan.moony on February 14, 2024, 04:42:23 pm »
Google Bard is now Gemini!

I just returned to Bard to ask a few things, and it silently switched to Gemini. I asked about monetizing my fractal graph interface, and it gave me a few directions. I uploaded a screenshot, and I feel like it understood what was in it. However, the more I asked, the more I noticed a pattern in its answers. Everything it answered had a head, a middle, and a tail, and it remembers the conversation history, but somehow it all fits into the same answer scheme. Informative, but formulaic.

I couldn't get a definitive answer about where the best odds are of earning something for my invested time, but it showed a few different intensities of excitement about particular options. It was particularly excited about a social media graph application, mentioning different conversational threads and users joining hierarchical groups. It also tried to give me the idea of combining a graph and a relational database, but I'm not sure how much of that is Gemini's idea and how much its authors'.

Overall, I came out of the conversation somewhat more informed about particular options I was already considering. But I can't resist the feeling that behind all those patterns there resides a much more capable entity. What I'd really like to see is an LLM without all those security and politeness filters.
8
General Project Discussion / Re: Project Acuitas
« Last post by WriterOfMinds on January 28, 2024, 08:33:15 pm »
This month I cleaned up the code of the upgraded Text Parser and ran it on one of my benchmarks again.

So far I have re-run just one of my three test sets, the easiest one: Log Hotel by Anne Schreiber, which I first added last July. Preparing for this included ...

*Reformatting the existing golden outputs to match some changes to the output format of the Parser
*Updating the diagramming code to handle new types of phrases and other features added to the Parser
*Preparing golden outputs for newly parseable sentences
*Fixing several bugs or insufficiencies that were causing incorrect parses

I also squeezed a couple new features into the Parser. I admit these were targeted at this benchmark: I added what was necessary to handle the last few sentences in the set. The Parser now supports noun phrases used as time adverbs (such as "one day" or "the next morning"), and some conjunction groups with more than two joined members (as in "I drool over cake and pie and cookies").
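As a rough illustration of that second feature (not the actual Acuitas parser code, just a sketch of one common representation): nested binary "and" nodes can be flattened into a single group with any number of members.

Code: typescript
// Illustrative only, not the Acuitas Text Parser: one common way to collapse
// nested binary conjunctions ("cake and (pie and cookies)") into a single
// flat group with any number of members.
type ParseNode =
  | { kind: "word"; text: string }
  | { kind: "and"; members: ParseNode[] };

function flattenAnd(node: ParseNode): ParseNode {
  if (node.kind === "word") return node;
  const members = node.members.flatMap(m => {
    const flat = flattenAnd(m);
    return flat.kind === "and" ? flat.members : [flat];
  });
  return { kind: "and", members };
}

// "cake and (pie and cookies)" becomes one group with three members.
const flat = flattenAnd({
  kind: "and",
  members: [
    { kind: "word", text: "cake" },
    { kind: "and", members: [{ kind: "word", text: "pie" }, { kind: "word", text: "cookies" }] },
  ],
});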

The end result? ALL sentences in this test set are now "parseable," and two thirds of the sentences are being parsed correctly. See blog for pictured examples: https://writerofminds.blogspot.com/2024/01/acuitas-diary-68-january-2024.html
9
AI News / Google Bard report
« Last post by ivan.moony on January 18, 2024, 03:47:25 pm »
From a few days ago until now, I've been having some interaction with Google Bard (bard.google.com). Nice thingie: it answers your questions quickly (a matter of seconds), and the answers are concise and to the point, but a bit more terse than I like. As I understand it, it extracts info from all over the web using Google Search, so the trustworthiness of the data is questionable. It helps you get a head start on the subject, but it doesn't replace Wikipedia or thorough white-paper research. However, you can ask for further details, and it'll happily spit out more answers. Cool.

It is still free to use (experimental version).
10
So, Reasoner.js would be a virtual device. The console too. Permanent and temporary memory too. This minimal set could be enriched by other devices like sound, vision, ... Note that every device has an input or an output, or both.

In between these devices, there would be a "coordinator". It would be the glue that mediates between devices, receiving messages from source devices and sending them to target devices. A coordinator could also be considered a kind of device.

There could be many coordinators, and each coordinator's code would follow this pattern:

Code: text
(
    COORDINATE
    (
        CONDUCT
        (
            EVENTS
            (RECEIVE (SOURCE ...) (DATA ...))
            ...
        )
        (
            ACTIONS
            (SEND (TARGET ...) (DATA ...))
            ...
        )
    )
    ...
)

The coordinator is a set of conductors that listen to device events and conduct related actions on devices.

The problem solved by the coordinator is a very important one: managing state. Managing state is needed when, for example, we want to change memory contents as inputs flow in. Code written in Reasoner.js is just a static conglomerate of referentially transparent functions, and if we were to handle state in Reasoner.js, we would have to deal with monads or other gimmicks I am not a big fan of. I also considered making a separate finite state machine for managing state. But it turned out we don't need anything more than the coordinator between devices, which internally may or may not manage state.

So, when we want to receive a message from, say, the console source, we have the coordinator listen for a console input event. Then, if we want to process the console input, we pair that event with an action of calling the target Reasoner.js code, passing the console input as a parameter, and listening for a result message. The resulting message may raise a new event coupled with an action of sending the data to the memory device. On the same event, we may also want to write something back to the console; to do that, we can put another action for it next to the action of memorizing the data. The whole cycle could repeat on every console input event. Some other arbitrary device events might also prove useful, like custom reminders, timers, attention requests, ...
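Here is a minimal sketch of that cycle in plain TypeScript terms, just to make it concrete; the device names and the callReasoner stub are placeholders of mine, not any actual Reasoner.js API.

Code: typescript
// Hypothetical sketch of the coordinator loop described above; all device
// names and the callReasoner stub are placeholders, not real Reasoner.js API.
type Message = { source: string; data: unknown };
type Action = { target: string; data: unknown };

type Conductor = {
  onEvent: (msg: Message) => boolean;       // does this conductor handle the event?
  actions: (msg: Message) => Action[];      // actions to conduct in response
};

class Coordinator {
  private devices = new Map<string, (data: unknown) => void>();
  private conductors: Conductor[] = [];

  registerDevice(name: string, send: (data: unknown) => void) {
    this.devices.set(name, send);
  }
  addConductor(c: Conductor) {
    this.conductors.push(c);
  }
  // A source device calls this when it has input; the coordinator routes it.
  receive(msg: Message) {
    for (const c of this.conductors) {
      if (!c.onEvent(msg)) continue;
      for (const action of c.actions(msg)) {
        this.devices.get(action.target)?.(action.data);   // (SEND (TARGET ...) (DATA ...))
      }
    }
  }
}

// Stub standing in for a call into Reasoner.js code.
const callReasoner = (input: unknown): unknown => input;

// Example wiring: console input -> reasoner -> memory + console output.
const coordinator = new Coordinator();
coordinator.registerDevice("memory", data => { /* store data; state lives inside the device */ });
coordinator.registerDevice("console", data => console.log(data));
coordinator.addConductor({
  onEvent: msg => msg.source === "console",
  actions: msg => {
    const result = callReasoner(msg.data);
    return [
      { target: "memory", data: result },
      { target: "console", data: result },
    ];
  },
});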

With the coordinator, I saved myself a bit of the trouble of writing a finite state machine, which would be more complicated than implementing a coordinator that listens and talks to different devices. It would be up to the devices how to manage state. Less is better, right? Especially if there are no functionality trade-offs.

[EDIT]
After some research, I discovered that what I'm planning to implement is exactly the well-known service-oriented programming (SOP) paradigm. Good to know. What I call a "device" is called a "service" in SOP, so I'll align my terminology with SOP terminology. Nevertheless, the process between receiving and sending messages in SOP reminds me very much of term rewriting. Actually, it is exactly term rewriting if we are sending and receiving messages to and from the same coordinator/router. I plan to use the same "variable" notion from Reasoner.js here, of course, but without nondeterminism.
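To picture that term-rewriting analogy (purely an illustrative sketch of mine, with simplified string terms and deterministic rules): routing a result back to the same coordinator until no conductor fires behaves like rewriting a term to its normal form.

Code: typescript
// Illustrative sketch of the "routing back to the same coordinator is term
// rewriting" observation; terms are simplified to plain strings.
type Rule = { match: (term: string) => boolean; rewrite: (term: string) => string };

function rewriteToFixpoint(term: string, rules: Rule[]): string {
  // Each iteration is one "message to self": receive the term, apply a rule,
  // send the result back, and stop when no rule applies (normal form).
  for (;;) {
    const rule = rules.find(r => r.match(term));
    if (!rule) return term;
    term = rule.rewrite(term);
  }
}

// Deterministic rules, i.e. the without-nondeterminism case mentioned above.
const rules: Rule[] = [
  { match: t => t.includes("x+0"), rewrite: t => t.replace("x+0", "x") },
];
console.log(rewriteToFixpoint("x+0+0", rules));   // prints "x"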