My HAL Rig

  • 8 Replies
  • 750 Views

frankinstien

  • Starship Trooper
  • 368
    • Knowledgeable Machines
My HAL Rig
« on: May 18, 2020, 01:33:38 am »
I believe the neural network approach isn't the most efficient use of digital resources, because it effectively amounts to a linear search through a problem or knowledge domain no matter how much pruning of the network you do. So I'm using a good deal of RAM, 128GB, and two 10-core Xeons. This doesn't mean I don't use GPUs; I do, a Vega 56, primarily to work with fuzzy logic and fast index calculations, as well as to process video and audio data. Below is a diagram of how I organize memory across RAM, SSD, and hard drives.



The SSD is a 1 TB drive and it stores data from a NoSQL database I engineered that uses O(1) lookup indexing. The hybrid hard drive is more of a backup of the SSD, but it will also store very large datasets like long video. You'll notice a block called "Cross Domain Harness". I really wanted to avoid a lot of serializing and deserializing between processes, so the harness allows the dynamic injection of various programs under a single domain. When I prototype, I haven't necessarily integrated all the pieces yet, and I don't want to deal with the latency of serialization. This way I can pass data by reference as long as the programs import the datatypes used. It makes troubleshooting a lot easier, particularly when the amount of data loaded is in the tens of GBs.
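
To make the harness idea concrete, here's a rough Python sketch of the shape of it; the component name and the process() entry point are placeholders made up for illustration, not the actual interface:

Code:
import importlib

class Harness:
    """Loads prototype components into one process so they can share data by reference."""

    def __init__(self):
        self.components = {}

    def inject(self, module_name):
        # Dynamically load a component module into this single process.
        self.components[module_name] = importlib.import_module(module_name)

    def run(self, module_name, data):
        # Hand the data to a component by reference; no copies, no serialization.
        return self.components[module_name].process(data)

# harness = Harness()
# harness.inject("vision_prototype")                     # hypothetical component
# result = harness.run("vision_prototype", big_dataset)  # big_dataset is never serialized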

The machine was built three years ago, so upgrading is starting to seem like a good idea since AMD's processors are reasonably priced; RAM prices, however, haven't dropped by much. If I did upgrade, the Ryzen 9 3900X 12-core would give about a 50% improvement in performance. Because my current machine is a dual-socket Xeon board, it has 16 memory slots, which allows me to use lower-cost 8GB sticks. In fact, I'm debating whether it's worth putting up the money for the newer Ryzen or just getting 16GB sticks to work with larger datasets.


infurl

  • Administrator
  • Millennium Man
  • 1049
  • Humans will disappoint you.
    • Home Page
Re: My HAL Rig
« Reply #1 on: May 18, 2020, 03:18:20 am »
That's a pretty good setup you've got there. I'd hold off on upgrading your hardware until you can at least double your performance. In the meantime, work on improving your algorithms or implementation, because ultimately that's where the most gains are to be made. When it's so easy to keep getting faster hardware, it's easy to forget that.

What exactly can you do with it at the moment? You can store an awful lot of data, is there any particular problem that you are trying to solve?


frankinstien

  • Starship Trooper
  • 368
    • Knowledgeable Machines
Re: My HAL Rig
« Reply #2 on: May 18, 2020, 02:49:57 pm »
What exactly can you do with it at the moment? You can store an awful lot of data, is there any particular problem that you are trying to solve?

The objective is to build the infrastructure for episodic memories: to process and store information across a spectrum of sensory inputs so that it can be freely associated, yet remain easily consumable for digital processing.


Yervelcome

  • Roomba
  • 16
  • Not Morgan Freeman
    • Artificial Knowledge Workers
Re: My HAL Rig
« Reply #3 on: May 18, 2020, 04:22:34 pm »
Do you have a write up where I can read about it?


frankinstien

  • Starship Trooper
  • 368
    • Knowledgeable Machines
Re: My HAL Rig
« Reply #4 on: May 18, 2020, 06:29:17 pm »
Do you have a write up where I can read about it?

I don't have any formal material fully explaining the software's concepts right now, but it's a "To Do" for the near future.


infurl

  • Administrator
  • Millennium Man
  • 1049
  • Humans will disappoint you.
    • Home Page
Re: My HAL Rig
« Reply #5 on: May 18, 2020, 10:15:31 pm »
Do you have a write up where I can read about it?
I don't have any formal material fully explaining the software's concepts right now, but it's a "To Do" for the near future.

Since you've piqued our interest, I daresay the coming barrage of questions will prompt you to make a start very soon.  ;D

I'm curious about the custom solution that you're developing. You described it as NoSQL and having O(1) access characteristics, which implies hashing. You said you were using your GPU for indexing, which I understand to mean that you are using it for generating your hash keys. I can guess why you would want to do that if you're interested in turning media like video into associative memory.

How about the symbolic side of things? Have you done anything with relational databases or triple stores such as Resource Description Framework (RDF)? What about ontologies?


frankinstien

  • Starship Trooper
  • 368
    • Knowledgeable Machines
Re: My HAL Rig
« Reply #6 on: May 19, 2020, 02:15:35 am »
I'm curious about the custom solution that you're developing. You described it as NoSQL and having O(1) access characteristics, which implies hashing. You said you were using your GPU for indexing, which I understand to mean that you are using it for generating your hash keys. I can guess why you would want to do that if you're interested in turning media like video into associative memory.

Yes, hashing is used, but it's a bit more exotic than typical applications since it has to work with fuzzy sets of data. For video, the approach is to break the visual data into manageable pieces that resolve into generalizations, and those generalizations allow for associations.
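
As a generic illustration of hashing that tolerates fuzzy data, here's a plain bucket-style locality hashing sketch in Python; it shows the general idea only, not my actual scheme, and the feature vectors are made up:

Code:
def fuzzy_key(features, step=0.25):
    # Quantize a feature vector so that nearby vectors share the same bucket key.
    return tuple(round(f / step) for f in features)

index = {}

def store(features, item):
    index.setdefault(fuzzy_key(features), []).append(item)

def lookup(features):
    return index.get(fuzzy_key(features), [])

store([0.52, 1.97, 0.10], "clip_A")   # made-up feature vectors
store([0.49, 2.02, 0.12], "clip_B")   # close to clip_A, so it lands in the same bucket
print(lookup([0.50, 2.00, 0.11]))     # prints ['clip_A', 'clip_B']; the lookup is a dict hit, so O(1)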

How about the symbolic side of things? Have you done anything with relational databases or triple stores such as Resource Description Framework (RDF)? What about ontologies?
The ontological framework is custom as well; it uses NoSQL for long-term storage and is structurally a graph database. The entire approach is object-oriented and allows separate threads to crawl through knowledge domains, while updates propagate through the graph instantly. It was inspired by Roget's Thesaurus. It's used to evaluate speech or text, where that data can be analyzed for the kinds of bias generalizations that can be derived from the hierarchical nodes. Words are endowed with properties that allow validating a word's use within a sentence. This helps in quantifying the logic, context, and meaning of a sentence or group of sentences.
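
A toy Python sketch of the graph-of-concepts idea, with invented node names and properties; the real structure carries far more than this:

Code:
class Concept:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)   # links to broader categories, held by reference
        self.properties = {}

    def all_properties(self):
        # Walk the live parent references, so a change to a category is seen immediately.
        props = {}
        for parent in self.parents:
            props.update(parent.all_properties())
        props.update(self.properties)  # a node's own values override inherited ones
        return props

animal = Concept("animal")
dog = Concept("dog", parents=[animal])

animal.properties["animate"] = True    # update the category...
print(dog.all_properties())            # ...and "dog" reflects it at once: {'animate': True}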


infurl

  • Administrator
  • Millennium Man
  • 1049
  • Humans will disappoint you.
    • Home Page
Re: My HAL Rig
« Reply #7 on: May 19, 2020, 03:28:18 am »
Yes, hashing is used, but it's a bit more exotic than typical applications since it has to work with fuzzy sets of data. For video, the approach is to break the visual data into manageable pieces that resolve into generalizations, and those generalizations allow for associations.

How does that differ from, say, fingerprinting a photo or music to find duplicates, or Google's ability to identify copyrighted material in YouTube videos?

The ontological framework is custom as well; it uses NoSQL for long-term storage and is structurally a graph database. The entire approach is object-oriented and allows separate threads to crawl through knowledge domains, while updates propagate through the graph instantly. It was inspired by Roget's Thesaurus. It's used to evaluate speech or text, where that data can be analyzed for the kinds of bias generalizations that can be derived from the hierarchical nodes. Words are endowed with properties that allow validating a word's use within a sentence. This helps in quantifying the logic, context, and meaning of a sentence or group of sentences.

It sounds like you are doing more than just sentiment analysis then. Do you have some examples of inputs and outputs to illustrate what you can use it for?


frankinstien

  • Starship Trooper
  • 368
    • Knowledgeable Machines
Re: My HAL Rig
« Reply #8 on: May 19, 2020, 06:43:54 pm »
How does that differ from, say, fingerprinting a photo or music to find duplicates, or Google's ability to identify copyrighted material in YouTube videos?

I'm not sure what Google is doing to uniquely identify visual images to protect copyrights, but the method I'm using is based on fractal dimensions.
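
For reference, the textbook box-counting estimate of fractal dimension looks like this in Python; it shows the general idea rather than necessarily the exact measure I use:

Code:
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(1, 2, 4, 8, 16)):
    # Count the boxes of each size that contain any foreground pixel, then take
    # the slope of log(count) against log(1 / box size).
    img = np.asarray(binary_image, dtype=bool)
    counts = []
    for s in box_sizes:
        h = img.shape[0] // s * s
        w = img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    sizes = np.array(box_sizes, dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

square = np.ones((64, 64), dtype=bool)      # a filled square should come out near 2.0
print(box_counting_dimension(square))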

It sounds like you are doing more than just sentiment analysis then. Do you have some examples of inputs and outputs to illustrate what you can use it for?

I will be putting together some examples and videos for my website and I'll post them on this forum as well.

 

