Nvidia Hype

  • 1 Replies
  • 63042 Views

frankinstien

  • Replicant
  • 653
    • Knowledgeable Machines
Nvidia Hype
« on: December 06, 2023, 09:52:57 pm »
Currently an H100 at its lowest price is about $35,000! New A100s are about $6,200, and used I've seen them as low as $4,500, but the A100 can't do fp8. The RTX 4090 does support fp8, but its compute capability is 8.9 while the H100's is 9.0, so the CUDA libraries only enable fp8 for the H100, and Nvidia seems slow about opening up fp8 for 8.9. OK, there are some other open-source libraries that do support fp8 on the RTX 4090, so you can get by. So here's the problem: the Nvidia specs for the RTX 4090 state that it supports fp8 and can do a petaflop+ in performance. A white paper was written up using an RTX 4090 that was overclocked, and the best they could get with fp8 was 660 TFLOPs. This is frustrating since I almost bought an RTX 4090 because Nvidia states it can do 1 petaflop+, but apparently that is a lie... :tickedoff:
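
If you want to see what a card actually reports, here's a minimal sketch (assuming a recent PyTorch build with CUDA; the float8 dtype only exists in newer releases) that prints the compute capability and whether a float8 dtype is exposed at all:

import torch

if torch.cuda.is_available():
    # Compute capability 8.9 is Ada (RTX 4090), 9.0 is Hopper (H100).
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")

    # float8_e4m3fn was only added in recent PyTorch versions, so guard the lookup.
    print("float8 dtype available in torch:", hasattr(torch, "float8_e4m3fn"))
else:
    print("No CUDA device visible")

Whether a given library will actually run fp8 matmuls on an 8.9 card is a separate question; this only tells you what the hardware and framework report.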

The fp16 performance using the Tensor cores of the RTX 30 series, specifically the RTX 3090 Ti, is 320 TFLOPs, and the latest prices for a used RTX 3090 Ti are $720 to $1,200. Two RTX 3090 Tis at the low end would be $1,440, and you get at least 48GB of memory between the two, so as an enthusiast it might be a better route.
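
Something like this would let you actually pool that 48GB across the two cards instead of being limited to one. A minimal sketch, assuming the Hugging Face transformers and accelerate packages are installed and using a placeholder checkpoint name (swap in whatever model you actually load):

import torch
from transformers import AutoModelForCausalLM

# device_map="auto" shards the layers across every visible GPU,
# so a model that doesn't fit in one card's 24GB can still load across 2x 3090 Ti.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",   # placeholder checkpoint, not a recommendation
    torch_dtype=torch.float16,     # fp16 keeps you on the 3090 Ti tensor cores
    device_map="auto",
)
print(model.hf_device_map)         # shows which layers landed on which card

That only pools memory for inference-style loading; training across two cards is a different setup (DDP, FSDP, etc.).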

The RTX 4090s go as low as $1,699 to $2,800 if you go through the Nvidia shop links.
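
Just to put the numbers side by side, here's a quick back-of-envelope comparison; all the prices and TFLOPs figures are the rough estimates quoted above, not benchmarks I've run:

# Rough cost comparison using the figures quoted in this thread.
options = {
    "2x RTX 3090 Ti (used, fp16)": {"price": 2 * 720, "tflops": 2 * 320, "vram_gb": 48},
    "1x RTX 4090 (fp8, measured)": {"price": 1699, "tflops": 660, "vram_gb": 24},
}
for name, o in options.items():
    print(f"{name}: ${o['price']}, {o['tflops']} TFLOPs, "
          f"${o['price'] / o['tflops']:.2f}/TFLOP, {o['vram_gb']} GB")

At the low end the two used 3090 Tis come out roughly even on raw throughput and well ahead on memory per dollar.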


8pla.net

  • Trusty Member
  • Eve
  • 1307
  • TV News. Pub. UAL (PhD). Robitron Mod. LPC Judge.
    • 8pla.net
Re: Nvidia Hype
« Reply #1 on: December 06, 2023, 10:04:52 pm »
That's a great idea! It would certainly be beneficial to get two RTX 3090 ti cards at the lower end of the price range. You'd get double the memory, and if you have the right setup to support two cards, this could bring a significant boost in performance. Be sure to do your research to ensure it makes sense for your particular needs.
My Very Enormous Monster Just Stopped Using Nine

 

