Pattern based NLP & ASR

  • 66 Replies
  • 289494 Views
*

MikeB

Re: Pattern based NLP
« Reply #60 on: July 10, 2023, 10:20:33 am »
Speech Recognition / Phoneme Extraction Tool:

Updated equal-loudness
Reduced noise floor dynamic step up/down amount
Lowered initial volume pick up
Replaced Inverse Blackman window with a 2x amplified Hann window. Similar quality, large speed gain (see the sketch after this list).
Improved consonant framing (plosive find)
Redesigned consonant identification
Added Vowel Formant 1 Focusing. Frequencies starting at or above 656hz are now solely detected to transition at or under this value (to avoid interference from strong nasal tone).
Reduced nasal tone volume (875-1000hz) by half
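For reference, applying the 2x amplified Hann window is a single pass over each frame before the FFT. A minimal C sketch of that windowing step, assuming a per-frame float buffer (the function name and frame handling are illustrative, not taken from the tool):

Code:
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Apply a Hann window scaled by 2.0 to one frame of samples, in place.
   The 2x gain roughly compensates for the energy the taper removes,
   so windowed frame levels stay comparable to the raw signal. */
static void hann_window_2x(float *frame, size_t n)
{
    if (n < 2)
        return;
    for (size_t i = 0; i < n; i++) {
        double w = 0.5 * (1.0 - cos(2.0 * M_PI * (double)i / (double)(n - 1)));
        frame[i] *= (float)(2.0 * w);
    }
}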

Working:
"kay, kee, kai"
"no" x3 variations

*

MikeB

Re: Pattern based NLP
« Reply #61 on: August 07, 2023, 02:45:27 pm »
Speech Recognition / Phoneme Extraction Tool:

Redesigned consonant plosive-find
Redesigned consonant identification
Updated consonant frequency definitions. Added 2000 (512, 1000, 1500, 2000, 3000, 4000, 5500, 8000)
Consonant viseme upper tone/cheek emphasis changed from boolean to variable.
Updated vowel frequency definitions. Added 2333, 2666, 3500.
Updated vowel identification
Added Vowel Formant 2 Focusing (see the sketch after this list):
    1) F2 frequencies starting above 1000hz cannot transition to 1000hz, to avoid interference from nasal tone, and
    2) F2 frequencies where the last two values are the same can only transition up/down by one.
Improved Vowel F1 transition analysis to include power & average centre, and for flat transitions to re-alias the centre frequency to a defined frequency group. This improves fine frequency transition detection and, for flat frequencies, resilience to errors.
Updated Phoneme Emphasis
Updated equal-loudness
Updated GUI
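A minimal C sketch of the Vowel Formant 2 Focusing rules above, assuming F2 is tracked as an index into a table of frequency groups. The group table, arguments, and clamping are my assumptions, not the tool's actual code:

Code:
#include <stdlib.h>

/* Illustrative F2 frequency groups in Hz; the focusing rules constrain
   movement between indices of this table. Values are assumptions. */
static const int F2_GROUPS_HZ[] = { 1000, 1333, 1666, 2000, 2333, 2666, 3000, 3500 };

/* Apply the two focusing rules to a candidate F2 group index:
   1) a track that started above 1000hz may not fall back to the 1000hz
      group (nasal-tone interference), and
   2) if the last two accepted values were equal, the track may only move
      up or down by one group per frame. */
static int f2_focus(int started_above_1000hz, int prev2, int prev1, int candidate)
{
    if (started_above_1000hz && F2_GROUPS_HZ[candidate] <= 1000)
        candidate = prev1;                                  /* reject drop to 1000hz */

    if (prev2 == prev1 && abs(candidate - prev1) > 1)
        candidate = prev1 + (candidate > prev1 ? 1 : -1);   /* clamp step to +/-1 */

    return candidate;
}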

Working:
"kay, kee, kai"
"no" x3 variations
"yes" x3 variations ("ye")

A lot of range in "no" and "yes" (and by extension all other vowels & consonants) is supported now, and I will be setting up a benchmark to test it further.

Auto Phoneme Emphasis depends on the peak volume in a range of frames. There are three ranges (2 - 5 - 5 frames). Volume detection in the first two frames is important, as many loud plosives (eg "k", "t", "p") occur there and need to be represented accurately; it also accounts for mouth-open, meaning emphasis travels high in advance. The values are precise to the audio and don't use general curves, so if they're wrong they're wrong, but when they're right they're really right. An emphasis multiplier determines the maximum emphasis.
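A minimal C sketch of that emphasis logic, assuming each range's emphasis is simply its peak frame volume scaled by the multiplier (the names and the way the multiplier is applied are illustrative):

Code:
#include <stddef.h>

#define NUM_RANGES 3

/* Frame ranges over which peak volume is taken: 2, then 5, then 5 frames. */
static const int RANGE_LEN[NUM_RANGES] = { 2, 5, 5 };

/* frame_peak_vol: per-frame peak volumes for the phoneme.
   emphasis_out:   one emphasis value per range. */
static void phoneme_emphasis(const float *frame_peak_vol, size_t n_frames,
                             float emphasis_multiplier,
                             float emphasis_out[NUM_RANGES])
{
    size_t f = 0;
    for (int r = 0; r < NUM_RANGES; r++) {
        float peak = 0.0f;
        for (int i = 0; i < RANGE_LEN[r] && f < n_frames; i++, f++)
            if (frame_peak_vol[f] > peak)
                peak = frame_peak_vol[f];
        emphasis_out[r] = peak * emphasis_multiplier;   /* multiplier caps max emphasis */
    }
}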

"Yes" and "No":





*

MikeB

Re: Pattern based NLP & ASR
« Reply #62 on: September 14, 2023, 08:14:34 am »
Lipsync:

Creating a Unity script for Face, Blink, and Viseme morphs to use with Daz3D models.
Face blend shapes (happy, sad/lost, angry, shock), plus four others through combination.
Blink blend shapes. Auto-blink with ranged values. Min, max, stop after open/close (stare/sleep), blink once now, squint.
Viseme blend shapes. Emphasis modifier.

Speech Recognition:

Added Visemes to Speech Recognition engine.
Added/Updated Word Hashing and Word Search.

Working: Viseme lipsync "no" x 3, "yes" x 3. (simulated from speech recognition data)

Lipsync is already as good as, if not better than, current SOTA lip syncing.

The default viseme blendshapes from Daz3D work well. However, they're single frames and some of the phonemes need two (start and end), so for "oh" the "ah" and "w" shapes are used.

Both the timing and the two-frame "oh" in "no" boost quality by quite a lot.
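A minimal sketch of the two-frame viseme idea in C, assuming a linear crossfade between the start and end blendshapes scaled by the emphasis modifier (the struct, indices, and ramp are illustrative, not the actual Unity script):

Code:
/* A phoneme rendered from two single-frame blendshapes,
   e.g. "oh" built from "ah" (start) and "w" (end). */
typedef struct {
    int   start_shape;   /* e.g. index of the "ah" viseme blendshape */
    int   end_shape;     /* e.g. index of the "w" viseme blendshape  */
    float emphasis;      /* 0..1 emphasis modifier                   */
} TwoFrameViseme;

/* t runs 0..1 across the phoneme's duration; the start shape fades out
   while the end shape fades in, both scaled by the emphasis modifier. */
static void viseme_weights(const TwoFrameViseme *v, float t,
                           float *start_weight, float *end_weight)
{
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    *start_weight = (1.0f - t) * v->emphasis;
    *end_weight   = t * v->emphasis;
}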

The audio used is from a few samples of the Google Speech Commands dataset.

"no" x 3, "yes" x 3.


« Last Edit: September 16, 2023, 04:50:36 pm by MikeB »

*

MikeB

Re: Pattern based NLP & ASR
« Reply #63 on: October 15, 2023, 09:59:11 am »
Speech Recognition (benchmark of "yes"/"no"):

C version/Console:
Custom wave file loader for benchmarking
Benchmarking a target word
Noise Floor now based on RMS instead of Volume Peak (see the sketch after this list).
Noise Floor Raise 'step' changed from a fixed value to 2x the current noise floor RMS.
Voice volume minimum changed to 5x the current noise floor RMS (range: 3x - 6x).
Removed one-frame "click/pop" noise.
Combined Vol Peak Normalisation with FFT loading for a speed improvement.
Updated Vowel F1 & F2 transition analysis
Updated Vowel frequency group definitions. First three (281, 375, 468hz...).
Updated Equal Loudness
Updated Consonant identification
Other fixes
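A minimal C sketch of the RMS-based noise floor and voice gate described in the first few items of this list. The 5x voice minimum and the 2x raise step come from the post; the step-down behaviour isn't described, so the floor here simply follows quieter frames, which is a guess:

Code:
#include <math.h>
#include <stddef.h>

/* RMS of one frame of samples. */
static float frame_rms(const float *s, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += (double)s[i] * (double)s[i];
    return (float)sqrt(acc / (double)n);
}

/* Returns 1 if the frame counts as voice. *noise_floor should be
   initialised to a small non-zero RMS value before the first call. */
static int is_voice_frame(const float *s, size_t n, float *noise_floor)
{
    const float voice_mult = 5.0f;   /* voice minimum; the post gives a 3x-6x range */
    float rms = frame_rms(s, n);

    if (rms >= *noise_floor * voice_mult)
        return 1;

    /* Quiet frame: track the floor. The upward step is capped at 2x the
       current floor; downward it simply follows the frame (assumption). */
    if (rms > *noise_floor) {
        float step = *noise_floor * 2.0f;
        *noise_floor = (rms - *noise_floor > step) ? *noise_floor + step : rms;
    } else {
        *noise_floor = rms;
    }
    return 0;
}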

Working:
"no" (405 records):           BEFORE        AFTER
"no"                          9.14%         36.30%   (goal 50%)
"n" or "oh"                   36.5%         86.9%    (goal 90%)

Time: 0-1ms each.

Very tough benchmark. Many samples are fraught with noise (blips, click/pop, static, paper sounds). Changing the noise floor from Peak to RMS alone produced a large change in detection; redesigning consonant identification produced another.

Work on noise elimination is needed, and testing "yes".

*

MikeB

  • Autobot
  • ******
  • 220
Re: Pattern based NLP & ASR
« Reply #64 on: November 07, 2023, 12:51:46 pm »
Speech Recognition (benchmark of "yes"/"no"):

C version/Console:
Added filtering for "knock/tap" noise
Changed Vol Peak Normalisation to use RMS x 2.4 instead of highest Vol Peak. Slightly more accurate versus background noise, faster, and phoneme/viseme loudness is represented better.
Updated framing. Now using 256 samples up to the first plosive in a syllable, then 512 samples after; this helps with Consonant framing (see the sketch after this list).
Replaced the fixed 256-512-512-512-[...] framing with the dynamic method mentioned above.
Updated Consonant Plosive Find step values. Initial min step, and F1 group power from last frame.
Updated Consonant identification
Updated Vowel Formant 1 & 2 Focusing
Updated Equal Loudness (increased/doubled some 4khz resonance numbers to better match spectrograms in Praat. 656hz (550-650), 1333hz (1000-1333), 2000-2333hz (1666-2333), 8000hz was already double).
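A minimal C sketch of the dynamic framing: 256-sample frames until the first plosive in the syllable, then 512-sample frames after. The plosive test and the process callback are placeholders; the real detector uses the step values mentioned above:

Code:
#include <stddef.h>

typedef int  (*plosive_fn)(const float *frame, size_t n);
typedef void (*process_fn)(const float *frame, size_t n);

/* Walk one syllable with 256-sample frames until the first plosive is
   seen, then continue with 512-sample frames. */
static void frame_syllable(const float *samples, size_t n_samples,
                           plosive_fn is_plosive, process_fn process)
{
    size_t pos = 0;
    size_t frame_len = 256;
    int plosive_found = 0;

    while (pos + frame_len <= n_samples) {
        process(samples + pos, frame_len);
        if (!plosive_found && is_plosive(samples + pos, frame_len))
            plosive_found = 1;
        pos += frame_len;
        if (plosive_found)
            frame_len = 512;           /* larger frames after the plosive */
    }
}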

Speech Commands Benchmark:
NO (405 records):
"no" = 44.69% (goal 50%). "n" or "oh" = 81.7% (goal 90%)
Error | "yes" = 0.99%

YES (419 records):
"ye"/"yeah" = 22.43% (goal 50%). "y", "e", or "air" = 67.5% (goal 90%)
Error | "no" = 2.63%

Time: 0-2ms each.

A lot of changes were made, especially in Consonant framing to tell the difference between "y" and "n" better. Small improvements in vowel detection.

Many of the problems in detecting more "yes" (the "y") are noise or volume related. Other issues are in vowel transition detection and in finding the trailing "s" in "yes".

If "yes" improves to >40% I'll then test "on" & "off".

« Last Edit: November 07, 2023, 03:54:50 pm by MikeB »

*

MikeB

Re: Pattern based NLP & ASR
« Reply #65 on: December 21, 2023, 08:59:52 am »
Speech Recognition (benchmark of "yes"/"no"):

C version/Console:
Redesigned Volume Normalisation to improve clarity above 500hz, and per-frame clarity. Samples are now run through a Hilbert filter to reduce bass < 500hz, and Peak Volume Normalisation is at the end of each frame instead of all frames.
The new Hilbert filter and frame-based Peak Vol Normalisation improve these issues:
  1: Deep voices no longer interfere with/reduce volume normalisation of frequencies above 500hz.
  2: The initial frame is now normalised to itself. Other frames are normalised to the peak volume of the current & previous frames (whichever is louder - the peak volume does not adjust lower). As Vowels are detected to follow Consonants and are generally louder, both are now maximally normalised (see the sketch after this list).
Deleted first frame 1.5x boost.
Updated Consonant Formants and identification.
Redesigned Plosive detection. Frame-one plosives ("k"/"p"/"t") must meet a fixed loudness minimum. Frame-two(+) plosives ("n"/"m"/"y"/"r") must meet a minimum of ~50% of the last plosive minimum + ~50% of the power of the last frame.
Updated Vowel F1 & F2 transition detection to gradually favour frequencies towards the end of the sound.
Range tuning. Better detection of plosives in consonants "k", "n", "y": both sudden sound increases and rolling increases are detected at any input sound volume range, noise level, etc.
Other fixes.
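A minimal C sketch of the per-frame Peak Volume Normalisation described in point 2 above, assuming the reference peak is a running maximum that never decreases (function and variable names are illustrative):

Code:
#include <stddef.h>

/* Normalise one frame against the louder of its own peak and the running
   peak of previous frames. The first frame therefore normalises to itself,
   and later (usually louder) vowel frames stay maximally normalised. */
static void normalise_frame(float *frame, size_t n, float *running_peak)
{
    float peak = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float a = frame[i] < 0.0f ? -frame[i] : frame[i];
        if (a > peak)
            peak = a;
    }
    if (peak > *running_peak)
        *running_peak = peak;          /* reference peak only moves up */

    if (*running_peak > 0.0f) {
        float gain = 1.0f / *running_peak;
        for (size_t i = 0; i < n; i++)
            frame[i] *= gain;
    }
}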

Speech Commands Benchmark:
NO (405 records):
"no" = 48.40% (goal 50%). "n", "oh", "o", "uh" = 82% (goal 90%)
Error | "yes" = 0.99%

YES (419 records):
"ye"/"yer"/"yeah" = 28.64% (goal 50%). "y", "e", "er", "air", "ah", "a", "uh" = 70-80% (goal 90%)
Error | "no" = 2.15%

Time: 1-2ms each

+4% increase to "no", and +6% increase to "yes". Consonant identification needs to be reworked as the definitions for "y" and "n" are too close. "kay-key-kai" works.

The benchmark is more of a test of noise rejection and volume normalisation than anything else. Very happy with the robustness of this now.

New consonant identification should see at least a 10%+ improvement to "yes" with less error. If error stays around <= 1% then more alternate vowels can be used and a higher result is possible.

*

MikeB

Re: Pattern based NLP & ASR
« Reply #66 on: March 07, 2024, 07:36:41 am »
I have pages of more work on this but I'm not happy with the results just yet. Both "yes" and "no" are down by 10%.

I'm not yet at the peak of how far pure frequency analysis can go. Currently, spectrogram analysis is improved for frequencies under 1000hz to a high degree, which improves vowel quality/reliability. There is also some improvement to consonant quality/reliability, but there are still problems separating "n" from "y".

With "n" and "y" improved there will be a definite jump in benchmark results.

I still have one more method to try to improve spectrogram quality.

Apart from the spectrogram, only the rules for identifying consonants & vowels need to be improved.

I don't get much done over summer in Australia, but within the next three months I may return to it.

There's a lot of value in keyword spotting that's so efficient. I also have a lipsync video test in mind that uses hundreds of "yes" and "no" from the Google Speech Commands dataset, all live processed and lipsynced, which will be interesting.

 

