Re: Simulating guitar stack
- In reply to: Goran_Mekić : "Re: Simulating guitar stack"
Date: Tue, 30 Jul 2024 20:01:15 UTC
On Sunday, July 28, 2024 9:21:00 PM CEST Goran Mekić wrote:
> As for neural amp modeling, I manually compiled the following plugins:
> * https://github.com/mikeoliphant/neural-amp-modeler-lv2
> * https://github.com/AidaDSP/aidadsp-lv2
>
> They both work, but the first one has only one preset (hopefully others
> will follow as time goes by), and the second one is not much better:
> https://tonehunt.org/models?tags%5B0%5D=aida-x. From such limited
> experience I would say the technology is promising. There is a lot more
> to be done in this field to make it really shine, but what I heard I
> already like more than guitarix and/or rakarrack.

That's not a very high standard; I think both guitarix and rakarrack amps sound quite lifeless. I mean, it's better than nothing, but I always wondered why people use them. Then again, I've seen people shell out a lot of money for expensive Kemper hardware, which also sounds lifeless, so... to each their own :-)

> As for capture, I do have problems. Luckily I found aliki at
> https://kokkinizita.linuxaudio.org/linuxaudio/ and managed to compile
> it and run it, but not yet to capture any IR (I have a clue what the
> problem is; I'll write if I get stuck). Capturing for a neural amp is
> trickier. No matter how I record it with Ardour, I get a different
> number of samples for the input and output files. I tried appending
> silence to both files (well, tracks really, as it's Ardour I'm using
> for capturing) and aligning them, listening to what jack_iodelay tells
> me, resetting latencies ... everything I could think of, but the in
> and out files always have a different number of samples. My reamping
> works great and I can hear my gear just fine, but producing the proper
> file(s) is tricky.
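As an aside on the latency question: if you record a short test take that starts with a single loud click, you can estimate the round-trip offset between the DI and reamped files yourself instead of relying only on jack_iodelay. This is only a sketch with invented file names, using just the Python standard library; the click positions below are synthetic demo data, not real measurements.

```python
import struct
import wave

def peak_index(path):
    """Index of the loudest sample in a mono 16-bit WAV (crude click finder)."""
    with wave.open(path, "rb") as w:
        data = struct.unpack("<%dh" % w.getnframes(),
                             w.readframes(w.getnframes()))
    return max(range(len(data)), key=lambda i: abs(data[i]))

def write_click(path, click_at, total, rate=48000):
    """Demo helper: silence with one full-scale sample at `click_at`."""
    frames = [0] * total
    frames[click_at] = 32767
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % total, *frames))

# Synthetic stand-ins for a DI track and its reamped return:
write_click("di_click.wav", 1000, 48000)
write_click("reamp_click.wav", 1137, 48000)  # pretend 137 samples of latency

offset = peak_index("reamp_click.wav") - peak_index("di_click.wav")
print(offset)  # 137
```

With a real capture you would record the click through the reamp chain, and the offset tells you how many samples to shift before trimming both files to equal length.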
> The procedures I followed:
> * neural amp modeler:
> https://colab.research.google.com/github/sdatkinson/NAMTrainerColab/blob/main/notebook.ipynb
> * aida:
> https://colab.research.google.com/github/AidaDSP/Automated-GuitarAmpModelling/blob/aidadsp_devel/AIDA_X_Model_Trainer.ipynb
> (it has some errors currently, but reports different sample sizes)
>
> I stem-exported Ardour tracks with "apply track processing" off,
> otherwise all files are saved as stereo. If anyone can recommend how
> to capture the sound correctly for training either of the two AI
> implementations, I would be really grateful. I tried to record my
> studio FX processor, which introduces additional latency, but I also
> set latencies according to jack_iodelay. Please advise if you have any
> ideas!

Ardour does support exact position and length parameters for recordings inside tracks. Try the context menu; it's buried there somewhere. You may have to set the unit to samples or some other very precise timestamp.

As an alternative, you could stem-export the tracks or copy the WAV files from the Ardour project folder directly, and use an external editor. I suppose a program like Audacity should be able to trim such files to exact sample lengths.

Good luck!

Florian
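If an external editor feels like overkill for sample-exact trimming, a few lines of scripting can do it too. This is a minimal sketch, not a vetted tool: it assumes mono 16-bit WAV files, the file names are invented, and the two demo inputs are generated on the spot so the example is self-contained.

```python
import struct
import wave

def trim_to_shorter(path_a, path_b, out_a, out_b):
    """Rewrite both WAV files truncated to the shorter frame count."""
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        n = min(a.getnframes(), b.getnframes())
        for src, dst in ((a, out_a), (b, out_b)):
            with wave.open(dst, "wb") as w:
                w.setparams(src.getparams())   # copy channels/width/rate
                w.writeframes(src.readframes(n))
    return n

def make_silence(path, frames, rate=48000):
    """Demo helper: mono 16-bit WAV of the given length, all zeros."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % frames, *([0] * frames)))

# Synthetic DI and reamp captures with mismatched lengths:
make_silence("di_demo.wav", 48000)      # exactly 1 s
make_silence("reamp_demo.wav", 48123)   # 1 s plus a latency tail

n = trim_to_shorter("di_demo.wav", "reamp_demo.wav",
                    "di_trim.wav", "reamp_trim.wav")
print(n)  # 48000
```

After aligning the files (e.g. by the jack_iodelay offset), trimming both to the shorter length should give the trainers input and output files with identical sample counts.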