You can, though you might run into memory limitations running it on a GPU. There are tweaks to lower the VRAM utilization, but I've been lucky enough not to need them - I do some CG work and ran into VRAM limits there, so I'm on a 3090 with 24GB.
You can always run it on a CPU and utilize your RAM instead if needed, though the training might extend to 24+ hours that way.
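If you end up falling back to CPU with a PyTorch-based script, it's usually just a matter of picking the device up front - a minimal sketch (the `Linear` layer is a stand-in for the actual model, which the real training scripts load for you):

```python
import torch

# Use the GPU if one is available; otherwise fall back to CPU,
# which uses system RAM instead of VRAM (much slower, but it runs).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 8)  # stand-in for the real model
model = model.to(device)

# Inputs have to live on the same device as the model.
x = torch.randn(1, 8, device=device)
y = model(x)
print(y.shape)  # torch.Size([1, 8])
```

The actual textual inversion repos generally expose this as a flag or config option rather than making you edit code.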
Edit: Here's an example of someone successfully using textual inversion - https://www.reddit.com/r/StableDiffusion/comments/wz88lg/i_g...