Stable Diffusion Model — Data Labeling for Text-to-Image Generation
Implemented a Stable Diffusion model to generate images from text prompts, with a focus on text-image data alignment. Labeled and evaluated generated images for prompt adherence and visual fidelity, and ran experiments to improve labeling workflows and optimize data for diffusion-based generation.
• Integrated CLIP to align text and image representations.
• Curated and annotated prompt-image pairs for training.
• Designed custom diffusion noise schedulers and guidance strategies.
• Evaluated outputs using both manual review and automated quality metrics.
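Two of the techniques above can be sketched in a few lines: a CLIP-style prompt-adherence score (cosine similarity between text and image embeddings) and classifier-free guidance, which blends unconditional and text-conditional noise predictions. This is a minimal NumPy illustration with hypothetical function names, not the project's actual implementation; real embeddings and noise predictions would come from a CLIP encoder and a diffusion U-Net.

```python
import numpy as np

def clip_style_score(text_emb, image_emb):
    """Cosine similarity between text and image embeddings.
    Higher values indicate better prompt adherence (hypothetical helper)."""
    t = text_emb / np.linalg.norm(text_emb)
    i = image_emb / np.linalg.norm(image_emb)
    return float(t @ i)

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale=7.5):
    """Combine unconditional and text-conditional noise predictions.
    Larger guidance_scale pushes samples toward the text prompt."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy example with 2x2 "latents" and 2-d "embeddings"
score = clip_style_score(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
print(score)  # 1.0 (identical embeddings -> perfect alignment)

eps = classifier_free_guidance(np.zeros((2, 2)), np.ones((2, 2)))
print(eps)  # every element is 7.5
```

In practice the guidance scale trades diversity for prompt adherence, which is why both manual review and automated metrics were used to judge outputs.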