The rapid evolution of generative AI tools in software engineering has opened new possibilities for automating coding tasks. Test-Driven Development (TDD) is a well-established methodology that enhances code quality by enforcing test creation before implementation. This study examines the integration of generative AI into a TDD-inspired workflow. In an experimental framework, code generated with and without predefined test cases is evaluated, and an iterative correction cycle allows the model to refine its solutions based on failing tests. Results show that providing test cases as input leads to more accurate code generation, while the iterative process yields further improvements in correctness. These outcomes highlight both the promise and the practical challenges of adopting generative AI within structured development practices.
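To make the workflow concrete, the following is a minimal sketch of what such an iterative correction cycle could look like, assuming assert-based test cases and a `generate` callable that wraps the model; the function names, prompt format, and feedback mechanism are illustrative assumptions, not the implementation evaluated in the study.

```python
import traceback
from typing import Callable, Tuple


def run_tests(code: str, test_code: str) -> Tuple[bool, str]:
    """Execute candidate code and its assert-based tests in a shared namespace.

    Returns (passed, feedback), where feedback holds the failure trace used
    to guide the next refinement attempt.
    """
    namespace: dict = {}
    try:
        exec(code, namespace)       # load the candidate implementation
        exec(test_code, namespace)  # run the predefined test cases against it
        return True, ""
    except Exception:
        return False, traceback.format_exc()


def tdd_correction_cycle(
    generate: Callable[[str], str],  # assumed interface: prompt -> candidate code
    task: str,
    test_code: str,
    max_iterations: int = 3,
) -> str:
    """TDD-inspired loop: prompt the model with the task and the predefined
    tests, then feed failing-test output back until the tests pass or the
    iteration budget is exhausted."""
    prompt = f"{task}\n\nThe solution must pass these tests:\n{test_code}"
    code = generate(prompt)
    for _ in range(max_iterations):
        passed, feedback = run_tests(code, test_code)
        if passed:
            break
        # Append the previous attempt and its failure trace so the model
        # can refine the solution in the next iteration.
        code = generate(
            f"{prompt}\n\nPrevious attempt:\n{code}\n\nTest failure:\n{feedback}"
        )
    return code
```

In this sketch, supplying `test_code` up front corresponds to the "with predefined test cases" condition, while the loop over `run_tests` and regeneration corresponds to the iterative correction cycle described above.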