AI Model Penetration Testing: Probing LLMs for Prompt Injection & Jailbreaks