Garak probes an LLM for ways it can fail or be made to behave undesirably, checking for hallucination, data leakage, prompt injection, misinformation, toxicity, jailbreaks, and other vulnerabilities. The tool is free, and it is under active development, with its capabilities extended regularly. Garak is a command-line utility that runs on Linux and macOS and installs directly from PyPI. The pip package is updated regularly, and because Garak has strict dependency requirements, installing it in its own Conda environment is recommended.

To start a scan, Garak needs only the model to be analyzed; by default it runs every available probe against that model, applying each probe's recommended vulnerability detectors. During the scan, a progress bar is shown for each loaded probe, and on completion Garak reports an evaluation of every probe's findings across each detector. This makes Garak both a powerful assessment tool and a valuable resource for researchers and developers working to improve the safety and reliability of LLMs.
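
A minimal installation sketch follows. The dedicated Conda environment mirrors the recommendation above; the exact Python version pin is illustrative, not prescribed by the source:

    # create and activate an isolated Conda environment (version pin is illustrative)
    conda create --name garak python=3.12
    conda activate garak

    # install the latest Garak release from PyPI
    python -m pip install -U garak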
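
Once installed, a default scan needs only a target model. The sketch below uses an OpenAI-hosted model as an example; the model type and name are illustrative, and the API key is assumed to be supplied through the environment variable that Garak's OpenAI generator reads:

    # export credentials for the target model's API (key value is a placeholder)
    export OPENAI_API_KEY="sk-..."

    # run all probes against the model, using each probe's recommended detectors
    python -m garak --model_type openai --model_name gpt-3.5-turbo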
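
A run can also be narrowed to specific vulnerability classes rather than the full default sweep. The --list_probes flag enumerates what is available, and a probe family can then be selected by name; the prompt-injection probe named below is one that ships with Garak:

    # enumerate the available probes
    python -m garak --list_probes

    # run only the prompt-injection probes against the same model
    python -m garak --model_type openai --model_name gpt-3.5-turbo --probes promptinject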