In this episode, we dive into what happens when one of our jobs fails during processing, using our "install nginx" job as an example. We intentionally throw an exception so we can observe the failure scenario in action.
You'll see that when a job in a batch fails, all the related tasks and the server itself stick around in the database. We don't immediately wipe everything out; that's by design, so users can see what went wrong when a job doesn't complete successfully. We then explore how the observer logic prevents deleting the server while failed jobs exist, which helps users diagnose problems.
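The guard described above can be sketched roughly as follows. This is not the course's actual code: it models a Laravel-style observer's "deleting" callback in plain Python, and the `Server`, `Task`, `on_deleting`, and `force` names are all hypothetical stand-ins.

```python
# Hedged sketch, assuming hypothetical Server/Task records: an observer-style
# guard that refuses automatic cleanup while any associated task has failed.
from dataclasses import dataclass, field

@dataclass
class Task:
    status: str  # e.g. "finished" or "failed"

@dataclass
class Server:
    tasks: list = field(default_factory=list)

    def has_failed_tasks(self) -> bool:
        return any(t.status == "failed" for t in self.tasks)

def on_deleting(server: Server, *, force: bool = False) -> bool:
    """Analogue of an observer's deleting callback: skip cleanup when a
    task failed, so the user can still inspect what went wrong."""
    if server.has_failed_tasks() and not force:
        return False  # keep the server and its tasks around for debugging
    server.tasks.clear()  # cascade: remove associated task records
    return True
```

The `force` flag stands in for the explicit, user-initiated destroy covered next; only that path is allowed to clear out failed records.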
Next, we look at what happens when you try to destroy a server that still has failed jobs attached. The UI lets you trigger a destroy action, and a quick look at the database shows that the associated records are only deleted once you manually destroy the server. We then cover both scenarios, successful and failed jobs, by tweaking the server deletion logic so cleanup is handled gracefully either way.
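A minimal sketch of that manual destroy path, again not the video's actual code: rows are plain dicts standing in for database tables, and `destroy_server`, `servers`, and `tasks` are hypothetical names.

```python
# Hedged sketch: an explicit destroy that cascades to associated records,
# whether the server's jobs finished successfully or failed.
def destroy_server(server_id, servers, tasks):
    """Delete the server row and every task row that references it."""
    tasks[:] = [t for t in tasks if t["server_id"] != server_id]
    servers.pop(server_id, None)

servers = {1: {"name": "web-1"}}
tasks = [
    {"server_id": 1, "status": "failed"},    # the failed install job's task
    {"server_id": 1, "status": "finished"},
]
destroy_server(1, servers, tasks)
# after the manual destroy, the server and all its tasks are removed
```

The design point mirrors the episode: failed records survive automatic cleanup for diagnosis, but a deliberate destroy removes everything in one pass.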
By the end of this video, you'll understand how job failures affect cleanup logic, why we don't always delete everything immediately, and where in your observer you can adjust this behavior to match your application's needs. Now that you're familiar with handling batches and observer callbacks, you've got the tools to make this work in whatever workflow you're building!