Job Batching Progress with Laravel
18. Cleaning up after failure

Transcript

00:00
OK, let's take a look at what happens when one of our jobs fails. So let's head back over to our InstallNginx job and pull this exception back in. I'm going to throw it just before we sleep, just so it's a little bit quicker.
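The job isn't shown in this excerpt, so the class shape and exception message below are assumptions; a minimal sketch of forcing the failure might look like this:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Batchable;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class InstallNginx implements ShouldQueue
{
    use Batchable, Dispatchable, InteractsWithQueue, Queueable;

    public function handle(): void
    {
        // Temporarily throw before the simulated work so the job
        // fails quickly while we test the batch's cleanup behaviour.
        throw new \Exception('Failed to install nginx');

        sleep(5); // the simulated provisioning work from earlier episodes
    }
}
```

Because the job uses the `Batchable` trait, the thrown exception marks the whole batch as having failures.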
00:13
Let's go over and either restart or start our queue worker. So let's go ahead and run php artisan queue:work. Then let's hop over to the UI and hit Create Server. OK, so once this first job completes, we'll see the nginx one fail.
00:26
And we're just stuck in this state now. So all of the tasks are still in the database. What happens if we now want to destroy our server? Well, when we click on this, it's going to redirect us,
00:38
because obviously we have a redirect in there. If we head over to the database, you can see that the server and the tasks are still hanging around. Now, the reason this happens is we have failed jobs within our batch. Remember, we set up the condition a little bit earlier over in our observer
00:54
to say that if this didn't have failed jobs, then we go ahead and delete the server. The reason that we did this is because if we get a failed job, we don't want to delete everything. We don't want to delete the server because we want the user to be able to see
01:09
over on that server what has gone wrong. OK, this is still hanging around in the database. So let's grab the ID of this and just put this directly into here. And let's look at what happens now when we need to destroy this server.
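The observer itself isn't shown in this excerpt, so the surrounding names here are assumptions; the idea is that the batch's finally callback only cleans up when nothing failed:

```php
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

// Inside the observer hook that dispatches the batch
// ($jobs and $server are assumed from the walkthrough).
Bus::batch($jobs)
    ->finally(function (Batch $batch) use ($server) {
        // Only delete automatically when every job succeeded.
        // On failure we keep the server (and its tasks) around
        // so the user can see in the UI what went wrong.
        if (! $batch->hasFailures()) {
            $server->delete();
        }
    })
    ->dispatch();
```

`hasFailures()` is part of Laravel's `Batch` object, so the check itself works regardless of how your observer is structured.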
01:20
Effectively, we're just going to have to manually destroy it. So over in ShowServer, when we destroy this server, let's also delete the server model at this point. And that will cover both cases.
01:33
So let's grab this server, call delete on it, and then redirect. OK, let's see the difference. I'm going to hit Destroy Server, and that gets deleted.
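ShowServer isn't shown in this excerpt, so the method name, the redirect target, and how the component holds the server are all assumptions; the destroy action sketched here simply deletes the model before redirecting:

```php
// In the ShowServer component (names assumed from the walkthrough).
public function destroy(): void
{
    // Deleting the server now covers both cases: batches that
    // finished cleanly and batches that were left behind with
    // failed jobs. Related task records go with it, via whatever
    // cleanup was wired up earlier in the observer.
    $this->server->delete();

    $this->redirect('/servers'); // redirect target is an assumption
}
```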
01:40
Let's head over to the database. And of course, that's been deleted as well. So feel free to tweak any of the callbacks that we have within our observer just here and change this up based on what you need to do.
01:54
We're being very specific here with this example. But now that you know about each of these callbacks, and about how to grab a batch and work with it, you can tweak this to fit the flow of whatever you're processing.

Episode summary

In this episode, we dive into what happens when one of our jobs fails during processing, specifically using our install nginx job as an example. We intentionally introduce an exception so we can observe the failure scenario in action.

You'll see that when a job in a batch fails, all the related tasks and the server itself stick around in the database. We don't immediately wipe everything out—this is by design. It's important for users to be able to see what went wrong if a job didn't complete successfully. We explore how the observer logic is set up to prevent deleting the server if there are failed jobs, helping users diagnose problems.

Next, we look at what happens when you try to destroy a server with failed jobs still attached. The UI will let you trigger a destroy action, and, with a quick dive into the database, you'll notice all associated records are only deleted once you manually destroy the server. We show you how to cover both scenarios—successful and failed jobs—by tweaking the server deletion logic so it's handled gracefully.

By the end of this video, you'll understand how job failures affect cleanup logic, why we don't always delete everything immediately, and where in your observer you can adjust this process to match your application's needs. Now that you’re familiar with handling batches and observer callbacks, you’ve got the tools to make this work in whatever workflow you’re building!
