In this episode, we're taking the next step with our queuing setup by actually queuing the job that checks our endpoints. First up, we do a bit of housekeeping by clearing out previous checks from the database so we can see our changes more clearly.
Next, we jump into the endpoint check job and add the ShouldQueue interface, so that when the job is dispatched it's pushed onto our queue instead of running immediately. A quick test with php artisan short-schedule:run dispatches the jobs, and we watch them pop up in the jobs table. However, there's a problem: jobs pile up very quickly, because nothing stops the same check from being queued again every second (the endpoint's next check date isn't updated until after the job actually runs).
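The change itself is small. A rough sketch of what the job might look like after this step, assuming the class is called CheckEndpoint and receives an Endpoint model (the names here are illustrative, not necessarily the exact ones from the series):

```php
<?php

namespace App\Jobs;

use App\Models\Endpoint;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Implementing ShouldQueue is what tells Laravel to push the job
// onto the queue when dispatched, rather than running it inline.
class CheckEndpoint implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public Endpoint $endpoint)
    {
    }

    public function handle(): void
    {
        // Perform the HTTP check, record the result, and only then
        // push the endpoint's next check date forward — which is
        // exactly why unprocessed jobs keep getting re-queued.
    }
}
```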
We dig into why this happens: the job only updates the next check date when it actually processes, so as long as a queued job is still waiting, the scheduler happily queues another one, and we end up with duplicate jobs for the same endpoint.
To fix this, we introduce job uniqueness with Laravel's ShouldBeUnique interface. By giving each job a unique ID based on the endpoint's ID, we ensure only one job per endpoint can sit in the queue at a time. As a further safeguard, we make the unique ID more specific (like 'endpoint_1'), in case other jobs use bare IDs that would otherwise collide.
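In code, this step might look like the following sketch (again assuming a CheckEndpoint job; Laravel's uniqueId method is the documented hook for ShouldBeUnique):

```php
<?php

namespace App\Jobs;

use App\Models\Endpoint;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;

class CheckEndpoint implements ShouldQueue, ShouldBeUnique
{
    public function __construct(public Endpoint $endpoint)
    {
    }

    // Laravel acquires a lock on this value before queuing the job;
    // while a job with the same ID is pending, duplicates are dropped.
    // The 'endpoint_' prefix keeps the key from clashing with other
    // job classes whose unique IDs are also bare model IDs.
    public function uniqueId(): string
    {
        return 'endpoint_' . $this->endpoint->id;
    }
}
```

Note that uniqueness is enforced from dispatch until the job completes (or its unique lock expires), which is precisely the window in which our scheduler was piling up duplicates.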
After making these adjustments, we run our scheduler and queue worker together and watch the database to confirm that we're now processing only one check per endpoint at a time, just as intended. The timestamps might lag by a second or two (which is normal), but everything runs smoothly with no duplicate checks.
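For reference, running the two processes side by side typically means two terminals, something like this (short-schedule:run comes from the short-schedule package used earlier in the series; queue:work is Laravel's standard worker command):

```shell
# Terminal 1: dispatch the endpoint check jobs every second
php artisan short-schedule:run

# Terminal 2: process queued jobs as they arrive
php artisan queue:work
```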