[Solved] SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry for key 'failed_jobs_uuid_unique'

Updated: 13th June 2024
Tags: php laravel

If you are using long-running jobs you may encounter something like this:

SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry  for key 'failed_jobs_uuid_unique'

The problem is that if your job's timeout is bigger than 'retry_after' => 90 in the queue.php config file, the queue worker will consider the job stuck and run it again even though it hasn't failed. (Yeah, that is very strange.) When both copies of the job then fail, each of them tries to insert a row with the same job UUID into the failed_jobs table, which has a unique index on uuid, and you get the duplicate entry error.
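To make the mismatch concrete, here is roughly what the two conflicting settings look like side by side (the numbers are only examples; 6000 is the long job's timeout used later in this post):

// config/queue.php – after 90 seconds the worker thinks the job is stuck and hands it out again
'retry_after' => 90,

// app/Jobs/MyLongJob.php – but the job itself is allowed to run much longer than that
public int $timeout = 6000;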

There are three ways to solve this problem:

Method 1

If all your jobs are small, you can make retry_after bigger than the biggest timeout among your jobs.
Example: you have

job1 has timeout 60
job2 has timeout 70
job3 has timeout 80
job4 has timeout 90

Then you set 'retry_after' => 91 in queue.php:

// config/queue.php
// ....
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 91,
    'after_commit' => false,
],
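Keep in mind the worker-level timeout as well: the Laravel docs recommend keeping the queue:work --timeout value at least several seconds shorter than retry_after. So for the config above, a matching worker command could look like this (85 is just an example value below 91):

// per-job timeout stays below retry_after = 91
// php artisan queue:work database --timeout=85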

Method 2

In the job class with the big timeout, add the line public $tries = 1;. This disables automatic retrying. This is the approach I use.

Additionally, you can add a try/catch and call $this->release(); in the catch block. If you go for it, add some counter so you don't retry it forever. Not tested yet.

Example:

<?php

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class MyLongJob implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 1;      // set tries to 1 so it will not run again after retry_after expires
    public int $timeout = 6000; // this job is allowed to run much longer than retry_after

    public function __construct()
    {
        //....
    }
}

If you want to retry it manually, you can use something like this:

<?php

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class MyLongJob implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    // Note: we do this only because this job is very long and the others are fast,
    // so we don't want to change the default tries or raise retry_after in queue.php for all jobs.

    // $tries has to cover the manual retries below (1 initial attempt + 2 releases),
    // otherwise the released job is marked as failed with a MaxAttemptsExceededException.
    // That also means retry_after should still be bigger than $timeout for this job
    // (or put it on its own connection, see Method 3), or the overlap problem comes back.
    public int $tries = 3;
    public int $timeout = 6000;

    protected $maxRetries = 2; // maximum additional retries

    public function handle()
    {
        try {
            // your long job that can fail

        } catch (\Exception $e) {
            // Use attempts() from InteractsWithQueue as the counter: a released job is
            // re-queued with its original payload, so a plain property incremented here
            // would reset on every retry and the job would be released forever.
            if ($this->attempts() <= $this->maxRetries) {
                // Release the job back onto the queue and retry it after 90 seconds
                $this->release(90);
            } else {
                // Maximum retries reached: log it, send a notification, etc.
                \Log::error('Job failed after maximum retries: ' . $e->getMessage());
            }
        }
    }
}

Method 3

You can define separate queue connections for big and small jobs (even with the same driver), each with its own retry_after. I don't like mixing connections like this, but if you don't mind it, it works; see the sketch below.
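A minimal sketch of what that could look like, assuming a second connection called database-long (the connection name, queue name and numbers are made up for illustration):

// config/queue.php – extra connection just for long jobs
'database-long' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'long',
    'retry_after' => 6100, // bigger than the longest job timeout
    'after_commit' => false,
],

// dispatch the long job onto that connection
MyLongJob::dispatch()->onConnection('database-long');

// and run a dedicated worker for it:
// php artisan queue:work database-long --timeout=6000

This way the small jobs keep the short retry_after, and the long job is never handed out a second time while it is still running.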