# Queue & Scheduler Setup

## Overview
UnoPim uses Laravel's queue system for background processing of imports, exports, completeness calculations, and other asynchronous tasks. The scheduler handles periodic tasks like cache cleanup and scheduled exports.
This guide covers setting up Supervisor for queue workers and cron for the scheduler on bare-metal installations.
## Docker Users
If you are running UnoPim with Docker, queue workers and the scheduler are configured inside the container. You do not need to follow this guide. See the Docker installation guide for container-specific queue configuration.
## Queue Driver Configuration
UnoPim supports multiple queue drivers. Configure the driver in your .env file.
### Redis (Recommended)
Redis is the recommended queue driver for production. It provides fast, reliable message queuing with minimal overhead.
```env
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=null
```

Make sure Redis is installed and running:

```bash
redis-cli ping   # Should return PONG
```

### Database
The database driver stores jobs in a database table. It requires no additional infrastructure but is slower than Redis.
```env
QUEUE_CONNECTION=database
```

Run the migration to create the jobs table (if not already present):

```bash
php artisan queue:table
php artisan migrate
```

### Sync (Development Only)
The sync driver processes jobs immediately during the request. This is useful for development and debugging but should never be used in production.
```env
QUEUE_CONNECTION=sync
```

> **WARNING**
>
> The sync driver blocks the HTTP request until the job finishes. Long-running jobs like imports and exports will cause timeouts. Always use `redis` or `database` in production.
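After changing `QUEUE_CONNECTION`, the new value only takes effect once any cached configuration is cleared. A quick way to confirm which driver the application actually resolves (assuming the install path used throughout this guide):

```bash
cd /var/www/unopim

# Drop any cached config so the new .env value is picked up
php artisan config:clear

# Print the queue driver the application resolves at runtime
php artisan tinker --execute="echo config('queue.default');"
```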
## Queue Worker with Supervisor
Supervisor is a process control system that keeps queue workers running and restarts them if they fail.
### Install Supervisor
**Ubuntu / Debian:**

```bash
sudo apt install -y supervisor
```

**CentOS / RHEL:**

```bash
sudo dnf install -y supervisor
sudo systemctl enable supervisord
sudo systemctl start supervisord
```

### Supervisor Configuration
**Ubuntu / Debian** - create `/etc/supervisor/conf.d/unopim-worker.conf`:

```bash
sudo nano /etc/supervisor/conf.d/unopim-worker.conf
```

```ini
[program:unopim-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/unopim/artisan queue:work redis --queue=system,completeness,default --tries=3 --timeout=90 --max-jobs=1000 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/log/supervisor/unopim-worker.log
stopwaitsecs=3600
```

**CentOS / RHEL** - create `/etc/supervisord.d/unopim-worker.ini`:
```bash
sudo nano /etc/supervisord.d/unopim-worker.ini
```

```ini
[program:unopim-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/unopim/artisan queue:work redis --queue=system,completeness,default --tries=3 --timeout=90 --max-jobs=1000 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=nginx
numprocs=2
redirect_stderr=true
stdout_logfile=/var/log/supervisor/unopim-worker.log
stopwaitsecs=3600
```

### Configuration Options Explained
| Option | Value | Description |
|---|---|---|
| `--queue` | `system,completeness,default` | Queue names in priority order |
| `--tries` | `3` | Maximum number of attempts before a job is marked as failed |
| `--timeout` | `90` | Maximum seconds a job can run before being killed |
| `--max-jobs` | `1000` | Restart the worker after processing 1000 jobs (prevents memory leaks) |
| `--max-time` | `3600` | Restart the worker after running for 1 hour |
| `numprocs` | `2` | Number of parallel worker processes |
| `stopasgroup` | `true` | Stop all child processes when the worker is stopped |
| `killasgroup` | `true` | Kill all child processes when the worker is killed |
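Before handing these flags to Supervisor, it can be worth running the worker once in the foreground with the same options to confirm that the connection and queue names resolve; `--once` processes a single job and exits (same path and flags as the config above):

```bash
cd /var/www/unopim

# Process one job in the foreground with verbose output, then exit
php artisan queue:work redis --queue=system,completeness,default --once -vv
```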
### Start the Workers
```bash
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start unopim-worker:*
```

### Common Supervisor Commands
```bash
# Check worker status
sudo supervisorctl status

# Restart workers (after code deployment)
sudo supervisorctl restart unopim-worker:*

# Stop workers
sudo supervisorctl stop unopim-worker:*

# View worker logs
sudo tail -f /var/log/supervisor/unopim-worker.log
```

> **WARNING**
>
> After deploying new code, always restart queue workers so they pick up the latest changes:
>
> ```bash
> sudo supervisorctl restart unopim-worker:*
> ```

## Cron Scheduler Setup
The Laravel scheduler runs periodic tasks every minute. UnoPim uses it for cache cleanup, scheduled exports, and other maintenance tasks.
### Add the Cron Entry
**Ubuntu / Debian** (user: `www-data`):

```bash
sudo crontab -u www-data -e
```

**CentOS / RHEL** (user: `nginx`):

```bash
sudo crontab -u nginx -e
```

Add the following line:

```bash
* * * * * cd /var/www/unopim && php artisan schedule:run >> /dev/null 2>&1
```

### Verify the Cron Entry
```bash
# Ubuntu / Debian
sudo crontab -u www-data -l

# CentOS / RHEL
sudo crontab -u nginx -l
```

### View Scheduled Tasks
To see which tasks are scheduled:
```bash
cd /var/www/unopim
php artisan schedule:list
```

## Monitoring Failed Jobs
When a queued job fails after exhausting all retry attempts, it is stored in the `wk_failed_jobs` table.
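The table can also be inspected directly with SQL; a quick look at the most recent failures (the database name and user below are placeholders; adjust to your installation's credentials):

```bash
# Show the five most recent failures (database/user names are examples)
mysql -u unopim -p unopim \
  -e "SELECT id, queue, failed_at FROM wk_failed_jobs ORDER BY failed_at DESC LIMIT 5;"
```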
### List Failed Jobs
```bash
cd /var/www/unopim
php artisan queue:failed
```

### Retry a Failed Job
```bash
# Retry a specific job by ID
php artisan queue:retry <job-id>

# Retry all failed jobs
php artisan queue:retry all
```

### Delete Failed Jobs
```bash
# Delete a specific failed job
php artisan queue:forget <job-id>

# Delete all failed jobs
php artisan queue:flush
```

### Monitor Queue Size
Check how many jobs are pending in each queue:
```bash
# With Redis driver
redis-cli LLEN queues:system
redis-cli LLEN queues:completeness
redis-cli LLEN queues:default
```

## Troubleshooting
### Workers not processing jobs
Check that the queue driver in `.env` matches the connection name passed to `queue:work` in the Supervisor `command` (the first argument, `redis` in the config above):

```bash
grep QUEUE_CONNECTION /var/www/unopim/.env
```

Verify Supervisor is running:

```bash
sudo systemctl status supervisor    # Ubuntu/Debian
sudo systemctl status supervisord   # CentOS/RHEL
```

Check worker logs:

```bash
sudo tail -f /var/log/supervisor/unopim-worker.log
```
### Jobs failing immediately
Check the Laravel log for errors:

```bash
tail -f /var/www/unopim/storage/logs/laravel.log
```

List failed jobs and inspect the error message:

```bash
php artisan queue:failed
```
### Workers consuming too much memory
Reduce the `--max-jobs` value, or add `--memory=128` so the worker restarts itself once it exceeds 128 MB:

```ini
command=php /var/www/unopim/artisan queue:work redis --queue=system,completeness,default --tries=3 --timeout=90 --max-jobs=500 --max-time=3600 --memory=128
```

### Scheduler not running
Verify the cron entry exists:
```bash
sudo crontab -u www-data -l   # Ubuntu/Debian
sudo crontab -u nginx -l      # CentOS/RHEL
```

Check that the cron service is running:

```bash
sudo systemctl status cron    # Ubuntu/Debian
sudo systemctl status crond   # CentOS/RHEL
```

Test the scheduler manually:

```bash
cd /var/www/unopim
php artisan schedule:run
```
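If the manual run works but tasks still never fire, check whether cron is actually invoking the entry; log locations vary by distribution:

```bash
# Ubuntu / Debian - cron logs to syslog
grep CRON /var/log/syslog | tail -n 20

# CentOS / RHEL
sudo tail -n 20 /var/log/cron
```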
### Redis connection refused
Verify Redis is running:
```bash
sudo systemctl status redis-server   # Ubuntu/Debian
sudo systemctl status redis          # CentOS/RHEL
```

Test the connection:

```bash
redis-cli ping
```

Check that the Redis settings in `.env` match the actual Redis configuration.
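If Redis requires a password, a bare `redis-cli ping` returns a NOAUTH error; pass the same credentials `.env` uses (the values below are placeholders):

```bash
# Use the host, port, and password from .env (placeholder values shown)
redis-cli -h 127.0.0.1 -p 6379 -a 'your-redis-password' ping
```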