
MiniZinc cumulatives: model staff schedules & breaks

Model office hours, lunch breaks and weekends in MiniZinc using cumulatives: time compression, per-timepoint capacities, and task splitting so long tasks can pause and resume.


How to model office hours, lunch breaks, and weekends in a MiniZinc staff scheduling problem using the cumulatives constraint?

I’m using MiniZinc to model a staff scheduling problem where tasks with given durations (in hours) are assigned to staff members. Currently, I use the cumulatives constraint to minimize makespan, with time measured in hourly granularity. However, I need to handle non-working periods correctly:

  • Work only during office hours (e.g., 9am-5pm)
  • No work on weekends
  • No work during lunch breaks or similar recurring breaks

Simulating these with fake activities keeps staff busy but is too restrictive, as it prevents tasks from spanning days. For example, in an 8-hour workday, a staff member should be able to complete a 4-hour task, work 4 hours on a 5-hour task, and resume the remaining hour the next morning.

What are standard modeling approaches in MiniZinc or constraint programming for handling recurring non-working times while allowing tasks to span multiple days?

For a MiniZinc staff schedule that uses the cumulatives constraint, there are three standard patterns: compress working hours into contiguous “working slots” and run cumulatives on that reduced timeline; discretize full wall‑clock time and enforce per‑timepoint capacity (set capacity=0 for lunch/weekends); or split each job into ordered sub‑tasks (daily or hourly chunks) and schedule those parts on working slots so a long task can pause and resume. Which you pick depends on whether tasks are preemptible and on problem size — compression + chunking usually gives the best mix of solver performance and the ability to resume work across days.


Overview: MiniZinc scheduling and the cumulatives constraint

The MiniZinc global constraints (cumulative / cumulatives) enforce that the sum of overlapping tasks’ resource “heights” never exceeds a fixed capacity, so they’re ideal when you want to limit how many staff can work at the same time; see the MiniZinc Handbook — Scheduling constraints for the exact library signatures. But cumulatives assumes a constant capacity over its timeline and doesn’t intrinsically model recurring non‑working periods or task preemption. To represent office hours, lunch breaks and weekends you therefore need an encoding around it that either (a) removes non‑working time from the timeline, (b) enforces capacity = 0 at specific wall‑clock points, or (c) breaks tasks into parts that are scheduled only during allowed slots — then keep using cumulatives (or a per‑timepoint sum) to enforce staff limits.


Approach 1 — Time‑compression (slot indexing) for staff schedule

What it is

  • Build a compressed timeline containing only working hours (e.g., 5 workdays of 9am–5pm give 5×8 = 40 slots).
  • Express durations in working‑hour units and schedule tasks against that compressed index.
  • Apply cumulative/cumulatives exactly as you do now — the non‑working hours are simply not part of the domain, so you never need special “forbid” rules for lunch or weekends.

Why this helps

  • Smaller horizon → fewer timepoints and often much faster solving.
  • You can use the solver’s optimized cumulative implementation directly.
  • Mapping back to wall clock is a simple arithmetic transform (day = slot div hours_per_day, hour = work_start + slot mod hours_per_day).
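
The mapping in the last bullet can be written directly as MiniZinc helper functions — a sketch assuming the same H_DAY / WORK_START parameters used in the code later in this answer:

```minizinc
int: H_DAY = 8;       % working hours per day (e.g. 9am-5pm)
int: WORK_START = 9;  % office opens at 9am

% map a compressed slot (0-based) back to a wall-clock day and hour
function int: slot_day(int: s)  = s div H_DAY;
function int: slot_hour(int: s) = WORK_START + (s mod H_DAY);
```

These are total functions over compile-time data, so you can use them both in constraints and in the output item to print a human-readable schedule.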

Limitations

  • Compression makes breaks invisible to the solver: a task that is contiguous in slots may be interrupted in wall‑clock time. That is exactly what you want for resumable work (a 5‑hour job starting 4 slots before the end of a day runs 4 hours today and 1 hour the next morning), but if a task must run continuously in wall‑clock time you need an extra constraint forcing it to fit inside one day. If you also want pauses at arbitrary points — not only at break boundaries — decompose the job into chunks (see Approach 3).
  • Durations must be given in working‑hour units (not raw wall‑clock hours), so you’ll either convert inputs or preprocess durations.

When to use

  • Use this when staff work only inside recurring windows and you’re happy to measure time in “working slots” (common for typical rostering).

Reference: MiniZinc scheduling primitives — lib-globals-scheduling.


Approach 2 — Time‑indexed capacity (per‑timepoint constraints)

What it is

  • Keep a full wall‑clock timeline (every hour in your planning horizon).
  • Build an array capacity[t] where capacity[t] = number_of_staff for working hours and capacity[t] = 0 for lunch/weekend slots.
  • For each time t enforce: sum_{tasks active at t} usage(task) ≤ capacity[t]. A common MiniZinc idiom uses bool2int(…) to test activity.

Why this helps

  • Direct and explicit: you can express arbitrary time‑dependent availability (different lunch times, half‑day closures, single holiday dates).
  • No special “cumulative” support for time‑varying capacities is required.

Limitations

  • If tasks are modelled as monolithic non‑preemptive activities, they cannot cross zero‑capacity times (they’d violate the constraint) — to allow pausing/resuming you must decompose tasks into units or chunks.
  • The number of indicator checks (one per task per timepoint, or per sub‑task per timepoint) grows quickly and can slow the model considerably.

When to use

  • Use this when you need flexible, irregular availability windows or holidays that don’t repeat cleanly; or when you want fine control of capacity per clock hour.

Practical pointer: a StackOverflow pattern for the per‑timepoint predicate is a good starting point — see an example implementation sketch here: https://stackoverflow.com/questions/56731839/is-there-a-minizinc-predicate-to-model-time-dependant-resource-bounds-like-cumu.


Approach 3 — Task‑splitting / preemption emulation (resume across days)

What it is

  • Model each original task of duration D as an ordered sequence of parts (chunks). Chunk sizes can be 1 hour or a larger “max chunk” (e.g., up to a workday).
  • Schedule parts only on working slots (use time‑compression or the time‑indexed timeline restricted to working hours).
  • Add precedence constraints so part_k finishes before part_{k+1} starts. Gaps are allowed, so a part can end before lunch and the next part can start after lunch or the next morning.

Why this helps

  • Emulates preemption while keeping each sub‑task non‑preemptive (solver‑friendly).
  • You can keep using cumulatives: treat every chunk as a separate task in the cumulatives call (heights usually = 1), and global capacity ensures staff limits.
  • You can also add a staff‑assignment variable and require all chunks of a job to be assigned to the same staff if per‑worker continuity is needed.

Design notes

  • Precompute the chunk counts and durations (e.g., ceil(D / max_chunk)); these are constants so the model is simpler.
  • Enforce ordering: start_part[i,p+1] >= start_part[i,p] + dur_part[i,p]. That allows gaps.
  • If you must ensure the same person performs all chunks, either:
      • model a separate cumulative per staff member (per‑staff resource id), or
      • add assignment variables and per‑staff no_overlap / disjunctive constraints across all chunks assigned to that staff.

When to use

  • Use this when tasks must be resumable across non‑working intervals. This is the standard CP trick: keep tasks as small atomic chunks, sequence them, and schedule chunks only during allowed hours.

Examples of this pattern appear in many rostering models (see hakank’s nurse rostering examples for availability/sequence patterns): https://github.com/hakank/hakank/blob/master/minizinc/nurse_rostering_with_availability.mzn.


Implementation notes and solver tips

  • Prefer compressing time when many hours are non‑working. A 24×D horizon is usually much larger than a compressed working‑slot horizon; smaller horizons speed up search. See MiniZinc “effective modelling” advice for performance tips: https://minizinc-doc-test.readthedocs.io/en/stable/efficient.html.
  • Use the solver’s cumulative/cumulatives primitive where possible — they’re usually implemented efficiently in solvers (Gecode/Chuffed) and beat hand‑rolled per‑timepoint sums once problem size grows. See the global constraint catalog for cumulative variants: https://sofdem.github.io/gccat/gccat/Ccumulative.html.
  • If you need time‑varying capacity (e.g., capacity 3 from 9–12, capacity 2 from 13–17), the common trick is either to split the horizon into segments with constant capacity or to use per‑timepoint sums. There’s no single built‑in cumulatives variant that takes an arbitrary array of capacities for every hour.
  • Solver specifics matter: some solvers expose richer scheduling primitives than others; consult solver docs and test encodings. The Gecode/MiniZinc integration has some historical differences — see discussion here: https://github.com/Gecode/gecode/issues/80.
  • Reduce symmetry: if you use chunk decomposition, break symmetry between identical chunks (indexing or ordering) and add reasonable bounds for start variables to prune search.
  • Start small: prototype with a 1–2 week horizon and a handful of tasks/staff to compare encodings (compression vs time‑indexed vs chunked). The fastest encoding for small tests is not always best at scale.

MiniZinc code sketches (three patterns)

Below are compact, annotated sketches — adapt names/domains to your data. These are conceptual but use standard MiniZinc idioms (bool2int, cumulative, ordering). Check the MiniZinc docs for exact signature variations: https://docs.minizinc.dev/en/stable/lib-globals-scheduling.html.

  1. Time‑compression + unit‑hour chunk decomposition (works well when tasks are resumable)
minizinc
% compress working hours: D days, WORK_START..WORK_END (e.g. 9..17)
int: D = 5;
int: WORK_START = 9;
int: WORK_END = 17;
int: H_DAY = WORK_END - WORK_START; % 8
int: SLOTS = D * H_DAY; % compressed slots

int: NUM_STAFF = 3;
int: N = 3; % number of jobs
array[1..N] of int: job_dur_wh = [4,5,2]; % durations in working-hours (integers)

% Decompose each job into unit-hour chunks
int: M = sum(job_dur_wh);
set of int: CHUNKS = 1..M;
array[CHUNKS] of var 0..SLOTS-1: start_chunk;
array[CHUNKS] of int: dur_chunk = [1 | c in CHUNKS];
array[CHUNKS] of int: height_chunk = [1 | c in CHUNKS];
% mapping job -> chunk indices (precompute)
array[1..N] of int: first_chunk = [1,5,10]; % example, compute via prefix sums in real model
array[1..N] of int: num_chunks = job_dur_wh;

% sequencing: each job's k-th chunk must finish before (or <=) next chunk starts
constraint
 forall(i in 1..N, k in 1..num_chunks[i]-1) (
 start_chunk[first_chunk[i]+k-1] + dur_chunk[first_chunk[i]+k-1]
 <= start_chunk[first_chunk[i]+k]
 );

% global staff capacity enforced with cumulative over compressed slots
constraint cumulative(start_chunk, dur_chunk, height_chunk, NUM_STAFF);

% objective example: minimize last finishing chunk
var 0..SLOTS: makespan = max([start_chunk[j] + dur_chunk[j] | j in CHUNKS]);
solve minimize makespan;

Notes: chunk indices and first_chunk/num_chunks are typically computed from prefix sums of job durations; above they’re placeholders to show the pattern.
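
Those prefix sums are a one-liner in MiniZinc; a sketch that replaces the placeholder arrays above:

```minizinc
% first_chunk[i] = 1 + total duration of all earlier jobs
array[1..N] of int: first_chunk =
  [ 1 + sum(j in 1..i-1)(job_dur_wh[j]) | i in 1..N ];
array[1..N] of int: num_chunks = job_dur_wh;
% for job_dur_wh = [4,5,2] this gives first_chunk = [1,5,10],
% matching the hand-written example above
```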

  2. Full wall‑clock time + per‑timepoint capacity (explicit capacities = 0 for breaks)
minizinc
int: DAYS = 7;
int: HORIZON = DAYS * 24;
int: WORK_START = 9;
int: WORK_END = 17;
int: NUM_STAFF = 3;

% NB: comprehensions are 1-based, so wrap in array1d to get the 0-based index set
array[0..HORIZON-1] of int: capacity =
 array1d(0..HORIZON-1,
  [ if (t mod 24) >= WORK_START /\ (t mod 24) < WORK_END /\ ((t div 24) mod 7) < 5
    then NUM_STAFF else 0 endif | t in 0..HORIZON-1 ]);

int: N = 3;
array[1..N] of int: dur = [4,5,2];
array[1..N] of var 0..HORIZON-1: start;

% If tasks are monolithic (non-preemptive), then the following loop enforces they only occupy working hours:
constraint forall(t in 0..HORIZON-1) (
 sum(i in 1..N)(bool2int(start[i] <= t /\ t < start[i] + dur[i])) <= capacity[t]
);

% If you want preemption/resume then decompose tasks into unit-hour chunks similar to pattern 1,
% and apply the same per-timepoint sum over all chunks instead of the monolithic tasks.
solve satisfy;
  3. Chunking into day‑sized pieces (max chunk = workday) + cumulatives on compressed timeline
  • Same idea as (1) but use max chunk = H_DAY so each chunk fits entirely inside a workday; fewer chunks than per-hour decomposition, lower variable count, still allows resume across days.
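
A sketch of the one extra constraint this variant needs, assuming the H_DAY and compressed-slot conventions from pattern 1: each chunk must start and finish inside a single workday.

```minizinc
% each chunk fits entirely inside one workday of the compressed timeline:
% its start offset within the day plus its duration must not exceed H_DAY
constraint forall(c in CHUNKS) (
  (start_chunk[c] mod H_DAY) + dur_chunk[c] <= H_DAY
);
```

With this in place a long job still resumes across days (via the ordering constraints between chunks), but no individual chunk silently straddles a night or weekend.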

Which approach to choose? (quick guide)

  • If tasks are allowed to be preempted and you want best solver performance: compress working hours and split each task into chunks (Approach 1 or 3), then use cumulative/cumulatives on the compressed timeline. This is usually the sweet spot.
  • If you need per‑hour, irregular, or one‑off closures (holidays, variable lunch times): time‑indexed capacity (Approach 2) is the most flexible but may be slower.
  • If tasks must be non‑preemptive and must respect uninterrupted wall‑clock continuity: you’ll need a mapping function from work‑time durations to wall‑clock start/end (complex) — often easier is to forbid starts that would force crossing a closure, or to redesign the process so tasks fit into allowed windows.
  • If you must guarantee the same staff performs all parts: add an assignment variable and per‑staff disjunctive/no_overlap constraints or use per‑staff cumulatives.
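
A sketch of that last bullet, reusing the chunk arrays from pattern 1 (the staff_of variable is new here): all chunks of a job get the same worker, and chunks given to one worker must not overlap in time.

```minizinc
array[CHUNKS] of var 1..NUM_STAFF: staff_of;

% continuity: consecutive chunks of job i are done by the same person
constraint forall(i in 1..N, k in 1..num_chunks[i]-1) (
  staff_of[first_chunk[i]+k-1] = staff_of[first_chunk[i]+k]
);

% per-staff disjunctiveness: chunks sharing a worker may not overlap
constraint forall(c1, c2 in CHUNKS where c1 < c2) (
  staff_of[c1] = staff_of[c2] ->
    (start_chunk[c1] + dur_chunk[c1] <= start_chunk[c2]
     \/ start_chunk[c2] + dur_chunk[c2] <= start_chunk[c1])
);
```

The pairwise implication is the simplest encoding; for larger instances, a per‑staff disjunctive/cumulative over optional tasks usually propagates better.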

Conclusion

To model office hours, lunch breaks and weekends in a MiniZinc staff schedule while still using the cumulatives constraint, pick the encoding that matches your requirements: compress the timeline and schedule in working‑hour slots (fast, use cumulatives directly), discretize wall‑clock and enforce capacity[t]=0 where staff can’t work (flexible but heavier), or split tasks into ordered chunks so long tasks can pause and resume across non‑working periods (enables preemption while letting you keep cumulatives). For typical staff scheduling where resumes across days are allowed, compression + chunking is the practical first choice — it keeps your model compact and keeps the cumulatives constraint doing what it does best. If you want, I can turn one of the sketches above into a ready‑to‑run MiniZinc model adapted to your exact data (durations, staff count, planning horizon, and whether all chunks must be performed by the same staff).
