The parallel tempering algorithm is applied to optimize the knapsack contents. Don't know what parallel tempering is? Read this.

This example reconsiders the 0/1 knapsack problem from example 2A using a penalizing instead of a mandatory constraint. Please take a look at that example before proceeding.

A penalizing constraint is more flexible than a mandatory constraint. It does not cause solutions that violate the constraint to be discarded, but assigns a penalty to such a solution's evaluation. Usually, this penalty reflects the severity of the violation, so that solutions closer to satisfying the constraint are favoured. Solutions within the constraint are not penalized. This approach has the advantage that a search may cross regions of the search space containing solutions that do not satisfy the constraint. However, a penalizing constraint is more difficult to design than a mandatory one: the penalties should be chosen carefully, as there is a tradeoff between a solution's value and its penalty. There is no absolute guarantee that the best found solution will actually satisfy the constraint, which may or may not be desired.

Here we will penalize solutions that exceed the knapsack capacity. From the selection in a subset solution we can infer the minimum number of items that have to be removed to satisfy the maximum capacity, by sorting the selected IDs on the weight of the corresponding items (descending) and greedily removing the heaviest items until the capacity is no longer exceeded. This number is multiplied by the highest profit of all available items, increased by one, to obtain the penalty: penalty = minRemove * (highestProfitPerItem + 1).
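As a small library-independent sketch (the class and method names here are hypothetical, not part of the actual example code), the two quantities can be computed from the selected weights as follows:

```java
import java.util.Arrays;

// hypothetical standalone sketch of the penalty definition above
public class PenaltySketch {

    // minimum number of items to remove so that the total weight fits the capacity
    static int minRemove(double[] selectedWeights, double maxWeight) {
        double[] sorted = selectedWeights.clone();
        Arrays.sort(sorted); // ascending; we iterate from the back to get the heaviest first
        double total = Arrays.stream(sorted).sum();
        int removed = 0;
        // greedily drop the heaviest remaining item until the capacity is satisfied
        for (int i = sorted.length - 1; i >= 0 && total > maxWeight; i--) {
            total -= sorted[i];
            removed++;
        }
        return removed;
    }

    // penalty = minRemove * (highestProfitPerItem + 1)
    static double penalty(int minRemove, double highestProfitPerItem) {
        return minRemove * (highestProfitPerItem + 1);
    }

    public static void main(String[] args) {
        double[] weights = {5.0, 3.0, 2.0};
        int r = minRemove(weights, 6.0);      // total 10 > 6: removing the item of weight 5 suffices
        System.out.println(r);                // prints 1
        System.out.println(penalty(r, 10.0)); // prints 11.0
    }
}
```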

From our penalty definition, it follows that any solution outside the constraint can be transformed into a better solution (with a higher penalized evaluation) by removing e.g. the item with the highest weight. Repeating this action eventually always yields a solution within the constraint. Thus, it is not possible that an invalid solution is a local optimum for a neighbourhood that considers single deletion moves, such as the single perturbation neighbourhood applied below. This ensures that the best found solution will satisfy the constraint, which would not necessarily be the case if the penalty had been poorly designed.
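To see why removing the heaviest item always helps, consider a tiny numerical check (with hypothetical values): the penalized evaluation is the total profit minus the penalty. Removing one item lowers the profit by at most highestProfitPerItem, but reduces minRemove by at least one and hence the penalty by highestProfitPerItem + 1, so the penalized evaluation strictly increases.

```java
// hypothetical numerical check of the argument above
public class PenaltyImprovementCheck {

    // penalized evaluation = total profit - minRemove * (highestProfit + 1)
    static double penalizedEval(double totalProfit, int minRemove, double highestProfit) {
        return totalProfit - minRemove * (highestProfit + 1);
    }

    public static void main(String[] args) {
        double highestProfit = 10.0;
        // invalid solution: total profit 25, one item too many (minRemove = 1)
        double before = penalizedEval(25.0, 1, highestProfit);
        // after removing the heaviest item (worst case: it had the highest profit, 10)
        double after = penalizedEval(25.0 - 10.0, 0, highestProfit);
        System.out.println(before); // prints 14.0
        System.out.println(after);  // prints 15.0: strictly better, even in the worst case
    }
}
```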

A penalizing constraint must implement the PenalizingConstraint interface, which extends the Constraint interface. It has the same two methods for full (required) and delta (optional) validation, but demands that the returned validation objects implement the PenalizingValidation interface. The latter extends the Validation interface with a method getPenalty() in addition to the method passed().

The implementation below defines a full validation only, corresponding to the definition from above. The solution and data type are set to SubsetSolution and KnapsackData respectively. The constraint produces a SimplePenalizingValidation object that merely wraps a boolean value (like a SimpleValidation) indicating whether the constraint is satisfied, and a double value for the assigned penalty. If the constraint is satisfied, the penalty does not need to be specified as it will be set to 0 anyway.


The SimplePenalizingValidation class provides a constant PASSED that can be used to indicate that the solution passed validation and no penalty is assigned. A corresponding static factory method FAILED(penalty) can be used when the solution does not satisfy the constraint and the given penalty is to be assigned.


The implementation below requires that IDs are ordered based on the weight of the corresponding items (descending). If not, results of validation are arbitrary. It is explained later how to impose this (or any other) order on the IDs in a subset solution.
public class PenalizingKnapsackConstraint implements PenalizingConstraint<SubsetSolution, KnapsackData> {

    // maximum total weight
    private double maxWeight;
    // highest profit (of all items in the data set)
    private double highestProfitPerItem;

    public PenalizingKnapsackConstraint(double maxWeight, double highestProfitPerItem){
        this.maxWeight = maxWeight;
        this.highestProfitPerItem = highestProfitPerItem;
    }

    // WARNING: only works if IDs are ordered based on weight of corresponding items (descending)
    @Override
    public PenalizingValidation validate(SubsetSolution solution, KnapsackData data) {

        // step 1: compute total weight of selected items
        double curWeight = solution.getSelectedIDs().stream().mapToDouble(data::getWeight).sum();

        // step 2: compute minimum number of items to remove from selection to satisfy constraint
        int minRemove = 0;
        Iterator<Integer> it = solution.getSelectedIDs().iterator();
        // continue until capacity is not exceeded
        while(it.hasNext() && curWeight > maxWeight){
            // subtract weight of next most heavy item to be removed
            curWeight -= data.getWeight(it.next());
            // update counter
            minRemove++;
        }

        // step 3: produce penalizing validation
        if(minRemove > 0){
            // assign penalty (capacity exceeded)
            double penalty = minRemove * (highestProfitPerItem + 1);
            return SimplePenalizingValidation.FAILED(penalty);
        } else {
            // constraint satisfied
            return SimplePenalizingValidation.PASSED;
        }
    }
}


It is of course possible to also provide a more efficient delta validation, e.g. by dynamically updating the total weight of the selection as in example 2A. This is not discussed here.
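The core of such a delta validation can be sketched independently of the library (all names below are hypothetical): instead of recomputing the sum over all selected items, cache the current total weight and adjust it for the items a move would add or remove.

```java
// hypothetical sketch of the incremental weight update behind a delta validation
public class DeltaWeightSketch {

    // update the cached total weight for a move that adds and removes the given item weights
    static double updatedWeight(double curWeight, double[] addedWeights, double[] removedWeights) {
        for (double w : addedWeights) {
            curWeight += w;
        }
        for (double w : removedWeights) {
            curWeight -= w;
        }
        return curWeight;
    }

    public static void main(String[] args) {
        // current total weight 8; a swap move removes an item of weight 5 and adds one of weight 2
        System.out.println(updatedWeight(8.0, new double[]{2.0}, new double[]{5.0})); // prints 5.0
    }
}
```

In the actual JAMES implementation this update would be performed inside the delta validation method, based on the evaluated move and the previous validation object; see example 2A for the analogous technique applied to the objective.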

As in example 2A we first create the problem from the objective and data, and then add the constraint, now using addPenalizingConstraint(c). As indicated above, the constraint implementation requires that IDs are ordered based on the weight of the corresponding items (descending). This order can be imposed by specifying an appropriate comparator when creating the subset problem. All generated subset solutions will then impose this order on the set of selected, unselected and all IDs.

// set weights, profits and capacity
double[] weights = ...
double[] profits = ...
double capacity = ...

// create data object
KnapsackData data = new KnapsackData(weights, profits);
// create objective
KnapsackObjective obj = new KnapsackObjective();
// create penalizing constraint
double highestProfitPerItem = computeHighestProfit(data);
PenalizingKnapsackConstraint constraint = new PenalizingKnapsackConstraint(capacity, highestProfitPerItem);

// create subset problem (all sizes allowed)
// IMPORTANT: IDs are ordered based on the weight of the corresponding items
//            (descending), as required by the applied penalizing constraint
SubsetProblem<KnapsackData> problem = new SubsetProblem<>(
                                            data, obj,
                                            0, data.getIDs().size(),
                                            Comparator.comparing(data::getWeight).reversed()
                                      );
// add penalizing constraint
problem.addPenalizingConstraint(constraint);
The highest profit of all available items, as required to initialize the constraint, is easily computed.

double computeHighestProfit(KnapsackData data){
    return data.getIDs().stream().mapToDouble(data::getProfit).max().getAsDouble();
}

We can now apply a parallel tempering search as in example 2A. It should give results similar to those obtained with the mandatory constraint. Note that it is now not required to provide an initial solution that satisfies the constraint. Because the penalties have been carefully chosen, the search will automatically find its way to a valid solution.

// set temperature range, scaled according to average profit of knapsack items
double scale = computeAverageProfit(data);
double minTemp = scale * 0.001;
double maxTemp = scale * 0.1;
// create parallel tempering with single perturbation neighbourhood (10 replicas)
int numReplicas = 10;
ParallelTempering<SubsetSolution> search = new ParallelTempering<>(problem,
                                                            new SinglePerturbationNeighbourhood(),
                                                            numReplicas, minTemp, maxTemp);

// set maximum runtime
long timeLimit = ...
search.addStopCriterion(new MaxRuntime(timeLimit, TimeUnit.SECONDS));
// attach listener (see example 1A)
search.addSearchListener(new ProgressSearchListener());
// NOTE: not required to set an initial solution within the constraint (<-> mandatory constraint)

// start search
search.start();

// print results
if(search.getBestSolution() != null){
    System.out.println("Items in knapsack: " + search.getBestSolution().getNumSelectedIDs() + "/" + data.getIDs().size());
    System.out.println("Total profit: " + search.getBestSolutionEvaluation());
    System.out.println("Total weight: " + computeSelectionWeight(search.getBestSolution(), data) + "/" + capacity);
    System.out.println("Constraint satisfied: " + (problem.getViolatedConstraints(search.getBestSolution()).isEmpty() ? "yes" : "no"));
} else {
    System.out.println("No valid solution found...");
}

// dispose search
search.dispose();

The complete source code of this example is available on GitHub, including some additional code to read the input from a text file. You can also download a ZIP file that contains the Java sources of all examples, a compiled JAR (including all dependencies) and some input files, in case you want to run any of the examples. To run this example, execute

$ java -cp james-examples.jar org.jamesframework.examples.knapsack2.KnapSack2 <inputfile> <capacity> <runtime>

from the command line. The input file should be a text file in which the first row contains a single number N that indicates the number of available knapsack items. The subsequent N rows each contain the profit and weight (in this order) of a single item, separated by one or more spaces. The runtime is given in seconds.
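A reader for this format can be sketched as follows (the class and field names are hypothetical; the actual example code on GitHub uses its own reader):

```java
import java.io.Reader;
import java.util.Scanner;

// hypothetical parser for the input format described above:
// first line holds the number of items N; the next N lines each
// hold the profit and weight (in this order) of a single item
public class KnapsackFileFormat {

    // parsed profits and weights, indexed by item
    final double[] profits, weights;

    KnapsackFileFormat(Reader in) {
        try (Scanner sc = new Scanner(in)) {
            // parse tokens explicitly to avoid locale-dependent number parsing
            int n = Integer.parseInt(sc.next());
            profits = new double[n];
            weights = new double[n];
            for (int i = 0; i < n; i++) {
                profits[i] = Double.parseDouble(sc.next()); // profit comes first
                weights[i] = Double.parseDouble(sc.next()); // then weight
            }
        }
    }

    public static void main(String[] args) {
        String text = "2\n10.0 3.5\n7.0 1.2\n";
        KnapsackFileFormat f = new KnapsackFileFormat(new java.io.StringReader(text));
        System.out.println(f.profits[1]); // prints 7.0
        System.out.println(f.weights[0]); // prints 3.5
    }
}
```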