We define a general Pareto product operator *Par on scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme A *Par B correctly performs Pareto optimization over the same search space. We show that a Pareto-eager implementation of dynamic programming can achieve the same asymptotics as a single-objective optimization which computes the same number of results. For a concrete application in RNA structure prediction, we show that the empirical size of the Pareto front remains within reasonable bounds. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective.
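To make the notion of a Pareto front concrete, here is a minimal sketch (not the paper's *Par operator or its Pareto-eager algorithm) of the underlying selection step: given candidate results scored under two objectives A and B, keep exactly those pairs not dominated by any other candidate. The function name and the quadratic filtering strategy are illustrative assumptions, not taken from the source.

```python
def pareto_front(candidates):
    """Return the (a, b) score pairs not dominated by any other candidate.

    A pair (a2, b2) dominates (a1, b1) when a2 >= a1 and b2 >= b1,
    with the two pairs not identical (both objectives are maximized).
    Illustrative O(n^2) filter; a Pareto-eager dynamic program would
    maintain such fronts incrementally instead.
    """
    front = []
    for a1, b1 in candidates:
        dominated = any(
            a2 >= a1 and b2 >= b1 and (a2, b2) != (a1, b1)
            for a2, b2 in candidates
        )
        if not dominated:
            front.append((a1, b1))
    return front

# (3, 1) and (1, 3) each win on one objective; (2, 2) is a trade-off;
# (1, 1) is dominated by (2, 2) and drops out.
print(pareto_front([(3, 1), (1, 3), (2, 2), (1, 1)]))
# → [(3, 1), (1, 3), (2, 2)]
```

For two objectives, sorting by the first score and sweeping for the running maximum of the second gives the same front in O(n log n), which is relevant to the asymptotic claim above.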
The proceedings of CMSR'14 are distributed under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Copyright for the articles is retained by their authors, who allow anyone to download, reuse, reprint, distribute, and/or copy articles for any non-commercial purpose, provided that the original authors and source are appropriately cited.