
General Questions about SCIP 
SCIP is a constraint integer program solver. It can be used as a framework for branch-cut-and-price
and contains all necessary plugins to serve as a standalone solver for MIP, MINLP, and PBO.
This FAQ contains separate sections covering each of these usages of SCIP. It further considers specific questions for some features.

SCIP is the right choice if you are either looking for a fast non-commercial MIP solver or for a
branch-cut-and-price framework in which you can directly implement your own methods
— which can even serve more general purposes than MIP.

I heard something about licenses. Do I have to pay for using SCIP?
As long as you use it for academic, non-commercial purposes: No.
This will not change. For the other cases, check the explanation of the
ZIB academic license and always feel free to ask us.
If you want to use SCIP commercially, please write an email to koch@zib.de.

How do I use SCIP?
An easy way is to use the SCIP binaries and call SCIP from a shell; see here for a tutorial.
For that, you just have to download one of the precompiled binaries from the
download section, or get the zipped source code and compile it
with your favorite settings. This is described in detail in the INSTALL file in the SCIP main directory.
Another way is to use SCIP as a solver integrated into your own program source code.
See the directories "examples/MIPsolver/" and "examples/Queens/"
for simple examples.
A third way is to implement your own plugins into SCIP.
This is explained in the HowTos for all plugin types, which you can find in the
doxygen documentation.
See also How to start a new project.


I have installation problems. What can I do?
Read the INSTALL file in the SCIP root directory. It contains hints on how to get around problems. You can also try
the binaries available on the SCIP page.
If you want to use SoPlex as the underlying LP solver, you can try the
following:
First, download the SCIP Optimization Suite.
Then, extract the file, change into the scipoptsuite directory, and enter 'make'.
As long as you have all the necessary libraries installed in your system, this should generate a SCIP
binary linked to ZIMPL and SoPlex. The necessary system libraries are:
- ZLIB (libz.a)
- GMP (libgmp.a)
- Readline (libreadline.a)
If you do not have all of these libraries, read the INSTALL file
in the SCIP Optimization Suite directory.
In short, the call make ZIMPL=false ZLIB=false READLINE=false should work on most systems.
If you have any problems while using an LP solver different from SoPlex,
please read the SCIP INSTALL file first.
If you encounter compilation problems related to the explicit keyword, you can either try a newer
compiler or set the flags LEGACY=true for SoPlex and SPX_LEGACY=true for SCIP.

I changed to a new version of SCIP and now compiling breaks with some error messages which I don't understand.
Do you have a general hint on that?
Maybe the parameters of a function in SCIP changed. Relevant changes between versions are listed below.

How can I debug my own code or SCIP?
Compile SCIP in debug mode: make OPT=dbg. Run the binary in a debugger, e.g., gdb,
and let it run again. If you get an impression of which component is causing the trouble, put #define SCIP_DEBUG as the first
line of the corresponding *.c file, recompile, and let it run again. This will print debug messages from that piece of code.
You can find a short debugging tutorial here.

SCIP prints error messages, aborts, produces segmentation faults, or just behaves strangely. What should I do?
See above. Often, the asserts that show up in debug mode already help to clarify misunderstandings
and suggest fixes, if you were calling SCIP functions in an unexpected manner. If you, however, have the impression
that there is a bug in SCIP, send us a bug report
and include a log file and ideally a backtrace from a debugger.

I would like to check whether some functionality is implemented in SCIP.
How does the naming of the methods work? Where do I find the most common methods?
For an explanation of the naming see the
coding style guidelines.
The methods you might want to use should normally be found in scip.h or in pub_*.h,
see also here.
In the doxygen documentation of scip.h,
the methods are ordered by topics.

Can I use SCIP as a pure CP solver?
Yes. SCIP can be used as a pure CP solver by typing
set emphasis cpsolver in the shell or by using the function SCIPsetEmphasis() .
Furthermore, you can compile SCIP without any LP solver by make LPS=none .
See here for more on changing the behavior of SCIP.

Can I use SCIP as a pure LP solver?
Since LPs are only special types of MIPs and CIPs, the principal answer is yes. If you feed a pure LP to SCIP, it
will first apply presolving and then hand this presolved problem to the underlying LP solver. If the LP is solved to
optimality, you can query the optimal solution values as always.
However, there are certain limitations to this: Dual multipliers and reduced costs are not accessible. If the LP
turns out to be infeasible, you cannot obtain the Farkas proof. Strong LP duality theory does
not apply to MIP or CIP and there is no special functionality implemented in SCIP for the case
that the solved problem is an LP.
Hence, if you need more "LP-specific" information than the primal solution, you are better off using an LP solver
directly. If you are using the SCIP Optimization Suite, you could, e.g., use the included LP solver
SoPlex. If you want to solve an LP not from the command line, but within your
C/C++ program, you could also use SCIP's LP interface (LPI), see also here.

What kinds of MINLPs are supported by SCIP?
SCIP supports nonlinear constraints of the form lhs ≤ f(x) ≤ rhs, where the function f(x) is an algebraic
expression that can be represented as an expression tree. Such an expression tree has constants and variables as terminal
nodes and operators as non-terminal nodes.
Expression operators supported by SCIP include addition, subtraction, multiplication, division, exponentiation, and
logarithm. Trigonometric functions are not yet supported by SCIP.
Nonlinear objective functions are not supported by SCIP and must be modeled via a constraint.
Note that the support for non-quadratic nonlinear constraints is still in a beta stage and not yet as
robust as the rest of SCIP.
Missing bounds on nonlinear variables and tiny or huge coefficients can easily lead to numerical problems,
which can be avoided by careful modeling.

How can I use the SCIP makefiles?

What is this business with .a and .so libraries in the directory lib/ ?
When SCIP builds the binary, it needs to link with the corresponding libraries of the LP solver. There are usually
two ways to distribute a library. In the first (.a) the library is linked statically to SCIP; this means
that all of its information is packed into the binary. In the second way (.so) the library is a shared
library. In this case, the code of the library is not inserted into the binary itself, but is loaded at
runtime. This has the advantage that the binaries are smaller, but it comes at the cost that you have to make
sure that the library is found at runtime (for most systems it suffices to put the path of the library into the
LD_LIBRARY_PATH variable).
Note: Depending on the system, either shared or static libraries
are the default. If both links are present in the lib directory, the linker chooses which version to take
(on newer Linux systems this is usually the shared version). If you do not want this to happen you have to delete
the link that is not intended.

Can I compile SCIP as a shared library?
You can use the SHARED=true option
when making SCIP. This will generate the libraries of SCIP in shared format. The binary then
also uses this form. Note that the path to the lib directory of SCIP is used to
locate the libraries. If you want to move the libraries, you might need to set the
LD_LIBRARY_PATH environment variable to include the new path.
If you are using your own build system: the "magic" changes are the -fPIC
compiler option and the -Wl,-rpath linker option.

The methods SCIPgetVarSol() and SCIPvarGetSol() seem to have the same
functionality. Which one should I use?
In fact, there is a slight difference: SCIPvarGetSol() is also able to return pseudo solution values.
If you do not know what pseudo solutions are, SCIPgetVarSol() should be just fine. This
should be the only case of 'duplicate methods'. If you find another one, however, please contact us.

Is there a way to visualize the branch and bound tree?
Yes, currently there are two tree visualizations supported: the vbctool
and the Branch-and-Bound Analysis Kit (BAK).
The first comes with a viewer that has an option to uncover the nodes one by one (each time you
hit the space key). Additional node information such as a node's lower bound, depth, and number is accessed through a
context menu. BAK is a command line tool written in the Python programming language. It offers several
non-interactive visualizations of the tree at a user-defined frequency.
To use one of these tools, SCIP lets you define file names via set visual vbcfilename somefilename.vbc and
set visual bakfilename somefilename.dat .
For those who want to use the step-by-step functionality of vbctool,
it is necessary to use a time-step counter for the visualization instead of the real time. The corresponding parameter
is changed via set visual realtime FALSE .
For users of the callable library, the corresponding parameters are called "visual/bakfilename", "visual/vbcfilename", and
"visual/realtime".
Using SCIP as a standalone solver 

What do the cryptic abbreviations for the columns mean which are displayed
during the solving process of SCIP?
Type display display in the interactive shell to get an explanation of them.
By the way: if a letter appears in front of a display row, it indicates
which heuristic found the new primal bound, with a star representing an integral LP relaxation.
Typing display statistics after finishing or interrupting the solving process gives
you plenty of extra information about the solving process.
(Typing display heuristics gives you a list of the heuristics including their letters.)

Why does SCIP claim that it could not find the user parameters "scip.set"? Where can I get such a file?
SCIP comes with default settings that are automatically active when you start the interactive shell.
However, you have the possibility to save customized settings via the set save and set diffsave commands.
Both commands will prompt you to enter a file name and save either all or customized parameters only to the specified file.
A user parameter file that you save as "scip.set" has a special meaning: whenever you invoke SCIP from a directory containing a file named "scip.set",
the settings therein overwrite the default settings. For more information about customized settings, see the
Tutorial on the interactive shell.
Settings files can become incompatible with later releases if we decide to rename/delete a parameter.
Information about this can be found in the CHANGELOG for every release, see also this related question.

How do I change the behavior of SCIP?
You can switch the settings for all presolving, heuristics, and separation plugins to three different modes
via the set {presolving, heuristics, separation} emphasis parameters in the interactive shell.
off turns off the respective type of plugins, fast chooses settings that lead to less time spent
in this type of plugins, decreasing their impact, and aggressive increases the impact
of this type of plugins.
You can combine these general settings for cuts, presolving, and heuristics arbitrarily.
display parameters shows you which settings currently differ from their default,
set default resets them all.
Furthermore, there are complete emphasis settings that can be set by set emphasis , e.g., settings for pure
feasibility problems, solution counting, and CP-like search.

How can I learn more about/from the presolve reasoning SCIP applies to my combinatorial optimization problem?
You can look at the statistics (type display statistics in the
interactive shell or call SCIPprintStatistics() when using SCIP as a library).
This way you can see which of the presolvers, propagators, or constraint
handlers performed the reductions.
Then, add a #define SCIP_DEBUG as the first line of the corresponding *.c file
in src/scip (e.g., cons_linear.c or presol_probing.c).
Recompile and run again. You will get heaps of information now.
Looking into the code and documentation of the corresponding plugin and resources from the literature
helps with the investigation.

I recognized that one special plugin works very poorly / very well for my problem and
I want to disable it / weaken its influence / intensify its influence. How do I do this?

To use a non-default branching rule or node selection strategy as the
standard one, you just have to give it the highest priority, using
SCIP> set branching <name of a branching rule> priority 9999999
SCIP> set nodeselectors <name of a node selector> priority 9999999
With the commands
SCIP> display branching
SCIP> display nodeselectors
you get a list of all branching rules and node selectors, respectively.
These lists give information about the different priorities.

If you want to completely disable a heuristic or a separator, you have
to set its frequency to -1; for separation by a constraint handler, set its sepafreq to -1,
respectively. The commands look like this:
SCIP> set heuristics <name of a heuristic> freq -1
SCIP> set separators <name of a separator> freq -1
SCIP> set constraints <name of a constraint handler> sepafreq -1

For disabling a presolver, you have to set its maxrounds parameter to 0.
SCIP> set presolvers <name of a presolver> maxrounds 0

If you want to intensify the usage of a heuristic,
you can reduce its frequency to some smaller, positive value, and/or raise the quotient and
offset values (maxlpiterquot for diving heuristics, nodes for LNS heuristics).
SCIP> set heuristics <name of a heuristic> freq <some value>
SCIP> set heuristics <name of a diving heuristic> maxlpiterquot <some value>
SCIP> set heuristics <name of an LNS heuristic> nodesquot <some value>

For intensifying the usage of a separator,
you can raise its maxroundsroot and maxsepacutsroot values.
SCIP> set separators <name of a separator> maxroundsroot <some value>
SCIP> set separators <name of a separator> maxrounds <some value>
If you also want to use this separator locally, you have to set its frequency to a positive value
and possibly raise maxrounds and maxsepacuts.
SCIP> set separators <name of a separator> freq <some value>
SCIP> set separators <name of a separator> maxsepacuts <some value>
Compare also the parameters of the heuristic/separator in the corresponding aggressive settings.

For weakening, you should just do the opposite operation, i.e.,
reducing the values you would raise for intensification and vice versa.

How can I use my own functions in the interactive shell/extend the set of available interactive shell commands?
If you want to keep the interactive shell functionality, you could add a
dialog handler that introduces a new SCIP shell command which either
- solves the problem and calls your function afterwards, or
- checks whether the stage is SOLVED and only then calls your function.
Search SCIPdialogExecOptimize in src/scip/dialog_default.c to see how
the functionality of the "optimize" command is invoked.
Also, in src/scip/cons_countsols.c, you can see an example of a dialog handler
being added to SCIP.
If this is the way you go, please check the How to add dialogs section
of the doxygen documentation.

Using SCIP included in another source code 
How do I construct a problem instance in SCIP?
First you have to create a SCIP object via SCIPcreate() , then you start to build the problem via
SCIPcreateProb() . Then you create variables via SCIPcreateVar() and add them to the
problem via SCIPaddVar() .
The same has to be done for the constraints. For example, if you want to fill in the rows of a general MIP, you
have to call SCIPcreateConsLinear() , SCIPaddCons() , and additionally
SCIPreleaseCons() after finishing. Once all variables and constraints are present, you can initiate
the solution process via SCIPsolve() .
Make sure to also call SCIPreleaseVar() if you do not need the variable pointer anymore. For an
explanation of creating and releasing objects, please see the doxygen documentation.

I already know a solution in advance, which I want to pass to SCIP.
How do I do this?
First you have to build your problem (at least all variables have to
exist), then there are two options:

If you have the solution in a file that fits the solution format of
SCIP, you can use
SCIPreadSol() to pass that
solution to SCIP.

Alternatively, you create a new SCIP primal solution candidate by
calling
SCIPcreateSol() and set all nonzero values by
calling SCIPsetSolVal() . After that, you add this
solution by calling SCIPaddSol() (the stored
SCIP_Bool should be TRUE afterwards if your solution was
added to the solution candidate store) and then release it by calling
SCIPfreeSol() . Instead of adding and releasing
sequentially, you can use SCIPaddSolFree() , which
tries to add the solution to the candidate store and
frees the solution afterwards.

What operational stages of SCIP are there and are they important for me?
There are fourteen different stages during a run of SCIP.
There are some methods which cannot be called in all stages, consider for example SCIPtrySol()
(see previous question).


Why do the names, e.g., in debug messages often differ from the ones I
defined?
This can have several reasons. Especially names of binary variables can get different prefixes and suffixes. Each
transformed variable and constraint (see here) gets a "t_" as prefix. Apart from
that, the meaning of original and transformed variables and constraints is identical.
General integers with bounds that differ just by 1 will be aggregated to binary variables which get the same name
with the suffix "_bin" . E.g., an integer variable t_x with lower bound 4 and upper bound 5
will be aggregated to a binary variable t_x_bin = t_x - 4 .
Variables can have negated counterparts, e.g., for a binary t_x its (also binary) negated variable would be
t_x_neg = 1 - t_x .
The knapsack constraint handler is able to disaggregate its constraints to cliques, which are set packing
constraints, and create names that consist of the knapsack's name and a suffix
"_clq_<int> ". E.g., a knapsack constraint
knap: x_1 + x2 +2 x_3 ≤ 2 could be disaggregated to the set packing
constraints knap_clq_1: x_1 + x_3 ≤ 1 and
knap_clq_2: x_2 + x_3 ≤ 1 .

What is SCIP_CALL()? Do I need this?
Yes, you do. SCIP_CALL() is a global define that handles the return codes of all methods
which return a SCIP_RETCODE and should therefore wrap each call to such a method.
SCIP_OKAY is the code which is returned if everything worked well;
there are 17 different error codes, see type_retcode.h.
Each method that calls methods which return a SCIP_RETCODE should itself return a SCIP_RETCODE.
If this is not possible, use SCIP_CALL_ABORT() to catch the return codes of the methods.
If you do not want to use this either, you have to do the exception handling
(i.e. the case that the return code is not SCIP_OKAY) on your own.

I want to stop the solving process after a certain time. How can I do this?
Limits are given by parameters in SCIP, for example limits/time for a time limit or
limits/nodes for a node limit.
If you want to set a limit, you have to change these parameters.
For example, for setting the time limit to one hour, you have to call
SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 3600) ) .
In the interactive shell, you just enter set limits time 3600 .
For more examples, please have a look into heur_rens.c.
Using SCIP as a Branch-Cut-and-Price Framework 
How do I start a project?

What types of plugins can I add and how do I do this?

When should I implement a constraint handler, when should I implement a separator?
This depends on whether you want to add constraints or only cutting planes. The main
difference is that constraints can be "model constraints", while cutting planes are only
additional LP rows that strengthen the LP relaxation.
A model constraint is a constraint that is important for the feasibility of the integral
solutions. If you delete a model constraint, some infeasible integral vectors would
suddenly become feasible in the reduced model.
A cutting plane is redundant w.r.t. integral solutions. The set of feasible integral
vectors does not change if a cutting plane is removed. You can, however, relax this
condition slightly and add cutting planes that do cut off feasible solutions, as long as
at least one of the optimal solutions remains feasible.
You want to use a constraint handler in the following cases:

- Some of your feasibility conditions cannot be expressed by existing constraint types
(e.g., linear constraints), or you would need too many of them. For example, the
"no-subtour" constraint in the TSP is equivalent to exponentially many linear constraints.
Therefore, it is better to implement a "no-subtour" constraint handler that can inspect
solutions for subtours and generate subtour elimination cuts and others (e.g., comb
inequalities) to strengthen the LP relaxation.
- Although you can express your feasibility condition by a reasonable number of existing
constraint types, you can represent and process the condition in a more efficient way. For
example, it may be that you can, due to your structural knowledge, implement a stronger or
faster domain propagation or find tighter cutting planes than what one could do with the
sum of the individual "simple" constraints that model the feasibility condition.
You want to use a cutting plane separator in the following cases:

- You have a general purpose cutting plane procedure that can be applied to any MIP. It
does not use problem specific knowledge. It only looks at the LP, the integrality
conditions, and other deduced information like the implication graph.
- You can describe your feasibility condition by a set C of constraints of existing type
(e.g., linear constraints). The cuts you want to separate are model specific, but apart
from these cuts, there is nothing you can gain by substituting the set C of constraints
with a special purpose constraint. For example, the preprocessing and the domain
propagation methods for the special purpose constraint would do basically the same as what
the existing constraint handler does with the set C of constraints. In this case, you
don't need to implement the more complex constraint handler. You add constraints of
existing type to your problem instance in order to produce a valid model, and you enrich
the model by your problem specific cutting plane separator to make the solving process
faster. You can easily evaluate the performance impact of your cutting planes by enabling
and disabling the separator.
Note that a constraint handler is defined by the type of constraints that it manages. For
constraint handlers, always think in terms of constraint programming. For example, the
"no-subtour" constraint handler in the TSP example
(see "ConshdlrSubtour.cpp" in the directory "scip/examples/TSP/src/")
manages "no-subtour" constraints, which demand
that a feasible solution does not contain a tour through only a subset of the
cities of a given graph. In the usual TSP problem, there is only one "no-subtour" constraint, because there
is only one graph for which subtours have to be ruled out.
The "no-subtour" constraint handler has various ways of enforcing the "no-subtour" property
of the solutions. A simple way is to just check each integral solution candidate (in the
CONSCHECK, CONSENFOLP, and CONSENFOPS callback methods) for subtours. If there is a subtour, the
solution is rejected. A more elaborate way includes the generation of "subtour elimination
cuts" in the CONSSEPALP callback method of the constraint handler. Additionally, the
constraint handler may want to separate other types of cutting planes like comb
inequalities in its CONSSEPALP callback.

Can I remove unnecessary display columns or—even better—add my own ones?
Can I change the statistics displayed at the end of solving?
Setting the status of a display column to 0 turns it off. E.g., type set display memused status 0 in the
interactive shell to disable the memory information column, or include the line SCIPsetIntParam(scip,
"display/memused/status", 0) into your source code. Adding your own display column can be done
by calling the SCIPincludeDisp() method, see the doxygen
documentation. The statistic display, which is shown by display statistics and
SCIPprintStatistics() , respectively, cannot be changed.

What do LP rows look like in SCIP?
Each row is of the form lhs ≤ Σ(val[j]·col[j]) + const
≤ rhs. For now, val[j]·col[j] can be interpreted as
a_{ij}·x_{j} (for the difference between columns and variables
see here). The constant is essentially needed for collecting the influence of presolving
reductions like variable fixings and aggregations. The lhs and rhs may take infinite values: a
less-than inequality would have lhs = -∞, and a greater-than inequality would have rhs =
+∞. For equations, lhs is equal to rhs. An infinite left hand side can be recognized by
SCIPisInfinity(scip, -lhs) , an infinite right hand side can be recognized by
SCIPisInfinity(scip, rhs) .

How do I get the data of the current LP relaxation?
You can get all rows in the current LP relaxation by calling SCIPgetLPRowsData() . The methods
SCIProwGetConstant() , SCIProwGetLhs() , SCIProwGetRhs() ,
SCIProwGetVals() , SCIProwGetNNonz() , SCIProwGetCols() then give you
information about each row, see the previous question.
You get a column-wise representation by
calling SCIPgetLPColsData() . The methods SCIPcolGetLb() and SCIPcolGetUb()
give you the locally valid bounds of a column in the LP relaxation of the current branch-and-bound node.
If you are interested in global information, you have to call SCIPcolGetVar() to get the variable
associated to a column (see next question), which you can ask for global bounds via
SCIPvarGetLbGlobal() and SCIPvarGetUbGlobal() as well as the type of the variable
(binary, general integer, implicit integer, or continuous) by calling SCIPvarGetType() . For more
information, also see this question.

What is the difference between columns and variables, rows and constraints?
The terms columns and rows always refer to the representation in the current LP relaxation, variables and
constraints to your global constraint integer program. Each column has an associated variable, which it
represents, but not every variable must be part of the current LP relaxation. E.g., it could already be fixed,
aggregated to another variable, or priced out if a column generation approach was implemented.
Each row has either been added to the LP by a constraint handler or by a cutting plane separator. A constraint handler is able
to, but does not need to, add one or more rows to the LP as a linear relaxation of each of its constraints. E.g.,
in the usual case (i.e. without using dynamic rows) the linear constraint handler adds one row to the LP for each
linear constraint.

Are the variables and rows sorted in any particular order?
The variable array which you get by SCIPgetVars() is internally sorted by variable types.
The ordering is binary, integer, implicit integer, and continuous variables, i.e., the binary variables are stored at
positions [0,...,nbinvars-1], the general integers at [nbinvars,...,nbinvars+nintvars-1], and so on. It holds that
nvars = nbinvars + nintvars + nimplvars + ncontvars. There is no further sorting within these sections, and
there is no sorting for the rows either. But each column and each row has a unique index, which can be obtained by
SCIPcolGetIndex() and SCIProwGetIndex() , respectively.

When should I use which of the numerical comparison functions?
There are various numerical comparison functions available, each of them using a different
epsilon in its comparisons. Let's take the equality comparison as an example. There are
the following methods available: SCIPisEQ(), SCIPisSumEQ(), SCIPisFeasEQ(), SCIPisRelEQ(),
SCIPisSumRelEQ() .

SCIPisEQ() should be used to compare two single values that are either results of a simple
calculation or are input data. The comparison is done w.r.t. the "numerics/epsilon" parameter, which is
1e-9 in the default settings.

SCIPisSumEQ() should be used to compare the results of two scalar products or other "long"
sums of values. In these sums, numerical inaccuracy can occur due to cancellation of digits in the addition of
values with opposite sign. Therefore, SCIPisSumEQ() uses a relaxed equality tolerance of
"numerics/sumepsilon", which is 1e-6 in the default settings.

SCIPisFeasEQ() should be used to check the feasibility of some result, for example after you have
calculated the activity of a constraint and compare it with the left and right hand sides. The feasibility is
checked w.r.t. the "numerics/feastol" parameter, and equality is defined in a relative fashion in
contrast to absolute differences. That means, two values are considered to be equal if their difference divided by
the larger of their absolute values is smaller than "numerics/feastol". This parameter is 1e-6 in the
default settings.

SCIPisRelEQ() can be used to check the relative difference between two values, just like what
SCIPisFeasEQ() is doing. In contrast to SCIPisFeasEQ() it uses
"numerics/epsilon" as tolerance.

SCIPisSumRelEQ() is the same as SCIPisRelEQ() but uses "numerics/sumepsilon"
as tolerance. It should be used to compare two results of scalar products or other "long" sums.

How do I solve an LP inside my SCIP plugin?
If the LP is only a slightly modified version of the LP relaxation (changed variable bounds or objective
coefficients), then you can use SCIP's diving mode: methods SCIPstartDive() ,
SCIPchgVarLbDive() , SCIPsolveDiveLP() , etc.
Alternatively, SCIP's probing mode allows for a tentative depth first search in the tree and can solve the LP
relaxations at each node: methods SCIPstartProbing() , SCIPnewProbingNode() ,
SCIPfixVarProbing() , etc. However, you cannot change objective coefficients or enlarge variable bounds
in probing mode.
If you need to solve a separate LP, creating a sub-SCIP is not recommended because of the overhead involved
and because dual information is not accessible (compare here). Instead you can use SCIP's LP
interface. For this you should include lpi/lpi.h and call the methods provided therein.
Note that the LPI can be used independently from SCIP.
Specific questions about Column Generation and Branch-and-Price with SCIP 
What can I expect when using SCIP as a branch-cut-and-price framework?
If you want to use SCIP as a branch-and-price framework, you normally need to implement a reader to read in your
problem data and build the problem, a pricer to generate new columns, and a branching rule to do the branching
(see also this question on how to store branching decisions, if needed).
SCIP takes care of everything else,
for example the branch-and-bound tree management and LP solving, including the storage of warm-start bases. Moreover,
many of SCIP's primal heuristics will be used and can help improve your primal bound.
However, this also comes with a few restrictions: You are not allowed to change the objective function coefficients
of variables during the solving process, because that means that previously computed dual bounds might have to be updated.
This prevents the use of dual variable stabilization techniques based on a (more or less strict) bounding box in the dual.
We are working on making this possible and recommend using a weighted-sum stabilization approach until then.
Something else SCIP does for you is the dynamic removal of columns from the LP due to aging (see also the next two questions).
However, due to the way simplex bases are stored in SCIP, columns can only be removed at the same node where they were created.

Why are not all variables in the LP?
With SCIPgetLPColsData() you can obtain the columns of the current LP relaxation. It is correct that
not all variables are necessarily part of the current LP relaxation. In particular, in branch-and-price the
variables generated at one node in the tree are not necessarily included in the LP relaxation of a different node
(e.g., if the other node is not a descendant of the first node). But even if you are still at the same node or at
a descendant node, SCIP can remove columns from the LP if they are 0 in the LP relaxation. This dynamic column
deletion can be avoided by setting the "removable" flag to FALSE in the SCIPcreateVar() call.
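A sketch of the corresponding SCIPcreateVar() call with the "removable" flag set to FALSE; the variable name, bounds, and objective coefficient are placeholders:

```c
/* sketch: create a variable whose column may not be dynamically
 * removed from the LP */
SCIP_VAR* var;

SCIP_CALL( SCIPcreateVar(scip, &var, "x",
      0.0, 1.0,                /* lower and upper bound */
      1.0,                     /* objective coefficient */
      SCIP_VARTYPE_BINARY,
      TRUE,                    /* initial: column is in the initial LP */
      FALSE,                   /* removable: FALSE prevents deletion by aging */
      NULL, NULL, NULL, NULL, NULL) );
SCIP_CALL( SCIPaddVar(scip, var) );
SCIP_CALL( SCIPreleaseVar(scip, &var) );
```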

I only implemented one pricer, why is there a second one, called variable pricer?
As described in the previous question, it may happen that some variables are not in the
current LP relaxation. Nevertheless, these variables still exist, and SCIP can calculate their reduced costs and
add them to the LP again, if necessary. This is the job of the variable pricer. It is called before all other
pricers.

How can I store branching decisions?
This is a very common problem in branch-and-price, which you can deal with nicely using SCIP.
There are basically three different options.
The first one is to add binary variables to the problem that encode branching decisions. Then constraints should
be added that enforce the corresponding branching decisions in the subtrees.
If you have complex pricer data like a graph and need to update it after each branching decision,
you should introduce "marker constraints" that are added to the branching nodes and store
all the information needed (see the next question).
The third way is to use an event handler, which is described here.

I want to store some information at the nodes and update my pricer's data structures
when entering a new node. How can I do that?
This can be done by creating a new constraint handler with constraint data that can store the information and
do/undo changes in the pricer's data structures.
Once you have such a constraint handler, just create constraints of this type and add them to the child nodes of
your branching by SCIPaddConsNode(). Make sure to set the "stickingatnode" flag to TRUE in order to
prevent SCIP from moving the constraint around in the tree.
In general, all methods of the constraint handler (check, enforcing, separation, ...) should be empty (which means
that they always return the status SCIP_FEASIBLE for the fundamental callbacks), just as if all constraints of this type
are always feasible. The important callbacks are the CONSACTIVE and CONSDEACTIVE methods for communicating the
constraints along the active path to your pricer, and the CONSDELETE callback for deleting data of constraints at nodes
which became obsolete.
The CONSACTIVE method is always called when a node is entered on which the constraint has been added. Here, you
need to apply the changes to your pricing data structures. The CONSDEACTIVE method will be called if the node is
left again. Since the CONSACTIVE and CONSDEACTIVE methods of different constraints are always called in a
stacklike fashion, this should be exactly what you need.
All data of a constraint need to be freed by implementing an appropriate CONSDELETE callback.
If you need to fix variables to enforce your branching decision, this can be done in the propagation callback
of the constraint handler. Since, in general, each node is propagated only once, you then have to check
in your CONSACTIVE method whether new variables were added after the last propagation of this node. If this is
the case, mark the node for repropagation with SCIPrepropagateNode().
You can look into the constraint handler of the coloring problem (examples/Coloring/src/cons_storeGraph.c) to get
an example of a constraint handler that does all these things.
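The two central callbacks might look roughly as follows. This is a sketch only: the handler name "samebranch" and the helpers applyBranchingDecision(), undoBranchingDecision(), and the consdata field npropagatedvars are hypothetical stand-ins for your own pricer bookkeeping:

```c
/* sketch of the ACTIVE/DEACTIVE callbacks of a "marker" constraint handler */
static
SCIP_DECL_CONSACTIVE(consActiveSamebranch)
{
   SCIP_CONSDATA* consdata = SCIPconsGetData(cons);

   /* apply the branching decision stored in the constraint data
    * to the pricer's data structures (hypothetical helper) */
   applyBranchingDecision(scip, consdata);

   /* if new variables were added since this node was last propagated,
    * mark the node for repropagation */
   if( consdata->npropagatedvars < SCIPgetNVars(scip) )
      SCIP_CALL( SCIPrepropagateNode(scip, SCIPgetCurrentNode(scip)) );

   return SCIP_OKAY;
}

static
SCIP_DECL_CONSDEACTIVE(consDeactiveSamebranch)
{
   /* undo the changes in the pricer's data structures
    * (hypothetical helper) */
   undoBranchingDecision(scip, SCIPconsGetData(cons));

   return SCIP_OKAY;
}
```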

How can an event handler help me with my branching?
An event handler can watch for events like local bound changes on variables. So, if your
pricer wants to be informed whenever a local bound of a certain variable changes, add an
event handler, catch the corresponding events of the variable, and in the event handler's
execution method adjust the data structures of your pricer accordingly.
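As a sketch, assuming an event handler "eventhdlr" has already been included and updatePricerBound() is a hypothetical helper of your pricer:

```c
/* sketch: execution callback reacting to local bound changes */
static
SCIP_DECL_EVENTEXEC(eventExecBoundchange)
{
   SCIP_VAR* var = SCIPeventGetVar(event);
   SCIP_Real newbound = SCIPeventGetNewbound(event);

   /* forward the change to the pricer's data structures
    * (hypothetical helper) */
   updatePricerBound(scip, var, newbound);

   return SCIP_OKAY;
}

/* during setup, e.g., after creating the variable, catch its
 * bound-change events: */
SCIP_CALL( SCIPcatchVarEvent(scip, var, SCIP_EVENTTYPE_BOUNDCHANGED,
      eventhdlr, NULL, NULL) );
```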

How can I add locally valid variables to the problem in my branch-and-price code?
Variables in SCIP are always added globally. If you want to add them locally, because they are forbidden
in another part of the branch-and-bound tree, you should ensure that they are locally fixed to 0
in all subtrees where they are not valid. A description of how this can be done is given here.

My pricer generates the same column twice. How can I solve this problem?
First check whether your pricing is correct. Are there upper bounds on variables that you have forgotten to take
into account? If your pricer cannot cope with variable bounds other than 0 and infinity, you have to mark
all constraints containing priced variables as modifiable, and you may have to disable reduced cost
strengthening by setting propagating/rootredcost/freq to -1.
If your pricer works correctly and makes sure that the same column is added at most once in one pricing round, this
behavior is probably caused by the PRICER_DELAY property of your pricer.
If it is set to FALSE, the following may have happened: The variable pricer (see this question)
found a variable with negative dual feasibility that was not part of the current LP relaxation and added it to the
LP. In the same pricing round, your own pricer found the same column and created a new variable for it. This might
happen, since your pricer uses the same dual values as the variable pricer. To avoid this behavior, set
PRICER_DELAY to TRUE, so that the LP is reoptimized after the variable pricer added variables to the LP. You can
find some more information about the PRICER_DELAY property at How to add variable pricers .

Which default plugins should be deactivated in order to get a working branch-and-price code?
In most cases you should deactivate separators since cutting planes that are added to your master problem may
destroy your pricing problem. Additionally, it may be necessary to deactivate some presolvers, mainly the dual
fixing presolver. This can be done by not including these plugins into SCIP, i.e., by not calling
SCIPincludeSepaXyz() and SCIPincludePresolXyz() in your own plugin-including
files. Alternatively, you can set the parameters maxrounds and maxroundsroot to zero for all separators and
maxrounds to zero for the presolvers.

What are the lazy bounds for variables in SCIP and what do I need them for?
In many branch-and-price applications, you have binary variables, but you do not want to impose
upper bounds on these variables in the LP relaxation, because the upper bound is already implicitly enforced
by the problem constraints and the objective. If the upper bounds are explicitly added to the LP,
they lead to further dual variables, which may be hard to take into account in the pricing problem.
There are two possibilities for how to solve this problem. First, you could change the binary variables
to general integer variables, if this does not change the problem. However, if you use special linear
constraints like set partitioning/packing/covering, you can only add binary variables to these constraints.
In order to still allow the usage of these types of constraints in a branch-and-price approach, the
concept of lazy bounds was introduced in SCIP 2.0.
For each variable, you can define lazy upper and lower bounds, i.e., bounds that are implicitly enforced by
constraints and the objective. SCIP adds variable bounds to the LP only if the bound is tighter than the
corresponding lazy bound.
Note that lazy bounds are explicitly put into and removed from the LP when starting and ending diving mode,
respectively. This is needed because changing the objective in diving might reverse the implicitly enforced
bounds.
For instance, if you have set partitioning constraints in your problem, you can define variables contained in these
constraints as binary and set the lazy upper bound to 1. This allows you to use the better propagation methods
of the setppc constraint handler compared to the linear constraint handler, without having to take care of upper
bounds on variables in the master.
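In code, declaring the upper bound of such a binary master variable as lazy is a single call (sketch; "var" is a variable created earlier):

```c
/* sketch: mark the upper bound x <= 1 as lazy, so it is not added
 * to the LP explicitly and creates no extra dual variable */
SCIP_CALL( SCIPchgVarUbLazy(scip, var, 1.0) );
```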

Can I stop the pricing process before the master problem is solved to optimality?
In a column generation approach, you usually have to solve the master problem to optimality; otherwise,
its objective function value is not a valid dual bound. However, there is a way in SCIP to
stop the pricing process earlier, called "early branching".
The reduced cost pricing method of a pricer has a result pointer that should be set each time the method is called.
In the usual case that the pricer either adds a new variable or ensures that there are no further
variables with negative dual feasibility, the result pointer should be set to SCIP_SUCCESS.
If the pricer aborts pricing without creating a new variable, but there might exist additional
variables with negative dual feasibility, the result pointer should be set to SCIP_DIDNOTRUN.
In this case, the LP solution will not be used as a lower bound.
Typically, early branching goes along with the computation of a Lagrangian bound in each pricing iteration.
The pricer should store this valid lower bound in the lowerbound pointer in order to update the lower
bound of the current node.
Since SCIP 3.1, it is even possible to state that pricing should be stopped early even though new variables were created
in the last pricing round. For this, the pricer has to set the stopearly pointer to TRUE.
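A sketch of the corresponding reduced cost callback; solvePricingProblem() and computeLagrangianBound() are hypothetical helpers of your pricer:

```c
/* sketch of a PRICERREDCOST callback using early branching */
static
SCIP_DECL_PRICERREDCOST(pricerRedcostMypricer)
{
   SCIP_Bool aborted;

   /* solve the pricing problem, possibly only approximately
    * (hypothetical helper) */
   SCIP_CALL( solvePricingProblem(scip, pricer, &aborted) );

   if( aborted )
   {
      /* pricing stopped early: the LP value is no valid dual bound */
      *result = SCIP_DIDNOTRUN;

      /* but a Lagrangian bound computed from the pricing data still is
       * (hypothetical helper) */
      *lowerbound = computeLagrangianBound(scip, pricer);
   }
   else
      *result = SCIP_SUCCESS;

   return SCIP_OKAY;
}
```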

How can I delete variables?
SCIP features the functionality to delete variables from the problem when performing branch-and-price.
This feature is still in a beta status and can be activated by switching the parameters
pricing/delvars and pricing/delvarsroot to TRUE in order to allow the deletion
of variables at all other nodes and at the root node, respectively.
Furthermore, variables have to be marked as deletable by SCIPvarMarkDeletable(), which has to be done
before adding the variable to the problem. Then, after a node of the branch-and-bound tree is processed,
SCIP automatically deletes variables from the problem that were created at the current node and whose corresponding
columns were already removed from the LP. Note that due to the way SCIP stores basis information, it is not possible to
completely delete a variable that was created at a node other than the current node.
You might want to change the parameters lp/colagelimit , lp/cleanupcols , and lp/cleanupcolsroot ,
which have an impact on when and how fast columns are removed from the LP.
Constraint handlers support a new callback function that deletes variables from constraints in which they were marked to be deleted.
Thus, when using automatic variable deletion, you should make sure that all used constraint handlers implement this callback.
Currently, the linear, the set partitioning/packing/covering, and the knapsack constraint handlers support this callback, which should
be sufficient for most branch-and-price applications. Note that set covering constraints can be used instead of logicor constraints.
Instead of deleting a variable completely, you can also remove it from the problem by fixing it to zero
using SCIPfixVar(), which fixes the variable globally, or by using SCIPchgVarUbNode() and
SCIPchgVarLbNode(), which change the bounds only for the current subtree.
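Put together, enabling the feature and marking a variable as deletable might look like this sketch (name, bounds, and objective of the variable are placeholders):

```c
/* sketch: enable automatic deletion of priced variables */
SCIP_CALL( SCIPsetBoolParam(scip, "pricing/delvars", TRUE) );
SCIP_CALL( SCIPsetBoolParam(scip, "pricing/delvarsroot", TRUE) );

/* create the variable with removable = TRUE and mark it deletable
 * BEFORE adding it to the problem */
SCIP_VAR* var;
SCIP_CALL( SCIPcreateVar(scip, &var, "x", 0.0, 1.0, 1.0,
      SCIP_VARTYPE_BINARY, FALSE, TRUE, NULL, NULL, NULL, NULL, NULL) );
SCIPvarMarkDeletable(var);
SCIP_CALL( SCIPaddVar(scip, var) );
SCIP_CALL( SCIPreleaseVar(scip, &var) );
```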

How do I branch on constraints?
Constraint-based branching is rather straightforward to implement in
SCIP. You have to add a new branching rule that uses the methods
SCIPcreateChild() and SCIPaddConsNode() in its branching callbacks.
A very good example for this is the Ryan/Foster branching rule that has
been implemented in the binpacking example from the examples section.
Sometimes it might be more appropriate to implement a constraint
handler instead of a branching rule. This is the case if, e.g., the added constraints alone
do NOT ensure integrality of the integer variables, or if you still want to use the available branching rules.
In that case, the actual branching happens in the ENFOLP callback of your constraint handler. The integrality constraint
handler calls the branching rules within its ENFOLP callback. Give your constraint handler a positive enforcement priority
so that your constraint branching is triggered before the integrality constraint handler performs its branching.
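A sketch of constraint-based branching in the style of the binpacking example's Ryan/Foster rule; createSameCons(), createDifferCons(), item1, and item2 are hypothetical placeholders for your own branching data:

```c
/* sketch: create two children and attach a branching constraint to each,
 * inside a BRANCHEXECLP or ENFOLP callback */
SCIP_NODE* childsame;
SCIP_NODE* childdiffer;
SCIP_CONS* conssame;
SCIP_CONS* consdiffer;

/* create the two child nodes with default priority and local estimate */
SCIP_CALL( SCIPcreateChild(scip, &childsame, 0.0, SCIPgetLocalTransEstimate(scip)) );
SCIP_CALL( SCIPcreateChild(scip, &childdiffer, 0.0, SCIPgetLocalTransEstimate(scip)) );

/* create the branching constraints (hypothetical helpers) and add them
 * to the respective nodes */
SCIP_CALL( createSameCons(scip, &conssame, item1, item2, childsame) );
SCIP_CALL( createDifferCons(scip, &consdiffer, item1, item2, childdiffer) );
SCIP_CALL( SCIPaddConsNode(scip, childsame, conssame, NULL) );
SCIP_CALL( SCIPaddConsNode(scip, childdiffer, consdiffer, NULL) );
SCIP_CALL( SCIPreleaseCons(scip, &conssame) );
SCIP_CALL( SCIPreleaseCons(scip, &consdiffer) );

*result = SCIP_BRANCHED;
```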
Specific questions about the copy functionality in SCIP 
The functionality of copying a SCIP model was added in SCIP version 2.0.0. It makes it possible to
generate a copy of the current SCIP model. This functionality is of interest, for example, in large
neighborhood search heuristics (such as heur_rens.c). They can now easily copy the complete problem and fix a certain
set of variables to work on a reasonable copy of the original problem.

How do I get a copy of a variable or a constraint?
For variables and constraints, there are the methods SCIPgetVarCopy() and
SCIPgetConsCopy(), which provide a copy of a variable or a constraint, respectively.

What does the valid pointer in the copy callback of the constraint handler and variable pricer mean?
SCIP would like to know whether the copied problem is a one-to-one copy, that is, whether all problem-defining objects
were successfully copied. If this is the case, all reductions made in the copy can be transferred to the
original instance. The problem-defining objects in SCIP are the constraint handlers and the variable pricers.
