
## Revision as of 14:48, 7 August 2018


## Lessons Learned from Quantitative Dynamical Modeling in Systems Biology

Raue A, Schilling M, Bachmann J, Matteson A, Schelke M, et al. (2013) Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. PLOS ONE 8(9): e74335. https://doi.org/10.1371/journal.pone.0074335

### Summary

This paper considers the modelling of intracellular interaction networks with ordinary differential equation (ODE) models. Several aspects of robust and efficient estimation of model parameters were investigated.

### Study outcomes

In this paper, the following approaches were compared:

- Outcome O1: Fitting ODE models in a **parallel implementation** reduced computation time.
- Outcome O2: The bias of parameter estimation was smaller if **error parameters are estimated simultaneously** instead of estimating measurement errors in a preprocessing step by averaging over replicates.
- Outcome O3: **Stochastic optimization** algorithms exhibited weak performance compared to deterministic optimization methods.
- Outcome O4: Derivatives calculated via **sensitivities** were superior to finite differences.
- Outcome O5: **Reparametrization** of the model equations improved the performance for one model.
- Outcome O6: A **hybrid optimization** method combining deterministic and stochastic optimization exhibited intermediate performance (better than pure stochastic, worse than pure deterministic) but required the largest number of function evaluations.

The paper discusses further aspects which are outside the benchmarking scope.

### Study design and evidence level

#### Application settings

Three models were investigated:

- A toy model was used to obtain study outcome O2
- The so-called Becker model REF with 16 parameters and 85 experimental data points was used to derive study outcomes O3, O4 and O5.
- The so-called Bachmann model REF with 115 parameters and 541 experimental data points was used to derive study outcomes O1, O3, O4 and O5.

#### Evidence of the outcomes

The following aspects should be considered when assessing the level of evidence.

##### Outcome O1

- Four different parallelization levels were compared (1 vs. 2 cores, 1 vs. 4, 1 vs. 8, and 1 vs. 16)
- The outcome was generated for a single model (Becker model)
- The outcome was generated for 1000 randomly drawn parameter settings
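The parallelization idea can be sketched in a few lines: objective evaluations for independent parameter settings do not depend on each other, so they can be distributed over cores. The following is a minimal Python illustration with a toy objective, not the implementation benchmarked in the paper:

```python
import multiprocessing as mp

import numpy as np

def objective(theta):
    # Toy stand-in for the expensive step in ODE fitting: simulate the
    # model for parameters theta and compare the result to measured data.
    a, b = theta
    t = np.linspace(0.0, 1.0, 50)
    sim = a * np.exp(-b * t)      # hypothetical "model simulation"
    data = np.exp(-t)             # hypothetical "measurements"
    return float(np.sum((sim - data) ** 2))

def evaluate_serial(thetas):
    return [objective(th) for th in thetas]

def evaluate_parallel(thetas, n_cores=4):
    # The evaluations are independent, so they can be distributed over
    # cores; the achievable speed-up is bounded by n_cores plus overhead.
    with mp.Pool(n_cores) as pool:
        return pool.map(objective, thetas)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thetas = [rng.uniform(0.5, 1.5, size=2) for _ in range(100)]
    assert np.allclose(evaluate_serial(thetas), evaluate_parallel(thetas))
```

For real ODE models the speed-up is sub-linear in the number of cores, since process start-up and result collection add overhead per evaluation.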

##### Outcome O2

- A toy model was used to obtain this outcome
- 200 parameter estimation runs for 100 different simulated data sets were evaluated
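Simultaneous estimation of error parameters means the noise level enters the likelihood as a free parameter, so the objective is a negative log-likelihood rather than a plain sum of squares. A minimal sketch with a hypothetical one-parameter decay model (not the toy model from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, y):
    # Both the model parameter k and the error parameter sigma are free;
    # they are estimated simultaneously instead of fixing sigma beforehand.
    k, log_sigma = params
    sigma = np.exp(log_sigma)          # keeps sigma positive
    residuals = y - np.exp(-k * t)     # hypothetical exponential-decay model
    return 0.5 * np.sum(residuals ** 2) / sigma ** 2 + y.size * np.log(sigma)

# Simulated data with known truth: k = 0.7, sigma = 0.05
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-0.7 * t) + rng.normal(0.0, 0.05, size=t.size)

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0], args=(t, y))
k_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

Because sigma is fitted jointly with k, uncertainty in the noise level propagates into the parameter estimate instead of being frozen in by a preprocessing step.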

##### Outcome O3

- Two application models (Becker and Bachmann) were used for this outcome
- Untuned, standard configuration parameters were used for stochastic optimization
- 100 optimization runs with different randomly drawn initial guesses were evaluated
- Computational speed was evaluated in terms of the number of function evaluations
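The deterministic counterpart used in this comparison is multi-start local optimization, which can be sketched as follows; the objective and start distribution are toy stand-ins, not the Becker or Bachmann setup:

```python
import numpy as np
from scipy.optimize import minimize

def cost(theta):
    # Toy multi-modal objective standing in for an ODE fitting cost;
    # the sin term creates local minima, as nonlinear models often do.
    x, y = theta
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + np.sin(3.0 * x) ** 2

def multistart(n_starts=100, seed=0):
    # Deterministic local optimization repeated from randomly drawn
    # initial guesses; the total number of cost evaluations (nfev)
    # is the kind of cost measure reported above.
    rng = np.random.default_rng(seed)
    best, total_nfev = None, 0
    for _ in range(n_starts):
        res = minimize(cost, rng.uniform(-5.0, 5.0, size=2))
        total_nfev += res.nfev
        if best is None or res.fun < best.fun:
            best = res
    return best, total_nfev
```

Counting function evaluations rather than wall-clock time makes the comparison independent of hardware and implementation details.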

##### Outcome O4

- Two application models (Becker and Bachmann) were used for this outcome
- The observed performance benefit could be explained by illustrating the non-smooth behaviour of finite-difference derivatives when a parameter is varied, and by showing a dependency on the finite-difference step size. Neither issue occurred when the sensitivity equations were solved.
- 100 optimization runs with different randomly drawn initial guesses were evaluated
- Computational speed was evaluated in terms of the number of function evaluations
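The sensitivity approach can be illustrated on a one-parameter decay model dx/dt = -k*x: the derivative s = dx/dk obeys its own ODE, ds/dt = -k*s - x, which is integrated alongside the state and thus inherits the solver's accuracy, avoiding the step-size dependence of finite differences. A minimal sketch (toy model, not the paper's application models):

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z, k):
    # z = [x, s] with s = dx/dk.  The sensitivity equation is
    # ds/dt = d(-k*x)/dx * s + d(-k*x)/dk = -k*s - x.
    x, s = z
    return [-k * x, -k * s - x]

def sensitivity_forward(k, t_end=2.0):
    # Integrate state and sensitivity together (the "sensitivities" approach).
    sol = solve_ivp(augmented_rhs, (0.0, t_end), [1.0, 0.0], args=(k,),
                    rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

def sensitivity_fd(k, t_end=2.0, h=1e-5):
    # Central finite difference: two extra ODE solves, and the accuracy
    # depends on both the step size h and the solver tolerances.
    def x_end(kk):
        sol = solve_ivp(lambda t, x: [-kk * x[0]], (0.0, t_end), [1.0],
                        rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]
    return (x_end(k + h) - x_end(k - h)) / (2.0 * h)

# Analytic check: x(t) = exp(-k*t), so dx/dk at t_end is -t_end * exp(-k*t_end).
```

With loose solver tolerances the finite-difference estimate degrades or even changes non-smoothly with h, which is exactly the effect illustrated in the paper.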

##### Outcome O5

- Two application models (Becker and Bachmann) were used for this outcome. A performance benefit was only visible for the Bachmann model.
- 100 optimization runs with different randomly drawn initial guesses were evaluated
- Computational speed was evaluated in terms of the number of function evaluations
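One common form of reparametrization is to optimize rate constants on a logarithmic scale, which evens out parameter magnitudes and enforces positivity. The following is a minimal sketch with a hypothetical two-exponential model; the paper's reparametrizations are model-specific:

```python
import numpy as np
from scipy.optimize import minimize

def cost(rates, t, y):
    # Cost in the original parameterization; the rates differ by orders
    # of magnitude, which makes the problem badly scaled.
    k1, k2 = rates
    sim = np.exp(-k1 * t) + np.exp(-k2 * t)
    return np.sum((sim - y) ** 2)

def cost_log(log10_rates, t, y):
    # Reparametrized cost: optimizing log10(rates) balances parameter
    # scales and keeps the rates positive by construction.
    return cost(10.0 ** np.asarray(log10_rates), t, y)

# Hypothetical two-exponential "data" with rates 0.01 and 5
t = np.linspace(0.0, 10.0, 100)
y = np.exp(-0.01 * t) + np.exp(-5.0 * t)

fit = minimize(cost_log, x0=[0.0, 0.0], args=(t, y))   # start at rates (1, 1)
k1_hat, k2_hat = 10.0 ** fit.x
```

That the benefit appeared only for the Bachmann model is consistent with this intuition: scaling problems grow with the number of parameters and the spread of their magnitudes.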

##### Outcome O6

- The hybrid algorithm was evaluated with default configuration parameters
- 100 optimization runs with different randomly drawn initial guesses were evaluated
- Computational speed was evaluated in terms of the number of function evaluations
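The two-stage structure of a hybrid method can be sketched as a stochastic global search whose best point seeds a deterministic local refinement; the combined function-evaluation count of both stages is the cost measure used above. This uses a toy objective and is not the specific hybrid algorithm evaluated in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def cost(theta):
    # Rugged toy objective with several local minima near (1, 1).
    x, y = theta
    return (x - 1.0) ** 2 + (y - 1.0) ** 2 + 0.3 * np.sin(5.0 * x) ** 2

def hybrid_fit(bounds, seed=0):
    # Stage 1: stochastic global search (coarse, many function evaluations).
    coarse = differential_evolution(cost, bounds, seed=seed, maxiter=50,
                                    polish=False)
    # Stage 2: deterministic local refinement started from the best
    # point found by the stochastic stage.
    refined = minimize(cost, coarse.x)
    return refined, coarse.nfev + refined.nfev

best, n_evals = hybrid_fit([(-5.0, 5.0), (-5.0, 5.0)])
```

The stochastic stage dominates the evaluation count, which matches the outcome reported above: intermediate solution quality, but the largest number of function evaluations.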

### Further References

V. Becker, M. Schilling, J. Bachmann, U. Baumann, A. Raue, T. Maiwald, J. Timmer, U. Klingmueller. Covering a broad dynamic range: Information processing at the erythropoietin receptor. Science 328, 2010, 1404-1408

J. Bachmann, A. Raue, M. Schilling, M. Böhm, A.C. Pfeifer, C. Kreutz, D. Kaschek, H. Busch, N. Gretz, W.D. Lehmann, J. Timmer, U. Klingmueller. Division of labor by dual feedback regulators controls JAK2/STAT5 signaling over broad ligand range. Mol. Syst. Biol. 7, 2011, 516