A cornerstone of statistical inference, the maximum entropy framework is being increasingly applied to construct descriptive and predictive models of biological systems, especially complex biological networks, from large experimental data sets. Both its broad applicability and the success it has obtained in different contexts hinge upon its conceptual simplicity and mathematical soundness. Here we try to concisely review the basic elements of the maximum entropy principle, starting from the notion of 'entropy', and describe its usefulness for the analysis of biological systems. As examples, we focus specifically on the problem of reconstructing gene interaction networks from expression data and on recent work attempting to expand our system-level understanding of bacterial metabolism. Finally, we highlight some extensions and potential limitations of the maximum entropy approach, and point to more recent developments that are likely to play a key role in the upcoming challenges of extracting structures and information from increasingly rich, high-throughput biological data.

It is not unfair to say that the major drivers of biological discovery currently lie in increasingly accurate experimental techniques, which now allow systems to be probed effectively over scales ranging from the intracellular environment to single cells to multi-cellular populations, and in increasingly efficient bioinformatic tools, by which intracellular components and their putative interactions can be mapped at genome and metabolome resolution. Yet, at least to some degree, these approaches still appear hard to integrate into quantitative predictive models of cellular behaviour. Even if we possessed detailed information about all sub-cellular parts and processes (including intracellular machines, their interaction partners, regulatory pathways, mechanisms controlling the exchange with the medium, etc.), it would be hard to build a comprehensive mechanistic model of a cell, and possibly even harder to infer deep organization principles from it. In large part, this is because cells have an enormous number of degrees of freedom (e.g. protein levels, RNA levels, metabolite levels, reaction fluxes, etc.) which, collectively, can take on an intimidatingly large number of physico-chemically viable states. On the other hand, experiments necessarily probe only a tiny portion of these states. Therefore, understanding how all internal variables might coordinate so that certain 'macroscopic' quantities, like the cell's growth rate, behave as observed in experiments is quite possibly a hopeless task. In addition, such models would most likely require some tuning of the multitude of parameters that characterize intracellular affairs, rendering overfitting a very concrete prospect. The problem thus appears to be that of finding a reasonable 'middle ground' between full-fledged mechanistic approaches and qualitative phenomenological descriptions based on coarse-grained quantities only. At the same time, though, the deluge of data coming from both sides (experiments and bioinformatics) begs for the development of bridges connecting them, not just as descriptive frameworks and predictive tools but also as guides for novel targeted experiments and bioengineering applications.
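To make the core idea concrete, here is a minimal sketch (not from the review; the six states and the target mean are illustrative choices) of maximum entropy inference in its simplest form: among all distributions reproducing a single measured average, the one of maximum entropy belongs to an exponential family, and fitting it reduces to solving for one Lagrange multiplier.

```python
import numpy as np

# Hypothetical observable with six states (Jaynes' classic dice setup)
# and a measured mean we want the model to reproduce.
states = np.arange(1, 7)
target_mean = 4.5

def model_mean(lam):
    # Maximizing entropy subject to a fixed mean gives p_i ∝ exp(lam * i);
    # this returns the mean of that distribution for a given multiplier.
    w = np.exp(lam * states)
    return float(np.sum(states * w) / np.sum(w))

# model_mean is monotonically increasing in lam, so simple bisection
# finds the multiplier that matches the constraint.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if model_mean(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

p = np.exp(lam * states)
p /= p.sum()
print(np.round(p, 4))  # the maximum-entropy distribution
```

Note that the resulting distribution is not uniform: the constraint (mean above the midpoint 3.5) tilts the probabilities exponentially toward the higher states, and the procedure commits to nothing beyond what the data demand.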
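The network-reconstruction application mentioned above can also be sketched in a few lines (on synthetic data; the three "genes" and their couplings are invented for illustration): the maximum-entropy distribution consistent with measured means and pairwise covariances of expression levels is a multivariate Gaussian, and its inverse covariance (precision) matrix plays the role of the interaction network, since a zero entry means two genes are conditionally independent given all the others.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expression data" for a chain of three genes: B couples to
# A and to C, while A and C interact only indirectly, through B.
n_samples = 20000
a = rng.normal(size=n_samples)
b = 0.8 * a + rng.normal(size=n_samples)
c = 0.8 * b + rng.normal(size=n_samples)
X = np.column_stack([a, b, c])

# The max-entropy model matching means and covariances is Gaussian;
# its couplings are read off the precision matrix J = C^{-1}.
C = np.cov(X, rowvar=False)
J = np.linalg.inv(C)

print(np.round(J, 3))
```

The raw covariance between A and C is large (they are strongly correlated through B), but the off-diagonal precision entry J[0, 2] is close to zero: the maximum-entropy model disentangles direct couplings from indirect correlations, which is exactly what makes it useful for inferring interaction networks from expression data.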