Professor Michael Todinov
MSc, PhD, DEng/DSc
Professor in Mechanical Engineering
School of Engineering, Computing and Mathematics
Role
Michael Todinov conducts research and teaching in the areas of reliability, risk, probabilistic modelling, applications of algebraic inequalities, network optimisation, uncertainty quantification, mechanics of materials and engineering mathematics. He holds a PhD from the University of Birmingham related to mathematical modelling of thermal and residual stresses, and a higher doctorate, Doctor of Engineering (DEng), the engineering equivalent of the Doctor of Science (DSc). The higher doctorate was awarded for fundamental contributions in the area of new probabilistic concepts and models in engineering.
Areas of expertise
 Reliability and risk modelling, Uncertainty quantification
 General methods for improving reliability and reducing risk
 Mechanics of Materials, Material Science and Advanced Stress Analysis
 Computer Science, Algorithms, Discrete mathematics
 Nontrivial algebraic inequalities and their applications
 Applied probability, probabilistic modelling, Monte Carlo simulation techniques
 Modelling and simulation of heat and thermochemical treatment of materials
 Stochastic flow networks, repairable flow networks, static flow networks, networks with disturbed flows, reliability networks, stochastic graphs
 Mathematical optimisation and optimisation algorithms under uncertainty
 Advanced C/C++ programming
 MATLAB programming
Teaching and supervision
Modules taught
 Engineering Reliability and Risk Management
 Engineering Mathematics and Modelling
 Fracture Mechanics
 Advanced Stress Analysis
 MATLAB programming and modelling with MATLAB
Research
M. Todinov's name is associated with creating the method of algebraic inequalities for generating new knowledge in science and technology, which can be used for optimising systems and processes; the foundations of risk-based reliability analysis (driven by the cost of failure); the theory of repairable flow networks and networks with disturbed flows; and the introduction of new domain-independent methods for improving reliability and reducing risk. M. Todinov also created analytical methods for evaluating the risk associated with the overlapping of random events on a time interval.
A sample of M. Todinov's results includes: the discovery of closed and dominated parasitic flow loops in real networks; the proof that the Weibull distribution is an incorrect model for the distribution of the breaking strength of materials, and the derivation of the correct alternative to the Weibull model; a theorem regarding the exact upper bound of properties from random sampling of multiple sources; a general equation for the probability of failure of brittle components with complex shape; the formulation and proof of the necessary and sufficient conditions of the Palmgren-Miner rule and Scheil's additivity rule; the derivation of the correct alternative to the Johnson-Mehl-Avrami-Kolmogorov equation; the formulation of the dual network theorems for static flow networks and networks with disturbed flows; the discovery of the binomial expansion model for evaluating the risk associated with overlapping random events on a time interval; and the development of the methods of separation, segmentation, self-reinforcement (self-strengthening) and inversion as domain-independent methods for improving reliability and reducing risk.
M. Todinov's research has been funded by the automotive, nuclear, and oil and gas industries and by various research councils.
Research grants and awards
 Recipient of the prestigious IMechE award for risk reduction in Mechanical Engineering (IMechE, UK, 2017)
 Recipient of a best lecturer teaching award, as voted by students (Cranfield University, 2005)
 High-speed algorithms for the output flow in stochastic flow networks (2009-2013), research project funded by The Leverhulme Trust, UK.
 High-speed algorithms for the output flow in stochastic flow networks with tree topology (2007-2008), consultancy project funded by British Petroleum.
 Reliability Value Analysis for BP Taurt Development (2005-2006), consultancy project funded by Cooper Cameron.
 Reliability allocation in complex systems based on minimizing the total cost (2004-2007), research project funded by EPSRC.
 Modelling the probability of failure of mechanical components caused by defects (2003-2005), research project sponsored by British Petroleum.
 Developing the BP reliability strategy, generic models and software tools for reliability analysis and setting reliability requirements based on cost of failure and minimum failure-free operating periods (2002-2004), research project funded by British Petroleum.
 Modelling a single-channel AET production system versus a dual-channel AET system (2005), consultancy project sponsored by Total.
 Reliability case for an all-electric subsea control system (2004), consultancy project funded by BP and Total.
 Modelling the uncertainty associated with the ductile-to-brittle transition temperature of inhomogeneous welds (2002), research project funded by NII/HSE, UK.
 Developing efficient statistical models and software for determining the uncertainty in the location of the ductile-to-brittle transition region for multi-run welds (2001-2002), research project sponsored by the Nuclear Installations Inspectorate, HSE/NII, UK.
 Developing efficient statistical methods and software for fitting the variation of the impact energy in the ductile/brittle transition region for sparse data sets (1998-2000), research project sponsored by the Nuclear Installations Inspectorate, HSE/NII, UK.
 Statistical modelling of Brittle and Ductile Fracture in Steels (1998-2000), research project funded by EPSRC.
 Probabilistic Approach for Fatigue Design and Optimisation of Cast Aluminium Structures (1997-1998), research project funded by EPSRC.
 Modelling the temporal and residual stresses of SiMn automotive suspension springs (1994-1997), research project funded by EPSRC and DTI.
 Six research projects related to mathematical modelling of heat and mass transfer during heat treatment of steels and mathematical modelling of non-isothermal phase transformation kinetics during heat treatment of steels, funded by the Bulgarian Ministry of Science and Education in the period 1988-1994.
 Optimal guillotine cutting of one- and two-dimensional stock in batch production (1986-1987), research project funded by the Union of the Mathematicians, Bulgaria.
Research impact
 Creating the method of algebraic inequalities for generating new knowledge in science and technology
 Creating the foundations of the theory of repairable flow networks and networks with disturbed flows. High-speed algorithms for analysis, optimisation and control in real time of repairable flow networks.
 Discovering the existence of closed and dominated flow loops in real networks and developing algorithms for their removal.
 Developing new domain-independent methods for reliability improvement and risk reduction.
 Creating the foundations of risk-based reliability analysis – driven by the cost of system failure. Formulation of the principle of risk-based design.
 Creating the theoretical foundations of the maximum risk reduction attained within limited risk-reduction resources.
 Creating the theoretical foundations for evaluating the risk associated with overlapping random demands on a time interval.
 Introducing the concept of 'stochastic separation' and a new reliability measure based on stochastic separation.
 Introducing the method of 'stochastic pruning' and creating on its basis ultrafast algorithms for determining the production availability of complex networks.
 Formulation and proof of the upper bound variance theorem regarding the exact upper bound of properties from sampling multiple sources.
 Formulation and proof of the damage factorisation theorem – the necessary and sufficient condition for the validity of the Palmgren-Miner rule.
 An equation for the probability of fracture controlled by random flaws for components with complex shape.
 Theoretical and experimental proof that the Weibull distribution does not describe correctly the probability of failure of materials with flaws and a derivation of the correct alternative.
 A general equation related to reliability dependent on the relative configurations of random variables.
 Revealing the drawbacks of the maximum expected profit criterion in the case of risky prospects containing a limited number of risk-reward bets.
Publications
Journal articles

Todinov MT, 'A general class of algebraic inequalities for generating new knowledge and optimising the design of systems and processes'
Research in Engineering Design 33 (2022) pp.161-171
ISSN: 0934-9839 eISSN: 1435-6066
Abstract: A special class of general inequalities has been identified that provides the opportunity for generating new knowledge that can be used for optimising systems and processes in diverse areas of science and technology. It is demonstrated that inequalities belonging to this class can always be interpreted meaningfully if the variables and separate terms of the inequalities represent additive quantities. The meaningful interpretation of a new algebraic inequality based on the proposed general class of inequalities led to developing a lightweight design for a supporting structure based on cantilever beams, reducing the maximum force upon impact, generating new knowledge about the deflection of elastic elements connected in parallel and series, and optimising the allocation of resources to maximise expected benefit. The interpretation of the new inequality yielded that the deflection of elastic elements connected in parallel is at least n^2 times smaller than the deflection of the same elastic elements connected in series, irrespective of the individual stiffness values of the elastic elements. The interpretation of another algebraic inequality from the proposed general class led to a method for decreasing the stiffness of a mechanical assembly by cyclic permutation of the elastic elements building the assembly. The analysis showed that a decrease of stiffness exists only if asymmetry of the stiffness values in the connected elements is present.
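The n^2 deflection result stated in the abstract can be checked numerically. The following Python sketch (illustrative only, not taken from the paper; function names and test values are mine) uses the standard spring formulas: under a common force F, the deflection in series is F·Σ(1/k_i) and in parallel F/Σk_i, so their ratio is Σ(1/k_i)·Σk_i, which is at least n^2 by the Cauchy-Schwarz inequality.

```python
import random

def deflection_ratio(stiffnesses, force=1.0):
    # Deflection of n springs in series vs the same springs in parallel,
    # both loaded by the same force.
    d_series = force * sum(1.0 / k for k in stiffnesses)
    d_parallel = force / sum(stiffnesses)
    return d_series / d_parallel

random.seed(1)
for _ in range(1000):
    n = random.randint(2, 8)
    ks = [random.uniform(0.1, 100.0) for _ in range(n)]
    # The ratio is at least n^2, whatever the stiffness values are.
    assert deflection_ratio(ks) >= n * n

# Equal stiffnesses attain the minimum ratio n^2 (here 3^2 = 9).
print(deflection_ratio([5.0, 5.0, 5.0]))
```

Equality holds only when all stiffnesses are equal, which matches the abstract's remark that asymmetry of the stiffness values is what creates the effect.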
Todinov MT, 'Optimising processes and generating knowledge by interpreting a new algebraic inequality'
International Journal of Modelling, Identification and Control in press (2022)
ISSN: 1746-6172 eISSN: 1746-6180
Abstract: This paper focuses on optimising processes and generating knowledge based on interpreting a new algebraic inequality. An interpretation of the new inequality yielded a strategy for reducing the amount of pollutants released from an industrial process. An alternative interpretation of the same inequality established that the deflection of n elastic elements connected in series is at least n^2 times larger than the deflection of the same elements connected in parallel, irrespective of the individual stiffness values of the elements. In addition, an alternative interpretation of the new inequality yielded a counterintuitive result concerning improving the chances of picking a winning lottery ticket. Finally, the paper introduces a method for improving reliability by increasing the level of balancing and novel interpretations of algebraic inequalities related to this method. This is done by assessing the probability of selecting items of the same variety and determining the lower and upper bounds of this probability.
Todinov M, 'Generation of new knowledge and optimisation of systems and processes through meaningful interpretation of algebraic inequalities'
International Journal of Mathematical Modelling and Numerical Optimisation 11 (4) (2021) pp.428-449
ISSN: 2040-3607 eISSN: 2040-3615
Abstract: The paper introduces a method for increasing the impact of additive quantities by meaningful interpretation of multivariate subadditive and superadditive functions. The paper demonstrates that the segmentation of additive quantities through subadditive and superadditive functions can be used to generate new knowledge and optimise systems and processes, and the presented algebraic inequalities are applicable to any area of science and technology. The meaningful interpretation of the modified Cauchy-Schwarz inequality led to a method for increasing the power output from a voltage source and to a method for increasing the capacity for absorbing strain energy of loaded mechanical components. It was found that the existence of asymmetry is essential to increasing the strain-energy-absorbing capacity and the power output. Loaded elements experiencing the same displacement do not yield an increase of the absorbed strain energy. Similarly, loaded resistances experiencing the same current do not yield an increase of the power output. Finally, the meaningful interpretation of an algebraic inequality in terms of potential energy resulted in a general necessary condition for minimising the sum of powers of distances to a fixed number of points in space.

Todinov M, 'Meaningful interpretation of algebraic inequalities to achieve uncertainty and risk reduction'
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability Online first (2021)
ISSN: 1748-006X eISSN: 1748-0078
Abstract: The paper develops an important method related to using algebraic inequalities for uncertainty and risk reduction and enhancing systems performance. The method consists of creating relevant meaning for the variables and different parts of the inequalities and linking them with real physical systems or processes. The paper shows that inequalities based on multivariable subadditive functions can be interpreted meaningfully and the generated new knowledge used for optimising systems and processes in diverse areas of science and technology. In this respect, an interpretation of the Bergstrom inequality, which is based on a subadditive function, has been used to increase the accumulated strain energy in components loaded in tension and bending. The paper also presents an interpretation of Chebyshev's sum inequality that can be used to avoid the risk of overestimation of returns from investments, and an interpretation of a new algebraic inequality that can be used to construct the most reliable series-parallel system. The meaningful interpretation of other algebraic inequalities yielded a highly counterintuitive result related to assigning devices of different types to missions composed of identical tasks. In the case where the probabilities of a successful accomplishment of a task, characterising the devices, are unknown, the best strategy for a successful accomplishment of the mission consists of selecting randomly an arrangement including devices of the same type. This strategy is always correct, irrespective of existing unknown interdependencies among the probabilities of successful accomplishment of the tasks characterising the devices.
Todinov M, 'Optimised design of systems and processes using algebraic inequalities'
Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science In press (2021)
ISSN: 0954-4062 eISSN: 2041-2983
Abstract: A method for optimising the design of systems and processes has been introduced that consists of interpreting the left- and the right-hand side of a correct algebraic inequality as the outputs of two alternative design configurations delivering the same required function. In this way, on the basis of an algebraic inequality, the superiority of one of the configurations is established. The proposed method opens wide opportunities for enhancing the performance of systems and processes and is very useful for design in general. The method has been demonstrated on systems and processes from diverse application domains. The meaningful interpretation of an algebraic inequality based on a single-variable subadditive function led to developing a lightweight design for a supporting structure based on cantilever beams. The interpretation of a new algebraic inequality based on a multivariable subadditive function led to a method for increasing the kinetic-energy-absorbing capacity during inelastic impact. The interpretation of a new inequality has been used for maximising the mass of deposited substance during electrolysis and for generating new knowledge about the deflection of elastic elements connected in parallel and series.
Michael Todinov, 'Reducing uncertainty and obtaining superior performance by segmentation based on algebraic inequalities'
International Journal of Reliability and Safety 14 (2/3) (2020) pp.103-115
ISSN: 1479-389X eISSN: 1479-3903
Abstract: The paper demonstrates for the first time uncertainty reduction and attaining superior performance through segmentation based on algebraic inequalities. Meaningful interpretation of algebraic inequalities has been used for generating new knowledge in unrelated application domains. Thus, the method of segmentation through an abstract inequality led to a new theorem related to electrical circuits: the power output from a source with a particular voltage, on elements connected in series, is smaller than the total power output from the segmented sources applied to the individual elements. Segmentation attained through the same abstract inequality led to another new theorem related to electrical capacitors: the energy stored by a charge of given size on a single capacitor is smaller than the total energy stored in multiple capacitors with the same equivalent capacity, by segmenting the initial charge over the separate capacitors. Finally, inequalities based on subadditive and superadditive functions have been introduced for reducing uncertainty and obtaining superior performance by a segmentation or aggregation of controlling factors. By a meaningful interpretation of subadditive and superadditive inequalities, superior performance has been achieved for processes described by a power-law dependence.
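The circuit theorem summarised in the abstract admits a quick numeric check. The sketch below (Python, illustrative and not from the paper; function names are mine) assumes only the textbook formula P = V^2/R: a source of voltage V across n resistors in series delivers V^2/Σr_i, while splitting the source into parts V_1 + … + V_n = V, each applied to its own resistor, delivers Σ(V_i^2/r_i), which is never smaller (the Engel form of the Cauchy-Schwarz inequality).

```python
import random

def power_whole(v, resistances):
    # One source of voltage v across all resistors in series: P = v^2 / sum(r).
    return v * v / sum(resistances)

def power_segmented(voltages, resistances):
    # Segmented sources, one per resistor: P = sum(v_i^2 / r_i).
    return sum(vi * vi / ri for vi, ri in zip(voltages, resistances))

random.seed(2)
for _ in range(1000):
    n = random.randint(2, 6)
    rs = [random.uniform(0.5, 50.0) for _ in range(n)]
    vs = [random.uniform(0.1, 10.0) for _ in range(n)]
    # The segmented sources always deliver at least the power of the whole
    # source of the same total voltage (small tolerance for rounding).
    assert power_segmented(vs, rs) >= power_whole(sum(vs), rs) - 1e-12
```

Equality occurs only when the voltage split is proportional to the resistances, another instance of the role of asymmetry noted elsewhere on this page.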
Michael Todinov, 'Using algebraic inequalities to reduce uncertainty and risk'
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 6 (4) (2020)
ISSN: 2332-9017
Abstract: The paper discusses applications of the domain-independent method of algebraic inequalities for reducing uncertainty and risk. Algebraic inequalities are used for revealing the intrinsic reliability of competing systems and ranking the systems in terms of reliability in the absence of knowledge related to the reliabilities of their components. An algebraic inequality has also been used to establish the principle of the well-ordered parallel-series systems which, in turn, has been applied to maximise the reliability of common parallel-series systems. The paper introduces a method of linking an abstract inequality to a real process by a meaningful interpretation of the variables entering the inequality and its left- and right-hand parts. The meaningful interpretation of a simple algebraic inequality led to a counterintuitive result: if items of two varieties are present in a large batch, the probability of randomly selecting two items of different varieties does not exceed 0.5.

Todinov M, 'On two fundamental approaches for reliability improvement and risk reduction by using algebraic inequalities'
Quality and Reliability Engineering International 37 (2) (2020) pp.820-840
ISSN: 0748-8017
Abstract: The paper introduces two fundamental approaches for reliability improvement and risk reduction by using nontrivial algebraic inequalities: (a) by proving an inequality derived or conjectured from a real system or process and (b) by creating meaningful interpretation of an existing nontrivial abstract inequality relevant to a real system or process. A formidable advantage of the algebraic inequalities can be found in their capacity to produce tight bounds related to reliability-critical design parameters in the absence of any knowledge about the variation of the controlling variables. The effectiveness of the first approach has been demonstrated by examples related to decision-making under deep uncertainty and examples related to ranking systems built on components whose reliabilities are unknown. To demonstrate the second approach, meaningful interpretation has been created for an inequality that is a special case of the Cauchy-Schwarz inequality. By varying the interpretation of the variables, the same inequality holds for elastic elements, resistors, and capacitors arranged in series and parallel. The paper also shows that meaningful interpretation of superadditive and subadditive inequalities can be used with success for optimizing various systems and processes. Meaningful interpretation of superadditive and subadditive inequalities has been used for maximizing the stored elastic strain energy at a specified total displacement and for optimizing the profit from an investment. Finally, meaningful interpretation of an algebraic inequality has been used for reducing uncertainty and the risk of incorrect prediction about the magnitude ranking of sequential random events.
Todinov M, 'Reducing the risk of failure by deliberate weaknesses'
International Journal of Risk and Contingency Management 9 (2) (2020) pp.33-53
ISSN: 2160-9624
Abstract: Deliberate weaknesses are points of weakness towards which a potential failure is channelled in order to limit the magnitude of the consequences of failure. The paper shows that reducing risk by deliberate weaknesses is a powerful domain-independent method which transcends mechanical engineering and works in various unrelated areas of human activity. A classification has been proposed of categories and classes of deliberate weaknesses reducing risk, together with a discussion of the underlying mechanisms of risk reduction. It is shown that introducing and repositioning existing weaknesses is an effective risk-reduction strategy which transcends engineering and can be applied in many unrelated domains. The paper shows that in the case where the cost of failure of the separate components in a system varies significantly, an approach based on deliberate weaknesses has a significant advantage over the equal-reliability/equal-strength design approach.

Todinov M, 'Improving reliability and reducing risk by using inequalities'
Safety and Reliability 38 (4) (2019) pp.222-245
ISSN: 0961-7353 eISSN: 2469-4126
Abstract: The paper introduces a powerful domain-independent method for improving reliability and reducing risk based on algebraic inequalities, which transcends mechanical engineering and can be applied in many unrelated domains. The paper demonstrates the application of inequalities to reduce the risk of failure by producing tight uncertainty bounds for properties and risk-critical parameters. Numerous applications of the upper-bound-variance inequality have been demonstrated in bounding uncertainty from multiple sources, among which is the estimation of uncertainty in setting positioning distance and increasing the robustness of electronic devices. The rearrangement inequality has been used to maximise the reliability of components purchased from suppliers. With the help of the rearrangement inequality, a highly counterintuitive result has been obtained: if no information about the component reliability characterising the individual suppliers is available, purchasing components from a single supplier or from the smallest possible number of suppliers maximises the probability of a high-reliability assembly. The Cauchy-Schwarz inequality has been applied for determining sharp bounds of mechanical properties, and Chebyshev's inequality for determining a lower bound for the reliability of an assembly. The inequality of the inversely correlated random events has been introduced and applied for ranking risky prospects involving units with unknown probabilities of survival.
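The single-supplier result in the abstract can be probed numerically. The sketch below (Python, illustrative; the two-component assembly, function names and the uniform random reliabilities are my assumptions, not the paper's setup) compares buying both components of a two-component series assembly from one randomly chosen supplier against buying them from two different randomly chosen suppliers, for m suppliers with unknown reliabilities p_1..p_m.

```python
import random

def single_supplier(ps):
    # Both components from one supplier chosen uniformly at random:
    # success probability is the mean of p_i^2.
    m = len(ps)
    return sum(p * p for p in ps) / m

def two_suppliers(ps):
    # Components from two different suppliers chosen at random:
    # success probability is the mean of p_i * p_j over ordered pairs i != j.
    m = len(ps)
    return sum(ps[i] * ps[j]
               for i in range(m) for j in range(m) if i != j) / (m * (m - 1))

random.seed(3)
for _ in range(1000):
    m = random.randint(2, 6)
    ps = [random.random() for _ in range(m)]
    # The single-supplier strategy is never worse, whatever the p_i are
    # (a rearrangement/Cauchy-Schwarz consequence; tolerance for rounding).
    assert single_supplier(ps) >= two_suppliers(ps) - 1e-12
```

The two strategies coincide only when all suppliers are equally reliable, which is why the result is invisible to intuition based on "averaging out" suppliers.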

Michael Todinov, 'Reliability improvement and risk reduction by inequalities and segmentation'
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 234 (1) (2019) pp.63-73
ISSN: 1748-006X eISSN: 1748-0078
Abstract: The paper introduces new domain-independent methods for improving reliability and reducing risk based on algebraic inequalities and chain-rule segmentation. Two major advantages of algebraic inequalities for reducing risk have been demonstrated: (i) ranking risky prospects in the absence of any knowledge related to the individual building parts and (ii) reducing the variability of a risk-critical output parameter. The paper demonstrates a highly counterintuitive result derived by using inequalities: if no information about the component reliability characterising the individual suppliers is available, purchasing components from a single supplier or from the smallest possible number of suppliers maximises the probability of a high-reliability assembly. The paper also demonstrates the benefits from combining domain-independent methods and domain-specific knowledge for achieving risk reduction in several unrelated domains: decision-making, manufacturing, strength of components and kinematic analysis of complex mechanisms. In this respect, the paper introduces the chain-rule segmentation method and applies it to reduce the risk of computational errors in kinematic analysis of complex mechanisms. The paper also demonstrates that combining the domain-independent method of segmentation and domain-specific knowledge in stress analysis leads to a significant reduction of the internal stresses and reduction of the risk of overstress failure.
Todinov MT, 'Domain-independent approach to risk reduction'
Journal of Risk Research 23 (6) (2019) pp.796-810
ISSN: 1366-9877 eISSN: 1466-4461
Abstract: The popular domain-specific approach to risk reduction created the illusion that efficient risk reduction can be delivered successfully solely by using methods offered by the specific domain. As a result, many industries have been deprived of efficient risk-reduction strategies and solutions. This paper argues that risk reduction is underpinned by domain-independent methods and principles which, combined with knowledge from the specific domain, help to generate effective risk-reduction solutions. In this respect, the paper introduces a powerful method for reducing the likelihood of computational errors based on combining the domain-independent method of segmentation and local knowledge of the chain rule for differentiation. The paper also demonstrates that lack of knowledge of domain-independent principles for risk reduction misses opportunities to reduce the risk of failure even in such a mature field as stress analysis. The domain-independent methods for risk reduction do not rely on reliability data or knowledge of the physical mechanisms underlying possible failure modes and are particularly well suited for developing new designs with unknown failure mechanisms and failure history. In many cases, reliability improvement and risk reduction by using the domain-independent methods reduce risk at no extra cost or at a relatively small cost. The presented domain-independent methods work across totally unrelated domains, as demonstrated by the supplied examples, which range over various areas of engineering and technology, computer science, project management, health risk management, business and even mathematics. The domain-independent risk-reduction methods presented in this paper promote building products and systems characterised by high reliability and resilience.

Todinov MT, 'Reliability Improvement and Risk Reduction through Self-reinforcement'
International Journal of Risk Assessment and Management 22 (1) (2018) pp.18-43
ISSN: 1466-8297
Abstract: The method of self-reinforcement has been introduced as a domain-independent method for improving reliability and reducing risk. A key feature of self-reinforcement is that increasing the external/internal forces intensifies the system's response against these forces. As a result, the driving net force towards precipitating failure is reduced. In many cases, the self-reinforcement mechanisms achieve remarkable reliability increase at no extra cost. Two principal ways of self-reinforcement have been identified: reinforcement by capturing a proportional compensating factor and reinforcement by using feedback loops. Mechanisms of transforming forces and motion into a self-reinforcing response have been introduced and demonstrated through appropriate examples. Mechanisms achieving a self-reinforcing response by self-aligning, self-anchoring and modified geometry have also been introduced. For the first time, the potential of positive feedback loops to achieve self-reinforcement and risk reduction was demonstrated. In this respect, it is shown that the self-energizing, fast growth and fast transition provided by positive feedback loops can be used with success for achieving reliability improvement. Finally, a classification was proposed of methods and techniques for reliability improvement and risk reduction based on the method of self-reinforcement.
Todinov MT, 'Improving reliability and reducing risk by minimizing the rate of damage accumulation'
Safety and Reliability 37 (2/3) (2018) pp.148-176
ISSN: 0961-7353 eISSN: 2469-4126
Abstract: The paper introduces the principle of minimized rate of damage accumulation as a domain-independent principle of reliability improvement and risk reduction. A classification is proposed of methods for reducing the rate of damage accumulation. The paper introduces the method of substitution for reducing the rate of damage accumulation: the original assembly/system is substituted with an assembly/system performing the same function and based on different physical principles. Such a substitution often eliminates failure modes characterised by intensive damage accumulation. One of the methods discussed is an optimal replacement resulting in the smallest rate of damage accumulation and maximum system reliability. A method for achieving the smallest rate of damage accumulation for a system with components logically arranged in series has been proposed for the first time. A dynamic programming algorithm for determining the optimal variation of multiple damage-inducing factors to minimize the rate of damage accumulation has also been proposed for the first time. The paper shows that the necessary and sufficient condition for using the additivity rule for calculating the threshold of accumulated damage precipitating failure is the factorisation of the rate of damage accumulation into a function of the amount of damage and a function of the damage-inducing factor.
Todinov M, 'Closed parasitic flow loops and dominated loops in networks'
International Journal of Operational Research 36 (4) (2017) pp.555-590
ISSN: 1745-7645
Abstract: The paper raises awareness of the presence of closed parasitic flow loops in the solutions of published algorithms for maximising the throughput flow in networks. If the routed commodity is an interchangeable commodity, a closed parasitic loop can effectively be present even if the routed commodity does not physically travel along a closed loop. The closed parasitic flow loops are highly undesirable loops of flow which effectively never leave the network. Parasitic flow loops increase the cost of transportation of the flow unnecessarily, consume residual capacity from the edges of the network, increase the likelihood of deterioration of perishable products, and increase congestion and energy wastage. Accordingly, the paper presents a theoretical framework related to parasitic flow loops in networks. By using the presented framework, it is demonstrated that the probability of existence of closed and dominated flow loops in networks is surprisingly high. The paper also demonstrates that the successive shortest path strategy for minimising the total length of transportation routes from multiple interchangeable origins to multiple destinations fails to minimise the total length of the routes. It is demonstrated that even in a network with multiple origins and a single destination, the successive shortest path strategy could still fail to minimise the total length of the routes. By using the developed theoretical framework, it is shown that a minimum total length of the transportation routes in a network with multiple interchangeable origins is attained if and only if no closed parasitic flow loops and dominated flow loops exist in the network. Accordingly, an algorithm for minimising the total length of the transportation routes by eliminating all dominated parasitic flow loops is proposed.

Todinov MT, 'Mechanisms for improving reliability and reducing risk by stochastic and deterministic separation'
Journal of Risk Research 22 (4) (2017) pp. 448-474
ISSN: 1366-9877 eISSN: 1466-4461
Abstract: The paper provides for the first time a comprehensive introduction to the mechanisms through which the method of separation achieves risk reduction and to the ways it can be implemented in engineering designs. The concept of stochastic separation of critical random events on a time interval, which consists of guaranteeing with a specified probability a specified degree of distancing between the random events, is introduced. Efficient methods for providing stochastic separation by reducing the duration times of overlapping critical random events on a time interval are presented. The paper shows that the probability of overlapping of critical events randomly appearing on a time interval is practically insensitive to the distribution of their duration times and to the variance of the duration times, as long as the mean of the duration times remains the same. A rigorous proof is presented that this statement is valid even for two random events on a time interval. The paper also provides insight into the various mechanisms through which deterministic separation improves reliability and reduces risk. It is demonstrated that separation of properties is an efficient technique for compensating the drawbacks associated with homogeneous properties. It is demonstrated that improving reliability by including redundancy, improving reliability by segmentation, and some of the deliberate weak-link and stress-limiter techniques for reducing risk are effectively special cases of deterministic separation. Finally, the paper demonstrates that in a number of cases, the way to extract benefit from the method of separation is to build and analyse a mathematical model based on the method of separation. A comprehensive classification of the discussed methods for stochastic and deterministic separation is also presented.

Todinov MT, 'Reliability and risk controlled by the simultaneous presence of random events on a time interval'
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering 4 (2) (2017)
ISSN: 2332-9017
Abstract: The paper treats the important problem of risk controlled by the simultaneous presence of critical events randomly appearing on a time interval, and shows that the expected time fraction of simultaneously present events does not depend on the distribution of the events' durations. In addition, the paper shows that the probability of simultaneous presence of critical events is practically insensitive to the distribution of the events' durations. These counterintuitive results provide the powerful opportunity to evaluate the risk of overlapping of random events through the mean duration times of the events only, without requiring the distributions of the events' durations or their variance. A closed-form expression for the expected fraction of unsatisfied demand, for random demands following a homogeneous Poisson process on a time interval, is introduced for the first time. In addition, a closed-form expression for the expected time fraction of unsatisfied demand, for a fixed number of consumers initiating random demands with a specified probability, is also introduced for the first time. The concepts of stochastic separation of random events based on the probability of overlapping and on the average overlapped fraction are also introduced. Methods for providing stochastic separation, and for optimal stochastic separation achieving a balance between risk and the cost of risk reduction, are presented.
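The insensitivity result above can be checked with a quick Monte Carlo experiment. The sketch below is illustrative only (the arrival rate, horizon and mean duration are arbitrary choices, not values from the paper): it estimates the probability that at least two randomly appearing events overlap, once for constant durations and once for exponentially distributed durations with the same mean.

```python
import random

def overlap_probability(rate, horizon, draw_duration, trials=20000):
    """Monte Carlo estimate of the probability that at least two events,
    arriving as a homogeneous Poisson process on [0, horizon], overlap.
    draw_duration() samples one event duration."""
    overlaps = 0
    for _ in range(trials):
        # Poisson process: exponentially distributed inter-arrival times
        t, starts = 0.0, []
        while True:
            t += random.expovariate(rate)
            if t > horizon:
                break
            starts.append(t)
        events = [(s, s + draw_duration()) for s in starts]
        # Starts are sorted, so an overlap exists iff some event
        # begins before the previous event has ended
        if any(events[i][0] < events[i - 1][1] for i in range(1, len(events))):
            overlaps += 1
    return overlaps / trials

random.seed(1)
mean_d = 0.5  # same mean duration in both experiments
p_const = overlap_probability(0.05, 100.0, lambda: mean_d)
p_expo = overlap_probability(0.05, 100.0, lambda: random.expovariate(1 / mean_d))
print(p_const, p_expo)
```

Constant durations have zero variance while exponential durations have variance equal to the squared mean, yet the two estimates come out close, in line with the insensitivity result stated in the abstract.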
Todinov M, 'Reducing Risk Through Inversion and Self-Strengthening'
International Journal of Risk and Contingency Management 6 (1) (2017) pp. 14-42
ISSN: 2160-9625 eISSN: 2160-9632
Abstract: A number of new techniques for reliability improvement and risk reduction based on the inversion method, such as ‘inverting design variables’, ‘inverting by maintaining an invariant’, ‘inverting resulting in a reinforcing counterforce’, ‘negating basic required functions’ and ‘moving backwards to general and specific contributing factors’, have been introduced for the first time. By using detailed calculations, it has been demonstrated how the new technique ‘repeated inversion maintaining an invariant’ can be applied to reduce the risk of collision for multiple ships travelling at different times and with variable speeds. It has been demonstrated that for pressure vessels, inversion of the geometric parameters while maintaining an invariant volume can result not only in increased safety but also in a significantly reduced weight.
The method of self-strengthening has been introduced for the first time as a systematic method for improving reliability and reducing risk. Self-strengthening by capturing a proportional compensating factor and self-strengthening by creating a positive feedback loop have been proposed for the first time as reliability improvement tools. Finally, classifications of methods and techniques for risk reduction based on the methods of inversion and self-strengthening have been proposed.
Todinov M, 'Improving reliability and reducing risk by separation'
International Journal of Risk and Contingency Management 6 (4) (2017) pp. 16-39
Abstract: The paper introduces the method of separation for improving reliability and reducing technical risk, and provides insight into the various mechanisms through which the method of separation attains this goal. A comprehensive classification of techniques for improving reliability and reducing risk based on the method of separation has been proposed for the first time. From this classification, three principal categories of separation techniques have been identified: (i) assuring distinct functions/properties/behaviour for distinct components or parts; (ii) assuring distinct properties/behaviour at distinct times, values of a parameter, conditions or scales; and (iii) distancing risk-critical factors. The concept of ‘stochastic separation’ of random events and methods for providing stochastic separation have been introduced. It is shown that separation of properties is an efficient technique for compensating the drawbacks associated with a selection based on homogeneous properties. It is also demonstrated that the method of deliberate weak links and the method of segmentation can be considered special cases of the method of separation. Finally, the paper demonstrates that the traditional reliability measure ‘safety margin’ is misleading and should not be used as a measure of the relative separation between load and strength.

Todinov M, 'Reducing Risk by Segmentation'
International Journal of Risk and Contingency Management 6 (3) (2017) pp. 27-46
ISSN: 2160-9624 eISSN: 2160-9632
Abstract: The paper provides an analysis of the various mechanisms through which segmentation improves reliability and reduces technical risk, and presents a classification of risk-reduction techniques based on segmentation. On the basis of theoretical arguments and examples, it is demonstrated that segmentation increases the tolerance of components to flaws causing local damage, reduces the rate of damage accumulation and damage escalation, and reduces the hazard potential. The paper also demonstrates that segmentation essentially replaces a sudden failure on a macro-level with gradual deterioration of the system on a micro-level through non-critical failures. It is demonstrated that segmentation can even reduce the likelihood of a loss from opportunity bets and the likelihood of an erroneous conclusion from imperfect tests. Finally, a comprehensive classification of methods and techniques for reducing risk based on segmentation has been proposed.
Todinov MT, 'Stochastic pruning and its application for fast estimation of the expected total output of complex systems'
Electronic Notes in Theoretical Computer Science 327 (2016) pp. 109-123
ISSN: 1571-0661
Abstract: A powerful method referred to as stochastic pruning is introduced for analysing the performance of common complex systems whose component failures follow a homogeneous Poisson process. The method has been applied to create a very fast solver for estimating the production availability of large repairable flow networks with complex topology. It is shown that the key performance measures production availability and system reliability are all properties of a stochastically pruned network with corresponding pruning probabilities. The high-speed solver is based on an important result regarding the average total output of a repairable system including components characterised by constant failure/hazard rates. The average output over a specified operation time interval is given by the ratio of the expected momentary output of the stochastically pruned system, where the separate components are pruned with probabilities equal to their unavailabilities, and the maximum momentary output in the absence of component failures. The running time of the algorithm for determining the expected total output of the system over a specified time interval is independent of the length of the operational interval and the failure frequencies of the edges. The high-speed solver has been embedded in a software tool with a graphical user interface, by which a flow network topology is drawn on screen and the parameters characterising the edges and the nodes are easily specified. The software tool has been used to analyse a gas production network and to study the impact of the network topology on the network performance. It is shown that two networks built with identical type and number of components may have very different performance levels because of slight differences in their topology.
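A minimal sketch of the stochastic pruning idea (a hypothetical four-node network with assumed capacities and unavailabilities, not the paper's solver; the textbook Edmonds-Karp algorithm stands in for the max-flow computation): each edge is removed with probability equal to its unavailability, and the maximum output flow of the pruned network is averaged over many trials.

```python
import random
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp maximum flow on an n-node network.
    cap is a dict {(u, v): capacity}; returns the maximum s-t flow."""
    flow = 0
    residual = dict(cap)
    adj = {u: set() for u in range(n)}
    for (u, v) in cap:
        adj[u].add(v)
        adj[v].add(u)                     # allow traversal of reverse residual edges
        residual.setdefault((v, u), 0)
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# Hypothetical network: source 0, sink 3, two edge-disjoint routes of capacity 5
cap = {(0, 1): 5, (1, 3): 5, (0, 2): 5, (2, 3): 5}
unavailability = {e: 0.1 for e in cap}    # assumed per-edge unavailability

random.seed(2)
trials = 5000
total = 0
for _ in range(trials):
    # Stochastic pruning: drop each edge with probability equal to its unavailability
    pruned = {e: c for e, c in cap.items() if random.random() >= unavailability[e]}
    total += max_flow(4, pruned, 0, 3)
avg = total / trials
print(avg)  # analytically: each route survives with 0.9 * 0.9 = 0.81, so E = 2 * 5 * 0.81 = 8.1
```

Because the routes are edge-disjoint, the expected pruned flow can be checked by hand, which makes the toy example a convenient sanity test of the pruning estimate.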
Todinov M, 'Evaluating the risk of unsatisfied demand on a time interval'
Artificial Intelligence Research 5 (1) (2016) pp. 67-77
ISSN: 1927-6974 eISSN: 1927-6982
Abstract: This paper focuses on an important and very common problem and presents a theoretical framework for solving it: determining the risk of unsatisfied requests from users placing random demands on a time interval. For the common case of a single source servicing a number of consumers, a closed-form solution has been derived for the risk of collision of random demands. Based on the closed-form solution, an efficient optimisation method has been developed for determining the optimal number of consumers that can be serviced by a single source, such that the probability of unsatisfied demand remains below a maximum tolerable level. A central part of the proposed theoretical framework is a general equation evaluating the risk of unsatisfied demand by the expected fraction of time of unsatisfied demand. The derived equation covers multiple sources servicing multiple consumers. Finally, the conducted parametric studies revealed an unexpected finding: the risk of collision of random demands on a time interval is practically insensitive to the standard deviations of the durations of the demands. This surprising result provides the valuable opportunity to work with random demand times characterised by their means only, without supplying their probability distributions or variances.
Todinov MT, Same A, 'A fracture condition incorporating the most unfavourable orientation of the crack'
International Journal of Mechanics and Materials in Design 11 (3) (2015) pp. 243-252
ISSN: 1569-1713
Abstract: A fracture condition incorporating the most unfavourable orientation of the crack has been derived to improve the safety of loaded brittle components with complex shape, whose loading results in a three-dimensional stress state. With a single calculation, an answer is provided to the important question of whether a randomly oriented crack at a particular location in the stressed component will cause fracture. Brittle fracture is a dangerous failure mode and requires a conservative design calculation. The presented experimental results show that the locus of stress intensity factors which result in mixed-mode fracture is associated with significant uncertainty. Consequently, a new approach to the design of safety-critical components has been proposed, based on a conservative safe zone located away from the scatter band defining fracture states. A post-processor based on the proposed fracture condition and conservative safe zone can easily be developed for testing loaded safety-critical components with complex shape. For each finite element, only a single computation is made, which guarantees a high computational speed. This makes the proposed approach particularly useful for incorporation in a design optimisation loop.
Todinov MT, 'Reducing risk through segmentation, permutations, time and space exposure, inverse states, and separation'
International Journal of Risk and Contingency Management 4 (3) (2015) pp. 1-21
ISSN: 2160-9624 eISSN: 2160-9632
Abstract: The paper features a number of new generic principles for reducing technical risk, with a very wide application area. Permutations of interchangeable components/operations in a system can significantly reduce the risk of system failure at no extra cost. Reducing the time of exposure and the space of exposure can also reduce risk significantly. Technical risk can be reduced effectively by introducing inverse states countering negative effects during service. The application of this principle in logistic supply networks leads to a significant reduction of the risk of congestion and delays. The associated reduction of transportation costs and environmental pollution has the potential to save billions of dollars to the world economy. Separation is a risk-reduction principle which is very efficient in the case of separating functions to be carried out by different components and for blocking out a common cause. Segmentation is a generic principle for risk reduction which is particularly efficient in reducing the load distribution, the vulnerability to a single failure, the hazard potential and damage escalation.
Todinov M, 'The same sign local effects principle and its application to technical risk reduction'
International Journal of Reliability and Safety 9 (4) (2015)
ISSN: 1479-389X
Abstract: A simple yet powerful general risk-reduction principle has been formulated, related to systems each state of which can be obtained from a given initial state by adding the effects from a specified set of modifications. An important application of the formulated principle has been found in determining the global extremum of multivariable functions whose partial derivatives maintain the same sign in a rectangular domain. The proposed generic principle has also been applied with success to minimise the transportation costs related to a set of interchangeable sources servicing a set of destinations. A counterexample has been given which demonstrates for the first time that selecting the nearest available source to supply the destinations along the shortest available paths does not guarantee an optimal solution. This counterintuitive result is contrary to long-standing and well-established practices in network optimisation. The application of the proposed generic principle in logistic supply networks leads to a significant reduction of the risk of congestion and delays.
Todinov MT, 'Dominated parasitic flow loops in networks'
International Journal of Operations Research 11 (1) (2014) pp. 1-17
ISSN: 1813-713X eISSN: 1813-7148
Abstract: The paper introduces the concept of ‘dominated parasitic flow loops’ and demonstrates that these occur naturally in real networks transporting interchangeable commodity. Dominated parasitic flow loops are augmentable broken loops which have a dominating flow in one particular direction of traversing. They are associated with transportation losses, congestion and increased pollution of the environment, and are highly undesirable in real flow networks. The paper derives a necessary and sufficient condition for the non-existence of dominated parasitic flow loops in the case of the presence of paths with zero and non-zero flow. The necessary and sufficient condition is the basis of a method for determining the probability of a dominated parasitic flow loop. The results demonstrate that the probability of a dominated parasitic flow loop is very large and increases very quickly with an increasing number of flow paths. Dominated parasitic flow loops can be drained by augmenting them with flow, which results in an overall decrease of the transportation cost without affecting the quantity of commodity delivered from sources to destinations. Accordingly, an efficient algorithm for removing dominated parasitic flow loops has been presented and a number of important applications have been identified. The presented algorithm has the potential to save a significant amount of resources to the world economy.

Todinov MT, 'Optimal allocation of limited resources among discrete risk-reduction options'
Artificial Intelligence Research 3 (4) (2014)
ISSN: 1927-6974 eISSN: 1927-6982
Abstract: This study exposes a critical weakness of the (0-1) knapsack dynamic programming approach, widely used for optimal allocation of resources. The (0-1) knapsack dynamic programming approach could waste resources on insignificant improvements and prevent the more efficient use of the resources to achieve maximum benefit. Despite the numerous extensive studies, this critical shortcoming of the classical formulation has been overlooked. The main reason is that the standard (0-1) knapsack dynamic programming approach has been devised to maximise the benefit derived from items filling a space with no intrinsic value. While this is an appropriate formulation for packing and cargo loading problems, in applications involving capital budgeting this formulation is deeply flawed. The reason is that budgets do have intrinsic value, and their efficient utilisation is just as important as the maximisation of the benefit derived from the budget allocation. Accordingly, a new formulation of the (0-1) knapsack resource allocation model is proposed, where the weighted sum of the benefit and the remaining budget is maximised instead of the total benefit. The proposed optimisation model produces solutions superior to both the standard (0-1) dynamic programming approach and the cost-benefit approach.
On the basis of common parallel-series systems, the paper also demonstrates that, because of synergistic effects, sets including the same number of identical options could remove different amounts of total risk. The existence of synergistic effects does not permit the application of the (0-1) dynamic programming approach; in this case, specific methods for optimal resource allocation should be applied. Accordingly, the paper formulates and proves a theorem stating that the maximum amount of removed total risk from operations and systems with a parallel-series logical arrangement is achieved by preferentially using the available budget on improving the reliability of operations/components belonging to the same parallel branch. Improving the reliability of randomly selected operations/components not forming a parallel branch leads to a suboptimal risk reduction. The theorem is a solid basis for achieving a significant risk reduction for systems and processes with a parallel-series logical arrangement.
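The proposed reformulation can be illustrated on a toy example (the option costs and benefits below are invented, and brute-force subset search replaces dynamic programming for brevity). With the standard objective, a 48-unit option yielding a benefit of only 5 is still selected because it fits the budget; weighting the remaining budget into the objective rejects it.

```python
from itertools import combinations

def knapsack_best_set(options, budget, leftover_weight=0.0):
    """Exhaustive search over option subsets (fine for a handful of options).
    Maximises benefit + leftover_weight * remaining budget;
    leftover_weight = 0 recovers the standard (0-1) knapsack objective."""
    best, best_val = set(), float('-inf')
    indices = range(len(options))
    for r in range(len(options) + 1):
        for subset in combinations(indices, r):
            cost = sum(options[i][0] for i in subset)
            if cost > budget:
                continue                      # infeasible: over budget
            benefit = sum(options[i][1] for i in subset)
            val = benefit + leftover_weight * (budget - cost)
            if val > best_val:
                best_val, best = val, set(subset)
    return best

# Hypothetical risk-reduction options as (cost, removed-risk benefit) pairs
options = [(50, 60), (48, 5)]
budget = 100
print(knapsack_best_set(options, budget))                     # standard objective: takes both
print(knapsack_best_set(options, budget, leftover_weight=1))  # weighted objective: keeps the budget instead
```

Under the standard objective both options are chosen (total benefit 65); with the remaining budget valued at weight 1, the poor second option is dropped because its benefit of 5 does not justify spending 48 units of budget.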

Todinov MT, 'The throughput flow constraint theorem and its applications'
International Journal of Advanced Computer Science and Applications 5 (3) (2014)
ISSN: 2158-107X
Abstract: The paper states and proves an important result related to the theory of flow networks with disturbed flows: the throughput flow constraint in any network is always equal to the throughput flow constraint in its dual network. After the failure or congestion of several edges in the network, the throughput flow constraint theorem provides the basis of a very efficient algorithm for determining the edge flows which correspond to the optimal throughput flow from sources to destinations, which is the throughput flow achieved with the smallest amount of generation shedding from the sources. In the case where a failure of an edge causes a loss of the entire flow through the edge, the throughput flow constraint theorem permits the calculation of the new maximum throughput flow to be done in O(m) time, where m is the number of edges in the network. In this case, the new maximum throughput flow is calculated by inspecting the network only locally, in the vicinity of the failed edge, without inspecting the rest of the network. The superior average running time of the presented algorithm makes it particularly suitable for decongesting overloaded transmission links of telecommunication networks in real time. In the paper, it is also shown that the deliberate choking of flows along overloaded edges, leading to a generation of momentary excess and deficit flow, provides a very efficient mechanism for decongesting overloaded branches.
Todinov MT, 'Fast augmentation algorithms for maximising the output flow in repairable flow networks after edge failures'
International Journal of Systems Science 44 (10) (2013) pp. 1807-1830
ISSN: 0020-7721
Abstract: The article discusses a number of fundamental results related to determining the maximum output flow in a network after edge failures. On the basis of four theorems, we propose very efficient augmentation algorithms for restoring the maximum possible output flow in a repairable flow network after an edge failure. In many cases, the running time of the proposed algorithms is independent of the size of the network or varies linearly with the size of the network. The high computational speed of the proposed algorithms makes them suitable for optimising the performance of repairable flow networks in real time and for decongesting overloaded branches in networks. We show that the correct algorithm for maximising the flow in a static flow network with edges fully saturated with flow is a special case of the proposed reoptimisation algorithm, after transforming the network into a network with balanced nodes. An efficient two-stage augmentation algorithm has also been proposed for maximising the output flow in a network with empty edges. The algorithm is faster than the classical flow augmentation algorithms. The article also presents a study of the link between performance, topology and size of repairable flow networks, by using a specially developed software tool. The topology of repairable flow networks has a significant impact on their performance. Two networks built with identical type and number of components can have very different performance levels because of slight differences in their topology.
Todinov MT, 'New algorithms for optimal reduction of technical risk'
Engineering Optimization 45 (6) (2013) pp. 719-743
ISSN: 0305-215X eISSN: 1029-0273
Abstract: The article features exact algorithms for the reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
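The last claim is easy to verify numerically for a simple binomial bet model (the win probability and stakes below are illustrative choices, not values from the article): a single bet with positive expected profit loses with probability 0.4, while the probability of an overall loss after a sequence of 20 such bets is several times smaller.

```python
from math import comb

def prob_overall_loss(n, p_win, win=1.0, loss=1.0):
    """Exact probability that the total profit of n independent,
    identical win/lose bets is negative (binomial model)."""
    total = 0.0
    for k in range(n + 1):                      # k = number of winning bets
        profit = k * win - (n - k) * loss
        if profit < 0:
            total += comb(n, k) * p_win**k * (1 - p_win)**(n - k)
    return total

# Each bet wins 1 with probability 0.6, loses 1 otherwise: expected profit +0.2
print(prob_overall_loss(1, 0.6))   # a single bet loses with probability 0.4
print(prob_overall_loss(20, 0.6))  # an overall loss after 20 bets is far less likely
```

The sequence of bets concentrates the total profit around its positive mean, which is why rejecting the sequence just because the single bet looks too risky is inconsistent.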
Todinov MT, 'Parasitic flow loops in networks'
International Journal of Operations Research 10 (3) (2013) pp. 109-122
ISSN: 1813-713X eISSN: 1813-7148
Abstract: Parasitic flow loops in real networks are associated with transportation losses, congestion and increased pollution of the environment. The paper shows that complex networks dispatching the same type of interchangeable commodity exhibit parasitic flow loops, and that the commodity does not need to be physically travelling around a closed contour for a parasitic flow loop to be present. Consequently, a theorem giving the necessary and sufficient condition for a parasitic flow loop on randomly oriented source-destination paths in a plane has been formulated, and a simple expression has been obtained for the probability of a directed flow loop. A closed-form expression has also been derived for determining the probability of a parasitic flow loop on a fixed lattice with flows whose directions are random. The results demonstrate that even for a relatively small number of intersecting flow paths, the probability of a directed flow loop is very large, which shows that the existence of directed flow loops in large and complex networks is practically inevitable. Consequently, a simple and efficient recursive algorithm has also been proposed for discovering and removing parasitic flow loops in real networks. The paper also shows that for any possible number and any possible orientation of straight-line flow paths on a plane, it is always possible to choose the flows in the paths in such a way that no parasitic flow loops are present between the points of intersection.
In this paper, we also raise awareness of a fundamental flaw of the algorithms for maximising the throughput flow published since 1956: they all leave highly undesirable parasitic flow loops in the optimised networks and are unsuitable for network optimisation without an additional stage aimed at removing them.
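A sketch of a loop-removal step of the kind described above (applied to a hypothetical flow assignment; the paper's own recursive algorithm may differ): repeatedly find a directed cycle whose edges all carry positive flow and cancel the minimum flow around it, which leaves the source-to-destination throughput unchanged.

```python
def remove_flow_loops(flows):
    """Cancel directed loops of flow. flows is a dict {(u, v): flow};
    each pass finds a cycle of positive-flow edges via DFS and subtracts
    the minimum flow around it, until no such cycle remains."""
    def find_cycle():
        graph = {}
        for (u, v), f in flows.items():
            if f > 0:
                graph.setdefault(u, []).append(v)
        colour, stack = {}, []
        def dfs(u):
            colour[u] = 'grey'
            stack.append(u)
            for v in graph.get(u, []):
                if colour.get(v) == 'grey':          # back edge: cycle found
                    return stack[stack.index(v):] + [v]
                if v not in colour:
                    cycle = dfs(v)
                    if cycle:
                        return cycle
            colour[u] = 'black'
            stack.pop()
            return None
        for u in list(graph):
            if u not in colour:
                cycle = dfs(u)
                if cycle:
                    return cycle
        return None
    while True:
        cycle = find_cycle()
        if cycle is None:
            return flows
        edges = list(zip(cycle, cycle[1:]))
        slack = min(flows[e] for e in edges)         # drain the loop completely
        for e in edges:
            flows[e] -= slack

# Hypothetical flow assignment: source 0, destination 4, with a
# parasitic loop 1 -> 2 -> 3 -> 1 superimposed on a feasible flow of 2
flows = {(0, 1): 2, (1, 2): 4, (2, 4): 2, (2, 3): 2, (3, 1): 2}
cleaned = remove_flow_loops(flows)
print(cleaned)  # the loop edges (2, 3) and (3, 1) end up with zero flow
```

Node conservation holds before and after: cancelling the loop removes 2 units circulating through nodes 1, 2 and 3 while the 2 units delivered from 0 to 4 are untouched.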

Todinov MT, 'The dual network theorem for static flow networks and its application for maximising the throughput flow'
Artificial Intelligence Research 2 (1) (2013) pp. 81-106
ISSN: 1927-6974 eISSN: 1927-6982
Abstract: The paper discusses a new fundamental result in the theory of flow networks, referred to as the ‘dual network theorem for static flow networks’. The theorem states that the maximum throughput flow in any static network is equal to the sum of the capacities of the edges coming out of the source, minus the total excess flow at all excess nodes, plus the maximum throughput flow in the dual network. For very few imbalanced nodes in a flow network, determining the throughput flow in the dual network is a task significantly easier than determining the throughput flow in the original network. This creates the basis of a very efficient algorithm for maximising the throughput flow in a network, by maximising the throughput flow in its dual network. Consequently, a new algorithm for maximising the throughput flow in a network has been proposed. For networks with very few imbalanced nodes, in the case where only the maximum throughput flow is of interest, the proposed algorithm will outperform any classical method for determining the maximum throughput flow. In this paper we also raise awareness of a fundamental flaw in classical algorithms for maximising the throughput flow in static networks with directed edges. Despite the years of intensive research on static flow networks, the classical algorithms leave undesirable directed loops of flow in the optimised networks. These directed flow loops are associated with wastage of energy and resources and increased levels of congestion in the optimised networks. Consequently, an algorithm is also proposed for discovering and removing directed loops of flow in networks.
Todinov M, 'Algorithms for minimising the lost flow due to failed components in repairable flow networks with complex topology'
International Journal of Reliability and Safety 6 (4) (2012) pp. 283-310
ISSN: 1479-389X
Abstract: A number of fundamental theorems related to non-reconfigurable repairable flow networks have been stated and proved. For a specified source-to-sink path, the difference between the sum of the unavailabilities of its forward edges and the sum of the unavailabilities of its backward edges is the path resistance. In a repairable flow network, the absence of augmentable cyclic paths with negative resistance is a necessary and sufficient condition for a minimum lost flow due to edge failures. For a specified source-to-sink path, the difference between the sum of the hazard rates of its forward empty edges and the sum of the hazard rates of its backward empty edges is the flow disruption number of the path. The absence of augmentable cyclic paths with a negative flow disruption number is a necessary and sufficient condition for a minimum probability of disruption of the throughput flow by edge failures.
Todinov MT, 'Topology optimisation of repairable flow networks for a maximum average availability'
Computers and Mathematics with Applications 64 (12) (2012) pp. 3729-3746
ISSN: 0898-1221 eISSN: 1873-7668
Abstract: We state and prove a theorem regarding the average production availability of a repairable flow network composed of independently working edges whose failures follow a homogeneous Poisson process. The average production availability is equal to the average of the maximum output flow rates on demand from the network, calculated after removing the separate edges with probabilities equal to the edges' unavailabilities. This result creates the basis of extremely fast solvers for the production availability of complex repairable networks, the running time of which is independent of the length of the operational interval, the failure frequencies, or the lengths of the downtimes for repair. The computational speed of the production availability solver has been increased further by a new algorithm for maximising the output flow in a network after the removal of several edges, which does not require determining the feasible edge flows in the network. The algorithm for maximising the network flow is based on a new theorem, referred to as ‘the maximum flow after edge failures theorem’, stated and proved for the first time. Finally, unlike heuristic optimisation algorithms, the proposed algorithm for a topology optimisation of the network always determines the optimal solution.
The high computational speed of the developed production availability solver created the possibility of embedding it in simulation loops performing a topology optimisation of large and complex repairable networks, aimed at attaining a maximum average availability within a specified budget for building the network. An exact optimisation method has been proposed, based on pruning the full-complexity network and using the branch and bound method as a way of exploring possible network topologies. This makes the proposed algorithm much more efficient than an algorithm implementing a full exhaustive search. In addition, the proposed method produces an optimal solution, unlike heuristic optimisation methods.
The application of the branch and bound method is possible because of the monotonic dependence of the production availability on the number of edges pruned from the full-complexity network.

Todinov M, 'Analysis and optimization of repairable flow networks with complex topology'
IEEE Transactions on Reliability 60 (1) (2011) pp.111124
ISSN: 00189529 eISSN: 15581721AbstractWe propose a framework for analysis and optimization of repairable flow networks by (i) stating and proving the maximum flow minimum flow path resistance theorem for networks with merging flows (ii) a discreteevent solver for determining the variation of the output flow from repairable flow networks with complex topology (iii) a procedure for determining the threshold flow rate reliability for repairable networks with complex topology (iv) a method for topology optimization of repairable flow networks and (v) an efficient algorithm for maximizing the flow in nonreconfigurable flow networks with merging flows. Maximizing the flow in a static flow network does not necessarily guarantee that the flow in the corresponding nonreconfigurable repairable network will be maximized. In this respect, we introduce a new concept related to repairable flow networks:"a specific resistance of a flowpath" which is essentially the average percentage of losses from component failures for a flowpath fromthe source to the sink.Avery efficient algorithm based on adjacency arrays has also been proposed for determining all minimal flow paths in a network with complex topology and cycles. We formulate and prove a fundamental theorem about nonreconfigurable repairable flow networks with merging flows. The flow in a repairable flow network with merging flows can be maximized by preferentially saturating directed flow paths from the sources to the sink, characterized by the largest average availability. The procedure starts with the flow path with the largest average availability (the smallest specific resistance), and continues by saturating the unsaturated directed flow path with the largest average availability until no more flow paths can be saturated. A discreteevent solver for reconfigurable repairable flow networks with complex topology has also been constructed. 
The proposed discrete-event solver maximizes the flow rate in the network upon each component failure and return from repair, and by doing so ensures a larger total output flow during a specified time interval. The designed simulation procedure for determining the threshold flow rate reliability is particularly useful for comparing flow network topologies and selecting the topology characterized by the largest threshold flow rate reliability. It is also very useful in deciding whether the resources allocated for purchasing extra redundancy are justified. Finally, we propose a new optimization method for determining the network topology that yields the maximum output flow rate attainable within a specified budget for building the network. The optimization method is based on a branch-and-bound algorithm combined with pruning the full-complexity network as a way of exploring the possible repairable networks embedded in the full-complexity network.
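The greedy flow-path saturation procedure described in this abstract can be sketched as follows (an illustrative reconstruction, not the author's code; the network, path list and availability values are invented for the example):

```python
# Illustrative sketch of preferential flow-path saturation: paths are
# saturated in descending order of average availability (i.e. ascending
# specific resistance), each taking as much flow as its bottleneck allows.

def saturate_paths(paths, capacities):
    """paths: list of (availability, [edge, ...]) tuples;
    capacities: dict mapping edge name -> remaining capacity.
    Returns the total assigned flow and the per-path flows."""
    cap = dict(capacities)            # work on a copy
    flows = []
    for avail, edges in sorted(paths, key=lambda p: -p[0]):
        bottleneck = min(cap[e] for e in edges)   # most this path can carry
        if bottleneck > 0:
            for e in edges:
                cap[e] -= bottleneck              # saturate the path
            flows.append((edges, bottleneck))
    return sum(f for _, f in flows), flows

# Tiny two-path example (all numbers invented):
caps = {"s-a": 5, "a-t": 3, "s-b": 4, "b-t": 6}
paths = [(0.95, ["s-a", "a-t"]), (0.90, ["s-b", "b-t"])]
total, flows = saturate_paths(paths, caps)
print(total)  # 7  (3 along s-a-t, then 4 along s-b-t)
```

For general networks the paper orders paths by specific resistance and proves optimality for merging flows; this sketch only illustrates the saturation order.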
Todinov M, 'The cumulative stress hazard density as an alternative to the Weibull model'
International Journal of Solids and Structures 47 (24) (2010) pp. 3286-3296
ISSN: 0020-7683
Abstract: A simple, easily reproduced experiment based on artificial flaws has been proposed which demonstrates that the distribution of the minimum failure load does not necessarily follow a Weibull distribution. The experimental result presented in the paper clearly indicates that the Weibull distribution, with its strictly increasing function, is incapable of approximating a constant probability of failure over a loading region. New fundamental concepts have been introduced, referred to as 'hazard stress density' and 'cumulative hazard stress density'. These concepts helped derive an equation giving the probability of failure without making use of the notions 'flaws' and 'locally initiated failure by flaws'. As a result, the derived equation is more general than earlier models. The cumulative hazard stress density is an important fingerprint of materials and can be used for determining the reliability of loaded components. It leaves materials to 'speak for themselves' by not imposing a power-law dependence on the variation of the critical flaws, which is always the case if the Weibull model is used. An important link with earlier models has also been established: we show that the cumulative hazard stress density is numerically equal to the product of the number density of the flaws with a potential to cause failure and the probability that a flaw will be critical at the specified loading stress. We show that predictions of the probability of failure from tests related to a small gauge length to a large gauge length are associated with large errors which increase in proportion with the ratio of the gauge lengths. Large gauge length ratios amplify the inevitable errors in the probability of failure associated with the small gauge length to a level which renders the predicted probability of failure of the large gauge length meaningless.
Finally, a general integral has been derived, giving the reliability associated with a time interval and random loading of a material with flaws. The integral has been validated by a Monte Carlo simulation.
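In symbols (notation mine, chosen to match the abstract's description; the paper's own symbols may differ), the classical Weibull model imposes a power law, whereas the cumulative hazard stress density S(σ) leaves the stress dependence free:

```latex
% Weibull model: power-law stress dependence is built in
P_f(\sigma) = 1 - \exp\!\left[-V\left(\tfrac{\sigma}{\sigma_0}\right)^{m}\right]
% Generalisation via the cumulative hazard stress density S(\sigma)
P_f(\sigma) = 1 - \exp\!\left[-V\,S(\sigma)\right],
\qquad S(\sigma) = \lambda(\sigma)\,F_c(\sigma)
```

Here λ(σ) denotes the number density of the flaws with a potential to cause failure and F_c(σ) the probability that a flaw is critical at stress σ, restating the link established in the abstract; the Weibull model is recovered exactly when S(σ) is a power law of σ.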
Todinov M, 'Is Weibull distribution the correct model for predicting probability of failure initiated by non-interacting flaws?'
International Journal of Solids and Structures 46 (3-4) (2009) pp. 887-901
ISSN: 0020-7683
Abstract: The utility of the Weibull distribution has been traditionally justified with the belief that it is the mathematical expression of the weakest-link concept in the case of flaws locally initiating failure in a stressed volume. This paper challenges the Weibull distribution as a mathematical formulation of the weakest-link concept and its suitability for predicting the probability of failure locally initiated by flaws. The paper shows that the Weibull distribution predicts correctly the probability of failure locally initiated by flaws if and only if the probability that a flaw will be critical is a power law, or can be approximated by a power law, of the applied stress. Contrary to the common belief, on the basis of a theoretical analysis and Monte Carlo simulations we show that in general, for non-interacting flaws randomly located in a stressed volume, the distribution of the minimum failure stress is not necessarily a Weibull distribution. For the simple cases of a single group of identical flaws, or two flaw size groups each of which contains identical flaws, the Weibull distribution fails to predict correctly the probability of failure. Furthermore, if in a particular load range no new critical flaws are created by increasing the applied stress, the Weibull distribution also fails to predict correctly the probability of failure of the component. In all these cases, however, the probability of failure is correctly predicted by the suggested alternative equation. This equation is the correct mathematical formulation of the weakest-link concept related to random flaws in a stressed volume. The equation does not require any assumption concerning the physical nature of the flaws and the physical mechanism of failure, and can be applied in cases of locally initiated failure by non-interacting entities.
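The single-group counterexample in this abstract is easy to reproduce numerically (a sketch with invented parameters, not the paper's simulation): identical flaws share one critical stress, so the failure probability is zero below it and constant above it, a shape no strictly increasing Weibull curve can match.

```python
# Monte Carlo sketch: identical flaws, Poisson-distributed count.
# Below the common critical stress nothing fails; above it the failure
# probability is the constant 1 - exp(-lam_V), not a Weibull curve.
import math
import random

random.seed(0)
lam_V = 1.5        # expected number of flaws in the stressed volume (invented)
sigma_c = 200.0    # common critical stress of the identical flaws (invented)

def poisson(mean):
    """Knuth's multiplication method for sampling a Poisson count."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def min_failure_stress():
    # With at least one flaw present, failure occurs exactly at sigma_c.
    return sigma_c if poisson(lam_V) >= 1 else float("inf")

trials = 100_000
p_150 = sum(min_failure_stress() <= 150.0 for _ in range(trials)) / trials
p_250 = sum(min_failure_stress() <= 250.0 for _ in range(trials)) / trials
print(p_150)  # 0.0: no failures below the critical stress
print(abs(p_250 - (1 - math.exp(-lam_V))) < 0.01)  # True: flat plateau above it
```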
Todinov M, 'Robust design using variance upper bound theorem'
International Journal of Performability Engineering 5 (4) (2009) pp. 339-356
ISSN: 0973-1318
Abstract: The exact upper bound of the variance of properties from multiple sources is attained from sampling not more than two sources. This paper discusses important applications of this result, referred to as the variance upper bound theorem. A new conservative, non-parametric estimate has been proposed for the capability index of a process whose output combines contributions from multiple sources of variation. A new method for assessing and increasing the robustness of processes, operations and products where the mean value can be easily adjusted or is not critical has been presented, based on the variance upper bound theorem. We show that the worst-case variation of a property from multiple sources, obtained by using the variance upper bound theorem, can be used as a basis for developing robust engineering designs and products. If a design is capable of accommodating the worst-case variation of the reliability-critical parameters, it will also be capable of accommodating the variation of the reliability-critical parameters from any combination of sources of variation and mixing proportions. In this respect, a new algorithm for virtual testing based on the variance upper bound theorem has been proposed for determining the probability of a faulty assembly from multiple sources. For sources of variation that can be removed, the robustness can be improved further by removing the source that yields the largest decrease in the variance upper bound; the corresponding algorithm is also presented. A number of engineering applications have been discussed where the variance upper bound theorem can be used to assess and increase the robustness of mechanical and electrical components, manufacturing processes and operations.
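The theorem's practical content can be checked numerically (a sketch with invented source parameters, not the paper's algorithm): the worst-case variance over all mixing proportions of three sources is already attained by scanning two-source mixtures.

```python
# Sketch: variance of a mixture of sources with known means and variances,
# Var = sum_i p_i*(v_i + m_i^2) - (sum_i p_i*m_i)^2.
import random
from itertools import combinations

def mixture_variance(props, means, variances):
    mean = sum(p * m for p, m in zip(props, means))
    return sum(p * (v + m * m) for p, m, v in zip(props, means, variances)) - mean ** 2

means = [10.0, 12.0, 15.0]       # invented source means
variances = [1.0, 0.5, 2.0]      # invented source variances

# Upper bound from two-source mixtures only, scanning the mixing proportion:
grid = [i / 1000 for i in range(1001)]
pair_bound = max(
    mixture_variance([p, 1 - p], [means[i], means[j]], [variances[i], variances[j]])
    for i, j in combinations(range(3), 2)
    for p in grid
)

# Every random three-source mixture stays below that two-source bound:
random.seed(1)
for _ in range(1000):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    w = [x / s for x in w]
    assert mixture_variance(w, means, variances) <= pair_bound + 1e-9

print(round(pair_bound, 3))  # 7.76 for these invented sources
```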
Todinov M, 'Potential benefit, potential loss and potential gain from competing opportunity and failure events'
International Journal of Risk Assessment and Management 10 (40940) (2008)
ISSN: 1466-8297
Abstract: A quantitative framework is presented for dealing with competing opportunity and failure events in a finite time interval. The framework is based on the new fundamental concepts of potential benefit, potential loss and potential gain, for which closed-form expressions regarding their distributions are derived and verified by a simulation. It is demonstrated that a decision strategy based on multiple event occurrences yields a very different gain compared with a decision strategy based on the next event occurrence. The results are illustrated by examples supporting decision making.
Todinov M, 'Risk-based design on limiting the probability of system failure at a minimum total cost'
Risk Management 10 (2) (2008) pp. 104-121
ISSN: 1460-3799
Abstract: A basic principle for risk-based design has been formulated: the larger the losses from failure of a component, the smaller the upper bound of its hazard rate and the larger the required minimum reliability level of the component. A generalized version and an analytical expression for this important principle have also been formulated for multiple failure modes. It is argued that the traditional approach based on a risk matrix is suitable only for single failure modes/scenarios. In the case of multiple failure modes (scenarios), the individual risks should be aggregated and compared with the maximum tolerable risk. In this respect, a new method for risk-based design is proposed, based on limiting the probability of system failure below a maximum acceptable level at a minimum total cost (the sum of the cost for building the system and the risk of failure). The essence of the method can be summarized in three steps: developing a system topology with the maximum possible reliability; reducing the resultant system to a system with generic components, for each of which several alternatives exist, including non-existence of the component; and a final step involving selecting a set of alternatives limiting the probability of system failure at a minimum total cost. An exact recursive algorithm for determining the set of alternatives for the components is also proposed.
Todinov M, 'A comparative method for improving the reliability of brittle components'
Nuclear Engineering and Design 239 (2) (2008) pp. 214-220
ISSN: 0029-5493
Abstract: Calculating the absolute reliability built into a product is often an extremely difficult task because of the complexity of the physical processes and physical mechanisms underlying the failure modes, the complex influence of the environment and the operational loads, the variability associated with reliability-critical design parameters and the non-robustness of the prediction models. Predicting the probability of failure of loaded components with complex shape, for example, is associated with uncertainty related to the type of existing flaws initiating fracture, the size distributions of the flaws, the locations and the orientations of the flaws, and the microstructure and its local properties. Capturing these types of uncertainty, necessary for a correct prediction of the reliability of components, is a formidable task which does not need to be addressed if a comparative reliability method is employed, especially if the focus is on reliability improvement. The new comparative method for improving the resistance to failure initiated by flaws proposed here is based on an assumed failure criterion, an equation linking the probability that a flaw will be critical with the probability of failure associated with the component, and a finite element solution for the distribution of the principal stresses in the loaded component. The probability that a flaw will be critical is determined directly, after a finite number of steps equal to the number of finite elements into which the component is divided. An advantage of the proposed comparative method is that it does not rely on a Monte Carlo simulation and does not depend on knowledge of the size distribution of the flaws and the material properties. This essentially eliminates uncertainty associated with the material properties and the population of flaws.
On the basis of a theoretical analysis we also show that, contrary to the common belief, in general, for non-interacting flaws randomly located in a stressed volume, the distribution of the minimum failure stress is not necessarily described by a Weibull distribution. For the simple case of a single group of flaws all of which become critical beyond a particular threshold value, for example, the Weibull distribution fails to predict correctly the probability of failure. If in a particular load range no new critical flaws are created by increasing the applied stress, the Weibull distribution also fails to predict correctly the probability of failure of the component. In these cases, however, the probability of failure is correctly predicted by the suggested alternative equation, which is the correct mathematical formulation of the weakest-link concept related to random flaws in a stressed volume. The equation does not require any assumption concerning the physical nature of the flaws and the physical mechanism of failure, and can be applied in any situation of locally initiated failure by non-interacting entities.
Todinov M, 'Efficient algorithm and discrete-event solver for stochastic flow networks with converging flows'
International Journal of Reliability and Safety 2 (4) (2008) pp. 286-308
ISSN: 1479-389X
Abstract: An efficient algorithm is proposed for determining the quantity of transferred flow and the losses from failures of repairable stochastic networks with converging flows. We show that the computational speed related to determining the variation of the flow through a stochastic flow network can be improved enormously if the topology of the network is exploited directly. The proposed algorithm is based on a new result related to maximising the flow in networks with converging flows. An efficient discrete-event solver for repairable networks with converging flows has also been developed, based on the proposed algorithm. The solver handles repairable networks with multiple sources of production flow, multi-commodity flows, overlapping failures, multiple failure modes, redundant components and redundant branches of components. The solver is capable of tracking the cumulative distribution of the potential losses from failures associated with the whole network and with each component in the network.
Iacopino G, Todinov M, 'Monte Carlo simulation of multiaxial fracture in brittle components containing flaws'
Operation Maintenance and Materials Issues 5 (2) (2008) pp. 1-17
ISSN: 1740-5181
Abstract: An analysis is conducted of the effect of the variability associated with the material microstructure, due to the presence of flaws such as inclusions and pores, on the strength distribution of mechanical components. For this purpose, a computational procedure, based on the coupled use of finite element analysis and Monte Carlo simulation, is proposed to evaluate the failure probability of mechanical components. Finite element analysis is employed to determine the stress field generated by the applied load. The random distribution of flaws in the material microstructure is modelled by a homogeneous Poisson process and its effect on the probability of failure is evaluated by a Monte Carlo simulation. A mixed-mode fracture criterion, based on the coplanar strain-energy release rate, is used to establish whether a flaw is unstable. The proposed model has been applied to determine the probability of failure initiated by flaws for a turbine blade. For various loading configurations, the component strength distribution has been evaluated. The effect of the random distribution of flaws is analysed and discussed. The proposed approach allows the designer to identify the regions in the component characterised by a high probability of initiating fracture.
Todinov M, 'Selecting designs with high resistance to overstress failure initiated by flaws'
Computational Materials Science 42 (2008) pp. 306-315
ISSN: 0927-0256
Abstract: A powerful new technology is proposed for creating reliable and robust designs, characterized by a high resistance to failure. The new technology is based on a new mixed-mode failure criterion and a computationally very efficient simulation technique for calculating the probability of failure of a component with complex shape. The new technology handles design alternatives with complex shape and arbitrary loading. For each design shape or loading alternative, a finite element model is created by using a standard finite element package. Next, a specially designed post-processor reads the output files from the static stress analyses and calculates the probability of failure associated with each design alternative. Finally, the design alternative characterised by the smallest probability of failure is selected. Limitations of existing approaches to the statistics of failure locally initiated by flaws are also discussed. Central to the traditional approaches is the assumption that the number density of the critical flaws is a power function of the applied stress. In this paper, on the basis of counterexamples, we show that for a material with flaws the power-law assumption does not hold in common cases, such as spherical flaws in a homogeneous matrix.
Todinov M, 'Risk-based reliability allocation and topological optimisation based on minimising the total cost'
International Journal of Reliability and Safety 1 (4) (2007) pp. 489-512
ISSN: 1479-389X
Abstract: A new method for optimisation of the topology of engineering systems is proposed, based on reliability allocation by minimising the total cost: the sum of the cost for building the system and the risk of failure. The essence of the proposed method can be summarised in three steps: developing a system topology with the maximum possible reliability; reducing the resultant system to a system with generic components, for each of which several alternatives exist; and a third step that involves reliability allocation minimising the total cost. A heuristic optimisation algorithm and an exact recursive algorithm are also proposed. Central to the proposed methods is an efficient algorithm for determining the probability of system failure. The proposed algorithms are generic and applicable to any engineering system. They are very efficient for topologically complex reliability networks containing a large number of nodes.
Todinov M, 'An efficient algorithm for determining the risk of structural failure locally initiated by faults'
Probabilistic Engineering Mechanics 22 (1) (2006) pp. 12-21
ISSN: 0266-8920
Abstract: An efficient algorithm has been proposed for determining the probability of failure of structures containing flaws. The algorithm is based on a powerful generic equation, a central parameter in which is the conditional individual probability of initiating failure by a single flaw. The equation avoids conservative predictions related to the probability of locally initiated failure and is a powerful alternative to existing approaches. It is based on the concept of 'conditional individual probability of initiating failure' characterising a single fault, which permits us to relate in a simple fashion the conditional individual probability of failure characterising a single fault to the probability of failure characterising a population of faults. A method for estimating the conditional individual probability has been proposed, based on combining a Monte Carlo simulation and a failure criterion. The generic equation has been modified to determine the probability of fatigue failure initiated by flaws. Other important applications discussed in the paper include: comparing different types of loading and selecting the type of loading associated with the smallest probability of overstress failure; optimizing designs by minimizing their vulnerability to overstress failure initiated by flaws; determining failure triggered by random faults in a large system; and determining the probability of overloading of a supply system from random demands.
Todinov MT, 'Equations and a fast algorithm for determining the probability of failure initiated by flaws'
International Journal of Solids and Structures 43 (17) (2006) pp. 5182-5195
ISSN: 0020-7683
Abstract: Powerful equations and an efficient algorithm are proposed for determining the probability of failure of loaded components with complex shape, containing multiple types of flaws. The equations are based on the concept of 'conditional individual probability of initiating failure' characterising a single flaw, given that it is in the stressed component. The proposed models relate in a simple fashion the conditional individual probability of failure characterising a single flaw (estimated by a Monte Carlo simulation) to the probability of failure characterising a population of flaws. The derived equations constitute the core of a new statistical theory of failure initiated by flaws in the material, with important applications in optimising designs by decreasing their vulnerability to failure initiated by flaws during overloading or fatigue cycling. Methods have also been developed for specifying the maximum acceptable level of the flaw number density and the maximum size of the stressed volume which guarantee that the probability of failure initiated by flaws remains below a maximum acceptable level. An important parameter, referred to as the 'detrimental factor', is also introduced. Components with identical geometry and material, and with the same detrimental factors, are characterised by the same probability of failure. It is argued that eliminating flaws from the material should concentrate on types of flaws characterised by large detrimental factors.
The equations proposed avoid conservative predictions resulting from equating the probability of failure initiated by a flaw in a stressed region with the probability of existence of the flaw in that region.
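The relationship between the individual and population-level probabilities can be illustrated under the standard assumption of a Poisson-distributed flaw count (a sketch with invented numbers, not the paper's algorithm): if the expected number of flaws in the stressed volume is λV and each flaw independently initiates failure with conditional individual probability p, thinning the Poisson process gives P_f = 1 - exp(-λVp).

```python
# Monte Carlo check of P_f = 1 - exp(-lam_V * p) for Poisson-distributed
# flaws, each initiating failure independently with conditional probability p.
import math
import random

random.seed(42)
lam_V, p = 2.0, 0.3    # invented: expected flaw count and individual probability

def poisson(mean):
    """Knuth's multiplication method for sampling a Poisson count."""
    L, k, acc = math.exp(-mean), 0, 1.0
    while True:
        acc *= random.random()
        if acc <= L:
            return k
        k += 1

def component_fails():
    # Failure occurs if any of the flaws present initiates it.
    return any(random.random() < p for _ in range(poisson(lam_V)))

n = 200_000
estimate = sum(component_fails() for _ in range(n)) / n
exact = 1 - math.exp(-lam_V * p)
print(abs(estimate - exact) < 0.01)  # True: simulation matches the closed form
```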

Todinov MT, 'Reliability analysis of complex systems based on the losses from failures'
International Journal of Reliability, Quality and Safety Engineering 13 (2) (2006) pp. 127-148
ISSN: 0218-5393 eISSN: 1793-6446
Abstract: The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. In this paper it is demonstrated that increasing the reliability of the system does not always mean decreasing the losses from failures. An inappropriate increase of the reliability of the system may lead to a simultaneous increase of the losses from failure. In other words, a system reliability improvement which is disconnected from the losses from failure does not necessarily reduce the losses from failures. An efficient discrete-event simulation model and algorithm have been proposed for reliability analysis based on the losses from failure for production systems with complex topology. The model links reliability with losses from failures. A new algorithm has also been developed for system reliability analysis related to production systems based on multiple production units, where the absence of critical failure means that at least m out of n production units are working.
The parametric study conducted on the basis of the developed models revealed that a dual-control production system is characterized by enhanced production availability, which increases with increasing the number of production units in the system. A production unit from a dual-control production system including multiple production units is characterized by a larger availability compared to a production unit from a dual-control production system including a single production unit.
The proposed approach has been demonstrated by comparing the losses from failures and the net present values of two competing design topologies: one based on single-channel control and the other based on dual-channel control. The proposed models have been successfully applied and tested for reliability value analysis of production systems in deepwater oil and gas production.
It is also argued that the reliability allocation in a production system should be done to maximize the net profit/value obtained from the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the net profit, by minimizing the sum of the capital costs and the expected losses from failures, has been proposed. Reliability allocation which maximizes the net profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components which minimize the sum of the capital costs and the expected losses from failures.

Todinov MT, 'Reliability value analysis of complex production systems based on the losses from failures'
International Journal of Quality and Reliability Management 23 (6) (2006) pp. 696-718
ISSN: 0265-671X
Abstract:
Purpose
– The aim of this paper is to propose efficient models and algorithms for reliability value analysis of complex repairable systems, linking reliability and losses from failures.
Design/methodology/approach
– The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. In this paper it is demonstrated that a system with larger reliability does not necessarily mean a system with smaller losses from failures. In other words, a system reliability improvement which is disconnected from the losses from failures does not necessarily reduce the losses from failures. An efficient discrete-event simulation model and algorithm are proposed for tracking the losses from failures for systems with complex topology. A new algorithm is also proposed for system reliability analysis related to production systems based on multiple production units, where the absence of a critical failure means that at least m out of n production units are working.
Findings
– A model for determining the distribution of the net present value (NPV) characterising the production systems is developed. The model has significant advantages compared to models based on the expected value of the losses from failures. The model developed in this study reveals the variation of the NPV due to variation of the number of critical failures and their times of occurrence during the entire life-cycle of the systems.
Practical implications
– The proposed models have been successfully applied and tested for reliability value analysis of production systems in deepwater oil and gas production.
Originality/value
– The proposed approach has been demonstrated by comparing the losses from failures and the NPVs of two competing design topologies: one based on single-channel control and the other based on dual-channel control.
Books

Todinov M, Interpretation of Algebraic Inequalities: Practical Engineering Optimisation and Generating New Knowledge, CRC Press (2021)
ISBN: 9781032059174 eISBN: 9781003199830
Open Access on RADAR
Abstract: This book introduces a new powerful method based on algebraic inequalities for optimising engineering systems and processes, with applications in mechanical engineering, electrical engineering, reliability engineering, risk management and operational research.
The book shows that the application potential of algebraic inequalities in engineering and technology is far-reaching and certainly not limited to specifying design constraints. Algebraic inequalities are capable of handling deep unstructured uncertainty associated with design variables and control parameters. With the method presented in this book, powerful new knowledge about systems and processes can be generated through meaningful interpretation of algebraic inequalities. By covering various types of algebraic inequalities suitable for interpretation, the book demonstrates how the generated knowledge can be applied to enhancing system and process performance. Depending on the specific interpretation, knowledge applicable to systems and processes from diverse application domains can be generated from the same algebraic inequality. Furthermore, an important class of algebraic inequalities is introduced that can be used for optimising systems and processes in any area of science and technology, provided that the variables and separate terms of the inequalities stand for additive quantities.
With the presented method and various examples, the book will be of interest to engineers, students and researchers in the fields of optimisation, mechanical and electrical engineering, reliability engineering, risk management and operational research.

Todinov M, Risk and Uncertainty Reduction by Using Algebraic Inequalities, CRC Press (2020)
ISBN: 9780367898007 eISBN: 9781003032502
Abstract: This book covers the application of algebraic inequalities for reliability improvement and for uncertainty and risk reduction. It equips readers with powerful domain-independent methods for reducing risk based on algebraic inequalities, and demonstrates the significant benefits derived from their application for risk and uncertainty reduction.
Algebraic inequalities:
• Provide a powerful reliability improvement, risk and uncertainty reduction method that transcends engineering and can be applied in various domains of human activity
• Present an effective tool for dealing with deep uncertainty related to key reliability-critical parameters of systems and processes
• Permit meaningful interpretations which link abstract inequalities with the real world
• Offer a tool for determining tight bounds for the variation of risk-critical parameters and making the design comply with these bounds to avoid failure
• Allow optimising designs and processes by minimising the deviation of critical output parameters from their specified values and maximising their performance
This book is primarily for engineering professionals and academic researchers in virtually all existing engineering disciplines.
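As a simple illustration of the kind of interpretation the book advocates (an example of mine, not taken from the book), the arithmetic mean-harmonic mean inequality yields a design bound that holds under deep uncertainty in the parameter values:

```latex
\frac{a+b}{2} \;\ge\; \frac{2}{\frac{1}{a}+\frac{1}{b}}, \qquad a, b > 0
```

Interpreting a and b as two uncertain electrical resistances, the right-hand side is twice the equivalent resistance of their parallel arrangement, so the inequality guarantees that the parallel resistance never exceeds a quarter of the series resistance a + b, whatever the actual values turn out to be.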

Todinov M, Methods for Reliability Improvement and Risk Reduction, Wiley (2019)
ISBN: 9781119477587 eISBN: 9781119477624
Open Access on RADAR
Abstract: Reliability is one of the most important attributes of the products and processes of any company or organization. This important work provides a powerful framework of domain-independent reliability improvement and risk reduction methods which can greatly lower risk in any area of human activity. It reviews existing methods for risk reduction that can be classified as domain-independent and introduces the following new domain-independent reliability improvement and risk reduction methods: separation; stochastic separation; introducing deliberate weaknesses; segmentation; self-reinforcement; inversion; reducing the rate of accumulation of damage; permutation; substitution; limiting the space and time exposure; and comparative reliability models. The domain-independent methods for reliability improvement and risk reduction do not depend on the availability of past failure data, domain-specific expertise or knowledge of the failure mechanisms underlying the failure modes. Through numerous examples and case studies, this invaluable guide shows that many of the new domain-independent methods improve reliability at no extra cost or at a low cost. Using the proven methods in this book, any company and organisation can greatly enhance the reliability of its products and operations.

Todinov M, Reliability and Risk Models: Setting Reliability Requirements, Wiley (2015)
ISBN: 9781118873328 
Todinov MT, Flow networks, Elsevier (2013)
ISBN: 9780123983961 eISBN: 9780123984067
Abstract: This book develops the theory, algorithms and applications related to repairable flow networks and networks with disturbed flows.
Todinov MT, Risk-based reliability analysis and generic principles for risk reduction, Elsevier (2007)
ISBN: 9780080447285 eISBN: 9780080467559
Book chapters

Todinov MT, 'Virtual accelerated life testing of complex systems' in Bouvry P, Gonzalez-Velez H, Kolodziej J (ed.), Intelligent decision systems in large-scale distributed environment, Springer (2011)
ISBN: 9783642212703 eISBN: 9783642212710
Abstract: A method has been developed for virtual accelerated testing of complex systems. Part of the method is an algorithm and a software tool for extrapolating the life of a complex system from the accelerated lives of its components. This makes the expensive task of building test rigs for life testing of complex engineering systems unnecessary and reduces drastically the amount of time and resources needed for accelerated life testing of complex systems. The impact of the acceleration stresses on the reliability of a complex system can also be determined by using the developed method. The proposed method is based on Monte Carlo simulation and is particularly suitable for topologically complex systems containing a large number of components. Part of the method is also an algorithm for finding paths in complex networks. Compared to existing path-finding algorithms, the proposed algorithm determines the existence of paths to multiple end nodes and not only to a single end node. This makes the proposed algorithm ideal for revealing the reliability of engineering systems where more than a single operating component is controlled.
Todinov M, 'A new criterion for design of brittle components and for assessing their vulnerability to brittle fracture' in Guedes Soares, C (ed.), Advances in Safety, Reliability and Risk Management, Springer Verlag (Germany) (2011)
ISSN: 0376-9429 ISBN: 9780415683791 eISBN: 9780203135105
Conference papers

Todinov MT, 'On two optimisation problems related to unsatisfied demand on a time interval'
(2016) pp. 1505-1515
ISBN: 9788460860822
Abstract: This paper focuses on two important optimisation problems: (i) the maximum size of the system that can be serviced by a given number of sources so that the unsatisfied demand does not exceed a tolerable level and (ii) the minimum number of sources needed to service random demands so that the unsatisfied demand does not exceed a tolerable level. To solve these problems, a computational framework for determining the expected fraction of unsatisfied demand on a time interval has been created and closed-form solutions for the expected fraction of unsatisfied demand have been derived.
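A Monte Carlo baseline for the quantity studied in the paper, the expected fraction of unsatisfied demand on a time interval, can be sketched as follows. The demand model here is an assumption made for illustration (Poisson arrivals, exponential service, a demand finding all sources busy is lost without queueing); the paper's computational framework and closed-form solutions are not reproduced.

```python
import random

def unsatisfied_fraction(n_sources, rate, mean_duration, horizon,
                         n_histories=2000, seed=0):
    """Monte Carlo estimate of the expected fraction of unsatisfied demand.

    Assumed model (illustration only): demands arrive as a Poisson process
    with the given rate on [0, horizon]; each demand occupies one source for
    an exponentially distributed duration; a demand that finds all sources
    busy is counted as unsatisfied (no queueing).
    """
    rng = random.Random(seed)
    lost = total = 0
    for _ in range(n_histories):
        free_at = [0.0] * n_sources        # time at which each source becomes free
        t = rng.expovariate(rate)          # first arrival
        while t < horizon:
            total += 1
            i = min(range(n_sources), key=free_at.__getitem__)
            if free_at[i] <= t:            # a source is available
                free_at[i] = t + rng.expovariate(1.0 / mean_duration)
            else:                          # all sources busy: demand lost
                lost += 1
            t += rng.expovariate(rate)     # next arrival
    return lost / total if total else 0.0
```

Under heavy load (one source, offered load of 10 erlangs) the estimated fraction is large; with ten lightly loaded sources it is close to zero, matching the intuition behind the paper's sizing problems.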
Todinov M, 'Maximising the Amount of Transmitted Flow Through Repairable Flow Networks'
(2012) pp. 163-168
ISBN: 9781424466146
Abstract: A fundamental theorem related to maximizing the flow in a repairable flow network with arbitrary topology has been stated and proved: 'The flow transmitted through a repairable network with arbitrary topology and a single source and sink can be maximized by (i) determining all possible flow paths from the start node (the source) to the end node (the sink); (ii) arranging the flow paths in ascending order according to their specific flow path resistance; and (iii) setting up the flow in the network by a sequential saturation of the flow paths, starting with the one with the smallest specific resistance, until the entire flow network is saturated.' Based on the proved theorem, a new method for maximizing the flow in repairable flow networks has been proposed. The method is based on the new concept of 'specific resistance of a flow path'. Finally, a new stochastic optimization method has been proposed for determining the network topology combining maximum flow and minimum cost.
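The sequential-saturation scheme from the theorem can be sketched on a tiny network. The paper's 'specific flow path resistance' is not defined in this abstract, so the sketch takes an ordering function as a parameter and uses path length as a hypothetical stand-in; with an arbitrary ordering, greedy saturation is not guaranteed to reach the maximum flow, which is exactly why the choice of resistance measure matters in the paper.

```python
def all_simple_paths(cap, u, t, path=None):
    """Enumerate all simple u-to-t paths in a dict-of-dicts capacitated network."""
    path = (path or []) + [u]
    if u == t:
        yield path
        return
    for v in cap.get(u, {}):
        if v not in path:
            yield from all_simple_paths(cap, v, t, path)

def saturate_paths(cap, s, t, resistance):
    """Greedy flow assignment: saturate s-t paths in ascending order of the
    given resistance measure until no residual path capacity remains."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    total = 0
    for p in sorted(all_simple_paths(cap, s, t), key=resistance):
        bottleneck = min(residual[u][v] for u, v in zip(p, p[1:]))
        for u, v in zip(p, p[1:]):
            residual[u][v] -= bottleneck
        total += bottleneck
    return total

if __name__ == "__main__":
    # Hypothetical 4-node network; 'resistance' approximated by path length.
    cap = {"s": {"a": 10, "b": 5}, "a": {"t": 8, "b": 3}, "b": {"t": 7}, "t": {}}
    print("transmitted flow:", saturate_paths(cap, "s", "t", len))
```

On this example the greedy order happens to reach the true maximum flow of 15; exhaustive path enumeration is exponential in general, so this is a conceptual illustration rather than a scalable algorithm.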
Todinov M, 'Fast augmentation algorithms for maximizing the output flow in repairable flow networks after a component failure'
IEEE Transactions on Reliability (2011) pp. 505-512
ISSN: 0018-9529 ISBN: 9781457703836 eISBN: 9780769543888
Abstract: The paper discusses new, very efficient augmentation algorithms and theorems related to maximising the flow in single-commodity and multi-commodity networks. For the first time, efficient algorithms with linear average running time O(m) in the size m of the network are proposed for restoring the maximum flow in single-commodity and multi-commodity networks after a component failure. The proposed algorithms are particularly suitable for discrete-event simulators of repairable production networks whose analysis requires generating thousands of simulation histories, each including hundreds of component failures. In this respect, a new, very efficient augmentation method with linear running time has been proposed for restoring the maximum output flow of oil in oil and gas production networks after a component failure. Another important application of the proposed algorithms is in networks controlled in real time, where, upon failure, the network flows need to be redirected quickly in order to maintain a maximum output flow.
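The baseline that the paper's linear-time restoration algorithms improve on is full recomputation of the maximum flow after each failure. The sketch below shows that baseline: a standard Edmonds-Karp max-flow routine (BFS augmenting paths), rerun on the network with the failed edge removed. The example network is hypothetical, and nothing here reproduces the paper's O(m) method.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacitated network."""
    # Build residual capacities, including zero-capacity reverse edges.
    r = {u: {} for u in cap}
    for u, vs in cap.items():
        for v, c in vs.items():
            r[u][v] = r[u].get(v, 0) + c
            r.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in r[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Collect the path edges and augment by the bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(r[u][v] for u, v in path)
        for u, v in path:
            r[u][v] -= b
            r[v][u] += b
        flow += b

def flow_after_failure(cap, s, t, failed_edge):
    """Output flow after a component (edge) failure: full recomputation."""
    fu, fv = failed_edge
    reduced = {u: {v: c for v, c in vs.items() if (u, v) != (fu, fv)}
               for u, vs in cap.items()}
    return max_flow(reduced, s, t)
```

In a discrete-event simulation with hundreds of failures per history, calling `flow_after_failure` each time repeats the whole max-flow computation; replacing that with a linear-time flow restoration is the efficiency gain the paper claims.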
Todinov M, 'A Discrete-event Solver for Repairable Flow Networks with Complex Topology'
(2010) pp. 232-237
ISBN: 9781424478378 eISBN: 9780769541587
Abstract: The paper presents a discrete-event simulator of repairable flow networks with complex topology. The solver is based on an efficient algorithm for maximizing the flow in repairable flow networks with complex topology. The discrete-event solver maximizes the flow through the repairable network upon each component failure and return from repair. This ensures a larger output flow compared to a flow maximization conducted on the static flow network. Because of the flow maximization upon failure and return from repair, the simulator also naturally tracks the variation of the output flow caused by multiple overlapping failures. The discrete-event solver determines the basic performance characteristic of repairable flow networks: the expected output flow delivered during a specified time interval in the presence of component failures.
Further details
Other experience
 CRANFIELD UNIVERSITY (2005-2006), HEAD OF RISK AND RELIABILITY
Leading the research, consultancy and teaching in the area of Reliability, Risk and Uncertainty modelling in the School of Applied Sciences, Cranfield University.
 CRANFIELD UNIVERSITY (2002-2004), BP LECTURER IN RELIABILITY ENGINEERING AND RISK MANAGEMENT
Research, consultancy, teaching and supervision in the area of Reliability, Risk and Uncertainty quantification in the School of Applied Sciences.
 THE UNIVERSITY OF BIRMINGHAM (1994-2001), RESEARCH SCIENTIST
Managed and conducted research in the area of uncertainty modelling related to fracture and fatigue; modelling the uncertainty in the location of the ductile-to-brittle region of nuclear pressure vessel steels; probability of fracture initiated by flaws; improving the reliability of mechanical components through mathematical modelling.
 TECHNICAL UNIVERSITY OF SOFIA (1989-1994), BULGARIA, RESEARCH SCIENTIST
Managed a number of research projects in the area of modelling and simulation of heat and mass transfer and modelling phase transformation kinetics. Successfully accomplished a challenging project related to optimal cutting of sheet and bar stock in mass production. Most of the projects were funded by the Bulgarian Ministry of Science and Education and by Bulgarian industry.