Finding the best solution to complex social issues like health is a constant challenge for government at all levels. In this blog, I will be sharing insights into some exciting SIPHER work seeking new scalable benchmark problems that can represent some of these real-world challenges and drive us closer to developing a practical tool to help decision makers and researchers.
Local authorities are routinely faced with complex social problems where they would ideally like to achieve several different objectives. Finding the optimum solution can only be achieved by involving the different teams (“disciplines” in the language of optimization) with their own specialist knowledge, and ensuring everyone works together.
The local authority has its own top-level objectives to fulfil, e.g., staying within budget and maintaining services (each of which could be broken down into smaller goals). Meanwhile, the disciplines are each focused on targeted objectives within their specialised area. As an example, education will involve various disciplines including institutions – nurseries, schools, colleges and adult education centres – as well as charities and businesses. They all have an interest in education, but their interests may or may not align with the objectives of the local authority.
Different disciplines are also likely to interact with each other; if we implement change in one discipline, we will see knock-on effects in another. Build more housing, for example, and the transport discipline will need to adjust their design variables in response, because there is now a need for more buses or different routes to service the public transport needs of the people in the new housing.
The Solution – MO-MDO
The types of challenges faced by local authorities are not unique. Large engineering organisations and industries are also faced with these types of complex issues. And they came up with an answer that works in practice – “multidisciplinary optimization” (MDO).
MDO means we have more than one discipline, or disciplinary team, in the optimization problem, and we want them to work together in some way to deliver an objective. Think of a complex engineered product such as a car. Multiple disciplinary teams are involved in designing the car with many different focuses – safety, ride handling, noise and vibration, performance, and efficiency.
These disciplinary teams can’t work in a completely independent fashion; they need to share information with each other. In an example system with two disciplines, A and B, the outcome of discipline A is linked to the choices made by discipline B. In our car example, the wheel tyre size affects the ride handling, performance, and vibration. This means that the disciplinary teams must interact with each other to find a solution that is, overall, optimal for some performance metric such as cost or weight (in the case of the car), as well as guaranteeing compatibility between disciplines. Here, compatibility means the disciplines need to reach agreement about shared choices and about how each discipline’s actions will affect the other disciplines.
How do we do this? Our MDO formulation contains three types of variables: global variables, which can be used by any discipline; local variables, which can only be used by one discipline; and linking variables, which allow the disciplines to ‘talk’ to each other.
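As a concrete illustration, the sketch below shows two toy disciplines whose outputs depend on a global variable, their own local variables, and each other's linking variables. A simple fixed-point loop iterates until the linking values are compatible, in the sense described above: each discipline's output agrees with what the other assumes. The disciplines and formulas here are entirely hypothetical, not drawn from any SIPHER model.

```python
def discipline_a(z, x_a, y_b):
    """Discipline A's analysis: depends on the global variable z, its
    local variable x_a, and the linking variable y_b received from B."""
    return z + x_a**2 + 0.5 * y_b

def discipline_b(z, x_b, y_a):
    """Discipline B's analysis: depends on z, its local variable x_b,
    and the linking variable y_a received from A."""
    return z - x_b + 0.5 * y_a

def solve_coupling(z, x_a, x_b, tol=1e-9):
    """Iterate until the linking variables are compatible, i.e. each
    discipline's output matches the value the other discipline used."""
    y_a, y_b = 0.0, 0.0
    for _ in range(100):
        y_a_new = discipline_a(z, x_a, y_b)
        y_b_new = discipline_b(z, x_b, y_a_new)
        if abs(y_a_new - y_a) < tol and abs(y_b_new - y_b) < tol:
            return y_a_new, y_b_new
        y_a, y_b = y_a_new, y_b_new
    return y_a, y_b

y_a, y_b = solve_coupling(z=1.0, x_a=0.5, x_b=0.2)
```

An optimizer sitting above this loop would then vary the global and local variables to improve the objectives, re-running the coupling each time.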
When we combine this with “multi-objective” – delivering more than one objective or goal in the most optimised solution – we get multi-objective, multidisciplinary optimization (MO-MDO).
The aim is to end up with a problem that uses specialised disciplines to reach some optimisation goal, but also permits multiple objectives. More importantly, in the programme of work we are currently undertaking, we are aiming to formalise a SIPHER problem into this format, which we can then solve using some of the methods presented by the MO-MDO scientific community.
If we want to apply MDO methods to other complex systems, including the social systems SIPHER is focused on, we need to create some benchmark problems.
A benchmark is an easily understood test problem that is solved by an optimization algorithm or architecture, usually where we know the ‘correct answer’ in advance. The accuracy of the answer obtained by the algorithm, the speed at which the problem is solved, and the resources used by the computer are just three metrics which are useful outcomes of benchmarking.
We need reliable benchmarking to make sure the methods we apply to our SIPHER problems are both accurate and efficient.
Despite the fact that much of the early MDO research originated in large industrial applications such as aerospace engineering, when we started this project there were no MO-MDO ‘benchmark’ problems available that were suitable for SIPHER to adopt. We require benchmarks with the large numbers of constituent components (such as local authorities), design variables, constraints, objectives and so on that we would see in complex real-world SIPHER problems.
In 2000, a multi-objective benchmarking test set called ZDT was introduced. This test set is popular and still used today in multi-objective optimization research. We extended the ZDT benchmark problems so they could be applied to MDO problems. Scalable benchmarking for multi-objective MDO is a novel topic, so we chose this problem set to build on because of the simplicity, scalable nature, and separability of the problems.
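For illustration, here is the first problem in that set (ZDT1): two conflicting objectives over any number of design variables in [0, 1], with a known Pareto-optimal front (all of x[1:] equal to zero, so g = 1 and f2 = 1 − √f1). That known front is exactly the ‘correct answer in advance’ that benchmarking needs.

```python
import math

def zdt1(x):
    """ZDT1: a bi-objective test problem, scalable in the number of
    design variables. x is a sequence of values in [0, 1]."""
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1 - math.sqrt(f1 / g))
    return f1, f2

# Pareto-optimal points have x[1:] all zero, so g == 1 and f2 == 1 - sqrt(f1).
f1, f2 = zdt1([0.25] + [0.0] * 29)  # 30 variables, a point on the optimal front
```

The separability visible here, where g depends only on x[1:] while f1 depends only on x[0], is part of what made the set a natural starting point for splitting across disciplines.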
We were keen to share our approach, learn from others, and encourage researchers from around the world to think about these problems and their solutions, which could be of benefit to SIPHER stakeholders. We presented our approach, ‘Toward scalable benchmark problems for multi-objective multidisciplinary optimization’, in Singapore in December 2022 at the IEEE Symposium Series on Computational Intelligence.
Our second conference paper extended the framework to more complex discipline linkages. ‘A scalable test suite for bi-objective multidisciplinary optimization’ was presented in Leiden, the Netherlands, in March 2023 as part of the Evolutionary Multi-Criterion Optimization conference.
However, the structure of these problems only involved one optimization problem at the system level, and the disciplines only ran analyses or models. In a typical SIPHER problem, we would expect each discipline to have its own objectives, which might conflict with the system-level objective.
We needed to extend our work to allow the subsystems to optimise their own objectives and respect their own constraints (restrictions such as those on design variables – a SIPHER example might be that funding for certain disciplines is limited).
We addressed this in our third conference paper, ‘A distributed multi-disciplinary design optimization benchmark test suite with constraints and multiple conflicting objectives’, presented in Lisbon this July. Here we extended our programme of work to distributed optimization problems: the disciplines now have objectives to minimise as well as their own analyses to complete, and this scalable test problem also contains constraints at the disciplinary level.
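As a toy illustration only (not the formulation from the paper), the sketch below shows the flavour of such a distributed problem: each discipline minimises its own objective subject to a funding-style constraint, while the system level has a conflicting goal of keeping total spend low. All names, formulas and numbers here are hypothetical.

```python
def best_local_choice(budget, candidates):
    """A discipline's local optimisation: pick the candidate that
    minimises its own objective (here, distance from a preferred
    spend of 1.0) while respecting its funding constraint."""
    feasible = [x for x in candidates if x <= budget]
    return min(feasible, key=lambda x: (x - 1.0) ** 2)

candidates = [i / 10 for i in range(21)]  # candidate spends 0.0 .. 2.0
budgets = [0.5, 2.0]                      # per-discipline funding limits
choices = [best_local_choice(b, candidates) for b in budgets]

# The system-level objective conflicts with the disciplines': the system
# prefers low total spend, while each discipline prefers spend near 1.0.
system_cost = sum(choices)
```

Because the two levels pull in different directions, there is no single best answer; the solution is a trade-off front, which is what the multi-objective machinery is there to characterise.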
The engagement with these publications at the three conferences was good. There were questions around our use of the ZDT test set due to its lack of complexity, when compared with other, more modern test sets. However, there was also interest in the possible practical applications of MO-MDO, within complex social problems.
While these are currently just benchmark problems, SIPHER is one step closer to finding a practical support tool for those seeking to address complex population health and health inequality issues.
To move this research forward we are engaging across SIPHER and our policy partners. Our Causal System Dynamics Modelling team – workstrand 4 – will be engaging with the Greater Manchester Combined Authority research team to help define the next steps of the project: how to use existing models to build a practical SIPHER optimisation problem and work out which indicators are most suitable.
We welcome input from anyone interested in helping advance this project on MO-MDO in a SIPHER setting, especially from those who can assist with pre-existing models, data and research.
- A distributed multi-disciplinary design optimization benchmark test suite with constraints and multiple conflicting objectives, GECCO ’23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation, Lisbon, Portugal, July 2023. doi.org/10.1145/3583133.3596414
- A scalable test suite for bi-objective multidisciplinary optimization, Evolutionary Multi-Criterion Optimization Conference (EMO), Leiden, The Netherlands, March 2023. doi.org/10.1007/978-3-031-27250-9_23
- Toward scalable benchmark problems for multi-objective multidisciplinary optimization, 2022 IEEE Symposium Series on Computational Intelligence (SSCI), Singapore, December 2022, pp. 133–140.
- Comparison of Multiobjective Evolutionary Algorithms: Empirical Results (ZDT), Evolutionary Computation, Volume 8, Issue 2, pp. 173–195. doi.org/10.1162/106365600568202
The views and opinions expressed in this blog are those of the author/authors.