Jinming Xu – Selected Projects

Distributed Optimization for Large-Scale Data Processing

This project aims to develop distributed algorithms for large-scale statistical signal processing and machine learning problems. Most existing distributed algorithms require a perfect synchronization mechanism and a decaying stepsize to achieve the exact optimum, which prevents asynchronous implementation and results in slower convergence rates. In addition, convergence analyses often assume bounded (sub)gradients, which is quite restrictive for unconstrained problems. To overcome these issues, we developed two novel distributed algorithms, both of which involve an extra consensus step in each iteration and can thus be regarded as augmented versions of existing counterparts. Both algorithms, namely Distributed Forward-Backward Bregman Splitting (D-FBBS) and the Augmented Distributed Gradient Method (AugDGM), are able to converge to the exact optimum even with a constant stepsize over stochastic (even asynchronous) networks, without the bounded-(sub)gradient assumption, and achieve the best known convergence rates in the existing literature.

Related Papers:
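The "extra consensus step" can be illustrated by the gradient-tracking mechanism at the heart of AugDGM: alongside its solution estimate, each agent maintains a tracker of the network-average gradient, and both quantities are mixed over the network in every iteration. The sketch below applies this idea to a toy problem; the 5-agent ring network, the scalar quadratic costs, and all constants are illustrative assumptions, not taken from the papers.

```python
import numpy as np

# Toy sketch of gradient tracking (the AugDGM-style extra consensus step):
# agent i holds an estimate x[i] and a tracker y[i] of the average gradient.
# Costs f_i(x) = 0.5*(x - b[i])^2, so the global optimum is mean(b).
n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x_star = b.mean()

# Doubly stochastic mixing matrix for a ring network (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

grad = lambda x: x - b          # per-agent gradients, evaluated elementwise

x = np.zeros(n)
y = grad(x)                     # tracker initialized at the local gradients
alpha = 0.2                     # constant stepsize

for _ in range(300):
    x_new = W @ (x - alpha * y)            # mix the local descent steps
    y = W @ y + grad(x_new) - grad(x)      # extra consensus step on gradients
    x = x_new

print(np.max(np.abs(x - x_star)))          # every agent near the exact optimum
```

Note that the constant stepsize still yields exact convergence here: the tracker y absorbs the disagreement between local and average gradients, which is precisely what plain distributed gradient descent cannot do.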
Complexity Bounds for Distributed Optimization Algorithms

This project aims to investigate the fundamental computation and communication complexity bounds of distributed optimization algorithms. In this project, we provide a lower bound for a class of first-order distributed optimization algorithms for smooth functions that allow multiple communication and computation steps at each iteration, and a corresponding "optimal" algorithm, termed OPTRA, that matches this lower bound. For the case where the cost functions are also strongly convex, we provide a theoretical guarantee on the number of inner communication rounds needed for the distributed algorithm to match the performance of its centralized counterpart, leading to a linear speedup in terms of overall iterations.

Related Papers:
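The trade-off between communication and computation can be seen in a minimal experiment: run a plain distributed gradient method, but perform K consensus rounds (mixing with W^K) after each gradient step. This is only an illustration of the multi-round-communication idea, not the OPTRA algorithm; the ring network, quadratic costs, and the choice of K are assumptions.

```python
import numpy as np

def run(K, iters=200, alpha=0.1):
    """Distributed gradient descent with K consensus rounds per gradient step.
    Returns the worst-case agent distance to the exact optimum."""
    n = 5
    b = np.arange(1.0, n + 1)              # f_i(x) = 0.5*(x - b[i])^2
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    WK = np.linalg.matrix_power(W, K)      # K communication rounds fused together
    x = np.zeros(n)
    for _ in range(iters):
        x = WK @ (x - alpha * (x - b))     # one gradient step, then K mixings
    return np.max(np.abs(x - b.mean()))

# With a constant stepsize, plain DGD has a steady-state bias; extra inner
# communication rounds shrink it toward the centralized solution.
print(run(1), run(10))
```

More inner rounds drive W^K toward the exact averaging matrix, so the scheme behaves increasingly like centralized gradient descent, which is the intuition behind the inner-loop guarantee above.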
Distributed Optimization for Networked Dynamical Systems

This project aims to develop distributed control schemes for large-scale networked dynamical systems. In this project, we proposed a novel distributed simultaneous perturbation approach (D-SPA) to achieve coordinated control of systems based on simultaneous perturbation techniques as well as consensus strategies. The proposed method is model-free and requires little knowledge of the coupling structure of the problem to be optimized. Using singular perturbation and averaging theory, we show that the proposed scheme converges to a neighborhood of the Pareto optimum of the problem so long as the energy of the perturbation signals is sufficiently small. To illustrate its effectiveness, we applied the proposed approach to a simulated offshore wind farm for energy maximization and made a comprehensive comparison with the existing state-of-the-art technique. We also extended this approach to account for (unknown) constraints imposed on the system, yielding a distributed primal-dual simultaneous perturbation approach.

Related Papers:
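The model-free ingredient here is the simultaneous-perturbation gradient estimate (as in SPSA): the unknown cost is probed at two randomly perturbed points, yielding a descent direction whose expectation matches the true gradient, with no model of the cost required. The sketch below shows this estimator on a toy cost; the quadratic cost, gains, and minimizer are illustrative assumptions and the distributed/consensus layer of D-SPA is omitted.

```python
import numpy as np

# Simultaneous-perturbation (SPSA-style) gradient estimate: two cost
# evaluations per step, regardless of the problem dimension.
rng = np.random.default_rng(0)
u_opt = np.array([1.0, -2.0, 0.5])         # assumed (unknown to the algorithm)
J = lambda u: float(np.sum((u - u_opt) ** 2))   # black-box cost, queried only

u = np.zeros(3)
a, c = 0.1, 0.05                           # stepsize and perturbation magnitude

for _ in range(2000):
    delta = rng.choice([-1.0, 1.0], size=3)          # random +/-1 perturbation
    g = (J(u + c * delta) - J(u - c * delta)) / (2 * c) * (1 / delta)
    u -= a * g                                       # stochastic descent step

print(u)                                   # approaches the minimizer u_opt
```

Smaller perturbation energy (c) tightens the neighborhood the iterates settle into, mirroring the convergence condition stated above.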
Distributed Optimization for Resource Allocation

This project aims to develop efficient distributed algorithms for general composite resource allocation problems over networks, which arise in many application areas such as economic dispatch, demand response, and network utility maximization. Conventionally, one has to resort to centralized or parallel approaches to solve these problems, yet distributed solutions are preferable due to their robustness to component failures and their scalability with the size of the network. We therefore attack the resource allocation problem through its dual, which, together with certain splitting techniques, leads to a novel distributed algorithm that solves the primal problem not only in a distributed manner but also with a fast convergence rate.

Related Papers:
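The dual route can be sketched on the simplest instance: minimize a sum of local costs subject to a total-budget constraint. Ascending on a single dual variable (a "price") decouples the problem so each agent sets its allocation from its own cost alone. The quadratic costs, budget, and stepsize below are illustrative assumptions; the actual algorithm additionally distributes the price update itself via splitting techniques.

```python
import numpy as np

# Dual decomposition for: minimize sum_i 0.5*a[i]*(x[i] - d[i])^2
#                         subject to  sum_i x[i] = B
a = np.array([1.0, 2.0, 4.0])              # local cost curvatures (assumed)
d = np.array([3.0, 1.0, 2.0])              # local preferred allocations (assumed)
B = 4.0                                    # total resource budget (assumed)

lam = 0.0                                  # dual variable: the resource price
for _ in range(500):
    x = d - lam / a                        # each agent: argmin_x f_i(x) + lam*x
    lam += 0.5 * (x.sum() - B)             # dual ascent on the coupling constraint

print(x, x.sum())                          # allocations now meet the budget
```

At the dual optimum the price equalizes marginal costs across agents, which is why the same mechanism appears in economic dispatch and network utility maximization.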
Please feel free to contact me should you be interested in my research.