math.OC (76 posts)
arXiv:2501.00643v1 Announce Type: cross Abstract: The design space of dynamic multibody systems (MBSs), particularly those with flexible components, is considerably large. Consequently, having a means to efficiently explore this space and find the optimum solution within a feasible timeframe is crucial. It is well known that for problems with several design variables, sensitivity analysis using the adjoint variable method substantially reduces the computational costs. This paper presents a novel extension of the discrete adjoint variable method to the design optimization of dynamic flexible MBSs. The extension involves deriving the adjoint equations directly from the discrete, rather than the continuous, equations of motion. This results in a system of algebraic equations that is computationally less demanding to solve than the system of differential-algebraic equations produced by the continuous adjoint variable method. To describe the proposed method, it is integrated with a numerical time-stepping algorithm based on geometric variational integrators. The developed technique is then applied to the optimization of MBSs composed of springs, dampers, beams, and rigid bodies, considering both geometrical (e.g., positions of joints) and non-geometrical (e.g., mechanical properties of components) design variables. To validate the developed methods and show their applicability, three numerical examples are provided.
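A worked sketch of the discrete adjoint idea above (generic form for a discrete dynamics $x_{k+1} = f(x_k, p)$ with cost $J(p) = \sum_{k=0}^{N} g(x_k, p)$; this is an illustration, not the paper's variational-integrator formulation): the adjoint variables satisfy purely algebraic backward recursions, and the design gradient follows in a single sweep,
$$ \lambda_N = \nabla_{x_N} g, \qquad \lambda_k = \nabla_{x_k} g + \Big(\tfrac{\partial f}{\partial x_k}\Big)^{\!\top} \lambda_{k+1}, \quad k = N-1, \dots, 1, $$
$$ \frac{dJ}{dp} = \sum_{k=0}^{N} \nabla_p\, g(x_k, p) + \sum_{k=0}^{N-1} \Big(\tfrac{\partial f}{\partial p}(x_k, p)\Big)^{\!\top} \lambda_{k+1}, $$
so the cost of the sensitivity computation is essentially independent of the number of design variables.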
arXiv:2409.08422v2 Announce Type: replace-cross Abstract: In this study, we consider the application of max-plus-linear approximators for the Q-function in offline reinforcement learning of discounted Markov decision processes. In particular, we incorporate these approximators to propose novel fitted Q-iteration (FQI) algorithms with provable convergence. Exploiting the compatibility of the Bellman operator with max-plus operations, we show that the max-plus-linear regression within each iteration of the proposed FQI algorithm reduces to simple max-plus matrix-vector multiplications. We also consider the variational implementation of the proposed algorithm, which leads to a per-iteration complexity that is independent of the number of samples.
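As a hedged illustration of the key primitive named in the abstract, the sketch below implements a max-plus matrix-vector product in NumPy; the surrounding FQI algorithm, approximator parameters, and data layout are not reproduced here and are assumptions of this example.

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus product: result[i] = max_j (A[i, j] + x[j])."""
    return np.max(A + x[None, :], axis=1)

# toy usage
A = np.array([[0.0, -1.0], [2.0, 0.5]])
x = np.array([1.0, 3.0])
print(maxplus_matvec(A, x))  # -> [2.  3.5]
```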
arXiv:2501.00915v1 Announce Type: new Abstract: Machine learning has demonstrated remarkable promise for solving the trajectory generation problem and in paving the way for online use of trajectory optimization for resource-constrained spacecraft. However, a key shortcoming of current machine learning-based methods for trajectory generation is that they require large datasets, and even small changes to the original trajectory design requirements necessitate retraining new models to learn the parameter-to-solution mapping. In this work, we leverage compositional diffusion modeling to efficiently adapt to out-of-distribution data and problem variations in a few-shot framework for 6 degree-of-freedom (DoF) powered descent trajectory generation. Unlike traditional deep learning methods that can only learn the underlying structure of one specific trajectory optimization problem, diffusion models are a powerful generative modeling framework that represents the solution as a probability density function (PDF), which allows for the composition of PDFs encompassing a variety of trajectory design specifications and constraints. We demonstrate the capability of compositional diffusion models for inference-time 6 DoF minimum-fuel landing site selection and composable constraint representations. Using these samples as initial guesses for 6 DoF powered descent guidance enables dynamically feasible and computationally efficient trajectory generation.
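A brief note on why composition is natural here (a sketch assuming product composition of the learned densities, which is one common choice and not necessarily the paper's exact scheme): if the composed density is a product of constraint-specific densities, its score is simply the sum of the individual scores, so separately trained diffusion models can be combined at sampling time,
$$ p_{\mathrm{comp}}(x) \propto \prod_i p_i(x) \quad \Longrightarrow \quad \nabla_x \log p_{\mathrm{comp}}(x) = \sum_i \nabla_x \log p_i(x). $$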
arXiv:2501.00172v1 Announce Type: cross Abstract: Recent advances in learning-based control have increased interest in stable inversion to meet growing performance demands. Here, we establish necessary and sufficient conditions for stable inversion, addressing challenges in non-minimum-phase, non-square, and singular systems. An $H_\infty$-based algebraic approximation is introduced for near-perfect tracking without preview. Additionally, we propose a novel robust control strategy combining the nominal model with dual feedforward control to form a feedback structure. Numerical comparisons demonstrate the approach's effectiveness.
arXiv:2206.09642v5 Announce Type: replace Abstract: How should one leverage historical data when past observations are not perfectly indicative of the future, e.g., due to the presence of unobserved confounders which one cannot "correct" for? Motivated by this question, we study a data-driven decision-making framework in which historical samples are generated from unknown and different distributions assumed to lie in a heterogeneity ball with known radius and centered around the (also) unknown future (out-of-sample) distribution on which the performance of a decision will be evaluated. This work aims to analyze the performance of central data-driven policies, as well as near-optimal ones, in these heterogeneous environments, and to understand the key drivers of performance. We establish a first result that allows us to upper bound the asymptotic worst-case regret of a broad class of policies. Leveraging this result, for any integral probability metric, we provide a general analysis of the performance achieved by Sample Average Approximation (SAA) as a function of the radius of the heterogeneity ball. This analysis is centered around the approximation parameter, a notion of complexity we introduce to capture how the interplay between the heterogeneity and the problem structure impacts the performance of SAA. In turn, we illustrate through several widely studied problems -- e.g., newsvendor, pricing -- how this methodology can be applied and find that the performance of SAA varies considerably depending on the combination of problem class and heterogeneity. The failure of SAA for certain instances motivates the design of alternative policies to achieve rate-optimality. We derive problem-dependent policies achieving strong guarantees for the illustrative problems described above and provide initial results towards a principled approach for the design and analysis of general rate-optimal algorithms.
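For reference, the SAA policy analyzed above selects a decision minimizing the empirical cost over the $n$ historical samples $\xi_1, \dots, \xi_n$ (generic notation, not necessarily the paper's):
$$ x^{\mathrm{SAA}} \in \arg\min_{x \in \mathcal{X}} \; \frac{1}{n} \sum_{i=1}^{n} c(x, \xi_i), $$
while its performance is evaluated under the unknown out-of-sample distribution at the center of the heterogeneity ball.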
arXiv:2412.11049v2 Announce Type: replace Abstract: We study distributed facility location games with candidate locations, where agents on a line are partitioned into groups. Both desirable and obnoxious facility location settings are discussed. In distributed location problems, distortion serves as a standard measure of performance, quantifying the gap between the actual location plan and the ideal location plan. For the desirable setting, under the max-of-sum cost objective, we give a strategyproof distributed mechanism with $5$-distortion, and prove that no strategyproof mechanism can have a distortion better than $\sqrt{2}+1$. Under the sum-of-max cost objective, we give a strategyproof distributed mechanism with $5$-distortion, and prove that no strategyproof mechanism can have a distortion better than $\frac{\sqrt{5}+1}{2}$. Under the max-of-max cost objective, we give a strategyproof distributed mechanism with $3$-distortion, and prove that no strategyproof mechanism can have a distortion better than $\frac{\sqrt{5}+1}{2}$. For the obnoxious setting, under all three social objectives, we show that no strategyproof mechanism has bounded distortion in the case of discrete candidate locations, and no group strategyproof mechanism has bounded distortion in the case of continuous candidate locations.
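As a reminder of the performance measure used above (standard definition; the paper's formalization of the distributed, group-based setting may add detail), the distortion of a mechanism $M$ under a social cost objective $\mathrm{sc}$ is the worst-case ratio between the cost of its chosen location and the optimal cost over the candidate set $C$:
$$ \mathrm{dist}(M) = \sup_{I} \; \frac{\mathrm{sc}\big(M(I), I\big)}{\min_{y \in C} \mathrm{sc}(y, I)}. $$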
arXiv:2501.00511v1 Announce Type: new Abstract: In minimax optimization, the extragradient (EG) method has been extensively studied because it outperforms the gradient descent-ascent method in convex-concave (C-C) problems. Yet, stochastic EG (SEG) has seen limited success in C-C problems, especially for unconstrained cases. Motivated by the recent progress of shuffling-based stochastic methods, we investigate the convergence of shuffling-based SEG in unconstrained finite-sum minimax problems, in search of convergent shuffling-based SEG. Our analysis reveals that both random reshuffling and the recently proposed flip-flop shuffling alone can suffer divergence in C-C problems. However, with an additional simple trick called anchoring, we develop the SEG with flip-flop anchoring (SEG-FFA) method which successfully converges in C-C problems. We also show upper and lower bounds in the strongly-convex-strongly-concave setting, demonstrating that SEG-FFA has a provably faster convergence rate compared to other shuffling-based methods.
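For context, the sketch below shows one deterministic extragradient step on the bilinear problem $\min_x \max_y x^\top A y$; the paper's actual contributions (flip-flop shuffling, anchoring, and the SEG-FFA method) are not reproduced, and this only illustrates the base EG update they build on.

```python
import numpy as np

def extragradient_step(x, y, A, eta):
    # extrapolation ("look-ahead") point
    x_half = x - eta * (A @ y)        # gradient of x^T A y w.r.t. x
    y_half = y + eta * (A.T @ x)      # gradient of x^T A y w.r.t. y
    # update using gradients evaluated at the extrapolated point
    x_new = x - eta * (A @ y_half)
    y_new = y + eta * (A.T @ x_half)
    return x_new, y_new
```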
arXiv:2501.00799v1 Announce Type: new Abstract: We consider the problem of \textit{online sparse linear approximation}, where one predicts the best sparse approximation of a sequence of measurements as a linear combination of columns of a given measurement matrix. Such online prediction problems are ubiquitous, ranging from medical trials to web caching to resource allocation. The inherent difficulty of offline recovery also makes the online problem challenging. In this letter, we propose Follow-The-Approximate-Sparse-Leader, an efficient online meta-policy to address this online problem. Through a detailed theoretical analysis, we prove that, under certain assumptions on the measurement sequence, the proposed policy enjoys a data-dependent sublinear upper bound on the static regret, which can range from logarithmic to square-root. Numerical simulations are performed to corroborate the theoretical findings and demonstrate the efficacy of the proposed online policy.
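To fix notation (an illustrative setup, not necessarily the letter's exact one): with measurement matrix $\Phi$ and sparsity level $k$, at each round $t$ the policy commits to a $k$-sparse coefficient vector $x_t$ before observing the measurement $y_t$, incurs the loss below, and the static regret compares its cumulative loss with that of the best fixed sparse vector in hindsight,
$$ \ell_t(x_t) = \|y_t - \Phi x_t\|_2^2, \qquad R_T = \sum_{t=1}^{T} \ell_t(x_t) - \min_{\|x\|_0 \le k} \sum_{t=1}^{T} \ell_t(x). $$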
arXiv:2501.01002v1 Announce Type: new Abstract: Data is essential for secondary use, but ensuring its privacy while allowing such use is a critical challenge. Various techniques have been proposed to address privacy concerns in data sharing and publishing. However, these methods often degrade data utility, impacting the performance of machine learning (ML) models. Our research identifies key limitations in existing optimization models for privacy preservation, particularly in handling categorical variables, assessing data utility, and evaluating effectiveness across diverse datasets. We propose a novel multi-objective optimization model that simultaneously minimizes information loss and maximizes protection against attacks. This model is empirically validated using diverse datasets and compared with two existing algorithms. We assess information loss, the number of individuals subject to linkage or homogeneity attacks, and ML performance after anonymization. The results indicate that our model achieves lower information loss and more effectively mitigates the risk of attacks, in some cases reducing the number of individuals susceptible to these attacks compared to alternative algorithms. Additionally, our model maintains comparable ML performance relative to the original data or data anonymized by other methods. Our findings highlight significant improvements in privacy protection and ML model performance, offering a comprehensive framework for balancing privacy and utility in data sharing.
arXiv:2501.01096v1 Announce Type: new Abstract: Machine learning techniques have demonstrated their effectiveness in achieving autonomy and optimality for nonlinear and high-dimensional dynamical systems. However, traditional black-box machine learning methods often lack formal stability guarantees, which are critical for safety-sensitive aerospace applications. This paper proposes a comprehensive framework that combines control Lyapunov functions with supervised learning to provide certifiably stable, time- and fuel-optimal guidance for rendezvous maneuvers governed by Clohessy-Wiltshire dynamics. The framework is easily extensible to nonlinear control-affine systems. A novel neural candidate Lyapunov function is developed to ensure positive definiteness. Subsequently, a control policy is defined, in which the thrust direction vector minimizes the Lyapunov function's time derivative and the thrust throttle is set to the minimal required value. This approach ensures that all loss terms related to the control Lyapunov function are either naturally satisfied or replaced by the derived control policy. To jointly supervise the Lyapunov function and the control policy, a simple loss function is introduced, leveraging optimal state-control pairs obtained by a polynomial-map-based method. Consequently, the trained neural network not only certifies the Lyapunov function but also generates a near-optimal guidance policy, even for the bang-bang fuel-optimal problem. Extensive numerical simulations are presented to validate the proposed method.
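One way to make the thrust-direction rule concrete (a sketch under the control-affine assumption mentioned in the abstract, with generic symbols): for dynamics $\dot{x} = f(x) + B(x)u$, a Lyapunov candidate $V$, and throttle magnitude $\rho$, the direction minimizing $\dot{V} = \nabla V^\top f + \nabla V^\top B u$ is opposite to $B^\top \nabla V$,
$$ u^\star = -\,\rho\, \frac{B(x)^\top \nabla V(x)}{\big\|B(x)^\top \nabla V(x)\big\|}. $$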
arXiv:2501.00726v1 Announce Type: cross Abstract: Unsupervised feature selection (UFS) is widely applied in machine learning and pattern recognition. However, most existing methods consider only a single form of sparsity, which makes it difficult to select valuable and discriminative feature subsets from the original high-dimensional feature set. In this paper, we propose a new UFS method called DSCOFS, which embeds double sparsity constrained optimization into the classical principal component analysis (PCA) framework. Double sparsity refers to simultaneously constraining the variables with the $\ell_{2,0}$-norm and the $\ell_0$-norm; combining these different types of sparsity improves the accuracy of identifying differential features. The core idea is that the $\ell_{2,0}$-norm can remove irrelevant and redundant features, while the $\ell_0$-norm can filter out irregular noisy features, thereby complementing the $\ell_{2,0}$-norm to improve discrimination. An effective proximal alternating minimization method is proposed to solve the resulting nonconvex nonsmooth model. Theoretically, we rigorously prove that the sequence generated by our method globally converges to a stationary point. Numerical experiments on three synthetic datasets and eight real-world datasets demonstrate the effectiveness, stability, and convergence of the proposed method. In particular, the average clustering accuracy (ACC) and normalized mutual information (NMI) are improved by at least 3.34% and 3.02%, respectively, compared with state-of-the-art methods. Moreover, two common statistical tests and a new feature similarity metric verify the advantages of double sparsity. All results suggest that our proposed DSCOFS provides a new perspective for feature selection.
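A plausible way to write such a doubly sparsity-constrained PCA model (illustrative only; the exact DSCOFS objective and the sparsity levels $k_1, k_2$ are assumptions of this sketch) is
$$ \min_{W \in \mathbb{R}^{d \times m}} \; -\operatorname{tr}\!\big(W^\top X^\top X W\big) \quad \text{s.t.} \quad W^\top W = I_m, \;\; \|W\|_{2,0} \le k_1, \;\; \|W\|_0 \le k_2, $$
where $\|W\|_{2,0}$ counts the nonzero rows of $W$ (row-wise feature selection) and $\|W\|_0$ counts its nonzero entries.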
arXiv:2501.00930v1 Announce Type: cross Abstract: This work introduces Transformer-based Successive Convexification (T-SCvx), an extension of Transformer-based Powered Descent Guidance (T-PDG) that generalizes to efficient six-degree-of-freedom (DoF) fuel-optimal powered descent trajectory generation. Our approach significantly enhances the sample efficiency and solution quality for nonconvex powered descent guidance by employing a rotation-invariant transformation of the sampled dataset. T-PDG was previously applied to the 3-DoF minimum-fuel powered descent guidance problem, improving solution times by up to an order of magnitude compared to lossless convexification (LCvx). By learning to predict the set of tight or active constraints at the optimal control problem's solution, T-SCvx creates the minimal reduced-size problem initialized with only the tight constraints, then uses the solution of this reduced problem to warm-start the direct optimization solver. 6-DoF powered descent guidance is known to be challenging to solve quickly and reliably because of the nonlinear and nonconvex nature of the problem, the discretization scheme heavily influencing solution validity, and the reference trajectory initialization determining algorithm convergence or divergence. Our contributions in this work address these challenges by extending T-PDG to learn the set of tight constraints for the successive convexification (SCvx) formulation of the 6-DoF powered descent guidance problem. In addition to reducing the problem size, feasible and locally optimal reference trajectories are also learned to facilitate convergence from the initial guess. T-SCvx enables onboard computation of real-time guidance trajectories, demonstrated by a 6-DoF Mars powered landing application problem.
arXiv:2409.19212v4 Announce Type: replace Abstract: This paper investigates a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level problem is strongly convex. These problems have significant applications in sequential data learning, such as text classification using recurrent neural networks. The unbounded smoothness is characterized by the smoothness constant of the upper-level function scaling linearly with the gradient norm, lacking a uniform upper bound. Existing state-of-the-art algorithms require $\widetilde{O}(1/\epsilon^4)$ oracle calls of stochastic gradient or Hessian/Jacobian-vector products to find an $\epsilon$-stationary point. However, it remains unclear whether we can further improve the convergence rate when the assumptions for the function at the population level also hold for each random realization almost surely (e.g., Lipschitzness of each realization of the stochastic gradient). To address this issue, we propose a new Accelerated Bilevel Optimization algorithm named AccBO. The algorithm updates the upper-level variable by normalized stochastic gradient descent with recursive momentum and the lower-level variable by the stochastic Nesterov accelerated gradient descent algorithm with averaging. We prove that our algorithm achieves an oracle complexity of $\widetilde{O}(1/\epsilon^3)$ to find an $\epsilon$-stationary point. Our proof relies on a novel lemma characterizing the dynamics of the stochastic Nesterov accelerated gradient descent algorithm under distribution drift with high probability for the lower-level variable, which is of independent interest and also plays a crucial role in analyzing the hypergradient estimation error over time. Experimental results on various tasks confirm that our proposed algorithm achieves the predicted theoretical acceleration and significantly outperforms baselines in bilevel optimization.
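As a hedged illustration of the upper-level update described above, the sketch below shows a normalized stochastic-gradient step with recursive (STORM-style) momentum; AccBO's actual step sizes, hypergradient oracle, and the lower-level Nesterov loop are not reproduced, and grad_fn is a hypothetical stochastic gradient oracle evaluated on the same minibatch at both points.

```python
import numpy as np

def normalized_recursive_momentum_step(x, x_prev, d_prev, grad_fn, batch,
                                        eta=1e-2, beta=0.1):
    g = grad_fn(x, batch)
    g_prev = grad_fn(x_prev, batch)
    d = g + (1.0 - beta) * (d_prev - g_prev)            # recursive momentum estimator
    x_new = x - eta * d / (np.linalg.norm(d) + 1e-12)   # normalized gradient step
    return x_new, d
```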
arXiv:2410.15543v3 Announce Type: replace Abstract: In Bayesian optimization, a black-box function is maximized via the use of a surrogate model. We apply distributed Thompson sampling, using a Gaussian process as a surrogate model, to approach the multi-agent Bayesian optimization problem. In our distributed Thompson sampling implementation, each agent receives sampled points from neighbors, where the communication network is encoded in a graph; each agent utilizes their own Gaussian process to model the objective function. We demonstrate theoretical bounds on Bayesian average regret and Bayesian simple regret, where the bound depends on the structure of the communication graph. Unlike in batch Bayesian optimization, this bound is applicable in cases where the communication graph amongst agents is constrained. When compared to sequential single-agent Thompson sampling, our bound guarantees faster convergence with respect to time as long as the communication graph is connected. We confirm the efficacy of our algorithm with numerical simulations on traditional optimization test functions, demonstrating the significance of graph connectivity on improving regret convergence.
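A hedged sketch of one round of the scheme described above: each agent maintains its own Gaussian process, draws a posterior sample over a discretized domain, queries its argmax, and shares the new evaluation with its graph neighbors. The kernel, domain discretization, and data layout are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def distributed_ts_round(agents, f, X_grid, neighbors):
    """agents: {i: (X_i, y_i)}, X_i of shape (n_i, d); neighbors: {i: [j, ...]}."""
    new_points = {}
    for i, (X_i, y_i) in agents.items():
        gp = GaussianProcessRegressor().fit(X_i, y_i)
        draw = gp.sample_y(X_grid, n_samples=1).ravel()   # one Thompson sample
        x_next = X_grid[np.argmax(draw)]
        new_points[i] = (x_next, f(x_next))
    for i in agents:                                      # exchange with neighbors
        for j in [i] + list(neighbors[i]):
            x_j, y_j = new_points[j]
            X_i, y_i = agents[i]
            agents[i] = (np.vstack([X_i, x_j[None, :]]), np.append(y_i, y_j))
    return agents
```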
arXiv:2501.00421v1 Announce Type: new Abstract: We consider the problem of estimating the state transition matrix of a linear time-invariant (LTI) system, given access to multiple independent trajectories sampled from the system. Several recent papers have conducted a non-asymptotic analysis of this problem, relying crucially on the assumption that the process noise is either Gaussian or sub-Gaussian, i.e., "light-tailed". In sharp contrast, we work under a significantly weaker noise model, assuming nothing more than the existence of the fourth moment of the noise distribution. For this setting, we provide the first set of results demonstrating that one can obtain sample-complexity bounds for linear system identification that are nearly of the same order as under sub-Gaussian noise. To achieve such results, we develop a novel robust system identification algorithm that relies on constructing multiple weakly-concentrated estimators, and then boosting their performance using suitable tools from high-dimensional robust statistics. Interestingly, our analysis reveals how the kurtosis of the noise distribution, a measure of heavy-tailedness, affects the number of trajectories needed to achieve desired estimation error bounds. Finally, we show that our algorithm and analysis technique can be easily extended to account for scenarios where an adversary can arbitrarily corrupt a small fraction of the collected trajectory data. Our work takes the first steps towards building a robust statistical learning theory for control under non-ideal assumptions on the data-generating process.
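As a rough illustration of combining weakly-concentrated estimators (the paper's actual estimator and aggregation rule may differ), the sketch below splits the trajectories into groups, computes an ordinary least-squares estimate of the transition matrix on each group, and aggregates them with an entrywise median, a standard robust-statistics device.

```python
import numpy as np

def robust_sysid(trajectories, n_groups=5):
    """trajectories: list of arrays of shape (T+1, n) from x_{t+1} = A x_t + w_t.
    Assumes at least n_groups trajectories."""
    groups = np.array_split(np.arange(len(trajectories)), n_groups)
    estimates = []
    for g in groups:
        X, Y = [], []
        for idx in g:
            traj = trajectories[idx]
            X.append(traj[:-1])   # states x_t
            Y.append(traj[1:])    # successors x_{t+1}
        X, Y = np.vstack(X), np.vstack(Y)
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # solves X B ~ Y, with B = A^T
        estimates.append(B.T)
    return np.median(np.stack(estimates), axis=0)        # entrywise median
```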
arXiv:2501.00191v1 Announce Type: new Abstract: We study a networked economic system composed of $n$ producers supplying a single homogeneous good to a number of geographically separated markets and of a centralized authority, called the market maker. Producers compete à la Cournot, choosing the quantities of the good to supply to each market they have access to in order to maximize their profit. Every market is characterized by its inverse demand function, returning the unit price of the considered good as a function of the total available quantity. Markets are interconnected by a dispatch network through which quantities of the considered good can flow within finite capacity constraints. Such flows are determined by the market maker, who aims at maximizing a designated welfare function. We model this competition as a strategic game with $n+1$ players: the producers and the market maker. For this game, we first establish the existence of Nash equilibria under standard concavity assumptions. We then identify sufficient conditions for the game to be potential with an essentially unique Nash equilibrium. Next, we present a general result that connects the optimal action of the market maker with the capacity constraints imposed on the network. For the commonly used Walrasian welfare, our finding proves a connection between capacity bottlenecks in the market network and the emergence of price differences between markets separated by saturated lines. This phenomenon is frequently observed in real-world scenarios, for instance in power networks. Finally, we validate the model with data from the Italian day-ahead electricity market.
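In this Cournot setting, producer $i$'s problem can be sketched as follows (generic notation; the paper's cost structure and the exact coupling with the market maker's flows are not reproduced): choose quantities $q_{im} \ge 0$ for the accessible markets $m$ to maximize
$$ \pi_i = \sum_{m} p_m\!\Big(\sum_{j} q_{jm} + \nu_m\Big)\, q_{im} \; - \; c_i\Big(\sum_{m} q_{im}\Big), $$
where $p_m(\cdot)$ is market $m$'s inverse demand function, $\nu_m$ is the net quantity routed into market $m$ by the market maker subject to the line capacities, and $c_i$ is producer $i$'s production cost.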
arXiv:2501.00200v1 Announce Type: new Abstract: Recently, cutting-plane methods such as GCP-CROWN have been explored to enhance neural network verifiers and made significant advances. However, GCP-CROWN currently relies on generic cutting planes (cuts) generated from external mixed integer programming (MIP) solvers. Due to the poor scalability of MIP solvers, large neural networks cannot benefit from these cutting planes. In this paper, we exploit the structure of the neural network verification problem to generate efficient and scalable cutting planes specific for this problem setting. We propose a novel approach, Branch-and-bound Inferred Cuts with COnstraint Strengthening (BICCOS), which leverages the logical relationships of neurons within verified subproblems in the branch-and-bound search tree, and we introduce cuts that preclude these relationships in other subproblems. We develop a mechanism that assigns influence scores to neurons in each path to allow the strengthening of these cuts. Furthermore, we design a multi-tree search technique to identify more cuts, effectively narrowing the search space and accelerating the BaB algorithm. Our results demonstrate that BICCOS can generate hundreds of useful cuts during the branch-and-bound process and consistently increase the number of verifiable instances compared to other state-of-the-art neural network verifiers on a wide range of benchmarks, including large networks that previous cutting plane methods could not scale to. BICCOS is part of the $\alpha,\beta$-CROWN verifier, the VNN-COMP 2024 winner. The code is available at http://github.com/Lemutisme/BICCOS .
arXiv:2501.00219v1 Announce Type: new Abstract: This paper investigates the potential of autonomous minibuses that take on-demand directional routes for pick-up and drop-off in a wider, low-density grid network, followed by fixed routes in areas with demand. A mathematical formulation of generalized costs demonstrates the benefits of this service, with indicators proposed to select existing bus routes for conversion under the options of zonal express and parallel routes. Simulations of modeled scenarios and case studies of bus routes in Chicago show reductions in both passenger costs and generalized costs relative to existing fixed-route bus service between suburban areas and the CBD.
arXiv:2501.00258v1 Announce Type: new Abstract: In optimizing real-world structures, due to fabrication or budgetary constraints, the design variables may be restricted to a set of standard engineering choices. Such variables, commonly called categorical variables, are discrete and unordered in essence, precluding the use of gradient-based optimizers for problems containing them. In this paper, incorporating the Gumbel-Softmax (GSM) method, we propose a new gradient-based optimizer for handling such variables in the optimal design of large-scale frame structures. The GSM method provides a means to draw differentiable samples from categorical distributions, thereby enabling sensitivity analysis for the variables generated from such distributions. The sensitivity information can greatly reduce the computational cost of traversing high-dimensional and discrete design spaces compared with gradient-free optimization methods. In addition, since the developed optimizer is gradient-based, it can naturally handle the simultaneous optimization of categorical and continuous design variables. Through three numerical case studies, different aspects of the proposed optimizer are studied and its advantages over population-based optimizers, specifically a genetic algorithm, are demonstrated.
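A minimal sketch of the Gumbel-Softmax relaxation the abstract relies on (the catalogue of sections, temperature, and variable parameterization are illustrative assumptions, not the paper's setup): a relaxed one-hot sample over a discrete catalogue is smooth in the underlying logits, which is what enables gradient-based handling of categorical design variables.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=None):
    """Relaxed (continuous) sample from a categorical distribution over the logits."""
    rng = np.random.default_rng() if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max())          # numerically stabilized softmax
    return y / y.sum()

# toy usage: relaxed choice among four hypothetical catalogue cross-sections
logits = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
weights = gumbel_softmax_sample(logits)
section_areas = np.array([10.0, 20.0, 40.0, 80.0])      # hypothetical catalogue
effective_area = weights @ section_areas                 # smooth in the logits
```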
arXiv:2412.06735v2 Announce Type: replace-cross Abstract: In this review/tutorial article, we present recent progress on optimal control of partially observed Markov Decision Processes (POMDPs). We first present regularity and continuity conditions for POMDPs and their belief-MDP reductions, where these constitute weak Feller and Wasserstein regularity and controlled filter stability. These are then utilized to arrive at existence results on optimal policies for both discounted and average cost problems, and regularity of value functions. Then, we study rigorous approximation results involving quantization based finite model approximations as well as finite window approximations under controlled filter stability. Finally, we present several recent reinforcement learning theoretic results which rigorously establish convergence to near optimality under both criteria.