Stationary Point
A stationary point is a point on the graph of a function at which the rate of change is momentarily zero. In the simplest, one-variable setting, this occurs where the first derivative vanishes, so the tangent line is horizontal. In more formal terms, if f is differentiable at a point x0 in its domain, then x0 is stationary when f'(x0) = 0. Points where the derivative does not exist are usually classed as critical points rather than stationary points, although some treatments group the two together when the function has a local extremum there. The intuition is straightforward: the curve stops rising or falling, at least for an instant.
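As a minimal sketch of the one-variable condition (the example function here is an illustration chosen for this article, using the standard sympy API):

```python
# Sketch: locating stationary points of f(x) = x**3 - 3*x
# by solving f'(x) = 0 symbolically.
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x

fprime = sp.diff(f, x)            # f'(x) = 3*x**2 - 3
stationary = sp.solve(fprime, x)  # the candidates: [-1, 1]
print(stationary)
```

Here x = -1 and x = 1 are the points where the tangent is horizontal; whether each is a maximum or a minimum is decided by the tests described below.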
Stationary points are central to the practice of optimization. Maximizing profit, minimizing cost, or optimizing any objective that can be described by a differentiable function leads naturally to stationary points, since an interior optimum of such a function must occur where the derivative is zero. After locating candidates with f'(x0) = 0, one uses further tests to distinguish whether the point is a local maximum, a local minimum, or something else. The ideas extend beyond economics and business to any field that models behavior with a function of one or more variables, including physics and engineering, where the same mathematics underpins stability and efficiency analyses. See optimization and economics for broad uses, and local maximum or local minimum for the more specific notion of a local extremum.
In higher dimensions, stationary points are found where the gradient vanishes: a point x0 in the domain satisfies ∇f(x0) = 0. Classification then relies on second-derivative information: the Hessian matrix (the matrix of second partial derivatives) reveals whether the stationary point is a local minimum, a local maximum, or a saddle point. These concepts belong to multivariable calculus and are central to understanding how models behave near optima in several dimensions. See also gradient and Hessian for the technical apparatus.
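To make the condition ∇f(x0) = 0 concrete, here is a small sketch (the two-variable function is an illustrative assumption; the calls are the standard sympy API). Classification of the resulting points via the Hessian is shown under the second derivative test below.

```python
# Sketch: finding multivariable stationary points by solving grad f = 0
# for the sample function f(x, y) = x**4 + y**4 - 4*x*y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**4 + y**4 - 4*x*y

grad = [sp.diff(f, v) for v in (x, y)]       # the two partial derivatives
points = sp.solve(grad, (x, y), dict=True)   # where both vanish at once
print(points)  # the real solutions: (0, 0), (1, 1), and (-1, -1)
```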
Formal definitions
For a real-valued function f: D → R defined on an interval D ⊆ R, a point x0 ∈ D is a stationary point if f is differentiable at x0 and f'(x0) = 0. In several variables, for f: D ⊆ R^n → R, x0 ∈ D is stationary if all first-order partial derivatives vanish: ∂f/∂x_i(x0) = 0 for every i. The broader notion of a critical point extends this to include points where the derivative does not exist but the function nonetheless attains a local extremum; see critical point for the broader taxonomy. Stationary points are not automatically extrema: f(x) = x^3 has a stationary point at x = 0 that is neither a maximum nor a minimum. Endpoints and boundary points also require separate treatment, since an extremum there need not be stationary.
In practice the procedure is two-staged: one first locates candidates by solving f'(x) = 0 (one variable) or ∇f(x) = 0 (several variables), then applies tests to determine their nature. See second derivative test and saddle point for classification in higher dimensions.
Tests and classification
The most familiar tool is the second derivative test. In one dimension, if f'(x0) = 0 and f''(x0) > 0, x0 is a local minimum; if f''(x0) < 0, x0 is a local maximum; if f''(x0) = 0 the test is inconclusive and higher-order derivatives or other criteria are used. In several variables, the Hessian H(x0) replaces f'': a positive definite Hessian indicates a local minimum, a negative definite Hessian indicates a local maximum, and an indefinite Hessian indicates a saddle point. See second derivative test and Hessian for details.
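Continuing the sample function from the earlier sketch, the sign pattern of the Hessian's eigenvalues carries out this classification; the function and points below are the same illustrative assumptions, and sp.hessian is sympy's standard helper for the matrix of second partials.

```python
# Sketch: classifying stationary points of f(x, y) = x**4 + y**4 - 4*x*y
# by the sign pattern of the Hessian's eigenvalues.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**4 + y**4 - 4*x*y
H = sp.hessian(f, (x, y))

for px, py in [(0, 0), (1, 1), (-1, -1)]:
    eigs = list(H.subs({x: px, y: py}).eigenvals())
    if all(e > 0 for e in eigs):
        kind = 'local minimum'   # positive definite Hessian
    elif all(e < 0 for e in eigs):
        kind = 'local maximum'   # negative definite Hessian
    elif any(e > 0 for e in eigs) and any(e < 0 for e in eigs):
        kind = 'saddle point'    # indefinite Hessian
    else:
        kind = 'inconclusive'    # semidefinite: the test gives no verdict
    print((px, py), eigs, kind)
```

The origin turns out to be a saddle point (eigenvalues of opposite sign), while (1, 1) and (-1, -1) are local minima.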
When derivative information is incomplete or the landscape is flat over whole regions, one may rely on global considerations like convexity. If the objective is a convex function, any stationary point is a global minimum; in non-convex problems, multiple stationary points can exist, with only some yielding the best global solution. The broader landscape of optimization theory includes results such as the No Free Lunch Theorem and the emphasis on convex optimization when seeking tractable, predictable results. See also optimization and critical point for related ideas.
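A small sketch of the convex case (the function is an illustrative assumption; sp.nsolve is sympy's standard numerical root finder): since f(x) = e^x + x^2 has f''(x) = e^x + 2 > 0 everywhere, its single stationary point must be the global minimum.

```python
# Sketch: for the strictly convex f(x) = exp(x) + x**2, the unique
# stationary point is automatically the global minimum.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(x) + x**2

xmin = sp.nsolve(sp.diff(f, x), x, 0)  # root of f'(x) = exp(x) + 2*x
print(xmin, f.subs(x, xmin))           # x is approximately -0.3517
```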
Multivariable stationary points and constrained problems
In the setting of several variables, stationary points play a key role in both unconstrained and constrained optimization. For unconstrained problems, solving ∇f(x) = 0 gives candidate points, which are then classified via the Hessian as described above. For constrained problems, one often uses Lagrange multipliers to convert a constrained search into a system where the gradient of the objective aligns with the gradient of the constraint, yielding stationary points of the constrained problem. See Lagrange multiplier and calculus of variations for extended methods and interpretations.
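As an illustrative sketch of the Lagrange-multiplier system (the objective and constraint are assumptions chosen for simplicity; the calls are the standard sympy API), the conditions ∇f = λ∇g together with g = 0 can be solved symbolically:

```python
# Sketch: stationary points of f(x, y) = x*y subject to
# g(x, y) = x + y - 1 = 0, via the Lagrange conditions grad f = lam * grad g.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x * y
g = x + y - 1

L = f - lam * g                             # the Lagrangian
eqs = [sp.diff(L, v) for v in (x, y, lam)]  # stationarity plus the constraint
print(sp.solve(eqs, (x, y, lam), dict=True))  # x = y = 1/2, lam = 1/2
```

The solution x = y = 1/2 is the stationary point of the constrained problem: the gradient of xy is parallel to the gradient of the constraint there.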
Outside pure mathematics, stationary points appear in physics as equilibria of potentials, in engineering as solutions of design optimization, and in data science during model fitting, where one seeks parameter values that minimize a loss function and thus correspond to stationary points of that loss. In physics, the principle of least action, under which a system follows a path that makes the action stationary, connects the calculus of variations to fundamental dynamics; see principle of least action and calculus of variations for a broader perspective.
Applications and interpretations
Economics and business: Stationary points help identify optimal levels of production, pricing, or consumption under given constraints, informing decisions about efficiency and resource allocation. See economics and optimization.
Engineering and physics: Stable configurations often correspond to local minima of potential energy, with Hessian-based tests providing stability criteria. The idea that physical laws minimize, or more generally make stationary, an action or energy is foundational in many theories; see potential energy and stationary action.
Numerical methods and data analysis: Algorithms like gradient descent search for stationary points of loss or objective functions; a minimal sketch follows this list. The convergence properties and choice of method depend on the problem structure, including convexity and smoothness. See gradient descent and loss function.
Mathematics and theory: The study of stationary points touches on topics from basic calculus to advanced topics in calculus of variations and optimization, linking local behavior to global outcomes under appropriate assumptions.
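As a minimal sketch of the gradient descent idea mentioned above (plain Python; the objective, step size, and stopping rule are illustrative choices, not prescriptions):

```python
# Sketch: plain gradient descent searching for a stationary point of
# f(x, y) = (x - 1)**2 + 2*(y + 2)**2, whose minimum sits at (1, -2).
def grad(x, y):
    return 2 * (x - 1), 4 * (y + 2)   # analytic gradient of f

x, y = 0.0, 0.0                        # illustrative starting point
step = 0.1                             # illustrative fixed step size
for _ in range(1000):
    gx, gy = grad(x, y)
    if gx * gx + gy * gy < 1e-18:      # near-zero gradient: stationary
        break
    x, y = x - step * gx, y - step * gy

print(x, y)  # converges to approximately (1, -2)
```

The stopping test makes the connection to this article explicit: the iteration halts precisely when the gradient is (numerically) zero, i.e., at a stationary point.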