Jordan Decomposition Theorem
If μ is a signed measure, there exist unique positive measures μ⁺ and μ⁻ such that μ = μ⁺ − μ⁻, with μ⁺ and μ⁻ mutually singular. This is called the Jordan decomposition of μ, and the measures μ⁺ and μ⁻ are called the positive and negative variations of μ. The total variation of μ is defined to be the sum of the positive and negative variations of μ.
Kepler's laws
The laws of planetary motion discovered by Johannes Kepler. See the article for an exposition.
laws of exponents
The following rules govern the behavior of exponents:
- x^a × x^b = x^(a+b)
- x^a / x^b = x^(a−b)
- (x^a)^b = x^(ab)
- (x × y)^a = x^a × y^a
Additionally, x^0 = 1 for all x except 0, and 0^0 = 1 or is left undefined (i.e., it is an indeterminate form).
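As a quick sanity check, the rules can be confirmed numerically for sample values; the particular numbers in this Python sketch are arbitrary choices:

```python
# Numeric spot-check of the laws of exponents; the values of x, y, a,
# and b are arbitrary sample choices.
x, y, a, b = 2.0, 3.0, 5.0, 2.0

assert x**a * x**b == x**(a + b)     # x^a * x^b = x^(a+b)
assert x**a / x**b == x**(a - b)     # x^a / x^b = x^(a-b)
assert (x**a)**b == x**(a * b)       # (x^a)^b = x^(ab)
assert (x * y)**a == x**a * y**a     # (xy)^a = x^a * y^a
assert x**0 == 1                     # x^0 = 1 for x != 0
```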
Cf. rational exponent.
laws of logarithms
The following rules governing the behavior of logarithms are easily derived, and very useful in calculations:
- log A + log B = log (A×B)
- log A − log B = log (A/B)
- log A^p = p × log A
Students often confuse these rules, so it is worth memorizing them as “the sum of the logs is the log of the product – and not the product of the logs,” etc.
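These identities are also easy to confirm numerically; a quick check in Python (the values of A, B, and p are arbitrary):

```python
import math

# Spot-check of the laws of logarithms; A, B, and p are arbitrary.
A, B, p = 8.0, 2.0, 3.0

assert math.isclose(math.log(A) + math.log(B), math.log(A * B))  # sum of logs -> log of product
assert math.isclose(math.log(A) - math.log(B), math.log(A / B))  # difference -> log of quotient
assert math.isclose(math.log(A**p), p * math.log(A))             # power -> multiple of the log
```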
least upper bound
An upper bound which is less than or equal to every other upper bound.
Cf. greatest lower bound.
least upper bound axiom
“Any nonempty subset of the real numbers which has an upper bound has a least upper bound.” This axiom, together with the ordered field axioms, completely characterizes the set of real numbers.
Lebesgue measure
The unique measure μ on the real line, generated by the outer measure whose value on intervals is the length of the intervals, is called Lebesgue measure.
L’Hospital’s Rule
If a limit is in one of the indeterminate forms “infinity over infinity” or “zero over zero,” then

lim_{x→a} f(x)/g(x) = lim_{x→a} f′(x)/g′(x),

provided the latter limit exists. That is, the limit is the same after taking the derivatives of both the numerator and denominator. L’Hospital’s Rule may be applied as many times as needed. Students often misapply the Rule by using it when the limit is not in one of the above indeterminate forms; this often results in an incorrect evaluation of the limit.
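For instance, lim_{x→0} sin(x)/x has the form “zero over zero”; one application of the Rule replaces it with lim_{x→0} cos(x)/1 = 1. A numeric sketch in Python:

```python
import math

# As x shrinks toward 0, the 0/0 quotient sin(x)/x and the quotient
# of derivatives cos(x)/1 both approach the same limit, 1.
for x in (0.1, 0.01, 0.001):
    original = math.sin(x) / x     # the indeterminate quotient
    derivs = math.cos(x) / 1.0     # after one application of the Rule
    print(f"x={x}: {original:.6f} vs {derivs:.6f}")
```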
limit
The concept of a limit is central to our modern understanding of the differential and integral calculus. Naively, an expression involving a variable tends to some value L if it becomes arbitrarily close to L for appropriate choices of the variable. See the article for a full exposition.
limit comparison test
A test for the convergence of a series. Refer to the related article for a complete description.
limit point
Given a sequence of points a_i, i = 1, 2, 3, ..., a point L is called a limit point of the sequence if every neighborhood of L contains infinitely many of the a_i.
If X is a metric space, then L is a limit point of X if a sequence may be chosen from X having L as a limit point. Limit points are not in general unique.
See also: accumulation point, perfect set.
line
Naively, “line” is a primitive concept, generally connoting a straight path through space. In mathematics this notion is abstracted formally in different ways depending on the type of mathematics under consideration.
Geometry: Euclid defined the term “line” as a point extended indefinitely in space. However, in modern Euclidean geometry, “line” is a primitive term, left purposely undefined, whose meaning is informed purely by the axioms in which the term appears.
Analytic geometry: The set of all points (x, y) in the Cartesian plane satisfying an equation of the form ax + by + c = 0, where a, b, and c are real numbers and a and b are not both zero. (There are three common and useful forms of this equation; see the entry for linear function.) This may be generalized to a Euclidean space of n dimensions, where a single equation a_1x_1 + a_2x_2 + ... + a_nx_n + c = 0 defines a hyperplane, and a line is the intersection of n − 1 such hyperplanes. In polar coordinates a line is the set of all ordered pairs (r, θ) satisfying an equation of the form r = p sec(θ − α), where p is the perpendicular distance from the pole to the line, and α is the angle of inclination of that perpendicular to the polar axis.
linear function
A polynomial function of degree one. The graph is a straight line. There are three special forms of the linear equation:

- y = mx + b (slope-intercept form)
- y − y_0 = m(x − x_0) (point-slope form)
- x/a + y/b = 1 (intercept form)

In the first form, the slope-intercept form, m is the slope of the line and b is the y-intercept. In the second form, the point-slope form, m is the slope of the line and (x_0, y_0) is any point on the line. In the third form, the intercept form, a is the x-intercept and b is the y-intercept.
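The agreement of these forms can be illustrated with a small Python script; the slope and point below are arbitrary sample values:

```python
# A line with slope m = 2 through the point (x0, y0) = (1, 3).
m, x0, y0 = 2.0, 1.0, 3.0
b = y0 - m * x0   # its y-intercept, giving slope-intercept form y = mx + b

def slope_intercept(x):
    return m * x + b

def point_slope(x):
    return y0 + m * (x - x0)

# Both forms describe the same line, so they agree at every x:
for x in (-2.0, 0.0, 5.0):
    assert slope_intercept(x) == point_slope(x)
```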
locally compact
A topological space is locally compact if every point of the space has a neighborhood whose closure is compact.
locally integrable
A measurable function f on a finite-dimensional real space, with range in the complex numbers, is said to be locally integrable if for all compact sets K we have

∫_K |f| dμ < ∞,

where μ denotes Lebesgue measure. The space of all such functions is denoted L^1_loc.
Cf. Lebesgue integral.
logarithmic function
A function of the form f(x) = log_b x, where the base b is a fixed positive number not equal to 1.
matrix
A rectangular array of, usually, real or complex numbers, organized into rows and columns. When specifying the size of a matrix, the number of rows is stated first. For example, here is a 2 by 4 real matrix:

A = [ 3.1   4.98   0.0  −2.4 ]
    [ 7.0   1.5   −0.2   6.3 ]

The numbers in a matrix are called entries, and are specified by the row and then the column in which they appear. In the example above, for instance, A_1,2 is the entry in the first row and second column, i.e. 4.98.
Two matrices of the same size (with the same number of rows and columns) can be added componentwise, that is, (A + B)_i,j = A_i,j + B_i,j. Matrix multiplication is a little more complicated. A matrix M can be multiplied on the right by another matrix T only if T has the same number of rows as M has columns, and the result will have as many rows as M and as many columns as T. The entry of MT appearing in row i and column j is the sum of the pairwise products of the entries in row i of M and column j of T. In other words, if M is an n by m matrix, T must be an m by l matrix, and

(MT)_i,j = M_i,1 T_1,j + M_i,2 T_2,j + ... + M_i,m T_m,j.
Notice that this operation is not, in general, commutative.
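The definition translates directly into code; here is a minimal Python sketch (the function name mat_mul is ours), which also exhibits the non-commutativity:

```python
# Multiply an n-by-m matrix M by an m-by-l matrix T:
# the (i, j) entry of MT is the sum of pairwise products of
# row i of M with column j of T.
def mat_mul(M, T):
    n, m = len(M), len(M[0])
    assert len(T) == m, "T must have as many rows as M has columns"
    l = len(T[0])
    return [[sum(M[i][k] * T[k][j] for k in range(m)) for j in range(l)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

# Non-commutativity: AB and BA differ.
print(mat_mul(A, B))   # [[2, 1], [4, 3]]
print(mat_mul(B, A))   # [[3, 4], [1, 2]]
```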
A matrix that has the same number of rows and columns is called a square matrix, and these are particularly interesting as they can be added and multiplied, and the results will have the same dimensions.
The most standard use of matrices is to represent linear operators, but they have a wide variety of other uses, and are interesting to study in their own right. Matrices can have a large number of different types of entries, including various kinds of numbers, elements of abstract fields and other algebraic structures, functions, and so forth.
measurable function
Given measure spaces (X, M, μ) and (Y, N, ν), a function f from X to Y is said to be measurable if the inverse image of every measurable set in Y is measurable in X.
measurable space
A set X together with a σ-algebra of sets M defined on it is called a measurable space, usually denoted (X, M). The elements of M are called measurable sets.
measure
Given a set X and a σ-algebra of sets M defined on X, a measure on X is a nonnegative, extended real-valued function μ whose domain is M, and which satisfies:
- The measure of the empty set is zero.
- (Countable additivity) For any countable, disjoint sequence of sets in the domain of μ, the measure of the union of the sequence is equal to the sum of the measures of the sets in the sequence.
If the measure of every set in M is finite, then μ is called a finite measure. If X can be expressed as a countable union of sets in M all of whose measures are finite, then μ is called a σ-finite measure. If for every set E in the domain of μ whose measure is infinite there is a subset F of E which has finite, nonzero measure, then μ is called a semi-finite measure. If for any set E in M which has zero measure all the subsets of E are also in M (and therefore have zero measure), then μ is called a complete measure.
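A toy illustration: counting measure, which assigns to each subset of a set its number of elements, satisfies both defining properties of a measure. The Python sketch below checks finite additivity; countable additivity is its infinite analogue:

```python
# Counting measure: mu(E) = number of elements of E.
def counting_measure(E):
    return len(E)

# The measure of the empty set is zero.
assert counting_measure(set()) == 0

# Additivity: for disjoint sets, the measure of the union
# equals the sum of the measures.
parts = [{1, 2}, {3}, {4, 5, 6}]
union = set().union(*parts)
assert counting_measure(union) == sum(counting_measure(E) for E in parts)
```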
Cf. measure space, signed measure, outer measure, Lebesgue measure, Borel measure.
measure space
A set X together with a σ-algebra of sets M defined on it and a measure μ defined on M is called a measure space, usually denoted by (X, M, μ).