Portal:Mathematics

Mathematics is the study of representing and reasoning about abstract objects (such as numbers, points, spaces, sets, structures, and games). Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and the social sciences. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered. (Full article...)

  Featured articles

  These are Featured articles, which represent some of the best content on English Wikipedia.

  • Image 3
    The first 15,000 partial sums of 0 + 1 − 2 + 3 − 4 + ... The graph is situated with positive integers to the right and negative integers to the left.


    In mathematics, 1 − 2 + 3 − 4 + ··· is an infinite series whose terms are the successive positive integers, given alternating signs. Using sigma summation notation the sum of the first m terms of the series can be expressed as

        \sum_{n=1}^{m} n(-1)^{n-1}.

    The infinite series diverges, meaning that its sequence of partial sums, (1, −1, 2, −2, 3, ...), does not tend towards any finite limit. Nonetheless, in the mid-18th century, Leonhard Euler wrote what he admitted to be a paradoxical equation:

        1 − 2 + 3 − 4 + ··· = 1/4.

    (Full article...)
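
    The divergence is easy to see numerically: the partial sums oscillate with ever-growing amplitude. A minimal Python sketch (illustrative, not part of the article) that prints them:

        def partial_sums(m):
            # Partial sums of 1 - 2 + 3 - 4 + ... up to m terms.
            total, sums = 0, []
            for n in range(1, m + 1):
                total += n * (-1) ** (n - 1)
                sums.append(total)
            return sums

        print(partial_sums(8))  # [1, -1, 2, -2, 3, -3, 4, -4] -- oscillates, never settles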
  • Image 4
    In algebraic geometry and theoretical physics, mirror symmetry is a relationship between geometric objects called Calabi–Yau manifolds. The term refers to a situation where two Calabi–Yau manifolds look very different geometrically but are nevertheless equivalent when employed as extra dimensions of string theory.

    Early cases of mirror symmetry were discovered by physicists. Mathematicians became interested in this relationship around 1990 when Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that it could be used as a tool in enumerative geometry, a branch of mathematics concerned with counting the number of solutions to geometric questions. Candelas and his collaborators showed that mirror symmetry could be used to count rational curves on a Calabi–Yau manifold, thus solving a longstanding problem. Although the original approach to mirror symmetry was based on physical ideas that were not understood in a mathematically precise way, some of its mathematical predictions have since been proven rigorously. (Full article...)
  • Image 5 Damage from Hurricane Katrina in 2005. Actuaries need to estimate long-term levels of such damage in order to accurately price property insurance, set appropriate reserves, and design appropriate reinsurance and capital management strategies.

    An actuary is a professional with advanced mathematical skills who deals with the measurement and management of risk and uncertainty. The name of the corresponding field is actuarial science, which covers rigorous mathematical calculations in areas of life expectancy and life insurance. These risks can affect both sides of the balance sheet and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity, their mathematics, and their mechanisms.

    While the concept of insurance dates to antiquity, the concepts needed to scientifically measure and mitigate risks have their origins in 17th-century studies of probability and annuities. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design and manage programs that control risk. The actual steps needed to become an actuary are usually country-specific; however, almost all processes share a rigorous schooling or examination structure and take many years to complete. (Full article...)
  • Image 6
    Euclid's method for finding the greatest common divisor (GCD) of two starting lengths BA and DC, both defined to be multiples of a common "unit" length. The length DC being shorter, it is used to "measure" BA, but only once because the remainder EA is less than DC. EA now measures (twice) the shorter length DC, with remainder FC shorter than EA. Then FC measures (three times) length EA. Because there is no remainder, the process ends with FC being the GCD. On the right Nicomachus's example with numbers 49 and 21 resulting in their GCD of 7 (derived from Heath 1908:300).


    In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC).
    It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules,
    and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.

    The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that number is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity. (Full article...)
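
    The description above translates directly into code, with the repeated subtractions collapsed into a remainder operation. A minimal Python sketch (illustrative, not part of the article), including the extended version that produces the Bézout coefficients:

        def gcd(a, b):
            # Replace the larger number by its remainder modulo the smaller
            # until one becomes zero; the other is then the GCD.
            while b:
                a, b = b, a % b
            return a

        def extended_gcd(a, b):
            # Returns (g, x, y) with g = gcd(a, b) and g == a*x + b*y.
            if b == 0:
                return a, 1, 0
            g, x, y = extended_gcd(b, a % b)
            return g, y, x - (a // b) * y

        print(gcd(252, 105))           # 21
        print(extended_gcd(105, 252))  # (21, 5, -2): 21 = 5*105 + (-2)*252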
  • Image 7 Kaczynski after his arrest in 1996

    Theodore John Kaczynski (/kəˈzɪnski/ kə-ZIN-skee; May 22, 1942 – June 10, 2023), also known as the Unabomber (/ˈjuːnəbɒmər/ YOO-nə-bom-ər), was an American mathematician and domestic terrorist. He was a mathematics prodigy, but abandoned his academic career in 1969 to pursue a reclusive primitive lifestyle.

    Kaczynski murdered three people and injured 23 others between 1978 and 1995 in a nationwide mail bombing campaign against people he believed to be advancing modern technology and the destruction of the natural environment. He authored Industrial Society and Its Future, a 35,000-word manifesto and social critique opposing all forms of technology, rejecting leftism, and advocating a nature-centered form of anarchism. (Full article...)
  • Image 8

    Josiah Willard Gibbs (/ɡɪbz/; February 11, 1839 – April 28, 1903) was an American scientist who made significant theoretical contributions to physics, chemistry, and mathematics. His work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. Together with James Clerk Maxwell and Ludwig Boltzmann, he created statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of ensembles of the possible states of a physical system composed of many particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. As a mathematician, he created modern vector calculus (independently of the British scientist Oliver Heaviside, who carried out similar work during the same period) and described the Gibbs phenomenon in the theory of Fourier analysis.

    In 1863, Yale University awarded Gibbs the first American doctorate in engineering. After a three-year sojourn in Europe, Gibbs spent the rest of his career at Yale, where he was a professor of mathematical physics from 1871 until his death in 1903. Working in relative isolation, he became the earliest theoretical scientist in the United States to earn an international reputation and was praised by Albert Einstein as "the greatest mind in American history." In 1901, Gibbs received what was then considered the highest honor awarded by the international scientific community, the Copley Medal of the Royal Society of London, "for his contributions to mathematical physics." (Full article...)
  • Image 9 Hilary Putnam

    The Quine–Putnam indispensability argument is an argument in the philosophy of mathematics for the existence of abstract mathematical objects such as numbers and sets, a position known as mathematical platonism. It was named after the philosophers Willard Quine and Hilary Putnam, and is one of the most important arguments in the philosophy of mathematics.

    Although elements of the indispensability argument may have originated with thinkers such as Gottlob Frege and Kurt Gödel, Quine's development of the argument was unique for introducing to it a number of his philosophical positions such as naturalism, confirmational holism, and the criterion of ontological commitment. Putnam gave Quine's argument its first detailed formulation in his 1971 book Philosophy of Logic. He later came to disagree with various aspects of Quine's thinking, however, and formulated his own indispensability argument based on the no miracles argument in the philosophy of science. A standard form of the argument in contemporary philosophy is credited to Mark Colyvan; whilst being influenced by both Quine and Putnam, it differs in important ways from their formulations. It is presented in the Stanford Encyclopedia of Philosophy: (Full article...)
  • Image 10
    The manipulations of the Rubik's Cube form the Rubik's Cube group.

    In mathematics, a group is a set with an operation that associates an element of the set to every pair of elements of the set (as does every binary operation) and satisfies the following constraints: the operation is associative, it has an identity element, and every element of the set has an inverse element.

    Many mathematical structures are groups endowed with other properties. For example, the integers with the addition operation form an infinite group, which is generated by a single element called 1 (these properties characterize the integers in a unique way). (Full article...)
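
    As a concrete illustration (not from the excerpt), the group axioms can be checked by brute force for a small finite example such as addition modulo n; a minimal Python sketch:

        def is_group(elements, op):
            # Associativity: (a op b) op c == a op (b op c) for all triples.
            assoc = all(op(op(a, b), c) == op(a, op(b, c))
                        for a in elements for b in elements for c in elements)
            # Identity: some e with e op a == a == a op e for every a.
            ids = [e for e in elements if all(op(e, a) == a == op(a, e) for a in elements)]
            # Inverses: every a has some b with a op b == identity.
            has_inv = bool(ids) and all(any(op(a, b) == ids[0] for b in elements)
                                        for a in elements)
            return assoc and bool(ids) and has_inv

        n = 6
        print(is_group(range(n), lambda a, b: (a + b) % n))  # True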
  • Image 11
    Plots of logarithm functions, with three commonly used bases. The special points logb b = 1 are indicated by dotted lines, and all curves intersect in logb 1 = 0.


    In mathematics, the logarithm to base b is the inverse function of exponentiation with base b. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 1000 = 10³, the logarithm base 10 of 1000 is 3, or log10 (1000) = 3. The logarithm of x to base b is denoted as logb (x), or without parentheses, logb x. When the base is clear from the context or is irrelevant it is sometimes written log x.

    The logarithm base 10 is called the decimal or common logarithm and is commonly used in science and engineering. The natural logarithm has the number e ≈ 2.718 as its base; its use is widespread in mathematics and physics because of its very simple derivative. The binary logarithm uses base 2 and is frequently used in computer science. (Full article...)
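
    All three of these logarithms are available in Python's standard library; a small illustration (not part of the article):

        import math

        print(math.log10(1000))      # 3.0 -- common log, since 10**3 == 1000
        print(math.log(math.e**2))   # 2.0 -- natural log, base e
        print(math.log2(1024))       # 10.0 -- binary log, base 2
        print(math.log(1000, 10))    # arbitrary base b via log(x)/log(b);
                                     # may show a tiny floating-point error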
  • Image 12
    Stylistic impression of the number, representing how its decimals go on infinitely

    In mathematics, 0.999... (also written as 0.9̄, 0.9̇, or 0.(9)) denotes the smallest number greater than every number in the sequence (0.9, 0.99, 0.999, ...). It can be proved that this number is 1; that is,

        0.999... = 1.

    Despite common misconceptions, 0.999... is not "almost exactly 1" or "very, very nearly but not quite 1"; rather, 0.999... and "1" are exactly the same number.

    An elementary proof is given below that involves only elementary arithmetic and the fact that there is no positive real number less than all 1/10ⁿ, where n is a natural number, a property that results immediately from the Archimedean property of the real numbers. (Full article...)
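
    A common classroom manipulation (not necessarily the article's proof, whose rigor rests on the Archimedean property mentioned above) treats 0.999... as a number x and shifts the decimal point:

        x = 0.999...
        10x = 9.999...
        10x − x = 9.999... − 0.999...
        9x = 9
        x = 1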
  • Image 13
    In classical mechanics, the Laplace–Runge–Lenz (LRL) vector is a vector used chiefly to describe the shape and orientation of the orbit of one astronomical body around another, such as a binary star or a planet revolving around a star. For two bodies interacting by Newtonian gravity, the LRL vector is a constant of motion, meaning that it is the same no matter where it is calculated on the orbit; equivalently, the LRL vector is said to be conserved. More generally, the LRL vector is conserved in all problems in which two bodies interact by a central force that varies as the inverse square of the distance between them; such problems are called Kepler problems.

    The hydrogen atom is a Kepler problem, since it comprises two charged particles interacting by Coulomb's law of electrostatics, another inverse-square central force. The LRL vector was essential in the first quantum mechanical derivation of the spectrum of the hydrogen atom, before the development of the Schrödinger equation. However, this approach is rarely used today. (Full article...)
  • Image 14 Bust of Shen at the Beijing Ancient Observatory

    Shen Kuo (Chinese: 沈括; 1031–1095) or Shen Gua, courtesy name Cunzhong (存中) and pseudonym Mengqi (now usually given as Mengxi) Weng (夢溪翁), was a Chinese polymath, scientist, and statesman of the Song dynasty (960–1279). Shen was a master in many fields of study including mathematics, optics, and horology. In his career as a civil servant, he became a finance minister, governmental state inspector, head official for the Bureau of Astronomy in the Song court, Assistant Minister of Imperial Hospitality, and also served as an academic chancellor. At court his political allegiance was to the Reformist faction known as the New Policies Group, headed by Chancellor Wang Anshi (1021–1085).

    In his Dream Pool Essays or Dream Torrent Essays (夢溪筆談; Mengxi Bitan) of 1088, Shen was the first to describe the magnetic needle compass, which would be used for navigation (first described in Europe by Alexander Neckam in 1187). Shen discovered the concept of true north in terms of magnetic declination towards the north pole, with experimentation of suspended magnetic needles and "the improved meridian determined by Shen's [astronomical] measurement of the distance between the pole star and true north". This was the decisive step in human history to make compasses more useful for navigation, and may have been a concept unknown in Europe for another four hundred years (evidence of German sundials made circa 1450 show markings similar to Chinese geomancers' compasses in regard to declination). (Full article...)
  • Image 15
    The regular triangular tiling of the plane, whose symmetries are described by the affine symmetric group S̃3

    The affine symmetric groups are a family of mathematical structures that describe the symmetries of the number line and the regular triangular tiling of the plane, as well as related higher-dimensional objects. In addition to this geometric description, the affine symmetric groups may be defined in other ways: as collections of permutations (rearrangements) of the integers (..., −2, −1, 0, 1, 2, ...) that are periodic in a certain sense, or in purely algebraic terms as a group with certain generators and relations. They are studied in combinatorics and representation theory.

    A finite symmetric group consists of all permutations of a finite set. Each affine symmetric group is an infinite extension of a finite symmetric group. Many important combinatorial properties of the finite symmetric groups can be extended to the corresponding affine symmetric groups. Permutation statistics such as descents and inversions can be defined in the affine case. As in the finite case, the natural combinatorial definitions for these statistics also have a geometric interpretation. (Full article...)
    Selected image

    Hypotrochoid
    Credit: Sam Derbyshire
    A hypotrochoid is a curve traced out by a point "attached" to a smaller circle rolling around inside a fixed larger circle. In this example, the hypotrochoid is the red curve that is traced out by the red point 5 units from the center of the black circle of radius 3 as it rolls around inside the blue circle of radius 5. A special case is a hypotrochoid with the inner circle exactly one-half the radius of the outer circle, resulting in an ellipse. Mathematical analysis of the closely related curves called hypocycloids leads to special Lie groups. Both hypotrochoids and epitrochoids (where the moving circle rolls around on the outside of the fixed circle) can be created using the Spirograph drawing toy. These curves have applications in the "real world" in epicyclic and hypocycloidal gearing, which were used in World War II in the construction of portable radar gear and may be used today in 3D printing.
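
    The curve has a standard parametrization; this minimal Python sketch (with R = 5, r = 3, d = 5 taken from the example above) generates points on it:

        import math

        def hypotrochoid(theta, R=5.0, r=3.0, d=5.0):
            # Point traced by a pen d units from the center of a circle of
            # radius r rolling inside a fixed circle of radius R.
            x = (R - r) * math.cos(theta) + d * math.cos((R - r) / r * theta)
            y = (R - r) * math.sin(theta) - d * math.sin((R - r) / r * theta)
            return x, y

        # With R = 5 and r = 3 the curve closes after 3 revolutions (theta up to 6*pi).
        points = [hypotrochoid(6 * math.pi * k / 1000) for k in range(1001)]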

    Good articles

      These are Good articles, which meet a core set of high editorial standards.

  • Image 3
    The small set expansion hypothesis or small set expansion conjecture in computational complexity theory is an unproven computational hardness assumption. Under the small set expansion hypothesis it is assumed to be computationally infeasible to distinguish between a certain class of expander graphs called "small set expanders" and other graphs that are very far from being small set expanders. This assumption implies the hardness of several other computational problems, and the optimality of certain known approximation algorithms.

    The small set expansion hypothesis is related to the unique games conjecture, another unproven computational hardness assumption according to which accurately approximating the value of certain games is computationally infeasible. If the small set expansion hypothesis is true, then so is the unique games conjecture. (Full article...)
  • Image 4 Berlin, 1959

    Andrew Mattei Gleason (1921–2008) was an American mathematician who made fundamental contributions to widely varied areas of mathematics, including the solution of Hilbert's fifth problem, and was a leader in reform and innovation in mathematics teaching at all levels. Gleason's theorem in quantum logic and the Greenwood–Gleason graph, an important example in Ramsey theory, are named for him.

    As a young World War II naval officer, Gleason broke German and Japanese military codes. After the war he spent his entire academic career at Harvard University, from which he retired in 1992. His numerous academic and scholarly leadership posts included chairmanship of the Harvard Mathematics Department and the Harvard Society of Fellows, and presidency of the American Mathematical Society. He continued to advise the United States government on cryptographic security, and the Commonwealth of Massachusetts on mathematics education for children, almost until the end of his life. (Full article...)
  • Image 5
    In this example, the alternating sum of angles (clockwise from the bottom) is 90° − 45° + 22.5° − 22.5° + 45° − 90° + 22.5° − 22.5° = 0°. Since it adds to zero, the crease pattern may be flat-folded.

    Kawasaki's theorem or Kawasaki–Justin theorem is a theorem in the mathematics of paper folding that describes the crease patterns with a single vertex that may be folded to form a flat figure. It states that the pattern is flat-foldable if and only if alternatingly adding and subtracting the angles of consecutive folds around the vertex gives an alternating sum of zero.
    Crease patterns with more than one vertex do not obey such a simple criterion, and are NP-hard to fold.

    The theorem is named after one of its discoverers, Toshikazu Kawasaki. However, several others also contributed to its discovery, and it is sometimes called the Kawasaki–Justin theorem or Husimi's theorem after other contributors, Jacques Justin and Kôdi Husimi. (Full article...)
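
    The criterion is mechanical to check. A minimal Python sketch (illustrative, not part of the article) for a single-vertex pattern given as a list of consecutive angles in degrees:

        def flat_foldable(angles, tol=1e-9):
            # Kawasaki's criterion: the angles around the vertex must sum to
            # 360 degrees and their alternating sum must be zero.
            if abs(sum(angles) - 360.0) > tol:
                return False
            alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
            return abs(alternating) < tol

        # The example from the caption above:
        print(flat_foldable([90, 45, 22.5, 22.5, 45, 90, 22.5, 22.5]))  # True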
  • Image 6 Isosceles triangle with vertical axis of symmetry

    In geometry, an isosceles triangle (/aɪˈsɒsəliːz/) is a triangle that has two sides of equal length. Sometimes it is specified as having exactly two sides of equal length, and sometimes as having at least two sides of equal length, the latter version thus including the equilateral triangle as a special case.
    Examples of isosceles triangles include the isosceles right triangle, the golden triangle, and the faces of bipyramids and certain Catalan solids.

    The mathematical study of isosceles triangles dates back to ancient Egyptian mathematics and Babylonian mathematics. Isosceles triangles have been used as decoration from even earlier times, and appear frequently in architecture and design, for instance in the pediments and gables of buildings. (Full article...)
  • Image 7
    17 indivisible camels

    The 17-animal inheritance puzzle is a mathematical puzzle involving unequal but fair allocation of indivisible goods, usually stated in terms of inheritance of a number of large animals (17 camels, 17 horses, 17 elephants, etc.) which must be divided in some stated proportion among a number of beneficiaries. It is a common example of an apportionment problem.

    Despite often being framed as a puzzle, it is more an anecdote about a curious calculation than a problem with a clear mathematical solution. Beyond recreational mathematics and mathematics education, the story has been repeated as a parable with varied metaphorical meanings. (Full article...)
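
    In the classic telling the stated proportions are 1/2, 1/3, and 1/9 (the traditional choice; the excerpt above does not fix them), which sum to 17/18 rather than 1. Lending one camel makes every share a whole number and leaves the lent camel over, as this quick Python check of the arithmetic shows:

        from fractions import Fraction

        shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 9)]
        print(sum(shares))                       # 17/18 -- the proportions don't exhaust the estate
        herd = 17 + 1                            # borrow one camel to make 18
        parts = [int(s * herd) for s in shares]
        print(parts, sum(parts))                 # [9, 6, 2] 17 -- whole shares; borrowed camel returned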
  • Image 8
    Malfatti circles

    In geometry, the Malfatti circles are three circles inside a given triangle such that each circle is tangent to the other two and to two sides of the triangle. They are named after Gian Francesco Malfatti, who made early studies of the problem of constructing these circles in the mistaken belief that they would have the largest possible total area of any three disjoint circles within the triangle.

    Malfatti's problem has been used to refer both to the problem of constructing the Malfatti circles and to the problem of finding three area-maximizing circles within a triangle.
    A simple construction of the Malfatti circles was given by Steiner (1826), and many mathematicians have since studied the problem. Malfatti himself supplied a formula for the radii of the three circles, and they may also be used to define two triangle centers, the Ajima–Malfatti points of a triangle. (Full article...)
  • Image 9
    Measuring the width of a Reuleaux triangle as the distance between parallel supporting lines. Because this distance does not depend on the direction of the lines, the Reuleaux triangle is a curve of constant width.

    In geometry, a curve of constant width is a simple closed curve in the plane whose width (the distance between parallel supporting lines) is the same in all directions. The shape bounded by a curve of constant width is a body of constant width or an orbiform, the name given to these shapes by Leonhard Euler. Standard examples are the circle and the Reuleaux triangle. These curves can also be constructed using circular arcs centered at crossings of an arrangement of lines, as the involutes of certain curves, or by intersecting circles centered on a partial curve.

    Every body of constant width is a convex set, its boundary crossed at most twice by any line, and if the line crosses perpendicularly it does so at both crossings, separated by the width. By Barbier's theorem, the body's perimeter is exactly π times its width, but its area depends on its shape, with the Reuleaux triangle having the smallest possible area for its width and the circle the largest. Every superset of a body of constant width includes pairs of points that are farther apart than the width, and every curve of constant width includes at least six points of extreme curvature. Although the Reuleaux triangle is not smooth, curves of constant width can always be approximated arbitrarily closely by smooth curves of the same constant width. (Full article...)
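
    Barbier's theorem is easy to verify by hand for the two standard examples: a circle of width w is a circle of diameter w, with perimeter πw directly; and a Reuleaux triangle of width w is bounded by three circular arcs, each of radius w subtending an angle of π/3, so its perimeter is 3 · (π/3) · w = πw as well.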
  • Image 10
    A three-page book embedding of the complete graph K5. Because it is not a planar graph, it is not possible to embed this graph without crossings on fewer pages, so its book thickness is three.

    In graph theory, a book embedding is a generalization of planar embedding of a graph to embeddings in a book, a collection of half-planes all having the same line as their boundary. Usually, the vertices of the graph are required to lie on this boundary line, called the spine, and the edges are required to stay within a single half-plane. The book thickness of a graph is the smallest possible number of half-planes for any book embedding of the graph. Book thickness is also called pagenumber, stacknumber or fixed outerthickness. Book embeddings have also been used to define several other graph invariants including the pagewidth and book crossing number.

    Every graph with n vertices has book thickness at most ⌈n/2⌉, and this formula gives the exact book thickness for complete graphs. The graphs with book thickness one are the outerplanar graphs. The graphs with book thickness at most two are the subhamiltonian graphs, which are always planar; more generally, every planar graph has book thickness at most four. All minor-closed graph families, and in particular the graphs with bounded treewidth or bounded genus, also have bounded book thickness. It is NP-hard to determine the exact book thickness of a given graph, with or without knowing a fixed vertex ordering along the spine of the book. Testing the existence of a three-page book embedding of a graph, given a fixed ordering of the vertices along the spine of the embedding, has unknown computational complexity: it is neither known to be solvable in polynomial time nor known to be NP-hard. (Full article...)
  • Image 11
    In mathematics, the derivative is a fundamental tool that quantifies the sensitivity of change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.

    There are multiple different notations for differentiation, two of the most commonly used being Leibniz notation and prime notation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances. (Full article...)
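
    The chord-slope picture suggests a direct numerical approximation. A minimal Python sketch (illustrative, not part of the article) using central differences, applied to the position/velocity/acceleration example above:

        def derivative(f, x, h=1e-6):
            # Central difference: slope of a short chord through the graph,
            # which approaches the tangent slope as h shrinks.
            return (f(x + h) - f(x - h)) / (2 * h)

        s = lambda t: t**3                       # position
        v = lambda t: derivative(s, t)           # velocity, exact value 3*t**2
        a = lambda t: derivative(v, t, h=1e-4)   # acceleration, exact value 6*t
        print(v(2.0), a(2.0))                    # approximately 12.0 12.0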
  • Image 12
    A 1-forest (a maximal pseudoforest), formed by three 1-trees

    In graph theory, a pseudoforest is an undirected graph in which every connected component has at most one cycle. That is, it is a system of vertices and edges connecting pairs of vertices, such that no two cycles of consecutive edges share any vertex with each other, nor can any two cycles be connected to each other by a path of consecutive edges. A pseudotree is a connected pseudoforest.

    The names are justified by analogy to the more commonly studied trees and forests. (A tree is a connected graph with no cycles; a forest is a disjoint union of trees.) Gabow and Tarjan attribute the study of pseudoforests to Dantzig's 1963 book on linear programming, in which pseudoforests arise in the solution of certain network flow problems. Pseudoforests also form graph-theoretic models of functions and occur in several algorithmic problems. Pseudoforests are sparse graphs – their number of edges is linearly bounded in terms of their number of vertices (in fact, they have at most as many edges as they have vertices) – and their matroid structure allows several other families of sparse graphs to be decomposed as unions of forests and pseudoforests. The name "pseudoforest" comes from Picard & Queyranne (1982). (Full article...)
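
    The defining condition is equivalent to requiring that every connected component have at most as many edges as vertices, which gives a simple test. A minimal Python sketch (illustrative) using union-find:

        def is_pseudoforest(n, edges):
            # A graph is a pseudoforest iff each connected component has at
            # most one cycle, i.e. no more edges than vertices.
            parent = list(range(n))

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]  # path halving
                    v = parent[v]
                return v

            for u, v in edges:
                parent[find(u)] = find(v)

            vert_count, edge_count = {}, {}
            for v in range(n):
                root = find(v)
                vert_count[root] = vert_count.get(root, 0) + 1
            for u, v in edges:
                root = find(u)
                edge_count[root] = edge_count.get(root, 0) + 1
            return all(edge_count.get(c, 0) <= vert_count[c] for c in vert_count)

        # Triangle plus a pendant edge: 4 vertices, 4 edges, one cycle -> True.
        print(is_pseudoforest(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))
        # Two triangles sharing a vertex: 5 vertices, 6 edges -> False.
        print(is_pseudoforest(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]))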
  • More good articles

    Did you know (auto-generated)

    More did you know

    Selected article


    Carl Friedrich Gauss
    Image credit: C.A. Jensen (1792–1870)

    Carl Friedrich Gauss (30 April 1777 – 23 February 1855) was a German mathematician and scientist of profound genius who contributed significantly to many fields, including number theory, analysis, differential geometry, geodesy, electricity, magnetism, astronomy and optics. Known as "the prince of mathematicians" and "greatest mathematician since antiquity", Gauss had a remarkable influence in many fields of mathematics and science and is ranked as one of history's most influential mathematicians.

    Gauss was a child prodigy, of whom there are many anecdotes pertaining to his astounding precocity while a mere toddler, and made his first ground-breaking mathematical discoveries while still a teenager. He completed Disquisitiones Arithmeticae, his magnum opus, at the age of twenty-one (1798), though it was not published until 1801. This work was fundamental in consolidating number theory as a discipline and has shaped the field to the present day. (Full article...)

    View all selected articles

    Subcategories

    Algebra | Arithmetic | Analysis | Complex analysis | Applied mathematics | Calculus | Category theory | Chaos theory | Combinatorics | Dynamical systems | Fractals | Game theory | Geometry | Algebraic geometry | Graph theory | Group theory | Linear algebra | Mathematical logic | Model theory | Multi-dimensional geometry | Number theory | Numerical analysis | Optimization | Order theory | Probability and statistics | Set theory | Statistics | Topology | Algebraic topology | Trigonometry | Linear programming


    Mathematics | History of mathematics | Mathematicians | Awards | Education | Literature | Notation | Organizations | Theorems | Proofs | Unsolved problems

    Full category tree.

    Topics in mathematics

    General | Foundations | Number theory | Discrete mathematics | Algebra | Analysis | Geometry and topology | Applied mathematics

    Index of mathematics articles


    WikiProjects

    The Mathematics WikiProject is the center for mathematics-related editing on Wikipedia. Join the discussion on the project's talk page.

    In other Wikimedia projects

    The following Wikimedia Foundation sister projects provide more on this subject:

    More portals

    Discover Wikipedia using portals