An arb_t represents a ball over the real numbers, that is, an interval \([m \pm r] \equiv [m-r, m+r]\) where the midpoint \(m\) and the radius \(r\) are (extended) real numbers and \(r\) is nonnegative (possibly infinite). The result of an (approximate) operation done on arb_t variables is a ball which contains the result of the (mathematically exact) operation applied to any choice of points in the input balls. In general, the output ball is not the smallest possible.
The precision parameter passed to each function roughly indicates the precision to which calculations on the midpoint are carried out (operations on the radius are always done using a fixed, small precision).
For arithmetic operations, the precision parameter currently simply specifies the precision of the corresponding arf_t operation. In the future, the arithmetic might be made faster by incorporating sloppy rounding (typically equivalent to a loss of 1-2 bits of effective working precision) when the result is known to be inexact (while still propagating errors rigorously, of course). Arithmetic operations done on exact input with exactly representable output are always guaranteed to produce exact output.
For more complex operations, the precision parameter indicates a minimum working precision (algorithms might allocate extra internal precision to attempt to produce an output accurate to the requested number of bits, especially when the required precision can be estimated easily, but this is not generally required).
If the precision is increased and the inputs either are exact or are computed with increased accuracy as well, the output should converge proportionally, absent any bugs. The general intended strategy for using ball arithmetic is to add a few guard bits, and then repeat the calculation as necessary with an exponentially increasing number of guard bits (Ziv’s strategy) until the result is exact enough for one’s purposes (typically the first attempt will be successful).
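For example, a minimal sketch of this strategy in C (assuming, for illustration, a target of 53 accurate bits, an exact input, and arb_exp() as the operation):

    #include "arb.h"

    /* Sketch of Ziv's strategy: recompute at doubling precision until the
       output ball has at least 53 bits of relative accuracy. Assumes the
       input x is exact (or accurate enough) so that the loop terminates. */
    void exp_to_53_bits(arb_t res, const arb_t x)
    {
        slong prec;
        for (prec = 64; ; prec *= 2)
        {
            arb_exp(res, x, prec);
            if (arb_rel_accuracy_bits(res) >= 53)
                break;
        }
    }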
The following balls with an infinite or NaN component are permitted, and may be returned as output from functions.
The arb_t type is almost identical semantically to the legacy fmprb_t type, but uses a more efficient internal representation. Whereas the midpoint and radius of an fmprb_t both have the same type, the arb_t type uses an arf_t for the midpoint and a mag_t for the radius. Code designed to manipulate the radius of an fmprb_t directly can be ported to the arb_t type by writing the radius to a temporary arf_t variable, manipulating that variable, and then converting back to the mag_t radius. Alternatively, mag_t methods can be used directly where available.
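For example, a sketch of this porting pattern, here doubling the radius through a temporary arf_t (in practice mag_mul_2exp_si() would do this directly):

    #include "arb.h"

    /* Double the radius of x by round-tripping it through an arf_t. */
    void double_radius(arb_t x)
    {
        arf_t t;
        arf_init(t);
        arf_set_mag(t, arb_radref(x));    /* copy the mag_t radius to an arf_t */
        arf_mul_2exp_si(t, t, 1);         /* manipulate the arf_t value */
        arf_get_mag(arb_radref(x), t);    /* convert back, rounding upward */
        arf_clear(t);
    }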
An arb_struct consists of an arf_struct (the midpoint) and a mag_struct (the radius). An arb_t is defined as an array of length one of type arb_struct, permitting an arb_t to be passed by reference.
Alias for arb_struct *, used for vectors of numbers.
Alias for const arb_struct *, used for vectors of numbers when passed as constant input to functions.
Initializes the variable x for use. Its midpoint and radius are both set to zero.
Returns a pointer to an array of n initialized arb_struct entries.
Clears an array of n initialized arb_struct entries.
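A minimal usage example:

    #include "arb.h"

    int main(void)
    {
        arb_ptr vec = _arb_vec_init(10);   /* 10 balls, all set to zero */
        arb_one(vec + 3);                  /* entries are ordinary arb_t values */
        _arb_vec_clear(vec, 10);
        return 0;
    }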
Sets y to the value of x, rounded to prec bits.
Sets y to \(x \cdot 2^e\), rounded to prec bits.
Sets y to the rational number x, rounded to prec bits.
Sets res to the value specified by the human-readable string inp. The input may be a decimal floating-point literal, such as “25”, “0.001”, “7e+141” or “-31.4159e-1”, and may also consist of two such literals separated by the symbol “+/-” and optionally enclosed in brackets, e.g. “[3.25 +/- 0.0001]”, or simply “[+/- 10]” with an implicit zero midpoint. The output is rounded to prec bits, and if the binary-to-decimal conversion is inexact, the resulting error is added to the radius.
The symbols “inf” and “nan” are recognized (a nan midpoint results in an indeterminate interval, with infinite radius).
Returns 0 if successful and nonzero if unsuccessful. If unsuccessful, the result is set to an indeterminate interval.
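For example, a minimal sketch (printing via arb_printd(), described below):

    #include "arb.h"

    int main(void)
    {
        arb_t x;
        arb_init(x);
        if (arb_set_str(x, "[3.25 +/- 0.0001]", 53) == 0)
        {
            arb_printd(x, 10);     /* prints a decimal representation of the ball */
            flint_printf("\n");
        }
        arb_clear(x);
        return 0;
    }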
Returns a nice human-readable representation of x, with at most n digits of the midpoint printed.
With default flags, the output can be parsed back with arb_set_str(), and this is guaranteed to produce an interval containing the original interval x.
By default, the output is rounded so that the value given for the midpoint is correct up to 1 ulp (unit in the last decimal place).
If ARB_STR_MORE is added to flags, more (possibly incorrect) digits may be printed.
If ARB_STR_NO_RADIUS is added to flags, the radius is not included in the output if at least 1 digit of the midpoint can be printed.
By adding a multiple m of ARB_STR_CONDENSE to flags, strings of more than three times m consecutive digits are condensed, only printing the leading and trailing m digits along with brackets indicating the number of digits omitted (useful when computing values to extremely high precision).
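For example, a sketch contrasting the default output with ARB_STR_NO_RADIUS:

    #include "arb.h"

    int main(void)
    {
        arb_t x;
        char * s;
        arb_init(x);
        arb_const_pi(x, 128);
        s = arb_get_str(x, 20, 0);                  /* default: parseable, 1 ulp */
        flint_printf("%s\n", s);
        flint_free(s);
        s = arb_get_str(x, 20, ARB_STR_NO_RADIUS);  /* midpoint digits only */
        flint_printf("%s\n", s);
        flint_free(s);
        arb_clear(x);
        return 0;
    }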
Prints x in decimal. The printed value of the radius is not adjusted to compensate for the fact that the binary-to-decimal conversion of both the midpoint and the radius introduces additional error.
Prints a nice decimal representation of x. By default, the output is guaranteed to be correct to within one unit in the last digit. An error bound is also printed explicitly. See arb_get_str() for details.
Generates a random ball. The midpoint and radius will both be finite.
Generates a random number with zero radius.
Generates a random number with radius around \(2^{-\text{prec}}\) times the magnitude of the midpoint.
Generates a random number with midpoint and radius chosen independently, possibly giving a very large interval.
Generates a random interval, possibly having NaN or an infinity as the midpoint and possibly having an infinite radius.
Sets q to a random rational number from the interval represented by x. A denominator is chosen by multiplying the binary denominator of x by a random integer up to bits bits.
The outcome is undefined if the midpoint or radius of x is non-finite, or if the exponent of the midpoint or radius is so large or small that representing the endpoints as exact rational numbers would cause overflows.
Adds err, which is assumed to be nonnegative, to the radius of x.
Adds the supremum of err, which is assumed to be nonnegative, to the radius of x.
Sets z to a ball containing both x and y.
Sets u to the upper bound for the absolute value of x, rounded up to prec bits. If x contains NaN, the result is NaN.
Sets u to the lower bound for the absolute value of x, rounded down to prec bits. If x contains NaN, the result is NaN.
Sets z to an upper bound for the absolute value of x. If x contains NaN, the result is positive infinity.
Sets z to a lower bound for the absolute value of x. If x contains NaN, the result is zero.
Sets z to a lower bound for the signed value of x, or zero if x overlaps with the negative half-axis. If x contains NaN, the result is zero.
Computes the exact interval represented by x, in the form of an integer interval multiplied by a power of two, i.e. \(x = [a, b] \times 2^{\text{exp}}\).
The outcome is undefined if the midpoint or radius of x is non-finite, or if the difference in magnitude between the midpoint and radius is so large that representing the endpoints exactly would cause overflows.
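For example, a sketch (the ball \(2.5 \pm 0.25\) is exactly representable, giving \([9, 11] \times 2^{-2}\)):

    #include "arb.h"

    int main(void)
    {
        arb_t x;
        fmpz_t a, b, e;
        arb_init(x);
        fmpz_init(a); fmpz_init(b); fmpz_init(e);
        arb_set_str(x, "[2.5 +/- 0.25]", 53);
        arb_get_interval_fmpz_2exp(a, b, e, x);   /* x = [a, b] * 2^e */
        flint_printf("[");  fmpz_print(a);
        flint_printf(", "); fmpz_print(b);
        flint_printf("] * 2^"); fmpz_print(e);
        flint_printf("\n");
        fmpz_clear(a); fmpz_clear(b); fmpz_clear(e);
        arb_clear(x);
        return 0;
    }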
Sets x to a ball containing the interval \([a, b]\). We require that \(a \le b\).
Constructs an interval \([a, b]\) containing the ball x. The MPFR version uses the precision of the output variables.
Returns the effective relative error of x measured in bits, defined as the difference between the position of the top bit in the radius and the top bit in the midpoint, plus one. The result is clamped between plus/minus ARF_PREC_EXACT.
Returns the effective relative accuracy of x measured in bits, equal to the negative of the return value from arb_rel_error_bits().
Returns the number of bits needed to represent the absolute value of the mantissa of the midpoint of x, i.e. the minimum precision sufficient to represent x exactly. Returns 0 if the midpoint of x is a special value.
Sets y to a trimmed copy of x: rounds x to a number of bits equal to the accuracy of x (as indicated by its radius), plus a few guard bits. The resulting ball is guaranteed to contain x, but is more economical if x has less than full accuracy.
If x contains a unique integer, sets z to that value and returns nonzero. Otherwise (if x represents no integers or more than one integer), returns zero.
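For example:

    #include "arb.h"

    int main(void)
    {
        arb_t x;
        fmpz_t n;
        arb_init(x);
        fmpz_init(n);
        arb_set_str(x, "[42 +/- 0.3]", 53);
        if (arb_get_unique_fmpz(n, x))   /* succeeds: 42 is the only integer in x */
        {
            fmpz_print(n);
            flint_printf("\n");
        }
        fmpz_clear(n);
        arb_clear(x);
        return 0;
    }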
Sets y to a ball containing \(\lfloor x \rfloor\) and \(\lceil x \rceil\) respectively, with the midpoint of y rounded to at most prec bits.
Assuming that x is finite and not exactly zero, computes integers mid, rad, exp such that \(x \in [\mathrm{mid} - \mathrm{rad}, \mathrm{mid} + \mathrm{rad}] \times 10^{\mathrm{exp}}\) and such that the larger of mid and rad has at least n digits plus a few guard digits. If x is infinite or exactly zero, the outputs are all set to zero.
Returns nonzero iff zero is not contained in the interval represented by x.
Returns nonzero iff the midpoint and radius of x are both finite floating-point numbers, i.e. not infinities or NaN.
Returns nonzero iff x and y are equal as balls, i.e. have both the same midpoint and radius.
Note that this is not the same thing as testing whether both x and y certainly represent the same real number, unless either x or y is exact (and neither contains NaN). To test whether both operands might represent the same mathematical quantity, use arb_overlaps() or arb_contains(), depending on the circumstance.
Returns nonzero iff all points p in the interval represented by x satisfy, respectively, \(p > 0\), \(p \ge 0\), \(p < 0\), \(p \le 0\). If x contains NaN, returns zero.
Returns nonzero iff x and y have some point in common. If either x or y contains NaN, this function always returns nonzero (as a NaN could be anything, it could in particular contain any number that is included in the other operand).
Returns nonzero iff the given number (or ball) y is contained in the interval represented by x.
If x contains NaN, this function always returns nonzero (as it could represent anything, and in particular could represent all the points included in y). If y contains NaN and x does not, it always returns zero.
Returns nonzero iff there is any point p in the interval represented by x satisfying, respectively, \(p = 0\), \(p < 0\), \(p \le 0\), \(p > 0\), \(p \ge 0\). If x contains NaN, returns nonzero.
Respectively performs the comparison \(x = y\), \(x \ne y\), \(x < y\), \(x \le y\), \(x > y\), \(x \ge y\) in a mathematically meaningful way. If the comparison \(t \, (\operatorname{op}) \, u\) holds for all \(t \in x\) and all \(u \in y\), returns 1. Otherwise, returns 0.
The balls x and y are viewed as subintervals of the extended real line. Note that balls that are formally different can compare as equal under this definition: for example, \([-\infty \pm 3] = [-\infty \pm 0]\). Also \([-\infty] \le [\infty \pm \infty]\).
The output is always 0 if either input has NaN as midpoint.
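For example, two overlapping inexact balls are not equal as balls, and neither \(x = y\) nor \(x \ne y\) can be decided:

    #include "arb.h"

    int main(void)
    {
        arb_t x, y;
        arb_init(x); arb_init(y);
        arb_set_str(x, "[2 +/- 0.1]", 53);
        arb_set_str(y, "[2 +/- 0.2]", 53);
        flint_printf("equal as balls: %d\n", arb_equal(x, y));   /* 0 */
        flint_printf("x = y: %d, x != y: %d\n",
            arb_eq(x, y), arb_ne(x, y));                         /* 0, 0 */
        flint_printf("overlaps: %d\n", arb_overlaps(x, y));      /* 1 */
        arb_clear(x); arb_clear(y);
        return 0;
    }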
Sets y to the absolute value of x. No attempt is made to improve the interval represented by x if it contains zero.
Sets \(z = x + y\), rounded to prec bits. The precision can be ARF_PREC_EXACT provided that the result fits in memory.
Sets \(z = x + m \cdot 2^e\), rounded to prec bits. The precision can be ARF_PREC_EXACT provided that the result fits in memory.
Sets \(z = x - y\), rounded to prec bits. The precision can be ARF_PREC_EXACT provided that the result fits in memory.
Sets \(z = x \cdot y\), rounded to prec bits. The precision can be ARF_PREC_EXACT provided that the result fits in memory.
Sets \(z = z + x \cdot y\), rounded to prec bits. The precision can be ARF_PREC_EXACT provided that the result fits in memory.
Sets \(z = z - x \cdot y\), rounded to prec bits. The precision can be ARF_PREC_EXACT provided that the result fits in memory.
Sets \(z = x / y\), rounded to prec bits. If y contains zero, z is set to \(0 \pm \infty\). Otherwise, writing \(x = m_x \pm r_x\) and \(y = m_y \pm r_y\), error propagation uses the rule
\[\left| \frac{m_x}{m_y} - \frac{m_x + \xi_1 r_x}{m_y + \xi_2 r_y} \right| = \left| \frac{m_x \xi_2 r_y - m_y \xi_1 r_x}{m_y (m_y + \xi_2 r_y)} \right| \le \frac{|m_x| r_y + |m_y| r_x}{|m_y| (|m_y| - r_y)},\]
where \(-1 \le \xi_1, \xi_2 \le 1\), and where the triangle inequality has been applied to the numerator and the reverse triangle inequality has been applied to the denominator.
Sets z to the square root of x, rounded to prec bits.
If \(x = m \pm r\) where \(m \ge r \ge 0\), the propagated error is bounded by \(\sqrt{m} - \sqrt{m-r} = \sqrt{m} (1 - \sqrt{1 - r/m}) \le \sqrt{m} (r/m + (r/m)^2)/2\).
Sets z to the square root of x, assuming that x represents a nonnegative number (i.e. discarding any negative numbers in the input interval).
Sets z to the reciprocal square root of x, rounded to prec bits. At high precision, this is faster than computing a square root.
Sets \(z = \sqrt{1+x}-1\), computed accurately when \(x \approx 0\).
Sets z to the k-th root of x, rounded to prec bits. This function selects between different algorithms. For large k, it evaluates \(\exp(\log(x)/k)\). For small k, it uses arf_root() at the midpoint and computes a propagated error bound as follows: if the input interval is \([m-r, m+r]\) with \(r \le m\), the error is largest at \(m-r\), where it satisfies
\[m^{1/k} - (m-r)^{1/k} = m^{1/k} \left(1 - (1-r/m)^{1/k}\right) = m^{1/k} \left(1 - e^{\log(1-r/m)/k}\right) \le m^{1/k} \min\left(1, -\log(1-r/m)/k\right).\]
This is evaluated using mag_log1p().
Sets \(y = b^e\) using binary exponentiation (with an initial division if \(e < 0\)). Provided that b and e are small enough and the exponent is positive, the exact power can be computed by setting the precision to ARF_PREC_EXACT.
Note that these functions can get slow if the exponent is extremely large (in such cases arb_pow() may be superior).
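For example, a sketch using arb_pow_ui(), one of the variants covered by the description above, to compute \(3^{100}\) exactly:

    #include "arb.h"

    int main(void)
    {
        arb_t b, y;
        arb_init(b); arb_init(y);
        arb_set_ui(b, 3);
        arb_pow_ui(y, b, 100, ARF_PREC_EXACT);   /* exact: small base, e > 0 */
        arb_printd(y, 50);
        flint_printf("\nexact: %d\n", arb_is_exact(y));   /* 1 */
        arb_clear(b); arb_clear(y);
        return 0;
    }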
Sets \(y = b^e\), computed as \(y = (b^{1/q})^p\) if the denominator of \(e = p/q\) is small, and generally as \(y = \exp(e \log b)\).
Note that this function can get slow if the exponent is extremely large (in such cases arb_pow() may be superior).
Sets \(z = \log(x)\).
At low to medium precision (up to about 4096 bits), arb_log_arf() uses table-based argument reduction and fast Taylor series evaluation via _arb_atan_taylor_rs(). At high precision, it falls back to MPFR. The function arb_log() simply calls arb_log_arf() with the midpoint as input, and separately adds the propagated error.
Computes \(\log(k_1)\), given \(\log(k_0)\) where \(k_0 < k_1\). At high precision, this function uses the formula \(\log(k_1) = \log(k_0) + 2 \operatorname{atanh}((k_1-k_0)/(k_1+k_0))\), evaluating the inverse hyperbolic tangent using binary splitting (for best efficiency, \(k_0\) should be large and \(k_1 - k_0\) should be small). Otherwise, it ignores \(\log(k_0)\) and evaluates the logarithm the usual way.
Sets \(z = \log(1+x)\), computed accurately when \(x \approx 0\).
Sets \(z = \exp(x)\). Error propagation is done using the following rule: assuming \(x = m \pm r\), the error is largest at \(m + r\), and we have \(\exp(m+r) - \exp(m) = \exp(m) (\exp(r)-1) \le r \exp(m+r)\).
Sets \(s = \sin(x)\), \(c = \cos(x)\). Error propagation uses the rule \(|\sin(m \pm r) - \sin(m)| \le \min(r,2)\).
Sets \(s = \sin(\pi x)\), \(c = \cos(\pi x)\).
Sets \(s = \sin(\pi x)\), \(c = \cos(\pi x)\) where \(x\) is a rational number (whose numerator and denominator are assumed to be reduced). We first use trigonometric symmetries to reduce the argument to the octant \([0, 1/4]\). Then we either multiply by a numerical approximation of \(\pi\) and evaluate the trigonometric function the usual way, or we use algebraic methods, depending on which is estimated to be faster. Since the argument has been reduced to the first octant, the first of these two methods gives full accuracy even if the original argument is close to some root other than the origin.
Sets \(z = \operatorname{atan}(x)\).
At low to medium precision (up to about 4096 bits), arb_atan_arf() uses table-based argument reduction and fast Taylor series evaluation via _arb_atan_taylor_rs(). At high precision, it falls back to MPFR. The function arb_atan() simply calls arb_atan_arf() with the midpoint as input, and separately adds the propagated error.
The function arb_atan_arf() uses lookup tables if possible, and otherwise falls back to arb_atan_arf_bb().
Sets r to the argument (phase) of the complex number \(a + bi\), with the branch cut discontinuity on \((-\infty,0]\). We define \(\operatorname{atan2}(0,0) = 0\), and for \(a < 0\), \(\operatorname{atan2}(0,a) = \pi\).
Sets \(s = \sinh(x)\), \(c = \cosh(x)\). If the midpoint of \(x\) is close to zero and the hyperbolic sine is to be computed, evaluates \((e^{2x}\pm1) / (2e^x)\) via arb_expm1() to avoid loss of accuracy. Otherwise evaluates \((e^x \pm e^{-x}) / 2\).
Sets \(y = \tanh(x) = \sinh(x) / \cosh(x)\), evaluated via arb_expm1() as \(\tanh(x) = (e^{2x} - 1) / (e^{2x} + 1)\) if \(|x|\) is small, and as \(\tanh(\pm x) = 1 - 2 e^{\mp 2x} / (1 + e^{\mp 2x})\) if \(|x|\) is large.
Sets \(y = \coth(x) = \cosh(x) / \sinh(x)\), evaluated using the same strategy as arb_tanh().
The following functions cache the computed values to speed up repeated calls at the same or lower precision. For further implementation details, see Algorithms for mathematical constants.
Computes Euler’s constant \(\gamma = \lim_{k \rightarrow \infty} (H_k - \log k)\) where \(H_k = 1 + 1/2 + \ldots + 1/k\).
Computes Catalan’s constant \(C = \sum_{n=0}^{\infty} (-1)^n / (2n+1)^2\).
Computes the rising factorial \(z = x (x+1) (x+2) \cdots (x+n-1)\).
The bs version uses binary splitting. The rs version uses rectangular splitting. The rec version uses either bs or rs depending on the input. The default version is currently identical to the rec version. In a future version, it will use the gamma function or asymptotic series when this is more efficient.
The rs version takes an optional step parameter for tuning purposes (to use the default step length, pass zero).
Computes the rising factorial \(z = x (x+1) (x+2) \cdots (x+n-1)\) using binary splitting. If the denominator or numerator of x is large compared to prec, it is more efficient to convert x to an approximation and use arb_rising_ui().
Letting \(u(x) = x (x+1) (x+2) \cdots (x+n-1)\), simultaneously compute \(u(x)\) and \(v(x) = u'(x)\), respectively using binary splitting, rectangular splitting (with optional nonzero step length step to override the default choice), and an automatic algorithm choice.
Computes the factorial \(z = n!\) via the gamma function.
Computes the binomial coefficient \(z = {n \choose k}\), via the rising factorial as \({n \choose k} = (n-k+1)_k / k!\).
Computes the gamma function \(z = \Gamma(x)\).
Computes the logarithmic gamma function \(z = \log \Gamma(x)\). The complex branch structure is assumed, so if \(x \le 0\), the result is an indeterminate interval.
Evaluates \(\zeta(s)\) at \(\mathrm{num}\) consecutive integers s beginning with start and proceeding in increments of step. Uses Borwein’s formula ([Bor2000], [GS2003]), implemented to support fast multi-evaluation (but also works well for a single s).
Requires \(\mathrm{start} \ge 2\). For efficiency, the largest s should be at most about as large as prec. Arguments approaching LONG_MAX will cause overflows. One should therefore only use this function for s up to about prec, and then switch to the Euler product.
The algorithm for single s is basically identical to the one used in MPFR (see [MPFR2012] for a detailed description). In particular, we evaluate the sum backwards to avoid storing more than one \(d_k\) coefficient, and use integer arithmetic throughout since it is convenient and the terms turn out to be slightly larger than \(2^\mathrm{prec}\). The only numerical error in the main loop comes from the division by \(k^s\), which adds less than 1 unit of error per term. For fast multi-evaluation, we repeatedly divide by \(k^{\mathrm{step}}\). Each division reduces the input error and adds at most 1 unit of additional rounding error, so by induction, the error per term is always smaller than 2 units.
Assuming \(s \ge 2\), approximates \(\zeta(s)\) by \(1 + 2^{-s}\) along with a correct error bound. We use the following bounds: for \(s > b\), \(\zeta(s) - 1 < 2^{-b}\), and generally, \(\zeta(s) - (1 + 2^{-s}) < 2^{2-\lfloor 3 s/2 \rfloor}\).
Computes \(\zeta(s)\) using the Euler product. This is fast only if s is large compared to the precision.
Writing \(P(a,b) = \prod_{a \le p \le b} (1 - p^{-s})\), where the product is taken over primes, we have \(1/\zeta(s) = P(2,M) P(M+1,\infty)\).
To bound the error caused by truncating the product at \(M\), we write \(P(M+1,\infty) = 1 - \epsilon(s,M)\). Since \(0 < P(2,M) \le 1\), the absolute error for \(\zeta(s)\) is bounded by \(\epsilon(s,M)\).
According to the analysis in [Fil1992], it holds for all \(s \ge 6\) and \(M \ge 1\) that \(1/P(M+1,\infty) - 1 \le f(s,M) \equiv 2 M^{1-s} / (s/2 - 1)\). Thus, we have \(1/(1-\epsilon(s,M)) - 1 \le f(s,M)\), and expanding the geometric series allows us to conclude that \(\epsilon(s,M) \le f(s,M)\).
Computes \(\zeta(s)\) for even s via the corresponding Bernoulli number.
Computes \(\zeta(s)\) for arbitrary \(s \ge 2\) using a binary splitting implementation of Borwein’s algorithm. This has quasilinear complexity with respect to the precision (assuming that \(s\) is fixed).
Computes \(\zeta(s)\) at num consecutive integers (respectively num even or num odd integers) beginning with \(s = \mathrm{start} \ge 2\), automatically choosing an appropriate algorithm.
Computes \(\zeta(s)\) for nonnegative integer \(s \ne 1\), automatically choosing an appropriate algorithm. This function is intended for numerical evaluation of isolated zeta values; for multi-evaluation, the vector versions are more efficient.
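For example, a sketch evaluating an isolated zeta value:

    #include "arb.h"

    int main(void)
    {
        arb_t z;
        arb_init(z);
        arb_zeta_ui(z, 3, 128);   /* Apery's constant zeta(3) to 128 bits */
        arb_printd(z, 30);
        flint_printf("\n");
        arb_clear(z);
        return 0;
    }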
Sets z to the value of the Riemann zeta function \(\zeta(s)\).
For computing derivatives with respect to \(s\), use arb_poly_zeta_series().
Sets z to the value of the Hurwitz zeta function \(\zeta(s,a)\).
For computing derivatives with respect to \(s\), use arb_poly_zeta_series().
Sets \(b\) to the numerical value of the Bernoulli number \(B_n\) accurate to prec bits, computed by a division of the exact fraction if \(B_n\) is in the global cache or the exact numerator is roughly larger than prec bits, and using arb_bernoulli_ui_zeta() otherwise. This function reads \(B_n\) from the global cache if the number is already cached, but does not automatically extend the cache by itself.
Sets \(b\) to the numerical value of \(B_n\) accurate to prec bits, computed using the formula \(B_{2n} = (-1)^{n+1} 2 (2n)! \zeta(2n) / (2 \pi)^{2n}\).
To avoid potential infinite recursion, we explicitly call the Euler product implementation of the zeta function. We therefore assume that the precision is small enough and \(n\) large enough for the Euler product to converge rapidly (otherwise this function will effectively hang).
Computes the Fibonacci number \(F_n\). Uses the binary squaring algorithm described in [Tak2000]. Provided that n is small enough, an exact Fibonacci number can be computed by setting the precision to ARF_PREC_EXACT.
Sets z to the arithmetic-geometric mean of x and y.
Computes an approximation of \(y = \sum_{k=0}^{N-1} x^{2k+1} / (2k+1)\) (if alternating is 0) or \(y = \sum_{k=0}^{N-1} (-1)^k x^{2k+1} / (2k+1)\) (if alternating is 1). Used internally for computing arctangents and logarithms. The naive version uses the forward recurrence, and the rs version uses a division-avoiding rectangular splitting scheme.
Requires \(N \le 255\), \(0 \le x \le 1/16\), and xn positive. The input x and output y are fixed-point numbers with xn fractional limbs. A bound for the ulp error is written to error.
Computes an approximation of \(y = \sum_{k=0}^{N-1} x^k / k!\). Used internally for computing exponentials. The naive version uses the forward recurrence, and the rs version uses a division-avoiding rectangular splitting scheme.
Requires \(N \le 287\), \(0 \le x \le 1/16\), and xn positive. The input x is a fixed-point number with xn fractional limbs, and the output y is a fixed-point number with xn fractional limbs plus one extra limb for the integer part of the result.
A bound for the ulp error is written to error.
Computes approximations of \(y_s = \sum_{k=0}^{N-1} (-1)^k x^{2k+1} / (2k+1)!\) and \(y_c = \sum_{k=0}^{N-1} (-1)^k x^{2k} / (2k)!\). Used internally for computing sines and cosines. The naive version uses the forward recurrence, and the rs version uses a division-avoiding rectangular splitting scheme.
Requires \(N \le 143\), \(0 \le x \le 1/16\), and xn positive. The input x and outputs ysin, ycos are fixed-point numbers with xn fractional limbs. A bound for the ulp error is written to error.
If sinonly is 1, only the sine is computed; if sinonly is 0 both the sine and cosine are computed. To compute sin and cos, alternating should be 1. If alternating is 0, the hyperbolic sine is computed (this is currently only intended to be used together with sinonly).
Attempts to write \(w = x - q \log(2)\) with \(0 \le w < \log(2)\), where w is a fixed-point number with wn limbs and ulp error error. Returns success.
Attempts to write \(w = |x| - q \pi/4\) with \(0 \le w < \pi/4\), where w is a fixed-point number with wn limbs and ulp error error. Returns success.
The value of q mod 8 is written to octant. The output variable q can be NULL, in which case the full value of q is not stored.
Returns n such that \(\left|\sum_{k=n}^{\infty} x^k / k!\right| \le 2^{-\mathrm{prec}}\), assuming \(|x| \le 2^{\mathrm{mag}} \le 1/4\).
Computes the exponential function using the bit-burst algorithm. If m1 is nonzero, the exponential function minus one is computed accurately.
Aborts if x is extremely small or large (where another algorithm should be used).
For large x, repeated halving is used. In fact, we always do argument reduction until \(|x|\) is smaller than about \(2^{-d}\) where \(d \approx 16\) to speed up convergence. If \(|x| \approx 2^m\), we thus need about \(m+d\) squarings.
Computing \(\log(2)\) costs roughly 100-200 multiplications, so is not usually worth the effort at very high precision. However, this function could be improved by using \(\log(2)\) based reduction at precision low enough that the value can be assumed to be cached.
Computes T, Q and Qexp such that \(T / (Q 2^{\text{Qexp}}) = \sum_{k=1}^N (x/2^r)^k/k!\) using binary splitting. Note that the sum is taken to N inclusive and omits the constant term.
The powtab version precomputes a table of powers of x, resulting in slightly higher memory usage but better speed. For best efficiency, N should have many trailing zero bits.
Computes T, Q and Qexp such that \(T / (Q 2^{\text{Qexp}}) = \sum_{k=1}^N (-1)^k (x/2^r)^{2k} / (2k+1)\) using binary splitting. Note that the sum is taken to N inclusive, omits the linear term, and requires a final multiplication by \((x/2^r)\) to give the true series for atan.
The powtab version precomputes a table of powers of x, resulting in slightly higher memory usage but better speed. For best efficiency, N should have many trailing zero bits.
Computes the arctangent of x. Initially, the argument-halving formula
\[\operatorname{atan}(x) = 2 \operatorname{atan}\left(\frac{x}{1+\sqrt{1+x^2}}\right)\]
is applied up to 8 times to get a small argument. Then a version of the bit-burst algorithm is used. The functional equation
\[\operatorname{atan}(x) = \operatorname{atan}(p/q) + \operatorname{atan}(w), \quad w = \frac{qx-p}{px+q},\]
is applied repeatedly instead of integrating a differential equation for the arctangent, as this appears to be more efficient.
Returns nonzero iff all entries in x are zero.
Returns nonzero iff all entries in x certainly are finite.
Sets res to a copy of vec.
Sets res to a copy of vec, rounding each entry to prec bits.
Performs the respective scalar operation elementwise.
Sets res to the dot product of vec1 and vec2.
Sets res to the dot product of vec with itself.
Sets bound to an upper bound for the entries in vec.
Returns the maximum of arb_bits() for all entries in vec.
Sets xs to the powers \(1, x, x^2, \ldots, x^{len-1}\).
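For example, a sketch combining this with the dot product above to evaluate a polynomial \(\sum_k c_k x^k\) (assuming the argument orders _arb_vec_set_powers(xs, x, len, prec) and _arb_vec_dot(res, vec1, vec2, len, prec)):

    #include "arb.h"

    int main(void)
    {
        slong len = 4, prec = 64;
        arb_ptr c = _arb_vec_init(len);
        arb_ptr powers = _arb_vec_init(len);
        arb_t x, res;
        arb_init(x); arb_init(res);

        arb_set_ui(c + 0, 1); arb_set_ui(c + 1, 2);   /* 1 + 2x + 3x^2 + 4x^3 */
        arb_set_ui(c + 2, 3); arb_set_ui(c + 3, 4);
        arb_set_d(x, 0.5);

        _arb_vec_set_powers(powers, x, len, prec);   /* 1, x, x^2, x^3 */
        _arb_vec_dot(res, c, powers, len, prec);     /* 3.25 at x = 1/2 */

        arb_printd(res, 10);
        flint_printf("\n");

        _arb_vec_clear(c, len);
        _arb_vec_clear(powers, len);
        arb_clear(x); arb_clear(res);
        return 0;
    }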
Adds the magnitude of each entry in err to the radius of the corresponding entry in res.
Applies arb_indeterminate() elementwise.
Applies arb_trim() elementwise.
Calls arb_get_unique_fmpz() elementwise and returns nonzero if all entries can be rounded uniquely to integers. If any entry in vec cannot be rounded uniquely to an integer, returns zero.