Numerical evaluation

Symbolica supports efficient numerical evaluation of expressions and code generation. We will go through the various alternatives and mention their relative advantages.

Single evaluation

To evaluate an expression once, use evaluate. In Python this evaluates with floats, and in Rust the evaluation type is inferred (one could evaluate with 4x4 SIMD floats or arbitrary-precision floats, for example).

from symbolica import *
x, f, cos = Expression.symbol('x', 'f', 'cos')
e = cos(x)*3 + f(x,2)
print(e.evaluate({x: 1}, {f: lambda args: args[0]+args[1]}))
print(e.evaluate_complex({x: 1+2j}, {f: lambda args: args[0]+args[1]}))
use std::collections::HashMap;

use symbolica::{atom::AtomCore, domains::float::Complex, parse, symbol};

fn main() {
    let a = parse!("cos(x)*3 + f(x,2)").unwrap();

    // evaluate with real (f64) constants
    let mut h = HashMap::default();
    h.insert(symbol!("x"), 1.);
    let mut fn_map = HashMap::default();
    fn_map.insert(symbol!("f"), Box::new(|x, y| x + y));
    println!("{}", a.evaluate(&h, &fn_map));

    // evaluate with complex constants
    let mut h = HashMap::default();
    h.insert(symbol!("x"), Complex::new(1., 2.));
    let mut fn_map = HashMap::default();
    fn_map.insert(symbol!("f"), Box::new(|x, y| x + y));
    println!("{}", a.evaluate(&h, &fn_map));
}

The first argument of evaluate is a map from atoms to constants, and the second argument is a map from function symbols to closures that are called with the evaluated arguments.

The above example yields

4.620906917604419
(9.098169021058997-7.155693397455401j)
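
A constants-only call works the same way; a minimal sketch (the expression and values are illustrative):

from symbolica import *
x, y = Expression.symbol('x', 'y')

e = x**2 + y
print(e.evaluate({x: 2, y: 3}, {}))  # 7.0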

Evaluators

Naively evaluating an expression can be prohibitively slow for large expressions, especially when they have to be evaluated many times, such as in Monte Carlo simulations and integrations. Symbolica provides state-of-the-art ways to rewrite an expression into a form that is very fast to evaluate, often 10-100 times faster than a naive evaluation. Note that during these transformations, “fast math” operations are applied.

Symbolica tries to find the best multivariate Horner scheme, performs common subexpression elimination, and recycles intermediate variables and registers. This resembles what a compiler such as gcc would do, but it is much faster because the input is more specific. See this blog post for more information.
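
As a minimal sketch of the basic workflow (the expression is illustrative, and the batched call assumes one inner list per sample point, as in the examples below):

from symbolica import *
x, y = Expression.symbol('x', 'y')

e = Expression.parse('x^2*y + x*y + y + 1')

# no constants and no user functions; x and y are runtime parameters
ev = e.evaluator({}, {}, [x, y])

# evaluate for two parameter points in one call
print(ev.evaluate([[1., 2.], [3., 4.]]))  # expected: [7.0, 53.0]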

Nested expressions

Such an evaluator supports nested expressions: expressions that call other expressions. For example, to evaluate \(e\) in

\[ \begin{align*} e &= x + \pi + \cos(x) + f(g(x+1),2x)\\ f(y,z) &=y^2 + z^2 y^2\\ g(y) &=x+y+5 \end{align*} \]

we use the following code:

from symbolica import *
x, y, z, pi, f, g = Expression.symbol(
    'x', 'y', 'z', 'pi', 'f', 'g')

e1 = Expression.parse("x + pi + cos(x) + f(g(x+1),x*2)")
fd = Expression.parse("y^2 + z^2*y^2")
gd = Expression.parse("x + y + 5")

ev = e1.evaluator({pi: Expression.num(22)/7},
                  {(f, "f", (y, z)): fd,
                   (g, "g", (y,)): gd},
                  [x])
res = ev.evaluate([[5.]])
print(res)

Note that at the moment no floating-point numbers may appear in the expressions or in the map. Use rationalize_coefficients() to convert them to rational numbers, as sketched below.
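
A sketch of such a conversion, assuming rationalize_coefficients takes a relative error tolerance and returns a new expression:

from symbolica import *
x = Expression.symbol('x')

e = Expression.parse('0.33*x + 0.25')

# assumed signature: the argument is a relative error tolerance
e_rat = e.rationalize_coefficients(1e-10)

ev = e_rat.evaluator({}, {}, [x])
print(ev.evaluate([[2.]]))  # ≈ 0.91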

use symbolica::{
    atom::{Atom, AtomCore},
    domains::rational::Rational,
    evaluate::{FunctionMap, OptimizationSettings},
    parse, symbol,
};

fn main() {
    let e1 = parse!("x + pi + cos(x) + f(g(x+1),x*2)").unwrap();
    let f = parse!("y^2 + z^2*y^2").unwrap();
    let g = parse!("x+y+5").unwrap();

    let mut fn_map = FunctionMap::new();

    fn_map.add_constant(
        Atom::new_var(symbol!("pi")),
        Rational::from((22, 7)).into(),
    );
    fn_map
        .add_function(
            symbol!("f"),
            "f".to_string(),
            vec![symbol!("y"), symbol!("z")],
            f.as_view(),
        )
        .unwrap();
    fn_map
        .add_function(
            symbol!("g"),
            "g".to_string(),
            vec![symbol!("y")],
            g.as_view(),
        )
        .unwrap();

    let params = vec![parse!("x").unwrap()];

    let evaluator = e1
        .evaluator(&fn_map, &params, OptimizationSettings::default())
        .unwrap();

    let mut e_f64 = evaluator.map_coeff(&|x| x.into());
    let r = e_f64.evaluate_single(&[5.]);
    println!("{}", r);
}

This yields 25864.42651932832.

The evaluator takes a hashmap of functions whose keys have the form

(function symbol, function name, argument list)

where the function name should be a C++-compatible name, as it is used during code generation.

Multiple expressions

Multiple expressions can be turned into a single evaluator at the same time, which means that they are optimized together. As a result, subexpressions that are common between the expressions are computed only once.

from symbolica import *
x = Expression.symbol('x')

e1 = x * x + 5
e2 = x * x + 6

ev = Expression.evaluator_multiple([e1, e2], {}, {}, [x])
res = ev.evaluate([[5.]])
print(res)

use symbolica::{
    atom::{Atom, AtomCore},
    evaluate::{FunctionMap, OptimizationSettings},
    parse,
};

fn main() {
    let x = parse!("x").unwrap();
    let e1 = &x * &x + 5;
    let e2 = &x * &x + 6;

    let evaluator = Atom::evaluator_multiple(
        &[e1.as_view(), e2.as_view()],
        &FunctionMap::new(),
        &[x],
        OptimizationSettings::default(),
    )
    .unwrap();

    let mut e_f64 = evaluator.map_coeff(&|x| x.into());
    let mut out = vec![0., 0.];
    e_f64.evaluate(&[5.], &mut out);
    println!("{:?}", out);
}

This yields 30.0 and 31.0. Another way to create an evaluator for multiple expressions is to call merge, which merges two existing evaluators into one.

Branching

Conditional functions can be registered; they take three arguments: the first is an expression that is tested for being non-zero, the second argument is the true branch, and the third argument is the false branch. Since the arguments are arbitrary expressions, they can contain further branching. Below we register the user function if:

ev = E("if(y, x*x + z*z + x*z*z, x * x + 3)").evaluator({}, {}, [S("x"), S("y"), S("z")], conditionals=[S("if")])
print(ev.evaluate([[3., 1., 2.]])) # 25.0
print(ev.evaluate([[3., 0., 2.]])) # 12.0
let mut f = FunctionMap::new();
f.add_conditional(symbol!("if")).unwrap();

// build the evaluator with x, y, z as runtime parameters
let mut eval = parse!("if(y, x*x + z*z + x*z*z, x * x + 3)")
    .evaluator(
        &f,
        &vec![parse!("x"), parse!("y"), parse!("z")],
        Default::default(),
    )
    .unwrap()
    .map_coeff(&|x| x.re.to_f64());

println!("{}", eval.evaluate_single(&[3., 1., 2.])); // 25.0
println!("{}", eval.evaluate_single(&[3., 0., 2.])); // 12.0

Automatic differentiation

The Symbolica evaluator supports fast automatic differentiation that doesn’t depend on any external libraries and that cross-optimizes between all derivatives. You can specify the shape of the derivatives and Symbolica will create optimized code that computes these terms using the usual C++/ASM output.

For example, to compute the value and the first derivatives in the two variables x and y, we use the following shape with 3 components (where every number specifies the order of the derivative to be taken in that variable): [[0, 0], [1, 0], [0, 1]]

from symbolica import *
e1 = E('x^2 + y*x').evaluator({}, {}, [S('x'), S('y')])
e1.dualize([[0, 0], [1, 0], [0, 1]])
# each parameter is passed as its dual components in the order of the
# shape (value, d/dx, d/dy): here x = (2, 1, 0) and y = (3, 0, 1)
r = e1.evaluate([[2., 1., 0., 3., 0., 1.]])
print(r)  # [10, 7, 2]

You can also use external functions with the dual output, by defining new external functions that yield a single component. In the following example, the function f0 yields the [0] component and f1 the [1] component:

ev = E('f(x + 1)').evaluator(
    {}, {}, [S('x')],
    external_functions={(S('f'), 'f'): lambda args: args[0]})  # f(x) = x
ev.dualize([[0], [1]],
           {('f', 'f0', 0): lambda args: args[0],
            ('f', 'f1', 1): lambda args: args[1]})
print(ev.evaluate([[2., 1.]]))  # [[3. 1.]]

Note that the function f1 receives the result of the computation of f0, as the constant term in the expansion is often needed in the higher-order derivative terms (for example in the expansion of exp).

Code generation

Evaluators can be turned into highly optimized C++ code, optionally with inline assembly (x86_64 and arm64). The latter drastically cuts down on C++ compilation time (and may even yield faster code), effectively making the compilation time independent of the size of the expression.

The compiled library accepts evaluations with double and complex<double> and can be directly loaded into Symbolica.

# compile the evaluator `ev` from the nested-expression example above
comp = ev.compile('my_fun', 'test.cpp', 'test.so')
res = comp.evaluate([[5.]])
print(res)
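
Since the compiled library also accepts complex<double>, the same compiled function can be evaluated with complex inputs. A hedged sketch, assuming the compiled evaluator exposes an evaluate_complex method analogous to evaluate:

# assumed API: evaluate_complex mirrors evaluate for complex inputs
res_c = comp.evaluate_complex([[5. + 1j]])
print(res_c)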
// continue from the `e_f64` evaluator of the nested-expression example
let mut compiled = e_f64
    .export_cpp("test.cpp", "my_fun", true, InlineASM::X64)
    .unwrap()
    .compile("test.so", CompileOptions::default())
    .unwrap()
    .load()
    .unwrap();

let mut out = vec![0.];
compiled.evaluate(&[5.], &mut out);
println!("{}", out[0]);

There is support for SIMD and CUDA generation as well (see the documentation of compile).