





















































With this release, the stiff ODE solvers can handle expensive right-hand-side calculations, such as those in neural ODEs or PDE discretizations, and utilize GPU acceleration. The initial condition can now be a GPUArray, and the internal methods perform no indexing, so all computations take place on the GPU without data transfers.
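A minimal sketch of the pattern, assuming CUDA.jl for the GPU arrays (CuArrays.jl in the era of this release); the problem, sizes, and solver choice below are illustrative:

```julia
using OrdinaryDiffEq, CUDA, LinearAlgebra

# Toy linear system u' = A*u with all state stored on the GPU. Because the
# solver internals avoid indexing, the solve runs without host/device transfers.
const A = cu(-0.5f0 .* Matrix{Float32}(I, 32, 32))
f!(du, u, p, t) = mul!(du, A, u)        # in-place RHS, no scalar indexing

prob = ODEProblem(f!, cu(rand(Float32, 32)), (0.0f0, 1.0f0))
sol  = solve(prob, Rosenbrock23())      # stiff solver with a GPU state
```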
This release also comes with a broadcast wrapper that allows all sorts of information to be passed to the compiler in the differential equation solver's internals, encoding no-aliasing and sizing assumptions that are normally not possible to make. The internals now use a special @.. macro, which also turns out to be faster than standard loops.
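A sketch of what this looks like in user code; in the era of this release the macro was provided by DiffEqBase (it later moved to FastBroadcast.jl), and the right-hand side below is made up:

```julia
using DiffEqBase   # provided the @.. macro in this era

function f!(du, u, p, t)
    # Like Base's @., but additionally tells the compiler that du and u
    # do not alias and that their sizes match, enabling tighter loops.
    @.. du = -0.5f0 * u + sin(t)
end
```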
This release also comes with smarter linsolve defaults, which automatically detect the BLAS installation and use RecursiveFactorization.jl to speed up the linear solves inside stiff ODE solvers.
The linear solver defaults also automatically switch to forms that work for sparse Jacobians; even banded matrices and Jacobians stored on the GPU are now handled automatically.
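For example, a user can hand the solver a sparsity pattern through ODEFunction's jac_prototype argument; the tridiagonal toy system below is a stand-in for a real discretization:

```julia
using OrdinaryDiffEq, SparseArrays

# 1D Laplacian toy problem; its Jacobian is tridiagonal.
function f!(du, u, p, t)
    n = length(u)
    du[1] = -2u[1] + u[2]
    for i in 2:n-1
        du[i] = u[i-1] - 2u[i] + u[i+1]
    end
    du[n] = u[n-1] - 2u[n]
end

n = 100
jac_sparsity = spdiagm(-1 => ones(n-1), 0 => ones(n), 1 => ones(n-1))
fun  = ODEFunction(f!; jac_prototype = jac_sparsity)
prob = ODEProblem(fun, rand(n), (0.0, 1.0))
sol  = solve(prob, TRBDF2())   # factorization specializes on the sparse pattern
```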
Users can now use GMRES easily, without constructing the full Jacobian matrix: the solver computes J*v directly from directional derivatives in the direction of v.
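Roughly, in the linsolve API of this era (later releases switched to LinearSolve.jl for this), with a made-up stiff problem:

```julia
using OrdinaryDiffEq

f!(du, u, p, t) = (du .= -1000.0 .* u)       # stiff toy problem
prob = ODEProblem(f!, ones(100), (0.0, 1.0))

# GMRES only needs matrix-vector products J*v, which the solver obtains from
# directional derivatives, so the full Jacobian is never materialized.
sol = solve(prob, TRBDF2(linsolve = LinSolveGMRES()))
```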
With this release, the performance of all implicit methods, such as KenCarp4, has been improved. DiffEqBiological.jl can now handle large reaction networks: it parses networks much faster and builds Jacobians that use sparse matrices, though there is still plenty of room for improvement.
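As a small illustration, here is a toy enzyme-kinetics network in the DiffEqBiological.jl DSL of that era (the package was later succeeded by Catalyst.jl); the rate values and initial amounts are made up:

```julia
using DiffEqBiological, OrdinaryDiffEq

# Substrate S binds enzyme E; the complex SE releases product P.
rn = @reaction_network begin
    k1, S + E --> SE
    k2, SE --> S + E
    k3, SE --> P + E
end k1 k2 k3

u0 = [50.0, 10.0, 0.0, 0.0]              # S, E, SE, P
p  = [0.01, 0.1, 0.1]                    # k1, k2, k3
prob = ODEProblem(rn, u0, (0.0, 100.0), p)
sol  = solve(prob, KenCarp4())           # the implicit method named above
```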
This release also gives a glimpse of working examples of partial neural differential equations, that is, equations in which some portions are pre-specified and only the rest is learned. These setups allow for batched data and GPU acceleration.
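A rough sketch of the idea, with a hand-rolled two-layer network standing in for DiffEqFlux.jl's machinery so the example stays self-contained; every name, size, and value here is illustrative:

```julia
using OrdinaryDiffEq

# The linear drift Alin*u is the pre-specified ("physical") part; a small
# hand-rolled network whose weights live in p supplies the learned part.
const Alin = Float32[-0.1 2.0; -2.0 -0.1]

nn(u, W1, b1, W2, b2) = W2 * tanh.(W1 * u .+ b1) .+ b2

function dudt(u, p, t)
    W1 = reshape(@view(p[1:8]), 4, 2);   b1 = @view p[9:12]
    W2 = reshape(@view(p[13:20]), 2, 4); b2 = @view p[21:22]
    Alin * u .+ nn(u, W1, b1, W2, b2)    # known term + neural-network term
end

p    = 0.1f0 .* randn(Float32, 22)
prob = ODEProblem(dudt, Float32[2.0, 0.0], (0.0f0, 1.0f0), p)
sol  = solve(prob, Tsit5())
```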
This release comes with memory optimizations of the low-storage Runge-Kutta methods for hyperbolic or advection-dominated PDEs. These methods now use only the minimal number of registers required by the method, so large PDE discretizations can make use of DifferentialEquations.jl without losing memory efficiency.
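A sketch of picking one such method; CarpenterKennedy2N54 is one of the low-storage ("2N") schemes in OrdinaryDiffEq, and the million-unknown toy problem below stands in for a real PDE discretization:

```julia
using OrdinaryDiffEq

# A 2N method keeps only two registers of the state regardless of the
# stage count, which matters when u has millions of entries.
f!(du, u, p, t) = (du .= -u)                     # placeholder for a PDE RHS
prob = ODEProblem(f!, rand(10^6), (0.0, 1.0))
sol  = solve(prob, CarpenterKennedy2N54(), save_everystep = false)
```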
The team has also reworked the ContinuousCallback implementation in this release, increasing its robustness in double event detection.
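For reference, the classic bouncing-ball setup is exactly where double event detection matters: right after a bounce the height is still near zero, and a naive root finder could retrigger the same event.

```julia
using OrdinaryDiffEq

# Ball under gravity: u[1] is height, u[2] is velocity.
f!(du, u, p, t) = (du[1] = u[2]; du[2] = -9.81)
condition(u, t, integrator) = u[1]                        # event at height zero
affect!(integrator) = (integrator.u[2] = -0.9 * integrator.u[2])  # inelastic bounce
cb   = ContinuousCallback(condition, affect!)

prob = ODEProblem(f!, [10.0, 0.0], (0.0, 10.0))
sol  = solve(prob, Tsit5(), callback = cb)
```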
To learn more, check out the official announcement.