Phase 2 of the coding period is progressing smoothly. I had a meeting with Sartaj on Thursday, the 4th of July. Here are the minutes of that meeting, along with the deliverables completed over the week.

  • The meeting commenced with the decision that we need to wrap up the work I had proposed in the domain of Formal Power Series. To that end, I started off with the left-over portion of PR #17017. The minor change involved renaming the function API from convolve to product, since it currently handles only the linear infinite-cycle convolution case. The tests were also updated accordingly. After pushing all the changes and a final review, the PR got merged 4 days ago!!
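
    For reference, the renamed API can now be used like this (an illustrative check; the truncated output below is the hand-computed Cauchy product of the two series, so treat it as indicative):

>>> from sympy import fps, sin, exp, symbols
>>> x = symbols('x')
>>> fps(sin(x)).product(fps(exp(x)), x).truncate(4)
x + x**2 + x**3/3 + O(x**4)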

  • The next objective was to finish off PR #17064. Merging PR #17017 had introduced merge conflicts in this PR, so I had to resolve those conflicts (5 of them) and then present it for final review. There was still a bug in the compose function: I was mistakenly using self.ind to fill in the constant term instead of calling self._eval_term(0). I fixed that bug and optimised the code a little. I also checked the test cases against various online references, and it came to my notice that fps(sin(sin(x))) was giving a wrong answer with my code. I fixed the logic of the function and then documented the correct test case. The PR got merged yesterday.
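
    As a sanity check, the corrected composition now agrees with the expansion of sin(sin(x)) worked out by hand (the coefficients below were computed manually from the Taylor series, so treat the exact output as indicative):

>>> from sympy import fps, sin, symbols
>>> x = symbols('x')
>>> fps(sin(x)).compose(fps(sin(x)), x).truncate(6)
x - x**3/3 + x**5/10 + O(x**6)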

  • The last piece of work in this domain was to complete the FiniteFormalPowerSeries class in PR #17134. First I had to pull master into the local branch, since the changes from the previous PRs needed to be reflected in this branch. Then I made all the functions return objects of the FiniteFormalPowerSeries class. The class presently contains only one overridden _eval_term function. Since FiniteFormalPowerSeries is a derived class of the FormalPowerSeries class, there are some methods which cannot be implemented for objects of the finite version of the class, like integrate or _eval_derivative; those methods presently raise NotImplementedError. The logic of the three functions also had to be tweaked a little. A simplified sketch of the design is given below.
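
    Conceptually, the design looks roughly like this (a heavily simplified sketch with a made-up class name, not the actual code from the PR):

from sympy.series.formal import FormalPowerSeries

class FiniteSeriesSketch(FormalPowerSeries):
    # Illustrative only: a finite variant overrides how each term is computed
    # and refuses the operations that need the full infinite series.

    def _eval_term(self, pt):
        # compute the pt-th term from the finitely many stored coefficients
        raise NotImplementedError("sketch: term computation omitted")

    def integrate(self, x=None, **kwargs):
        raise NotImplementedError("not defined for the finite series classes")

    def _eval_derivative(self, x):
        raise NotImplementedError("not defined for the finite series classes")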

Finally the output is looking somewhat like this –

>>> from sympy import *
>>> x = symbols('x')
>>> f1, f2 = fps(exp(x)), fps(sin(x))

>>> f1.compose(f2)
FiniteFormalPowerSeries(exp(sin(x)), x, 0, 1, ([1, 1/2, 0, -1/8, -1/15], SeqFormula(x**_k, (_k, 0, oo)), 1))

>>> f1.compose(f2).truncate(6)
1 + x + x**2/2 - x**4/8 - x**5/15 + O(x**6)

>>> f1.inverse()
FiniteFormalPowerSeries(exp(-x), x, 0, 1, ([-1, 1/2, -1/6, 1/24, -1/120], SeqFormula(x**_k, (_k, 0, oo)), 1))

>>> f1.inverse().truncate(6)
1 - x + x**2/2 - x**3/6 + x**4/24 - x**5/120 + O(x**6)

The test cases have been well documented. The PR is currently under review and on its way to being merged!!

  • Next, we discussed what to take up after this. The next topic in my proposal was improving the limits module. The first task was to implement Shackell's algorithm, an alternative to the famous Gruntz algorithm which SymPy currently uses. Sartaj told me to look up resources on Shackell's algorithm.

    So I read up on Shackell's algorithm quite a bit, and I understood that it is quite hard to implement, and also that there is no clear guarantee that it will outperform the Gruntz algorithm. The main strength of Shackell's algorithm is that it computes nested forms and then uses them to compute limits. Nested forms are not of much use here, given that they are already computed efficiently inside gruntz/mrv. So I didn't see any real benefit in implementing such a hard algorithm.

Instead, I proposed that I would like to work in the field of Asymptotic Expansions. This is a feature which, I think, SymPy very much needs. The proposal details are as follows –

Asymptotic Expansions

In mathematics, an asymptotic expansion (also called a Poincaré expansion) is a formal series of functions with the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point. It is, in general, a divergent (non-convergent) series.
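
Stated precisely for expansions in powers of 1/x (this is the standard textbook definition), a series with coefficients a_n is an asymptotic expansion of f at infinity if, for every fixed N,

    .. math::
        f(x) - \sum_{n=0}^{N} a_n x^{-n} = o\left(x^{-N}\right) \qquad \text{as } x \to \infty.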

One very good example is the gamma function,

    .. math::
        \Gamma(x) := \int^{\infty}_{0} t^{x-1} e^{-t} \mathrm{d}t.
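
Its Stirling series at infinity is the textbook example of a divergent asymptotic expansion (quoting the standard result):

    .. math::
        \Gamma(x) \sim \sqrt{\frac{2\pi}{x}} \left(\frac{x}{e}\right)^{x}
        \left(1 + \frac{1}{12x} + \frac{1}{288x^{2}} - \cdots\right)
        \qquad \text{as } x \to \infty.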

Now, SymPy presently does the following with an expression that genuinely calls for an asymptotic expansion:

>>> series((sin(1/x+exp(-x))-sin(1/x)), x, oo)
O(x**(-6), (x, oo))

But an asymptotic algorithm, which can provide a value close to the desired answer using only a finite number of terms, would deliver this:

>>> series((sin(1/x+exp(-x))-sin(1/x)), x, oo)
(1 - 1/(2*x**2) + 1/(24*x**4) + O(1/x**6))*exp(-x)

This illustrates the difference an asymptotic series makes and why it is worth bringing into SymPy.

Now, how will I implement it? Most of the routines needed for the algorithm below are already present in gruntz.py, so the implementation should be straightforward.

a) MRV_Gruntz algorithm

In gruntz.py, finding the most rapidly varying subexpression (the mrv function) and rewriting an expression in terms of omega (the rewrite function) are already implemented. The algorithm will (a small hand-worked illustration follows the list):

1) Determine the set of most rapidly varying subexpressions.
2) Choose one w from the above set and rewrite the rest of the expression as
   g(x) = h(x)*w**p, where mrv(h(x)) < w and p is in R+.
3) Compute the Puiseux series about w = 0.
4) Expand the coefficients recursively in terms of their most rapidly varying
   subexpressions. The algorithm terminates when the coefficients become constant.
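
To make the steps concrete, here is the running example worked by hand, without touching the internal mrv/rewrite API (the substitution below only mimics what those routines do, and the series in the comment is my own calculation, so treat it as illustrative):

from sympy import symbols, sin, exp

x, w = symbols('x w', positive=True)
e = sin(1/x + exp(-x)) - sin(1/x)

# Steps 1-2: the most rapidly varying subexpression is exp(-x); set w = exp(-x),
# so that w -> 0+ as x -> oo, and rewrite the expression in terms of w.
g = e.subs(exp(-x), w)

# Step 3: an ordinary Taylor/Puiseux expansion about w = 0.
print(g.series(w, 0, 2))    # w*cos(1/x) + O(w**2)

# Step 4: the coefficient cos(1/x) involves only powers of 1/x, so it is already
# "constant" with respect to w; expanding it as a series in 1/x gives exactly the
# exp(-x)*(1 - 1/(2*x**2) + ...) answer shown earlier.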

Methods

An aseries function will be created in core/expr.py, which will return the asymptotic expansion of self.

Calling aseries(x, n) will be roughly equivalent to self.series(x, oo, n), where n is the order up to which the expression is truncated.

A bound parameter in the function signature will put a limit on how many times the coefficients are rewritten into their normalised form.

Some rough pseudocode:

def aseries(self, x, n=6, bound=0):

    omega, exps = mrv(self, x)

    if x in omega:
        # Substitute x with exp(x), compute the aseries recursively, then
        # substitute log(x) back in, and return the result together with the
        # order term Order(1/x**n, (x, S.Infinity)), if any.
        ...

    f, logw = rewrite(exps, omega, x, Dummy_var)

    if self in omega:
        # Find a canonical representative: compute the asymptotic series of the
        # exponent and remove all order terms, substitute x with 1/x, take the
        # leading term r, resubstitute, and set logw = log(1/r).
        ...

    # The above will form the base for the hierarchical representation,
    # which follows later.

The above pseudocode will be easy to implement, given that the core structure has already been worked out in gruntz.py.

  • Hierarchical representation: consider the following case –
>>> e = sin(x) * cos(exp(-x))
>>> e.aseries(x)
ValueError: gamma function pole

There are cases like the one above, where computing the asymptotic expansion using only the mrv and rewrite routines of the MRV_Gruntz algorithm results in a very complex expansion, and may also lead to recursion loops that end in "gamma function pole" errors. A hierarchical series stops after the first recursion: the coefficients are not expanded completely, which results in a more compact and often more informative expansion.

We will declare a "hierar" parameter for the hierarchical representation.

s = canonical series obtained from the pseudocode above
if hierar:
    return s.subs(Dummy_var, exp(logw))    # only the first recursion takes place
else:
    continue with the full recursive expansion

b) Extension of _eval_aseries in special functions

Quite a few special functions have neither their asymptotic (_eval_aseries) nor their _eval_nseries method implemented. The Fresnel integrals already have _eval_aseries implemented. I am thinking of extending _eval_aseries to other functions in functions/special/, such as:

     * **erf** : Gaussian error function
     * **Ei** : exponential integral
     * **zeta** : Riemann zeta function
     * **Si, Ci** : sine and cosine integrals respectively

In case an expansion does not exist at infinity, the function needs to be rewritten in terms of exponential functions. Test cases also need to be added for many functions. Many functions don't have their _eval_nseries written in a tractable way; those can be rewritten. This part will be dealt with in a later topic (specifically about _eval_nseries). A rough sketch of what such an extension could look like is given below.
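
Taking erf as an example (only a standalone sketch built from the standard asymptotic series of erf; inside SymPy this would live in an _eval_aseries(self, n, args0, x, logx) hook like the one the Fresnel integrals already use, and the real implementation may well differ):

from sympy import Add, Order, S, exp, sqrt, pi, factorial2, symbols, oo

def erf_aseries_sketch(z, n=6):
    # erf(z) ~ 1 - exp(-z**2)/(z*sqrt(pi)) * sum_{k>=0} (-1)**k*(2k-1)!!/(2*z**2)**k
    # as z -> oo; the series diverges, but any truncation is a good approximation.
    terms = [(-1)**k * factorial2(2*k - 1) / (2*z**2)**k for k in range(n)]
    return S.One - exp(-z**2)/(z*sqrt(pi))*Add(*terms) + Order(1/z**n, (z, oo))

x = symbols('x', positive=True)
print(erf_aseries_sketch(x, 3))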

The entire **Asymptotic Expansions** module should not be hard to implement, as some work has already been done in open PRs, and there is enough literature available to support the rest of the implementation.

I will also be working on the limit XFAIL tests simultaneously. The rough timeline is as follows:

  • Finish off the mrvasympt algorithm by the end of this week (Week 7). I will also try to push one limit XFAIL test PR by the start of Week 8 (preferably Tuesday).
  • Week 8 will go into pushing another limit XFAIL test PR and merging the aforementioned PRs.
  • A PR for the extension of _eval_aseries for special functions will be pushed in by the end of Week 9.
  • The final limit XFAIL test PR will be pushed in by Week 10, and the rest of the week can be spent on merging any left-over PRs.

That's it for Week 6. See you next week!