$\begingroup$

Working over the complex numbers, consider a function $F\left(x,y\right)$ and a curve $C$ defined by $F\left(x,y\right)=0$.

I know that to construct the Jacobian variety associated to $C$, one integrates a basis of global holomorphic differential forms over the contours of the curve's homology group. I'm looking for information that is oriented toward actually computing things for given concrete examples; everything I've seen so far, however, has been uselessly abstract or non-specific. Note: I'm new to this—I'm an analyst who knows next to nothing about algebra and even less about differential geometry or topology.

In my quest for a sensible answer, I turned to H.F. Baker's wonderful (though densely written) text from the start of the 20th century. Just reading through the first few pages makes it abundantly clear that there is a general procedure for constructing a basis of holomorphic differential forms for a given curve. Ted Shifrin's comment on this Math Stack Exchange problem only makes me more certain than ever that the answers I seek are out there, somewhere.

Broadly speaking, my goals are as follows. In all of these, my aim is to be able to use the answers to these questions to compute various specific examples, either by hand, or with the assistance of a computer algebra system. So, I'm looking for formulae, explanations and/or step-by-step procedures/algorithms, and/or pertinent reference/reading material.

(1) In the case where $F$ is a polynomial, what is/are the procedure(s) for determining a basis of holomorphic differential 1-forms for the curve $C$ defined by $F$? If the procedure varies depending on certain properties of $F$ or $C$ (say, if $C$ is an affine curve, or a projective curve, or of a certain form, or some detail like that), what are those variations?

(2) In the case where $F$ is a polynomial of $x$-degree $d_{x}$, $y$-degree $d_{y}$, and $C$ is a curve of genus $g$, I know that the space of holomorphic differential 1-forms for $C$ will be of dimension $g$. In the case, say, where $C$ is an elliptic curve, with:

$$F\left(x,y\right)=4x^{3}-g_{2}x-g_{3}-y^{2}$$

the classical Jacobi Inversion Problem arises from considering a function $\wp\left(z\right)$ which parameterizes $C$, in the sense that $F\left(\wp\left(z\right),\wp^{\prime}\left(z\right)\right)$ is identically zero. Using the equation: $$F\left(\wp\left(z\right),\wp^{\prime}\left(z\right)\right)=0$$ we can write: $$\wp^{-1}\left(z\right)=\int_{z_{0}}^{z}\frac{ds}{\sqrt{4s^{3}-g_{2}s-g_{3}}}$$ and know that the multivaluedness of the integral then reflects the structure of the Jacobian variety associated to $C$.
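(In the genus-$1$ case this inversion really can be checked numerically end to end. The sketch below is my own, and uses the Legendre form $y^{2}=(1-x^{2})(1-mx^{2})$ with its Jacobi function $\operatorname{sn}$ as a stand-in for the Weierstrass form, since scipy implements the Jacobi elliptic functions but not $\wp$.)

```python
import numpy as np
from scipy.special import ellipj, ellipkinc

# Legendre-form curve y^2 = (1 - x^2)(1 - m*x^2), parameterized by sn:
# u = integral_0^x ds / sqrt((1 - s^2)(1 - m*s^2)) inverts to x = sn(u, m).
m, x0 = 0.7, 0.5

u = ellipkinc(np.arcsin(x0), m)   # the incomplete elliptic integral above
sn, cn, dn, _ = ellipj(u, m)

print(sn)                         # recovers x0 = 0.5 (up to roundoff)
# sn' = cn*dn, so (sn')^2 - (1 - sn^2)(1 - m*sn^2) should vanish on the curve:
print((cn * dn) ** 2 - (1 - sn**2) * (1 - m * sn**2))
```

Here the inversion works globally because the period lattice is discrete in $\mathbb{C}$; the point of question (2) is what fails for $g\geq 2$.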

That being said, in the case where $C$ is of genus $g\geq2$, and where we can write $F\left(x,y\right)=0$ as: $$y=\textrm{algebraic function of }x$$ nothing stops us from performing the exact same computation as for the case with an elliptic curve. Of course, this computation must be wrong; my question is: where and how does it go wrong? How would the parameterizing function thus obtained relate to the "true" parameterizing function—the multivariable Abelian function associated to $C$? Moreover, how—if at all—can this computation be modified to produce the correct parameterizing function (the Abelian function)?

(3) My hope is that by understanding both (1) and (2), I'll be in a position to see what happens when these classical techniques are applied to non-algebraic plane curves: curves still defined by $F\left(x,y\right)=0$, but with $F$ now being an analytic function (incorporating exponentials, and other transcendental functions, in addition to polynomials). Of particular interest to me are the transcendental curves associated to exponential diophantine equations such as: $$a^{x}-b^{y}=c$$ $$y^{n}=b^{x}-a$$

That being said, I wonder: has this already been done? If so, links and references would be much appreciated.

Even if it has, though, I would still like to know the answers to my previous questions, even if it's merely for my personal edification alone.

Thanks in advance!

$\endgroup$
  • $\begingroup$ This is indeed very classical for smooth curves in $\Bbb{P}^2$. Is this what you have in mind? Otherwise, how do you define the Jacobian? $\endgroup$ – abx Mar 11 at 0:45
  • $\begingroup$ I used the term "Jacobian" primarily to give context for the rest of my question. What I'm interested in isn't so much the Jacobian Variety for a smooth curve, but the classical integration process by which such varieties were (classically) constructed: the basis of differential forms, the methods of dealing with the Jacobi inversion problem, etc. $\endgroup$ – MCS Mar 11 at 1:14
$\begingroup$

I'm getting back to the question of describing holomorphic $1$-forms on a plane curve.

Affine curves: Let $X$ be a smooth curve in $\mathbb{A}^2$, given by the equation $F(x,y)=0$. Then $F_x dx + F_y dy=0$ on $X$. Since $X$ is smooth, the functions $F_x$ and $F_y$ have no common zero on $X$, and we can define a $1$-form on $X$ by $\omega = \tfrac{dx}{F_y} = - \tfrac{dy}{F_x}$. Moreover, $\omega$ is nowhere vanishing, so every $1$-form on $X$ is of the form $h \omega$ for some holomorphic function $h$ on $X$. For example, if $X$ is a hyperelliptic curve $y^2 = x^{2g+1} + a_{2g} x^{2g} + \cdots + a_1 x + a_0$, then $\omega = \tfrac{dx}{2y}$, and integrating $\omega$ along paths in $X$ corresponds concretely to computing integrals like $\int \tfrac{dx}{\sqrt{x^{2g+1} + a_{2g} x^{2g} + \cdots + a_1 x+a_0}}$.
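The smoothness hypothesis is easy to test by computer. A small sketch of my own using sympy (the curve $y^2 = x^3 - x$ is just an example): $F$, $F_x$, $F_y$ have no common zero over $\mathbb{C}$ exactly when they generate the unit ideal, i.e. when their Gröbner basis reduces to $\{1\}$.

```python
from sympy import symbols, groebner, diff

x, y = symbols("x y")

# Example affine curve: F = y^2 - (x^3 - x) = 0 (a smooth cubic).
F = y**2 - (x**3 - x)
Fx, Fy = diff(F, x), diff(F, y)

# The curve is smooth iff F, F_x, F_y have no common zero, i.e. the ideal
# (F, F_x, F_y) is the unit ideal, i.e. its Groebner basis reduces to [1].
gb = groebner([F, Fx, Fy], x, y)
print(gb.exprs)  # -> [1] exactly when the affine curve is smooth
```

Running the same check on a nodal curve such as $y^2 - x^3 - x^2$ produces a basis other than $[1]$, flagging the singular point.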

Incidentally, there is a more conceptual way to describe $\omega$: It is the residue of the $2$-form $\tfrac{dx \wedge dy}{F(x,y)}$ along $X$. Since every automorphism of $\mathbb{A}^2$ multiplies $dx \wedge dy$ by a scalar, and every automorphism taking $X$ to itself (setwise) multiplies $F$ by a scalar, this shows that $\omega$ is well defined up to scalar multiple, independent of the choice of coordinates on $\mathbb{A}^2$.

Projective curves: Let $\overline{X}$ be the smooth projective completion of $X$. Then we can ask whether or not $h \omega$ extends to a global holomorphic $1$-form on $\overline{X}$. The vector space of holomorphic $1$-forms on $\overline{X}$ always has dimension $g$, the genus of $\overline{X}$. Describing which $h \omega$ extend to $\overline{X}$ in general involves describing how to compute the smooth projective completion of $X$, which is a little complicated, so I'll just give the most important special cases.

Hyperelliptic curves: Let $X$ be given by $y^2 = x^{2g+1} + a_{2g} x^{2g} + \cdots + a_1 x+a_0$, where the right-hand side is a square-free polynomial of degree $2g+1$. Then a basis for the holomorphic $1$-forms on $\overline{X}$ is $x^j \omega$ for $0 \leq j < g$. So the periods of this curve are concretely integrals of the form $\int \tfrac{x^j dx}{\sqrt{x^{2g+1} + a_{2g} x^{2g} + \cdots + a_1 x+a_0}}$ for $j$ in this range.
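These period integrals can be computed numerically. Here is a sketch of my own (not from any reference): between two consecutive simple real roots $a<b$ of the polynomial, the substitution $x = \tfrac{a+b}{2} + \tfrac{b-a}{2}\sin t$ cancels the square-root singularity at the endpoints, after which ordinary quadrature works.

```python
import numpy as np
from scipy.integrate import quad

def period(p, j, a, b):
    """Approximate the integral of x**j / sqrt(|p(x)|) over [a, b], where
    a < b are consecutive simple real roots of p.  The substitution
    x = m + r*sin(t) tames the endpoint singularities, because
    sqrt((x - a)*(b - x)) = r*cos(t)."""
    m, r = (a + b) / 2.0, (b - a) / 2.0

    def integrand(t):
        x = m + r * np.sin(t)
        return x**j * r * np.cos(t) / np.sqrt(abs(p(x)))

    val, _err = quad(integrand, -np.pi / 2, np.pi / 2)
    return val

# Sanity check against a closed form: integral of 1/sqrt(x(1-x)) over [0,1] is pi.
print(period(lambda x: x * (1 - x), 0, 0.0, 1.0))   # approx 3.14159...

# Two of the period integrals for the genus-2 curve y^2 = x(x-1)(x-2)(x-3)(x-4):
p = lambda x: x * (x - 1) * (x - 2) * (x - 3) * (x - 4)
print(period(p, 0, 0.0, 1.0), period(p, 1, 0.0, 1.0))
```

The genuinely complex periods come from contours encircling pairs of branch points; the same substitution idea applies segment by segment along such a contour.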

Smooth projective planar curves: Let $f(x,y,z)$ be a homogeneous polynomial of degree $d$ cutting out a smooth plane curve, and set $F(x,y) = f(x,y,1)$. Then a basis of holomorphic $1$-forms on $\overline{X}$ is $x^i y^j \omega$ for $i+j \leq d-3$.
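As a sanity check on this count (a snippet of my own): the number of monomials $x^i y^j$ with $i+j \leq d-3$ is $(d-1)(d-2)/2$, which is exactly the genus of a smooth plane curve of degree $d$.

```python
# Count the monomials x^i y^j with i + j <= d - 3 and compare against the
# genus formula g = (d-1)(d-2)/2 for a smooth plane curve of degree d.
def basis_size(d):
    return sum(1 for i in range(d) for j in range(d) if i + j <= d - 3)

def genus(d):
    return (d - 1) * (d - 2) // 2

for d in range(3, 8):
    print(d, basis_size(d), genus(d))  # the last two columns agree
```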

Curves transverse to a toric compactification: The following includes both of the previous cases. Let $F(x,y) = \sum_{(i,j) \in A} F_{ij} x^i y^j$ for some finite set $A$ of exponents. Let $\Delta$ be the convex hull of $A$; this is a convex polytope. I will impose a condition for each edge of $\Delta$. Let $(p,q)$, $(p,q)+(u,v)$, $(p,q)+2 (u,v)$, ..., $(p,q) + k (u,v)$ be the lattice points on an edge. Suppose that the single-variable polynomial $\sum_{m=0}^{k} F_{p+mu,\,q+mv}\, z^m$ has no nonzero multiple roots. Then the closure of $X$ in the toric variety associated to $\Delta$ is smooth. In that case, a basis for the holomorphic $1$-forms is $x^{i-1} y^{j-1} \omega$, where $(i,j)$ runs over the lattice points in the interior of $\Delta$.
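The lattice-point count is easy to automate. A sketch of my own using scipy's `ConvexHull` (each facet satisfies $ax+by+c\leq 0$ on the hull, so strict inequalities detect the interior): for the genus-$2$ hyperelliptic curve $y^2 = x^5 + \cdots$, the interior lattice points of the Newton polygon are $(1,1)$ and $(2,1)$, and $x^{i-1}y^{j-1}\omega$ recovers the basis $\omega$, $x\omega$ from before.

```python
import numpy as np
from scipy.spatial import ConvexHull

def interior_lattice_points(exponents, tol=1e-9):
    """Lattice points strictly inside the Newton polygon (the convex hull
    of the exponent vectors).  Each hull facet satisfies a*x + b*y + c <= 0;
    strict interior means strict inequality for every facet."""
    pts = np.array(exponents, dtype=float)
    hull = ConvexHull(pts)
    xs, ys = pts[:, 0], pts[:, 1]
    found = []
    for i in range(int(xs.min()), int(xs.max()) + 1):
        for j in range(int(ys.min()), int(ys.max()) + 1):
            q = np.array([i, j, 1.0])
            if all(eq @ q < -tol for eq in hull.equations):
                found.append((i, j))
    return found

# Exponents spanning the Newton polygon of y^2 - (x^5 + ... + a_0):
A = [(0, 2), (5, 0), (0, 0)]
print(interior_lattice_points(A))  # -> [(1, 1), (2, 1)]
```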

$\endgroup$
$\begingroup$

$\def\CC{\mathbb{C}}$I'll come back later and leave an answer to (1), which is classical and straightforward. I have nothing to say about (3). I thought the most interesting question was (2), but I am not sure that I understood it correctly.

Here is how I understand question (2).

Let $X_0$ be the plane curve $F(x,y)=0$; let $X$ be the smooth projective completion of $X_0$. We will try to construct a uniformization $\CC \to X$, $z \mapsto (\phi(z), \phi'(z))$. We may try to define $\phi$ either by the ODE $F(\phi, \phi')=0$, or by the condition that $\phi^{-1}(x) = \int_{x_0}^{x} \tfrac{dx}{F_y(x, y(x))}$. Here $F_y$ is the partial derivative with respect to $y$ and $y(x)$ is a branch of the algebraic function defined by $F(x,y(x))=0$. What goes wrong?

We know something must go wrong because, if we had a nonconstant holomorphic map $\CC \to X$, it would lift to the universal cover of $X$. But, if $X$ has genus $\geq 2$, the universal cover of $X$ is isomorphic to the open disc $\mathbb{D}=\{ z : |z|<1 \}$. So we would have a nonconstant map $\CC \to \mathbb{D}$, and thus a nonconstant bounded entire function, contrary to Liouville's theorem.

Let's note that things are fine locally. Let $U$ be a simply connected open subset of $X_0$. Then $\tfrac{dx}{y(x)}$ is a well defined $1$-form on $U$ with an antiderivative $\psi : U \to \CC$. The derivative of $\psi$ is nonvanishing, so the inverse function theorem does provide $\psi$ with a local inverse $\phi$, and this $\phi$ does solve the ODE $F(\phi, \phi')=0$. However, this is a local discussion. When we try to continue $\phi$ to all of $\CC$, we encounter two problems.

First problem: $\psi$ will have ramification. It is clearer to think about $\tfrac{dx}{y(x)}$ as a $1$-form on $X_0$, not on $\CC$. On the curve $X_0$, we have $F_x(x,y)\, dx + F_y(x,y)\, dy = 0$, so $\tfrac{dx}{F_y} = - \tfrac{dy}{F_x}$. The condition that the curve $F=0$ is smooth says that at least one of these two denominators is always nonzero, so we can define a $1$-form $\omega$ by whichever side of this equation doesn't divide by zero. The $1$-form $\omega$ is nonvanishing on $X_0$ and seems to usually extend to a holomorphic $1$-form on $X$. (I don't know a precise theorem ruling out the possibility of a pole at one of the points of $X \setminus X_0$.)

However, if $X$ has genus $g$, then a nonzero holomorphic $1$-form on $X$ must have $2g-2$ zeroes (with multiplicity), and $2g-2>0$ for $g \geq 2$. So $\omega$ must vanish at some of the points of $X \setminus X_0$. For example, consider the curve $y^2 = x^5+ \cdots$, where the right hand side is a degree $5$ polynomial with distinct roots. There is one additional point $p$ in $X \setminus X_0$. Writing $z$ for a local coordinate near $p$, we have $y = z^{-5} + \cdots$ and $x = z^{-2} + \cdots$. We have $\omega = \tfrac{dx}{2 y} = \tfrac{dx}{2 \sqrt{x^5+\cdots}}$. At the point $p$, the numerator $dx$ looks like $z^{-3} dz$, and the denominator looks like $z^{-5}$, so $\omega$ has a zero of order $2$.
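This order-$2$ vanishing can be verified symbolically. A sketch of my own, taking $y^2 = x^5 + 1$ as a concrete stand-in for a quintic with distinct roots, and $x = 1/t^2$ as the local coordinate at $p$:

```python
from sympy import symbols, sqrt, series, limit

t = symbols("t", positive=True)

# Concrete stand-in for y^2 = x^5 + ...: take y^2 = x^5 + 1.
# Local coordinate at the point at infinity: x = 1/t^2, y = sqrt(x^5 + 1).
x = 1 / t**2
y = sqrt(x**5 + 1)

# omega = dx/(2y); its coefficient against dt is (dx/dt)/(2y).
coeff = x.diff(t) / (2 * y)

print(series(coeff, t, 0, 6))     # leading term -t**2: a zero of order 2
print(limit(coeff / t**2, t, 0))  # -> -1, confirming the order of vanishing
```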

If $\omega$ vanishes to order $k$ near a point $p$, then $\psi = \int \omega$ will vanish to order $k+1$, so the inverse function $\phi$ will have branching like a $(k+1)^{\mathrm{th}}$ root. So $\phi$ can't be defined globally on $\CC$; one needs to make some branch cuts to make a region on which it makes sense.

Second problem: Changing the path of integration. We defined $\psi(x) = \int_{x_0}^x \omega$. If we take this integral along two paths that differ in $H_1(X, \mathbb{Z})$, then the integral will change by a constant known as a period. In the genus $1$ case, this is where the double periodicity of $\phi$ comes from. On a genus $g$ curve, $H_1(X, \mathbb{Z})$ has rank $2g$, so there are $2g$ periods, which generate a dense group of translations. We can analytically continue $\phi$ around $\CC$ and come back on a branch which differs from our original branch by any element of this dense translation group.

In the Abel-Jacobi construction, one replaces the single $1$-form $\omega$ with $g$ one-forms $(\omega_1, \omega_2, \ldots, \omega_g)$. So $( \int \omega_1, \cdots, \int \omega_g)$ gives a map $\Psi$ from a cover of $X$ to $\CC^g$. This map $\Psi$ does not have the problems above: the $\omega_j$ have no common zero, so $\Psi$ is locally injective, and the periods form a discrete subgroup of $\CC^g$. But the target is higher dimensional than the source, so there is no hope of inverting $\Psi$.

Example (sorry that I don't have time to make pictures!): Let's take our curve $X_0$ given by $y^2 = x^5 + \cdots = \prod_{i=0}^4 (x-x_i)$. Let $p_i$ be the preimage of $x_i$ in $X_0$. We also write $p_{\infty}$ for the point of $X \setminus X_0$ and $x_{\infty}$ for the point $\infty$ of the Riemann sphere. The $x$-coordinate is a degree $2$ map from $X$ to the Riemann sphere, branched over the $x_i$ and over $x_{\infty}$.

Draw rays from $x_1$, $x_2$, $x_3$ and $x_4$ to $x_{\infty}$; let the angles between adjacent rays be $\theta_1$, $\theta_2$, $\theta_3$, $\theta_4$, where $\sum \theta_j = 2 \pi$. The preimages of these rays are four circles, each one passing through $p_{\infty}$ and through one of $p_1$, $p_2$, $p_3$, $p_4$. The circles meet each other at angles $\theta_j/2$, since $X \to \mathbb{CP}^1$ is $2$-fold ramified at $p_{\infty}$. Removing these four circles from $X$ leaves behind an open octagon $U$, and we recover the standard description of a genus two surface as an octagon with sides glued together.

The region $U$ is simply connected, and $\omega = \tfrac{dx}{2 y}$ is a well defined $1$-form on it. On $U$, the $1$-form $\omega$ has an antiderivative $\psi$. The region $\psi(U)$ is a planar octagon. As we discussed before, $\omega$ vanishes to order $2$ at $p_{\infty}$, so $\psi$ is $3$-fold ramified there, and the angles of the octagon $\psi(U)$ are $3 \theta_j/2$, each occurring twice for $1 \leq j \leq 4$. Note that this adds up to $6 \pi$, as the angles of an octagon should.

Opposite sides are parallel and of the same length; namely, the length of one such side is $2 \int_{p_j}^{p_{\infty}} \omega = \int_{x_j}^{x_{\infty}} \tfrac{dx}{\sqrt{x^5+\cdots}}$. The midpoint of this edge corresponds to $p_j$. If I didn't screw up, the displacement between parallel sides is $2 \int_{p_0}^{p_j} \omega = \int_{x_0}^{x_j} \tfrac{dx}{\sqrt{x^5+\cdots}}$. These integrals are the four periods, which generate a dense subgroup of $\CC$. As we try to analytically continue $\phi$ around $\CC$, we find branches of $\phi$ which are defined on translates of this octagon by those periods; if we encircle a vertex of the octagon, then $\phi$ branches like a cube root.

References: For very concrete discussions of the geometry of integrals like $\int \tfrac{dx}{\sqrt{x^5+\cdots}}$ I recommend Zeev Nehari's Conformal Mapping. For modern applications of these ideas in research, I recommend papers on "billiards" by people like Laura DeMarco, Curt McMullen or Alex Wright. You might start with these slides as a concrete example of describing Riemann surfaces by gluing polygons just as I have here.

$\endgroup$
  • $\begingroup$ This is an excellent answer, though it's a bit over my head. To the extent that I understand the content of this article (encyclopediaofmath.org/index.php/Jacobi_inversion_problem), given a formula $F\left(x,y\right)$ there is a complex manifold $S$ of complex dimension $N$ and a complex-valued function $\wp\left(\mathbf{z}\right)$ (where $\mathbf{z}=\left(z_{1},\ldots,z_{N}\right)$) defined on $S$ that, at least in the case where $N=1$, parameterizes the vanishing set of $F$ via $F\left(\wp\left(z\right),\wp^{\prime}\left(z\right)\right)=0$ for all $z$. $\endgroup$ – MCS Mar 13 at 0:30
  • $\begingroup$ I want to know what happens when $N\geq2$. Looking at the linked article, my guess would be that instead of “parameterizing” $F$'s vanishing set, we obtain a system: $$F\left(\wp,\frac{\partial\wp}{\partial z_{1}}\right)=F\left(\wp,\frac{\partial\wp}{\partial z_{2}}\right)=\ldots=F\left(\wp,\frac{\partial\wp}{\partial z_{N}}\right)=0$$ or something like that. Moreover, I want to know how to construct and compute $\wp$ for a given $F$. $\endgroup$ – MCS Mar 13 at 0:30
  • $\begingroup$ That doesn't match my understanding of Jacobi inversion? Here is what I think Jacobi inversion is (see your link, near eqn (1)). Let $\omega_1$, ..., $\omega_g$ be a basis of holomorphic $1$-forms on $X$. Fix a base point $x_0$. Locally, we have a map $X^g \to \mathbb{C}^g$ sending $(x_1, \ldots, x_g)$ to $\left( \sum_j \int_{x_0}^{x_j} \omega_1, \ldots, \sum_j \int_{x_0}^{x_j} \omega_g \right)$. I understood Jacobi inversion as locally inverting this map. I don't know a way to make that look like the equations you give. Maybe someone else does. $\endgroup$ – David E Speyer Mar 13 at 1:20
  • $\begingroup$ I don't think it makes sense to add more variables in that way. And I think I have already said what there is to be said about parametrizing a curve as $(\phi, \phi')$: It works locally, but globally there are all sorts of branching issues with $\phi$. I'll explain why I don't think adding variables in this way is useful: $\endgroup$ – David E Speyer Mar 13 at 1:30
  • $\begingroup$ For any $x$, there are only finitely many $y$ with $F(x,y)=0$, so $(\partial \wp)/(\partial z_j)$ can only take finitely many values. The easiest possibility is that $(\partial \wp)/(\partial z_1) = \cdots = (\partial \wp)/(\partial z_n)$, but then $\wp$ is just a function of $\sum z_j$. I haven't carefully thought through the more complicated combinatorial options, but I don't think it would help. For example, if our equation is $y^2 = p(x)$, then we would have $\pm (\partial \wp)/(\partial z_1) = \cdots = \pm (\partial \wp)/(\partial z_n)$ and $\wp$ would be a function of $\sum \pm z_j$. $\endgroup$ – David E Speyer Mar 13 at 1:33
