1. Introduction and statement of results
Let
$\{S_n;\ n\geq 1\}$
be a random walk with independent and identically distributed increments
$\lbrace X_k;\ k\geq 1\rbrace$
. We shall assume that the increments have negative expected value,
$\mathbb{E} X_1=-a$
. Let
$\overline F(x)=\mathbb{P} (X_1>x)$
be the tail distribution function of
$X_1$
. Let
$\tau\coloneqq \min\lbrace n\geq1\,{:}\, S_n\leq 0 \rbrace$
be the first time the random walk exits the positive half-line. We consider the area under the random walk excursion
$\{S_1,S_2,\ldots,S_{\tau-1}\}$
:
\begin{equation*}A_\tau\coloneqq \sum_{n=1}^{\tau-1}S_n.\end{equation*}
Since
$\tau$
is finite almost surely, the area
$A_\tau$
is finite as well. In this note we will study asymptotics for
$\mathbb{P}(A_\tau>x)$
, as
$x\to \infty$
, in the case when the distribution of increments is heavy-tailed. This paper continues the research of [Reference Perfilev and Wachtel14], where the light-tailed case was considered.
The area under the random walk excursion appears in a number of combinatorial problems, for example in investigations of the asymptotic number of random trees, see [Reference Takacs16,Reference Takacs17,Reference Takacs18]; some further references may be found in [Reference Denisov, Kolb and Wachtel6]. Another application area is statistical physics, see, e.g., [Reference Dobrushin and Hryniv8] or [Reference Carmona and Pétrélis3] and references therein. Applications to queuing theory for the analysis of the load in Transmission Control Protocol networks and to risk theory are discussed in [Reference Borovkov, Boxma and Palmowski2].
In the light-tailed case logarithmic asymptotics for
$\mathbb{P}(A_\tau>x)$
was obtained in [Reference Duffy and Meyn10], and exact local asymptotics in [Reference Perfilev and Wachtel14]. Heavy-tailed asymptotics for
$\mathbb{P}(A_\tau>x)$
was previously studied in [Reference Borovkov, Boxma and Palmowski2], which considered the case when the increments of the random walk have a distribution with regularly varying tail, that is
$\overline F(x)=x^{-\alpha }L(x)$
, where L(x) is a slowly varying function. For
$\alpha>1$
it was shown that
(1)\begin{equation}\mathbb{P}(A_\tau>x)\sim \mathbb{E}\tau\,\overline F\big(\sqrt{2ax}\big),\qquad x\to\infty.\end{equation}
Here, note that
$\mathbb{E}[\tau]<\infty$
follows from the assumption
$\mathbb{E}[X_1]=-a<0$
; see, e.g., [11, Chapter XII.2, Theorem 2]. The asymptotics can be explained by traditional heavy-tailed one-big-jump heuristics. In order to have a huge area, the random walk should have a large jump, say y, at the very beginning of the excursion. After this jump the random walk goes down along the line
$y-an$
according to the law of large numbers. Thus, the duration of the excursion should be approximately
$y/a$
. As a result, the area will be of order
$y^2/2a$
. Now, from the equality
$x=y^2/2a$
we infer that a jump of order
$\sqrt{2ax}$
is needed. Since the same strategy is valid for the maximum
$M_\tau\coloneqq \max_{n<\tau}S_n$
of the first excursion, one can rewrite (1) in the following way:
\begin{equation*}\mathbb{P}(A_\tau>x)\sim \mathbb{P}\big(M_\tau>\sqrt{2ax}\big),\qquad x\to\infty.\end{equation*}
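To make the one-big-jump picture concrete, here is the back-of-the-envelope computation it rests on (a sketch only, not part of the formal argument):
% after a jump of size y the walk descends along the line y - an, so
% the excursion lasts approximately y/a steps and
\[
A_\tau \approx \sum_{n\le y/a}(y-an) \approx \int_0^{y/a}(y-at)\,{\rm d}t = \frac{y^2}{2a};
\]
% equating y^2/(2a) with x gives the required jump size y = \sqrt{2ax}.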
However, the class of regularly varying distributions does not include all subexponential distributions; it excludes, in particular, the log-normal distribution and the Weibull distribution with parameter
$\beta<1$
. The asymptotics for these remaining cases was posed as an open problem in [Reference Kulik and Palmowski13, Conjecture 2.2] for a closely related workload process. We will reformulate this conjecture as
(2)\begin{equation}\mathbb{P}(A_\tau>x)\sim \mathbb{P}\big(M_\tau>\sqrt{2ax}\big),\qquad x\to\infty,\end{equation}
when
$F\in\mathcal S$
where
$\mathcal S$
is a subclass of subexponential distributions. Note that using the asymptotics for
(3)\begin{equation}\mathbb{P}(\tau>x)\sim \mathbb{E}\tau\,\overline F(ax),\qquad x\to\infty,\end{equation}
from [Reference Denisov and Shneer7] for Weibull distributions with parameter
$\beta<1/2$
, we can see that in this case the asymptotics in (2) is equivalent to (1). In this note we partially settle (2). It is not difficult to show that the same arguments hold for the workload process and to prove the same asymptotics for the area of the workload process, thus settling the original [Reference Kulik and Palmowski13, Conjecture 2.2]. In passing, we note that it is doubtful that (2) holds in full. The reason is that for both
$\tau$
and
$A_\tau$
the asymptotics (3) and (2) are no longer valid for Weibull distributions with parameter
$\beta>1/2$
. The analysis for
$\beta>1/2$
involves a more complicated optimisation procedure leading to a Cramér series, and it is unlikely that the answers will be the same for the area and for the exit time.
1.1. Main results
We will now present the results. We will start with the regularly varying case. In this case the connection between the tails of
$A_\tau$
and
$M_\tau$
is strong and we will be able to use the asymptotics for
$\mathbb{P}(M_\tau > x)$ found in [Reference Foss, Palmowski and Zachary12] (see also a short proof in [Reference Denisov4]) to find the asymptotics for $\mathbb{P}(A_\tau > x)$
.
Proposition 1. The following two statements hold.
(a) If $\overline{F}(x)\coloneqq \mathbb{P}(X_1>x)=x^{-\alpha}L(x)$ with some $\alpha\ge1$ and $\mathbb{E}|X_1|<\infty$ then, uniformly in $y\in[\varepsilon\sqrt{x},\sqrt{2ax}]$, $\varepsilon\in(0,1)$,
(4)\begin{equation}\mathbb{P}(A_\tau > x,M_\tau > y)\sim \mathbb{E}\tau\overline{F}(\sqrt{2ax}).\end{equation}
(b) If $\overline{F}(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$, where g(x) is a monotone continuously differentiable function satisfying $\frac{g(x)}{x^\beta}\downarrow$ for some $\beta\in(0,1/2)$, and $\mathbb{E}|X_1|^\varkappa<\infty$ for some $\varkappa>1/(1-\beta)$, then (4) holds uniformly in $y\in\left[\sqrt{2ax}-\frac{R\sqrt{2ax}}{g(\sqrt{2ax})},\sqrt{2ax}\right]$, $R>0$.
This statement obviously implies the following lower bound for the tail of
$A_\tau$
:
(5)\begin{equation}\mathbb{P}(A_\tau>x)\ge (1+o(1))\,\mathbb{E}\tau\,\overline F\big(\sqrt{2ax}\big).\end{equation}
Furthermore, using this proposition one can give an alternative proof of (1) under the assumption of the regular variation of
$\overline F$
, which is much simpler than the original one in [Reference Borovkov, Boxma and Palmowski2]. We first split the event
$\lbrace A_\tau > x\rbrace$
into two parts,
(6)\begin{equation}\mathbb{P}(A_\tau>x)=\mathbb{P}(A_\tau>x,\,M_\tau>y)+\mathbb{P}(A_\tau>x,\,M_\tau\le y).\end{equation}
Clearly,
$\lbrace A_\tau > x, M_\tau\leq y\rbrace\subseteq\lbrace\tau > x/y\rbrace,$ since on the former event $x<A_\tau\le(\tau-1)y<\tau y$.
Therefore,
\begin{equation*}\mathbb{P}(A_\tau>x,\,M_\tau\le y)\le \mathbb{P}(\tau>x/y).\end{equation*}
When
$\alpha>1$
, according to Theorem I in [Reference Doney9] or [Reference Denisov and Shneer7, Theorem 3.2],
$\mathbb{P}(\tau > t)\sim\mathbb{E}\tau\bar{F}\left(a t\right)$
as
$t\to\infty$
. Choosing
$y=\varepsilon \sqrt{x}$
and recalling that
$\overline{F}$
is regularly varying, we get
(7)\begin{equation}\mathbb{P}(A_\tau>x,\,M_\tau\le \varepsilon\sqrt{x})\le \mathbb{P}(\tau>\sqrt{x}/\varepsilon)\le (1+o(1))\,\mathbb{E}\tau\,\overline F(a\sqrt{x}/\varepsilon)\le C\varepsilon^{\alpha}\,\overline F\big(\sqrt{2ax}\big).\end{equation}
It follows from the first statement of Proposition 1 that
\begin{equation*}\mathbb{P}(A_\tau>x,\,M_\tau>\varepsilon\sqrt{x})\sim \mathbb{E}\tau\,\overline F\big(\sqrt{2ax}\big).\end{equation*}
Plugging this and (7) into (6), we get, as
$x\to\infty$
,
\begin{equation*}\mathbb{P}(A_\tau>x)=\big(1+O(\varepsilon^{\alpha})+o(1)\big)\,\mathbb{E}\tau\,\overline F\big(\sqrt{2ax}\big).\end{equation*}
Letting
$\varepsilon\to0$
, we arrive at (1).
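For the reader's convenience, the regular-variation computation behind (7) runs as follows (a sketch, using only the standing assumption $\overline F(t)=t^{-\alpha}L(t)$ with L slowly varying):
\[
\frac{\overline F(a\sqrt{x}/\varepsilon)}{\overline F(\sqrt{2ax})}
= \Big(\frac{a\sqrt{x}/\varepsilon}{\sqrt{2ax}}\Big)^{-\alpha}\,
  \frac{L(a\sqrt{x}/\varepsilon)}{L(\sqrt{2ax})}
\longrightarrow \Big(\frac{\sqrt{2}\,\varepsilon}{\sqrt{a}}\Big)^{\alpha},
\qquad x\to\infty,
\]
% the ratio of slowly varying factors tends to 1 since both arguments
% are of order \sqrt{x};
so the right-hand side of (7) is indeed $O(\varepsilon^{\alpha})\,\overline F(\sqrt{2ax})$.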
The case of heavy-tailed distributions, which satisfy the conditions of Proposition 1(b), is more complicated. In particular, it seems that in this case there is a regime when the asymptotics in (1) is no longer valid. We will treat this case by using exponential bounds similar to Section 2.2 in [Reference Perfilev and Wachtel14] and asymptotics for
$\mathbb{P}(\tau > x)$
from [Reference Denisov, Dieker and Shneer5] and [Reference Denisov and Shneer7].
First, we will introduce a subclass of subexponential distributions to consider. We will assume that
$\mathbb{E}[X_1^2]=\sigma^2<\infty$
. Without loss of generality we may assume that
$\sigma=1$
.
Assumption 1. Let
(8)\begin{equation}\overline F(x)\sim x^{-2}\,{\rm e}^{-g(x)},\end{equation}
where g(x) is an eventually increasing function such that eventually
(9)\begin{equation}\frac{g(x)}{x^{\gamma_0}}\downarrow\end{equation}
for some
$\gamma_0\in(0,1]$
.
Due to the asymptotic nature of equivalence in (8), without loss of generality we may assume that g is continuously differentiable and that (9) holds for all
$x>0$
. Clearly, monotonicity in (9) implies that
(10)\begin{equation}g'(x)\le \gamma_0\,\frac{g(x)}{x}\end{equation}
for all sufficiently large x. Using the Karamata representation theorem we can show that this class of subexponential distributions includes regularly varying distributions
$\overline F(x)\sim x^{-r}L(x)$
for
$r>2$
Also, it is not difficult to show that log-normal distributions and Weibull distributions (
$\overline F(x) \sim {\rm e}^{-x^\beta}$
,
$\beta\in(0,1)$
) belong to our class of distributions. This class previously appeared in [15] for the analysis of large deviations of sums of subexponential random variables on the whole axis.
Now we are able to give rough (logarithmic) asymptotics for
$\gamma_0\le 1$
.
Theorem 1. Let
$\mathbb{E}[X_1]=-a<0$
and
${\rm Var}(X_1)<\infty$
. Assume that Assumption 1 holds with
$\gamma_0=1$
. Then, there exists a constant
$C>0$
such that

Furthermore, for any
$\varepsilon>0$
there exists
$C>0$
such that

In particular, if
$\gamma_0<1$
then
\begin{equation*}\log \mathbb{P}(A_\tau>x)\sim \log\overline F\big(\sqrt{2ax}\big)\sim -g\big(\sqrt{2ax}\big).\end{equation*}
To obtain the exact asymptotics we will impose a further requirement on the function g.
Assumption 2. Let g(x) satisfy
(11)\begin{equation}x\,g'(x)\to\infty,\qquad x\to\infty.\end{equation}
This assumption implies that
(12)\begin{equation}\frac{g(x)}{\log x}\to\infty,\qquad x\to\infty.\end{equation}
In particular, it excludes all regularly varying distributions.
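To indicate why the exclusion takes place (a sketch, relying on (11) and (12) in the forms reconstructed above): if $x g'(x)\to\infty$, then for every $M>0$ there is $x_M$ with $g'(t)\ge M/t$ for all $t\ge x_M$, and integration gives
\[
g(x)\ \ge\ g(x_M)+M\log\frac{x}{x_M},
\qquad\text{so}\qquad
\liminf_{x\to\infty}\frac{g(x)}{\log x}\ \ge\ M .
\]
% Since M is arbitrary, g(x)/\log x tends to infinity; a regularly
% varying tail corresponds to g(x) = O(\log x) and is thus excluded.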
1.2. Discussion and organisation of the paper
The main result of this note, Theorem 2, provides tail asymptotics for
$A_\tau$
in the case when increments of the random walk have a Weibull-like distribution with the shape parameter
$\gamma_0<1/2$
. We believe that
$\mathbb{P}(A_\tau > x)$
behaves differently in the case when
$g(x)=x^{\gamma_0}$
with
$\gamma_0\ge 1/2$
. This change in the asymptotic behaviour appears in the analysis of the exact asymptotics for
$\mathbb{P}(\tau > n)$
and
$\mathbb{P}(S_n>an)$
; see, correspondingly, [Reference Denisov, Dieker and Shneer5,Reference Denisov and Shneer7].
The conjecture in [Reference Kulik and Palmowski13] was formulated for the workload process of a single-server queue rather than for the area under the random walk excursion. However, one can prove analogous results for Lévy processes by essentially the same arguments. It is well known that the workload of the M/G/1 queue can be represented as a Lévy process, and thus our results can be transferred to this setting almost immediately. We believe that the treatment of the workload of the general G/G/1 queue is not that different either.
The paper is organised as follows. We will start by proving Proposition 1 in Section 2. Then we will derive a useful exponential bound and prove Theorem 1 in Section 3. Finally, we derive exact asymptotics for
$\mathbb{P}(A_\tau > x)$
and thus prove Theorem 2 in Section 4.
2. Proof of Proposition 1
Before giving the proof we collect some auxiliary results that we will need in this and the following sections.
We will require the following statement, the first part of which follows from Theorem 2 in [Reference Foss, Palmowski and Zachary12] (see also [Reference Denisov4] for a short proof), and the second part from [Reference Denisov and Shneer7, Theorem 3.2].
Proposition 2. Let
$\mathbb{E}[X_1]=-a$
and either (a)
$\overline{F}(x)\coloneqq \mathbb{P}(X_1>x)=x^{-\alpha}L(x)$
with some
$\alpha>1$
or (b)
$\overline{F}(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$
, where g(x) is a monotone continuously differentiable function satisfying
$\frac{g(x)}{x^\beta}\downarrow$
for
$\beta\in(0,1/2)$
, and
$\mathbb{E}|X_1|^\varkappa<\infty$
for some
$\varkappa>1/(1-\beta)$
; then, for any fixed k,


(15)\begin{equation}\mathbb{P}(M_\tau>x)\sim \mathbb{E}\tau\,\overline F(x),\qquad x\to\infty,\end{equation}
and
(16)\begin{equation}\mathbb{P}(\tau>n)\sim \mathbb{E}\tau\,\overline F(an),\qquad n\to\infty.\end{equation}
In the proof we will need some properties of the function
$\overline F(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$
that we will summarise in the following lemma, which will also be used later in the paper.
Lemma 1. Let the tail distribution function
$\overline F(x)$
be such that
$\overline{F}(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$
, where g(x) is a monotone continuously differentiable function satisfying
$\frac{g(x)}{x^\beta}\downarrow$
for
$\beta\in(0,1)$
, and
$\mathbb{E}|X_1|^\varkappa<\infty$
for some
$\varkappa>1/(1-\beta)$
. Then,




Proof. Since g(x) is continuously differentiable and
$\frac{g(x)}{x^\beta}$
is monotone decreasing, we necessarily have
\begin{equation*}g'(x)\le \beta\,\frac{g(x)}{x},\end{equation*}
implying (17). To prove (18), note that

To prove (19), note that, since
$x-y\ge y$
,

since
$2^\beta\le 1+\beta$
for
$\beta\in[0,1]$
. To show (20), note that, uniformly in
$y\le x^{1/\varkappa}$
, as
$x\to\infty$
,

since
$1/\varkappa<1-\beta$
. Here, we have also made use of (18).
Proof of Proposition 2. To prove (13), (14), and (15), by Theorem 2 of [Reference Foss, Palmowski and Zachary12] it is sufficient to show that (a) or (b) implies that
$F\in\mathcal S^*$
, that is,
$\int_0^\infty \overline F(y)\,{\rm d}y<\infty$
and
\begin{equation*}\int_0^x \overline F(x-y)\,\overline F(y)\,{\rm d}y\sim 2\,\overline F(x)\int_0^\infty \overline F(y)\,{\rm d}y,\qquad x\to\infty.\end{equation*}
The fact that (a) implies
$F\in\mathcal S^*$
is well known and follows immediately from the dominated convergence theorem, since
$ \overline F(x)\sim \overline F(x-y)$
for all fixed y and

and
$\overline F(x-y)\le C\overline F(x)$
for some
$C>0$
when
$y\le x/2$
. Now, assume that (b) holds and show that
$F\in\mathcal S^*$
. Consider

Uniformly in
$y\in [\ln x ,x/2]$
we have, by (19),

and therefore, since
$\varkappa>1$
,

Next, applying (20) we see that
$\frac{\overline F(x-y)}{\overline F(x)} \to 1 $
uniformly in
$y\in [0, \ln x]$
, which implies that
$F\in\mathcal S^*$
.
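The splitting used in this verification can be summarised in one display (a sketch of the standard $\mathcal S^*$ argument):
\[
\int_0^x \overline F(x-y)\,\overline F(y)\,{\rm d}y
= 2\int_0^{x/2}\overline F(x-y)\,\overline F(y)\,{\rm d}y
= 2\Big(\int_0^{\ln x}+\int_{\ln x}^{x/2}\Big)\overline F(x-y)\,\overline F(y)\,{\rm d}y .
\]
% The first equality uses the symmetry of the integrand under
% y -> x-y; the range [0, ln x] is then handled via (20) and the
% range [ln x, x/2] via (19).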
The proof of (16) can be done by verifying that (8) and (9) imply that the conditions of Theorem 3.1 (and hence of Theorem 3.2) of [Reference Denisov and Shneer7] hold. We will provide the arguments in the more complicated case (b). First,
$X_1+a$
satisfies
$\mathbb{E}[|X_1+a|^\varkappa]<\infty$
by the assumption of this proposition. Convergence (3.1) in Theorem 3.1 of [Reference Denisov and Shneer7] holds by (20). Let

where
$\xi_i=X_i+a$
,
$i=1,2$
. To show (3.2) in Theorem 3.1 of [Reference Denisov and Shneer7] we need to prove that
$\varepsilon(n)=o(1/n)$
. For
$x\ge 2n^{1/\varkappa}$
we have

Then, using (19),

for some C. Integrating by parts,

uniformly in
$x\ge 2n^{1/\varkappa}$
. Using (21),
$P_2\le Cx^{-\varkappa} {\rm e}^{(\beta-1)g(x)}=o(1/n)$
uniformly in
$x\ge 2n^{1/\varkappa}$
, which proves that
$\varepsilon(n)=o(1/n)$
.
Define
$\sigma_y=\inf\lbrace n<\tau\,{:}\,S_n>y\rbrace.$
Lemma 2. Under the conditions of Proposition 2,
(22)\begin{equation}\mathbb{P}(\sigma_y=k \mid M_\tau>y)\to q_k\coloneqq \frac{\mathbb{P}(\tau>k-1)}{\mathbb{E}\tau},\qquad y\to\infty,\ k\ge1.\end{equation}
Proof. For every
$k\geq 1$
,

It follows from (14) and (15) that

It is clear that

Lemma 3. For every fixed k,
\begin{equation*}\mathbb{P}(S_{\sigma_y}>v,\,\sigma_y=k)\sim \overline F(v)\,\mathbb{P}(\tau>k-1),\qquad y\to\infty,\ \text{uniformly in } v\ge y.\end{equation*}
Proof. Fix some
$N>0$
and define the events

It is clear that
$D_{k,N}\subseteq \lbrace S_k>v\rbrace.$
Therefore,

For the first term we have, for $y>(k-1)N$,

uniformly in
$v > y$
, where
$\varepsilon_N^{(1)}\coloneqq \mathbb{P}(\tau \gt k-1,\vert X_l\vert> N \text{ for some }l<k)\to0$
as
$N \to \infty$
.
Furthermore,

where
$\varepsilon_N^{(2)}\coloneqq k\left(1-(\mathbb{P}(\vert X_1\vert\leq N))^{k-1}\right)\to0,$
as
$N \to \infty$
. Combining (23) and (24) and letting
$N\rightarrow\infty$
, we get the desired relation.
We now turn to the study of the tail behaviour of
$A_\tau$
on the event
$\{\sigma_y=k\}$
. For the corresponding result we need the following property of
$\overline{F}$
.
Lemma 4. Assume that the conditions of Proposition 1 are fulfilled. Then
(25)\begin{equation}\overline F(y+h)\sim \overline F(y),\qquad y\to\infty,\end{equation}
for any
$h=o(y/g(y))$
. Furthermore, for every
$R>0$
there exists a constant C such that
(26)\begin{equation}\overline F(y-h)\le C\,\overline F(y)\qquad\text{for all } 0\le h\le R\,\frac{y}{g(y)}.\end{equation}
Proof. Since
$\overline{F}(x)\sim x^{-\varkappa}{\rm e}^{-g(x)}$
and
$\frac{y}{y+h}\to1$
, (25) will follow from
(27)\begin{equation}g(y+h)-g(y)\to 0,\qquad y\to\infty.\end{equation}
Since
$\frac{g(x)}{x^\beta}$
is monotone decreasing and g is differentiable,
(28)\begin{equation}g'(t)\le \beta\,\frac{g(t)}{t}.\end{equation}
Then, for
$h<0$
,

In the last step we have used the fact that
$\frac{g(t)}{t}$
is decreasing. Similarly, for
$h>0$
,

These estimates yield (27).
To prove the second claim we note that, by (28),

If
$h\le R\frac{y}{g(y)}$
then

This completes the proof.
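As a concrete instance of (25) (a sketch for the Weibull-type choice $g(y)=y^{\beta}$, $\beta\in(0,1)$, an assumption made only for this example): here $y/g(y)=y^{1-\beta}$, and for $0<h=o(y^{1-\beta})$,
\[
g(y+h)-g(y)\ \le\ g'(y)\,h\ =\ \beta\,y^{\beta-1}h\ \longrightarrow\ 0,
\]
% g' is decreasing for this choice of g, so the mean-value increment
% is at most g'(y) h;
which is exactly the conclusion $\overline F(y+h)\sim\overline F(y)$ of (25).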
Lemma 5. Assume that the conditions of Proposition 1 hold. Then
\begin{equation*}\mathbb{P}(A_\tau>z,\,\sigma_y=k)\sim q_k\,\mathbb{E}\tau\,\overline F\big(\sqrt{2az}\big)\qquad\text{for every fixed } k,\end{equation*}
uniformly in
$y\in[\varepsilon\sqrt{z},\sqrt{2az}]$
for regularly varying tails
$\overline{F}$
and in
$y\in\big[\sqrt{2ax}-\frac{R\sqrt{2ax}}{g(\sqrt{2ax})},\sqrt{2ax}\big]$
for tails satisfying the conditions of part (b) in Proposition 1.
Proof. By the Markov property, for every
$z>0$
,

Let
$\varkappa\in(1/(1-\beta),2)$
if
$\overline{F}$
satisfies the conditions of part (b), and let
$\varkappa=1$
in the case when
$\overline{F}$
is regularly varying. Fix some
$\delta>0$
and consider the set

Since
$\mathbb{E}|X_1|^\varkappa<\infty$
, it follows from the Marcinkiewicz–Zygmund law of large numbers that

Consequently,
$\mathbb{P}(B_v \mid S_0=v)\to 1$
as
$v\to\infty$
. This implies that, as
$y\rightarrow\infty$
,

On the event
$B_v\cap\{S_0=v\}$
one has

Consequently,

and

on the same event. In other words,
$\mathbb{P}\left(\lbrace A_\tau > z\rbrace\cap B_v \mid S_0=v\right)=\mathbb{P}(B_v \mid S_0=v)$
if
$v-\delta v^{1/\varkappa}\ge\sqrt{2az}$
, and
$\mathbb{P}\left(\lbrace A_\tau > z\rbrace\cap B_v \mid S_0=v\right)=0$
if
$v+\delta v^{1/\varkappa}<\sqrt{2az}$
. Therefore, for all v large enough,

and

By Lemma 3,
$\mathbb{P}(S_{\sigma_y}>v,\sigma_y=k)\sim\bar{F}(v)\mathbb{P}(\tau > k-1)$
as
$y\rightarrow\infty$
uniformly in
$v\geq y$
and, consequently,

and

Under our assumptions on
$\overline{F}$
, we have

Indeed, this relation is obvious for regularly varying tails, and under the conditions of part (b) it is a particular case of (25). Therefore,

Combining (15) and (22), we get
$\mathbb{P}(\sigma_y=k)\sim q_k\mathbb{E}\tau\overline{F}(y).$
Thus, it remains to show that
$\overline{F}(y)=O(\overline{F}(\sqrt{2az}))$
. This is obvious for regularly varying tails and
$y\ge \varepsilon\sqrt{z}$
. For distributions satisfying the conditions of part (b), it suffices to apply (26) with
$y=\sqrt{2az}$
.
Proof of Proposition 1. For every fixed
$N\geq1$
we have

For the last term on the right-hand side we have

It follows from (22) that
$\mathbb{P}(\sigma_y>N \mid M_\tau > y)\rightarrow\sum_{j=N+1}^\infty q_j$
as
$y\rightarrow\infty$
. Then, using (15), we get

where
$\varepsilon_N\rightarrow 0$
as
$N\rightarrow\infty$
.
For every fixed k we have
$\mathbb{P}(A_\tau > x, \sigma_y=k, M_\tau > y)=\mathbb{P}(A_\tau > x, \sigma_y=k)$
. Since
$S_j\in(0,y)$
for all
$j<k$
, we obtain

and

Using Lemma 5 with
$z=x$
and with
$z=x-ky$
, we conclude that
$\mathbb{P}(A_\tau > x,\sigma_y=k)\sim q_k\mathbb{E}\tau\overline{F}(\sqrt{2ax})$
. Consequently,

Plugging (30) and (31) into (29) and letting
$N\rightarrow\infty$
, we obtain

Recalling that
$\overline{F}(y)=O(\overline{F}(\sqrt{2ax}))$
, we finish the proof.
3. Proof of Theorem 1
We start by proving an exponential estimate for the area
$A_n$
when the random variables
$X_j$
are truncated. Let
$\overline X_n=\max(X_1,\ldots,X_n)$
. The next result is our main technical tool to investigate trajectories without big jumps.
Lemma 6. Let
$\mathbb{E}[X_1]=-a$
and
$\sigma^2\coloneqq {\rm Var}(X_1)<\infty$
. Assume that the distribution function F of
$X_j$
satisfies (8) and that (9) holds with
$\gamma_0=1$
. Then there exists a constant
$C_0>0$
such that

where
$\lambda = \frac{g(y)}{y}$
.
Proof. We will prove this lemma by using the exponential Chebyshev inequality. For that, we need to obtain estimates for the moment-generating function of
$A_n$
. First,
\begin{equation*}\mathbb{E}\big[{\rm e}^{\lambda A_n/n};\,\overline X_n\le y\big]=\prod_{j=1}^{n}\varphi_y(\lambda_{n,j})\end{equation*} (here we use the representation $A_n=\sum_{k=1}^{n}S_k=\sum_{j=1}^{n}(n-j+1)X_j$ and the independence of the $X_j$),
where
$\varphi_y(t) \coloneqq \mathbb{E}[{\rm e}^{t X_j};\ X_j\le y]$
and
$\lambda_{n,j} \coloneqq \lambda\frac{(n-j+1)}{n}$
. Then,

Using the elementary bound
${\rm e}^x\le 1+x+x^2$
for
$x\le 1$
, we obtain

Next, using integration by parts and the assumption in (8),

Now note that, for
$t\le y$
,
\begin{equation*}\lambda t=\frac{g(y)}{y}\,t\le g(t)\end{equation*}
due to the condition in (9). Then,

and, therefore,

where we also used the Chebyshev inequality. As a result, for some constant C,

Consequently,

Finally,

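In summary, the proof rests on the following Chernoff-type step (a sketch, with $\lambda=g(y)/y$ as in the statement of the lemma):
\[
\mathbb{P}\big(A_n>x,\ \overline X_n\le y\big)
\ \le\ {\rm e}^{-\lambda x/n}\,\mathbb{E}\big[{\rm e}^{\lambda A_n/n};\ \overline X_n\le y\big]
\ =\ {\rm e}^{-\lambda x/n}\prod_{j=1}^{n}\varphi_y(\lambda_{n,j}),
\]
% using A_n = \sum_{j=1}^n (n-j+1) X_j and the independence of the
% increments; the factors \varphi_y(\lambda_{n,j}) are then bounded
% as above.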
We can now obtain upper bounds for the tail of
$A_\tau$
using the exponential bound in Lemma 6.
Lemma 7. Let
$\mathbb{E}[X_1]=-a<0$
and
${\rm Var}(X_1)<\infty$
. Assume that the distribution function F of
$X_j$
satisfies (8), and that (9) holds with
$\gamma_0=1$
. Then there exists a constant
$C>0$
such that

for all y satisfying
$C_0 g(y)\le ay/4$
, where
$C_0$
is the constant given by Lemma 6. Moreover,

Proof. Using Lemma 6 with
$y=\sqrt{2ax}$
we obtain

where
$\lambda = \frac{g(\sqrt {2ax} )}{\sqrt {2ax}}$
and
$I=\frac{a}{2}-C\lambda.$
The assumption
$C_0g(y)\le y\frac{a}{4}$
implies that
$I>\frac{a}{4}$
. Since I is positive, we have the inequality

With formula (25) on page 146 of [Reference Bateman1], we have

Now, using the asymptotics for the modified Bessel function
\begin{equation*}K_{\nu}(z)\sim \sqrt{\frac{\pi}{2z}}\,{\rm e}^{-z},\qquad z\to\infty,\end{equation*}
we obtain

Therefore, (32) is proven.
The second claim in the lemma obviously holds for all x such that
$C_0g(\sqrt{2ax})\ge a\sqrt{2ax}$
. Assume that x is so large that
$C_0g(\sqrt{2ax})< a\sqrt{2ax}$
. Clearly,

By (32) with
$y=\sqrt{2ax}$
,

Next,

Then, the claim follows.
Now we will give a lower bound.
Lemma 8. Let
$\mathbb{E}[X_1]=-a<0$
and
${\rm Var}(X_1)<\infty$
. Then, for any
$\varepsilon>0$
there exists
$C>0$
such that

Proof. Fix
$N\ge 1$
. Put
$y^+ =\sqrt{2ax}+Cx^{1/4+\varepsilon/2},$
where C will be picked later. Since
$\mathbb{E}[X_1^2]<\infty$
, by the strong law of large numbers,

Hence, for any
$\delta>0$
we can pick
$R>0$
such that

Define

If
$C>1+(2/a)$
then, for all x large enough,
$al+l^{1/2+\varepsilon}+R\le\sqrt{2ax}+(2x/a)^{1/4+\varepsilon/2}+R\le y^+$
for all
$l\le\sqrt{2x/a}$
. Therefore, for every
$k\le N$
,
$E^+_k\subset \{\tau > k+\sqrt{2x/a}\}$
. Furthermore, if
$\tau > k+\sqrt{2x/a}$
then, on the event
$E_k^+$
,

Now, we can choose C so large that, for every
$k\le N$
,
$E_k^+\subset \{A_\tau > x\}$
. Hence,

For every fixed k we have
$\mathbb{P}\left(\overline X_{k-1}\le y^+,\tau > k-1\right) \to \mathbb{P}\left(\tau > k-1\right)$
as
$x\to\infty$
. Furthermore,
$\sum_{k=0}^N \mathbb{P}(\tau > k)\to\mathbb{E}\tau$
as
$N\to\infty$
. Therefore, we can pick sufficiently large N such that

Then, for all x sufficiently large,
$\mathbb{P}(A_{\tau}>x)\ge (1-\delta)^2\mathbb{E}\tau\overline F(y^+)$
. As
$\delta>0$
is arbitrarily small, we arrive at the conclusion.
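Schematically, the lower-bound strategy just carried out reads as follows (a sketch, phrased in terms of the events $E_k^+$ from the proof):
\[
\mathbb{P}(A_\tau>x)\ \ge\ \sum_{k=1}^{N}\mathbb{P}(E_k^+)
\ \ge\ (1-\delta)\,\overline F(y^+)\sum_{k=1}^{N}\mathbb{P}\big(\overline X_{k-1}\le y^+,\ \tau>k-1\big)
\ \ge\ (1-\delta)^2\,\mathbb{E}\tau\,\overline F(y^+)
\]
% for N and then x large enough: one jump above y^+ at some step
% k <= N followed by a law-of-large-numbers descent already yields
% an area exceeding x.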
Proof of Theorem 1. The upper bound follows from Lemma 7. The lower bound follows from Lemma 8. The rough asymptotics follow immediately from the lower and upper bounds, and from the observation that
(33)\begin{equation}\log\overline F\big(\sqrt{2ax}+Cx^{1/4+\varepsilon/2}\big)=(1+\rho(x))\log\overline F\big(\sqrt{2ax}\big),\end{equation}
where
$\rho(x)\to0$
. To prove (33) we note that by (9) and (10)

This implies that, as
$x\to\infty$
,

Recalling that
$\log\overline{F}(x)\sim-g(x)-2\log x$
, one easily obtains (33).
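For instance (a sketch, using (10) in the form $g'(x)\le\gamma_0\,g(x)/x$ together with the monotonicity of $g(t)/t$), with $y^+=\sqrt{2ax}+Cx^{1/4+\varepsilon/2}$ as in Lemma 8,
\[
g(y^+)-g\big(\sqrt{2ax}\big)
\ \le\ \gamma_0\,\frac{g(\sqrt{2ax})}{\sqrt{2ax}}\,Cx^{1/4+\varepsilon/2}
\ =\ O\big(g(\sqrt{2ax})\,x^{-1/4+\varepsilon/2}\big)
\ =\ o\big(g(\sqrt{2ax})\big)
\]
% for \varepsilon < 1/2; this is the source of the vanishing
% correction \rho(x) in (33).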
4. Proof of Theorem 2
Set

where
$C>\frac{5/4}{1-\gamma_0}$
. We first split the probability
$\mathbb{P}(A_{\tau}>x)$
as follows:

The first term will be estimated using the exponential bound proved in Lemma 6.
Lemma 9. Let
$\mathbb{E}[X_1]=-a$
and
$\mathrm{Var}(X_1)<\infty$
. Assume that (8) and (9) hold with some
$\gamma_0<1/2$
, together with (11). Then,
$P_1 = o(\overline F(\sqrt{2ax}))$
.
Proof. According to (32),

Since (9) holds for some
$\gamma_0<1/2$
,
$g^2(y)/y\to 0$
, and hence

Then,

To finish the proof, it is sufficient to show that

We first note that

Using (18), we can see that

Hence,

According to (34),
$g(y)\sim g(\sqrt{2ax})$
. Therefore, (35) is valid for any C satisfying
$C(\gamma_0-1)+\frac{5}{4}<0$
.
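This is precisely the constraint imposed on C at the beginning of this section, by the one-line check
\[
C(\gamma_0-1)+\tfrac{5}{4}<0
\ \iff\ C(1-\gamma_0)>\tfrac{5}{4}
\ \iff\ C>\frac{5/4}{1-\gamma_0}\,.
\]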
The next lemma gives the term that dominates in
$\mathbb{P}(A_\tau > x)$
.
Lemma 10. Under the assumptions of Lemma 9 we have the following estimate:

Proof. Put

By the total probability formula,

Now, note that, by (18) and (34),

Then the statement immediately follows.
We proceed to the analysis of
$P_3$
. Fix some
$\delta>0$
and set
$z=\frac{1}{a}\left(\sqrt{2ax} + \delta\sqrt{x} \right)$
. We split
$P_3$
further as follows:

where $J_1 =\{\text{there exists } k\in(1,\tau) \text{ such that } X_k>y \text{ and } \max_{1\le i\le \tau,\, i\neq k} X_i \le y\}$ and, correspondingly, $J_{\ge 2} =\{\text{there exist } k\neq l\in(1,\tau) \text{ such that } X_k>y \text{ and } X_l>y\}$.
We start with the easier terms
$P_{32}$
and
$P_{33}$
. To deal with these terms we will use Proposition 2.
Lemma 11. Let assumptions (8) and (9) hold for
$\gamma_0<1/2$
. Assume also that (11) holds. Then
$P_{33} = o(\overline F(\sqrt{2ax}))$
as
$x\to\infty$
.
Proof. We have, by Proposition 2,
$P_{33}=\mathbb{P}(\tau > z)\le (\mathbb{E}\tau+o(1))\overline F(az) =O\big(\overline F(\sqrt{2ax}+\delta \sqrt{x})\big)$
. Therefore,

By the mean value theorem and by the assumption (11),
$g(cx)-g(x)\to\infty$
as
$x\to\infty$
for every
$c>1$
. This completes the proof.
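The mean-value step can be displayed explicitly (a sketch, using (11) in the form $x g'(x)\to\infty$): for $c>1$ and some $\xi\in(x,cx)$,
\[
g(cx)-g(x)\ =\ g'(\xi)\,(c-1)x
\ \ge\ \xi g'(\xi)\,\frac{(c-1)x}{cx}
\ =\ \frac{c-1}{c}\,\xi g'(\xi)\ \longrightarrow\ \infty,
\]
% since \xi \ge x \to \infty and hence \xi g'(\xi) \to \infty by (11).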
Lemma 12. Let
$\mathbb{E}[X_1]=-a$
and
$\mathrm{Var}(X_1)<\infty$
. Assume that (8) and (9) hold with some
$\gamma_0<1/2$
, together with (11). Then
$P_{32} = o(\overline F(\sqrt{2ax}))$
.
Proof. We can use the formula of total probability to write

Then,

Using (18) we can see that, in view of (12),

It remains to analyse $P_{31}$. For that, introduce $\mu(y)\coloneqq \min\lbrace n\ge 1\,{:}\, X_n>y\rbrace$
. Now we will complete the proof with the following lemma.
Lemma 13. Let assumptions (8), (9), and (11) hold for
$\gamma_0<1/2$
. Then
$P_{31} = o(\overline F(\sqrt{2ax}))$
as
$x\to\infty$
.
Proof. First, represent the event
$J_1$
as
$J_1=J_{11}\cup J_{12}$
, where

Then,

so

By (18),
$g(\sqrt{2ax})-g(y)\le C\ln x$
. Then, in view of the relation (12), we have
$g(\sqrt{2ax})-g(y)-g(x^{\varepsilon})\le -4\ln x$
, which implies that
$Q_{2} = o(\overline F(\sqrt{2ax}))$
.
To estimate

we make use of the exponential bound given in Lemma 6. Put

Then, we have

where
$\lambda = \frac{g(x^\varepsilon)}{x^\varepsilon}$
. Now note that

Since

we obtain

Thus,
$Q_{1} \le Cx {\rm e}^{-\lambda h(x)/\log x+\lambda^2 z}\overline{F}(y)$
. Next, we can pick
$\varepsilon = \frac{1}{4(1-\gamma_0)} $
to achieve

by the condition (9). Note that the assumption
$\gamma_0<1/2$
implies that
$\varepsilon=\frac{1}{4(1-\gamma_0)}<1/2$
. Then, using (8), we obtain

and, using (18),

Finally, noting that

grows polynomially, we obtain the required convergence to 0. The polynomial growth can be immediately seen for
$g(x)=x^{\gamma_0}$
. However, a proper proof goes as follows:

Therefore
$\lambda h(x)\ge x^{1/2-\varepsilon} x^{-\gamma_0(1/2-\varepsilon)}=x^{(1-\gamma_0)/2-1/4}$
, where we have used the equality
$\varepsilon=\frac{1}{4(1-\gamma_0)}$
.
Proof of Theorem 2. Combining the preceding lemmas gives us the upper bound. The lower bound has been shown in (5) under even weaker conditions.
Acknowledgement
We would like to thank the anonymous referees for a number of useful comments and suggestions that helped us to improve the paper.