Produced in association with the Journal of Common Market Studies (JCMS), the Annual Review covers the major developments in the European Union in the past year.
The JCMS Annual Review of the European Union in 2016
Agile Project Management For Dummies, 2nd Edition
Agile project management is a fast and flexible approach to managing all projects, not just software development. By learning the principles and techniques in this book, you'll be able to create a product roadmap, schedule projects, and prepare for product launches with the ease of Agile software developers. You'll discover how to manage scope, time, and cost, as well as team dynamics, quality, and risk of every
Out-there Scenarios I: ISIS is Funded by a Major Asset Management Firm
I break risk management into three levels, Versions 1.0, 2.0, and 3.0.
Risk Management 1.0 is the standard risk management of VaR and the like, where history is used as a guide, and thus where things work if the future is drawn from the same distribution as the past. Any approach that looks at risk historically is part of this, whether it uses past prices, variance-covariance relationships, leverage numbers, or credit ratings, and whether it assumes a normal distribution, a t distribution, a gamma distribution, or part of a distribution such as semi-variance. If the future looks like the past in the relevant ways, it works; if the future deviates from the past, it might not.
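The Version 1.0 machinery can be sketched in a few lines: historical VaR is just an empirical quantile of past returns. A minimal illustration (the data and all parameter values here are invented):

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """One-day historical VaR: the loss exceeded with probability
    (1 - confidence), estimated purely from past returns."""
    return -np.quantile(returns, 1.0 - confidence)  # report loss as a positive number

# Illustrative use: a simulated "history" drawn from a normal
# distribution -- exactly the kind of assumption Version 1.0 leans on.
rng = np.random.default_rng(0)
past = rng.normal(loc=0.0, scale=0.01, size=10_000)
var_99 = historical_var(past, confidence=0.99)
# The number is only as good as the premise that tomorrow's return is
# drawn from the same distribution as this sample.
```

The quantile is the whole method; everything else is the choice of sample and distributional assumption.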
Risk Management 2.0 is a reaction to the fact that the 1.0 methods failed during the 2008 crisis. This failure was neither surprising nor unexpected for most of those working in risk management, because we understand the assumptions behind Version 1.0; but sometimes this was not articulated well when the numbers were passed up the chain. In any case, after 2008 risk management started to depend more visibly on stress testing. I say "more visibly" because anyone doing risk management over the past decades has done stress testing in one form or another. Certainly when there are non-linear risk-return tradeoffs, as with option exposures, it is a standard method. But after 2008 it became de rigueur in the analysis of bank risk, for example through CCAR.
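A stress test replaces history with specified shocks. A toy sketch, with positions, scenarios, and shock sizes all invented for illustration:

```python
# Toy stress test: apply specified shocks to factor exposures instead
# of relying on history. Positions and scenarios are invented.
positions = {"equities": 60.0, "rates": 30.0, "credit": 10.0}  # exposures, $mm

# Each scenario maps a risk factor to a fractional move.
scenarios = {
    "equity_crash": {"equities": -0.30, "rates": 0.05, "credit": -0.10},
    "rate_spike":   {"equities": -0.10, "rates": -0.15, "credit": -0.05},
}

def stress_pnl(positions, shocks):
    """Portfolio P&L under one scenario's factor shocks."""
    return sum(exposure * shocks.get(factor, 0.0)
               for factor, exposure in positions.items())

results = {name: stress_pnl(positions, shocks)
           for name, shocks in scenarios.items()}
worst = min(results, key=results.get)  # the binding scenario
```

The scenarios are the analyst's judgment call, which is exactly what distinguishes 2.0 from the distribution-fitting of 1.0.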
And there is Risk Management 3.0, which I won't get into here. It recognizes that a static stress will miss important dynamics that lead to feedback, contagion, and cascades, and that these are not something that can be readily addressed with standard economics. You can check out my book, The End of Theory, or some of the papers I wrote while I was at the Office of Financial Research, to get more on this.
Here I am focused on what we need to do before we can get to these dynamics: we need to know what is triggering a market dislocation. And we are particularly interested in triggers that are large either in magnitude or in the number of agents affected. So even before worrying about the methods for dealing with crisis dynamics, the question to ask is: what can go wrong in a really big way?
I sometimes get at this by starting with something really extreme, and then dialing it back until it can be considered a reasonable scenario. Reasonable does not mean it is likely to happen, but it is not "what if an asteroid hits New York" either. Anyway, I want to run through some of the extreme scenarios that I have been thinking about. I'll put one out here and see if anyone responds, either with comments on it in particular, or with others that they are cooking up in a similar vein.
So, Out-there Scenario I: A large asset manager is rumored to be funding ISIS.
Suppose a rumor goes viral that a very large asset management firm is actually owned by, or at least is funding, ISIS. This hits all the usual fake news outlets, and then, of course, bounces into the real news, if only as "there is a rumor, unsubstantiated, making the rounds that...." The result will be large-scale redemptions from that asset manager. This will start a downdraft in the markets. It will also lead to questions about other asset managers, and redemptions there as well. The resulting cascade could spread across the markets, erode confidence, and become a major market event.
Now, of course (at least I hope it is obvious), I am not saying this specific rumor is likely. But start with this and, as I suggested above, dial it down a bit. The point is, we can come up with scenarios where there are massive redemptions from some particular major asset manager, where those redemptions are exogenous to anything in the market, and where the trigger on the face of it might seem unreasonable.
One argument against this path to major redemptions hitting the market is that people can redeem by moving their holdings to another asset manager. If they do that there will be no actual selling of assets, and no market impact. This is the way investors will redeem if they continue to want to hold the assets and if they operate with professional aplomb. But the sort of people who would buy into a rumor like this are also likely to simply say, "give me my money", and then figure out what to do after that.
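The cascade mechanics described above, redemptions forcing sales, sales moving prices, and price drops triggering further redemptions, can be sketched as a toy feedback loop (all parameter values are invented):

```python
# Toy fire-sale cascade: redemptions force sales, sales move prices,
# and the price drop triggers further redemptions. All parameters
# are invented for illustration.
fund_assets = 100.0   # assets under management, normalized units
price = 1.0           # market price level, normalized
impact = 0.004        # price impact per unit of assets sold
sensitivity = 0.5     # extra redemption fraction per 1% price drop
redemption = 0.10     # initial redemption shock: 10% of assets

for step in range(50):
    sold = fund_assets * redemption
    fund_assets -= sold
    old_price = price
    price = max(price - impact * sold, 0.0)
    drop_pct = (old_price - price) / old_price * 100
    # The next round of redemptions responds to the price drop.
    redemption = min(sensitivity * drop_pct / 100, 1.0)
    if sold < 1e-6:
        break
# With these parameters the feedback damps out; a larger sensitivity
# or price impact would make the loop self-reinforcing instead.
```

The interesting regime is where the round-trip amplification (assets times sensitivity times impact) exceeds one, because then the cascade feeds on itself rather than petering out.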
A little footnote: a few years ago the Office of Financial Research did a research study of the asset management community, with the key question being whether the largest asset managers should be designated SIFIs (systemically important financial institutions). The report was castigated, especially by the SEC, mostly, I think, because the SEC was primed for inter-agency rivalry. But in any case, no one threw the ISIS scenario into the report.
The Strength of Absent Ties: Social Integration via Online Dating. (arXiv:1709.10478v1 [physics.soc-ph])
We used to marry people to whom we were somehow connected: friends of friends, schoolmates, neighbours. Since we were more connected to people similar to us, we were likely to marry someone of our own race.
Distributions of Centrality on Networks. (arXiv:1709.10402v1 [cs.SI])
In many social and economic networks, agents' outcomes depend substantially on the centrality of their network position. Our current understanding of network centrality is largely restricted to deterministic settings, but in many applications data limitations or theoretical concerns lead practitioners to use random network models. We provide a foundation for understanding how central agents in random networks are likely to be. Our main theorems show that on large random networks, centrality measures are close to their expected values with high probability. By applying these theorems to stochastic block models, we study how segregated networks contribute to inequality. When networks are segregated, benefits from peer effects tend to accrue unevenly to the advantage of more central individuals and groups. We also discuss applications to more general network formation models, including models where link probabilities are governed by geography.
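The abstract's concentration claim can be illustrated numerically. This is a sketch, not the paper's construction: on a two-block stochastic block model with invented sizes and link probabilities, realized degree centrality stays close to its expected value, so the denser block ends up systematically more central:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-block stochastic block model in which one group is better
# connected internally. Sizes and probabilities are invented.
n = 500                            # nodes per block
p1, p2, p_out = 0.30, 0.10, 0.02   # within-block and cross-block link probs

A11 = rng.random((n, n)) < p1
A22 = rng.random((n, n)) < p2
A12 = rng.random((n, n)) < p_out
A = np.block([[A11, A12], [A12.T, A22]]).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # symmetric adjacency, no self-loops

degrees = A.sum(axis=1)
exp1 = (n - 1) * p1 + n * p_out    # expected degree in block 1
exp2 = (n - 1) * p2 + n * p_out    # expected degree in block 2

# Realized centrality concentrates near its expectation...
rel_dev1 = np.mean(np.abs(degrees[:n] - exp1) / exp1)
# ...so the denser block is systematically more central.
centrality_gap = degrees[:n].mean() - degrees[n:].mean()
```

Degree is used here only because its expectation is trivial to write down; the theorems in the abstract concern centrality measures more generally.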
Obstacle problems for nonlocal operators. (arXiv:1709.10384v1 [math.AP])
We prove existence, uniqueness, and regularity of viscosity solutions to the stationary and evolution obstacle problems defined by a class of nonlocal operators that are not stable-like and may have supercritical drift. We give sufficient conditions on the coefficients of the operator to obtain Hölder and Lipschitz continuous solutions. The class of nonlocal operators that we consider includes non-Gaussian asset price models widely used in mathematical finance, such as Variance Gamma Processes and Regular Lévy Processes of Exponential type. In this context, the viscosity solutions that we analyze coincide with the prices of perpetual and finite expiry American options.
Classification of the Bounds on the Probability of Ruin for Lévy Processes with Light-tailed Jumps. (arXiv:1709.10295v1 [math.PR])
In this note, we study the ultimate ruin probabilities of a real-valued Lévy process X with light-tailed negative jumps. It is well-known that, for such Lévy processes, the probability of ruin decreases as an exponential function with a rate given by the root of the Laplace exponent, when the initial value goes to infinity. Under the additional assumption that X has integrable positive jumps, we show how a finer analysis of the Laplace exponent gives in fact a complete description of the bounds on the probability of ruin for this class of Lévy processes. This leads to the identification of a case that is not considered in the literature and for which we give an example. We then apply the result to various risk models and in particular the Cramér-Lundberg model perturbed by Brownian motion.
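For the textbook Cramér-Lundberg model with exponential claims, the exponential decay of the ruin probability and the Lundberg bound are easy to check by simulation. A sketch with invented parameters, using the known closed form for exponential claims as a reference:

```python
import numpy as np

rng = np.random.default_rng(2)

# Cramér-Lundberg model: surplus u + c*t minus compound Poisson claims.
# Parameters are invented; claims are exponential with mean 1, so the
# ruin probability has a known closed form to compare against.
u, c, lam, mean_claim = 3.0, 1.5, 1.0, 1.0
R = 1.0 / mean_claim - lam / c     # adjustment (Lundberg) coefficient

def ruined(horizon=200.0):
    """Simulate one path; ruin can only occur at claim arrival times."""
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)   # next claim arrival
        if t > horizon:
            return False                  # finite-horizon proxy for survival
        claims += rng.exponential(mean_claim)
        if u + c * t - claims < 0:
            return True

n_paths = 2000
est = sum(ruined() for _ in range(n_paths)) / n_paths
exact = (lam * mean_claim / c) * np.exp(-R * u)  # exponential-claims formula
lundberg = np.exp(-R * u)                        # Lundberg upper bound
```

The Monte Carlo estimate sits below the Lundberg bound and near the closed form, which is the classical picture the note refines for general light-tailed Lévy processes.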
A Structural Model for Fluctuations in Financial Markets. (arXiv:1709.10277v1 [q-fin.ST])
In this paper we provide a comprehensive analysis of a structural model for the dynamics of prices of assets traded in a market originally proposed in [1]. The model takes the form of an interacting generalization of the geometric Brownian motion model. It is formally equivalent to a model describing the stochastic dynamics of a system of analogue neurons, which is expected to exhibit glassy properties and thus many meta-stable states in a large portion of its parameter space. We perform a generating functional analysis, introducing a slow driving of the dynamics to mimic the effect of slowly varying macro-economic conditions. Distributions of asset returns over various time separations are evaluated analytically and are found to be fat-tailed in a manner broadly in line with empirical observations. Our model also allows us to identify collective, interaction-mediated properties of pricing distributions, and it predicts pricing distributions which are significantly broader than their non-interacting counterparts if interactions between prices in the model contain a ferromagnetic bias. Using simulations, we are able to substantiate one of the main hypotheses underlying the original modelling, viz. that the phenomenon of volatility clustering can be rationalised in terms of an interplay between the dynamics within meta-stable states and the dynamics of occasional transitions between them.
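A minimal sketch of what an interacting generalization of geometric Brownian motion can look like (this is an invented toy, not the paper's specification): each log-price is pulled toward the cross-sectional mean, and the coupling visibly changes collective behavior relative to independent GBMs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy interacting GBM (invented, not the paper's model): each asset's
# log-price is pulled toward the cross-sectional mean with coupling
# strength J, on top of independent Gaussian noise.
n_assets, n_steps, dt = 20, 2000, 0.01
sigma = 0.2                       # idiosyncratic volatility

def simulate(J):
    """Return final log-prices after n_steps of coupled dynamics."""
    x = np.zeros(n_assets)        # log-prices start at 0 (price 1)
    for _ in range(n_steps):
        drift = J * (x.mean() - x)          # pull toward the mean field
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_assets)
    return x

spread_coupled = simulate(0.5).std()  # positive ("ferromagnetic") pull
spread_free = simulate(0.0).std()     # independent GBMs for comparison
# The coupling holds the cross-section together: dispersion settles
# near sigma / sqrt(2 J) instead of growing like sigma * sqrt(T).
```

The point of the toy is only that couplings between prices produce collective properties that no single GBM has; the glassy, meta-stable structure the paper studies needs heterogeneous interactions rather than this single mean-field pull.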
Executive stock option exercise with full and partial information on a drift change point. (arXiv:1709.10141v1 [q-fin.MF])
We analyse the valuation and exercise of an American executive call option written on a stock whose drift parameter falls to a lower value at a change point given by an exponential random time, independent of the Brownian motion driving the stock. Two agents, who do not trade the stock, have differing information on the change point, and seek to optimally exercise the option by maximising its discounted payoff under the physical measure. The first agent has full information and observes the change point. The second agent has partial information and filters the change point from price observations. Our setup captures the position of an executive (insider) and an employee (outsider) who receive executive stock options. The partial information case yields a model under the observation filtration $\widehat{\mathbb F}$ in which the drift process becomes a diffusion driven by the innovations process, an $\widehat{\mathbb F}$-Brownian motion that also drives the stock under $\widehat{\mathbb F}$, and the optimal stopping problem acquires two spatial dimensions. We analyse and numerically solve both problems to value the option for each agent, and illustrate that the additional information of the insider can result in exercise patterns which exploit the information on the change point.
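The outsider's filtering step can be sketched with a discrete-time Bayes update of the posterior probability that the drift change has occurred (all parameters invented; this is a toy version of the filtering problem, not the paper's solution method):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy version of the outsider's problem (parameters invented): the
# drift drops from mu0 to mu1 at time theta, and an agent who sees
# only returns updates the posterior probability of the change.
mu0, mu1, sigma = 0.2, -0.2, 0.1
lam = 0.2                    # prior rate of the exponential change time
dt, T = 0.01, 10.0
theta = 5.0                  # realized change time (fixed for the demo)

n = int(T / dt)
times = np.arange(n) * dt
drift = np.where(times < theta, mu0, mu1)
dx = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)  # log-returns

def gauss(z, m, var):
    """Gaussian density of z with mean m and variance var."""
    return np.exp(-(z - m) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

p = 0.0                      # posterior P(change has already occurred)
posterior = np.empty(n)
for k in range(n):
    # Predict: the change may occur during this step (prior rate lam).
    p = p + (1 - p) * (1 - np.exp(-lam * dt))
    # Update: weight the two drift hypotheses by their likelihoods.
    l1 = gauss(dx[k], mu1 * dt, sigma ** 2 * dt)
    l0 = gauss(dx[k], mu0 * dt, sigma ** 2 * dt)
    p = p * l1 / (p * l1 + (1 - p) * l0)
    posterior[k] = p
# The posterior stays low before theta and climbs toward one after it;
# the insider, by contrast, observes theta directly.
```

This posterior is the extra state variable that gives the outsider's optimal stopping problem its second spatial dimension.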