Sunday, June 12, 2022

Project Proteus pt.2 "Failures are Fuel"

Hello everyone, it's been a while, and I figured it might be time for an update. If you haven't read my first blog post on these experiments, I suggest you go and examine it before moving forward so you have some idea of what is being researched and developed.

Here is the link to Harmonic Hydrogen Fusion Part 1 in case you haven't read it yet.

Autodidactically sorting through multidisciplinary sciences is hard AF, but I am doing my best! This is also some pretty fringe scientific territory, fraught with electrochemical danger, so I prefer taking my time, though that restricts how often I can update people on my progress. It is all in the hope that we can leave this place better than when we came in. Life does get in the way sometimes, and that is the excuse I am sticking to. 😉

The failures are the fuel

We recently had another experiment that failed. Of course it was due to my ignorance of something, but the mistakes help shed light on improvements, and that's pretty much what this blog will discuss: the failure, and how we will iterate on the mistakes I have made. The major failure, again, seems to be in isolating the sensors from the EMI (electromagnetic interference) of the collapsing hydrogen plasma.

However, so long as we are breathing I will keep going and never give up on this for us. Until we have collected enough data to draw a reasonable conclusion, getting some solid answers from nature should be our motivating purpose.

In the spirit of that prologue, we come to the body of the failure: my most recent dilemmas in building an Electrolytic Magnetohydrodynamic Lattice Confined Plasma in a Resonating Cavity. <-- That's always a silly mouthful to say, and it's just easier to call it Harmonic Hydrogen Fusion. The two main issues are ignorance and time, and the ability to use one against the other. We have an abundance of the former and a limited supply of the latter, which makes this a challenging matter. So I thought it necessary to update everyone on the progress and failures we have made.

A lot has changed in the 8 or 9 months since the last blog post, and life has been getting in the way of the dream a little bit. I got to work moving and setting up a new garage lab to run Project Proteus v1. If you follow our social media, you probably already know that to some degree. It has been a goal for us to level up our financial education and get a house that we will eventually own.

Of course you aren't here for a life story... 

Most of you reading this probably just want the meat and potatoes of the fusion thing.

So here we go…

Project Proteus v1 is a failure, mostly on my part for how I collected data and for not listening to my elders. They told us to use mass flow calorimetry, and now I can see why: there really is no other way I am aware of to shield against the electromagnetic pulses that come from the reactor. I have yet to find a way to keep a sensor in close proximity to the reaction site, as the PT-1000 RTD fails to handle the pulses once the reactor starts. Any sensor I have tried in direct contact with the reactor's electrolytic solution has also failed once plasma excitation begins. Even sensors in the outer water jacket, there to absorb and measure heat, are too close to the radiative effects of the point discharges.

Research and development is a costly gig, but it's a good thing a lot of this research has been done by others and documented very well. In particular, Takaaki Matsumoto has been one of the most thoroughly documented elders, and his work gives us hope that this tree bears fruit. Here is a link to the greater body of his work. We are simply adding to the body of work that already exists to determine exactly what is going on, and then engineering the ability to utilize the phenomenon to help generate sustainable energy across the world. Implementing that in a free market that is not monopolized will be an interesting challenge that we are also working on. You can read about that later in this article, but be warned: it involves blockchain technology and the utilization of Web3. None of it possible without Bitcoin, of course 😉

The setup was simple

Use a known amount of water; in my case it was 5 gallons of distilled water around a ~500 ml reactor flask.

Measuring the rate of temperature rise in the system gives us a baseline against the water's known specific heat capacity.

We calibrate the system using a 10 ohm resistor driven at 10 watts of power to start.
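As a quick sanity check on that calibration source (my own back-of-the-envelope numbers, not from the lab notes), Ohm's law pins down what it takes to push 10 W through a 10 Ω resistor:

```python
# Drive values for a 10-ohm calibration resistor dissipating 10 W.
# P = I^2 * R = V^2 / R, so I = sqrt(P / R) and V = sqrt(P * R).
import math

R = 10.0   # resistance in ohms
P = 10.0   # target power in watts

I = math.sqrt(P / R)   # current in amps
V = math.sqrt(P * R)   # voltage in volts

print(f"Drive at {V:.1f} V, {I:.1f} A for {V * I:.1f} W")  # Drive at 10.0 V, 1.0 A for 10.0 W
```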

10 Watt Resistor

The calibration experiment data: 10 W resistor calibration. Pretty simple way to measure the data, right? 😅

The formula to calculate the heating time of water is as follows:

(amount of water in kg) • (end temperature in °C – start temperature in °C) • (4186 joules/kg/°C) / (heating power in watts) = heating time in seconds

5 gallons = 18.93 kg: 18.93 kg × (19.5 °C − 18.75 °C) × 4186 / 10 W = 5,943 seconds
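For anyone who wants to replay the arithmetic, here is the heating-time formula as a small Python sketch. The temperature rise is set to 0.75 °C, the value implied by the quoted 5,943 seconds:

```python
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*degC)

def heating_time_s(mass_kg, delta_t_c, power_w):
    """Seconds to raise `mass_kg` of water by `delta_t_c` degC at `power_w` watts,
    assuming no heat losses."""
    return mass_kg * delta_t_c * SPECIFIC_HEAT_WATER / power_w

# Calibration run: 5 gallons (18.93 kg) heated 0.75 degC by the 10 W resistor.
print(round(heating_time_s(18.93, 0.75, 10.0)))  # 5943
```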

https://preview.redd.it/595fcp2bt9591.png?width=480&format=png&auto=webp&s=9501f9aac27bf2233aa334eded652ef16b2a7c0a

The operating experiment

https://preview.redd.it/rhtcioket9591.png?width=480&format=png&auto=webp&s=ead8b77c1b1eb5f7a445d6cde59a0c262382c31b

The run time was 117 minutes, or 7,020 seconds, so we have a large 1,077-second discrepancy between the measured run time and the time predicted from water's known specific heat capacity of 4186 J/(kg·°C). This works out to a COP (coefficient of performance) of ~84.7% for our ceramic heater setup (5,943 / 7,020 seconds). This margin of error is not tolerable for an accurate heat measurement.
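Spelling out that comparison (my recomputation, with the ratio taken as predicted time over measured time):

```python
predicted_s = 5943     # heating time predicted from water's specific heat capacity
measured_s = 117 * 60  # observed run time: 117 minutes = 7020 seconds

discrepancy_s = measured_s - predicted_s
ratio = predicted_s / measured_s  # fraction of the expected heating actually observed

print(f"discrepancy: {discrepancy_s} s, ratio: {ratio:.1%}")  # discrepancy: 1077 s, ratio: 84.7%
```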

https://preview.redd.it/c4yvsregt9591.png?width=600&format=png&auto=webp&s=30a3f6c638e4f289ee7bd8f1744837609f02043e

Then it was decided to run a standard electrolysis experiment, measuring Joule heating, to eliminate the resistor as the variable, and the results were surprisingly accurate!

https://youtu.be/Ab2G4Wmur98

https://preview.redd.it/pjy5q55lt9591.png?width=480&format=png&auto=webp&s=9cded39dfab553dfd4ea53d91848c308b8ee1591

https://preview.redd.it/frnts03mt9591.png?width=480&format=png&auto=webp&s=dd5eb901858214615ebb89a3bf0e543482c87f0f

https://preview.redd.it/wg8zmx2nt9591.png?width=480&format=png&auto=webp&s=5a93fd7ac295cef7edde283cdf923c992b588be7

The power draw with just the switch on is ~10 watts, so that is subtracted from the experiment's running watts to get the actual heating power.

Again we use the formula to calculate the heating time of water as follows: (amount of water in kg) • (end temperature in °C – start temperature in °C) • (4186 joules/kg/°C) / (heating power in watts) = heating time in seconds

5 gallons = 18.93 kg: 18.93 kg × (21 °C − 19.5 °C) × 4186 / 10 W = 11,886 seconds. Subtracting our start time of 1:19 from our approximate end time of 4:39 (temperature stability), we get ~200 minutes, or ~12,000 seconds, which is well in range of the predicted heating time.
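The same arithmetic for the electrolysis run, including the clock-time subtraction:

```python
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*degC)

mass_kg = 18.93          # 5 gallons of water
delta_t_c = 21.0 - 19.5  # temperature rise over the run
power_w = 10.0           # heating power after subtracting the ~10 W idle draw

predicted_s = mass_kg * delta_t_c * SPECIFIC_HEAT_WATER / power_w

# Run window: started at 1:19, temperature stabilized around 4:39.
start_s = 1 * 3600 + 19 * 60
end_s = 4 * 3600 + 39 * 60
measured_s = end_s - start_s

print(f"predicted: {predicted_s:.0f} s, measured: {measured_s} s")  # predicted: 11886 s, measured: 12000 s
```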

Run it!

Since we had an accurate temperature calibration in place, it was time to run the experiment. Once again we were met with a challenge: the inability to shield the sensors from EMI/EMP (electromagnetic interference/electromagnetic pulses) once the plasma reaction was initiated, a culprit we have now faced at least four times. Despite all the measures to shield the lines, the sensors themselves were still too close to the reaction site to hold up against the incredible emanation of electromagnetic radiation.

https://preview.redd.it/1zgd1w4ot9591.png?width=270&format=png&auto=webp&s=e6e4fc3f5aedd23f3a07f5795b481acea2edcc4f

Sensor wrapped in grounded Aluminum Foil

Below are two streams: one live stream of this experiment, which didn't work out very well on YouTube, and a past live stream that gave a better visualization of the phenomenon, during which the reported field effects were hitting the camera.

Below you will find the data, crudely collected from the Arduino device, and the moment the telemetry from all the sensors clipped out at 1/13/2022 18:41:05 and onward. What is peculiar is that the electrical power was also spiking when the plasma was initiated, which supports the hypothesis of increased conductivity through the plasma, i.e. a collapse in potential difference without a direct connection. This is similar to a semiconductive state and is fascinating from a materials science point of view. Damn, this multidisciplinary stuff is hard for just one dude!

https://youtu.be/Y0bfv8aecmA

https://youtu.be/u8pHHtENDgg?t=3779

https://preview.redd.it/qipoe78st9591.png?width=480&format=png&auto=webp&s=c7b970ac8faef7006baa2d4ee341fa9759b29262

https://preview.redd.it/sumxqq5tt9591.png?width=480&format=png&auto=webp&s=390d20d2753e895624b2f6f82b8cf40a4169dadb

Chart of plasma power spikes and sensor failure. Due to this, it may be necessary to create pulsed DC to limit current, though it will affect the harmonics of the resonator tube to some degree and create a diminished ring. There is likely an optimal balance that facilitates directed cavitation incidents on the active nickel medium.
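A rough way to reason about the pulsed-DC idea: with a rectangular waveform, the duty cycle scales the peak current down to an average, so the average current can be limited without giving up the peak that strikes the plasma. The numbers below are purely hypothetical, just to illustrate:

```python
def average_current_a(peak_a, duty_cycle):
    """Average current of a rectangular pulsed-DC waveform (0 <= duty_cycle <= 1)."""
    return peak_a * duty_cycle

def duty_for_average(peak_a, target_avg_a):
    """Duty cycle needed to cap the average current at target_avg_a."""
    return target_avg_a / peak_a

# Hypothetical: keep a 10 A strike peak but hold the average current to 2.5 A.
duty = duty_for_average(10.0, 2.5)
print(duty, average_current_a(10.0, duty))  # 0.25 2.5
```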

Never give up

With this data and another wonderful failure in the books, we will not give up until we can prove positive or negative results. This technology is too important; if nature says it's empirically possible, I would be doing a disservice to myself and the others to quit now. So, moving forward, a chiller has been purchased to recycle water at a constant temperature.

https://preview.redd.it/02wuln5ut9591.png?width=360&format=png&auto=webp&s=32fffa2901e2733b277e7be7b7844335c24f54c4

The purpose of the chiller is to keep an isolated water supply at a known temperature. As I alluded to above, it will be necessary to redesign the circuit and hardware to include a flow meter.

https://preview.redd.it/x2q2ck4yt9591.png?width=444&format=png&auto=webp&s=5866d363d36da5f8086a6e716b3fd7e5657e9ca7

https://preview.redd.it/4z7ouo6zt9591.png?width=480&format=png&auto=webp&s=3e3bddb8bcf9cf012ebd61bf747f04a26916ab17

We will have to add a flow meter to the circuit scheme and start work on the C code for the DAQ CPU (data acquisition computer). This will allow us to monitor the fluid flow and heat through the reactor. With this setup, we can keep the temperature sensor shielded and away from the reactor core. A problem still arises in getting visual data on the reactor once it's in the new flow calorimetry box.
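For reference, the core of the mass flow calorimetry the DAQ code will implement reduces to P = ṁ · c · ΔT: the flow meter gives the mass flow rate, and shielded inlet/outlet sensors give the temperature delta. A sketch with made-up example numbers (the eventual DAQ version will be in C, but the math is the same):

```python
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*degC)

def thermal_power_w(flow_kg_per_s, t_in_c, t_out_c):
    """Heat carried away by the coolant loop: P = m_dot * c * (T_out - T_in)."""
    return flow_kg_per_s * SPECIFIC_HEAT_WATER * (t_out_c - t_in_c)

# Example: 2 g/s of chilled water warming 1.2 degC across the reactor jacket.
print(f"{thermal_power_w(0.002, 18.0, 19.2):.1f} W")  # 10.0 W
```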

https://preview.redd.it/2e5y7o70u9591.png?width=480&format=png&auto=webp&s=a4d5cf59873eef6a5cb1f7c6241ef57b37d00fe4

But before we close the reactor into a dark little box, we will need to get all the visual and audio data documented.

The next experiment: Proteus Vision

For the next experiment in line, we will be putting visual and audio sensors on Project Proteus. Before we can move on to closing up the reactor, I will get a large sample of visual spectral data. To do this I will deploy audio/visual sensors and also experiment with magnetohydrodynamic flow. The last time we ran this experiment it had a pretty radical reaction I didn't expect, so I will document it with a PZT piezoelectric transducer, an HD 1080p Logitech camera, a CCD Logitech camera blacked out for high-energy events, and a light spectrum analyzer made using a diffraction grating and open-source spectral analysis software.

There will also be a Geiger counter in place but so far I have yet to see anything too exciting on that instrument… except in the near field of the reaction, which seems to excite the circuit with the same EMI/EMP that plagues my sensors.

https://preview.redd.it/th4bft62u9591.png?width=480&format=png&auto=webp&s=63d96808602293dbc239a560b9ccf0571d599d7b

After all of the visual data is collected, I will add a nanoparticle carbon catalyst to see if I can reproduce the runaway I had recorded before. These experiments can be rather dangerous and I don’t like rushing into them willy nilly.

So please allow some time before my next report, unless fresh capital flows come in for us. Anyway, we appreciate you (the reader) taking the time to see what crazy open-science thing we are up to. If you have any questions, suggestions, or concerns, please feel free to reach out to me via Discord or Twitter, or comment on this blog post.

One final thing…

The Discord channel has changed its name to represent the artwork and DAO (decentralized autonomous organization) that is being created. It is now called LuminaryDAO.com and is designed to develop the Conscious Energy Chain and help fund LENR (Lattice Enabled Nuclear Reactions), or whichever of the ten other names this style of nuclear synthesis has been labeled with.

We prefer to develop this with openness and transparency, in which all can participate. From our socioeconomic understanding, it is likely the only way this type of deflationary technology would be able to develop.

Though this field is pushed to the fringes of science and can be lonely, it's important to break free from the lone-wolf experimentalist attitude. All this knowledge has been passed down by our elders. It's our generation's time to shine and to help future generations by asking the most important questions physics has to offer.

So the steps are being made to bring this together as a public utility with positive mutual benefit for all involved: a way to decentralize the energy grid using blockchain technology in a fair and democratized way. At the current time, www.consciousenergy.shop is the blog and swag store for getting the information out. At the present moment, www.conscious.energy is under construction and forwards to LuminaryDAO, to begin bringing the community together with my attempt at art. 😅

Another group of colleagues is creating a science DAO to advance this technology further in an open and authentic scientific way. We are actively working to help them integrate into Web3 applications and to openly spread this information further within a mutually successful economic structure. A rising tide raises all ships, and in our view we can only succeed if we rise up together. This is not a zero-sum game we are playing.

All of this Web3 stuff will still be important whether or not Harmonic Hydrogen Fusion proves to be successful. To learn more, please feel free to check out LuminaryDAO.com and keep up to date on its development.

We all have the ability to contribute to this conscious energy in every present moment. So until next time, I hope your contributions are filled with love, life, and happiness.


Quant Finance Arxiv submissions 2022-05-30 - 2022-06-05


This is your weekly snap of quant finance submissions to the arXiv. Papers are sorted in reverse chronological order of their original publication date, i.e. the newest papers are at the top and revisions are lower down the list.

If any papers take your fancy, you're encouraged to submit a link post to the subreddit and start the discussion in the comments. Or in this thread; what do I care, I'm not your boss.

Cointegration and ARDL specification between the Dubai crude oil and the US natural gas market

Authors: Stavros Stavroyiannis

Categories: Statistical Finance

PDF: http://arxiv.org/pdf/2206.03278v1

Dates: originally published: 2022-06-03, updated: 2022-06-03

Summary: This paper examines the relationship between the price of the Dubai crude oil and the price of the US natural gas using an updated monthly dataset from 1992 to 2018, incorporating the latter events in the energy markets. After employing a variety of unit root and cointegration tests, the long-run relationship is examined via the autoregressive distributed lag (ARDL) cointegration technique, along with the Toda-Yamamoto (1995) causality test. Our results indicate that there is a long-run relationship with a unidirectional causality running from the Dubai crude oil market to the US natural gas market. A variety of post specification tests indicate that the selected ARDL model is well-specified, and the results of the Toda-Yamamoto approach via impulse response functions, forecast error variance decompositions, and historical decompositions with generalized weights, show that the Dubai crude oil price retains a positive relationship and affects the US natural gas price.

Adaptive Robust Online Portfolio Selection

Authors: Man Yiu Tsang, Tony Sit, Hoi Ying Wong

Categories: Portfolio Management, Optimization and Control

PDF: http://arxiv.org/pdf/2206.01064v1

Dates: originally published: 2022-06-02, updated: 2022-06-02

Summary: The online portfolio selection (OLPS) problem differs from classical portfolio model problems, as it involves making sequential investment decisions. Many OLPS strategies described in the literature capture market movement based on various beliefs and are shown to be profitable. In this paper, we propose a robust optimization (RO)-based strategy that takes transaction costs into account. Moreover, unlike existing studies that calibrate model parameters from benchmark data sets, we develop a novel adaptive scheme that decides the parameters sequentially. With a wide range of parameters as input, our scheme captures market uptrend and protects against market downtrend while controlling trading frequency to avoid excessive transaction costs. We numerically demonstrate the advantages of our adaptive scheme against several benchmarks under various settings. Our adaptive scheme may also be useful in general sequential decision-making problems. Finally, we compare the performance of our strategy with that of existing OLPS strategies using both benchmark and newly collected data sets. Our strategy outperforms these existing OLPS strategies in terms of cumulative returns and competitive Sharpe ratios across diversified data sets, demonstrating its adaptability-driven superiority.

The Evolution of Investor Activism in Japan

Authors: Ryo Sakai

Categories: General Finance

PDF: http://arxiv.org/pdf/2206.00640v1

Dates: originally published: 2022-06-01, updated: 2022-06-01

Summary: Activist investors have gradually become a catalyst for change in Japanese companies. This study examines the impact of activist board representation on firm performance in Japan. I focus on the only two Japanese companies with activist board representation: Kawasaki Kisen Kaisha, Ltd. ("Kawasaki") and Olympus Corporation ("Olympus"). Overall, I document significant benefits from the decision to engage with activists at these companies. The target companies experience greater short- and long-term abnormal stock returns following the activist engagement. Moreover, I show operational improvements as measured by return on assets and return on equity. Activist board members also associate with important changes in payout policy that help explain the positive stock returns. My findings support the notion that Japanese companies should consider engagements with activist investors to transform and improve their businesses. Such interactions can lead to innovative and forward-thinking policies that create value for Japanese businesses and their stakeholders.

RMT-Net: Reject-aware Multi-Task Network for Modeling Missing-not-at-random Data in Financial Credit Scoring

Authors: Qiang Liu, Yingtao Luo, Shu Wu, Zhen Zhang, Xiangnan Yue, Hong Jin, Liang Wang

Categories: Statistical Finance

PDF: http://arxiv.org/pdf/2206.00568v1

Dates: originally published: 2022-06-01, updated: 2022-06-01

Summary: In financial credit scoring, loan applications may be approved or rejected. We can only observe default/non-default labels for approved samples but have no observations for rejected samples, which leads to missing-not-at-random selection bias. Machine learning models trained on such biased data are inevitably unreliable. In this work, we find that the default/non-default classification task and the rejection/approval classification task are highly correlated, according to both real-world data study and theoretical analysis. Consequently, the learning of default/non-default can benefit from rejection/approval. Accordingly, we for the first time propose to model the biased credit scoring data with Multi-Task Learning (MTL). Specifically, we propose a novel Reject-aware Multi-Task Network (RMT-Net), which learns the task weights that control the information sharing from the rejection/approval task to the default/non-default task by a gating network based on rejection probabilities. RMT-Net leverages the relation between the two tasks that the larger the rejection probability, the more the default/non-default task needs to learn from the rejection/approval task. Furthermore, we extend RMT-Net to RMT-Net++ for modeling scenarios with multiple rejection/approval strategies. Extensive experiments are conducted on several datasets, and strongly verifies the effectiveness of RMT-Net on both approved and rejected samples. In addition, RMT-Net++ further improves RMT-Net's performances.

Hedging option books using neural-SDE market models

Authors: Samuel N. Cohen, Christoph Reisinger, Sheng Wang

Categories: Risk Management, Probability, Computational Finance, Statistical Finance

PDF: http://arxiv.org/pdf/2205.15991v1

Dates: originally published: 2022-05-31, updated: 2022-05-31

Summary: We study the capability of arbitrage-free neural-SDE market models to yield effective strategies for hedging options. In particular, we derive sensitivity-based and minimum-variance-based hedging strategies using these models and examine their performance when applied to various option portfolios using real-world data. Through backtesting analysis over typical and stressed market periods, we show that neural-SDE market models achieve lower hedging errors than Black--Scholes delta and delta-vega hedging consistently over time, and are less sensitive to the tenor choice of hedging instruments. In addition, hedging using market models leads to similar performance to hedging using Heston models, while the former tends to be more robust during stressed market periods.

Cone-constrained Monotone Mean-Variance Portfolio Selection Under Diffusion Models

Authors: Yang Shen, Bin Zou

Categories: Mathematical Finance, Optimization and Control, Portfolio Management

PDF: http://arxiv.org/pdf/2205.15905v1

Dates: originally published: 2022-05-31, updated: 2022-05-31

Summary: We consider monotone mean-variance (MMV) portfolio selection problems with a conic convex constraint under diffusion models, and their counterpart problems under mean-variance (MV) preferences. We obtain the precommitted optimal strategies to both problems in closed form and find that they coincide, without and with the presence of the conic constraint. This result generalizes the equivalence between MMV and MV preferences from non-constrained cases to a specific constrained case. A comparison analysis reveals that the orthogonality property under the conic convex set is a key to ensuring the equivalence result.

A novel approach to rating transition modelling via Machine Learning and SDEs on Lie groups

Authors: Kevin Kamm, Michelle Muniz

Categories: Risk Management

PDF: http://arxiv.org/pdf/2205.15699v1

Dates: originally published: 2022-05-31, updated: 2022-05-31

Summary: In this paper, we introduce a novel methodology to model rating transitions with a stochastic process. To introduce stochastic processes, whose values are valid rating matrices, we noticed the geometric properties of stochastic matrices and its link to matrix Lie groups. We give a gentle introduction to this topic and demonstrate how Itô-SDEs in R will generate the desired model for rating transitions. To calibrate the rating model to historical data, we use a Deep-Neural-Network (DNN) called TimeGAN to learn the features of a time series of historical rating matrices. Then, we use this DNN to generate synthetic rating transition matrices. Afterwards, we fit the moments of the generated rating matrices and the rating process at specific time points, which results in a good fit. After calibration, we discuss the quality of the calibrated rating transition process by examining some properties that a time series of rating matrices should satisfy, and we will see that this geometric approach works very well.

Exact solution to two-body financial dealer model: revisited from the viewpoint of kinetic theory

Authors: Kiyoshi Kanazawa, Hideki Takayasu, Misako Takayasu

Categories: Trading and Market Microstructure, Statistical Mechanics, Physics and Society

PDF: http://arxiv.org/pdf/2205.15558v1

Dates: originally published: 2022-05-31, updated: 2022-05-31

Summary: The two-body stochastic dealer model is revisited to provide an exact solution to the average order-book profile using the kinetic approach. The dealer model is a microscopic financial model where individual traders make decisions on limit-order prices stochastically and then reach agreements on transactions. In the literature, this model was solved for several cases: an exact solution for two-body traders $N=2$ and a mean-field solution for many traders $N\gg 1$. Remarkably, while kinetic theory plays a significant role in the mean-field analysis for $N\gg 1$, its role is still elusive for the case of $N=2$. In this paper, we revisit the two-body dealer model $N=2$ to clarify the utility of the kinetic theory. We first derive the exact master-Liouville equations for the two-body dealer model by several methods. We next illustrate the physical picture of the master-Liouville equation from the viewpoint of the probability currents. The master-Liouville equations are then solved exactly to derive the order-book profile and the average transaction interval. Furthermore, we introduce a generalised two-body dealer model by incorporating interaction between traders via the market midprice and exactly solve the model within the kinetic framework. We finally confirm our exact solution by numerical simulations. This work provides a systematic mathematical basis for the econophysics model by developing better mathematical intuition.

A multimodal model with Twitter FinBERT embeddings for extreme price movement prediction of Bitcoin

Authors: Yanzhao Zou, Dorien Herremans

Categories: Statistical Finance

PDF: http://arxiv.org/pdf/2206.00648v1

Dates: originally published: 2022-05-30, updated: 2022-05-30

Summary: Bitcoin, with its ever-growing popularity, has demonstrated extreme price volatility since its origin. This volatility, together with its decentralised nature, make Bitcoin highly subjective to speculative trading as compared to more traditional assets. In this paper, we propose a multimodal model for predicting extreme price fluctuations. This model takes as input a variety of correlated assets, technical indicators, as well as Twitter content. In an in-depth study, we explore whether social media discussions from the general public on Bitcoin have predictive power for extreme price movements. A dataset of 5,000 tweets per day containing the keyword 'Bitcoin' was collected from 2015 to 2021. This dataset, called PreBit, is made available online. In our hybrid model, we use sentence-level FinBERT embeddings, pretrained on financial lexicons, so as to capture the full contents of the tweets and feed it to the model in an understandable way. By combining these embeddings with a Convolutional Neural Network, we built a predictive model for significant market movements. The final multimodal ensemble model includes this NLP model together with a model based on candlestick data, technical indicators and correlated asset prices. In an ablation study, we explore the contribution of the individual modalities. Finally, we propose and backtest a trading strategy based on the predictions of our models with varying prediction threshold and show that it can be used to build a profitable trading strategy with a reduced risk over a 'hold' or moving average strategy.

Stock Trading Optimization through Model-based Reinforcement Learning with Resistance Support Relative Strength

Authors: Huifang Huang, Ting Gao, Yi Gui, Jin Guo, Peng Zhang

Categories: Mathematical Finance, Portfolio Management

PDF: http://arxiv.org/pdf/2205.15056v1

Dates: originally published: 2022-05-30, updated: 2022-05-30

Summary: Reinforcement learning (RL) is gaining attention by more and more researchers in quantitative finance as the agent-environment interaction framework is aligned with decision making process in many business problems. Most of the current financial applications using RL algorithms are based on model-free method, which still faces stability and adaptivity challenges. As lots of cutting-edge model-based reinforcement learning (MBRL) algorithms mature in applications such as video games or robotics, we design a new approach that leverages resistance and support (RS) level as regularization terms for action in MBRL, to improve the algorithm's efficiency and stability. From the experiment results, we can see RS level, as a market timing technique, enhances the performance of pure MBRL models in terms of various measurements and obtains better profit gain with less riskiness. Besides, our proposed method even resists big drop (less maximum drawdown) during COVID-19 pandemic period when the financial market got unpredictable crisis. Explanations on why control of resistance and support level can boost MBRL is also investigated through numerical experiments, such as loss of actor-critic network and prediction error of the transition dynamical model. It shows that RS indicators indeed help the MBRL algorithms to converge faster at early stage and obtain smaller critic loss as training episodes increase.

Replicating Portfolios: Constructing Permissionless Derivatives

Authors: Estelle Sterrett, Waylon Jepsen, Evan Kim

Categories: Computational Finance, Pricing of Securities

PDF: http://arxiv.org/pdf/2205.09890v2

Dates: originally published: 2022-05-19, updated: 2022-06-02

Summary: The current design space of derivatives in Decentralized Finance (DeFi) relies heavily on oracle systems. Replicating market makers (RMMs) provide a mechanism for converting specific payoff functions to an associated Constant Function Market Makers (CFMMs). We leverage RMMs to replicate the approximate payoff of a Black-Scholes covered call option. RMM-01 is the first implementation of an on-chain expiring option mechanism that relies on arbitrage rather than an external oracle for price. We provide frameworks for derivative instruments and structured products achievable on-chain without relying on oracles. We construct long and binary options and briefly discuss perpetual covered call strategies commonly referred to as "theta vaults." Moreover, we introduce a procedure to eliminate liquidation risk in lending markets. The results suggest that CFMMs are essential for structured product design with minimized trust dependencies.

Fitting Generalized Tempered Stable distribution: Fractional Fourier Transform (FRFT) Approach

Authors: A. H. Nzokem, V. T. Montshiwa

Categories: Probability, Statistical Finance

PDF: http://arxiv.org/pdf/2205.00586v2

Dates: originally published: 2022-05-02, updated: 2022-06-04

Summary: The paper investigates the rich class of Generalized Tempered Stable distribution, an alternative to Normal distribution and the $\alpha$-Stable distribution for modelling asset return and many physical and economic systems. Firstly, we explore some important properties of the Generalized Tempered Stable (GTS) distribution. The theoretical tools developed are used to perform empirical analysis. The GTS distribution is fitted using S&P 500, SPY ETF and Bitcoin BTC. The Fractional Fourier Transform (FRFT) technique evaluates the probability density function and its derivatives in the maximum likelihood procedure. Based on the results from the statistical inference and the Kolmogorov-Smirnov (K-S) goodness-of-fit, the GTS distribution fits the underlying distribution of the SPY ETF return. The right side of the Bitcoin BTC return, and the left side of the S&P 500 return underlying distributions fit the Tempered Stable distribution; while the left side of the Bitcoin BTC return and the right side of the S&P 500 return underlying distributions are modelled by the compound Poisson process

Hull and White and AlΓ²s type formulas for barrier options in stochastic volatility models with nonzero correlation

Authors: Frido Rolloos

Categories: Pricing of Securities

PDF: http://arxiv.org/pdf/2205.05489v3

Dates: originally published: 2022-04-30, updated: 2022-05-31

Summary: Two novel closed-form formulas for the price of barrier options in stochastic volatility models with zero interest rate and dividend yield but nonzero correlation between the asset and its instantaneous volatility are derived. The first is a Hull and White type formula, and the second is a decomposition formula similar in form to the Alòs decomposition for vanilla options. A model-free approximation is also given.

Deep calibration of the quadratic rough Heston model

Authors: Mathieu Rosenbaum, Jianfei Zhang

Categories: Mathematical Finance, Risk Management, Computational Finance, Pricing of Securities

PDF: http://arxiv.org/pdf/2107.01611v2

Dates: originally published: 2021-07-04, updated: 2022-05-30

Summary: The quadratic rough Heston model provides a natural way to encode Zumbach effect in the rough volatility paradigm. We apply multi-factor approximation and use deep learning methods to build an efficient calibration procedure for this model. We show that the model is able to reproduce very well both SPX and VIX implied volatilities. We typically obtain VIX option prices within the bid-ask spread and an excellent fit of the SPX at-the-money skew. Moreover, we also explain how to use the trained neural networks for hedging with instantaneous computation of hedging quantities.

Reverse Sensitivity Analysis for Risk Modelling

Authors: Silvana M. Pesenti

Categories: Risk Management

PDF: http://arxiv.org/pdf/2107.01065v2

Dates: originally published: 2021-07-02, updated: 2022-05-31

Summary: We consider the problem where a modeller conducts sensitivity analysis of a model consisting of random input factors, a corresponding random output of interest, and a baseline probability measure. The modeller seeks to understand how the model (the distribution of the input factors as well as the output) changes under a stress on the output's distribution. Specifically, for a stress on the output random variable, we derive the unique stressed distribution of the output that is closest in the Wasserstein distance to the baseline output's distribution and satisfies the stress. We further derive the stressed model, including the stressed distribution of the inputs, which can be calculated in a numerically efficient way from a set of baseline Monte Carlo samples and which is implemented in the R package SWIM on CRAN. The proposed reverse sensitivity analysis framework is model-free and allows for stresses on the output such as (a) the mean and variance, (b) any distortion risk measure including the Value-at-Risk and Expected-Shortfall, and (c) expected utility type constraints, thus making the reverse sensitivity analysis framework suitable for risk models.
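For intuition on the Wasserstein-closest stressed distribution described above, one special case is easy to verify by hand: if the stress fixes only the output's mean, the W2-closest distribution shifts every baseline quantile by the same constant. A toy sketch on Monte Carlo samples (my own illustration, not the paper's or the SWIM package's code):

```python
# Toy illustration (not from the paper or SWIM): when the stress fixes
# only the mean, the Wasserstein-2-closest stressed distribution shifts
# the baseline quantile function uniformly by (target_mean - baseline_mean).
def stress_mean(samples, target_mean):
    baseline_mean = sum(samples) / len(samples)
    shift = target_mean - baseline_mean
    return [x + shift for x in samples]

baseline = [1.0, 2.0, 3.0, 6.0]   # hypothetical Monte Carlo output samples
stressed = stress_mean(baseline, target_mean=4.0)
print(stressed)  # every sample shifted by +1.0
```

Stresses on risk measures such as VaR or Expected Shortfall require the more involved projections derived in the paper; the uniform shift only covers the mean-constraint case.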

Efficient approximations for utility-based pricing

Authors: Laurence Carassus, Massinissa Ferhoune

Categories: Computational Finance, Pricing of Securities

PDF: http://arxiv.org/pdf/2105.08804v2

Dates: originally published: 2021-05-18, updated: 2022-05-30

Summary: In a context of illiquidity, the reservation price is a well-accepted alternative to the usual martingale approach which does not apply. However, this price is not closed and requires numerical methods such as Monte Carlo or polynomial approximations to evaluate it. We show that these methods can be inaccurate and propose a deterministic decomposition of the reservation price using the Lambert function. This decomposition allows us to perform an improved Monte Carlo method called LMC and to give deterministic approximations of the reservation price and of the optimal strategies based on the Lambert function. We also give an answer to the problem of selecting a hedging asset that minimizes the reservation price and also the cash invested. Our theoretical results are illustrated by numerical simulations.
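The decomposition above hinges on the Lambert function. For readers who want to experiment, the principal branch W0 (the solution of w·exp(w) = x) is easy to evaluate with a Newton iteration; a minimal pure-Python sketch, unrelated to the paper's specific decomposition:

```python
import math

# Minimal Newton iteration for the principal branch W0 of the Lambert
# function, i.e. the w solving w * exp(w) = x, for x >= 0. Illustrative
# only; it does not reproduce the paper's reservation-price decomposition.
def lambert_w0(x: float, tol: float = 1e-12) -> float:
    w = math.log(1.0 + x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

print(lambert_w0(1.0))  # the omega constant, about 0.567143
```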

Mortgage Contracts and Underwater Default

Authors: Yerkin Kitapbayev, Scott Robertson

Categories: Risk Management, Pricing of Securities

PDF: http://arxiv.org/pdf/2005.03554v4

Dates: originally published: 2020-05-07, updated: 2022-05-31

Summary: We analyze recently proposed mortgage contracts that aim to eliminate selective borrower default when the loan balance exceeds the house price (the "underwater" effect). We show contracts that automatically reduce the outstanding balance in the event of house price decline remove the default incentive, but may induce prepayment in low price states. However, low state prepayments vanish if the benefit from home ownership is sufficiently high. We also show that capital gain sharing features, such as prepayment penalties in high house price states, are ineffective as they virtually eliminate prepayment. For observed foreclosure costs, we find that contracts with automatic balance adjustments become preferable to the traditional fixed-rate contracts at mortgage rate spreads between 20-50 basis points. We obtain these results for perpetual versions of the contracts using American options pricing methodology, in a continuous-time model with diffusive home prices. The contracts' values and optimal decision rules are associated with free boundary problems, which admit semi-explicit solutions.

Singular Perturbation Expansion for Utility Maximization with Order-$\epsilon$ Quadratic Transaction Costs

Authors: Andrew Papanicolaou, Shiva Chandra

Categories: Trading and Market Microstructure, Portfolio Management

PDF: http://arxiv.org/pdf/1910.06463v4

Dates: originally published: 2019-10-14, updated: 2022-06-04

Summary: We present an expansion for portfolio optimization in the presence of small, instantaneous, quadratic transaction costs. Specifically, the magnitude of transaction costs has a coefficient that is of the order $\epsilon$ small, which leads to the optimization problem having an asymptotically-singular Hamilton-Jacobi-Bellman equation whose solution can be expanded in powers of $\sqrt\epsilon$. In this paper we derive explicit formulae for the first two terms of this expansion. Analysis and simulation are provided to show the behavior of this approximating solution.
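The shape of the expansion the abstract describes can be sketched as follows (the symbol $V^{(k)}$ for the correction terms is my notation, not necessarily the paper's):

```latex
% Assumed notation: V^{\epsilon} is the value function of the HJB problem
% and \epsilon the transaction-cost coefficient.
V^{\epsilon}(x) \;=\; V^{(0)}(x) \;+\; \sqrt{\epsilon}\, V^{(1)}(x)
  \;+\; \epsilon\, V^{(2)}(x) \;+\; O\!\big(\epsilon^{3/2}\big)
```

with the paper deriving explicit formulae for the first two terms of the expansion.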

Which portfolio is better? A discussion of several possible comparison criteria

Authors: Henryk Gzyl, Alfredo Rios

Categories: Portfolio Management

PDF: http://arxiv.org/pdf/1805.06345v3

Dates: originally published: 2018-05-16, updated: 2022-06-04

Summary: During the last few years, there has been an interest in comparing simple or heuristic procedures for portfolio selection, such as the naive, equal-weights portfolio choice, against more "sophisticated" portfolio choices, and in explaining why, in some cases, the heuristic choice seems to outperform the sophisticated choice. We believe that some of these results may be due to the comparison criterion used. It is the purpose of this note to analyze some ways of comparing the performance of portfolios. We begin by analyzing each criterion proposed on the market line, in which there is only one random return. Several possible comparisons between optimal portfolios and the naive portfolio are possible and easy to establish. Afterwards, we study the case in which there is no risk free asset. In this way, we believe some basic theoretical questions regarding why some portfolios may seem to outperform others can be clarified.

How brokers can optimally plot against traders

Authors: Manuel Lafond

Categories: Trading and Market Microstructure

PDF: http://arxiv.org/pdf/1605.04949v2

Dates: originally published: 2016-04-08, updated: 2022-06-02

Summary: Traders buy and sell financial instruments in hopes of making profit, and brokers are responsible for the transaction. There are several hypotheses and conspiracy theories arguing that in some situations, brokers want their traders to lose money. For instance, a broker may want to protect the positions of a privileged customer. Another example is that some brokers take positions opposite to their traders', in which case they make money whenever their traders lose money. These are reasons for which brokers might manipulate prices in order to maximize the losses of their traders. In this paper, our goal is to perform this shady task optimally -- or at least to check whether this can actually be done algorithmically. Assuming total control over the price of an asset (ignoring the usual aspects of finance such as market conditions, external influence or stochasticity), we show how in quadratic time, given a set of trades specified by a stop-loss and a take-profit price, a broker can find a maximum loss price movement. We also look at an online trade model where broker and trader exchange turns, each trying to make a profit. We show in which condition either side can make a profit, and that the best option for the trader is to never trade.
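The broker's optimization in the abstract can be illustrated with a much-simplified toy: unit-size long and short trades, each with a stop-loss and a take-profit, and a price path restricted to one down leg followed by one up leg. Brute-forcing the turning points over trigger levels is a sketch of the idea only, under my own conventions, and is not the paper's quadratic-time algorithm:

```python
from itertools import product

# Toy brute force over two-leg price movements (down to `low`, then up to
# `high`). Illustrative sketch under assumed conventions, NOT the paper's
# algorithm. Unit-size trades: a long is (side, stop, take) with
# stop < p0 < take; a short has take < p0 < stop.
def trader_pnl(trades, p0, low, high):
    """Total trader PnL when the price first falls to `low`, then rises
    to `high`; trades never triggered are assumed flat (PnL 0)."""
    total = 0.0
    for side, stop, take in trades:
        if side == "long":
            if stop >= low:
                total += stop - p0      # stopped out on the down leg
            elif take <= high:
                total += take - p0      # take-profit hit on the up leg
        else:  # short
            if take >= low:
                total += p0 - take      # take-profit hit on the down leg
            elif stop <= high:
                total += p0 - stop      # stopped out on the up leg
    return total

def worst_movement(trades, p0):
    """Brute-force the (low, high) pair minimizing trader PnL."""
    lows = [lvl for _, s, t in trades for lvl in (s, t) if lvl < p0]
    highs = [lvl for _, s, t in trades for lvl in (s, t) if lvl > p0]
    return min((trader_pnl(trades, p0, lo, hi), lo, hi)
               for lo, hi in product(lows + [p0], highs + [p0]))

trades = [("long", 90.0, 120.0), ("short", 110.0, 80.0)]
print(worst_movement(trades, 100.0))  # -> (-20.0, 90.0, 110.0)
```

In the example, dipping to 90 stops out the long without letting the short take profit, then rising to 110 stops out the short as well, so both traders lose.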

** Original source code: https://github.com/machalejm/arxiv_scraper **


An amazing project from a very promising team

IKONIC #CRYPTO #BSC #BINANCE #BITCOIN

https://www.ikonic.gg/ An amazing project from a very promising team; the guys are genuinely engaged in the project and its development and promotion.


The simplest question to ask yourself: have you bought all the crypto you want to own?

If you haven't yet bought all the crypto you want to own, and your stake in the projects you believe in isn't as high as you want it to be, whether your target is 1 bitcoin, 10 bitcoin, or 10,000 ETH, why would you possibly be upset about prices dropping? If you own a car and know you want to buy another, are you worried when car prices drop?

Crypto investors, particularly young ones who see it as a later-life investment, seem to get excited when things are mooning. But if the intention is to acquire rather than sell, why does sentiment get worse when world events or markets drive the price down? Watching people get depressed over this seems totally illogical to me, at least for those who don't need liquidity.

Just enjoy the lowering cost of your average buy.
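The "lowering average cost" point is just dollar-cost-averaging arithmetic: spending a fixed amount per purchase means you buy more units when the price is low, so your average cost per unit is the harmonic mean of the prices, which falls faster than the prices themselves. A quick sketch with hypothetical numbers:

```python
# Dollar-cost-averaging arithmetic: fixed spend per purchase means the
# average cost per unit is the harmonic mean of the prices paid.
# Prices below are hypothetical.
def average_cost(prices, spend_per_buy=100.0):
    units = sum(spend_per_buy / p for p in prices)
    return spend_per_buy * len(prices) / units

print(average_cost([100.0, 50.0]))  # harmonic mean: about 66.67, not 75
```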


Been there, done that, have the T-Sh*rt, and still going to hold

SHIBArmy, the value of your stash is not going down because SHIBA INU sucks. It is going down because people need gas and food money. Some folks say they can't afford to drive to work anymore due to high gas prices. Real life is why the crypto market is crashing, and some even think it will push Bitcoin to $20k or lower.

Russia will eventually come to its senses, and then, we can hope, the gas tycoons will too. For now it's time for patience, even if it takes a year or more. SHIBA INU is not going to poof out of existence on you. Crypto is in a legitimate crash until economies recover from current events. This is a gold hold.