Ronald Richman

City of Johannesburg, Gauteng, South Africa
12K followers · 500+ connections

About

Ron is an experienced actuary and risk manager, and has recently founded an Insurtech…


Experience

  • insureAI

    Johannesburg Metropolitan Area


Education

  • University of the Witwatersrand

    Dissertation title: South African Old Age Mortality – Experience from 1985–2011

  • Applications of deep learning in actuarial science


Publications

  • Time-Series Forecasting of Mortality Rates using Deep Learning

    Scandinavian Actuarial Journal

    The time-series nature of mortality rates lends itself to processing through neural networks
    that are specialized to deal with sequential data, such as recurrent and convolutional
    networks. Although appealing intuitively, a naive implementation of these networks does
    not lead to enhanced predictive performance. We show how the structure of the Lee-Carter
    model can be generalized, and propose a relatively simple convolutional network model that
    can be interpreted as a generalization of the Lee-Carter model, allowing for its components
    to be evaluated in familiar terms. The model produces highly accurate forecasts on the Human
    Mortality Database, and, without further modification, generalizes well to the United
    States Mortality Database.

    Other authors
    • Francesca Perla
    • Salvatore Scognamiglio
    • Mario V. Wüthrich
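
    For context, the classical Lee-Carter structure that the paper generalizes can be written as (a standard formulation, not quoted from the paper itself):

        \log m_{x,t} = a_x + b_x \, k_t

    where m_{x,t} is the mortality rate at age x in year t, a_x is the average log rate at age x, k_t is a period index for the overall level of mortality in year t, and b_x measures how strongly age x responds to changes in k_t. Forecasting then reduces to projecting the univariate time series k_t, which is the step the convolutional network generalizes.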
  • Discrimination-Free Insurance Pricing

    SSRN

    A simple formula for non-discriminatory insurance pricing is introduced. This formula is based on the assumption that certain individual (discriminatory) policyholder information is not allowed to be used for insurance pricing. The suggested procedure can be summarized as follows: First, we construct a price that is based on all available information, including discriminatory information. Thereafter, we average out the effect of discriminatory information. This averaging out is done such that discriminatory information can also not be inferred from the remaining non-discriminatory one, thus, neither allowing for direct nor for indirect discrimination.

    Other authors
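
    A sketch of the averaging-out step in the notation the abstract suggests (the symbols here are assumptions for illustration): with Y the claim, X the allowed covariates and D the discriminatory ones, the best-estimate price \mu(x, d) = \mathbb{E}[Y \mid X = x, D = d] is replaced by

        h^*(x) = \sum_{d} \mathbb{E}[Y \mid X = x, D = d] \, \mathbb{P}(D = d)

    The point is that D is integrated out using its unconditional distribution \mathbb{P}(D = d) rather than \mathbb{P}(D = d \mid X = x); conditioning on x would let the allowed covariates proxy for D and reintroduce indirect discrimination.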
  • Believing the Bot - Model Risk in the Era of Deep Learning

    SSRN

    Deep Learning models are currently being introduced into business processes to support decision-making in insurance companies. At the same time model risk is recognized as an increasingly relevant field within the management of operational risk that tries to mitigate the risk of poor business decisions because of flawed models or inappropriate model use. In this paper we try to determine how Deep Learning models are different from established actuarial models currently in use in insurance companies and how these differences might necessitate changes in the model risk management framework. We analyse operational risk in the development and implementation of Deep Learning models using examples from pricing and mortality forecasting to illustrate specific model risks and controls to mitigate those risks. We discuss changes in model governance and the role that model risk managers could play in providing assurance on the appropriate use of Deep Learning models.

    Other authors
  • Lee and Carter go Machine Learning: Recurrent Neural Networks

    SSRN

    In this tutorial we introduce recurrent neural networks (RNNs), and we describe the two most popular RNN architectures. These are the long short-term memory (LSTM) network and gated recurrent unit (GRU) network. Their common field of application is time series modeling, and we demonstrate their use on a mortality rate prediction problem using data from the Swiss female and male populations.

    Other authors
    • Mario V. Wüthrich
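
    A minimal, self-contained sketch of the kind of model the tutorial describes, using Keras. The window length, layer sizes and the synthetic data are illustrative assumptions, not the tutorial's exact setup.

        import numpy as np
        from tensorflow import keras

        T, ages, lookback = 70, 100, 10   # years of data, ages 0..99, input window

        # synthetic stand-in for a (years x ages) matrix of log mortality rates
        log_mx = np.random.randn(T, ages).astype("float32")

        # training pairs: a 10-year window of rates -> the following year's rates
        X = np.stack([log_mx[t:t + lookback] for t in range(T - lookback)])
        y = log_mx[lookback:]

        model = keras.Sequential([
            keras.Input(shape=(lookback, ages)),
            keras.layers.LSTM(128),      # swap in keras.layers.GRU(128) for the GRU variant
            keras.layers.Dense(ages),    # one predicted log rate per age
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=5, batch_size=16, verbose=0)

        forecast = model.predict(log_mx[-lookback:][None, ...])  # next-year log rates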
  • A Neural Network Extension of the Lee-Carter Model to Multiple Populations

    SSRN

    The Lee-Carter model is a basic approach to forecasting mortality rates of a single population. Although extensions of the Lee-Carter model to forecasting rates for multiple populations have recently been proposed, the structure of these extended models is hard to justify and the models are often difficult to calibrate, relying on customized optimization schemes. Based on the paradigm of representation learning, we extend the Lee-Carter model to multiple populations using neural networks, which automatically select an optimal model structure. We fit this model to mortality rates since 1950 for all countries in the Human Mortality Database and observe that the out-of-sample forecasting performance of the model is highly competitive.

    Other authors
    • Mario Wüthrich
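
    An illustrative sketch of the representation-learning idea, assuming Keras: categorical inputs are mapped to learned embeddings and a small network regresses the log mortality rate on them. Dimensions, layer sizes and input encoding are assumptions, not the paper's exact architecture.

        from tensorflow import keras

        n_countries, n_ages, n_years, dim = 40, 100, 70, 5

        country = keras.Input(shape=(1,), dtype="int32", name="country")
        age = keras.Input(shape=(1,), dtype="int32", name="age")
        year = keras.Input(shape=(1,), dtype="int32", name="year")

        def embed(n_levels, x):
            # learn a dim-dimensional representation of each category level
            return keras.layers.Flatten()(keras.layers.Embedding(n_levels, dim)(x))

        z = keras.layers.Concatenate()(
            [embed(n_countries, country), embed(n_ages, age), embed(n_years, year)])
        z = keras.layers.Dense(128, activation="tanh")(z)
        log_rate = keras.layers.Dense(1)(z)   # predicted log mortality rate

        model = keras.Model([country, age, year], log_rate)
        model.compile(optimizer="adam", loss="mse")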
  • Neural Network Embedding of the Over-Dispersed Poisson Reserving Model

    SSRN

    The main idea of this paper is to embed a classical actuarial regression model into a neural network architecture. This nesting allows us to learn model structure beyond the classical actuarial regression model if we use as starting point of the neural network calibration exactly the classical actuarial model. Such models can be fitted efficiently which allows us to explore bootstrap methods for prediction uncertainty. As an explicit example we consider the cross-classified over-dispersed Poisson model for general insurance claims reserving. We demonstrate how this model can be improved by neural network features.

    Other authors
    • Mario Wüthrich
    • Andrea Gabrielli
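
    The classical starting point is the cross-classified over-dispersed Poisson model, in which (in standard notation, assumed here) the incremental claims X_{i,j} of accident year i and development year j satisfy

        \mathbb{E}[X_{i,j}] = \exp(c + a_i + b_j), \qquad \mathrm{Var}(X_{i,j}) = \phi \, \mathbb{E}[X_{i,j}]

    Embedding this regression in a network means initializing the network so that it reproduces exactly these fitted values, and then letting training learn structure, for example interactions between accident and development year, that the additive predictor cannot capture.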
  • AI in Actuarial Science - working paper

    SSRN

    Rapid advances in Artificial Intelligence and Machine Learning are creating products and services with the potential not only to change the environment in which actuaries operate, but also to provide new opportunities within actuarial science. These advances are based on a modern approach to designing, fitting and applying neural networks, generally referred to as “Deep Learning”. This paper investigates how actuarial science may adapt and evolve in the coming years to incorporate these new techniques and methodologies. After providing some background on machine learning and deep learning, and providing a heuristic for where actuaries might benefit from applying these techniques, the paper surveys emerging applications of AI in actuarial science, with examples from mortality modelling, claims reserving, non-life pricing and telematics. For some of the examples, code has been provided on GitHub so that the interested reader can experiment with these techniques for themselves. The paper concludes with an outlook on the potential for actuaries to integrate deep learning into their activities.

  • Mortality rates and improvement over time at advanced ages in South Africa – insights from the national-level data

    Presented at the Actuarial Society of South Africa’s 2016 Convention, 23–24 November 2016, Cape Town International Convention Centre

    Actuaries rely on population mortality rates to determine compensation in cases of damages; trends in mortality rates inform the modelling of mortality risk and the valuation of insurance companies and pension schemes; and, not least, actuarial calculations of population mortality contribute to wider societal debates. Estimating the level and trend in population mortality rates at advanced ages in South Africa is complicated by several data problems. Population and death data, particularly in developing countries, often suffer from age misreporting – age exaggeration and digit preference. In addition, censuses may under- or overestimate the population, and registration of deaths is usually incomplete in developing countries.

    In this research, we use the Death Distribution Methods (Moultrie et al., 2013) to correct the death data for incomplete registration of deaths, and the Near Extinct Generation (NEG) methods (Thatcher et al., 2002) to estimate the population by projecting future deaths of nearly extinct cohorts. In applying NEG methods to the South African data, we exploit the theoretical connection to actuarial methods for the calculation of claims incurred but not yet reported, and propose an adapted NEG method based on the chain-ladder model of Renshaw and Verrall (1998) to smooth the digit preference in the death data. We use this model to re-estimate the population at each age from 70 and above and to calculate mortality rates since 1996. We find that both the population and death data suffer from the same pattern of digit preference and that the population data are affected by age exaggeration, leading to underestimated mortality rates if the census counts are used as exposures. The level and trend in mortality rates are discussed and compared to the mortality rates in the Human Mortality Database, other studies of South African mortality and insured life tables.

    The paper was "Highly Commended" by the Awards Committee of the IFoA in 2017.

    Other authors
    • Rob Dorrington
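
    The near extinct generation idea can be sketched as follows (a standard formulation, assumed here, not quoted from the paper): for a cohort aged x at time t, the population still alive is estimated from that cohort's own future deaths,

        P(x, t) = \sum_{s \ge 0} D(x + s, t + s)

    where D(x + s, t + s) are the cohort's deaths at later ages. For cohorts not yet extinct, the unobserved future deaths must be projected, which is exactly where the analogy to claims incurred but not yet reported, and hence the chain-ladder model, enters.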

Honors & Awards

  • 2020 Hachemeister Prize

    Casualty Actuarial Society/International Actuarial Association: ASTIN

    This prize was established in 1993 in recognition of Charles A. Hachemeister's many contributions to Actuarial Studies in Non-Life Insurance (ASTIN) and his efforts to establish a closer relationship between the Casualty Actuarial Society (CAS) and ASTIN.

  • Outstanding Achievement Award - GI Board

    Institute and Faculty of Actuaries

Languages

  • English

    Native or bilingual proficiency

  • Hebrew

    Limited working proficiency
