Across the insurance market today, data requirements, technical development and applications for the output of catastrophe models are increasingly sophisticated. Probabilistic risk measures are common currency throughout the market, while operating practices, individual priorities and differing perspectives mean that individual market participants evolve through this process in different ways.
A common driver of these efforts is the need to relate the cost of catastrophe risk to a given capital base. One way to view this is as a relationship between three components:
- capital exposed to cat risk
- premiums associated with that business
- risk metrics
Each of these components has variations, allowing flexibility but requiring clear thinking and decision-making along the way.
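As a purely illustrative sketch (the figures, variable names and the simple cost-of-capital loading below are assumptions, not a prescribed method), the three components might be related along these lines:

```python
# Illustrative only: hypothetical figures tying the three components together.
# The cost of catastrophe risk is expressed as the modelled expected loss plus
# a charge on the capital exposed, then compared with the premium for the book.

capital_exposed = 20_000_000   # capital exposed to cat risk
premium = 4_000_000            # premium associated with that business
expected_cat_loss = 1_200_000  # risk metric: annual expected loss from the model
cost_of_capital_rate = 0.08    # assumed required return on exposed capital

cat_risk_cost = expected_cat_loss + cost_of_capital_rate * capital_exposed
print(f"Cost of cat risk: {cat_risk_cost:,.0f}")
print(f"As a share of premium: {cat_risk_cost / premium:.1%}")
```

How each input is defined, and which loading rule is appropriate, are exactly the choices discussed below.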
Of the three, premium income is usually the most readily available. The target business applications will help determine the choices between using gross/net and written/earned premiums. Gross written premiums would be suitable for underwriting considerations, while net earned premiums better support profitability analyses.
Qualifying and quantifying exposure accumulations present the next series of choices and challenges, and are the most labour intensive part of the process. Often, because of the effort involved, it is tempting to collect data at a relatively broad level of resolution, rather than more site-specific levels. Unfortunately, low resolution and aggregated data are not adequate for modelling needs. The best option remains to gather the highest resolution, most detailed data available and work upwards.
The resulting data are not only suitable for modelling catastrophe risk, but also provide a basis for a number of exposure profiles that can be related to the capital put at risk. The basic definition for capital at risk is a sum of the (re)insurance limits in-force in a given area. Several aspects of this information need to be clear before it can be developed:
- One aspect is the nature of those limits and how they vary across different types of business. Consider the difference between an insurance-to-value exposure versus an excess of loss construct with a stated occurrence limit.
- Another aspect is the treatment of scheduled business: a policy covering a large number of locations may expose primary and/or excess of loss limits in several areas at once. Clarity of interpretation and methodology is important to avoid counting limits multiple times when exposure areas are added together (see the sketch after this list).
- Finally, exposures should, as far as possible, be stated on a basis consistent with the gross or net premiums used, again depending on the target application.
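A minimal sketch of the accumulation step might look as follows; the policy records, field names and the one-occurrence-limit treatment of scheduled business are hypothetical simplifications:

```python
from collections import defaultdict

# Hypothetical policy records: a single-location policy exposes its limit in
# its own area; a scheduled policy lists many locations but carries one
# occurrence limit, which must not be counted once per location when areas
# are combined.
policies = [
    {"id": "P1", "limit": 10_000_000, "locations": ["ZoneA"]},
    {"id": "P2", "limit": 25_000_000, "locations": ["ZoneA", "ZoneB", "ZoneC"]},
    {"id": "P3", "limit": 5_000_000,  "locations": ["ZoneB"]},
]

capital_at_risk = defaultdict(float)
for policy in policies:
    for zone in policy["locations"]:
        # Within a single zone, the policy can expose at most its occurrence
        # limit, so the full limit is recorded once per zone it touches.
        capital_at_risk[zone] += policy["limit"]

# When zones are combined, re-adding the per-zone totals would count P2's
# limit three times; accumulate at the policy level instead.
combined = sum(p["limit"] for p in policies)

print(dict(capital_at_risk))     # per-zone accumulations
print(f"Combined (no double counting): {combined:,.0f}")
```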
Risk metrics, our third component, also have flexible types and definitions.
The most common metrics these days reflect probabilistic analyses that address frequency and severity of potential loss. Probability of exceedance analyses identify the likelihood of different levels of loss, while other measures such as the expected loss (pure premium) and its associated volatility (usually expressed as standard deviation) are popular bases for determining the cost of catastrophe risk.
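A minimal sketch of these metrics, assuming a hypothetical simulated year loss table rather than any particular vendor model's output:

```python
import numpy as np

# Hypothetical year loss table: one simulated annual cat loss per year.
rng = np.random.default_rng(0)
annual_losses = rng.gamma(shape=0.5, scale=4_000_000, size=10_000)

# Expected loss (pure premium) and its associated volatility.
expected_loss = annual_losses.mean()
volatility = annual_losses.std()
print(f"Expected loss (pure premium): {expected_loss:,.0f}")
print(f"Standard deviation: {volatility:,.0f}")

# Probability of exceedance: the likelihood of exceeding a given loss level...
for threshold in (5_000_000, 10_000_000, 20_000_000):
    prob = (annual_losses > threshold).mean()
    print(f"P(loss > {threshold:,.0f}) = {prob:.2%}")

# ...and the loss level exceeded with a given probability (return period view).
for return_period in (10, 100, 250):
    level = np.quantile(annual_losses, 1 - 1 / return_period)
    print(f"1-in-{return_period} annual loss: {level:,.0f}")
```

In practice the year loss table would come from the chosen catastrophe model rather than a toy distribution, but the metrics are read off it in the same way.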
That calculated cost may be used to support capital allocation and risk pricing. Choices at this stage should reflect fundamental corporate philosophies on risk tolerance and risk management. A key factor is the role and scale of catastrophe risk in the portfolio's development. For a portfolio where catastrophe risk is not a primary focus, the goal will be to reduce portfolio volatility, and the emphasis is likely to be on controlling the probability and severity of the maximum potential loss levels. A portfolio heavy in catastrophe risk will share those concerns, but is likely to have more choices in allocating capital by using risk metrics. These can include segmenting exceedance probability distributions and their associated expected losses, quantifying and controlling diversification, and managing the development of catastrophe risk relative to the development of exposure and premium.
Additionally, marginal capital costing and pricing become viable options. While this makes it possible to value incoming business relative to the overall portfolio risk, most techniques allow the order in which business arrives to influence the costing of subsequent opportunities. Periodically, the cost of capital for the portfolio and its individual components therefore needs recalculating to determine their true relationships.
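A minimal sketch of that order effect, using a hypothetical 1-in-100 annual loss level as the capital proxy and simulated losses in place of real model output:

```python
import numpy as np

# Hypothetical simulated annual losses for an existing portfolio and two
# incoming contracts, all on the same set of simulation years.
rng = np.random.default_rng(1)
years = 20_000
portfolio = rng.gamma(0.6, 3_000_000, years)
contract_a = 0.2 * portfolio + rng.gamma(0.4, 500_000, years)   # correlated with the portfolio
contract_b = rng.gamma(0.4, 1_500_000, years)                   # largely independent

def tail_capital(losses, prob=0.01):
    """Capital proxy: the 1-in-100 annual loss level."""
    return np.quantile(losses, 1 - prob)

base = tail_capital(portfolio)
with_a = tail_capital(portfolio + contract_a)
with_b = tail_capital(portfolio + contract_b)
with_both = tail_capital(portfolio + contract_a + contract_b)

# Marginal capital depends on what is already in the portfolio, so the order
# of arrival changes each contract's cost; hence the periodic recalculation.
print(f"A then B: A = {with_a - base:,.0f}, B = {with_both - with_a:,.0f}")
print(f"B then A: B = {with_b - base:,.0f}, A = {with_both - with_b:,.0f}")
```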
Returning to the exposure-premium-risk triangle, we now have the tools not only to calculate the cost of catastrophe risk, but also to control it better and ensure profitability. Individual risk differentiation, competitive pricing opportunities, and the effectiveness of diversification and risk transfer vehicles become much more tangible and consistent.
These capabilities have established catastrophe modelling as an ongoing business discipline. While these tools provide valuable information and support a wide range of decisions, their limitations must remain clearly recognised. Models help describe what we understand and believe about reality, but cannot predict it completely. In the applications discussed, they provide the important ability to evaluate different choices on a consistent basis, as well as a new medium of communication.
CASE STUDY: WELLINGTON CAT RISK MANAGEMENT PHILOSOPHY
Over the past five years, Wellington Underwriting has made a significant effort to advance catastrophe risk assessment and management capabilities through the use of cat models. Achieving success in this discipline at a complex organisation operating in Lloyd's and through multiple branch offices in the US has required dedication and flexibility. This article will briefly profile our thinking through the development of the process and the applications of the results.
What's important
Effective planning starts with the end in mind. One of the first choices we faced was where in the business process to emphasise use of modelling and cat risk metrics: should our key priority be understanding portfolio risk, or is using the models as underwriting tools more important? In underwriting applications, should our emphasis be on risk selection or risk pricing?
On the portfolio side, should we be looking at portfolio risk across all business first, or for individual business units?
These questions look binary, but the answers are in fact a blend. We will absolutely apply the models to support underwriting, but understanding portfolio risk is the primary target. In underwriting, both risk selection and pricing applications will play a part, their weighting and relevance to be determined by the underwriting team and the nature of the individual risk. We will understand our combined portfolio risk through the understanding of the risk of individual units.
The nature of our business influences our choices
Clearly, the priorities we set above reflect the nature of our business.
As a Lloyd's syndicate, most of our portfolios are based on large, complex risks rather than large numbers of small, homogeneous risks. This type of portfolio requires flexibility in applying any sort of model construct to the underwriting process, as each risk will have its own exceptional needs and behaviours.
Additionally, we manage our portfolio through multiple distinct books, each of which has its own growth and performance targets. Subordinating individual book performance to the whole constrains choices in managing risk, growing the business and maximising profitability.
Having said that, as a business we must naturally also understand the comprehensive portfolio and relate that back to syndicate management practices.
We must support regulatory, reinsurance and rating agency requirements.
Key to this is the capture of correlations in risk across the spectrum of business.
It's all about correlation
All cat risk assessment is event-driven at some level. Large events are expected to hit multiple books, but in different ways. Varying levels of engagement in different geographic areas also mean that simply adding cat risk numbers together does not always work. Models provide keys to quantifying correlated risks by providing consistent event frameworks through which different risks and portfolios can be analysed independently or together.
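A minimal sketch of why standalone numbers cannot simply be added, assuming two hypothetical books whose simulated losses share the same set of years (the shared event framework):

```python
import numpy as np

# Hypothetical annual cat losses for two books, simulated over the same event
# framework, so losses can be combined year by year before any metric is taken.
rng = np.random.default_rng(2)
years = 20_000
shared_events = rng.gamma(0.5, 2_000_000, years)           # events hitting both books
book_a = shared_events + rng.gamma(0.3, 1_000_000, years)
book_b = 0.5 * shared_events + rng.gamma(0.3, 1_500_000, years)

def one_in_100(losses):
    return np.quantile(losses, 0.99)

standalone_sum = one_in_100(book_a) + one_in_100(book_b)
combined = one_in_100(book_a + book_b)   # correlation handled through shared years

print(f"Sum of standalone 1-in-100 losses: {standalone_sum:,.0f}")
print(f"Combined 1-in-100 loss: {combined:,.0f}")
```

The combined figure reflects both the correlation induced by the shared events and the diversification between the books, neither of which is visible in the simple sum.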
An example of this is the Lloyd's realistic disaster scenarios (RDS).
These are a series of specific event scenarios prescribed by Lloyd's.
Each syndicate is required to estimate its potential losses from these scenarios and report them to Lloyd's annually.
Completing these provides us with a clear, repeatable view of correlated risk across our books. Each event scenario is applied to each book independently and the results may be aggregated to provide an overall view of risk.
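A minimal sketch of that aggregation; the scenario names, book names and loss figures below are hypothetical:

```python
# Hypothetical scenario-by-book loss estimates. Each prescribed scenario is
# run against each book independently; aggregating across books gives the
# syndicate-level view of correlated loss for that scenario.
scenario_losses = {
    "US windstorm":  {"Property": 40_000_000, "Marine": 5_000_000, "Energy": 8_000_000},
    "EU windstorm":  {"Property": 25_000_000, "Marine": 2_000_000, "Energy": 1_000_000},
    "JP earthquake": {"Property": 15_000_000, "Marine": 6_000_000, "Energy": 3_000_000},
}

for scenario, by_book in scenario_losses.items():
    total = sum(by_book.values())
    detail = ", ".join(f"{book} {loss:,.0f}" for book, loss in by_book.items())
    print(f"{scenario}: total {total:,.0f} ({detail})")
```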
This sort of deterministic, single-event analysis is useful for testing processes and illustrating book behaviour. However, it does not address distributions of severity, nor does it address frequency. To incorporate these, we rely on probabilistic analyses, which provide metrics such as loss levels with associated probabilities and annual expected loss levels.
Even in these more complex frameworks, the modelling methodologies respect and reflect correlation of risk within and across portfolios.
This ability to understand correlation and diversification has ramifications throughout the business process. For instance, understanding correlated risk within a portfolio, along with the relationships of market forces, assists in the business planning process for individual units. Likewise it is possible to quantify diversification across business units when allocating capital and/or shared reinsurance costs.
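A minimal sketch of one such allocation, assuming hypothetical business units and a simple pro-rata rule against standalone 1-in-100 losses (marginal or co-measure rules would be equally valid choices):

```python
import numpy as np

# Hypothetical annual losses for three business units on shared simulation years.
rng = np.random.default_rng(3)
years = 20_000
units = {
    "UnitA": rng.gamma(0.5, 2_000_000, years),
    "UnitB": rng.gamma(0.5, 1_500_000, years),
    "UnitC": rng.gamma(0.5, 1_000_000, years),
}

def one_in_100(losses):
    return np.quantile(losses, 0.99)

standalone = {name: one_in_100(losses) for name, losses in units.items()}
combined = one_in_100(sum(units.values()))   # diversified requirement

# Diversified capital (or a shared reinsurance cost) allocated back to the
# units pro-rata to their standalone requirements.
for name, cap in standalone.items():
    share = cap / sum(standalone.values())
    print(f"{name}: standalone {cap:,.0f}, allocated {share * combined:,.0f}")
```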
Eventually, it is possible to evolve a feedback loop that defines the characteristics of the business most profitable in a specific risk and market situation, and to use that knowledge to identify better business that supports growth towards performance goals.
www.wellington.co.uk