
Read Chapter 25, "Uses of Efficient Frontier Analysis in Strategic Risk Management," from the textbook: Fraser, J., Simkins, B., & Narvaez, K. (2014). Implementing enterprise risk management: Case studies and best practices. John Wiley & Sons.

After reading Chapter 25, answer the questions below. Considering an organization you are familiar with, describe how the same ERM techniques from the Chapter 25 use case could be used to mitigate risks:
1. Brief description of the organization and proposed risks.
2. Rationale for whether the same ERM models and/or techniques can be used in the organization to mitigate the risks identified.

Requirements
1. Length of paper: 2 pages of content (excluding title and reference pages).
2. APA style; include citations.
3. Minimum of two scholarly references with published dates no earlier than 2014.
4. Zero plagiarism.
CHAPTER 25
Uses of Efficient Frontier Analysis in Strategic Risk Management: A Technical Examination

WARD CHING, Vice President, Risk Management Operations, Safeway Inc.
LOREN NICKEL, FCAS, CFA, MAAA, Regional Director and Actuary, Aon Global Risk Consulting

Over the past 25 years, the use of advanced quantitative financial and behavioral analysis has received increasing attention in an attempt to better understand and predict the performance impact on hazard risk portfolios. The limitations of single-discipline modeling and decision making, which can lead to misreading of financial and performance risks across broad operational categories, were highlighted by the collapse of the financial markets in mid-2007. The need to answer broader risk questions has motivated the risk management industry (i.e., insurance, actuarial, finance, audit, and operations) to recalibrate and redirect core analytical protocols toward a more integrated approach. The effort to take advantage of complex data techniques was, in part, stimulated by the evolving integration of risk management frameworks into what is now being modestly referred to as enterprise risk management (ERM) or strategic risk management (SRM).1

Within the 2013 Risk and Insurance Management Society (RIMS) SRM Implementation Guide, the concept of strategic risk management is defined as a "business discipline that drives the deliberations and actions surrounding business-related uncertainties, while uncovering untapped opportunities reflected in an organization's strategy and execution." What distinguishes this definition from previous descriptions of enterprise-wide risk management (ERM) approaches is the effort to sustainably deliver a robust, fact-based strategic dialogue across the entire organization. This new strategic dialogue requires an analytical framework that is dynamic and encompasses all areas of an enterprise.
In this chapter, we demonstrate how the use of efficient frontier analysis (EFA), and many of its derivative techniques, provides a robust portfolio approach to hazard, operational, market, and reputational risk domains.

STRATEGIC RISK MANAGEMENT FRAMEWORK EXAMINED

One of the most important ways SRM benefits an organization is its ability to create opportunities for interaction and risk discovery (sometimes called "risk sensing") across organizational boundaries. This has not always been the case with previous ERM frameworks, whose conceptual structures were overly formalized and yielded very narrow risk estimates; for most active SRM practitioners, such narrowness has proven avoidable. Even in the area of insurance, where dialogues around risk estimates of frequency and severity are common, the effort to cross internal organizational boundaries has sometimes been met with significant resistance or dismissal.

An illustration of the SRM approach as described by RIMS is shown in Exhibit 25.1.

Exhibit 25.1 Strategic Risk Management Diagram
Source: RIMS Strategic Risk Management Implementation Guide 2012.

While first impressions might suggest that the SRM framework is a closed system, in actuality it is a continuous cycle with a robust opportunity for various parts of an organization to recognize and examine risk profiles within the context of strategy setting, with the focus toward establishing the trade-off between risk transfer and risk assumption. Moreover, the notion of risk appetite and risk tolerance, combined with scenario and stress testing, speaks to a more comprehensive analytical framework.2 The intent of this framework is to drive a different set of "analytically informed" discussions among decision makers, who may also be asking whether the risk profile of the organization constitutes a competitive opportunity.
As Fox and Merrifield point out:

    Strategic risk management focuses on the risks that may impede or accelerate the organization's strategic objectives for creating value, whether that value is expressed as market share, profit, service provision, donor levels, social impact, or other benefit. Strategic risk management serves as a source of competitive advantage for decision making in two aspects: risk to the objectives themselves and risks arising from the plans to meet the objectives. While many organizations include risks to the objectives themselves, little consideration generally is given to the risks arising from the plans to meet the objectives, nor to the additional opportunities evolving from the underlying strategy and from emerging and dynamic risks. When addressed early and linked to the control framework, strategic adjustments can be made relatively quickly. (Fox and Merrifield, RIMS Strategic Risk Management Implementation Guide, 2013)

The fundamental difference between traditional risk assessment and SRM is the conscious effort to define advantageous or exploitable risk profiles that can be used to sustainably differentiate or distinguish the organization in a competitively noisy environment.

MODERN PORTFOLIO THEORY AS A FOUNDATION FOR EFFICIENT FRONTIER ANALYSIS

Modern portfolio theory (MPT) is a mathematical method, developed in the early 1950s and built out through the mid-1970s, and a theory of finance that seeks to maximize expected portfolio return for a given level of risk (or, equivalently, to minimize risk for a given level of expected return) by deliberately choosing the proportions of the various assets contained in the portfolio. For the most part, MPT consists of a number of mathematical formulations that simulate and identify the impact of risk-adjusted investment diversification, where the portfolio's collective volatility is lower than that of any single asset.
In general, MPT models asset returns as normally distributed and treats risk as the standard deviation of return, where the portfolio is viewed as the weighted combination of assets. Thus, the return of a given portfolio is the weighted combination of the asset return streams (Markowitz 1952). Expected return is characterized as:

    E(Rp) = Σi wi E(Ri)

where Rp is the return on the portfolio, Ri is the return on asset i, and wi is the weighting of asset i (the proportion of asset i in the overall portfolio).

The operational concept behind MPT is that the assets in an investment portfolio should not be selected individually; selection should consider how their relative prices and values change across the portfolio. For many, this speaks to the relative trade-offs between calculated risk and expected return. Therefore, MPT would argue that assets and investments with higher expected returns attract higher measurable levels of risk. If the objective is to maximize the return on a portfolio of performing assets, MPT provides a way to describe and select those assets and investments that fit the return demand.

From an SRM perspective, within any operating organization there exists a series of hazard, operational, market, human capital, and reputational risks. These risks, while generally identified and mitigated separately, in fact exist in an integrated operational space: a risk portfolio. The essential questions that MPT can attempt to answer are:

- What is the economic value of an organization's material risk profile when characterized as a financial portfolio?
- How can the economic and operational volatility of an organization's risk profile be characterized dynamically and intertemporally?
- Are an organization's risk mitigation strategies and methods efficiently matched to its risk profile?
- If an organization changes its operations in a material way, what impact can be visualized across the organization's risk portfolio?
- Given the financial and operational activities of an organization, can an efficient3 risk profile be determined? What trade-offs might be required to achieve an efficient risk profile? Efficiency could be defined as maximizing the contractual financial return relative to the expected utility of risk transferred to a third party. If the trade is equal (in other words, the price of the transference effectively matches the economic dynamics of the risk), then the trade may be considered efficient for both parties.
- If risk retention and risk transfer are considered two independent variables in an organization's risk profile distribution, how can the value of risk retention and risk transfer be maximized throughout an organization's insurance purchasing approach?

The approach to answering these questions is found in a number of mathematical techniques within MPT, notably efficient frontier analysis (EFA), dynamic financial analysis (DFA), capital asset pricing modeling (CAPM), or some other behavioral economic analysis of choice under conditions of information uncertainty. For the purpose of this chapter and its case study, we focus on the use of EFA within an insurance purchasing context.

It is important, however, to point out that some assumptions contained within the original MPT framework have been controversial and have generated a lively debate within the academic and practitioner literature base.4 The key assumptions include:

- The owners of portfolios are exclusively interested in the optimization problem.
- Asset returns are jointly normally distributed and random.
- Expected correlations between assets are fixed and constant without a time frame (in effect, forever).
- All parties to the use or exploitation of the portfolio always maximize economic utility regardless of other information, expectations, or considerations.
- All parties to the portfolio are considered rational and risk-averse.
- All parties to the portfolio's performance have consistent, timely, and identical information at all points in time.
- All parties have the ability to accurately conceptualize and calculate the possible distribution of returns to the portfolio, and these calculations, in fact, match the actual returns of the portfolio.
- The performance of the portfolio is free of tax or transaction costs, and there is no transactional or postreturn friction.
- All parties to the portfolio are considered price takers, and their behaviors and choices do not influence the price market for the portfolio.
- Like the transactional or postreturn friction assumption, capital to invest in the portfolio is free and without an encumbering interest rate.
- A priori risk volatility can be conceptualized, calculated, and known in advance of the portfolio's construction, including asset/investment selection. Also, the portfolio's risk volatility is constant except when significant or material changes to the asset/investment distribution are made.5

For many, the primary criticism of the MPT model and many of its derivative subanalytics is that the assumptions are overly restrictive and do not adequately model real-world markets. Critics view MPT output and/or results as mathematical predictions about the future, because many of the risk distributions, return calculations, and hypothesized correlations contained in the MPT approach are expressed as expected values. Since expected values are themselves statistical distributions, they may be inaccurate due to misspecification or may be subject to the influences of mitigating market information or circumstances. Nonetheless, MPT and the use of EFA represent powerful ways to generate insight into portfolio performance and the prospective efficiencies of individual portfolio components, which is a key step in implementing a strategic risk management philosophy.
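As a concrete sketch of the weighted-combination bookkeeping described above, the expected portfolio return and variance can be computed directly. All weights, returns, and covariances below are invented purely for illustration:

```python
# Sketch of the MPT formulas; every number here is hypothetical.
weights = [0.5, 0.3, 0.2]          # w_i: proportion of each asset, summing to 1
exp_returns = [0.04, 0.07, 0.11]   # E(R_i): expected return of each asset

# Expected portfolio return: E(R_p) = sum_i w_i * E(R_i)
exp_rp = sum(w * r for w, r in zip(weights, exp_returns))

# Portfolio variance needs the pairwise covariances: var = sum_ij w_i * w_j * cov_ij
cov = [[0.010, 0.002, 0.001],
       [0.002, 0.025, 0.004],
       [0.001, 0.004, 0.060]]
var_rp = sum(weights[i] * weights[j] * cov[i][j]
             for i in range(3) for j in range(3))

print(round(exp_rp, 4), round(var_rp, 5))
```

Because the off-diagonal covariances are small, the portfolio variance comes out well below that of the most volatile single asset, which is the diversification effect MPT formalizes.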
PRACTICAL APPLICATIONS OF RISK MEASUREMENT FOR INSURANCE

Now we begin our journey through the practical application of risk theory applied to insurance risk and portfolios. The purpose of the process is to optimize insurance placements and risk limits for a relevant organization. We will start with a basic understanding of the terminology, knowledge, and skills needed for a proper analysis, and then dive into the details and calculations necessary for a robust study. In the end, we will establish that this process can transcend insurance and be used in alternative risk transfer, noninsurance settings.

For the purposes of working through a real-life example, we need to establish insurance equivalents for the portfolio theory formulas. What follows is a list of definitions that we will use throughout this chapter, alongside the equivalent portfolio theory definitions. From the previous section, we bring forward the standard portfolio theory formulas for the optimal return and the portfolio variance using the capital asset line:

    E(rp) = rf + σp [E(rm) - rf] / σm
    σp² = Σi Σj wi wj σij

where rf is the risk-free rate, E(rm) and σm are the expected return and standard deviation of the market portfolio, and σij is the covariance between assets i and j.

Here the expected risk spend on an insurance portfolio, E(rsp), replaces the expected return on an asset, E(rp). The expected risk spend is defined as the expected losses not transferred in the insurance contract plus the costs of the insurance contract. The expected risk spend is based on the insurance contract at hand, and will differ (often significantly) across the different contracts analyzed as part of the study.

The risk-free rate is replaced by an insurance portfolio with no risk transfer (i.e., an uninsured risk line/portfolio). The intent here is to set the steady state at no insurance purchase and determine whether insurance will actually lower the risk to the organization. If it does, then insurance should be purchased. If it does not, insurance should not be purchased.
In other words, on the capital market line, for a given level of risk you want to buy the portfolio with the highest level of return; here, you want to assemble a risk portfolio with the lowest level of losses outside of the insurance contract for a given level of risk. By minimizing the losses, you are maximizing your return. Visually, in our insurance example, you want to pick the bottom of the portfolio efficient frontier rather than a point on the capital asset line as in typical portfolio theory.

Tail value at risk, also known as tail conditional expectation (TCE) or conditional tail expectation (CTE), is a risk measure associated with the more general value at risk. It quantifies the expected value of the loss given that an event outside a given probability level has occurred.6

MODERN PORTFOLIO THEORY (MPT)

Given a portfolio A, one would prefer B, C, or D over A, as shown in Exhibit 25.2.

Exhibit 25.2 MPT Portfolio Preference

EFFICIENT FRONTIER INSURANCE FRAMEWORK

As in MPT, given a portfolio A, one would prefer B, C, or D over A, as shown in Exhibit 25.3. However, notice that the preferred portfolios are now below the stated portfolio, as the preference here is to lower the expected losses and premium dollars spent.

Exhibit 25.3 Efficient Frontier Framework Portfolio Preference

The replacement of the typical finance standard deviation is an important one. In most financial textbooks (and in practical usage), the standard deviation is most often taken from a normal distribution. In our example, we may use any multivariate distribution that is applicable, but for practicality we have chosen closed-form lognormal/Pareto distributions, which are typically used in insurance. We have also made another significant variation in using the tail value at risk (TVaR) instead of the standard deviation.
The intent of this replacement is that most insurance contracts are low-probability contracts, so the standard deviation does not completely describe the use or intent of the contract. By using the tail value at risk, we can focus on the main use of the insurance contract and allow for multiple distribution functions, which will better describe the underlying distribution for its intended use. The given probability of the TVaR calculation is up to the user. We have selected a probability level of 95 percent, meaning that the worst 5 percent of outcomes are averaged to produce the TVaR figure at 95 percent.

The next complexity of selecting a TVaR calculation is that one will almost always be required to run a simulation model to determine the statistic. Only the most simplistic applications allow for a closed-form expression of the measure of volatility. Therefore, we have chosen to use Monte Carlo simulation for our application of the efficient frontier to insurance portfolios. The added benefit of using a simulation model is that we are now free to use multivariate distributions, complex correlations, copulas, and other transformations that may be too complex for most formulaic calculations. It is also important to note that most insurance portfolios contain more than seven to ten different contracts/risks, so modeling is often a required component of any portfolio analysis.

We certainly do not want to gloss over the correlation concerns with insurance contracts, as there are many. It is becoming more common to use copulas (and different versions of copula formulas, for example, a Gaussian copula or a Gumbel copula7) to measure more complex correlations. The choice and use of correlations are critical elements of a proper model and should be reviewed with statisticians or actuaries versed in their use.8 For our purposes, we have assumed no correlations, for the simplicity of the calculations and the translation of the results into knowledge.
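A minimal sketch of the 95 percent TVaR calculation over Monte Carlo output follows. The claim count and severity parameters in the simulation are invented for illustration and are not the case study's figures:

```python
import random

def tvar(losses, p=0.95):
    """Tail value at risk at level p: the average of the worst
    (1 - p) share of simulated outcomes."""
    ordered = sorted(losses)
    tail = ordered[int(p * len(ordered)):]
    return sum(tail) / len(tail)

# Monte Carlo sketch: 10,000 simulated annual aggregate losses from a
# hypothetical line with a fixed count of 8 claims per year and lognormal
# severity (a Poisson frequency could be substituted). Parameters invented.
random.seed(2014)
annual = [sum(random.lognormvariate(10, 1.5) for _ in range(8))
          for _ in range(10_000)]

print(f"TVaR 95%: {tvar(annual):,.0f}")
```

Note that the TVaR is always at least the plain mean of the simulated losses, which is why it is the more informative statistic for low-probability, high-severity contracts.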
It should be noted that TVaR is a simple method for allocating capital for insurance risk. The TVaR demonstrates the level of risk for a given insurance line or contract, and capital can thus be allocated based on that level of risk. Capital allocation theory is beyond the scope of this chapter, as there are many other variations on this theme for allocating capital; it should simply be noted that the next step beyond portfolio optimization is capital allocation.

One immediate question with the introduction of TVaR as a risk measure is: "What is the right level of risk?" Or in simpler terms: "What is the largest loss I am willing to take?" Management should make a conscious decision on the level of risk to take through a formal enterprise risk management program. Risk setting is a critical step in any efficient frontier analysis and should not be overlooked. For our purposes, we have assumed that the organization will seek to minimize risk and minimize the annual costs to the budget (i.e., uninsured losses and insurance costs).

With some liberties taken in applying financial theory to the development of our risk transfer methods, we can now build a framework to analyze risk and optimize risk transfer spends (e.g., insurance purchases). The framework is intended for financial professionals versed in financial theory and its applications. With proper application, many organizations across the world could more efficiently allocate their risk spends and reduce the risk to their balance sheets.

SAMPLE CASE STUDY

Let's start with a practical example of a large corporation with three basic insurance risks: earthquake exposure to buildings, workers' compensation insurance, and general liability insurance. Earthquake risk is defined as the potential for loss to buildings and property from a large earthquake, as well as business interruption following the event.
For our sample company, management has chosen to insure earthquake risk with a policy that covers $25 million in business and personal property with a 5 percent per occurrence retention. Earthquake sprinkler leakage is not covered. For workers' compensation, management has chosen to buy a retention policy with a $1 million per occurrence retention and no upper limitation, as it is a statutorily unlimited coverage. The general liability coverage is represented by a $25 million per occurrence limit and a $250,000 per occurrence retention.

Now that we have the insurance coverage, we can assume the risk of loss for each of the three lines of coverage follows basic loss distributions as follows:

- Earthquake (EQ). Loss frequency has a Poisson distribution with mean λ = 0.1, and severity has a Pareto distribution with parameters θ = 5,000,000, α = 50,000,000.
- Workers' compensation (WC). Loss frequency has a Poisson distribution with mean λ = 50, and severity has a lognormal distribution with parameters μ = 10, σ = 1.5.
- General liability (GL). Loss frequency has a Poisson distribution with mean λ = 10, and severity has a lognormal distribution with parameters μ = 12, σ = 1.0.

Notice that because the retentions are rather large, we are more focused on the tail portion of the loss distributions. We have decided not to use correlations for this example, to allow the reader to more easily follow and replicate the figures. In reality, correlations would be a key input into the model and would help determine the optimal risk transfer structures.

Exhibit 25.4 is a brief summary of the expected losses for the insurance policy and to the corporation below retentions and above insurance limits. The intent of this exhibit is to show the risk profile of the corporation using the assumed distributions listed earlier.
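The frequency/severity distributions above can be sampled with a short simulation. This is a sketch only: the Pareto shape used below is an assumed illustrative value, since Pareto parameterizations differ across texts and libraries and the chapter's stated (θ, α) pair should be mapped carefully to whatever convention your software uses:

```python
import math
import random

def poisson(lam):
    # Knuth's method; adequate for the modest claim-count means used here.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while prod > limit:
        k += 1
        prod *= random.random()
    return k - 1

def pareto(theta, alpha):
    # Inversion of a two-parameter Pareto: F(x) = 1 - (theta / (x + theta))**alpha.
    return theta * ((1.0 - random.random()) ** (-1.0 / alpha) - 1.0)

def annual_aggregate(freq_mean, severity):
    """One simulated year: Poisson claim count with i.i.d. severities."""
    return sum(severity() for _ in range(poisson(freq_mean)))

random.seed(7)
eq = annual_aggregate(0.1, lambda: pareto(5_000_000, 2.0))         # shape 2.0 assumed
wc = annual_aggregate(50,  lambda: random.lognormvariate(10, 1.5))
gl = annual_aggregate(10,  lambda: random.lognormvariate(12, 1.0))
print(f"EQ {eq:,.0f}  WC {wc:,.0f}  GL {gl:,.0f}")
```

Repeating `annual_aggregate` ten thousand times per line produces the simulated loss sets from which the retained losses and TVaR figures in the exhibits can be estimated.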
            Retention    Limit        Current
EQ          5%           $25,000,000  $2,500,501
WC          $1,000,000   Statutory    $3,163,992
GL          $250,000     $25,000,000  $1,597,373
Portfolio                             $7,261,866

Exhibit 25.4 Mean Retained Losses by Line

Note that there are many methods for fitting proper distributions and selecting the parameters to ensure good fits to historical data. Curve fitting is well beyond the scope of this chapter, and we will let the reader peruse other sources for details on loss distribution fitting.

With knowledge of the current risk profile, we can now seek to optimize the portfolio and the insurance purchase by selecting different insurance options for our portfolio. By "options" we mean different risk transfer contracts that can be used to modify the risk profile of the corporation. This can be done by taking a mathematical approach (using increments off of the current program) or by selecting common insurance contract terms known in the insurance marketplace. Exhibits 25.5 and 25.6 list the options under the two different methods.

      Option #1        Option #2        Option #3        Option #4        Option #5
EQ    5% retention     5% retention     5% retention     5% retention     10% retention
      $20M limit       $30M limit       $40M limit       $50M limit       $25M limit
WC    $250K retention  $500K retention  $2M retention    $3M retention    $4M retention
      Statutory limit  Statutory limit  Statutory limit  Statutory limit  Statutory limit
GL    $500K retention  $1M retention    $2M retention    $3M retention    $500K retention
      $25M limit       $25M limit       $25M limit       $25M limit       $30M limit

Exhibit 25.5 Portfolio Options under the Mathematical Approach

As one can see, there is an almost unlimited number of options in the mathematical approach. The possibilities are limited only by your computing power. It should also be noted that the selections for the different options are based on simple increments from the current values. These options may not be available in the insurance marketplace.
This is somewhat intentional, as the goal is to find the optimal mathematical solution and then find the insurance option that gets closest to that optimal solution. The coverage availability approach is shown in Exhibit 25.6.

      Option #1        Option #2         Option #3        Option #4         Option #5
EQ    5% retention     5% retention      5% retention     5% retention      10% retention
      $20M limit       $50M limit*       $75M limit*      $100M limit*      $25M limit
WC    $250K retention  $500K retention   $2M retention    $5M retention*    $10M retention*
      Statutory limit  Statutory limit   Statutory limit  Statutory limit   Statutory limit
GL    $500K retention  $2M retention*    $5M retention*   $10M retention*   $500K retention
      $25M limit       $25M limit        $25M limit       $25M limit        $30M limit

Exhibit 25.6 Portfolio Options under the Coverage Availability Approach

You will notice a subtle change in Exhibit 25.6, indicated by the starred options. The difference here is that we have selected options that can be knowingly purchased in the insurance marketplace. For more historical reasons than anything else, insurance risk transfer has been based around round numbers for retentions and limits. By using these options, we are guaranteeing (assuming the entity is insurable) viable options for the corporation.

Now the mathematicians can begin their number crunching. Using the options from Exhibit 25.5, we can determine the expected risk spend (the expected losses to the corporation, which are the losses below the retentions and above the limits) and the tail value at risk (TVaR) for each option, and then plot them on a graph. We have done this for each line described earlier and combined all the lines into a portfolio. We have assumed no correlations in the portfolio, to keep the mathematics and logic easier for the reader to follow. To obtain Exhibits 25.7 to 25.10, we have run a simulation model using a Monte Carlo simulator. There are various software programs that provide the capability to simulate losses using different distributions.
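The "losses below the retention and above the limit" split can be sketched per occurrence as follows; the $30 million loss in the usage example is a hypothetical figure, not one of the case study's simulated values:

```python
def split_loss(loss, retention, limit=float("inf")):
    """Split one occurrence between the corporation and the insurer.
    The insurer pays the layer above the retention, capped at the limit;
    the corporation keeps the retention plus anything above the limit."""
    insured = min(max(loss - retention, 0.0), limit)
    return loss - insured, insured   # (retained, insured)

# Hypothetical GL occurrence of $30M against a $250K retention / $25M limit:
retained, insured = split_loss(30_000_000, 250_000, 25_000_000)
print(f"retained {retained:,.0f}, insured {insured:,.0f}")
```

Summing the retained amounts across a simulated year, averaging over many simulated years, and adding the option's premium gives that option's expected risk spend; the TVaR is taken over the same retained annual totals.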
Readers may wish to try the parameters within their own software to follow along.

Exhibit 25.7 Earthquake Modeled Options
Exhibit 25.8 Workers' Compensation Modeled Options
Exhibit 25.9 General Liability Modeled Options
Exhibit 25.10 Combined Portfolio Modeled Options

Exhibit 25.11 provides the assumed insurance premiums for each of the mathematical options. In reality, we would work with insurance brokers to obtain insurance quotes for each of the options to arrive at a true market price for each option. The option exists to use an actuarial estimate of premium, but this is not preferred: the market does not always follow actuarial estimates and can often fall to other vagaries of market pricing (underwriting judgment, capital constraints, class restrictions, premium goals, etc.). Therefore, we recommend using different quotes provided by insurance brokers for each option. The given insurance premiums are presented in Exhibit 25.11.

      Current      Option #1    Option #2    Option #3    Option #4    Option #5
EQ    $2,941,765   $2,353,412   $3,618,371   $4,942,165   $6,008,556   $2,941,765
WC    $288,796     $1,098,994   $607,957     $116,861     $64,051      $40,630
GL    $1,359,385   $696,302     $261,277     $68,436      $26,041      $696,302

Exhibit 25.11 Given Insurance Premiums

Now with the options plotted (using our modeled losses, TVaR, and insurance premiums), we have created an efficient frontier and can determine the best option for a given level of risk. Ideally, we would select more than five options, and the options would be more complex. The beauty of the process is that it can be as simple or as complex as one desires. The process is flexible enough to handle different risk measures (not just TVaR) and can optimize different costs of risk (losses, insurance spend, internal costs, etc.). It is also important to have an enterprise understanding of our risk appetite and tolerance.
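Selecting the best option for a given level of risk can be sketched as filtering by the risk appetite, dropping dominated options, and taking the cheapest remaining choice. The (spend, TVaR) pairs below are invented figures, only loosely shaped like the case study's portfolio options:

```python
def pick_option(options, appetite_tvar):
    """options: name -> (expected_risk_spend, tvar_95). Keep options whose
    TVaR sits within the risk appetite, drop those dominated by another
    option that is at least as good on both axes, and return the efficient
    option with the lowest expected risk spend."""
    inside = {n: st for n, st in options.items() if st[1] <= appetite_tvar}
    def dominated(name):
        s, t = inside[name]
        return any(m != name and inside[m][0] <= s and inside[m][1] <= t
                   and inside[m] != (s, t) for m in inside)
    frontier = {n: inside[n] for n in inside if not dominated(n)}
    return min(frontier, key=lambda n: frontier[n][0])

# Invented (expected risk spend, TVaR) pairs for five hypothetical options:
options = {
    "Option #1": (6_500_000, 38_000_000),
    "Option #2": (7_200_000, 27_000_000),
    "Option #3": (8_100_000, 21_000_000),
    "Option #4": (9_400_000, 17_500_000),
    "Option #5": (9_900_000, 19_000_000),
}
print(pick_option(options, appetite_tvar=20_000_000))
```

With a tighter appetite, fewer options survive the filter and the trade-off between premium spend and tail risk becomes explicit, which is exactly the discussion the frontier is meant to put in front of management.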
By having a formal statement of risk appetite, we can use that knowledge in the proper selection of the options on our efficient frontier.

CASE STUDY GENERAL FINDINGS

Using the same charts as previously, we can make a few judgments about the options presented. For this example, let's assume the company does not want to lose more than $20 million in a fiscal period. This would be considered its risk appetite and is roughly equivalent to maximizing utility for a corporation. By selecting a program that puts $20 million or more at risk, there is potential for breaching that corporate goal. Note that the models assume that insurance is recoverable for the risk analyzed. This may not always be the case, so it is important to review coverage, ensure that the model is reflective of the coverage provided, and review the insurance carrier's ability to pay. The numbers and options have been chosen to reflect realistic scenarios. The results are typical of what we see in the insurance and corporate landscape.

Findings on the earthquake simulation are (see Exhibit 25.12):

- We have a wide variety of options and a wide variety of risk levels. The slope of the efficient frontier is very steep as a result.
- The options all lie close to the frontier, resulting in many efficient options.
- If the organization is using a risk appetite for only earthquake risks, then it would look at the efficient frontier below the $20 million tail value at risk level. (Options #3 and #4 qualify.)

Exhibit 25.12 Efficient Frontier on Earthquake Options

Findings on the workers' compensation simulation are (see Exhibit 25.13):

- We have a similar wide variety of options, but a much tighter range of risk levels. The slope of the efficient frontier is very shallow as a result.
- The options all lie close to the frontier, resulting in many efficient options.
- If the organization is using a risk appetite for only workers' compensation, then it would look at the efficient frontier below the $20 million tail value at risk level. (All options qualify.)
- Because workers' compensation risks are relatively stable, the model has only modest differences between options, and all options are reasonable. To change the options to give a greater range of results, one could be more extreme on the options (assuming the insurance market is willing to provide such options to the corporation).

Exhibit 25.13 Efficient Frontier on Workers' Compensation Options

Findings on the general liability simulation are (see Exhibit 25.14):

- We have a similar wide variety of options, and a modest range of risk levels. The slope of the efficient frontier is shallow as a result.
- The options all lie close to the frontier, resulting in many efficient options.
- If the organization is using a risk appetite for only general liability, then it would look at the efficient frontier below the $20 million tail value at risk level. (All options qualify.)
- Similarly to workers' compensation, different options can be substituted here for a wider range of outcomes.

Exhibit 25.14 Efficient Frontier on General Liability Options

The portfolio shown in Exhibit 25.15 is simply the annual events for all three lines added together, again with no correlation assumptions (i.e., independence). Portfolio option #1 is the sum of each line's Option #1, with no aggregate insurance limitations assumed. The framework certainly allows for aggregations and correlations; we have not provided them here for simplicity.

Findings on the portfolio simulation are (see Exhibit 25.15):

- The portfolios no longer follow the efficient frontier, as some of the options lie considerably above the efficient frontier line.
- The slope of the efficient frontier is somewhat steep, and follows the risks that contribute to the portfolio (earthquake in this instance is driving the steep curve).
- If the organization is using a risk appetite for the entire portfolio, then it would look at the efficient frontier below the $20 million tail value at risk level. (Only Option #4 qualifies.)

Exhibit 25.15 Efficient Frontier on the Combined Portfolio Options

We can now see how the efficient frontier insurance framework utilizes the information provided, combines a complex set of insurance structures, and uses a risk appetite to select the best portfolio option. This framework facilitates a company's ability to make fact-based decisions, using real-time information. The organization no longer has to wonder if it is getting the best deal or if there were other options that might have provided a better bang for its buck.

INTENDED USES FOR OUR APPROACH

It is important to note that this framework, as all others, has limitations in its use. The intended purpose for this framework is to help large corporate organizations with their risk management process and portfolio management. The framework is robust enough to handle both insurance risk and noninsurance risk. It is best used within an established enterprise risk management discipline. The following is a brief description of the benefits of an ERM strategy and how our framework fits within those benefits, which is important for understanding the full potential of its use. We have referenced James Lam's (2003) benefits, as they are excellent. The four benefits of risk management as defined by James Lam9 are:

1. Managing risk is management's job.
2. Managing risk can reduce earnings volatility.
3. Managing risk can maximize shareholders' value.
4. Risk management promotes job and financial security.

In item 1, Lam indicates that management has access to critical information about the business and therefore has a duty to use it to manage risk. We agree wholeheartedly with his assessment, and our process is intended to improve senior leaders' understanding of risk and give them more transparency in managing costs.
In item 2, Lam indicates that top-tier companies better manage their earnings volatility through risk management activities. Too often, firms do not consider risk management at all or relegate it to a small back-room activity, overlooking the value that can be gained by minimizing volatility on major risks to the organization. By taking a more in-depth look at the portfolio of risk through the efficient frontier and making more data-driven decisions, volatility can be reduced.

In item 3, Lam indicates that firms can increase shareholder value by 20 to 30 percent or more by identifying opportunities for risk management and business optimization through a risk-based program.10 This goes beyond managing volatility and extends to a better-performing business model, with more accurate information spread across the organization. Using risk-based measures is a critical element of any risk measurement department, and components like the efficient frontier require wide distribution and use; otherwise they do not get the full attention they deserve. For the real company this framework was modeled after, the efficient frontier was sent directly to the business leaders, who became owners of the risks for their particular areas of influence. They had to learn the language of risk, and through a diverse corporate program they are now using the risk assessments as part of their daily routines, leading to a better understanding of risk for the business leaders and more accurate information for the risk management team.

When implementing this framework at different companies, we often hear something to the effect of "What's in it for me?" This question really comes down to job and financial security for individuals, as noted in item 4. A truly robust framework should allow for better risk taking, as the guidelines have been set and approved by management.
With a data-first strategy there should be less concern over losing one's job, as long as the risk is within the tolerances set by management. Thus, when a calculated risk does materialize, the organization is ready to respond. All too often the opposite is true, and a surprise event leads to the ouster of a senior leader. We believe that our framework will help provide senior leaders with the information they need to take calculated risks and therefore preserve their livelihoods, regardless of their golden parachutes.

It is inherently assumed that the lines of insurance or risk transfer can be modeled appropriately. This is not an insignificant assumption, as data limitations, information asymmetry, internal disputes, and plain modeling foibles can easily derail the best intentions of the framework. To combat these issues, it is important to stress test any model, back-test it if possible, involve different business leaders to vet its results, and use independent experts to question and test its assumptions. Any model is only as good as its creators, so it is advised to hire the best and then "trust but verify."

MODERN PORTFOLIO CONCERNS CONTAINED IN THE FRAMEWORK

There are several shortcomings of modern portfolio theory that we should address as they relate to our framework, represented in the following MPT assumptions:11

- Asset returns are (jointly) normally distributed random variables.
- Correlations between assets are fixed and constant forever.
- All investors have access to the same information at the same time.
- All securities can be divided into parcels of any size.
- Risk and volatility of an asset are known in advance and are constant.

To address the first point, we have already discussed our use of nonnormal distributions, and we feel the framework is robust enough to handle any variation of distributions that a modeler feels is appropriate.
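The first point can be illustrated with a short simulation. This is our own sketch with arbitrary parameters, not anything from the chapter's model: a heavy-tailed lognormal and a normal distribution matched to the same mean and standard deviation produce very different extreme-loss percentiles, which is why the choice of distribution matters for tail risk.

```python
import math
import random

random.seed(1)
N = 100_000

# A lognormal with parameters mu, sigma has mean exp(mu + sigma^2 / 2).
mu, sigma = 0.0, 1.0
lognormal = [random.lognormvariate(mu, sigma) for _ in range(N)]

# A normal sample matched to the lognormal's true mean and std dev.
ln_mean = math.exp(mu + sigma**2 / 2)
ln_std = math.sqrt((math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2))
normal = [random.gauss(ln_mean, ln_std) for _ in range(N)]

def var(losses, level=0.99):
    """Empirical value at risk at `level` (e.g., the 99th-percentile loss)."""
    return sorted(losses)[int(level * len(losses))]

# The lognormal's 99th percentile sits well above the matched normal's,
# so a normal model would understate the capital needed for tail events.
heavier_tail = var(lognormal) > var(normal)
```

Matching the first two moments and still getting a materially different 99th percentile is exactly the failure mode of the normality assumption: two models can agree on mean and volatility yet disagree badly about the tail.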
In postmodern portfolio management, the use of normal distributions has likewise been relaxed for similar reasons, so this is less of a concern than originally stated.

Correlations are clearly not constant or fixed, and moreover they are hard to measure without good historical data. The modeler will often make assumptions about correlations and use copulas to simulate different relationships between variables at different points of a distribution. Modern computing power has clearly allowed us to use correlations in a much different way than in the past. This flexibility, however, is not always a good thing: because correlations are often a modeler's assumption, their use and selection should be highly scrutinized.

In insurance, the market is very far from what one would call efficient. Stock exchanges have clearinghouses and information services to provide up-to-date information, and even then the market is not truly efficient. In insurance, the pricing of different contracts suffers from serious information asymmetry and is fraught with poor information, as the data and pricing start with an actuary in a corporate insurance company, are then translated by an underwriter, and are then ignored by sales professionals (only slight exaggerations involved). This lack of an efficient market is what makes our risk framework so critical. Without it, the insurance buyer has little chance of getting the best deal.

Our framework does have an issue with the ability to fractionalize options and to get the insurance market to respond to all potential mathematical pricing options. This can happen for a variety of reasons: internal restrictions, lack of proper information, risk limits, reinsurance requirements, and so on. The framework can, however, lead insurance markets toward more optimal insurance contracts.
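As a brief aside, the copula technique mentioned above can be sketched in a few lines. This is our own toy illustration, not the authors' model: a Gaussian copula ties two exponential loss severities together through an assumed correlation, and every parameter value here (the correlation, the $5 million and $8 million means, the line names) is hypothetical.

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlated_losses(n, rho, mean_a, mean_b, seed=0):
    """Draw n pairs of losses with exponential marginals, coupled by a
    Gaussian copula with correlation rho (a modeling assumption)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        # Correlated standard normals via a 2x2 Cholesky factor.
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        # Map to uniforms, then through each marginal's inverse CDF.
        u1, u2 = normal_cdf(z1), normal_cdf(z2)
        pairs.append((-mean_a * math.log(1 - u1),
                      -mean_b * math.log(1 - u2)))
    return pairs

# Hypothetical example: two lines of coverage with mean annual losses of
# $5M and $8M, assumed 60% correlated in the Gaussian copula.
sample = correlated_losses(10_000, rho=0.6,
                           mean_a=5_000_000, mean_b=8_000_000)
```

In practice the marginals, and especially the assumed rho, deserve the scrutiny described above: the copula correlation is a modeler's choice rather than an observed quantity.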
So even if an option is not technically available, the closest option available in the marketplace can be substituted in similar fashion.

In insurance, especially for large corporations, the party who controls the information can hold a competitive advantage. Both parties to a transaction (corporation and insurance company) have pieces of the puzzle in determining the true risk exposure for the corporation. The insurance company has a significantly larger database of similar risks, while the corporation has data specific to its own risk profile and a much better understanding of how that profile is changing. All of this means that the underlying risk is clearly not constant and is difficult to predict. Thankfully, optimizing a risk portfolio does not require perfect information, only relative accuracy and reasonable assumptions about the information that is not available.

In our framework, we are not fully constrained by the limitations of modern portfolio theory, as we are not developing a theory but rather a practical modeling application. We also have access to greater computing power than ever before, which allows the relaxation of many of the constraints presented earlier in this chapter. We believe that we have addressed the major concerns of modern portfolio theory and its application to insurance, but we will leave that conclusion fully up to the reader.

CONSIDERATION OF BEHAVIORAL CONCERNS IN STRUCTURE

A commonly stated concern with efficient frontier theory is that it breaks down due to behavioral issues among market participants. Participants do not always maximize utility, information is not always readily available, and people do not always make decisions based solely on the means and standard deviations of returns.12 Because of these concerns, it is necessary to discuss the behavioral implications for our framework.
We start with the definition of common behavioral errors associated with information processing and then move on to the specific types of errors. Definition: "Information processing—errors in information processing can lead investors to misestimate the true probabilities of possible events or associated rates of return."13 The different types of information processing errors are:

- Forecasting errors
- Overconfidence
- Conservatism
- Sample size neglect and representativeness14

People often have problems forecasting the future. The most typical concern is using only the most recent information to forecast the future. As risk professionals, we see this every day: everyone thinks that the most recent years of information are the best and most reliable. In reality, forecasting is much more complex than that. In our model, we rely on forecasting techniques but concentrate on methods that use a minimum of five years of information, and often 10 years or more if available. This reduces forecasting errors and relies on data-driven methods, which are more consistent than human forecasts.

Overconfidence is another common behavioral trait that is difficult to overcome. People often believe they forecast better than they actually do and are often unwilling to recognize that blind spot. This is where a robust process and several independent experts can reduce the bias that comes from overconfidence. Any one person can have his or her own biases, even an expert, so involving a team of experts and a process to reduce bias is critical to getting a more accurate estimate of risk.

Sometimes a process or framework can be too slow to react to new information. A slow response often occurs in insurance when there is an unrecognized change in a company's risk profile. The client history and the industry data are naturally slow to reflect trends, and large volumes of data are required to finally identify new information.
This phenomenon is the counterbalance to being too fast to react. The conservatism bias is best handled by involving business experts in the process to question and comment on changes in the business and to reach a common understanding of how those changes are reflected in the modeling work.

Sample size bias is usually handled well by expert modelers. They understand that small samples are less credible than large ones and therefore provide less usable information within a forecast. This can be difficult to communicate, however, so communicating the biases of sample size neglect and representativeness is just as important as recognizing them.

We next consider behavioral biases. It has been stated that "Behavioral biases largely affect how investors frame questions of risk versus return, and therefore make risk-return trade-offs."15 The main types of behavioral biases are:

- Framing
- Mental accounting
- Regret avoidance
- Prospect theory16

Framing is the way a question about risk is posed. The question can be posed as "Will you lose $50 million under a worst-case scenario?" or as "Will you stand to make $5 million on an expected basis under the same scenario?" Different framings can elicit different responses, even from seemingly rational people. Our approach to framing is to include the positive and the negative, as well as several other scenarios, to provide a range of responses. This can be information overload at first, but once the framework is understood, it provides the key information needed to avoid the framing bias.

People often segregate risks based on a particular belief or internal structure within an organization, saying it is fine to take risk in one particular area but not in another. This is called mental accounting. Organizations are plagued with mental accounting, as different divisions, regions, locations, and layers of management all create some level of mental accounting for an organization.
The only way to minimize this bias is to have the C-level executives dictate the level of risk the organization will adhere to; otherwise the line-level managers will all view risk through their own lenses. Consultants can often point out this bias within a company, but a company that is not already aware of it can fail to use any risk framework appropriately.

Another large corporate risk is regret avoidance: losing a bet on a scenario with long odds is more painful than losing the same amount on a bet with a better expected outcome. This is illustrated in the saying "No one ever got fired for hiring IBM." Large corporations have different cultures and approaches to this bias; some companies in Silicon Valley make an extra effort to avoid it and to create a risk-taking culture. Either way, it is a concern for our analysis: any option we present, no matter how risk reducing for the organization, may look suboptimal relative to the current one because of these behavioral biases.

Prospect theory does not apply as well in a corporate environment as in a personal one. In prospect theory, what matters is the change in wealth from one's current wealth, not absolute wealth. In an organization, each employee has his or her own "wealth" and limited access to company funds, so a change in the company's wealth is often not felt by the employee; there is a disengagement from the wealth of the corporation. This does not mean there is no level of this bias in the corporation.

As we have shown, there are several behavioral considerations to make in any risk framework. We have tried to comment on how we address those concerns, but we are sure there are many other successful ways to handle these biases. The key is to be aware of the biases and to make sure the organization addresses them as part of its enterprise risk management program.
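Before turning to the end-of-chapter questions, the quantitative core of the framework (simulate annual outcomes for each insurance option, compute a tail value at risk, and screen against a risk appetite) can be sketched in miniature. This is our own illustration: the lognormal severity, the retention and premium figures, the option names, and the $20 million appetite are stand-in assumptions, not the authors' actual model or data.

```python
import random

def tail_value_at_risk(losses, level=0.95):
    """Average of simulated annual costs in the worst (1 - level) tail."""
    ordered = sorted(losses)
    tail = ordered[int(level * len(ordered)):]
    return sum(tail) / len(tail)

random.seed(7)
RISK_APPETITE = 20_000_000  # hypothetical $20M TVaR limit set by management

# Hypothetical options: each retains losses up to a per-occurrence limit,
# paying a premium for the risk transferred above it.
options = {
    "option_1": {"retention": 25_000_000, "premium": 1_000_000},
    "option_4": {"retention": 5_000_000, "premium": 4_000_000},
}

results = {}
for name, opt in options.items():
    annual_costs = []
    for _ in range(5_000):
        gross = random.lognormvariate(16, 1.2)   # simulated gross annual loss
        retained = min(gross, opt["retention"])  # the insurer pays the rest
        annual_costs.append(retained + opt["premium"])
    results[name] = {
        "expected_cost": sum(annual_costs) / len(annual_costs),
        "tvar_95": tail_value_at_risk(annual_costs),
    }

# Only options whose tail risk sits within the appetite are eligible;
# among those, the lowest expected cost would be preferred.
eligible = [n for n, r in results.items() if r["tvar_95"] <= RISK_APPETITE]
```

With these stand-in figures, only the low-retention option keeps tail risk inside the appetite, mirroring the chapter's finding that only option #4 qualified at the portfolio level.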
QUESTIONS

1. How does efficient frontier analysis differ from other forms of complex risk assessment techniques?
2. What limitations might an analyst encounter through the use of efficient frontier analysis?
3. How can efficient frontier analysis results be communicated and utilized with nonmathematical decision makers?

ACKNOWLEDGMENTS

Special recognition is given to the following editors of this chapter: Jillian Hagan, FCAS; Virginia Jones, ACAS; and Betty Simkins, PhD.

NOTES

1. "RIMS Strategic Risk Management Implementation Guide," 2013.
2. "Details of Risk Appetite and Tolerance."
3. We have defined efficient to mean the maximum return on investment for keeping risk or transferring risk to a third party.
4. Milan Vaclavik and Josef Jablonsky, "Revisions of Modern Portfolio Theory Optimization Model," 2011.
5. Jerry A. Miccolis and Marina Goodman, "Next Generation Investment Risk Management: Putting the 'Modern' Back in Modern Portfolio Theory," Journal of Financial Planning, January 2012.
6. Ibid.
7. Ibid.; Zvi Bodie, Alex Kane, and Alan Marcus, Investments, 8th ed. (New York: McGraw-Hill, 2008).
8. For reference, a good article on copulas is available on the CAS website.
9. James Lam, "Enterprise Risk Management from Controls to Incentives," 6–9.
10. Ibid., 8.
11. Miccolis and Goodman, "Next Generation Investment Risk Management," 2012.
12. Zvi Bodie, Alex Kane, and Alan Marcus, Investments, 8th ed. (New York: McGraw-Hill, 2008), 385.
13. Ibid.
14. Ibid., 386.
15. Ibid., 387.
16. Ibid., 387–388.

REFERENCES

Bodie, Zvi, Alex Kane, and Alan Marcus. 2008. Investments. 8th edition. New York: McGraw-Hill.
"RIMS Strategic Risk Management Implementation Guide." 2013.
"Managed Futures—Reducing Portfolio Volatility, A Look into the Top 3 Managed Futures Accounts Worldwide." 2011, March 19.
Markowitz, H. M. 1952. "Portfolio Selection." Journal of Finance 7:1, 77–91.
Markowitz, H. M. 1959. Portfolio Selection: Efficient Diversification of Investments. New York: John Wiley & Sons; reprinted by Yale University Press, 1970.
Merton, Robert. 1972. "An Analytical Derivation of the Efficient Frontier." Journal of Financial and Quantitative Analysis 7, September.
Miccolis, Jerry A., and Marina Goodman. 2012. "Next Generation Investment Risk Management: Putting the 'Modern' Back in Modern Portfolio Theory." Journal of Financial Planning, January.
Lam, James. 2003. Enterprise Risk Management from Controls to Incentives. Hoboken, NJ: John Wiley & Sons.
Taleb, Nassim Nicholas. 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House.
Vaclavik, Milan, and Josef Jablonsky. 2011. "Revisions of Modern Portfolio Theory Optimization Model."

ABOUT THE CONTRIBUTORS

Ward Ching is Vice President, Risk Management Operations, at Safeway Inc., located in Pleasanton, California. His responsibilities include enterprise risk management, integrated risk finance, hazard loss control, environmental compliance, property risk control/engineering, and a variety of retail, distribution, and manufacturing risk management initiatives, including Safeway's Culture of Safety. Prior to joining Safeway, he was a principal at Towers Perrin and a managing director at Marsh. He completed his undergraduate and graduate degrees in international relations and economics at the University of Southern California, and has taught and written extensively on the subjects of international relations, game theoretical applications in foreign policy, and enterprise risk management.

Loren Nickel, FCAS, CFA, MAAA, is the Regional Director and Actuary for the Northwest Region (Seattle, San Francisco, and Los Angeles) and National Leader for Operational Risk for Aon Global Risk Consulting. He is responsible for providing clients with actuarial support as well as a variety of financial and tailored risk services. His work includes pricing, reserving, profitability studies, retention studies, dynamic financial analysis, and captive analysis for all major lines of insurance.
He provides professional actuarial opinions as well as a variety of innovative risk solutions.
