DETAILED ACTION
This communication is a Non-Final Office Action rejection on the merits. Claims 1, 7, 9, 11, and 13-22 are currently pending and have been addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06/04/25 has been entered.
Response to Arguments
Applicant's arguments filed on 06/04/25 (related to the 112 rejection) have been fully considered and are persuasive. Applicant clarified that the processing circuitry is configured to scan the liability system for predefined measurement parameters capturing dynamic characteristics of at least one liability risk driver of the parameterized liability risk drivers. Therefore, the 112 rejection has been withdrawn.
Applicant's arguments filed on 06/04/25 (related to the 101 rejection) have been fully considered but they are not persuasive.
Applicant states, on pages 13-16, that the system has an improved capability to capture the external and/or internal factors that affect liability exposure, while keeping the used trigger techniques transparent. The system captures complex, interdependently occurring patterns of said factors, and captures possible correlations at the same time. These concepts are discussed in at least paragraphs [0011] and [0014] of U.S. Publication No. 2021/0012256. Moreover, paragraph [0017] sets forth a further advantage that the measure parameters and/or liability risk drivers can automatically be weighted, which allows a further self-adaptation of the system. This feature is now more clearly recited in the independent claims. Moreover, one of the main advantages lies in the technical purpose of making sure that liability risk drivers can be meaningfully compared to one another on a technical basis.
Examiner respectfully disagrees with Applicant's arguments; the amended claims do not overcome the 101 rejection. The amended limitations, in their respective claims, are directed, in part, to systems and methods for tracking, parametrizing, modeling, and forecasting developments of liability loss measures. These claim elements are considered to be abstract ideas because they are directed to certain methods of organizing human activity, which include mitigating risk. In this case, predicting a liability loss is a form of mitigating risk because it allows the system/method to determine which economic risk factors contribute to the predicted loss. Therefore, insurance companies can minimize risks of a loss from fluctuations in the economy by adjusting their premiums or transferring the risk to another insurer. Also, the limitation of “performing a weighting process of the selected liability risk drivers by applying a defined transformation to normalize the selected liability risk drivers to the final time series of the parameters” is directed to mathematical calculations. If a claim limitation, under its broadest reasonable interpretation, covers mitigating risk and/or mathematical calculations, then it falls within the “certain methods of organizing human activity” and/or “mathematical concepts” grouping of abstract ideas.
The additional elements recited in claim 1 are merely used to: collect data (e.g., measurement parameters), analyze the data (e.g., determine liability risk drivers using a known technique), and display certain results of the collection and analysis (e.g., provide the minimum number of liability risk drivers). Those are functions that the courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)). Further, the limitations of “automatically selecting ...,” “adapting dynamically the minimum number of liability risk drivers…,” “generating the time-dependent composite index parameter based on the adapted reduced set of liability risk drivers…,” and “an automated risk transfer adapted based on time-dependent fluctuations” are considered conventional computer functions, as they merely perform repetitive calculations and transfer/transmit the risk (MPEP 2106.05(d), recomputing values/drivers and transmitting data). Also, adding a final step of “automatically transferring/transmitting the risk” does not add any meaningful limitations (MPEP 2106.05(g), insignificant extra-solution activity).
Lastly, the claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself. Thus, the claim is not patent eligible.
Independent claim 22 recites similar features as independent claim 1. Claims 7, 9, 11, and 13-21 are rejected for having the same deficiencies as those set forth with respect to independent claim 1, the claim from which they depend.
Applicant's arguments filed on 06/04/25 (related to the 103 rejection) have been fully considered but are not persuasive.
Applicant states, on pages 16-20, that the applied references are not measurement methods in the sense of the presently claimed subject matter, but simulation methods. All references are essentially what is called scenario-based. Scenarios are not the physical world but are models. In particular, they do not propose a dynamically adjusted measurement method but the use of fixed simulation modelling. Further, none of the art proposes varying the measuring approach, i.e., the measured risk drivers and the accordingly processed data versus the resulting efficiency, in order to optimize the data processing performance of the system, i.e., to "provide an improved, more accurate and more efficient automated decoding and measuring" and an "improved capability to capture the external and/or internal factors that affect liability exposure, while keeping the used trigger techniques transparent" (see paragraph [0014]).
Examiner respectfully disagrees with Applicant. Vlasimsky discloses optimizing the accuracy of the model using common insurance industry metrics; in doing so, the technology ensures that the model is neither over-fit nor under-fit. With a built-in ability to reduce the number of dimensions, the platform condenses the risk factors (dimensions) being evaluated to the few that are truly predictive (Paragraph 0068, risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Paragraph 0080, weight or emphasize the variables and algorithms that in combination yield the highest predictive value). A large number of parameters is thus not required to adjust the complexity of the model, thereby insulating the user from having to adjust a multitude of parameters to arrive at a suitable model (Paragraph 0014). Therefore, Examiner notes that the method disclosed by Vlasimsky uses a known optimization technique to select the most significant parameters and not a simulation technique (e.g., the method described by Vlasimsky is known as a stepwise selection algorithm). Also, Vlasimsky discloses a dynamically adjusted measurement method since it can continuously assess incoming data and the predictive accuracy of the model. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model (Paragraph 0146). Therefore, Vlasimsky discloses to “continuously select a suitable minimum number of risk factors/drivers” since the risk factors/drivers may be adjusted over time against a maximum significance (see at least Paragraph 0013, condenses the risk factors being evaluated to the few that are truly predictive; Paragraph 0067, selects the most statistically significant fields; Paragraph 0146, continuously assess incoming data and the predictive accuracy of the model).
Although Vlasimsky Richard et al. discloses to select a minimum number of liability risk factors/drivers in relation to maximized statistical significance (Paragraph 0068, risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Examiner notes that the technique taught by Vlasimsky Richard et al. is known as a stepwise selection algorithm), Vlasimsky Richard et al. does not specifically disclose wherein the algorithm used to select the minimum number of liability risk drivers in relation to maximized statistical significance is an Akaike Information Criterion (AIC).
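For illustration only, the following is a minimal sketch of a generic forward stepwise (greedy) selection of the kind the Examiner attributes to Vlasimsky; the names candidate_factors, fit_model, and score_model are hypothetical and are not code from the reference:

```python
# Illustrative sketch: greedily add the risk factor that most improves
# held-out predictive accuracy, and stop once accuracy begins to decline
# (i.e., the model starts to over-fit). All names are hypothetical.

def forward_stepwise(candidate_factors, fit_model, score_model):
    """fit_model(factors) -> model; score_model(model) -> held-out accuracy."""
    selected = []
    remaining = list(candidate_factors)
    best_score = float("-inf")
    while remaining:
        # Evaluate each remaining factor added to the current set.
        scores = {f: score_model(fit_model(selected + [f])) for f in remaining}
        best_factor = max(scores, key=scores.get)
        if scores[best_factor] <= best_score:
            break  # accuracy declined: adding more factors over-fits the model
        selected.append(best_factor)
        remaining.remove(best_factor)
        best_score = scores[best_factor]
    return selected  # reduced set of the most predictive risk factors
```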
However, Lindsey discloses to select a minimum number of liability risk factors/drivers in relation to maximized statistical significance using a grid (Page 661, Best Subsets). In this case, the selected model is the model that has the smallest AIC. Also, the grid is the table that includes the different models with their respective R2 or AIC (Page 661, Best Subsets; Examiner notes that the table provided on Page 661 is the same as the table presented in Figure 11 of Applicant’s specification, number of factors/drivers used in the analysis evaluated against an AIC or R2). As known by one of ordinary skill in the art, the model with the smallest AIC is the model that optimizes the factors/drivers against a maximum significance. Also, the adjusted R2 and the corrected AIC (e.g., AdjR2 and AICc) are suggested as alternative criteria since they evaluate the number of factors/drivers in the regression model against a maximum significance, which avoids overfitting the model (see conclusion, Al-Subaihi reference). Therefore, Lindsey improves over Vlasimsky by further displaying, using a grid, the selected minimum number of factors/drivers in relation to the maximized statistical significance by applying R2 maximization compared to a number of selected liability risk drivers. Also, this is merely a simple substitution of a known technique used to select risk factors/drivers (e.g., a stepwise selection algorithm) for another known technique used to select risk factors/drivers (e.g., AIC). The simple substitution of one known technique for another producing a predictable result renders the claim obvious.
[media_image1.png (Greyscale): Best Subsets table reproduced from Lindsey, Page 661.]
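For illustration only, a minimal sketch of best-subsets selection by AIC in the spirit of the Lindsey/Sheather table: every candidate subset of risk drivers is fitted and tabulated, and the subset with the smallest AIC is chosen. The variable names are hypothetical; this is not code from Lindsey or from the application:

```python
# Illustrative sketch of best-subsets selection by AIC. Assumes an ordinary
# least squares fit; the AIC is computed up to an additive constant.
import itertools
import numpy as np

def aic_ols(y, X):
    """AIC of an OLS fit, up to an additive constant: n*ln(RSS/n) + 2k,
    where k is the number of fitted coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def best_subset_by_aic(y, X, names):
    """Tabulate AIC for every non-empty column subset and return the best."""
    table = []
    for size in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), size):
            table.append((aic_ols(y, X[:, list(cols)]),
                          [names[c] for c in cols]))
    return min(table)  # (smallest AIC, corresponding set of risk drivers)
```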
Although Vlasimsky Richard et al. discloses to track significant changes to the incoming data and a need to access the model for purposes of updating or tuning the model when the data has changed (Paragraph 0146), Vlasimsky Richard et al. does not specifically disclose wherein the model is automatically updated in response to a trigger.
However, Salghetti et al. discloses to dynamically adapt the liability risk drivers in response to a trigger condition (Paragraph 0139, dynamically adapt the set 16 of liability risk drivers 311-313 varying the liability risk drivers 311-313 in relation to the measured liability exposure signal). Therefore, Salghetti et al. improves over Vlasimsky Richard et al. and Lindsey by automatically updating the model in response to a trigger.
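For illustration only, a minimal sketch of a threshold-triggered adaptation of this kind: when the deviation between the driver-derived exposure and the measured exposure exceeds a tolerance, the driver set is re-selected from saved historic data. All names (derive_exposure, reselect_drivers, the THRESHOLD value) are hypothetical and not taken from Salghetti et al.:

```python
# Illustrative sketch of a trigger condition for dynamic driver adaptation.

THRESHOLD = 0.05  # assumed tolerance on the relative exposure deviation

def maybe_adapt_drivers(drivers, measured_exposure, historic_data,
                        derive_exposure, reselect_drivers):
    """Compare the driver-derived exposure to the measured exposure and
    re-select the drivers from saved historic data if the trigger fires."""
    predicted = derive_exposure(drivers)
    deviation = abs(predicted - measured_exposure) / max(abs(measured_exposure), 1e-9)
    if deviation > THRESHOLD:              # trigger condition met
        return reselect_drivers(historic_data)
    return drivers                         # no trigger: keep the current set
```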
Independent claim 22 recites similar features as independent claim 1. Claims 7, 9, 11, and 13-21 are rejected for having the same deficiencies as those set forth with respect to independent claim 1, the claim from which they depend.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 7, 9, 11, and 13-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.
Independent Claim 1
Step One - Pursuant to Step 1 of the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to an apparatus, which is a statutory category.
Step 2A, Prong One - Claim 1 recites: An automated system steering a liability risk-driven interaction between an automated risk-transfer system and an automated operating device operating on a liability system having at least one measurable liability exposure, based on measuring and dynamically parametrizing a liability dependent measure for the liability system by providing an accurate liability risk measure based on a time-dependent, composite index parameter, wherein a measured value of a composite index is a measure of a change of liability-dependent economic measuring factors of regionally defined systems extracted from measurement parameters providing a measuring system with technical-based measurement of drivers, wherein the measurement parameters assigned to parameterized liability risk drivers are measured and transmitted to a central processing device of the electronic system for generating the measured time-dependent, composite index parameter, the electronic system comprising: processing circuitry configured to scan the liability system for predefined measurement parameters capturing dynamic characteristics of at least one liability risk driver of the parameterized liability risk drivers, and identify and mark liability risk drivers, the identified and marked liability risk driver either contracting or expanding a measured liability risk exposure, wherein in case the identified liability risk drivers expand the measured liability risk exposure, additional liability risk drivers are selected until the measured liability risk exposure is contracting due to the additional risk drivers, and mutual normalization of the liability risk drivers is initiated by the electronic system, and wherein in case the identified liability risk drivers contract the measured liability risk exposure, mutual normalization of the liability risk drivers is directly initiated by the electronic system, select a first set of liability risk drivers by parametrizing an economic-based contribution to a general liability exposure loss, wherein the first set of liability risk drivers at least comprises a risk driver parametrizing Gross Domestic Product (GDP) growth, a risk driver parametrizing healthcare expenditure growth, and a risk driver parametrizing real wage growth based on an impact of captured alterations to the variable composite index parameter, the liability risk drivers being mutually normalized to each other, and select the additional liability risk drivers by parametrizing at least societal and/or legal and/or political based contributions to the general liability exposure loss, dynamically apply the additional liability risk drivers based on their impact to the measured variable composite index parameter, and dynamically normalize the liability risk drivers to each other, to automatically perform a weighting process of the selected liability risk drivers by applying a defined transformation to normalize the selected liability risk drivers to the final time series of the parameters by providing individual set weights and/or individual driver weights, wherein historic exposure and loss data assigned to a geographic region are selected from a dedicated data storage comprising region-specific data, and historic measurement parameters are generated corresponding to the measurement parameters, and wherein the liability exposure signal is weighted by the historic measurement parameters, for optimization, the processing circuitry is configured to, based on the weighted liability risk drivers and sets of liability risk drivers, automatically select a minimum number of liability risk drivers in relation to maximized statistical significance using a grid, and provide the minimum number of liability risk drivers as a reduced set of liability risk drivers out of all available liability risk drivers using best fit characteristics, wherein to select the minimum number of liability risk drivers in relation to maximized statistical significance based on the weighted liability risk drivers and sets of liability risk drivers, the processing circuitry is configured to apply Akaike Information Criterion (AIC)-maximization compared to a number of selected liability risk drivers, for stabilization, to scale impacts of the different liability risk drivers to a same scale by applying as normalization the transformation to a final time series xt = [xt - min(xt)] / [max(xt) - min(xt)], and to select the minimum number of liability risk drivers in relation to maximized statistical significance based on the weighted liability risk drivers and sets of liability risk drivers by applying R2-maximization compared to a number of selected liability risk drivers, to adapt dynamically the set of liability risk drivers by varying the minimum number of liability risk drivers in relation to the measured liability exposure signal by periodic time response, and generate the time-dependent, composite index parameter based on the adapted reduced set of liability risk drivers, a liability risk-driven interaction between the automated risk-transfer system and the automated operating device being adjusted based upon the time-dependent, composite index parameter, wherein operation of the automated systems is adapted based on the measured time-dependent, composite index parameter accounting peaks in time-dependent fluctuations of the measurement parameters, to dynamically adapt the first set of liability risk drivers varying the liability risk drivers in relation to the measured liability exposure signal by periodic time response, and transmit a request for a measurement parameter update periodically to measuring devices for dynamic detection of variations of the measure parameters adapting measurable characteristics of the selected risk drivers by direct measurements, to compare the exposure derived from the liability risk drivers to an effective measured exposure, switching automatically to liability risk drivers based on saved historic data to minimize a possibly measured deviation of the exposures by dynamically adapting the liability risk drivers based on the saved historic data, in response to the deviation exceeding a threshold, and to automatically and dynamically steer the liability risk-driven interaction based upon the time-dependent, composite index parameter, wherein the steering is adjusted based upon the liability exposure signal, wherein the automated risk-transfer is activated by appropriate signal generation and transmission to resolve loss of a loss unit. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity” which include “mitigating risk.” Predicting a liability loss is a form of mitigating risk because it allows the system to determine which economic risk factors contribute to the predicted loss. Therefore, insurance companies can minimize risks of a loss from fluctuations in the economy by adjusting their premiums or transferring the risk to another insurer. If a claim limitation, under its broadest reasonable interpretation, covers mitigating risk, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
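For reference, the transformation recited in the claim, xt = [xt - min(xt)] / [max(xt) - min(xt)], is standard min-max scaling of a time series onto [0, 1]. A minimal illustrative sketch (not code from the application) follows:

```python
# Illustrative sketch of the recited normalization: min-max scaling, which
# puts differently scaled risk drivers (e.g., GDP growth vs. healthcare
# expenditure growth) on one comparable 0-to-1 scale before weighting.
import numpy as np

def minmax_normalize(series: np.ndarray) -> np.ndarray:
    lo, hi = float(series.min()), float(series.max())
    if hi == lo:                 # constant series: avoid division by zero
        return np.zeros_like(series, dtype=float)
    return (series - lo) / (hi - lo)
```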
Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 1 includes additional elements: a central processing device of the electronic system; processing circuitry; a dedicated data storage; using a grid; apply Akaike Information Criterion (AIC)-maximization; apply R2-maximization; a risk-transfer system; an automated operating device; a measuring device; and an automated repair node.
The central processing device is merely used to receive the liability risk drivers (Paragraph 0039). The processing circuitry is merely used to: select liability risk drivers; normalize the liability risk drivers; select a minimum number of liability risk drivers; and adapt the minimum number of liability risk drivers periodically (Claim 1). The dedicated data storage is merely used to store region-specific data (Paragraph 0018). The grid is merely used to display the optimal number of factors (Figure 6). The Akaike Information Criterion maximization technique is merely used to select a minimum number of risk drivers in relation to the Akaike Information Criterion value (Paragraphs 0044 & 0047). The R2 maximization technique is merely used to select a minimum number of risk drivers in relation to the largest R2 value (Paragraphs 0044 & 0047). The risk transfer system is merely used to provide pricing adjustments based upon varying liability risk drivers (Paragraph 0007). The operating device is merely used to steer the liability risk-driven interaction (Paragraph 0039). The measuring device is merely used to periodically transmit a measurement parameter update (Paragraph 0016). The automated repair node is merely used to resolve the loss of the loss unit (Paragraph 0021). Merely stating that a step is performed by a computer component amounts to no more than “apply it” on a computer (MPEP 2106.05(f)). These elements of central processing device, processing circuitry, dedicated data storage, grid, Akaike Information Criterion, R2, risk transfer system, operating device, measuring device, and automated repair node are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer elements. Further, the operating device and the measuring device are considered “field of use” as they are merely used to collect and transmit data, but the technology is not improved (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.
Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of mitigating risk by evaluating economic fluctuations. The specification shows that the central processing device is merely used to receive the liability risk drivers (Paragraph 0039). The processing circuitry is merely used to: select liability risk drivers; normalize the liability risk drivers; select a minimum number of liability risk drivers; and adapt the minimum number of liability risk drivers periodically (Claim 1). The dedicated data storage is merely used to store region-specific data (Paragraph 0018). The grid is merely used to display the optimal number of factors (Figure 6). The risk transfer system is merely used to provide pricing adjustments based upon varying liability risk drivers (Paragraph 0007). The Akaike Information Criterion maximization technique is merely used to select a minimum number of risk drivers in relation to the Akaike Information Criterion value (Paragraphs 0044 & 0047). The R2 maximization technique is merely used to select a minimum number of risk drivers in relation to the largest R2 value (Paragraphs 0044 & 0047). The operating device is merely used to steer the liability risk-driven interaction (Paragraph 0039). The measuring device is merely used to periodically transmit a measurement parameter update (Paragraph 0016). The automated repair node is merely used to resolve the loss of the loss unit (Paragraph 0021). Also, the communication of the operating device and the measuring device is considered a conventional computer function of “receiving and transmitting over a network” (MPEP 2106.05(d)). Lastly, adding a final step of “automatically transferring/transmitting the risk” does not add any meaningful limitations (MPEP 2106.05(g), insignificant extra-solution activity). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Independent claim 22 is directed to a method at Step 1, which is a statutory category. Claim 22 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. The claim is not patent eligible.
Dependent claims 7, 14, and 18 are directed to additional elements such as: a risk exposed unit; and an automated repair node. The risk exposed unit is merely used to capture the external and/or internal factors that affect liability exposure (Paragraph 0014). The automated repair node is merely used to resolve the loss of the loss unit (Paragraph 0021). Merely stating that a step is performed by a computer component amounts to no more than “apply it” on a computer (MPEP 2106.05(f)), which is applicable at both Step 2A, Prong Two and Step 2B. Further, although the repair node is used to resolve the loss, the claim fails to recite details of how the solution is accomplished (see MPEP 2106.05(f)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.
Dependent claims 9 and 11 are directed to additional elements such as: a scan measuring device; and a memory. The scan measuring device is merely used to capture a characteristic of a liability risk driver (Paragraph 0013). The memory is merely used to save historic data (Claim 11). The scan measuring device is considered “field of use” (MPEP 2106.05(h)) at Step 2A, Prong Two, as it is merely used to receive information (e.g., capture a characteristic) and the technology of scanning is not improved; at Step 2B, this remains the conventional computer function of “receiving and transmitting over a network” (see MPEP 2106.05(d)). The memory is considered “field of use” (MPEP 2106.05(h)) at Step 2A, Prong Two, as it is merely used to receive information and the technology is not improved; at Step 2B, this remains the conventional computer function of “storing information in a memory” (see MPEP 2106.05(d)). Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.
Dependent claim 13 is directed to an additional element such as: a trigger module. The trigger module is merely used to trigger variation of the measurement parameters and transmit detected variations of one or more measurement parameters to the control unit (Paragraph 0020). Merely stating that a step is performed by a computer component amounts to no more than “apply it” on a computer (MPEP 2106.05(f)), which is applicable at both Step 2A, Prong Two and Step 2B. Also, the trigger module is considered “field of use” (MPEP 2106.05(h)) at Step 2A, Prong Two, as it is merely used to transmit information, but the technology is not improved; at Step 2B, this remains the conventional computer function of “receiving and transmitting over a network” (see MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Dependent claim 15 is directed to additional elements such as: a risk exposed unit; an interface module; and an appropriate data transmission network. The risk exposed unit is merely used to capture the external and/or internal factors that affect liability exposure (Paragraph 0014). The interface module is merely used to connect to the communication network according to the transmission standard or protocol (Paragraph 0037). The data transmission network is merely used to transmit data (Paragraph 0037). These elements are considered “field of use” (MPEP 2106.05(h)) at Step 2A, Prong Two, as they are merely used to receive and transmit information, but the interface is not improved; at Step 2B, this remains the conventional computer function of “receiving and transmitting over a network” (see MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.
Dependent claims 16, 17, and 19-21 are directed to additional elements such as: a payment transfer module; a second risk-transfer system; and a second resource pooling system. The payment transfer module is merely used to store payment transfer parameters (see Claim 16). The second risk-transfer system is merely used to receive at least parts of the risk exposure associated with the occurrence (Claim 17). The second resource pooling system is merely used to select and filter various risk exposures to provide a more accurate prediction of future losses (Paragraph 0011). Merely stating that a step is performed by a computer component amounts to no more than “apply it” on a computer (MPEP 2106.05(f)), which is applicable at both Step 2A, Prong Two and Step 2B. Further, the payment transfer module is considered “field of use” (MPEP 2106.05(h)) at Step 2A, Prong Two, as it is merely used to collect payment data and the technology is not improved; at Step 2B, this remains the conventional computer function of “receiving and transmitting over a network” (see MPEP 2106.05(d)). The second risk-transfer system is considered “field of use” (MPEP 2106.05(h)) at Step 2A, Prong Two, as it is merely used to receive parts of the risk exposure and the technology is not improved; at Step 2B, this remains the conventional computer function of “receiving and transmitting over a network” (see MPEP 2106.05(d)). Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 7, 9, 11, and 13-22 are rejected under 35 U.S.C. 103 as being unpatentable over Vlasimsky Richard et al. (WO 2007/005975 A2), in view of Lindsey (Lindsey C, Sheather S. Variable selection in linear regression. The Stata Journal. 2010 Dec; pp. 650-669), in further view of Bhanja (Bhanja S, Das A. Impact of data normalization on deep neural network for time series forecasting. arXiv preprint arXiv:1812.05519. 2018 Dec 13), and Salghetti et al. (US 2012/0143633 A1).
Regarding claim 1 (Currently Amended), Vlasimsky Richard et al. discloses an automated electronic system steering a liability risk-driven interaction between an automated risk-transfer system and an automated operating device operating on a liability system having at least one measurable liability exposure (Paragraph 0059, For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “steering a liability risk-driven interaction between an automated risk-transfer system and an automated operating device” because the system can automatically adjust the terms and conditions (e.g., pricing) when the data indicates an inadequate liability exposure (e.g., loss ratio)), based on measuring and dynamically parametrizing a liability dependent measure for the liability system by providing an accurate liability risk measure based on a time-dependent, composite index parameter (Paragraph 0006, In one aspect, the present disclosure provides a modeling system that operates on an initial data collection which includes risk factors and outcomes. Data storage is provided for a plurality of risk factors and outcomes that are associated with the risk factors; Paragraph 0014, Optimizers are employed to reduce noise and optimize the accuracy of the model using common insurance industry metrics (e.g., loss ratio, net profit). In doing so, the present technology ensures that the model is neither over-fit nor under-fit; Paragraph 0049, As shown in Fig. 1, three basic steps are involved in finding patterns from digitally represented data, and generating a model based on the data. As shown in Fig. 1, these steps include data set preparation 102, together with an iterative process of model development 104 and validation 106. In this iterative process, a candidate model is created based upon reporting from a dataset, and the fitness of the resulting model is evaluated. The result is then reevaluated to confirm model fitness. 
This process is repeated, using a new set of model parameter permutations until a predictive candidate model is found; Paragraph 0146, In addition to the previously described system functionalities, it is useful to provide monitoring logic 2108 to continuously assess incoming data and predictive accuracy of the model), wherein a measured value of a composite index is a measure of a change of liability-dependent economic measuring factors of regionally defined systems extracted from measurement parameters providing a measuring system with technical-based measurement of drivers (Paragraph 0055, A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; By way of example, external data from the following sources may be utilized: U.S. Census, such as Population Density and housing density; Paragraph 0062, A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, by selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0063, Before data from the training set 120 or testing set 122 are submitted for further use, it is possible to use a segmentation filter 123 to focus the model upon a particular population or subpopulation of data. These subpopulations of dataset 108 may be further limited to types of violations, such as speeding or running a red light, and a particular geography, such as a residence in a particular state or city; Paragraph 0146, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated. This may be due to a number of circumstances, such as a change in demographics. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model; Examiner notes that Vlasimsky Richard et al. discloses measurement of drivers since it selects the risk factors having the highest statistical correlation.
Further, the system is continuously monitoring any significant changes in the incoming data to assure predictive accuracy), wherein the measurement parameters assigned to parameterized liability risk drivers are measured and transmitted to a central processing device of the electronic system for generating the measured time-dependent, composite index parameter, the electronic system comprising (Paragraph 0075, Historical risk values, such as those for loss ratio field 414, may be time-segregated to ascertain the relative predictive value of the most current information versus older data. An aggregator 416 may consider the time-segregated data in respective groups to see if there is a benefit in using segregated data for different time intervals, such as data for the prior year 418, prior three years 420, or policy lifetime 422; Paragraph 0076, Pattern 424 is a feature extractor that contains a lookup preprocessor 426. The lookup pre-processor 426 accesses external data 114 to provide or report from derived data 428, which has been obtained as described above. This data receives special handling to form ensembles in an expert way according to a predetermined set of derived data rules 428. The lookup pre-processor 426 may utilize a variety of numeric, nominal or ordinal techniques as statistical preprocessors. These may operate on values including SIC codes, NCCI codes, zip codes, county codes, country codes, state codes, injury statistics, health cost statistics, unemployment information, and latitude and longitude. These may be applied using expert rules to convert such codes or values into statistically useful information):
processing circuitry configured to scan the liability system for predefined measurement parameters capturing dynamic characteristics of at least one liability risk driver of the parameterized liability risk drivers, and identify and mark liability risk drivers, the identified and marked liability risk driver either contracting or expanding a measured liability risk exposure (Paragraph 0060, The training set 120 is a subset of dataset 108 that is used to develop the predictive model. During the "training" process, and during the course of model development 104, the training set 120 is presented to a library of algorithms that are shown generally as pattern recognition engine 126. The pattern recognition engine performs multivariate, non-linear analysis to 'fit' a model to the training set 120. The algorithms in this library may be any statistical algorithm that relates one or more variables to one or more other variables and tests the data to ascertain whether there is a statistically significant association between variables. In other words, the algorithm(s) operate to test the statistical validity of the association between the risk factors and the associated outcomes; Paragraph 0067, Output from the pattern recognition engine 126 is provided to risk mapping logic 128 for model development Risk mapping logic 128 receives output from the pattern recognition engine 126, selects the most statistically significant fields for combination in to risk variable groups, builds relationships between the risk variable groups to form one or more ensembles, and analyzes the ensembles by quantifying the variables and relationships in association with a risk parameter; Paragraph 0136, Premium modification logic tracks changes in risk factors over time), wherein in case the identified liability risk drivers expand the measured liability risk exposure, additional liability risk drivers are selected until the measured liability risk exposure is contracting due to the additional risk drivers, and mutual normalization of the liability risk drivers is initiated by the electronic system, and wherein in case the identified liability risk drivers contract the measured liability risk exposure, mutual normalization of the liability risk drivers is directly initiated by the electronic system (Paragraph 0013, The present system includes built-in capacity control that balances the complexity of the solutions with the accuracy of the model developed. Optimizers are employed to reduce noise and optimize the accuracy of the model using common insurance industry metrics (e.g., loss ratio, net profit). In doing so, the present technology ensures that the model is neither over-fit nor under-fit. With a built-in ability to reduce the number of dimensions, the present platform condenses the risk factors (dimensions) being evaluated to the few that are truly predictive. A large number of parameters are thus not required to adjust the complexity of the model, thereby insulating the user from having to adjust a multitude of parameters to arrive at a suitable model. In the end, the models developed by the present system have less chance of introducing inconsistencies, ambiguities and redundancies, which, in turn, result in a higher predictive accuracy; Paragraph 0062, Multivariate models should be of a complexity that is just right. Models that incorporate too little complexity are said to under-fit the available data and result in poor predictive accuracy. 
On the other hand, models that incorporate too much complexity can over-fit to the data that is used. This causes the model to interpret noise as signal, which produces a less accurate predictive model. A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, by selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0067, Output from the pattern recognition engine 126 is provided to risk mapping logic 128 for model development Risk mapping logic 128 receives output from the pattern recognition engine 126, selects the most statistically significant fields for combination in to risk variable groups, builds relationships between the risk variable groups to form one or more ensembles, and analyzes the ensembles by quantifying the variables and relationships in association with a risk parameter; Paragraph 0146, Figs. 23A and 23B each represent the results of frequency distribution calculations for a particular risk factor that is scaled to a range of 0 to 100 on the X- axis; As stated in Paragraph 0044 of Applicant’s specification, “normalizing” is a transformation that provides individual driver weights), select a first set of liability risk drivers by parametrizing an economic-based contribution to a general liability exposure loss, wherein the first set of liability risk drivers at least comprises a risk driver parametrizing Gross Domestic Product (GDP) growth, a risk driver parametrizing healthcare expenditure growth, and a risk driver parametrizing real wage growth based on an impact of captured alterations to the variable composite index parameter (Paragraph 0055, A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; By way of example, external data from the following sources may be utilized: a. U.S. 
Census, such as Population Density, and housing density; wage data; insurance law data; Paragraph 0062, A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, by selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0076, turn health cost statistics values into statically useful information; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down; Paragraph 0146, As can be seen from Fig. 23A, there is no meaningful change in the nature of the incoming data, and so the predictive value on the implemented model should continue to be quite high on the basis of incoming data for Risk Factor 1. On the other hand, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated. This may be due to a number of circumstances, such as a change in demographics, the data becoming distorted due to a change in the way insurance agents are selecting people or companies to insure, a change in the way the data is being reported by the official source of the data, or a clerical error in entering or uploading the data. The monitoring logic may identify these changes by correlation analysis to compare the respective curves and print out reports for potential problem areas. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model; It can be noted that the claim language is written in alternative form. The limitation taught by Vlasimsky Richard et al. is based on “real wage growth" and “healthcare expenditure growth” since any changes to those factors are measured over time. 
In this case, if the wage data is a significant factor, then it will be added to the model), the liability risk drivers being mutually normalized to each other (Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; As stated in Paragraph 0044 of Applicant’s specification, “normalizing” is a transformation that provides individual driver weights), and select the additional liability risk drivers by parametrizing at least societal and/or legal and/or political based contributions to the general liability exposure loss (Paragraph 0099, In another example, a graph was created to display the loss per unit exposure across various ranges of the number of vehicles on a policy at issue. In comparison to the previous example where loss is correlated to population density, the trend line for number of vehicles shows a flatter linear correlation that the more vehicles on a policy, the higher the loss per unit exposure. Although variance exists across values for this risk factor, they do not vary as widely as those for population density; As stated in Figure 4 of Applicant’s specification, population density is a societal factor; It can be noted that the claim language is written in alternative form. The limitation taught by Vlasimsky is based on “societal contributions"), dynamically apply the additional liability risk drivers based on their impact to the measured variable composite index parameter (Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down), and dynamically normalize the liability risk drivers to each other (Paragraph 0098, In one example, a graph was created to display the loss per unit exposure across various ranges of population density per square miles based on zip code. The trend line illustrates a strong linear correlation that the more density populated an area, the higher the loss per unit exposure), wherein the processing circuitry is configured to automatically perform a weighting process of the selected liability risk drivers by applying a defined transformation to normalize the selected liability risk drivers to the final time series of the parameters by providing individual set weights and/or individual driver weights (Paragraph 0064, Accordingly, the pattern recognition engine 126 uses statistical correlations to identify data parameters or fields that constitute risk factors from the training set 120. The data fields may be analyzed singly or in different combinations for this purpose. The use of multivariate of ANOVA analysis is particularly advantageous for this purpose. The pattern recognition engine 126 selects and combines statistically significant data fields by performing statistical analysis, such as a multivariate statistical analysis, relating these data fields to a risk value under study. 
Generally, the multivariate analysis combines the respective data fields using a statistical processing technique to stratify a relative risk score and relate the risk score to a risk value under study; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0109, The relative risk score may be, for example, an overall change of incurring a loss as predicted by an ensemble and scaled to a range of 0 to 100 on the basis of the model output a histogram or frequency distribution of this predictive value; Paragraph 0146, In addition to the previously described system functionalities, is it useful to provide monitoring logic 2108 to continuously assess incoming data and the predictive accuracy of the model. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model), wherein historic exposure and loss data assigned to a geographic region are selected from a dedicated data storage comprising region-specific data, and historic measurement parameters are generated corresponding to the measurement parameters (Paragraph 0055, External data 114 generally constitutes third party information that is optionally but preferably leveraged to augment the dataset 108 and prepare for the modeling process. A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; Paragraph 0058, Another example in the use of zip codes includes relating a geographic location to external weather information, such as average weather conditions or seasonal hail or other storm conditions that may also be used as predictive loss indicators. Other uses of derived data may include using demographic studies to assess likely incidence of disease or substance abuse on the basis of derived age and geographical location), and wherein the liability exposure signal is weighted by the historic measurement parameters, for optimization, the processing circuitry is configured to, based on the weighted liability risk drivers and sets of liability risk drivers (Paragraph 0064, Accordingly, the pattern recognition engine 126 uses statistical correlations to identify data parameters or fields that constitute risk factors from the training set 120. The data fields may be analyzed singly or in different combinations for this purpose. The use of multivariate of ANOVA analysis is particularly advantageous for this purpose. The pattern recognition engine 126 selects and combines statistically significant data fields by performing statistical analysis, such as a multivariate statistical analysis, relating these data fields to a risk value under study. 
Generally, the multivariate analysis combines the respective data fields using a statistical processing technique to stratify a relative risk score and relate the risk score to a risk value under study; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0113, grid computer arquitecture), automatically select a minimum number of liability risk drivers in relation to maximized statistical significance …, and provide the minimum number of liability risk drivers as a reduced set of liability risk drivers out of all available liability risk drivers using best fit characteristics, wherein to select the minimum number of liability risk drivers in relation to maximized statistical significance based on the weighted liability risk drivers and sets of liability risk drivers, the processing circuitry is configured to apply … maximization compared to a number of selected liability risk drivers (Paragraph 0068, In one aspect, while building models by use of the risk mapping logic 128, the risk factor with the most predictive information may be first selected. The model then selects and adds the risk factors that complement the existing risk factors with the most unique predictive information. To determine the most predictive model, results from the model are analyzed to determine which model has the highest predictive accuracy across the entire book of business. Such risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Examiner notes that the model with the highest predictive accuracy is the model that maximizes statistical significance), for stabilization the processing circuitry is configured to scale impacts of the different liability risk drivers to a same scale by applying as normalization the transformation to a final time series … (Paragraph 0064, Accordingly, the pattern recognition engine 126 uses statistical correlations to identify data parameters or fields that constitute risk factors from the training set 120. The data fields may be analyzed singly or in different combinations for this purpose. The use of multivariate of ANOVA analysis is particularly advantageous for this purpose. The pattern recognition engine 126 selects and combines statistically significant data fields by performing statistical analysis, such as a multivariate statistical analysis, relating these data fields to a risk value under study. 
Generally, the multivariate analysis combines the respective data fields using a statistical processing technique to stratify a relative risk score and relate the risk score to a risk value under study; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0109, The relative risk score may be, for example, an overall chance of incurring a loss as predicted by an ensemble and scaled to a range of 0 to 100 on the basis of the model output a histogram or frequency distribution of this predictive value), and to select the minimum number of liability risk drivers in relation to maximized statistical significance based on the weighted liability risk drivers and sets of liability risk drivers by applying … maximization compared to a number of selected liability risk drivers (Paragraph 0068, In one aspect, while building models by use of the risk mapping logic 128, the risk factor with the most predictive information may be first selected. The model then selects and adds the risk factors that complement the existing risk factors with the most unique predictive information. To determine the most predictive model, results from the model are analyzed to determine which model has the highest predictive accuracy across the entire book of business. Such risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity), the processing circuitry is configured to adapt dynamically the set of liability risk drivers by varying the minimum number of liability risk drivers in relation to the measured liability exposure signal by periodic time response, and generate the time-dependent, composite index parameter based on the adapted reduced set of liability risk drivers (Paragraph 0068, In one aspect, while building models by use of the risk mapping logic 128, the risk factor with the most predictive information may be first selected. The model then selects and adds the risk factors that complement the existing risk factors with the most unique predictive information. To determine the most predictive model, results from the model are analyzed to determine which model has the highest predictive accuracy across the entire book of business. Such risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down), a liability risk-driven interaction between an automated risk-transfer system and the automated operating device being adjusted based upon the time-dependent, composite index parameter, wherein operation of the automated systems is adapted based on the measured time-dependent, composite index parameter accounting peaks in time-dependent fluctuations of the measurement parameters (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. 
Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “automated risk-transfer.” Based on broadest reasonable interpretation in light of the specification, Vlasimsky Richard et al. discloses “an automated risk-transfer system and an operating device being adjusted based upon the time-dependent composite index parameter” because the terms/conditions/policies are updated automatically when the data indicates an inadequate loss ratio), the processing circuitry is configured to dynamically adapt the first set of liability risk drivers varying the liability risk drivers in relation to the measured liability exposure signal by periodic time response, and transmit a request for a measurement parameter update periodically to measuring devices for dynamic detection of variations of the measure parameters adapting measurable characteristics of the selected risk drivers by direct measurements (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. 
The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down), the processing circuitry is configured to compare the exposure derived from the liability risk drivers to an effective measured exposure, the electronic system switching automatically to liability risk drivers based on saved historic data to minimize a possibly measured deviation of the exposures by dynamically adapting the liability risk drivers based on the saved historic data, … the deviation exceeding a threshold (Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down; Paragraph 0146, In addition to the previously described system functionalities, it is useful to provide monitoring logic 2108 to continuously assess incoming data and the predictive accuracy of the model. Figs. 23A and 23B show a comparison between stationary (Fig. 23A for Risk Factor 1) and non-stationary (Fig. 23B for Risk Factor 2) risk factors. Figs. 23A and 23B each represent the results of frequency distribution calculations for a particular risk factor that is scaled to a range of 0 to 100 on the X-axis. Circles identify calculation results for data that was used to develop the model, while squares identify calculation results for data that has arrived after the model was implemented. As can be seen from Fig. 23A, there is no meaningful change in the nature of the incoming data, and so the predictive value on the implemented model should continue to be quite high on the basis of incoming data for Risk Factor 1. On the other hand, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated; Paragraph 0146, This may be due to a number of circumstances, such as a change in demographics, the data becoming distorted due to a change in the way insurance agents are selecting people or companies to insure, a change in the way the data is being reported by the official source of the data, or a clerical error in entering or uploading the data. The monitoring logic may identify these changes by correlation analysis to compare the respective curves and print out reports for potential problem areas. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model; Examiner interprets a “significant change in the incoming data for a specific factor” as a “deviation exceeding a threshold”), and the processing circuitry is configured to automatically and dynamically steer the liability risk-driven interaction between the automated risk-transfer system and the automated operating device based upon the time-dependent, composite index parameter (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. 
For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks), wherein the steering of the automated operating device is adjusted based upon the liability exposure signal, wherein the automated risk-transfer system is activated by the electronic system, and when the risk-transfer system is activated by the electronic system, an automated repair node assigned to the automated risk-transfer system is activated by appropriate signal generation and transmission to resolve loss of a loss unit (see Figure 20 and related text in Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks).
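For illustration only (this sketch is not part of Vlasimsky Richard et al.'s disclosure; all series, weights, and names below are assumptions), the following Python fragment shows one way weighted, commonly scaled risk-driver time series could be combined into a time-dependent, composite index parameter of the kind the claim language describes:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic quarterly series for three illustrative risk drivers
# (e.g., GDP growth, healthcare expenditure growth, real wage growth).
drivers = {
    "gdp_growth": rng.normal(2.0, 0.5, 40),
    "healthcare_growth": rng.normal(4.0, 1.0, 40),
    "wage_growth": rng.normal(1.5, 0.7, 40),
}
# Hypothetical weights standing in for the historic measurement
# parameters; normalized so they sum to one.
raw = {"gdp_growth": 0.5, "healthcare_growth": 0.3, "wage_growth": 0.2}
weights = {k: v / sum(raw.values()) for k, v in raw.items()}

def rescale(x):
    # Scale a series to [0, 1] so driver impacts share a common scale.
    return (x - x.min()) / (x.max() - x.min())

# A weighted sum of the commonly scaled drivers at each time step yields
# one plausible reading of the time-dependent, composite index parameter.
composite_index = sum(w * rescale(drivers[k]) for k, w in weights.items())
print(composite_index[:5])

The weighted-sum aggregation above is only one plausible reading of the claimed composite index; the claims do not prescribe a particular aggregation function.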
Although Vlasimsky Richard et al. discloses to select a minimum number of liability risk drivers in relation to maximized statistical significance (Paragraph 0068, risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Examiner notes that the technique taught by Vlasimsky Richard et al. is known as a stepwise selection algorithm), Vlasimsky Richard et al. does not specifically disclose wherein the algorithm used to select the minimum number of liability risk drivers in relation to maximized statistical significance is an Akaike Information Criterion (AIC).
However, Lindsey discloses automatically select a minimum number of ... drivers in relation to maximized statistical significance using a grid, and provide the minimum number of ... drivers as a reduced set of ... drivers out of all available ... drivers using best fit characteristics, wherein to select the minimum number of … drivers in relation to maximized statistical significance based on the weighted … drivers and sets of … drivers, the processing circuitry is configured to apply Akaike Information Criterion (AIC)-maximization compared to a number of selected … drivers, for stabilization, …, and to select the minimum number of … drivers in relation to maximized statistical significance based on the weighted … drivers and sets of … drivers by applying R2-maximization compared to a number of selected … drivers (Page 661, Best Subsets, The optimal R2ADJ value, 0.7582178, is obtained by the three-variable model with predictors dwgs, spans, and ccost. This is the same model obtained by forward selection and backward elimination under AIC. This model also optimizes AIC, with an AIC of 25.2924. The most optimal model under BIC and AICc is the predictor model using dwgs and spans. This is the same model found by forward selection under BIC. We find that Mallows’s Cp suggests the five-predictor model when we choose the best model as having a Cp value close to the predictor size +1. Otherwise, when picking the smallest Mallows’s Cp model, we would choose the two-predictor model that BIC and AICc chose. This is one of the occasions when there is no completely clear, best final model. We can narrow our decision down to the two mentioned models. We might investigate whether AICc is more appropriate than AIC in this situation. Recall that picking the model with the highest R2ADJ generally leads to overfitting (Sheather 2009). Regardless, there is little difference between the values of AIC and R2ADJ for the two- and three-predictor models. We will arbitrarily pick the two-predictor model that estimates time by dwgs and spans as our final model. This selection yields no high variance inflation factors; Examiner notes that Lindsey discloses to select, using a grid, the number of drivers (e.g., predictors) that optimizes AIC. In this case, the model that maximizes the R2 is the three-variable model with predictors dwgs, spans, and ccost. See below).
[media_image1.png (greyscale): Lindsey's best-subsets selection results referenced above, Page 661]
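For illustration only (synthetic data; the predictor names echo Lindsey's example and the two extra candidates are made up), the following Python sketch carries out the best-subsets procedure Lindsey describes: every subset of candidate predictors is fit by least squares, and AIC and adjusted R2 are compared across subset sizes. Note that "optimizing" AIC conventionally means selecting the smallest value:

import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 80
names = ["dwgs", "spans", "ccost", "x4", "x5"]  # x4, x5 are made-up extras
X_all = rng.normal(size=(n, len(names)))
y = 1.0 + 2.0 * X_all[:, 0] - 1.5 * X_all[:, 1] + rng.normal(0.0, 0.5, n)

def fit_stats(X, y):
    # Ordinary least squares with an intercept; returns (AIC, adjusted R2).
    # AIC here is the Gaussian form n*ln(RSS/n) + 2*(k+1), up to a constant.
    n_obs, k = X.shape
    Xd = np.column_stack([np.ones(n_obs), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    aic = n_obs * np.log(rss / n_obs) + 2 * (k + 1)
    r2_adj = 1.0 - (rss / (n_obs - k - 1)) / (tss / (n_obs - 1))
    return aic, r2_adj

results = []
for size in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), size):
        aic, r2_adj = fit_stats(X_all[:, list(subset)], y)
        results.append((aic, r2_adj, [names[i] for i in subset]))

# Best subset under AIC (conventionally the minimum value).
best = min(results, key=lambda r: r[0])
print("best subset by AIC:", best[2], f"AIC={best[0]:.2f}, adj. R2={best[1]:.3f}")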
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the system used to select a minimum number of liability risk drivers in relation to maximized statistical significance (e.g., a stepwise selection algorithm) of the invention of Vlasimsky Richard et al. to further incorporate wherein the algorithm used to select the minimum number of liability risk drivers in relation to maximized statistical significance using a grid is an Akaike Information Criterion (AIC) of the invention of Lindsey because doing so would allow the system to display and highlight the optimal model (see Lindsey, Page 653, 1.1 Information criteria & Best subsets). Therefore, this results in a simple substitution of a known technique used to select risk drivers (e.g., stepwise selection algorithm) for another known technique used to select risk drivers (e.g., AIC). The simple substitution of one known technique for another producing a predictable result renders the claim obvious.
Although Vlasimsky Richard et al. discloses wherein impacts of the different liability risk drivers are scaled to a same scale by applying, as normalization, the transformation to a final time series (Paragraph 0109, score can be scaled), the combination of Vlasimsky Richard et al. and Lindsey does not specifically disclose the formula used to normalize the selected liability risk drivers to the final time series of the parameters.
However, Bhanja discloses wherein, for stabilization, … is configured to scale impacts of the different liability risk drivers to a same scale by applying as normalization the transformation to a final time series xt = [xt - min(xt)] / [max(xt) - min(xt)] (Page 2, 5.1 Min-Max Normalization, see formula below).
[media_image2.png (greyscale): Bhanja's min-max normalization formula referenced above, Page 2, Section 5.1]
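For illustration only, a minimal Python rendering of the min-max normalization Bhanja sets out, which rescales each risk-driver time series onto the [0, 1] interval so that differently scaled drivers become directly comparable:

import numpy as np

def min_max_normalize(x):
    # x_t = (x_t - min(x_t)) / (max(x_t) - min(x_t)), mapping onto [0, 1].
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

print(min_max_normalize([3.0, 7.0, 5.0, 9.0]))  # [0.0, 0.667, 0.333, 1.0]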
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the system used to dynamically normalize the liability risk drivers of the invention of Vlasimsky Richard et al. and Lindsey to further specify the formula used for normalization to a final time series of the invention of Bhanja because doing so would allow the system to scale values to a same range of values (see Bhanja, Page 2, 5 Different Normalization Techniques). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Although Vlasimsky Richard et al. discloses to track significant changes to the incoming data and a need to access the model for purposes of updating or tuning the model when the data has changed (Paragraph 0146), Vlasimsky Richard et al. does not specifically disclose wherein the model is automatically updated in response to a trigger.
However, Salghetti et al. discloses … dynamically adapting the liability risk drivers based on the saved historic data, in response to the deviation exceeding a threshold (Paragraph 0009, As a further embodiment variant, the system can comprise a switching module comparing the exposure based upon the liability risk drivers to the effective occurring or measured exposure by switching automatically to liability risk drivers based on saved historic data to minimize a possibly measured deviation of the exposures by dynamically adapting the liability risk drivers based on saved historic data; Paragraph 0139, dynamically adapt the set 16 of liability risk drivers 311-313 varying the liability risk drivers 311-313 in relation to the measured liability exposure signal).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the system used to select a minimum number of liability risk drivers in relation to maximized statistical significance (e.g., a stepwise selection algorithm), wherein the model may be updated or tuned when the incoming data shows a significant change of the invention of Vlasimsky Richard et al. to further incorporate wherein the model is automatically updated or tuned in response to a trigger condition of the invention of Salghetti et al. because doing so would allow the system to dynamically adapt the set 16 of liability risk drivers in relation to the measured liability exposure signal (see Salghetti et al., Paragraph 0139). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
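For illustration only (the weights, driver values, and the 0.15 threshold below are assumptions, not values from Salghetti et al.), the following Python sketch shows the trigger condition at issue: the exposure derived from the current liability risk drivers is compared to the effectively measured exposure, and the system switches to drivers calibrated on saved historic data when the deviation exceeds a threshold:

import numpy as np

THRESHOLD = 0.15  # maximum tolerated relative deviation (assumed)

def derived_exposure(drivers, weights):
    # Exposure derived from the active risk-driver set.
    return float(np.dot(drivers, weights))

current_weights = np.array([0.5, 0.3, 0.2])
historic_weights = np.array([0.4, 0.4, 0.2])  # calibrated on saved historic data
drivers_now = np.array([1.8, 4.2, 1.1])

measured_exposure = 8.0  # effective measured exposure from the liability system
modeled = derived_exposure(drivers_now, current_weights)
deviation = abs(modeled - measured_exposure) / measured_exposure

if deviation > THRESHOLD:
    # Trigger fires: dynamically adapt by switching to the historic set.
    active_weights = historic_weights
else:
    active_weights = current_weights
print(f"deviation={deviation:.2f}, switched to historic set: {deviation > THRESHOLD}")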
Regarding claim 22 (Currently Amended), Vlasimsky Richard et al. discloses a measuring and indexing method for an automated electronic system steering a liability risk-driven interaction between an automated risk-transfer system and an automated operating device operating on a liability system having at least one measurable liability exposure (Paragraph 0059, For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “steering a liability risk-driven interaction between an automated risk-transfer system and an automated operating device” because the system can automatically adjust the terms and conditions (e.g., pricing) when the data indicates an inadequate liability exposure (e.g., loss ratio)), based on measuring and dynamically parametrizing a liability dependent measure for the liability system by providing an accurate liability risk measure based on a time-dependent, composite index parameter (Paragraph 0006, In one aspect, the present disclosure provides a modeling system that operates on an initial data collection which includes risk factors and outcomes. Data storage is provided for a plurality of risk factors and outcomes that are associated with the risk factors; Paragraph 0014, Optimizers are employed to reduce noise and optimize the accuracy of the model using common insurance industry metrics (e.g., loss ratio, net profit). In doing so, the present technology ensures that the model is neither over-fit nor under-fit; Paragraph 0048, Fig. 1 illustrates an exemplary methodology 100 for use in the present system; Paragraph 0049, As shown in Fig. 1, three basic steps are involved in finding patterns from digitally represented data, and generating a model based on the data. As shown in Fig. 1, these steps include data set preparation 102, together with an iterative process of model development 104 and validation 106. In this iterative process, a candidate model is created based upon reporting from a dataset, and the fitness of the resulting model is evaluated. 
The result is then reevaluated to confirm model fitness. This process is repeated, using a new set of model parameter permutations until a predictive candidate model is found), wherein a measured value of a composite index is a measure of change of liability-dependent economic measuring factors of regionally defined systems extracted from measurement parameters providing a measuring system with technical-based measurement of drivers (Paragraph 0055, A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; By way of example, external data from the following sources may be utilized: U.S. Census, such as Population Density, and housing density; Paragraph 0062, A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, by selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0063, Before data from the training set 120 or testing set 122 are submitted for further use, it is possible to use a segmentation filter 123 to focus the model upon a particular population or subpopulation of data. These subpopulations of dataset 108 may be further limited to types of violations, such as speeding or running a red light, and as particular geography, such as a residence in a particular state or city; Paragraph 0146, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated. This may be due to a number of circumstances, such as a change in demographics. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model; Examiner notes that Vlasimsky Richard et al. discloses measurement of drivers since it is selecting the risk factors having the highest statistical correlation. 
Further, it continuously monitors any significant changes in the incoming data for the risk factors), wherein the measurement parameters assigned to parameterized liability risk drivers are measured and transmitted to a central processing device of the electronic system for generating the measured time-dependent, composite index parameter, the method comprising (Paragraph 0075, Historical risk values, such as those for loss ratio field 414, may be time-segregated to ascertain the relative predictive value of the most current information versus older data. An aggregator 416 may consider the time-segregated data in respective groups to see if there is a benefit in using segregated data for different time intervals, such as data for the prior year 418, prior three years 420, or policy lifetime 422; Paragraph 0076, Pattern 424 is a feature extractor that contains a lookup preprocessor 426. The lookup pre-processor 426 accesses external data 114 to provide or report from derived data 428, which has been obtained as described above. This data receives special handling to form ensembles in an expert way according to a predetermined set of derived data rules 428. The lookup pre-processor 426 may utilize a variety of numeric, nominal or ordinal techniques as statistical preprocessors. These may operate on values including SIC codes, NCCI codes, zip codes, county codes, country codes, state codes, injury statistics, health cost statistics, unemployment information, and latitude and longitude. These may be applied using expert rules to convert such codes or values into statistically useful information):
scanning the liability system for predefined measurement parameters capturing dynamic characteristics of at least one liability risk driver, and automatically identifying and marking impacting liability risk drivers, the identified and marked liability risk driver either contracting or expanding a measured liability risk exposure (Paragraph 0060, The training set 120 is a subset of dataset 108 that is used to develop the predictive model. During the "training" process, and during the course of model development 104, the training set 120 is presented to a library of algorithms that are shown generally as pattern recognition engine 126. The pattern recognition engine performs multivariate, non-linear analysis to 'fit' a model to the training set 120. The algorithms in this library may be any statistical algorithm that relates one or more variables to one or more other variables and tests the data to ascertain whether there is a statistically significant association between variables. In other words, the algorithm(s) operate to test the statistical validity of the association between the risk factors and the associated outcomes; Paragraph 0067, Output from the pattern recognition engine 126 is provided to risk mapping logic 128 for model development. Risk mapping logic 128 receives output from the pattern recognition engine 126, selects the most statistically significant fields for combination into risk variable groups, builds relationships between the risk variable groups to form one or more ensembles, and analyzes the ensembles by quantifying the variables and relationships in association with a risk parameter; Paragraph 0136, Premium modification logic tracks changes in risk factors over time), wherein in case the identified liability risk drivers expand the measured liability risk exposure, additional liability risk drivers are selected until the measured liability risk exposure is contracting due to the additional risk drivers, and mutual normalization of the liability risk drivers is initiated, and wherein in case the identified liability risk drivers contract the measured liability risk exposure, mutual normalization of the liability risk drivers is directly initiated (Paragraph 0013, The present system includes built-in capacity control that balances the complexity of the solutions with the accuracy of the model developed. Optimizers are employed to reduce noise and optimize the accuracy of the model using common insurance industry metrics (e.g., loss ratio, net profit). In doing so, the present technology ensures that the model is neither over-fit nor under-fit. With a built-in ability to reduce the number of dimensions, the present platform condenses the risk factors (dimensions) being evaluated to the few that are truly predictive. A large number of parameters are thus not required to adjust the complexity of the model, thereby insulating the user from having to adjust a multitude of parameters to arrive at a suitable model. In the end, the models developed by the present system have less chance of introducing inconsistencies, ambiguities and redundancies, which, in turn, result in a higher predictive accuracy; Paragraph 0062, Multivariate models should be of a complexity that is just right. Models that incorporate too little complexity are said to under-fit the available data and result in poor predictive accuracy. On the other hand, models that incorporate too much complexity can over-fit to the data that is used. 
This causes the model to interpret noise as signal, which produces a less accurate predictive model. A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, by selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0067, Output from the pattern recognition engine 126 is provided to risk mapping logic 128 for model development. Risk mapping logic 128 receives output from the pattern recognition engine 126, selects the most statistically significant fields for combination into risk variable groups, builds relationships between the risk variable groups to form one or more ensembles, and analyzes the ensembles by quantifying the variables and relationships in association with a risk parameter; Paragraph 0146, Figs. 23A and 23B each represent the results of frequency distribution calculations for a particular risk factor that is scaled to a range of 0 to 100 on the X-axis; As stated in Paragraph 0044 of Applicant’s specification, “normalizing” is a transformation that provides individual driver weights);
selecting a first set of liability risk drivers by parametrizing an economic-based contribution to a general liability exposure loss, wherein the first set of liability risk drivers at least comprises a risk driver parametrizing Gross Domestic Product (GDP) growth, a risk driver parametrizing healthcare expenditure growth, and a risk driver parametrizing real wage growth based on an impact of captured alterations to the variable composite index parameter (Paragraph 0055, A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; By way of example, external data from the following sources may be utilized: a. U.S. Census, such as Population Density, and housing density; wage data; insurance law data; Paragraph 0062, A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, by selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0076, turn health cost statistics values into statistically useful information; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down; Paragraph 0146, As can be seen from Fig. 23A, there is no meaningful change in the nature of the incoming data, and so the predictive value on the implemented model should continue to be quite high on the basis of incoming data for Risk Factor 1. On the other hand, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated. This may be due to a number of circumstances, such as a change in demographics, the data becoming distorted due to a change in the way insurance agents are selecting people or companies to insure, a change in the way the data is being reported by the official source of the data, or a clerical error in entering or uploading the data. The monitoring logic may identify these changes by correlation analysis to compare the respective curves and print out reports for potential problem areas. 
If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model; It can be noted that the claim language is written in alternative form. The limitation taught by Vlasimsky Richard et al. is based on “real wage growth” and “healthcare expenditure growth” since any changes to those factors are measured over time. In this case, if the wage data is a significant factor, then it will be added to the model), the liability risk drivers being mutually normalized to each other (Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; As stated in Paragraph 0044 of Applicant’s specification, “normalizing” is a transformation that provides individual driver weights);
selecting additional liability risk drivers by parametrizing at least societal and/or legal and/or political based contributions to the general liability exposure loss (Paragraph 0099, In another example, a graph was created to display the loss per unit exposure across various ranges of the number of vehicles on a policy at issue. In comparison to the previous example where loss is correlated to population density, the trend line for number of vehicles shows a flatter linear correlation that the more vehicles on a policy, the higher the loss per unit exposure. Although variance exists across values for this risk factor, they do not vary as widely as those for population density; As stated in Figure 4 of Applicant’s specification, population density is a societal factor; It can be noted that the claim language is written in alternative form. The limitation taught by Vlasimsky is based on “societal contributions”), dynamically applying the additional liability risk drivers based on their impact to the measured variable composite index parameter (Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down) and dynamically normalizing the liability risk drivers to each other (Paragraph 0098, In one example, a graph was created to display the loss per unit exposure across various ranges of population density per square miles based on zip code. The trend line illustrates a strong linear correlation that the more densely populated an area, the higher the loss per unit exposure), wherein for the normalizing of the selected liability risk drivers, automatically perform a weighting process of the selected liability risk drivers by applying a defined transformation to the final time series of the parameters by providing individual set weights and/or individual driver weights (Paragraph 0064, Accordingly, the pattern recognition engine 126 uses statistical correlations to identify data parameters or fields that constitute risk factors from the training set 120. The data fields may be analyzed singly or in different combinations for this purpose. The use of multivariate ANOVA analysis is particularly advantageous for this purpose. The pattern recognition engine 126 selects and combines statistically significant data fields by performing statistical analysis, such as a multivariate statistical analysis, relating these data fields to a risk value under study. 
Generally, the multivariate analysis combines the respective data fields using a statistical processing technique to stratify a relative risk score and relate the risk score to a risk value under study; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0109, The relative risk score may be, for example, an overall chance of incurring a loss as predicted by an ensemble and scaled to a range of 0 to 100 on the basis of the model output a histogram or frequency distribution of this predictive value; Paragraph 0146, In addition to the previously described system functionalities, it is useful to provide monitoring logic 2108 to continuously assess incoming data and the predictive accuracy of the model. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model), wherein historic exposure and loss data assigned to a geographic region are selected from a dedicated data storage comprising region-specific data, and historic measurement parameters are generated corresponding to the measurement parameters (Paragraph 0055, External data 114 generally constitutes third party information that is optionally but preferably leveraged to augment the dataset 108 and prepare for the modeling process. A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; Paragraph 0058, Another example in the use of zip codes includes relating a geographic location to external weather information, such as average weather conditions or seasonal hail or other storm conditions that may also be used as predictive loss indicators. Other uses of derived data may include using demographic studies to assess likely incidence of disease or substance abuse on the basis of derived age and geographical location), and wherein the liability exposure signal is weighted by the historic measurement parameters; for optimization, based on the weighted liability risk drivers and sets of liability risk drivers (Paragraph 0064, Accordingly, the pattern recognition engine 126 uses statistical correlations to identify data parameters or fields that constitute risk factors from the training set 120. The data fields may be analyzed singly or in different combinations for this purpose. The use of multivariate ANOVA analysis is particularly advantageous for this purpose. The pattern recognition engine 126 selects and combines statistically significant data fields by performing statistical analysis, such as a multivariate statistical analysis, relating these data fields to a risk value under study. 
Generally, the multivariate analysis combines the respective data fields using a statistical processing technique to stratify a relative risk score and relate the risk score to a risk value under study; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0113, grid computer architecture), automatically selecting a minimum number of liability risk drivers in relation to maximized statistical significance …, and providing the minimum number of liability risk drivers as a reduced set of liability risk drivers out of all available liability risk drivers using best fit characteristics, wherein to select the minimum number of liability risk drivers in relation to maximized statistical significance based on the weighted liability risk drivers and sets of liability risk drivers includes applying … maximization compared to a number of selected liability risk drivers (Paragraph 0068, In one aspect, while building models by use of the risk mapping logic 128, the risk factor with the most predictive information may be first selected. The model then selects and adds the risk factors that complement the existing risk factors with the most unique predictive information. To determine the most predictive model, results from the model are analyzed to determine which model has the highest predictive accuracy across the entire book of business. Such risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Examiner notes that the model with the highest predictive accuracy is the model that maximizes statistical significance), for stabilization, scaling impacts of the different liability risk drivers to a same scale by applying as normalization the transformation to a final time series … (Paragraph 0064, Accordingly, the pattern recognition engine 126 uses statistical correlations to identify data parameters or fields that constitute risk factors from the training set 120. The data fields may be analyzed singly or in different combinations for this purpose. The use of multivariate ANOVA analysis is particularly advantageous for this purpose. The pattern recognition engine 126 selects and combines statistically significant data fields by performing statistical analysis, such as a multivariate statistical analysis, relating these data fields to a risk value under study. 
Generally, the multivariate analysis combines the respective data fields using a statistical processing technique to stratify a relative risk score and relate the risk score to a risk value under study; Paragraph 0080, When a sufficient number of such tests have been run, such as thousands of such tests, it is possible to use logical training processes to weight or emphasize the variables and algorithms that in combination yield the highest predictive value; Paragraph 0109, The relative risk score may be, for example, an overall chance of incurring a loss as predicted by an ensemble and scaled to a range of 0 to 100 on the basis of the model output a histogram or frequency distribution of this predictive value), and selecting the minimum number of liability risk drivers in relation to maximized statistical significance based on the weighted liability risk drivers and sets of liability risk drivers by applying … maximization compared to a number of selected liability risk drivers (Paragraph 0068, In one aspect, while building models by use of the risk mapping logic 128, the risk factor with the most predictive information may be first selected. The model then selects and adds the risk factors that complement the existing risk factors with the most unique predictive information. To determine the most predictive model, results from the model are analyzed to determine which model has the highest predictive accuracy across the entire book of business. Such risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity);
dynamically adapting the set of liability risk drivers by varying the minimum number of liability risk drivers in relation to the measured liability exposure signal by periodic time response, and generating the time-dependent, composite index parameter based on the adapted reduced set of liability risk drivers (Paragraph 0068, In one aspect, while building models by use of the risk mapping logic 128, the risk factor with the most predictive information may be first selected. The model then selects and adds the risk factors that complement the existing risk factors with the most unique predictive information. To determine the most predictive model, results from the model are analyzed to determine which model has the highest predictive accuracy across the entire book of business. Such risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down), the liability risk-driven interaction between an automated risk-transfer system and an automated operating device being adjusted based upon the time-dependent, composite index parameter, wherein operation of the automated systems is adapted based on the measured time-dependent, composite index parameter accounting peaks in time-dependent fluctuations of the measurement parameters (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. 
A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “automated risk-transfer.” Based on broadest reasonable interpretation in light of the specification, Vlasimsky Richard et al. discloses “an automated risk-transfer system and an operating device being adjusted based upon the time-dependent composite index parameter” because the terms/conditions/policies are updated automatically when the data indicates an inadequate loss ratio);
dynamically adapting the first set of liability risk drivers varying the liability risk drivers in relation to the measured liability exposure signal by periodic time response, and transmitting a request for a measurement parameter update periodically to measuring devices for dynamic detection of variations of the measure parameters adapting measurable characteristics of the selected risk drivers by direct measurements (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down);
comparing the exposure derived from the liability risk drivers to an effective measured exposure, and switching automatically to liability risk drivers based on saved historic data to minimize a possibly measured deviation of the exposures by dynamically adapting the liability risk drivers based on the saved historic data, … the deviation exceeding a threshold (Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down; Paragraph 0146, In addition to the previously described system functionalities, it is useful to provide monitoring logic 2108 to continuously assess incoming data and the predictive accuracy of the model. Figs. 23A and 23B show a comparison between stationary (Fig. 23A for Risk Factor 1) and non-stationary (Fig. 23B for Risk Factor 2) risk factors. Figs. 23A and 23B each represent the results of frequency distribution calculations for a particular risk factor that is scaled to a range of 0 to 100 on the X-axis. Circles identify calculation results for data that was used to develop the model, while squares identify calculation results for data that has arrived after the model was implemented. As can be seen from Fig. 23A, there is no meaningful change in the nature of the incoming data, and so the predictive value on the implemented model should continue to be quite high on the basis of incoming data for Risk Factor 1. On the other hand, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated; Paragraph 0146, This may be due to a number of circumstances, such as a change in demographics, the data becoming distorted due to a change in the way insurance agents are selecting people or companies to insure, a change in the way the data is being reported by the official source of the data, or a clerical error in entering or uploading the data. The monitoring logic may identify these changes by correlation analysis to compare the respective curves and print out reports for potential problem areas. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model; Examiner interprets a “significant change in the incoming data for a specific factor” as a “deviation exceeding a threshold”);
and automatically and dynamically steering the liability risk-driven interaction between the automated risk-transfer system and the automated operating device based upon the time-dependent, composite index parameter (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks), wherein the steering of the automated operating device is adjusted based upon the liability exposure signal, wherein the automated risk-transfer system is activated by the electronic system, and when the risk-transfer system is activated by the electronic system, an automated repair node assigned to the automated risk-transfer system is activated by appropriate signal generation and transmission to resolve loss of a loss unit (see Figure 20 and related text in Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks).
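For illustration only (synthetic data; the 0.9 correlation cutoff is an assumption, not a value from the reference), the following Python sketch mirrors the monitoring logic of Vlasimsky Richard et al.'s Figs. 23A and 23B as quoted above: frequency distributions of a risk factor scaled to 0-100 are computed for model-development data and for post-implementation data, and the factor is flagged as non-stationary when the two curves are not closely correlated:

import numpy as np

rng = np.random.default_rng(2)
dev_data = rng.normal(50, 10, 5000).clip(0, 100)   # data used to develop the model
incoming = rng.normal(65, 10, 5000).clip(0, 100)   # data arriving after implementation

# Frequency distributions over the 0-100 scale (the "circles" and "squares").
bins = np.linspace(0, 100, 21)
dev_freq, _ = np.histogram(dev_data, bins=bins, density=True)
inc_freq, _ = np.histogram(incoming, bins=bins, density=True)

# Correlation analysis comparing the two curves.
corr = np.corrcoef(dev_freq, inc_freq)[0, 1]
if corr < 0.9:
    print(f"risk factor flagged as non-stationary (corr={corr:.2f}); "
          "model may need updating or tuning")
else:
    print(f"no meaningful change detected (corr={corr:.2f})")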
Although Vlasimsky Richard et al. discloses selecting a minimum number of liability risk drivers in relation to maximized statistical significance (Paragraph 0068, risk factors may be continuously added until the model is over-fit and predictive accuracy begins to decline due to over complexity; Examiner notes that the technique taught by Vlasimsky Richard et al. is known as a stepwise selection algorithm), Vlasimsky Richard et al. does not specifically disclose wherein the algorithm used for selecting the minimum number of liability risk drivers in relation to maximized statistical significance is the Akaike Information Criterion (AIC).
However, Lindsey discloses automatically selecting a minimum number of ... drivers in relation to maximized statistical significance using a grid, and provide the minimum number of ... drivers as a reduced set of ... drivers out of all available ... drivers using best fit characteristics, wherein the selecting of the minimum number of … drivers in relation to maximized statistical significance based on the weighted … drivers and sets of … drivers includes applying Akaike Information Criterion (AIC)-maximization compared to a number of selected … drivers; for stabilization, …, and selecting the minimum number of … drivers in relation to maximized statistical significance based on the weighted … drivers and sets of … drivers by applying R2-maximization compared to a number of selected … drivers (Page 661, Best Subsets, The optimal R2-adj value, 0.7582178, is obtained by the three-variable model with predictors dwgs, spans, and ccost. This is the same model obtained by forward selection and backward elimination under AIC. This model also optimizes AIC, with an AIC of 25.2924. The most optimal model under BIC and AICc is the two-predictor model using dwgs and spans. This is the same model found by forward selection under BIC. We find that Mallows’s Cp suggests the five-predictor model when we choose the best model as having a Cp value close to the predictor size +1. Otherwise, when picking the smallest Mallows’s Cp model, we would choose the two-predictor model that BIC and AICc chose. This is one of the occasions when there is no completely clear, best final model. We can narrow our decision down to the two mentioned models. We might investigate whether AICc is more appropriate than AIC in this situation. Recall that picking the model with the highest R2-adj generally leads to overfitting (Sheather 2009). Regardless, there is little difference between the values of AIC and R2-adj for the two- and three-predictor models. We will arbitrarily pick the two-predictor model that estimates time by dwgs and spans as our final model. This selection yields no high variance inflation factors; Examiner notes that Lindsey discloses selecting, using a grid, the number of drivers (e.g., predictors) that optimizes AIC. In this case, the model that maximizes the adjusted R2 is the three-variable model with predictors dwgs, spans, and ccost. See below).
[media_image1.png: greyscale PNG reproducing Lindsey’s best-subsets selection results referenced above]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system used to select a minimum number of liability risk drivers in relation to maximized statistical significance (e.g., a stepwise selection algorithm) of the invention of Vlasimsky Richard et al. to further incorporate wherein the algorithm used to select the minimum number of liability risk drivers in relation to maximized statistical significance using a grid is the Akaike Information Criterion (AIC) of the invention of Lindsey because doing so would allow the system to display and highlight the optimal model (see Lindsey, Page 653, 1.1 Information criteria & Best subsets). Therefore, this results in a simple substitution of a known technique used to select risk drivers (e.g., a stepwise selection algorithm) for another known technique used to select risk drivers (e.g., AIC). The simple substitution of one known technique for another producing a predictable result renders the claim obvious.
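By way of illustration only: Lindsey’s best-subsets approach scores every candidate predictor set over a grid and keeps the model with the optimal information criterion (conventionally, the lowest AIC indicates the best fit). The sketch below is a hypothetical, self-contained analogue using simulated data; it does not reproduce Lindsey’s example or its dwgs/spans/ccost predictors.

```python
from itertools import combinations
import numpy as np

def aic_of_subset(X, y, cols):
    """AIC = n*ln(RSS/n) + 2k for an OLS fit on the given predictor subset."""
    Xs = np.column_stack([np.ones(len(y)), X[:, cols]])  # intercept + subset
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    n, k = len(y), Xs.shape[1]
    return n * np.log(rss / n) + 2 * k

def best_subset_by_aic(X, y):
    """Exhaustive grid over all non-empty predictor subsets; smallest AIC wins."""
    p = X.shape[1]
    subsets = (list(c) for r in range(1, p + 1) for c in combinations(range(p), r))
    return min(subsets, key=lambda cols: aic_of_subset(X, y, cols))

# Toy data: only the first two of four candidate drivers carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
print(best_subset_by_aic(X, y))  # typically [0, 1]
```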
Although Vlasimsky Richard et al. discloses wherein impacts of the different liability risk drivers are scaled to a same scale by applying, as normalization, the transformation to a final time series (Paragraph 0109, score can be scaled), the combination of Vlasimsky Richard et al. and Lindsey does not specifically disclose the formula used to normalize the selected liability risk drivers to the final time series of the parameters.
However, Bhanja discloses the … for stabilization, scaling impacts of the different liability risk drivers to a same scale by applying as normalization the transformation to a final time series xt' = [xt - min(xt)] / [max(xt) - min(xt)] (Page 2, 5.1 Min-Max Normalization, see formula below).
[media_image2.png: greyscale PNG reproducing Bhanja’s min-max normalization formula (Page 2, Section 5.1)]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system used to dynamically normalize the liability risk drivers of the invention of Vlasimsky Richard et al. and Lindsey to further specify the formula used for normalization to a final time series of the invention of Bhanja because doing so would allow the system to scale values to the same range of values (see Bhanja, Page 2, 5 Different Normalization Techniques). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
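By way of illustration only: the min-max transformation cited from Bhanja rescales every series to the common range [0, 1], which is what permits differently scaled liability risk drivers to be compared on a same scale. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def min_max_normalize(series):
    """xt' = (xt - min(x)) / (max(x) - min(x)); assumes a non-constant series."""
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo)

print(min_max_normalize([10, 15, 30]))  # [0.   0.25 1.  ]
```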
Although Vlasimsky Richard et al. discloses tracking significant changes to the incoming data and a need to access the model for purposes of updating or tuning the model when the data has changed (Paragraphs 0136 and 0146), Vlasimsky Richard et al. does not specifically disclose wherein the model is automatically updated in response to a trigger.
However, Salguetti et al. discloses … dynamically adapting the liability risk drivers based on the saved historic data, in response to the deviation exceeding a threshold (Paragraph 0009, As a further embodiment variant, the system can comprise a switching module comparing the exposure based upon the liability risk drivers to the effective occurring or measured exposure by switching automatically to liability risk drivers based on saved historic data to minimize a possibly measured deviation of the exposures by dynamically adapting the liability risk drivers based on saved historic data; Paragraph 0139, dynamically adapt the set 16 of liability risk drivers 311-313 varying the liability risk drivers 311-313 in relation to the measured liability exposure signal).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system used to select a minimum number of liability risk drivers in relation to maximized statistical significance (e.g., a stepwise selection algorithm), wherein the model may be updated or tuned when the incoming data shows a significant change, of the invention of Vlasimsky Richard et al. to further incorporate wherein the model is automatically updated or tuned in response to a trigger condition of the invention of Salguetti et al. because doing so would allow the system to dynamically adapt the set 16 of liability risk drivers in relation to the measured liability exposure signal (see Salguetti et al., Paragraph 0139). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
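By way of illustration only: the trigger condition at issue can be sketched as a deviation check that switches to liability risk drivers re-derived from saved historic data once a threshold is exceeded, consistent with the switching module Salguetti et al. describes in Paragraph 0009. The relative-deviation measure, the threshold value, and all names below are hypothetical assumptions.

```python
def adapt_drivers(current_drivers, derived_exposure, measured_exposure,
                  historic_data, refit_fn, threshold=0.1):
    """Return (drivers, was_updated).

    `refit_fn` re-selects/weights drivers from `historic_data`; it is only
    invoked when the relative deviation between the driver-derived exposure
    and the effectively measured exposure exceeds `threshold`.
    """
    deviation = abs(derived_exposure - measured_exposure) / max(abs(measured_exposure), 1e-9)
    if deviation > threshold:
        return refit_fn(historic_data), True   # trigger fired: adapt drivers
    return current_drivers, False              # within tolerance: keep drivers

drivers, updated = adapt_drivers(
    current_drivers=["gdp_growth", "claims_frequency"],
    derived_exposure=1.25, measured_exposure=1.00,
    historic_data=[], refit_fn=lambda h: ["inflation_rate"])
print(drivers, updated)  # ['inflation_rate'] True
```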
Regarding claim 7 (Currently Amended), which is dependent on claim 1, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 1. Vlasimsky Richard et al. further discloses wherein the liability-dependent, automated devices are realized as automated risk-transfer systems or as automated risk-transfer systems electronically interacting with a plurality of risk-exposed units with at least one measurable liability exposure, wherein in response to an occurring loss at the loss unit induced by a risk-exposed unit, the automated risk-transfer system is activated by signaling of the electronic system and the loss is automatically resolved by the risk-transfer system (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “automated risk-transfer.” Based on broadest reasonable interpretation in light of the specification, Vlasimsky Richard et al. discloses “an automated risk-transfer in response to an occurring loss” because the terms/conditions/policies are updated automatically when the data indicates an inadequate loss ratio).
Regarding claim 9 (Currently Amended), which is dependent on claim 1, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 1. Vlasimsky Richard et al. further discloses wherein the processing circuitry is configured to scan measuring devices or memories assignable to loss units of the electronic system for the measurement parameters capturing dynamic characteristics of at least one liability risk driver (Paragraph 0058, Another example in the use of zip codes includes relating a geographic location to external weather information, such as average weather conditions or seasonal hail or other storm conditions that may also be used as predictive loss indicators. Other uses of derived data may include using demographic studies to assess likely incidence of disease or substance abuse on the basis of derived age and geographical location; Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down; Examiner notes that the weather is a dynamic characteristic).
Regarding claim 11 (Original), which is dependent on claim 9, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 9. Vlasimsky Richard et al. further discloses wherein the measurement parameters of at least one of the liability risk drivers are generated based on saved historic data in a memory of the memories (Paragraph 0042, Figs 23A and 23B graphically illustrate data monitoring on a comparative basis where the frequency distribution of incoming data is stationary (Fig. 23A) with respect to historical data that populates the system, and nonstationary (Fig. 23B); Paragraph 0055, A number of external sources are available and may be accessed for reporting purposes to accept and integrate external data for modeling purposes that extend modeling parameters beyond what underwriters currently use today. The external data may be used to enrich data that is otherwise available to achieve a greater predictive accuracy. Data such as this may include, for example, firmagraphic, demographic, econometric, geographic, weather, legal, vehicle, industry, driver, property, and geo-location data; By way of example, external data from the following sources may be utilized: U.S. Census, such as Population Density, and housing density), when one or more measurement parameters are not scannable for a liability risk driver of the operating device by the electronic system (Paragraph 0062, A principle that is popularly known as Occam's Razor holds that one may arrive at an optimum level of complexity that is associated with the highest predictive accuracy by eliminating concepts, variables or constructs that are not needed to explain or predict a phenomenon. Limiting the risk factors to a predetermined number, such as ten per coverage model, allows utilization of the most predictive independent variables, but is also general enough to fit a larger range of potential policies in the future. A smaller set of risk factors advantageously minimizes disruptions to the eventual underwriting process, reduces data entry and simplifies explainability. Moreover, selecting a subset of risk factors having the highest statistical correlation, and thus the highest predictive information, provides the most desirable target model; Paragraph 0063, Before data from the training set 120 or testing set 122 are submitted for further use, it is possible to use a segmentation filter 123 to focus the model upon a particular population or subpopulation of data. These subpopulations of dataset 108 may be further limited to types of violations, such as speeding or running a red light, and as particular geography, such as a residence in a particular state or city).
Regarding claim 13 (Previously Presented), which is dependent on claim 1, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 1. Vlasimsky Richard et al. further discloses wherein the measuring devices comprise a trigger module triggering variation of the measurement parameters and transmitting detected variations of one or more measurement parameters to the electronic system (Paragraph 0136, Premium modification logic 1842 may be linked to business information that tracks the financial performance of policies in effect, as well as changes in risk factors over time. The premium modification logic may recommend a premium modification on the basis of current changes to data indicating a desirability of adjusting premium amounts up or down; Paragraph 0146, On the other hand, Fig. 23B shows a significant change in the incoming data for Risk Factor 2 where the respective lines identified by the circles and squares are not closely correlated. This may be due to a number of circumstances, such as a change in demographics, the data becoming distorted due to a change in the way insurance agents are selecting people or companies to insure, a change in the way the data is being reported by the official source of the data, or a clerical error in entering or uploading the data. The monitoring logic may identify these changes by correlation analysis to compare the respective curves and print out reports for potential problem areas. If an investigation confirms that the required data truly has changed, this may reflect a need to access analytical logic 2104 for purposes of updating or tuning the model to assure continuing predictive accuracy of the implemented model).
Regarding claim 14 (Original), which is dependent on claim 7, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 7. Vlasimsky Richard et al. further discloses wherein when the risk-transfer system is activated by the electronic system, the risk-transfer system unlocks an automated repair node assigned to the risk-transfer system by appropriate signal generation and transmission to resolve the loss of the loss unit (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “automated risk-transfer.” Based on broadest reasonable interpretation in light of the specification, Vlasimsky Richard et al. discloses “an automated repair node” because the terms/conditions/policies are updated automatically when the data indicates an inadequate loss ratio).
Regarding claim 15 (Original), which is dependent on claim 1, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 1. Vlasimsky Richard et al. further discloses wherein the processing circuitry is configured to automatically capture and automatically aggregate measured loss parameters over all risk-exposed units via appropriate interface modules and an appropriate data transmission network (Paragraph 0069, The output from risk map logic 128 includes a group of statistically significant variables that are related by association to form one or more ensembles that may be applied for use in a model. These results are transferred to model evaluation logic 130. The model evaluation logic 130 uses data from the test set 122 to validate the model as a predictive model. The test may be used, for example, to evaluate loss ratio, profit, frequency of claims, severity of risk, policy retention, and accuracy of prediction. The test set 122 is a separate portion of dataset 108 that is used to test the risk mapping results or ensemble. Values from the test set 122 are submitted to the model evaluation logic to test the predictive accuracy of a particular ensemble).
Regarding claim 16 (Original), which is dependent on claim 7, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 7. Vlasimsky Richard et al. further discloses wherein risk exposure units are connected to the automated risk-transfer system by payment transfer modules configured to receive and store payment transfer parameters from the risk exposure units for transfer of risks associated with the risk exposure units from the risk exposure units to the risk-transfer system (Paragraph 0050, The predictive model is used, in general terms, to assure that total losses for a given policy type should be less than the total premiums that are paid; Paragraph 0053, The policy data 110 includes data that is specific to any policy type, such as automobile, health, life, worker's compensation, malpractice, home, general liability, intellectual property, or disability policies. The policy data 110 contains information including, for example, the number of persons or employees who are covered by the policy, the identity of such persons, the addresses of such persons, coverage limits, exclusions, limitations, payment schedules, payment tracking, geographic scope, policy type, prior risk assessments, historical changes to coverage, and any other policy data that is conventionally maintained by an insurance company).
Regarding claim 17 (Original), which is dependent on claim 7, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 7. Vlasimsky Richard et al. further discloses wherein the risk-exposed units are connected to the automated risk-transfer system transferring risk exposure associated with occurrence of defined risk events from the risk-exposed units to the automated risk-transfer system by equitable (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks), mutually aligned risk transfer parameters and correlated aligned payment transfer parameters (Paragraph 0050, The predictive model is used, in general terms, to assure that total losses for a given policy type should be less than the total premiums that are paid), wherein the automated risk-transfer system is connected to a … risk-transfer system and transfers at least parts of the risk exposure associated with the occurrence of the defined risk events from the risk-transfer system to the … risk-transfer system by equitable, mutually aligned … risk transfer parameters and correlated aligned … payment transfer parameters, wherein, in response to occurrence of one of the defined risk events, loss parameters measuring the loss at the risk-exposed units are captured and transmitted to the automated risk-transfer system, and wherein the loss is automatically covered by the automated risk-transfer system based on the equitable, mutually aligned risk transfer parameters (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information.
For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks).
Although Vlasimsky Richard et al. discloses all the limitations above and wherein the loss is automatically covered by the automated risk-transfer system based on the equitable, mutually aligned risk transfer parameters (e.g., automated underwriting analysis such as adjusting the premium price), the combination of Vlasimsky Richard et al., Lindsey, and Bhanja does not specifically disclose wherein at least part of the risk is transferred to a second risk-transfer system.
However, Salguetti et al. discloses wherein the automated risk-transfer system is connected to a second risk-transfer system and transfers at least parts of the risk exposure associated with the occurrence of the defined risk events from the risk-transfer system to the second risk-transfer system by equitable, mutually aligned second risk transfer parameters and correlated aligned second payment transfer parameters, wherein, in response to occurrence of one of the defined risk events, loss parameters measuring the loss at the risk-exposed units are captured and transmitted to the automated risk-transfer system, and wherein the loss is automatically covered by the automated risk-transfer system based on the equitable, mutually aligned risk transfer parameters (Paragraph 0006, Moreover, the system should be better able to capture how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in liability insurance technology systems. Furthermore, it is an object of the invention to provide an adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0008, Moreover, the system is able to dynamically capture and adapt how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems. Furthermore, the invention is able to provide an electronically automated, adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0029, FIG. 14 shows a diagram illustrating schematically the implementation of short-term extension modules to the system allowing a generation of the expected loss after reinsurance risk transfer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the risk transfer system of the invention of Vlasimsky Richard et al. to further incorporate wherein at least part of the risk is automatically transferred to a second risk-transfer system of the invention of Salguetti et al. because doing so would allow the system to create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems (see Salguetti et al., Paragraph 0008). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 18 (Original), which is dependent on claim 7, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 7. Vlasimsky Richard et al. further discloses in response to the occurrence of one of the defined risk events, loss parameters measuring the loss at the risk-exposed units are captured and transmitted to the automated risk-transfer system, the loss being automatically covered by the automated risk-transfer system (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks; Examiner interprets the “automatic underwriting analysis” as the “automated risk-transfer.” Based on broadest reasonable interpretation in light of the specification, Vlasimsky Richard et al. discloses “an automated risk-transfer” because the terms/conditions/policies are updated automatically when the data indicates an inadequate loss ratio).
Regarding claim 19 (Original), which is dependent on claim 17, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 17. Vlasimsky Richard et al. further discloses wherein the automated risk-transfer system is connected to the … risk-transfer system by payment transfer modules configured to receive and store … payment parameters from the automated risk-transfer system for the transfer of risks associated with the risk exposures of the risk-exposed units from the automated risk-transfer system to the … risk-transfer system (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks).
Although Vlasimsky Richard et al. discloses all the limitations above and payment parameters (e.g., total premiums that are paid and coverage limits), the combination of Vlasimsky Richard et al., Lindsey, and Bhanja does not specifically disclose wherein at least part of the risk is transferred to a second risk transfer system.
However, Salguetti et al. discloses wherein the automated risk-transfer system is connected to the second risk-transfer system by payment transfer modules configured to receive and store second payment parameters from the automated risk-transfer system for the transfer of risks associated with the risk exposures of the risk-exposed units from the automated risk-transfer system to the second risk-transfer system (Paragraph 0006, Moreover, the system should be better able to capture how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in liability insurance technology systems. Furthermore, it is an object of the invention to provide an adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0008, Moreover, the system is able to dynamically capture and adapt how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems. Furthermore, the invention is able to provide an electronically automated, adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0029, FIG. 14 shows a diagram illustrating schematically the implementation of short-term extension modules to the system allowing a generation of the expected loss after reinsurance risk transfer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the risk transfer system of the invention of Vlasimsky Richard et al. to further incorporate wherein at least part of the risk is automatically transferred to a second risk-transfer system of the invention of Salguetti et al. because doing so would allow the system to create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems (see Salguetti et al., Paragraph 0008). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 20 (Original), which is dependent on claim 19, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 19. Although Vlasimsky Richard et al. discloses all the limitations above and a payment transfer (e.g., total premiums that are paid and coverage limits), the combination of Vlasimsky Richard et al., Lindsey, and Bhanja does not specifically disclose a second risk-transfer system, wherein the second risk-transfer system is only activatable by triggering a payment transfer matching a predefined activation control parameter.
However, Salguetti et al. discloses wherein the processing circuitry is configured to capture a payment transfer from the automated risk-transfer system to the payment transfer modules, wherein the second risk-transfer system is only activatable by triggering a payment transfer matching a predefined activation control parameter (Paragraph 0006, Moreover, the system should be better able to capture how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in liability insurance technology systems. Furthermore, it is an object of the invention to provide an adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0008, Moreover, the system is able to dynamically capture and adapt how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems. Furthermore, the invention is able to provide an electronically automated, adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0029, FIG. 14 shows a diagram illustrating schematically the implementation of short-term extension modules to the system allowing a generation of the expected loss after reinsurance risk transfer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the risk transfer system of the invention of Vlasimsky Richard et al. to further incorporate wherein at least part of the risk is automatically transferred to a second risk-transfer system of the invention of Salguetti et al. because doing so would allow the system to create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems (see Salguetti et al., Paragraph 0008). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 21 (Original), which is dependent on claim 7, the combination of Vlasimsky Richard et al., Lindsey, Bhanja, and Salguetti et al. discloses all the limitations in claim 7. Vlasimsky Richard et al. further discloses wherein a loss associated with the one of the defined risk events and allocated to the risk-exposed units is covered distinctly and/or separately by an automated first resource pooling system of the automated risk-transfer system via a transfer of payments from the automated first resource pooling system to the risk-exposed units (Paragraph 0059, The additional derived data increases the number of risk factors available to the model, which allows for more robust predictions. Besides deriving new risk factors, pre-processing also prepares the data so modeling is performed at the appropriate level of information. For example, during preprocessing, actual losses are especially noted so that a model only uses loss information from prior terms. Accordingly, it is possible to adjust the predictive model on the basis of time-sequencing to see, for example, if a recent loss history indicates that it would be unwise to renew an existing policy under its present terms; Paragraph 0086, In another aspect, as shown in Fig. 12, it will be appreciated that the terms and conditions for a particular policy may be adjusted to accommodate irregularities in the predictive model results. The loss ratio results of Fig. 12 show an anomalous upward bulge for the medium risk segment of business. This may be smoothed upon policy renewal or the writing of new policies, for example, by capping the amount of a particular loss category; Paragraph 0084, An allocation routine 906 may allocate selected policies to the deciles where they achieve the best financial result according to fitness of the model for a particular category of risk; Paragraph 0085, This type of policy allocation may be provided as shown in Fig. 10 for a particular policy that is measured by loss ratio; Paragraph 0139, The processes of development 2008 and 2010 are supported by automated underwriting analysis 2016, an algorithm library 2018 that may be used in various ensembles as shown in Fig. 4, and a policy system for use in generating policies as shown in Fig. 9. A workflow engine 2022 facilitates the creation, assignment and tracking of discrete tasks), …
Although Vlasimsky Richard et al. discloses all the limitations above, including a risk assessment in response to an occurring loss (e.g., increasing premiums) and a payment transfer (e.g., premiums paid), Vlasimsky Richard et al. does not specifically disclose wherein a second payment transfer from an automated second resource pooling system of the second risk-transfer system to the automated first resource pooling system is triggered via the generated activation signal based on the measured actual loss of the risk-exposed units by the processing circuitry.
However, Salguetti et al. discloses wherein a second payment transfer from an automated second resource pooling system of the second risk-transfer system to the automated first resource pooling system is triggered via the generated activation signal based on the measured actual loss of the risk-exposed units by the processing circuitry (Paragraph 0006, Moreover, the system should be better able to capture how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in liability insurance technology systems. Furthermore, it is an object of the invention to provide an adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0008, Moreover, the system is able to dynamically capture and adapt how and where risk is transferred, which will create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems. Furthermore, the invention is able to provide an electronically automated, adaptive pricing tool for insurance products based upon liability exposure, especially for mid-size risks; Paragraph 0029, FIG. 14 shows a diagram illustrating schematically the implementation of short-term extension modules to the system allowing a generation of the expected loss after reinsurance risk transfer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the risk transfer system of the invention of Vlasimsky Richard et al. to further incorporate wherein at least part of the risk is automatically transferred to a second risk-transfer system of the invention of Salguetti et al. because doing so would allow the system to create a more efficient and correct use of risk and loss drivers in the liability insurance technology systems (see Salguetti et al., Paragraph 0008). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Hayward et al. (US 2021/0390624 A1) – discloses discovering new causes of loss that may be utilized to set pricing of insurance. Causes of loss for homeowners may include wind, hail, fire, mold, etc. The present embodiments may dynamically characterize insurance claims, and/or dynamically determine causes of loss associated with insurance claims, which may vary geographically. The present embodiments may dynamically update pricing models to facilitate better matching insurance premium price to actual risk (Paragraph 0051).
Berhe (Berhe, T.A. and Kaur, J., 2017. Determinants of insurance companies’ profitability analysis of insurance sector in Ethiopia. International Journal of Research in Finance and Marketing (IJRFM), 7(4), pp.124-137) – discloses a study to identify the key factors that affect profitability of insurance companies in Ethiopia. Specifically, it investigates the internal or firm-specific variables (size of insurance companies, capital adequacy, leverage ratio, liquidity ratio, and loss ratio) and the external or macro variables (market share, growth rate of GDP, and inflation rate). Results of the regression analysis revealed that size of insurance, capital adequacy, liquidity ratio, and growth rate of GDP were the major factors that significantly affect the profitability of insurance companies. On the other hand, leverage ratio, loss ratio, market share, and inflation rate were found to have an insignificant effect on insurance companies’ profitability.
Al-Subaihi (Al-Subaihi, A.A., 2002. Variable selection in multivariable regression using SAS/IML. Journal of Statistical Software, 7, pp.1-20) – discloses the benefits of using Adjusted R2 Selection Criterion and Corrected Form of Akaike’s Information Criterion (see pages 10-11).
Goldburd (Goldburd, M., Khare, A., Tevet, D. and Guller, D., 2016. Generalized linear models for insurance rating. Casualty Actuarial Society, CAS Monographs Series, 5) – discloses: an Akaike Information Criterion (AIC) to select model parameters (page 66); model stability (page 73); and random variables and fixed variables (page 93, random effects and fixed effects).
Kelley (Kelley, K.H., Fontanetta, L.M., Heintzman, M. and Pereira, N., 2018. Artificial intelligence: Implications for social inflation and insurance. Risk Management and Insurance Review, 21(3), pp.373-387) - discloses how social inflation continues to drive loss cost trends for insurance carriers and should not be underestimated (see Pages 375-376).
Brady et al. (US 2016/0078544 A1) - discloses a “predictive model” that may, for example, establish premium pricing functions. As used herein, the phrase “predictive model” might refer to, for example, any of a class of algorithms that are used to understand relative factors contributing to an outcome, estimate unknown outcomes, discover trends, and/or make other estimations based on a data set of factors collected across prior trials. Note that a predictive model might refer to, but is not limited to, methods such as ordinary least squares regression, logistic regression, decision trees, neural networks, generalized linear models, and/or Bayesian models. The predictive model may be trained with historical premium and claim transaction data, and may be applied to a new insurance product to help determine a pricing function (see Paragraph 0046).
Jung et al. (US 2009/0100095 A1) – discloses transferring at least a portion of a risk from a first insurer to a second insurer, where the second insurer (reinsurer) is obligated to cover any part of a total annual loss burden that exceeds an agreed deductible. In an example, transferor module 110 transfers risk from a direct insurer to a reinsurer and the reinsurer is obligated to cover all of a total annual loss burden exceeding an agreed deductible (Paragraph 0049).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ whose telephone number is (571)272-4668. The examiner can normally be reached Mon-Thurs 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia H Munson can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARJORIE PUJOLS-CRUZ/Examiner, Art Unit 3624