Prosecution Insights
Last updated: April 19, 2026
Application No. 18/651,387

SYSTEM AND METHOD FOR GENERATING SIMULATED CHARACTERS FOR POPULATION GROUPS

Status: Final Rejection (§102)
Filed: Apr 30, 2024
Examiner: WHITE, DYLAN C
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 78% (above average): 672 granted / 867 resolved, +25.5% vs TC avg
Interview Lift: +12.1% (moderate), measured across resolved cases with an interview
Typical Timeline: 2y 4m average prosecution
Career History: 905 total applications across all art units, 38 currently pending
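As a quick sanity check, the headline figures in this panel are internally consistent. The sketch below uses only the counts shown above (672 granted, 867 resolved, 38 pending, 905 total); the function and variable names are illustrative, not from any real API.

```python
# Sanity-check the examiner statistics shown in the panel above.
# Counts come from the panel; names here are illustrative only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

granted, resolved = 672, 867
pending, total = 38, 905

rate = allow_rate(granted, resolved)
print(f"Career allow rate: {rate:.1f}%")  # ~77.5%, displayed as 78%

# Resolved cases plus currently pending cases should equal total career applications.
assert resolved + pending == total
```

Note that 672/867 is 77.5%, which the dashboard appears to round up to the displayed 78%.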

Statute-Specific Performance

§101: 29.9% (-10.1% vs TC avg)
§103: 24.0% (-16.0% vs TC avg)
§102: 29.0% (-11.0% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 867 resolved cases
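The per-statute deltas all point back at a single Tech Center baseline: adding each deficit back to its allowance rate recovers the same value. A minimal check, using only the figures from the table above (dictionary layout and names are my own):

```python
# Each statute's rate minus its (negative) delta should recover the same
# implied Tech Center average. Figures are from the table above.

stats = {
    "101": (29.9, -10.1),
    "103": (24.0, -16.0),
    "102": (29.0, -11.0),
    "112": (8.4, -31.6),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average estimate

assert all(v == 40.0 for v in implied_tc_avg.values())
```

In other words, the dashboard appears to use a single 40.0% Tech Center average estimate for all four statutes.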

Office Action (Final Rejection, §102)
DETAILED ACTION

This Office Action is in reply to the Applicant's response after non-final rejection received on December 16, 2025. Claims 1-2 and 4-20 are currently pending in the instant application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The Examiner acknowledges the Applicant's amendments to claims 1, 7, 10, 11, 14, 16-17, 19, and 21 in the response of December 16, 2025. Claim 3 is canceled and claim 22 is added in the response.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2 and 4-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kiljanek, WO 2020/077163 A1 (hereafter Kiljanek).

Regarding claim 1, Kiljanek discloses: determining a respective simulated population for each simulated character of multiple simulated characters based upon respective characteristic values associated with each simulated character (see at least [0041] The simulated patient population dataset 140 may take the form of a table, a database, or a similar data structure. In some cases, each simulated patient dataset 145n may occupy a row in the simulated patient population dataset 140. In such a case, each simulated patient dataset 145n may have a simulated patient identifier uniquely identifying the simulated patient that is being described.), comprising: determining a respective associated population group of multiple population groups for a real-life population based upon each simulated character (see at least [0057] a real patient population dataset reputation score, an expert reputation score of an expert that provided the real patient population dataset 245); determining respective member characteristics for the respective associated population group (see at least [0059] The real patient population dataset 245 includes multiple real patient datasets 250A-Z, each including features and outcomes identified as features 255A-Z and outcomes 260A-Z respectively. [0060] Feature naming normalization may rename features in certain simulated or patient population datasets so that features that should be the same, but are inconsistently named, are modified to be named consistently. For example, one simulated patient population dataset in the training dataset 290 may have a feature titled "age" while another simulated patient population dataset in the training dataset 290 may have a feature "how old are you?" These clearly refer to the same feature, so feature naming normalization may rename the "how old are you?" feature to "age" or vice versa. Features are specific characteristics of the data (e.g., age, gender, etc.)); and generating each simulated member of the respective simulated population, comprising associating each simulated member with one or more respective altered characteristic values altered based upon the respective characteristic values associated with each simulated character and the respective member characteristics for the respective associated population group (see at least [0071] a simulated patient dataset 405 is pulled from a simulated patient population dataset. The simulated patient dataset 405 includes features 410, outcomes 415, and metadata 418. The simulated patient dataset 405 is modified via removal of the outcomes 415 and optionally the metadata 418 to become the modified simulated patient dataset 420 that includes the features 410 and optionally the metadata 418 without the outcomes 415. The modified simulated patient dataset 420 then behaves like a query dataset 510.); receiving, from a user device for a user, user feedback for the respective simulated population (see at least [0085] Quality and verifiability of predicted outcomes may also be improved, as multiple experts 105 may independently provide multiple outcomes 125 for the simulated patient population datasets. Cross-verification 400 as illustrated in FIG. 4, and feedback 550 as illustrated in FIG. 5, may modify reputation scores 350/355, causing re-generation of the training dataset 290 as discussed with respect to at least Figs. 1, 2, and 3. This improves quality and verifiability.); and re-training the trained population-generating model based upon the respective simulated population and the user feedback (see at least [0062] the expert reputation score and/or the simulated patient population dataset reputation score may be increased or decreased after training, for example based on feedback 550 of a querying user 505 as in FIG. 5. In such situations, the training dataset 290 may optionally be re-generated, with the amount of simulated patient datasets pulled from a simulated patient population dataset optionally modified based on the increase or decrease in the expert reputation score and/or the simulated patient population dataset reputation score. The newly re-generated training dataset 290 may then be input back into the training module 215 to train the machine learning engine 210.).

Regarding claim 2, Kiljanek discloses wherein generating each simulated member of the respective simulated population for each simulated character of the multiple simulated characters further comprises determining, by a trained population-generating model, each simulated member based upon the respective characteristic values associated with each simulated character (see at least [0063] The machine learning engine 210, once trained based on the training dataset 290 (e.g., the simulated patient population dataset 140 and optionally one or more additional simulated and/or real patient population datasets), may generate one or more artificial intelligence (AI) or machine learning (ML) models that the machine learning engine 210 may use to generate predicted outcomes 540 based on query datasets 510 as discussed further in FIG. 5. Four such models are illustrated in FIG. 2, namely a first model 270A, a second model 270B, a third model 270C, and a fourth model 270D.).
Regarding claim 4, Kiljanek discloses the method further comprising one or more of: determining one or more respective characteristic variations for each simulated character of the multiple simulated characters based upon the respective member characteristics for the respective associated population group for each simulated character, wherein associating each simulated member of the respective simulated population for each simulated character with the one or more respective altered characteristic values comprises altering the one or more respective altered characteristic values for each simulated member based upon the one or more respective characteristic variations; or after determining the respective simulated population: generating one or more simulated responses to an inquiry for the real-life population based upon the respective simulated population for each simulated character of the multiple simulated characters; and transmitting the one or more simulated responses to be displayed on a user interface on a second user device for a second user (see at least [0079] The block diagram 500 of FIG. 5 includes a query device 520 and the dataset analysis system 205. One or more querying users 505 interact with the query device 520 through a query user interface (UI) 525, providing a query dataset 510 to the query device 520 through the query UI 525. The query dataset 510 may identify one or more features and one or more feature values for those features, as in the example query dataset 710 of FIG. 7A. The query device 110 may then send the query dataset 510 to the query module 420 of the machine learning engine 210 of the dataset analysis system 205. The query module 420 queries the various models 270A-D of the machine learning engine 210. Each model of the models 270A-D may be tailored to a particular outcome (e.g., particular diagnosis, recommended test, recommended treatment, etc.). Therefore, each model, when queried with the features from the query dataset 510, identifies whether the outcome that the model is tailored to is a predicted outcome or not. In this way, the machine learning engine 210 generates a set of one or more predicted outcomes 540 based on the query dataset 510. [0080] The one or more predicted outcomes 540 are provided from the dataset analysis system 205 to the query device 520. Upon receipt of the one or more predicted outcomes 540, the query device 520 renders and displays the one or more predicted outcomes 540 for the one or more querying users 505 to review, optionally through the query UI 525. In some cases, the one or more querying users 505 may input feedback 550 about the one or more predicted outcomes 540 into the query device 520 upon reviewing the one or more predicted outcomes 540, optionally through the query UI 525. The feedback 550 may include feedback for the entire set of one or more predicted outcomes 540. The feedback 550 may include feedback for each predicted outcome of the set of one or more predicted outcomes 540.).

Regarding claim 5, Kiljanek discloses wherein, when the one or more respective characteristic variations for each simulated character are determined, determining the one or more respective characteristic variations comprises at least one of: determining the one or more respective characteristic variations based upon respective statistics data for the respective member characteristics for the respective associated population group; retrieving, from a database, the one or more respective characteristic variations; or receiving, via a third user device from a third user, the one or more respective characteristic variations (see at least [0079] The block diagram 500 of FIG. 5 includes a query device 520 and the dataset analysis system 205. One or more querying users 505 interact with the query device 520 through a query user interface (UI) 525, providing a query dataset 510 to the query device 520 through the query UI 525. The query dataset 510 may identify one or more features and one or more feature values for those features, as in the example query dataset 710 of FIG. 7A. The query device 110 may then send the query dataset 510 to the query module 420 of the machine learning engine 210 of the dataset analysis system 205.).

Regarding claim 6, Kiljanek discloses wherein, when the one or more simulated responses are generated, one or more of: the one or more simulated responses further comprise answers and at least one of reasons for the answers or recommendations; or generating the one or more simulated responses comprises generating the one or more simulated responses by a trained response-generating model (see at least [0103] An example of the predicted outcomes 730 is illustrated in FIG. 7B. As in FIG. 5, the set of one or more predicted outcomes 730 are sent from the dataset analysis system 205 to the query device 520. Upon receipt of the set of one or more predicted outcomes 730, the query device 520 renders and displays the set of one or more predicted outcomes 730 for the one or more querying users 505 to review. In response, the one or more querying users 505 input feedback 750 on the one or more predicted outcomes 730 into the query device 520.).

Regarding claim 7, Kiljanek discloses further comprising, after transmitting the one or more simulated responses: receiving, from the second user device, user feedback for the one or more simulated responses; and when the trained response-generating model is used, re-training the trained response-generating model based upon the one or more simulated responses and the user feedback (see at least [0103] An example of the predicted outcomes 730 is illustrated in FIG. 7B. As in FIG. 5, the set of one or more predicted outcomes 730 are sent from the dataset analysis system 205 to the query device 520. Upon receipt of the set of one or more predicted outcomes 730, the query device 520 renders and displays the set of one or more predicted outcomes 730 for the one or more querying users 505 to review. In response, the one or more querying users 505 input feedback 750 on the one or more predicted outcomes 730 into the query device 520. The query device sends the feedback 750 to the dataset analysis system 205, which provides the feedback 750 to the machine learning engine 210, and optionally tunes one or more models of the machine learning engine 210 based on changes to the metadata (expert reputation score 350, simulated patient population dataset reputation score 355) as discussed with respect to FIG. 5. The training dataset 290 may be re-generated when metadata is updated. An amount of simulated patient datasets from a simulated patient population dataset that are included in the training dataset may be a function of the expert reputation score 350 and/or the simulated patient population reputation score 355 and/or of the initial count 218 (how many were generated in the population) and/or of characteristics of the machine learning engine 210.).

Regarding claim 8, Kiljanek discloses wherein determining the respective member characteristics for the respective associated population group comprises receiving, via a third user device from a third user, the respective member characteristics for the respective associated population group (see at least [0079] The block diagram 500 of FIG. 5 includes a query device 520 and the dataset analysis system 205. One or more querying users 505 interact with the query device 520 through a query user interface (UI) 525, providing a query dataset 510 to the query device 520 through the query UI 525. The query dataset 510 may identify one or more features and one or more feature values for those features, as in the example query dataset 710 of FIG. 7A. The query device 110 may then send the query dataset 510 to the query module 420 of the machine learning engine 210 of the dataset analysis system 205. [0043] Each of the one or more outcomes 155n of the simulated patient dataset 145n may have a column of the simulated patient population dataset 140 dedicated to it. The cells in those columns and in the row corresponding to the simulated patient dataset 145n may then have outcome values for each of those features.).

Regarding claim 9, Kiljanek discloses wherein the real-life population comprises one or more of: customers of a retailer; owners of vehicles manufactured by an automobile manufacturer; homeowners; or members of a target market (see at least [0004] medical data and healthcare).

Claim 10 is substantially similar to claim 1 and therefore rejected under the same rationale.
Claim 11 is substantially similar to claims 2 and 3 and therefore rejected under the same rationale.
Claim 12 is substantially similar to claim 4 and therefore rejected under the same rationale.
Claim 13 is substantially similar to claim 5 and therefore rejected under the same rationale.
Claim 14 is substantially similar to claims 6 and 7 and therefore rejected under the same rationale.
Claim 15 is substantially similar to claim 8 and therefore rejected under the same rationale.
Claim 16 is substantially similar to claims 1 or 10 and therefore rejected under the same rationale.
Claim 17 is substantially similar to claims 1 and 3, or 11, and therefore rejected under the same rationale.
Claim 18 is substantially similar to claims 4 or 12 and therefore rejected under the same rationale.
Claim 19 is substantially similar to claims 13, 6, and 7, or 13 and 14, and therefore rejected under the same rationale.
Claim 20 is substantially similar to claims 8 or 15 and therefore rejected under the same rationale.
Claim 21 is substantially similar to claims 1, 10, or 16 and therefore rejected under the same rationale.
Claim 22 is substantially similar to claim 2 and therefore rejected under the same rationale.

Response to Arguments

The Applicant's remarks begin on page 12 of the response of December 16, 2025, with a summary of the claims and the interview. The arguments begin with the rejection under 35 U.S.C. § 101, where the Applicant argues that the claims are allowable over the rejection because the claims are integrated into a practical application. The Applicant argues (remarks, page 13) that the claims recite an improvement to technology or a technical field, and also recite use beyond generally linking to a technological environment. The Applicant specifically cites the amendments to independent claims 1, 10, 16, and 21 with the receiving and re-training steps. In light of the amendments, the Examiner has withdrawn the rejection under 35 U.S.C. § 101 at this time.

The arguments move to the rejection under 35 U.S.C. § 102 with respect to Kiljanek, where the Applicant alleges that the reference does not disclose every limitation of the amended independent claims 1, 10, 16, and 21. The Applicant points to "re-training a trained population-generating model based upon the respective simulated population and the user feedback," contends that the reference fails to show this limitation, and cites the "causing re-generation of training dataset 290" passage as not being an aspect of re-training the model. Further, the Applicant states that nowhere does Kiljanek show or disclose the limitations as amended.

The Examiner disagrees with the Applicant. First, the citation clearly discloses the updating of a training dataset for the purpose of updating the model; if the model were not updated using the training dataset, there would be no reason to perform the update to the training dataset. Additionally, as stated in [0062], based on feedback from a user the training dataset 290 may be re-generated, and the newly re-generated training dataset 290 may then be input back into the training module to train the machine learning engine. Accordingly, the Applicant's arguments are unpersuasive, and the reference does in fact disclose each and every element of the claims.

In summary, the rejection under 35 U.S.C. § 101 has been withdrawn while the rejection under 35 U.S.C. § 102 remains. The claims are not in condition for allowance.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN C WHITE, whose telephone number is (571) 272-1406. The examiner can normally be reached M-F 7:30-4:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Beth Boswell, can be reached at (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DYLAN C WHITE/
Primary Examiner, Art Unit 3625
March 18, 2026

Prosecution Timeline

Apr 30, 2024: Application Filed
Sep 19, 2025: Non-Final Rejection (§102)
Oct 28, 2025: Applicant Interview (Telephonic)
Oct 31, 2025: Examiner Interview Summary
Dec 09, 2025: Response Filed
Mar 20, 2026: Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602604: METHOD AND SYSTEM FOR ESTIMATING DURATION AND PERFORMANCE OF A PRODUCT OVER LIFECYCLE OF THE SAME
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12591895: SYSTEMS AND METHODS FOR MONITORING SERVICES USING SMART CONTRACTS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591791: FAR EDGE/IOT INTELLIGENCE DESIGN AND APPARATUS FOR HUMAN OPERATORS ASSISTANCE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586113: IMPROVED SYSTEM-USER INTERACTION RELATING TO ADVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586025: OPERATION MANAGEMENT SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 90% (+12.1%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 867 resolved cases by this examiner. Grant probability derived from the career allow rate.
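The with-interview figure appears to follow from simple arithmetic on the career numbers shown on this page: base grant probability plus the interview lift. A minimal sketch under that assumption (inputs are from this page; the variable names are my own):

```python
# Reproduce the headline projection, assuming the with-interview probability
# is simply the base grant probability plus the interview lift in percentage points.
# Inputs come from this page; names are illustrative.

base_grant_probability = 78.0  # career allow rate, rounded (672 / 867)
interview_lift = 12.1          # percentage-point lift with an interview

with_interview = base_grant_probability + interview_lift
print(f"With interview: ~{with_interview:.0f}%")  # 90.1, displayed as 90%

assert round(with_interview) == 90
```

This matches the displayed "90% With Interview (+12.1%)" figure, though the tool may well compute it from per-case interview data rather than this simple sum.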
