Prosecution Insights
Last updated: April 19, 2026
Application No. 19/042,267

Machine Learning Technologies for Predicting Results of Cable Fire Tests

Non-Final OA — §103, §DP
Filed: Jan 31, 2025
Examiner: TRAN, LOC
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: UL LLC
OA Round: 1 (Non-Final)
Grant Probability: 84% — Favorable
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (311 granted / 372 resolved; +28.6% vs TC avg) — above average
Interview Lift: +23.9% (resolved cases with interview) — strong
Typical Timeline: 2y 7m avg prosecution; 17 currently pending
Career History: 389 total applications across all art units
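The headline allow-rate figure can be reproduced from the raw counts shown on the card. A minimal check in Python, under the assumption (not stated on the page) that the "+28.6% vs TC avg" delta is expressed in percentage points:

```python
# Figures from the Examiner Intelligence card.
granted = 311    # applications granted
resolved = 372   # total resolved cases

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 83.6%, displayed rounded as 84%

# Assumption: delta = examiner rate - TC average, in percentage points.
delta_vs_tc = 0.286
implied_tc_avg = allow_rate - delta_vs_tc
print(f"Implied TC 2100 average: {implied_tc_avg:.1%}")
```

The implied Tech Center average (~55%) is only as reliable as the percentage-point assumption; the page does not define how its delta is computed.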

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 372 resolved cases
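The per-statute deltas are internally consistent: subtracting each delta from its rate recovers the same Tech Center baseline for all four statutes. A small check, assuming (as above) that each delta is the examiner's rate minus the TC average in percentage points:

```python
# Statute-specific rates and deltas vs. TC average, as shown on the chart.
# Assumption: delta = examiner rate - TC average, both in percentage points.
rates = {
    "§101": (14.4, -25.6),
    "§103": (44.8, +4.8),
    "§102": (24.4, -15.6),
    "§112": (8.0, -32.0),
}

for statute, (rate, delta) in rates.items():
    implied_tc_avg = rate - delta
    print(f"{statute}: {rate:.1f}% (implied TC average {implied_tc_avg:.1f}%)")
# Every implied TC average works out to 40.0% — the chart's black line
# sits at the same level for all four statutes.
```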

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11 of U.S. Patent No. 12,216,686. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the present application are anticipated by the claims of the parent patent, USPN 12,216,686. For example, claim 1 of the present application and the corresponding claim 1 of the parent are compared below.

USPN 12,216,686 – Claim 1:

A computer-implemented method for predicting an outcome of a large-scale product test, the computer-implemented method comprising: receiving, by one or more processors, a set of small-scale results of a product tested according to a small-scale product test representative of the large-scale product test; calculating, by the one or more processors and based on the set of small-scale results as a first input to a first machine learning model of a plurality of machine learning models, a first result predicting an outcome of the product tested according to the large-scale product test, wherein calculating the first result includes: determining, by the one or more processors, a first classification for the product, and calculating, by the one or more processors, a confidence value for the first classification; calculating, by the one or more processors and based on the set of small-scale results as a second input to at least one second machine learning model of the plurality of machine learning models, a second result predicting the outcome of the product tested according to the large-scale product test, wherein calculating the second result includes: determining, by the one or more processors and based on a test profile for the large-scale product test, a second classification for the product, wherein determining the second classification includes: predicting, by the one or more processors and based on the set of small-scale results as the second input to the at least one second machine learning model of the plurality of machine learning models, the test profile, wherein: the at least one second machine learning model includes a plurality of regression models, and each regression model of the plurality of regression models predicts a test value of a plurality of test values of the test profile, the plurality of test values including at least one of: (i) a flame spread value; (ii) a total heat release value; (iii) a peak heat release rate value; and (iv) a fire growth rate index value; and predicting, by the one or more processors, an outcome of the large-scale product test based at least on the first result and the second result.

Application No. 19/042,267 – Claim 1:

A computer-implemented method for predicting an outcome of a large-scale product test, the computer-implemented method comprising: receiving, by one or more processors, a set of small-scale results of a product tested according to a small-scale product test representative of the large-scale product test; calculating, by the one or more processors and based on the set of small-scale results as a first input to a first machine learning model of a plurality of machine learning models, a first result predicting an outcome of the product tested according to the large-scale product test, wherein calculating the first result includes: determining, by the one or more processors, a first classification for the product, and calculating, by the one or more processors, a confidence value for the first classification; calculating, by the one or more processors and based on the set of small-scale results as a second input to at least one second machine learning model of the plurality of machine learning models, a second result predicting the outcome of the product tested according to the large-scale product test, wherein calculating the second result includes: determining, by the one or more processors, a second classification for the product; and predicting, by the one or more processors, an outcome of the large-scale product test based at least on the first result and the second result.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tabaddor et al. (“Tabaddor”, US 2021/0334699 A1), published on October 28, 2021, in view of Lyer (US 2022/0301031 A1), published on September 22, 2022.

As to claim 1, Tabaddor teaches “receiving, by one or more processors, a set of small-scale results of a product tested according to a small-scale product test representative of the large-scale product test” in par. 0058 (“… the electronic device may use the results of the small-scale test for a product to predict an outcome of the product being tested according to the large-scale product test”. Noting that the results of the small-scale test are used to predict an outcome of the product being tested according to the large-scale product test, which suggests a small-scale product test representative of the large-scale product test). Tabaddor teaches “calculating, by the one or more processors and based on the set of small-scale results as a first input to a first machine learning model of a plurality of machine learning models, a first result predicting an outcome of the product tested according to the large-scale product test” in par. 0058 (“… the result from the machine learning model may be embodied as a set of outputs as specified by the corresponding large-scale product test. Accordingly, the electronic device may use the results of the small-scale test for a product to predict an outcome of the product being tested according to the large-scale product test”).

It appears Tabaddor does not explicitly teach “wherein calculating the first result includes: determining, by the one or more processors, a first classification for the product”. However, Lyer teaches “wherein calculating the first result includes: determining, by the one or more processors, a first classification for the product” in par. 0006 (“…determining a first machine learning model based on the set of data attributes, determining, using the first machine learning model on the set of data attributes, a first prediction of a standardized code and a first confidence score for the first prediction in association with a classification of the product”). Tabaddor and Lyer are analogous art because they are in the same field of endeavor, machine learning based product classification. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate product results, disclosed by Tabaddor, including “wherein calculating the first result includes: determining, by the one or more processors, a first classification for the product”, as suggested by Lyer, in order to determine an actionable item based on product classification (see Lyer par. 0006). Lyer teaches “calculating, by the one or more processors, a confidence value for the first classification” in par. 0006 (“…using the first machine learning model on the set of data attributes, a first prediction of a standardized code and a first confidence score for the first prediction in association with a classification of the product…”).
Tabaddor teaches “calculating, by the one or more processors and based on the set of small-scale results as a second input to at least one second machine learning model of the plurality of machine learning models, a second result predicting the outcome of the product tested according to the large-scale product test” in par. 0007 (“…an initial set of results … a set of small-scale products tested according to a small-scale product test …input the additional set of results into the machine learning model, and after inputting the additional set of results into the machine learning model, output a result from the machine learning model, the result predicting an outcome of the product tested according to the large-scale product test”. Noting that an initial set of small-scale results is input into the machine learning model to produce a result predicting an outcome of the product tested according to the large-scale product test). Lyer teaches “wherein calculating the second result includes: determining, by the one or more processors, a second classification for the product” in par. 0006 (“…determining a first machine learning model based on the set of data attributes, determining, using the first machine learning model on the set of data attributes, a first prediction of a standardized code and a first confidence score for the first prediction in association with a classification of the product”). Tabaddor teaches “and predicting, by the one or more processors, an outcome of the large-scale product test based at least on the first result and the second result” in par. 0007 (“…generate the machine learning model using an initial dataset indicating an initial set of results of … (ii) a set of small-scale products tested according to a small-scale product test, access an additional set of small-scale results of a product tested according to the small-scale product test, input the additional set of results into the machine learning model…output a result from the machine learning model, the result predicting an outcome of the product tested according to the large-scale product test”).

As to claim 2, Tabaddor teaches “wherein determining the second classification includes: predicting, by the one or more processors and based on the set of small-scale results as the second input to the at least one second machine learning model of the plurality of machine learning models, a test profile for the large-scale product test; and determining, by the one or more processors and based on the test profile, the second classification” in figure 3, paragraphs [0051-0052] (the set of inputs 305 corresponds to the set of small-scale results; the set of values such as diameter, peak heat release rate, etc. corresponds to a test profile; a large-scale version (corresponding to the second classification) of the product is predicted based on the set of outputs).

As to claim 3, Tabaddor teaches “wherein the at least one second machine learning model includes a plurality of regression models, further wherein each regression model of the plurality of regression models predicts a test value of a plurality of test values of the test profile” in par. 0052 (“…An electronic device may support a set of regression models 310 into which the set of inputs 305 for a given small-scale product may be input…”).

As to claim 4, Tabaddor teaches “wherein the plurality of test values includes at least: (i) a flame spread value; (ii) a total heat release value; (iii) a peak heat release rate value; and (iv) a fire growth rate index value” in par. 0051 (“…the set of inputs 305 may be associated with a small-scale cable fire test, and may include the following inputs: diameter (in.), peak heat release rate (kW/m2), total heat release (MJ), heat of combustion (MJ/g), total smoke (TS) (m2), SECA (m2/g), and ignition time(s)…”).

As to claim 5, Tabaddor teaches “wherein the small-scale product test is administered by a cone calorimeter” in par. 0057 (“…the small-scale product test may be a small-scale cable fire test (e.g., a test administered by a cone calorimeter) …”).

As to claim 6, Tabaddor teaches “transmitting, by the one or more processors, the outcome, the first result, and the second result to a user device; and updating, by the one or more processors, the plurality of machine learning models using the outcome, the first result, and the second result” in par. 0040 (“…the product test predictor application 160 may add, to the machine learning model, additional product test results so that the product test predictor application 160 may use the updated machine learning model in subsequent input data analysis”).

As to claim 7, Tabaddor teaches “training, by the one or more processors, the plurality of machine learning models using an initial dataset indicating at least an initial set of results of (i) an initial set of large-scale products tested according to the large-scale product test, and (ii) an initial set of small-scale products tested according to the small-scale product test” in par. 0023 (training set 116 may include a small-scale fire test and a large-scale product test).

As to claim 8, Tabaddor teaches “receive a set of small-scale results of a product tested according to a small-scale product test representative of the large-scale product test” in par. 0058 (“… the electronic device may use the results of the small-scale test for a product to predict an outcome of the product being tested according to the large-scale product test”. Noting that the results of the small-scale test are used to predict an outcome of the product being tested according to the large-scale product test, which suggests a small-scale product test representative of the large-scale product test). Tabaddor teaches “calculate, based on the set of small-scale results as a first input to a first machine learning model of a plurality of machine learning models, a first result predicting an outcome of the product tested according to the large-scale product test” in par. 0058 (“… the result from the machine learning model may be embodied as a set of outputs as specified by the corresponding large-scale product test. Accordingly, the electronic device may use the results of the small-scale test for a product to predict an outcome of the product being tested according to the large-scale product test”). It appears Tabaddor does not explicitly teach “wherein calculating the first result includes: determining a first classification for the product”. However, Lyer teaches “wherein calculating the first result includes: determining a first classification for the product” in par. 0006 (“…determining a first machine learning model based on the set of data attributes, determining, using the first machine learning model on the set of data attributes, a first prediction of a standardized code and a first confidence score for the first prediction in association with a classification of the product”). Tabaddor and Lyer are analogous art because they are in the same field of endeavor, machine learning based product classification. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate product results, disclosed by Tabaddor, including “wherein calculating the first result includes: determining a first classification for the product”, as suggested by Lyer, in order to determine an actionable item based on product classification (see Lyer par. 0006).
Lyer teaches “and calculating a confidence value for the first classification” in par. 0006 (“…using the first machine learning model on the set of data attributes, a first prediction of a standardized code and a first confidence score for the first prediction in association with a classification of the product…”). Tabaddor teaches “calculate, based on the set of small-scale results as a second input to at least one second machine learning model of the plurality of machine learning models, a second result predicting the outcome of the product tested according to the large-scale product test” in par. 0007 (“…an initial set of results … a set of small-scale products tested according to a small-scale product test …input the additional set of results into the machine learning model, and after inputting the additional set of results into the machine learning model, output a result from the machine learning model, the result predicting an outcome of the product tested according to the large-scale product test”. Noting that an initial set of small-scale results is input into the machine learning model to produce a result predicting an outcome of the product tested according to the large-scale product test). Lyer teaches “wherein calculating the second result includes: determining a second classification for the product, and predict an outcome of the large-scale product test based at least on the first classification, the confidence value, and the second classification” in par. 0006 (“…determining a first machine learning model based on the set of data attributes, determining, using the first machine learning model on the set of data attributes, a first prediction of a standardized code and a first confidence score for the first prediction in association with a classification of the product”). As to claim 15, it is rejected for similar reasons as claim 8.
As to claim 9, Tabaddor teaches “predicting, based on the set of small-scale results as the second input to the at least one second machine learning model of the plurality of machine learning models, a test profile for the large-scale product test; and determining, based on the test profile, the second classification” in figure 3, paragraphs [0051-0052] (the set of inputs 305 corresponds to the set of small-scale results; the set of values such as diameter, peak heat release rate, etc. corresponds to a test profile; a large-scale version (corresponding to the second classification) of the product is predicted based on the set of outputs). As to claim 16, it is rejected for similar reasons as claim 9.

As to claim 10, Tabaddor teaches “wherein the at least one second machine learning model includes a plurality of regression models, further wherein each regression model of the plurality of regression models predicts a test value of a plurality of test values of the test profile” in par. 0052 (“…An electronic device may support a set of regression models 310 into which the set of inputs 305 for a given small-scale product may be input…”). As to claim 17, it is rejected for similar reasons as claim 10.

As to claim 11, Tabaddor teaches “wherein the plurality of test values includes at least: (i) a flame spread value; (ii) a total heat release value; (iii) a peak heat release rate value; and (iv) a fire growth rate index value” in par. 0051 (“…the set of inputs 305 may be associated with a small-scale cable fire test, and may include the following inputs: diameter (in.), peak heat release rate (kW/m2), total heat release (MJ), heat of combustion (MJ/g), total smoke (TS) (m2), SECA (m2/g), and ignition time(s)…”). As to claim 18, it is rejected for similar reasons as claim 11.

As to claim 12, Tabaddor teaches “wherein the small-scale product test is administered by a cone calorimeter” in par. 0057 (“…the small-scale product test may be a small-scale cable fire test (e.g., a test administered by a cone calorimeter) …”). As to claim 19, it is rejected for similar reasons as claim 12.

As to claim 13, Tabaddor teaches “transmit the outcome, the first result, and the second result to a user device; and train the plurality of machine learning models using the outcome, the first result, and the second result” in par. 0040 (“…the product test predictor application 160 may add, to the machine learning model, additional product test results so that the product test predictor application 160 may use the updated machine learning model in subsequent input data analysis”). As to claim 20, it is rejected for similar reasons as claim 13.

As to claim 14, Tabaddor teaches “train the plurality of machine learning models using an initial dataset indicating at least an initial set of results of (i) an initial set of large-scale products tested according to the large-scale product test, and (ii) an initial set of small-scale products tested according to the small-scale product test” in par. 0023 (training set 116 may include a small-scale fire test and a large-scale product test).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicants’ disclosure:
• Bhide et al. (US 2021/0295204 A1)
• Franke et al. (US 2022/0137119 A1)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Loc Tran, whose telephone number is 571-272-8485. The examiner can normally be reached Mon-Fri, 7:30am-5pm; First Fri Off. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOC TRAN/
Primary Examiner, Art Unit 2164
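The two-branch architecture recited in claim 1 (a classifier producing a first classification plus confidence value, and a regression ensemble predicting a test profile from which a second classification is derived, with both results combined into the final prediction) can be sketched in Python. All function names, stub coefficients, and the combination rule below are hypothetical stand-ins for illustration only, not the applicant's actual models:

```python
def classify(small_scale_results):
    """First ML model (stub): returns (first_classification, confidence)."""
    # Stand-in rule: a high peak heat release rate suggests failure.
    phrr = small_scale_results["peak_heat_release_rate"]
    return ("fail", 0.9) if phrr > 300 else ("pass", 0.8)

def predict_test_profile(small_scale_results):
    """Second branch (stubs): one regression model per test-profile value."""
    s = small_scale_results
    return {
        "flame_spread": 0.5 * s["peak_heat_release_rate"],
        "total_heat_release": 1.2 * s["total_heat_release"],
        "peak_heat_release_rate": 1.1 * s["peak_heat_release_rate"],
        "fire_growth_rate_index": 0.01 * s["peak_heat_release_rate"]
                                       * s["total_heat_release"],
    }

def classify_from_profile(profile):
    """Second classification, derived from the predicted test profile."""
    return "fail" if profile["flame_spread"] > 100 else "pass"

def predict_outcome(small_scale_results):
    """Combine both branches into the large-scale outcome prediction."""
    first_class, confidence = classify(small_scale_results)
    profile = predict_test_profile(small_scale_results)
    second_class = classify_from_profile(profile)
    # Stand-in combination rule: if the branches agree, take that answer;
    # if they disagree, trust the classifier only at high confidence.
    if first_class == second_class:
        return first_class
    return first_class if confidence >= 0.85 else second_class

sample = {"peak_heat_release_rate": 350.0, "total_heat_release": 20.0}
print(predict_outcome(sample))
```

The sketch mirrors the claim's structure (classification + confidence in one branch, per-value regressions feeding a profile-based classification in the other), which is also the structural delta at issue in the double patenting rejection: the parent's claim 1 recites the regression-ensemble detail that the application's claim 1 omits.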

Prosecution Timeline

Jan 31, 2025
Application Filed
Feb 02, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602375 — COMPOSITE SYMBOLIC AND NON-SYMBOLIC ARTIFICIAL INTELLIGENCE SYSTEM FOR ADVANCED REASONING AND SEMANTIC SEARCH
2y 5m to grant • Granted Apr 14, 2026
Patent 12554706 — METHOD AND SYSTEM FOR DATA QUERY
2y 5m to grant • Granted Feb 17, 2026
Patent 12536237 — METHOD FOR BOOK PUSHING, METHOD FOR GENERATING BOOK RECOMMENDATION TEXT, APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant • Granted Jan 27, 2026
Patent 12536213 — COMPOSITE SYMBOLIC AND NON-SYMBOLIC ARTIFICIAL INTELLIGENCE SYSTEM FOR ADVANCED REASONING AND AUTOMATION
2y 5m to grant • Granted Jan 27, 2026
Patent 12536136 — STORAGE SYSTEM AND DATA PROCESSING METHOD
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+23.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 372 resolved cases by this examiner. Grant probability derived from career allow rate.
