Prosecution Insights
Last updated: April 19, 2026
Application No. 17/993,628

MEASURING SIMULATION REALISM

Final Rejection — §101, §103

Filed: Nov 23, 2022
Examiner: KWON, JUN
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: GM Cruise Holdings LLC
OA Round: 2 (Final)

Grant Probability: 38% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 3m
With Interview: 84%

Examiner Intelligence

Grants only 38% of cases.

Career Allow Rate: 38% (26 granted / 68 resolved; -16.8% vs TC avg)
Interview Lift: +46.2% among resolved cases with an interview (strong)
Typical Timeline: 4y 3m average prosecution; 34 applications currently pending
Career History: 102 total applications across all art units
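The allow-rate and interview-lift figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how they can be computed; the `ResolvedCase` record and the percentage-point definition of lift are assumptions for illustration, not this product's documented method:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did prosecution end in a grant?
    had_interview: bool  # was an examiner interview held?

def allow_rate(cases):
    """Share of resolved cases that ended in a grant (e.g. 26/68 ~ 38%)."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Percentage-point gap between the allow rate of cases with an
    interview and the allow rate of cases without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return (allow_rate(with_iv) - allow_rate(without_iv)) * 100
```

With 26 grants among 68 resolved cases, `allow_rate` reproduces the 38% career figure; the lift is then just the difference between the two conditional rates.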

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 68 resolved cases.
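Each per-statute delta above is consistent with a single Tech Center average estimate of 40.0%. A quick check, using only the numbers shown on this page:

```python
# Examiner allowance rate after each rejection type, from the panel above.
examiner = {"§101": 31.8, "§103": 41.4, "§102": 7.6, "§112": 18.1}
tc_avg = 40.0  # implied by every delta above, e.g. 31.8 - 40.0 = -8.2

deltas = {statute: round(rate - tc_avg, 1) for statute, rate in examiner.items()}
# deltas -> {"§101": -8.2, "§103": 1.4, "§102": -32.4, "§112": -21.9}
```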

Office Action

§101 §103
Detailed Action

This Office Action is in response to the remarks entered on 11/19/2025. No claims were cancelled or added. Claims 1-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

Applicant's arguments, see [Remarks, page 7], filed 11/19/2025, with respect to claims 1-20 have been fully considered and are persuasive. The 35 U.S.C. 101 rejection of claims 1-20 has been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Castellani et al. ("Real-World Anomaly Detection by Using Digital Twin Systems and Weakly Supervised Learning", 2021, hereinafter 'Castellani') in view of Kim et al. (US 20230031919 A1, hereinafter 'Kim') and further in view of Degirmenci et al. (US 20230110713 A1, hereinafter 'Degirmenci').

Regarding claim 1, Castellani teaches:

An apparatus comprising: at least one memory storing instructions; and at least one processor coupled to the at least one memory, wherein the instructions, when processed by the at least one processor, cause the at least one processor to: ([Castellani, page 4737, right col, C. Implementation and Training Details, line 3-8] discloses utilizing a quad-core CPU, 16GB of DDR4 RAM, and a GPU)

generate first simulation training data, wherein the first simulation training data comprises first simulated sensor data for one or more simulated sensor types; ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses using two datasets: a normal operation dataset N, generated with the DT simulation (the first simulation training data) and represented as x_DT, and a small set A of labeled anomalous samples from the real-world measurement system, represented as x_S. [Castellani, page 4737, left col, A. Dataset Description, line 3-6] discloses that the recorded data consists of a large number of sensors for heat, cold, and electricity consumption and production)

provide the first simulation training data to an encoder neural network to generate a plurality of simulation embeddings; ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses the datasets x_DT and x_S described above. [Castellani, page 4736, line 18-24] discloses the neural network evaluating a pair of data samples (x_DT, x_S), which indicates that the network receives the pair of data samples. [Castellani, page 4735, right col, B. Siamese Autoencoder] discloses utilizing a Siamese encoder-decoder network to perform the task; h1 denotes the simulation embeddings generated from x_DT)

receive real-world training data, wherein the real-world training data comprises sensor data for one or more sensor types; ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses the small set A of labeled anomalous samples from the real-world measurement system, represented as x_S. [Castellani, page 4737, left col, A. Dataset Description, line 3-6] discloses that the recorded data consists of a large number of sensors for heat, cold, and electricity consumption and production)

provide the real-world training data to the encoder neural network to generate a plurality of real-world embeddings; ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses the datasets x_DT and x_S described above; h2 denotes the real-world embeddings generated from x_S)

determine, based on the first cluster and the second cluster, a fidelity score; ([Castellani, page 4736, left col, line 27 - right col, line 11] and [page 4735, Fig. 2] collectively disclose calculating reconstruction losses for each of the data samples x_DT and x_S and then calculating L_CL based on the distance between h1 and h2 (the first cluster and the second cluster). The total loss L = L_REC + L_CL + L_PCL is calculated from L_REC, L_CL, and L_PCL; the L_CL is interpreted as the fidelity score)

However, Castellani does not specifically disclose: generate a first cluster based on the plurality of simulation embeddings; generate a second cluster based on the plurality of real-world embeddings; adjust, based on the fidelity score, one or more sensor parameters of the one or more simulated sensor types to generate one or more adjusted simulated sensor types; generate second simulation training data, wherein the second simulation training data comprises second simulated sensor data for the one or more adjusted simulated sensor types; train, based on the second simulated training data, a machine learning model; and provide the machine learning model to an autonomous vehicle (AV) that uses the machine learning model to navigate the AV along a travel route.

Kim teaches:

generate a first cluster based on the plurality of simulation embeddings; ([Kim, 0085] discloses generating the first cluster map based on the first feature data 515a and the second cluster map based on the second feature data 515c by individually propagating them to the clustering layer 530. The feature data correspond to the embeddings, and the cluster maps generated from the feature data are used to calculate the loss between the cluster maps)

generate a second cluster based on the plurality of real-world embeddings; ([Kim, 0085], as above)

adjust, based on the fidelity score, one or more sensor parameters of the one or more simulated sensor types to generate one or more adjusted simulated sensor types; ([Kim, 0086] discloses adjusting the parameters of layers (i.e., of a device, or of a program embedded in the device) based on the calculated losses 591 and 592 (i.e., the fidelity score) to generate layers with adjusted parameters)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Castellani and Kim, to use Kim's method of generating a first and a second cluster and adjusting parameters based on the difference between the clusters to implement the encoder-decoder based clustering method of Castellani. The suggestion and/or motivation for doing so is to improve the performance of the simulation method by calculating and reducing the differences between the simulated data and the real-world data.

However, Castellani in view of Kim does not specifically disclose: generate second simulation training data, wherein the second simulation training data comprises second simulated sensor data for the one or more adjusted simulated sensor types; train, based on the second simulated training data, a machine learning model; and provide the machine learning model to an autonomous vehicle (AV) that uses the machine learning model to navigate the AV along a travel route.

Degirmenci teaches:

generate second simulation training data, wherein the second simulation training data comprises second simulated sensor data for the one or more adjusted simulated sensor types; ([Degirmenci, 0038] discloses inputting the sensor data 120B (second simulated sensor data [0033]) into the machine learning model 126 to generate output sensor data 128 (second simulation training data) that represents trajectory points for a machine)

train, based on the second simulated training data, a machine learning model; ([Degirmenci, 0040] discloses inputting the ground truth data 122 and the outputs 128 (the second simulated training data) to the training engine 124 to train the machine learning model 126)

provide the machine learning model to an autonomous vehicle (AV) that uses the machine learning model to navigate the AV along a travel route. ([Degirmenci, 0032] discloses that if the validator 130 determines that the resultant performance metrics for this sensor configuration are acceptable, the machine learning model 126 may be deployed; [Degirmenci, 0142] and [Degirmenci, 0160] further disclose that the sensors and models are used to navigate the AV)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Castellani, Kim and Degirmenci, to use Degirmenci's method of training the machine learning model with the second simulated training data to implement the encoder-decoder based clustering method of Castellani. The suggestion and/or motivation for doing so is to improve the efficiency of the machine learning system by eliminating the need for costly manual labeling [Degirmenci, 0006].

Claim 8 is a method claim having limitations similar to claim 1; it is rejected under the same rationale as claim 1 above. Claim 15 is a non-transitory computer-readable storage medium claim having limitations similar to claim 1; it is rejected under the same rationale as claim 1 above.
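As the rejection characterizes it, Castellani's Siamese autoencoder embeds a simulated sample x_DT and a real-world sample x_S with shared weights and combines per-branch reconstruction losses with a distance term L_CL between the embeddings h1 and h2 (the term the examiner reads as the fidelity score). A toy sketch of that loss structure; the single-layer encoder/decoder, the dimensions, and the omission of L_PCL are illustrative assumptions, not Castellani's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared ("Siamese") weights: the same encoder/decoder embeds both branches.
W_enc = rng.normal(size=(8, 4)) * 0.1  # input dim 8 -> latent dim 4
W_dec = rng.normal(size=(4, 8)) * 0.1

def encode(x):
    return np.tanh(x @ W_enc)

def decode(h):
    return h @ W_dec

def losses(x_dt, x_s):
    """Reconstruction loss for each branch plus the embedding-distance
    term L_CL, which the Office Action reads as the fidelity score."""
    h1, h2 = encode(x_dt), encode(x_s)      # h1: simulation, h2: real-world
    l_rec = (np.mean((decode(h1) - x_dt) ** 2)
             + np.mean((decode(h2) - x_s) ** 2))
    l_cl = np.sum((h1 - h2) ** 2)           # squared distance between h1, h2
    return l_rec, l_cl, l_rec + l_cl        # total L = L_REC + L_CL (+ L_PCL, omitted)
```

When the two branches receive identical inputs the shared weights make h1 and h2 coincide, so L_CL vanishes and only the reconstruction terms remain; diverging inputs drive L_CL up, which is what makes it usable as a simulation-to-reality gap signal.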
Claims 2-6, 9-13 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Castellani in view of Kim, in view of Degirmenci, and further in view of Li et al. (US 11093819 B1, hereinafter 'Li').

Regarding claim 2, Castellani teaches:

The apparatus of claim 1, wherein the instructions also cause the at least one processor to: ([Castellani, page 4737, right col, C. Implementation and Training Details, line 3-8] discloses utilizing a quad-core CPU, 16GB of DDR4 RAM, and a GPU)

receive a first set of sensor data; ([Castellani, page 4737, left col, A. Dataset Description, line 3-6] discloses that the recorded data consists of a large number of sensors for heat, cold, and electricity consumption and production)

determine a realism score for the first set of sensor data based on the first feature embedding (data). ([Castellani, page 4736, left col, line 27 - right col, line 11] and [page 4735, Fig. 2] collectively disclose calculating reconstruction losses for each of the data samples x_DT and x_S and then calculating L_CL based on the distance between h1 and h2. The total loss L = L_REC + L_CL + L_PCL, calculated from L_REC, L_CL, and L_PCL, is interpreted as the realism score)

However, Castellani in view of Kim and further in view of Degirmenci does not specifically disclose: generate a first feature embedding based on the first set of sensor data.

Li teaches: generate a first feature embedding based on the first set of sensor data; ([Li, col 6, line 49 - col 7, line 7] discloses generating first encoder input data (the first feature embedding) based on sensor input data in a pre-processing subsystem 104)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Castellani, Kim, Degirmenci and Li, to use Li's method of pre-processing the sensor data before inputting the data into an encoder to implement the encoder-decoder neural network method of Castellani. The suggestion and/or motivation for doing so is to improve the efficiency of the machine learning method by preprocessing input sensor data into a format that can be readily processed by the encoder.

Regarding claim 3, Castellani teaches:

The apparatus of claim 2, wherein the realism score is based on a distance between the first feature embedding (input data) and at least one of the first cluster or the second cluster. ([Castellani, page 4736, left col, line 27 - right col, line 11] and [page 4735, Fig. 2] collectively disclose calculating reconstruction losses for each of the data samples x_DT and x_S and then calculating L_CL based on the distance between h1 and h2. The total loss L = L_REC + L_CL + L_PCL, calculated from L_REC, L_CL, and L_PCL, is interpreted as the realism score)

Regarding claim 4, Castellani teaches:

The apparatus of claim 2, ([Castellani, page 4737, right col, C. Implementation and Training Details, line 3-8] discloses utilizing a quad-core CPU, 16GB of DDR4 RAM, and a GPU) comprises providing the first feature embedding to the encoder neural network; ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses using two datasets: a normal operation dataset N, generated with the DT simulation and represented as x_DT, and a small set A of labeled anomalous samples from the real-world measurement system, represented as x_S; h2 denotes the real-world embeddings generated from x_S)

However, Castellani in view of Kim and further in view of Degirmenci does not specifically disclose: wherein to generate the first feature embedding based on the first set of sensor data comprises providing the first feature embedding to the encoder neural network.

Li teaches: wherein to generate the first feature embedding based on the first set of sensor data comprises providing the first feature embedding to the encoder neural network. ([Li, col 6, line 49 - col 7, line 7] discloses generating first encoder input data (the first feature embedding) based on sensor input data in a pre-processing subsystem 104)

Regarding claim 5, Castellani teaches:

The apparatus of claim 1, wherein the first simulation training data comprises simulated sensor data ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses using two datasets: a normal operation dataset N, generated with the DT simulation and represented as x_DT, and a small set A of labeled anomalous samples from the real-world measurement system, represented as x_S. [Castellani, page 4736, line 18-24] discloses the neural network evaluating a pair of data samples (x_DT, x_S), which indicates that the network receives the pair of data samples. [Castellani, page 4735, right col, B. Siamese Autoencoder] discloses utilizing a Siamese encoder-decoder network to perform the task; h1 denotes the simulation embeddings generated from x_DT)

Castellani in view of Kim and further in view of Degirmenci does not specifically disclose: data collected from a simulated Light Detection and Ranging (LiDAR) sensor, a simulated Radio Detection and Ranging (RADAR) sensor, a simulated camera sensor, a simulated ultrasonic sensor, or a combination thereof.

Li teaches: data collected from a simulated Light Detection and Ranging (LiDAR) sensor, a simulated Radio Detection and Ranging (RADAR) sensor, a simulated camera sensor, a simulated ultrasonic sensor, or a combination thereof. ([Li, col 6, line 8-21] discloses a first sensing subsystem 102a that provides first data to the encoder and a second sensing subsystem 102b that provides second data to the encoder. The first data is interpreted as the simulated data and the second data as the real-world data, and utilizing both simulation data and real-world data is taught in Castellani. [Li, col 12, line 25-46] discloses utilizing LiDAR data, RADAR data, or camera sensor data and inputting those data into the pre-processing subsystem to generate encoder inputs)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Castellani, Kim, Degirmenci and Li, to use the LiDAR or RADAR data to implement the encoder-decoder neural network method of Castellani. The suggestion and/or motivation for doing so is to specialize the encoder for different kinds of sensor data.

Regarding claim 6, Castellani in view of Kim, in view of Degirmenci, and further in view of Li teaches:

The apparatus of claim 1, wherein the real-world training data comprises real-world sensor data collected from a Light Detection and Ranging (LiDAR) sensor, a Radio Detection and Ranging (RADAR) sensor, a camera sensor, an ultrasonic sensor, or a combination thereof. ([Li, col 6, line 8-21] and [Li, col 12, line 25-46], as applied to claim 5 above)

Claims 9-13 are method claims having limitations similar to claims 2-6, respectively; they are rejected under the same rationale as claims 2-6 above.

Regarding claim 16, Castellani teaches:

The non-transitory computer-readable storage medium of claim 15, wherein the instructions, when executed by the processor, also cause the processor to: ([Castellani, page 4737, right col, C. Implementation and Training Details, line 3-8] discloses utilizing a quad-core CPU, 16GB of DDR4 RAM, and a GPU)

receive a first set of sensor data; ([Castellani, page 4737, left col, A. Dataset Description, line 3-6] discloses that the recorded data consists of a large number of sensors for heat, cold, and electricity consumption and production)

determine a fidelity score for the first set of sensor data based on the first feature embedding (data). ([Castellani, page 4736, left col, line 27 - right col, line 11] and [page 4735, Fig. 2] collectively disclose calculating reconstruction losses for each of the data samples x_DT and x_S and then calculating L_CL based on the distance between h1 and h2 (the first cluster and the second cluster). The total loss L = L_REC + L_CL + L_PCL is calculated from L_REC, L_CL, and L_PCL; the L_CL is interpreted as the fidelity score)

However, Castellani in view of Kim and further in view of Degirmenci does not specifically disclose: generate a first feature embedding based on the first set of sensor data.

Li teaches: generate a first feature embedding based on the first set of sensor data; ([Li, col 6, line 49 - col 7, line 7] discloses generating first encoder input data (the first feature embedding) based on sensor input data in a pre-processing subsystem 104)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Castellani, Kim and Li, to use Li's method of pre-processing the sensor data before inputting the data into an encoder to implement the encoder-decoder neural network method of Castellani. The suggestion and/or motivation for doing so is to improve the efficiency of the machine learning method by preprocessing input sensor data into a format that can be readily processed by the encoder.

Regarding claim 17, Castellani teaches:

The non-transitory computer-readable storage medium of claim 16, wherein the fidelity score is based on a distance between the first feature embedding and at least one of the first cluster or the second cluster. ([Castellani, page 4736, left col, line 27 - right col, line 11] and [page 4735, Fig. 2] collectively disclose calculating reconstruction losses for each of the data samples x_DT and x_S and then calculating L_CL based on the distance between h1 and h2 (the first cluster and the second cluster). The total loss L = L_REC + L_CL + L_PCL is calculated from L_REC, L_CL, and L_PCL; the L_CL is interpreted as the fidelity score)

Regarding claim 18, Castellani teaches:

The non-transitory computer-readable storage medium of claim 16, comprises: ([Castellani, page 4737, right col, C. Implementation and Training Details, line 3-8] discloses utilizing a quad-core CPU, 16GB of DDR4 RAM, and a GPU) providing the first feature embedding to the encoder neural network; ([Castellani, page 4735, left col, III. PROPOSED ALGORITHMS, line 2-6; A. Cluster Centers, line 3-22; and Fig. 2] discloses using two datasets: a normal operation dataset N, generated with the DT simulation and represented as x_DT, and a small set A of labeled anomalous samples from the real-world measurement system, represented as x_S; h2 denotes the real-world embeddings generated from x_S)

Castellani in view of Kim and further in view of Degirmenci does not specifically disclose: wherein generating the first feature embedding based on the first set of sensor data comprises providing the first feature embedding to the encoder neural network.

Li teaches: wherein generating the first feature embedding based on the first set of sensor data comprises providing the first feature embedding to the encoder neural network. ([Li, col 6, line 49 - col 7, line 7] discloses generating first encoder input data (the first feature embedding) based on sensor input data in a pre-processing subsystem 104)

Claims 19 and 20 are non-transitory computer-readable storage medium claims having limitations similar to claims 5 and 6, respectively; they are rejected under the same rationale as claims 5 and 6 above.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Castellani in view of Kim, in view of Degirmenci, and further in view of KAWACHI et al. (US 20210326728 A1, hereinafter 'Kawachi').

Regarding claim 7, Castellani in view of Kim and further in view of Degirmenci teaches: The apparatus of claim 1. (See the 103 rejection above for details.)

Castellani in view of Kim and further in view of Degirmenci does not specifically disclose: wherein the first cluster or the second cluster comprises a hypersphere.

Kawachi teaches: wherein the first cluster or the second cluster comprises a hypersphere. ([Kawachi, 0018, 0029 and 0032] collectively disclose mapping a latent space of an autoencoder to a hypersphere)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art, having the teachings of Castellani, Kim, Degirmenci and Kawachi, to use the hypersphere cluster type to implement the encoder-decoder neural network method of Castellani. The suggestion and/or motivation for doing so is to improve the accuracy of the clustering algorithm: according to [Kawachi, 0018], a latent space on a hypersphere is finite, so when learning advances to a certain extent the entire latent space can be filled with clustering destinations and the target data can be reliably clustered to one of them.

Claim 14 is a method claim having limitations similar to claim 7; it is rejected under the same rationale as claim 7 above.

Response to Arguments

Response to Arguments under 35 U.S.C. 101

Applicant's arguments, see [Remarks, page 7], filed 11/19/2025, with respect to claims 1-20 have been fully considered and are persuasive. The 35 U.S.C. 101 rejection of claims 1-20 has been withdrawn.

Response to Arguments under 35 U.S.C. 103

Arguments: Applicant asserts that even if the cited art discloses two clusters, the cited art does not disclose how the clusters are used (to generate better simulated training data that can be used to train a machine learning model that an AV uses to navigate the AV in an environment) [Remarks, pages 8-9].
Examiner's Response: Applicant's arguments with respect to claims 1, 8 and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday - Friday, 7:30AM - 4:30PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JUN KWON/
Examiner, Art Unit 2127

/ABDULLAH AL KAWSAR/
Supervisory Patent Examiner, Art Unit 2127
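Two of the rejected dependent claims describe concrete latent-space geometry: a realism score based on the distance between a feature embedding and the first or second cluster (claims 3 and 17), and clusters comprising a hypersphere (claims 7 and 14, for which Kawachi's hypersphere latent space is cited). A sketch of both ideas together; the function names, the centroid representation, and the sign convention of the score are illustrative assumptions, not taken from the claims or the cited art:

```python
import numpy as np

def to_hypersphere(h):
    """L2-normalize latent vectors so they lie on the unit hypersphere,
    echoing Kawachi's finite hypersphere latent space."""
    return h / np.linalg.norm(h, axis=-1, keepdims=True)

def realism_score(embedding, sim_centroid, real_centroid):
    """Distance-based score in the spirit of claims 3/17: positive when the
    embedding sits closer to the real-world cluster centre than to the
    simulation cluster centre."""
    e = to_hypersphere(embedding)
    d_sim = np.linalg.norm(e - to_hypersphere(sim_centroid))
    d_real = np.linalg.norm(e - to_hypersphere(real_centroid))
    return d_sim - d_real
```

Normalizing both the embedding and the centroids keeps every comparison on the same finite surface, which is the property Kawachi is cited for.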

Prosecution Timeline

Nov 23, 2022
Application Filed
Aug 14, 2025
Non-Final Rejection — §101, §103
Nov 19, 2025
Response Filed
Feb 20, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569 — EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602609 — UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12579436 — Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572777 — Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12493772 — LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
Granted Dec 09, 2025 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 84% (+46.2%)
Median Time to Grant: 4y 3m
PTA Risk: Moderate

Based on 68 resolved cases by this examiner. Grant probability is derived from the career allow rate.
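The 84% figure matches the base grant probability plus the examiner's interview lift expressed in percentage points (38% + 46.2 ≈ 84%). A one-line sketch of that assumed derivation; whether the product actually computes it this way is an assumption:

```python
def projected_grant_probability(base_pct, interview_lift_pts):
    """Naive projection: add the historical interview lift (in percentage
    points) to the base grant probability, capped at 100%."""
    return min(base_pct + interview_lift_pts, 100.0)
```

With the page's numbers, `projected_grant_probability(38.0, 46.2)` yields 84.2, in line with the "With Interview: 84%" card.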
