Prosecution Insights
Last updated: April 19, 2026
Application No. 18/082,756

METHOD AND SYSTEM FOR CLASSIFYING SCENARIOS OF A VIRTUAL TEST, AND TRAINING METHOD

Status: Non-Final OA (§102)
Filed: Dec 16, 2022
Examiner: DO, AN H
Art Unit: 2853
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Dspace GmbH
OA Round: 3 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 91% (1293 granted / 1427 resolved), +22.6% vs TC average (above average)
Interview Lift: +6.7% (moderate), based on resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 25 applications currently pending
Career History: 1452 total applications across all art units
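The headline figures above can be reproduced from the raw counts the dashboard reports (1293 granted of 1427 resolved, 97% with interview). A quick check, assuming the interview lift is additive in percentage points (the page does not say whether it is additive or multiplicative):

```python
granted, resolved = 1293, 1427

# Career allow rate from the raw counts
allow_rate = 100 * granted / resolved
print(round(allow_rate))            # displayed as 91%

# Assumed additive lift of 6.7 percentage points with interview
with_interview = allow_rate + 6.7
print(round(with_interview))        # matches the 97% shown
```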

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 24.1% (-15.9% vs TC avg)
§102: 42.7% (+2.7% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 1427 resolved cases

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03 February 2026 has been entered.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4 and 6-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xu et al (US 10,438,371).
Xu et al disclose the following claimed features:

Regarding claim 1, a computer-implemented method for classifying scenarios of a virtual test (Figures 1-3), the method comprising: providing a first data set of sensor data (202, 204) of a travel of an ego vehicle captured by a plurality of vehicle-side surroundings detection sensors; transforming the first data set of sensor data into a single data-reduced second data set of sensor data (214), including the sensor data from each of the plurality of vehicle-side surroundings detection sensors (column 3, lines 45-65), by a first algorithm (212) or a multivariate data analysis method; applying a second machine learning algorithm (216) to the data-reduced second data set of sensor data for classifying scenarios comprised by the second data set (218); and outputting a third data set (224) having a plurality of classes representing a vehicle action; wherein the first algorithm carries out a principle component analysis method, and wherein the principle component analysis method combines correlating first features of the plurality of vehicle-side surroundings detection sensors into a single data-reduced feature as a linear combination of values of the plurality of surroundings detection sensors (Figures 1 and 2).

Regarding claim 2, wherein the plurality of vehicle-side surroundings detection sensors includes an essentially identical field of vision in sections, a data set of a first surroundings detection sensor, a data set of a second surroundings detection sensor, and a data set of a third surroundings detection sensor comprising at least one same object (column 1, line 66 to column 2, line 23).

Regarding claim 3, wherein the first surroundings detection sensor is formed by a radar sensor, the second surroundings detection sensor is formed by a LIDAR sensor, and the third surroundings detection sensor is formed by a camera sensor (column 1, line 66 to column 2, line 23).
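For context on the technique at issue: the principal component analysis recited in claim 1 combines correlated readings from several sensors into a single data-reduced feature as a linear combination. A minimal illustrative sketch, using hypothetical sensor data (not from the application or the Xu reference) and the first principal component as the combining weights:

```python
import numpy as np

# Hypothetical readings from three surroundings sensors (e.g. radar,
# lidar, camera) observing the same quantity with different noise levels.
rng = np.random.default_rng(42)
true_dist = rng.uniform(5, 50, size=200)
readings = np.column_stack(
    [true_dist + rng.normal(0, s, 200) for s in (0.5, 0.3, 0.8)]
)

# First principal component = the loading vector of the linear combination
Xs = readings - readings.mean(axis=0)          # center the features
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
w = Vt[0]                                      # combining weights
reduced = Xs @ w                               # single data-reduced feature
```

Because the three columns are strongly correlated, the single projected feature retains almost all of the shared signal.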
Regarding claim 4, wherein the first algorithm carries out a factor analysis method, a principle component analysis method, and/or a correspondence analysis method (Figure 2).

Regarding claim 6, wherein the second machine learning algorithm is formed by an artificial neural network, a size of an input layer being given by a number of second features of the data-reduced second data set, and a size of an output layer being given by a number of classes (Figure 2; column 5, line 4 to column 6, line 4).

Regarding claim 7, wherein a size of the input layer of the artificial neural network is identical to a size of the output layer of the artificial neural network (Figure 2; column 5, line 4 to column 6, line 4).

Regarding claim 8, wherein a number of hidden layers of the artificial neural network is smaller than the size of the input layer of the artificial neural network and the size of the output layer of the artificial neural network (Figure 2; column 5, line 4 to column 6, line 4).

Regarding claim 9, wherein the second machine learning algorithm carries out a multiclass classification, in which a probability is calculated for each class, and wherein the class having the highest probability is selected as a prediction (Figures 2 and 3).

Regarding claim 10, wherein a fourth data set having a logical scenario is generated based on the selected class representing the vehicle action (Figure 3).

Regarding claim 11, wherein the plurality of classes representing the vehicle action comprises at least one value of an acceleration operation, a braking operation, a change in direction and/or lane, a travel at a constant speed of the ego vehicle, a lane ID, and/or a time- or location-related condition for carrying out a vehicle action (column 14, lines 9-18).
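The multiclass classification recited in claim 9, where a probability is calculated for each class and the highest-probability class is selected as the prediction, is conventionally realized as a softmax over the network's outputs followed by an argmax. A short sketch under that assumption (the logits are hypothetical):

```python
import numpy as np

def classify(logits):
    """Return (predicted class index, per-class probabilities)."""
    # Softmax: subtract the max for numerical stability, then normalize
    e = np.exp(logits - np.max(logits))
    probs = e / e.sum()
    # Claim 9: the class with the highest probability is the prediction
    return int(np.argmax(probs)), probs

pred, probs = classify(np.array([0.1, 2.0, 0.5]))
```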
Regarding claim 12, wherein, for the purpose of transforming the first data set of sensor data into a data-reduced second data set of sensor data, the first algorithm or the multivariate data analysis method comprises: a standardization of the first data set of sensor data of a travel of the ego vehicle captured by the plurality of vehicle-side surroundings detection sensors; a calculation of a covariance matrix from the standardized first data set; a determination of eigenvectors representing principle components; and a creation of a matrix made up of the determined eigenvectors for providing a data-reduced second data set (Figures 2 and 3).

Regarding claim 13, a computer-implemented method for providing a trained second machine learning algorithm (216) for classifying scenarios of a virtual test (Figures 4 and 5; column 8, lines 43-62), the method comprising: receiving a single data-reduced second data set of sensor data (214), including the sensor data from each of the plurality of vehicle-side surroundings detection sensors (column 3, lines 45-65), transformed by a first algorithm (212) or a multivariate data analysis method based on a first data set of sensor data (202, 204) of a travel of an ego vehicle captured by a plurality of vehicle-side surroundings detection sensors; receiving a third data set (224) having a plurality of classes representing a vehicle action; and training the second machine learning algorithm (216) by an optimization algorithm, which calculates an extreme value of a loss function for classifying scenarios of a virtual test (Figures 2 and 3); wherein the first algorithm carries out a principle component analysis method, and wherein the principle component analysis method combines correlating first features of the plurality of vehicle-side surroundings detection sensors into a single data-reduced feature as a linear combination of values of the plurality of surroundings detection sensors (Figures 1 and 2).
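Claim 12 recites the textbook PCA pipeline: standardization, covariance matrix, eigenvector determination, and a projection matrix built from the eigenvectors. Those steps map directly onto a few lines of NumPy; the sketch below is illustrative only (the function name, the number of retained components `k`, and the random data are assumptions, not from the application):

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce X (samples x sensor features) to k principal components."""
    # Step 1: standardization (zero mean, unit variance per feature)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # Step 2: covariance matrix of the standardized data
    cov = np.cov(Xs, rowvar=False)
    # Step 3: eigenvectors of the covariance matrix (principal components);
    # eigh returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(cov)
    # Step 4: matrix of the k eigenvectors with the largest eigenvalues
    order = np.argsort(vals)[::-1][:k]
    W = vecs[:, order]
    return Xs @ W          # the data-reduced second data set

rng = np.random.default_rng(0)
Z = pca_reduce(rng.normal(size=(100, 5)), k=2)
```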
Regarding claim 14, a system for classifying scenarios of a virtual test, the system comprising: a plurality of vehicle-side surroundings detection sensors to provide a first data set of sensor data (202, 204) of a captured travel of an ego vehicle; a transformer to transform the first data set of sensor data into a single data-reduced second data set of sensor data (214), including the sensor data from each of the plurality of vehicle-side surroundings detection sensors (column 3, lines 45-64), by a first algorithm (212) or a multivariate data analysis method; an applicator to apply a second machine learning algorithm (216) to the data-reduced second data set of sensor data for classifying scenarios comprised by the second data set (218), the applicator being configured to output a third data set (224) having a plurality of classes representing a vehicle action (Figures 1-3); wherein the first algorithm carries out a principle component analysis method, and wherein the principle component analysis method combines correlating first features of the plurality of vehicle-side surroundings detection sensors into a single data-reduced feature as a linear combination of values of the plurality of surroundings detection sensors (Figures 1 and 2).

Regarding claim 15, a computer program including program code for carrying out the method when the computer program is executed on a computer (column 9, lines 10-17).

Regarding claim 16, a non-transitory computer-readable storage medium including program code for carrying out the method when executed on a computer (column 8, line 63 to column 9, line 8).

Regarding claim 17, wherein the single data-reduced second data set includes a linear combination of values of the plurality of vehicle-side surroundings detection sensors (column 3, lines 45-64).

Response to Arguments

Applicant's arguments filed 03 February 2026 have been fully considered but they are not persuasive.
Applicant amended independent Claims 1, 13 and 14 by incorporating the limitation of claim 5 into these claims. However, this argument is found not persuasive since the limitation of claim 5 is disclosed or taught by Xu et al in Figures 1 and 2 as shown in the above detailed office action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN H DO whose telephone number is (571)272-2143. The examiner can normally be reached on M-F 7:5:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Meier, can be reached on 571-272-2149. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AN H DO/ Primary Examiner, Art Unit 2853

Prosecution Timeline

Dec 16, 2022
Application Filed
Jul 09, 2025
Non-Final Rejection — §102
Oct 10, 2025
Response Filed
Oct 31, 2025
Final Rejection — §102
Dec 29, 2025
Response after Non-Final Action
Feb 03, 2026
Request for Continued Examination
Feb 10, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599058
METHODS AND SYSTEMS FOR GENERATING FERTILIZER FORMULAS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12594646
WINDOW LOGIC FOR CONTROL OF POLISHING PROCESS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585018
TIME-OF-FLIGHT SENSING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578653
METHOD FOR DETERMINING A SAMPLING SCHEME, A SEMICONDUCTOR SUBSTRATE MEASUREMENT APPARATUS, A LITHOGRAPHIC APPARATUS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12580053
PROTAC TARGET MOLECULE GENERATION METHOD, A COMPUTER SYSTEM, AND A STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 91% (97% with interview, +6.7%)
Median Time to Grant: 2y 3m
PTA Risk: High
Based on 1427 resolved cases by this examiner. Grant probability derived from career allow rate.
