Prosecution Insights
Last updated: April 19, 2026
Application No. 18/701,909

ARTIFICIAL INTELLIGENCE TRAINING METHOD FOR INDUSTRIAL ROBOT

Non-Final OA (§102)

Filed: Oct 29, 2024
Examiner: MARC, MCDIEUNEL
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: VAZIL Company Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% (1187 granted / 1305 resolved; +39.0% vs TC avg, above average)
Interview Lift: +7.4% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 1m (fast prosecutor), 21 currently pending
Total Applications: 1326 across all art units (career history)

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 35.1% (-4.9% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Comparisons are against Tech Center average estimates. Based on career data from 1305 resolved cases.

Office Action

§102
DETAILED ACTION

Claims 1-13 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).

Information Disclosure Statement

The information disclosure statement provided complies with the provisions of MPEP § 609. It has been placed in the application file, and the information referred to therein has been considered as to the merits. A signed copy of the form is attached.

Specification

The abstract of the disclosure is objected to because of the word “Disclosed”. Correction is required. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

Claims 1-13 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Khansari Zadeh et al. (US 20250153363).

As per claim 1, Khansari Zadeh et al. teaches a method for training a task performance model (see abs., and [0002-0005 and 0007]) of an industrial robot (see abs., and Fig. 1), the method performed by a computing device (see Fig. 1, element 130), the method comprising: constructing a simulation environment based on a task target object (see Fig. 1, element 162A) of an industrial robot (see abs., and Fig. 1); generating additional training data (see abs., and [0020 and 0027-0034]) of a task performance model (see abs., and [0002-0005 and 0007]) of the industrial robot (see abs., and Fig. 1) in the simulation environment (see abs., and [0001]); additionally training the task performance model (see abs., and [0002-0005 and 0007]) based on the additional training data (see abs., and [0020 and 0027-0034]); and updating the task performance model (see abs., and [0002-0005 and 0007]).
As per claim 2, Khansari Zadeh et al. teaches wherein the constructing of the simulation environment (see abs., and [0001]) based on the task target object (see Fig. 1, element 162A) of the industrial robot (see abs., and Fig. 1) includes: uploading data (see Fig. 1, the arrows have shown clear evidence of uploading and downloading) for the task target object (see Fig. 1, element 162A) to a cloud system (see pars. [0022 and 0023]); uploading environmental data (see Fig. 2A, elements 201A, 201B) associated with the task target object (see Fig. 1, element 162A) to the cloud system (see pars. [0022 and 0023]); and constructing the simulation environment (see abs., and [0001]) based on the data for the task target object (see Fig. 1, element 162A), and the environmental data (see Fig. 2A, elements 201A, 201B).

As per claim 3, Khansari Zadeh et al. teaches wherein the data for the task target object (see Fig. 1, element 162A) includes: point cloud data (see Fig. 1, the arrows have shown clear evidence of uploading and downloading cloud data) of the task target object (see Fig. 1, element 162A).

As per claim 4, Khansari Zadeh et al. teaches wherein the task performance model (see abs., and [0002-0005 and 0007]) includes a reinforced learning model (see abs., and [0001]), and wherein the additionally training of the task performance model (see abs., and [0002-0005 and 0007]) based on the additional training data (see abs., and [0020 and 0027-0034]) includes: acquiring state information for reinforced learning based on the simulation environment (see abs., and [0001]); determining a reward (see par. [0002]) for an action of the industrial robot (see abs., and Fig. 1) by using the state information based on the simulation environment (see abs., and [0001]); and performing the reinforced learning of the task performance model (see abs., and [0002-0005 and 0007]) based on the determined reward (see par. [0002]).
As per claim 5, Khansari Zadeh et al. teaches wherein the additionally training of the task performance model (see abs., and [0002-0005 and 0007]) based on the additional training data (see abs., and [0020 and 0027-0034]) further includes: performing re-training of the task performance model (see abs., and [0002-0005 and 0007]) when a performance of the additionally trained task performance model is less than a predetermined performance criterion (see abs., and [0002-0005 and 0007]).

As per claim 6, Khansari Zadeh et al. teaches wherein the constructing of the simulation environment (see abs., and [0001]) based on the task target object (see Fig. 1, element 162A) of the industrial robot (see abs., and Fig. 1) includes: constructing the simulation environment (see abs., and [0001]) in link with a monitoring operation for the task target object (see Fig. 1, element 162A).

As per claim 7, Khansari Zadeh et al. teaches wherein the constructing of the simulation environment (see abs., and [0001]) in link with the monitoring operation for the task target object (see Fig. 1, element 162A) includes: identifying the task target object (see Fig. 1, element 162A) by using an object recognition model (see par. [0091]); and generating a monitoring result (see par. [0002]) based on a result of identifying the task target object (see Fig. 1, element 162A).

As per claim 8, Khansari Zadeh et al. teaches wherein the identifying of the task target object (see Fig. 1, element 162A) by using the object recognition model (see par. [0091]) includes: identifying a type of the task target object (see Fig. 1, element 162A); and determining a type of task of the industrial robot based on the type (see abs., and Fig. 1).

As per claim 9, Khansari Zadeh et al. teaches wherein the updating of the task performance model (see abs., and [0002-0005 and 0007]) includes: determining whether the task target object (see Fig. 1, element 162A) is an object predefined as an input of the industrial robot (see abs., and Fig. 1) based on the monitoring result (see par. [0002]); updating the task performance model (see abs., and [0002-0005 and 0007]) in the simulation environment (see abs., and [0001]) based on the task target object (see Fig. 1, element 162A) when the task target object (see Fig. 1, element 162A) is an object not predefined as an input of the industrial robot (see abs., and Fig. 1); and maintaining the task performance model (see abs., and [0002-0005 and 0007]) when the task target object (see Fig. 1, element 162A) is an object predefined as an input of the industrial robot (see abs., and Fig. 1).

As per claim 10, Khansari Zadeh et al. teaches further comprising: performing user feedback based on the monitoring result (see par. [0002]).

As per claim 11, Khansari Zadeh et al. teaches wherein the user feedback includes (see par. [0002]): whether the task target object exists (see Fig. 1, element 162A); whether the task target object (see Fig. 1, element 162A) is an object not predefined as an input of the industrial robot (see abs., and Fig. 1); whether to update the task performance model; and a task performance situation (see abs., and [0002-0005 and 0007]).

As per claim 12, Khansari Zadeh et al. teaches a computer program (see pars. [0093-0094]) stored in a non-transitory computer readable storage medium (see par. [0126]), the computer program (see pars. [0093-0094]) including instructions which allow a computing device (see Fig. 1, element 130) to perform operations, the operations comprising: an operation of constructing a simulation environment (see abs., and [0001]) based on a task target object (see Fig. 1, element 162A) of an industrial robot (see abs., and Fig. 1); an operation of generating additional training data (see abs., and [0020 and 0027-0034]) of a task performance model (see abs., and [0002-0005 and 0007]) of the industrial robot (see abs., and Fig. 1) in the simulation environment (see abs., and [0001]); an operation of additionally training the task performance model (see abs., and [0002-0005 and 0007]) based on the additional training data (see abs., and [0020 and 0027-0034]); and an operation of updating the task performance model (see abs., and [0002-0005 and 0007]).

As per claim 13, Khansari Zadeh et al. teaches a computing device (see Fig. 1, element 130) comprising: at least one processor (see Fig. 8, element 814 and par. [0021]); and a memory (see Fig. 8, element 624), wherein at least one processor (see Fig. 8, element 814 and par. [0021]) is configured to: construct a simulation environment (see abs., and [0001]) based on a task target object (see Fig. 1, element 162A) of an industrial robot (see abs., and Fig. 1); generate additional training data (see abs., and [0020 and 0027-0034]) of a task performance model (see abs., and [0002-0005 and 0007]) of the industrial robot (see abs., and Fig. 1) in the simulation environment (see abs., and [0001]); additionally train the task performance model (see abs., and [0002-0005 and 0007]) based on the additional training data (see abs., and [0020 and 0027-0034]); and update the task performance model (see abs., and [0002-0005 and 0007]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MCDIEUNEL MARC whose telephone number is (571) 272-6964. The examiner can normally be reached from 9:00 AM to 7:30 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WADE MILES, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-3976. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/McDieunel Marc/
Primary Examiner, Art Unit 3665
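The claimed method, as mapped in the rejection, amounts to a simulation-driven reinforcement learning loop: construct a simulation environment around the task target object, generate training data in it, train on a determined reward, and re-train while performance stays below a criterion (claims 1, 4, and 5). Below is a minimal, self-contained sketch of that loop; the application discloses no source code, so every class and function name, and the toy 1-D "reach the target" task, are hypothetical stand-ins rather than the applicant's or Khansari Zadeh's implementation.

```python
class SimEnv:
    """Toy simulation environment built around a task target object (claim 1)."""

    def __init__(self, target):
        self.target = target
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += action
        # Claim 4: determine a reward for the robot's action from the state.
        reward = max(0.0, 1.0 - abs(self.target - self.state) / self.target)
        return self.state, reward, self.state == self.target


class TaskModel:
    """Toy task performance model: a table of per-state action values."""

    def __init__(self, actions=(1, -1)):
        self.actions = actions
        self.q = {}

    def act(self, state):
        # Greedy action choice; a real system would add exploration.
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, transitions, lr=0.5):
        # Claim 4: perform reinforcement learning from the determined rewards.
        for state, action, reward in transitions:
            old = self.q.get((state, action), 0.0)
            self.q[(state, action)] = old + lr * (reward - old)


def evaluate(model, env, max_steps=20):
    """Return 1.0 if the greedy policy reaches the target, else 0.0."""
    state = env.reset()
    for _ in range(max_steps):
        state, _, done = env.step(model.act(state))
        if done:
            return 1.0
    return 0.0


def train(target=3, criterion=0.9, max_rounds=20):
    env = SimEnv(target)                       # claim 1: construct simulation environment
    model = TaskModel()
    for _ in range(max_rounds):
        transitions, state, done = [], env.reset(), False
        while not done:                        # claim 1: generate additional training data
            action = model.act(state)
            next_state, reward, done = env.step(action)
            transitions.append((state, action, reward))
            state = next_state
        model.update(transitions)              # claims 1, 4: additionally train the model
        if evaluate(model, env) >= criterion:  # claim 5: re-train while below the criterion
            break
    return model                               # claim 1: updated task performance model
```

On the toy task, `train()` converges quickly; a production system would replace `SimEnv` with the point-cloud-based simulation of claims 2-3 and add exploration and monitoring feedback (claims 6-11) around this loop.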

Prosecution Timeline

Oct 29, 2024: Application Filed
Mar 31, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600043: Robot for Assisting With and Performing Household Chores (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596011: Method for Presenting Road Conditions, Road Condition Processing, Apparatuses, and Computer Device (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589485: Teaching Point Generation Device That Generates Teaching Points on Basis of Output of Sensor, and Teaching Point Generation Method (granted Mar 31, 2026; 2y 5m to grant)
Patent 12582039: Systems and Methods for Initial Harvest Path Prediction and Control (granted Mar 24, 2026; 2y 5m to grant)
Patent 12576515: Off-Line Learning for Robot Control Using a Reward Prediction Model (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview (+7.4%): 98%
Median Time to Grant: 2y 1m
PTA Risk: Low

Based on 1305 resolved cases by this examiner. Grant probability derived from career allow rate.
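The headline projections follow directly from the raw counts shown on this page (1187 granted of 1305 resolved, +7.4% interview lift). A quick sanity check; treating the interview lift as simply additive is my assumption about how the page combines the two figures:

```python
# Reproduce the dashboard's headline projections from the counts shown above.
granted, resolved, interview_lift = 1187, 1305, 7.4

allow_rate = granted / resolved * 100            # career allow rate, in percent
with_interview = allow_rate + interview_lift     # assumed additive interview lift

print(f"Grant probability: {allow_rate:.0f}%")   # 91%
print(f"With interview: {with_interview:.0f}%")  # 98%
```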
