Prosecution Insights
Last updated: April 19, 2026
Application No. 18/717,492

TEACHING SYSTEM, ROBOT SYSTEM, TEACHING METHOD FOR ROBOT, AND TEACHING PROGRAM FOR ROBOT

Non-Final OA: §102, §103, §112
Filed
Jun 07, 2024
Examiner
CAIN, AARON G
Art Unit
3656
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Kawasaki Jukogyo Kabushiki Kaisha
OA Round
1 (Non-Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 40% (grants 40% of resolved cases; 52 granted / 130 resolved; -12.0% vs TC avg)
Interview Lift: +26.1% (strong lift for resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline); 42 currently pending
Total Applications: 172 across all art units
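The headline probabilities above are simple ratios over this examiner's career data. As a sanity check, a minimal Python sketch (using only the figures shown on this page, and assuming the with-interview number is the base rate plus the additive lift) reproduces them:

```python
# Figures reported above for this examiner (assumed exact).
granted = 52
resolved = 130

allow_rate = granted / resolved            # career allow rate
print(f"allow rate: {allow_rate:.0%}")     # allow rate: 40%

# Assumption about the tool's methodology: the interview lift is
# added directly to the base grant probability.
interview_lift = 0.261
print(f"with interview: {allow_rate + interview_lift:.0%}")  # with interview: 66%
```

This is only a reconstruction of the displayed arithmetic, not the tool's actual model.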

Statute-Specific Performance

§101
4.3%
-35.7% vs TC avg
§103
57.4%
+17.4% vs TC avg
§102
19.7%
-20.3% vs TC avg
§112
17.7%
-22.3% vs TC avg
Tech Center average estimate shown for comparison • Based on career data from 130 resolved cases
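Each "vs TC avg" delta reads as the examiner's rate minus the Tech Center average, so the TC baseline can be backed out by subtraction. A small sketch under that assumption, with the rates taken from the list above:

```python
# (examiner rate, delta vs TC avg) per statute, as listed above.
stats = {
    "101": (0.043, -0.357),
    "103": (0.574, +0.174),
    "102": (0.197, -0.203),
    "112": (0.177, -0.223),
}

# Recover the implied Tech Center average: tc_avg = examiner - delta.
for statute, (examiner, delta) in stats.items():
    print(f"§{statute}: implied TC avg = {examiner - delta:.1%}")
```

Notably, all four statutes imply the same TC baseline of roughly 40%, which hints that the tool may be comparing against a single overall TC-average rate rather than statute-specific baselines.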

Office Action

Rejections: §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the application filed 06/07/2024. Claims 1-12 are presently pending and are presented for examination.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 06/07/2024 and 09/10/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.

Such claim limitation(s) is/are: "a teaching point generator that generates a teaching point" and "an image generator that generates a virtual image" in claim 1. Take note, in Advanced Ground Information Systems, Inc. v. Life360, Inc., 830 F.3d 1341, 119 USPQ2d 1526 (Fed. Cir. 2016), the Federal Circuit determined that the term "symbol generator" is a computer-implemented means-plus-function limitation and that "[t]he specifications of the patents-in-suit do not disclose an operative algorithm for the claim elements reciting 'symbol generator.'" 830 F.3d at 1348-49, 119 USPQ2d at 1529-30. This means the word "generator" is a generic placeholder for the purposes of interpretation under 35 U.S.C. 112(f).

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The following is taken from MPEP 2173.04:

Breadth of a claim is not to be equated with indefiniteness. In re Miller, 441 F.2d 689, 169 USPQ 597 (CCPA 1971); In re Gardner, 427 F.2d 786, 788, 166 USPQ 138, 140 (CCPA 1970) ("Breadth is not indefiniteness."). A broad claim is not indefinite merely because it encompasses a wide scope of subject matter provided the scope is clearly defined. But a claim is indefinite when the boundaries of the protected subject matter are not clearly delineated and the scope is unclear. For example, a genus claim that covers multiple species is broad, but is not indefinite because of its breadth, which is otherwise clear. But a genus claim that could be interpreted in such a way that it is not clear which species are covered would be indefinite (e.g., because there is more than one reasonable interpretation of what species are included in the claim).

Claim 5 recites the limitation "a predetermined injection object" in line 2. The common definition of "an object", as taken from Dictionary.com, is "anything that is visible or tangible and is relatively stable in form." However, the specification of the current application provides examples of an injection object in paragraph 152: "The robot 1 is not limited to an industrial robot and may be a medical robot. The treatment performed by the robot 1 is not limited to coating, and may be welding, cleaning, or shot blasting, for example. The treatment performed by the robot 1 may also be inspection of a workpiece. The injection object injected by the robot 1 is not limited to paint, and may be ink, a cleaning solution, water, a filler metal, a polishing agent (e.g., short material), a sealing material, a laser, flame, ultrasonic waves, electromagnetism, and so forth. The inspection may be a treatment of inspecting appearance of a workpiece with a camera included in the end effector 12" [0152].

Items such as "paint, ink, a cleaning solution, or water" are in a completely different genus of objects than "a laser, flame, ultrasonic waves, electromagnetism". The second category does not include items that fit the common definition of "an object", as they are not tangible, and the items in the first category are questionable, as they are not relatively stable in form. The difference between these objects also radically impacts the nature of the invention, as the image generator of claim 1 will have to differ depending on the type of object. For instance, a standard camera may be sufficient for showing paint, but not for showing electromagnetism.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6 and 8-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kuwahara, US 20150112482 A1 ("Kuwahara").

Regarding Claim 1.
Kuwahara teaches a teaching system comprising:

a teaching point generator that generates a teaching point of a robot (A "teaching point" is information indicating a target position through which each joint of the robot is caused to pass to reproductively operate the robot. FIG. 6F shows a dialog box that reads "generate teaching point on projection plane", where an operator can select whether or not to generate a teaching point on the designated plane [paragraph 96]), the robot including a tool that performs a treatment on a workpiece in a non-contact manner (FIG. 1 shows the robot 30 and a workpiece W, where the workpiece is a door handle. The primary example in Kuwahara of work to be done by the robot is a coating robot that is applying a coating to a workpiece [paragraph 21]);

an operator that is operated by a user (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13 [paragraph 25]);

an image generator that generates a virtual image in which a virtual tool corresponding to the tool and a virtual workpiece corresponding to the workpiece are placed in a virtual space (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]);

and a display that displays the virtual image (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]),

wherein the image generator generates the virtual image in which the virtual tool moves in accordance with an operation to the operator and performs a treatment on the virtual workpiece (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]. The teaching controller 11 generates a job program for operating the robot 30 from the virtual image based on an operation performed by the operator with the operating unit 13 and registers the job program in the job information DB 14 [paragraph 26]. Furthermore, the teaching system 10 can read the teaching points and the job program registered in the job information DB 14 based on the instruction operation performed by the operator. Thus, the teaching system 10 can display the virtual image of the robot 30 whose tip of the end effector 35 reaches a specific teaching point and reproduce a series of operation of the robot 30 performed by the job program on the display unit 12 [paragraph 70]),

and the teaching point generator generates the teaching point corresponding to a position of the virtual tool generated by the image generator in the virtual space (FIGS. 6A-6H).

Regarding Claim 2. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the image generator applies a display to a portion of the virtual workpiece subjected to the treatment by the tool to indicate that the portion has been subjected to the treatment (FIG. 4 shows the virtual image displayed on the display unit with a robot and a workpiece [paragraph 64]. FIG. 5A shows the virtual image after a partial application of the coating of surface P [paragraph 73]).

Regarding Claim 3. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the image generator displays an image indicating a degree of a distance from the virtual tool in the virtual image (The work line generating unit 111e generates the work line WC such that the group of target points is arranged not on the actual coating surface P of the workpiece W but on the projection plane PP as illustrated in FIG. 6G. At this time, the projection plane PP is generated at a position away from the point P1 in the normal direction by the distance "d" [paragraph 97, FIG. 6]).

Regarding Claim 4. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the image generator displays an image indicating a posture of the virtual tool in the virtual image (FIG. 4 shows an example of the image generator displaying an image of a posture of the tool in the virtual image. The image generating unit generates a virtual image including a robot and a workpiece having a processed surface to be processed by the robot. The projecting unit generates a projection plane orthogonal to a normal direction of a desired point on the processed surface selected on the virtual image and projects the processed surface onto the projection plane [paragraph 106]. The work line generating unit generates a work line serving as a group of target points for the robot based on setting contents received via the projection plane. The arithmetic unit calculates a teaching value including a position and a posture of the robot at each point of the target points [paragraph 107]).

Regarding Claim 5. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the tool injects a predetermined injection object toward the workpiece in the treatment (paragraph 21 describes how the robot can be a painting robot coating a workpiece), and the image generator displays a virtual injection object corresponding to the injection object or a virtual injection range corresponding to an injection range of the injection object, in the virtual image (FIG. 5A).

Regarding Claim 6. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the treatment is coating, welding, cleaning, or shot blasting (paragraph 21).

Regarding Claim 8. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the robot further includes a robot arm to which the tool is coupled, and the image generator displays a virtual arm corresponding to the robot arm in the virtual image (FIG. 4).

Regarding Claim 9. Kuwahara teaches the teaching system according to claim 8. Kuwahara also teaches: wherein the image generator switches display of the virtual arm in the virtual image between display and non-display (Based on the explanation of what is meant by "display" and "non-display" in paragraph [0117] of the present application, FIGS. 6A-6F show an example of the display with a dialogue box in FIG. 6C that switches on and off the generation of teaching points on a projection plane. FIG. 6E shows the virtual image with a group of target points arranged on the coating surface, while FIG. 4 shows the display with no visible teaching points).

Regarding Claim 10. Kuwahara teaches a robot system comprising: the teaching system according to claim 1; and a robot that moves in accordance with the teaching point generated by the teaching point generator (the robot in FIGS. 6A-6H, shown to follow the teaching points generated by the system).

Regarding Claim 11. Kuwahara teaches a teaching method for a robot including a tool that performs a treatment on a workpiece in a non-contact manner (FIG. 1 shows the robot 30 and a workpiece W, where the workpiece is a door handle. The primary example in Kuwahara of work to be done by the robot is a coating robot that is applying a coating to a workpiece [paragraph 21]), the method comprising: generating a virtual image in which a virtual tool corresponding to the tool and a virtual workpiece corresponding to the workpiece are placed in a virtual space (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]); displaying the virtual image (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]); moving the virtual tool in the virtual image in accordance with an operation from a user to an operator for moving the virtual tool and causing the virtual tool to perform a treatment on the virtual workpiece (the robot in FIGS. 6A-6H, shown to follow the teaching points generated by the system); and generating a teaching point corresponding to a position of the virtual tool in the virtual space (FIGS. 6A-6H).

Regarding Claim 12. Kuwahara teaches a non-transitory storage medium (The storage unit at 112 of FIG. 2 is a storage device, such as a hard disk drive and a non-volatile memory [paragraph 103]) storing a teaching program for a robot including a tool that performs a treatment on a workpiece in a non-contact manner (FIG. 1 shows the robot 30 and a workpiece W, where the workpiece is a door handle. The primary example in Kuwahara of work to be done by the robot is a coating robot that is applying a coating to a workpiece [paragraph 21]), the program causing a computer to perform the functions of: generating a virtual image in which a virtual tool corresponding to the tool and a virtual workpiece corresponding to the workpiece are placed in a virtual space (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]); moving the virtual tool in the virtual image in accordance with operation from a user to an operator for moving the virtual tool (The teaching controller 11 outputs a virtual image including the robot 30 whose operation is subjected to a simulation operation to the display unit 12 based on an operation performed by an operator with the operating unit 13. The virtual image further includes a workpiece W having a processed surface to be processed by the robot 30 [paragraph 25]. The teaching controller 11 generates a job program for operating the robot 30 from the virtual image based on an operation performed by the operator with the operating unit 13 and registers the job program in the job information DB 14 [paragraph 26]. Furthermore, the teaching system 10 can read the teaching points and the job program registered in the job information DB 14 based on the instruction operation performed by the operator. Thus, the teaching system 10 can display the virtual image of the robot 30 whose tip of the end effector 35 reaches a specific teaching point and reproduce a series of operation of the robot 30 performed by the job program on the display unit 12 [paragraph 70]); and generating a teaching point corresponding to a position of the virtual tool in the virtual space (FIGS. 6A-6H).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara US 20150112482 A1 ("Kuwahara") as applied to claim 1 above, and further in view of Adams et al. US 20120276281 A1 ("Adams").

Regarding Claim 7. Kuwahara teaches the teaching system according to claim 1. Kuwahara also teaches: wherein the image generator displays, in the virtual image, an image indicating that a portion of the virtual workpiece subjected to the treatment has been subjected to the treatment (FIG. 5A shows the painting robot covering a plurality of surfaces to be selected in the virtual image and coated by the robot). Kuwahara does not teach: and being determinable for a portion where the treatment overlaps (Kuwahara is silent about overlaps, although this is implicit due to the nature of any application of paint through a sprayer tool, as the sprayed paint will randomly overlap areas already treated).
However, Adams teaches: and being determinable for a portion where the treatment overlaps (FIG. 2 shows a method of applying multiple layers of paint to a workpiece [paragraph 19]. This would necessarily involve overlapping portions of the workpiece to apply the treatment, as the entire workpiece becomes the portion where the treatment is applied multiple times).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Kuwahara with "and being determinable for a portion where the treatment overlaps" as taught by Adams, so that the robot can be used to apply multiple coats of treatment when necessary.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON G CAIN, whose telephone number is (571) 272-7009. The examiner can normally be reached Monday to Friday, 7:30am - 4:30pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON G CAIN/
Examiner, Art Unit 3656

Prosecution Timeline

Jun 07, 2024
Application Filed
Dec 04, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573302
METHOD FOR INFRASTRUCTURE-SUPPORTED ASSISTING OF A MOTOR VEHICLE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12558790
METHOD AND COMPUTING SYSTEMS FOR PERFORMING OBJECT DETECTION
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12552019
MACHINE LEARNING METHOD AND ROBOT SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12544144
DENTAL ROBOT AND ORAL NAVIGATION METHOD
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541205
MOVEMENT CONTROL SUPPORT DEVICE AND METHOD
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 40%
With Interview: 66% (+26.1%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
