Prosecution Insights
Last updated: April 19, 2026
Application No. 18/394,121

OFFLINE TEACHING DEVICE AND OFFLINE TEACHING METHOD

Final Rejection: §103, §DP
Filed: Dec 22, 2023
Examiner: LE, TIEN MINH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 12m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 68% (55 granted / 81 resolved; +15.9% vs TC avg; above average)
Interview Lift: +23.8% across resolved cases with interview (strong)
Avg Prosecution: 2y 12m (typical timeline); 30 applications currently pending
Career History: 111 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      8.1%     -31.9%
§103      51.7%    +11.7%
§102      18.5%    -21.5%
§112      18.8%    -21.2%

Tech Center averages are estimates. Based on career data from 81 resolved cases.
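The headline figures above are simple arithmetic over the examiner's displayed case counts. The short Python sketch below reproduces that arithmetic; it is not the analytics vendor's actual model, and the variable names are ours. The "implied TC average" values are only what the displayed deltas imply, treated as percentage-point differences.

# Minimal sketch: rederiving the dashboard numbers from the raw counts shown.
granted, resolved = 55, 81                       # "55 granted / 81 resolved"
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")      # ~67.9%, shown as 68%

tc_avg_allow = career_allow_rate - 0.159                  # "+15.9% vs TC avg"
print(f"Implied TC average allow rate: {tc_avg_allow:.1%}")  # ~52.0%

# Interview lift is reported as the allowance-rate difference between
# resolved cases with and without an examiner interview (+23.8%).
def interview_lift(rate_with: float, rate_without: float) -> float:
    return rate_with - rate_without

# Statute-specific rates and their deltas vs the Tech Center average estimate.
statute_rates = {"101": 0.081, "103": 0.517, "102": 0.185, "112": 0.188}
tc_deltas     = {"101": -0.319, "103": 0.117, "102": -0.215, "112": -0.212}
for s, r in statute_rates.items():
    print(f"§{s}: {r:.1%} (implied TC avg ≈ {r - tc_deltas[s]:.1%})")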

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Final Office Action on the merits. Claims 1-10 are currently pending and are addressed below.

Response to Amendment

1. The amendment filed 11/19/2025 has been entered. Claims 1-10 remain pending in the application. Applicant’s amendments to the claims and the terminal disclaimer have overcome each 112(b) and Double Patenting rejection previously set forth in the Non-Final Office Action mailed August 20, 2025.

Response to Arguments

2. Regarding the rejection made under 35 USC 103, Applicant’s arguments filed 11/19/2025 have been fully considered but are moot because the arguments do not apply to the combination of references and/or rationale being used in the current rejection.

Claim Interpretation

3. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

4. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
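The three-prong test and its paired presumptions read mechanically enough to restate as a checklist. The sketch below is only an illustration of how the presumptions combine; the boolean fields stand in for examiner judgment calls, the names are invented here, and nothing in it comes from the file wrapper or automates the legal analysis.

# Simplified sketch of the MPEP § 2181 three-prong flow, with judgment calls
# left as booleans supplied by the reader.
from dataclasses import dataclass

@dataclass
class Limitation:
    uses_means_or_step: bool            # literally recites "means" or "step"
    is_generic_placeholder: bool        # prong (A): nonce term ("unit", "device", ...)
    has_functional_language: bool       # prong (B): "for ...", "configured to ...", "so that ..."
    recites_sufficient_structure: bool  # defeats prong (C) if True

def invokes_112f(lim: Limitation) -> bool:
    if lim.uses_means_or_step:
        # Presumption of 112(f), rebutted by sufficient structure in the claim.
        return not lim.recites_sufficient_structure
    # No "means": presumption against 112(f), rebutted only if all three prongs hold.
    return (lim.is_generic_placeholder
            and lim.has_functional_language
            and not lim.recites_sufficient_structure)

# E.g., the "control unit ... that disposes ... and that creates and outputs ..."
# limitation of claim 1, as the examiner treats it in paragraph 5 below.
control_unit = Limitation(False, True, True, False)
assert invokes_112f(control_unit)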
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

5. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“input unit” in claim 1;
“acquisition unit” in claim 1;
“generation unit” in claims 1, 5, 6, and 7;
“control unit” in claims 1, 2, 3, 4, 5, 6, and 7;
“input device” in claims 8 and 9.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1-4 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara et al. (US 20190143514, hereinafter Kuwahara) in view of Izawa et al. (US 20070145027, hereinafter Izawa) and in further view of Shimodaira et al. (US 20180250823, hereinafter Shimodaira).

Regarding claim 1, Kuwahara teaches an offline teaching device (see at least Fig. 1) comprising: an input unit that receives an operator operation (see at least [0066]: “The generation part 11d generates the first teaching information 12b, which specifies motions of the examination robot 20C, based on the motion path TR2 (see FIG. 2C) determined by the determination part 11c. Then, the generation part 11d causes the storage 12 to store the generated first teaching information 12b. In response to, for example, an input operation performed by a worker, the outputting part 11e outputs the first teaching information 12b stored in the storage 12 to the robot controller 30 connected to the examination robot 20C.”); an acquisition unit that acquires three-dimensional shape data of a workpiece produced by welding, an operation trajectory of the welding (see at least [0024]: “A teaching method according to this embodiment will be outlined by referring to FIG. 1. FIG. 1 outlines the teaching method according to this embodiment. In the following description, welding work is performed as an example of work performed on a workpiece W, and a welding trace (bead trace) left as a result of welding work is examined. Another possible example of the work performed on the workpiece W is to change roughness of the surface of the workpiece W. Still another possible example of the work is to form a groove on the workpiece W. Still another possible example of the work is to draw a picture on the workpiece W.”; [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0036]: “First, a case where second teaching information is used as result information will be described. As illustrated in FIG. 2A, the second teaching information obtained by the teaching apparatus 10 (see FIG. 1) is information corresponding to the motion path, TR1, of the work robot 20W. The motion path (work path) TR1 is a path taken by a representative point set on the work robot 20W (an example representative point is the leading end of the work robot 20W).”), and a scanning range of a sensor configured to scan an appearance shape of the workpiece (see at least [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1.
The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”); a generation unit that generates at least one three-dimensional region of an inspection area based on the acquired scanning range and a scanning section, which is an area to be scanned by the sensor, the at least one three-dimensional region to be scanned by the sensor (see at least [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; Fig. 2C and [0043]: “As seen from FIG. 2C, the examination sections and the sections ON1 to ON4 do not overlap. This is because the examination device 100 illustrated in FIG. 1 and the work tool 200 are different from each other in shape. Based on the difference in shape, the teaching apparatus 10 generates examination-use paths such that the work regions WR1 to WR4 are included in the examinable range 101 (see FIG. 1).”); and a control unit that disposes at least one of the three-dimensional region to be scanned by the sensor on the three-dimensional shape data of the workpiece based on the operator operation input to the input unit, and that creates and outputs, to a robot, a teaching program for scanning the at least one three-dimensional region based on the at least one three-dimensional region and the operation trajectory of the welding (see at least Fig. 1 and [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; [0028]: “In the embodiment of FIG. 1, the examination robot 20C and the work robot 20W are the same type of robots 20, and the same type of robot controllers 30 are used to control motions of the examination robot 20C and the work robot 20W. This configuration, however, is not intended in a limiting sense.
Another possible embodiment is that the examination robot 20C and the work robot 20W are different types of robots, and the robot controllers 30 are different types of robot controllers.”; [0029]: “As illustrated in FIG. 1, in the teaching method according to this embodiment, a teaching apparatus 10 obtains “second teaching information” (for example, work-use teaching information) from the robot controller 30 that controls motions of the work robot 20W. The second teaching information is teaching information that specifies motions of the work robot 20W.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”).

Kuwahara fails to explicitly teach outputting, to a welding robot that performs the welding, a program for scanning the three-dimensional region. However, Izawa teaches a system and method for using a working robot to do a welding operation that comprises outputting, to a welding robot that performs a welding, a program for scanning a three-dimensional region (see at least Fig. 1 and [0045]: “The laser sensor head 17 detects the shape of a region including the welding point on the work W, by means of laser. The laser sensor head 17 is attached to an outer surface of the welding torch 16. More specifically, the laser sensor head 17 is attached on a side of the welding torch 16 along a direction of welding (Direction X). The laser sensor head 17 applies a laser beam to the work W, receives a reflected beam, and thereby detects the shape of the work W as two-dimensional information. More specifically, the laser sensor head 17 scans in Direction Y, as the welding torch 16 moves in the Direction X in FIG. 2, at a predetermined timing (at a predetermined pitch in Direction X), whereby the laser sensor head 17 detects an outer shape of the work W in the ZY plane, at each of the scanning point. For example, at a scanning point X1, the sensor head outputs a square pulse signal representing the outer shape of the work W in the ZY plane. The work W shape information detected by the laser sensor head 17 is fed to the sensor controller 20.”; [0047]: “The personal computer 30 generates instruction-point data, based on the coordinate information about the welding points of the welding torch 16 sent from the sensor controller 20, and gives the generated data to the robot controller 40. The "instruction-point data" is coordinate information which defines a path for the welding torch 16 to move on, with a plurality of three-dimensional points and an attitude of the welding torch 16 at each of the points.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Izawa and provide a means to output, to a welding robot that performs a welding, a program for scanning a three-dimensional region, with a reasonable expectation of success, in order to utilize a same robot to scan and weld to simplify a setup and minimize cost of having multiple robots.

The combination of Kuwahara and Izawa fails to explicitly teach displaying the at least one three-dimensional region to be scanned by the sensor on the three-dimensional shape data of the workpiece. However, Shimodaira teaches an apparatus and method for controlling a robot that displays at least one three-dimensional region to be scanned by a sensor on the three-dimensional shape data of a workpiece (see at least Figs. 1-2 and [0229]: “The three-dimensional shape data may be generated on the sensor unit side. In this case, an image processing IC or the like realizing a function of generating three-dimensional shape data is provided on the sensor unit side. Alternatively, there may be a configuration in which three-dimensional shape data is not generated by the robot setting apparatus side, and the robot setting apparatus performs image processing on an image captured by the sensor unit side so as to generate three-dimensional shape data such as a three-dimensional image.”; [0233]: “The display unit 3 is a member for displaying a three-dimensional shape of a workpiece acquired in the robot setting apparatus 100 or checking various settings or an operation state, and may employ a liquid crystal monitor, an organic EL display, or a CRT.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara and Izawa to incorporate the teachings of Shimodaira and provide a means to display at least one three-dimensional region to be scanned by a sensor on the three-dimensional shape data of a workpiece, with a reasonable expectation of success, in order to provide a visualization to the user of the workpiece and region.

Regarding claim 2, modified Kuwahara teaches the limitations of claim 1. Kuwahara further teaches wherein the control unit creates the teaching program based on the at least one three-dimensional region, the operation trajectory of the welding, and operation information of the welding robot that performs the welding, which is associated with the three-dimensional shape data (see at least [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100.
The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; [0029]: “As illustrated in FIG. 1, in the teaching method according to this embodiment, a teaching apparatus 10 obtains “second teaching information” (for example, work-use teaching information) from the robot controller 30 that controls motions of the work robot 20W. The second teaching information is teaching information that specifies motions of the work robot 20W.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”).

Regarding claim 3, modified Kuwahara teaches the limitations of claim 2. Kuwahara further teaches wherein the control unit generates, based on the operation information, various operations of the welding robot for the workpiece and a scanning operation for each of the at least one three-dimensional region executed by the robot, and generates the teaching program by associating the various operations with the scanning operation corresponding to each of the at least one generated three-dimensional region (see at least Fig. 2A and [0038]: “In FIG. 2A, suffixes 1 to 4 are added to the end of “WR” of the work region WR on the workpiece W. The suffixes indicate the order in which the welding work proceeds. Similarly, suffixes 1 to 4 are added to the end of “ON” of the sections ON so that sections ON1 to ON4 respectively correspond to the work regions WR1 to WR4. Suffixes 1 to 3 are added to the end of “OFF” of the sections OFF (sections OFF1 to OFF3), the suffixes indicating the order in which the sections OFF are passed. Each of the work regions WR includes a three-dimensional shape such as a bead trace.”).

Kuwahara fails to explicitly teach a scanning operation executed by the welding robot. However, Izawa teaches a system and method for using a working robot to do a welding operation that comprises a scanning operation executed by a welding robot (see at least Fig. 1 and [0045]: “The laser sensor head 17 detects the shape of a region including the welding point on the work W, by means of laser. The laser sensor head 17 is attached to an outer surface of the welding torch 16. More specifically, the laser sensor head 17 is attached on a side of the welding torch 16 along a direction of welding (Direction X). The laser sensor head 17 applies a laser beam to the work W, receives a reflected beam, and thereby detects the shape of the work W as two-dimensional information. More specifically, the laser sensor head 17 scans in Direction Y, as the welding torch 16 moves in the Direction X in FIG. 2, at a predetermined timing (at a predetermined pitch in Direction X), whereby the laser sensor head 17 detects an outer shape of the work W in the ZY plane, at each of the scanning point. For example, at a scanning point X1, the sensor head outputs a square pulse signal representing the outer shape of the work W in the ZY plane. The work W shape information detected by the laser sensor head 17 is fed to the sensor controller 20.”; [0047]: “The personal computer 30 generates instruction-point data, based on the coordinate information about the welding points of the welding torch 16 sent from the sensor controller 20, and gives the generated data to the robot controller 40. The "instruction-point data" is coordinate information which defines a path for the welding torch 16 to move on, with a plurality of three-dimensional points and an attitude of the welding torch 16 at each of the points.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Izawa and provide a scanning operation executed by a welding robot, with a reasonable expectation of success, in order to utilize a same robot to scan and weld to simplify a setup and minimize cost of having multiple robots.

Regarding claim 4, modified Kuwahara teaches the limitations of claim 1. Kuwahara further teaches wherein the control unit extracts a welding line of the welding associated with the three-dimensional shape data, and creates and outputs the teaching program in which the welding line included in the at least one three-dimensional region is set as a scanning portion of the sensor (see at least [0024]: “A teaching method according to this embodiment will be outlined by referring to FIG. 1. FIG. 1 outlines the teaching method according to this embodiment. In the following description, welding work is performed as an example of work performed on a workpiece W, and a welding trace (bead trace) left as a result of welding work is examined. Another possible example of the work performed on the workpiece W is to change roughness of the surface of the workpiece W. Still another possible example of the work is to form a groove on the workpiece W. Still another possible example of the work is to draw a picture on the workpiece W.”; [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”).
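Before turning to the method claims, it may help to see the claim-1 architecture the examiner is mapping: four functional units that turn shape data, a weld trajectory, and a sensor's scanning range into a teaching program output to the robot. The Python sketch below is a hypothetical illustration for orientation only; every class, function, and field name is invented here and none of it comes from the application or the cited references.

# Hypothetical sketch of the claim-1 data flow (names invented for illustration).
from dataclasses import dataclass

@dataclass
class ScanInputs:
    workpiece_shape: str                                # 3D shape data of the welded workpiece
    weld_trajectory: list[tuple[float, float, float]]   # operation trajectory of the welding
    scanning_range: float                               # range the appearance-shape sensor covers

@dataclass
class InspectionRegion:
    center: tuple[float, float, float]  # where the 3D region is disposed on the shape data
    extent: float                       # region size, derived from the scanning range

def generation_unit(inputs: ScanInputs,
                    scanning_section: list[tuple[float, float, float]]) -> list[InspectionRegion]:
    # "Generation unit": one 3D inspection region per scanning-section point,
    # sized by the acquired scanning range.
    return [InspectionRegion(center=pt, extent=inputs.scanning_range) for pt in scanning_section]

def control_unit(regions: list[InspectionRegion], inputs: ScanInputs) -> str:
    # "Control unit": dispose the regions on the shape data (operator input
    # omitted here) and emit a teaching program that scans each region along
    # the weld trajectory, for output to the welding robot.
    lines = [f"SCAN region at {r.center} extent {r.extent}" for r in regions]
    lines += [f"TRACK weld point {p}" for p in inputs.weld_trajectory]
    return "\n".join(lines)

inputs = ScanInputs("workpiece.step", [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)], 25.0)
print(control_unit(generation_unit(inputs, [(5.0, 0.0, 1.0)]), inputs))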
Regarding claim 8, Kuwahara teaches an offline teaching method performed by an offline teaching device including one or more computers communicably connected to an input device capable of receiving an operator operation, the offline teaching method (see at least Fig. 1 and [0066]: “The generation part 11d generates the first teaching information 12b, which specifies motions of the examination robot 20C, based on the motion path TR2 (see FIG. 2C) determined by the determination part 11c. Then, the generation part 11d causes the storage 12 to store the generated first teaching information 12b. In response to, for example, an input operation performed by a worker, the outputting part 11e outputs the first teaching information 12b stored in the storage 12 to the robot controller 30 connected to the examination robot 20C.”) comprising: acquiring three-dimensional shape data of a workpiece produced by welding, an operation trajectory of the welding (see at least [0024]: “A teaching method according to this embodiment will be outlined by referring to FIG. 1. FIG. 1 outlines the teaching method according to this embodiment. In the following description, welding work is performed as an example of work performed on a workpiece W, and a welding trace (bead trace) left as a result of welding work is examined. Another possible example of the work performed on the workpiece W is to change roughness of the surface of the workpiece W. Still another possible example of the work is to form a groove on the workpiece W. Still another possible example of the work is to draw a picture on the workpiece W.”; [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0036]: “First, a case where second teaching information is used as result information will be described. As illustrated in FIG. 2A, the second teaching information obtained by the teaching apparatus 10 (see FIG. 1) is information corresponding to the motion path, TR1, of the work robot 20W. The motion path (work path) TR1 is a path taken by a representative point set on the work robot 20W (an example representative point is the leading end of the work robot 20W).”), and a scanning range of a sensor configured to scan an appearance shape of the workpiece (see at least [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100.
Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”); generating at least one three-dimensional region of an inspection area based on the acquired scanning range and a scanning section, which is an area to be scanned by the sensor, the at least one three-dimensional region to be scanned by the sensor (see at least [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; Fig. 2C and [0043]: “As seen from FIG. 2C, the examination sections and the sections ON1 to ON4 do not overlap. This is because the examination device 100 illustrated in FIG. 1 and the work tool 200 are different from each other in shape. Based on the difference in shape, the teaching apparatus 10 generates examination-use paths such that the work regions WR1 to WR4 are included in the examinable range 101 (see FIG. 1).”); disposing the at least one of the three-dimensional region to be scanned by the sensor on the three-dimensional shape data of the workpiece based on the operator operation acquired from the input device (see at least Fig. 1 and [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”); and creating and outputting, to a robot, a teaching program for scanning the at least one three-dimensional region based on the at least one three-dimensional region and the operation trajectory of the welding (see at least Fig. 1 and [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light.
Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; [0028]: “In the embodiment of FIG. 1, the examination robot 20C and the work robot 20W are the same type of robots 20, and the same type of robot controllers 30 are used to control motions of the examination robot 20C and the work robot 20W. This configuration, however, is not intended in a limiting sense. Another possible embodiment is that the examination robot 20C and the work robot 20W are different types of robots, and the robot controllers 30 are different types of robot controllers.”; [0029]: “As illustrated in FIG. 1, in the teaching method according to this embodiment, a teaching apparatus 10 obtains “second teaching information” (for example, work-use teaching information) from the robot controller 30 that controls motions of the work robot 20W. The second teaching information is teaching information that specifies motions of the work robot 20W.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”).

Kuwahara fails to explicitly teach outputting, to a welding robot that performs the welding, a program for scanning the three-dimensional region. However, Izawa teaches a system and method for using a working robot to do a welding operation that comprises outputting, to a welding robot that performs a welding, a program for scanning a three-dimensional region (see at least Fig. 1 and [0045]: “The laser sensor head 17 detects the shape of a region including the welding point on the work W, by means of laser. The laser sensor head 17 is attached to an outer surface of the welding torch 16. More specifically, the laser sensor head 17 is attached on a side of the welding torch 16 along a direction of welding (Direction X). The laser sensor head 17 applies a laser beam to the work W, receives a reflected beam, and thereby detects the shape of the work W as two-dimensional information. More specifically, the laser sensor head 17 scans in Direction Y, as the welding torch 16 moves in the Direction X in FIG. 2, at a predetermined timing (at a predetermined pitch in Direction X), whereby the laser sensor head 17 detects an outer shape of the work W in the ZY plane, at each of the scanning point. For example, at a scanning point X1, the sensor head outputs a square pulse signal representing the outer shape of the work W in the ZY plane. The work W shape information detected by the laser sensor head 17 is fed to the sensor controller 20.”; [0047]: “The personal computer 30 generates instruction-point data, based on the coordinate information about the welding points of the welding torch 16 sent from the sensor controller 20, and gives the generated data to the robot controller 40. The "instruction-point data" is coordinate information which defines a path for the welding torch 16 to move on, with a plurality of three-dimensional points and an attitude of the welding torch 16 at each of the points.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Izawa and provide a means to output, to a welding robot that performs a welding, a program for scanning a three-dimensional region, with a reasonable expectation of success, in order to utilize a same robot to scan and weld to simplify a setup and minimize cost of having multiple robots.

The combination of Kuwahara and Izawa fails to explicitly teach displaying the at least one three-dimensional region to be scanned by the sensor on the three-dimensional shape data of the workpiece. However, Shimodaira teaches an apparatus and method for controlling a robot that displays at least one three-dimensional region to be scanned by a sensor on the three-dimensional shape data of a workpiece (see at least Figs. 1-2 and [0229]: “The three-dimensional shape data may be generated on the sensor unit side. In this case, an image processing IC or the like realizing a function of generating three-dimensional shape data is provided on the sensor unit side. Alternatively, there may be a configuration in which three-dimensional shape data is not generated by the robot setting apparatus side, and the robot setting apparatus performs image processing on an image captured by the sensor unit side so as to generate three-dimensional shape data such as a three-dimensional image.”; [0233]: “The display unit 3 is a member for displaying a three-dimensional shape of a workpiece acquired in the robot setting apparatus 100 or checking various settings or an operation state, and may employ a liquid crystal monitor, an organic EL display, or a CRT.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara and Izawa to incorporate the teachings of Shimodaira and provide a means to display at least one three-dimensional region to be scanned by a sensor on the three-dimensional shape data of a workpiece, with a reasonable expectation of success, in order to provide a visualization to the user of the workpiece and region.

Regarding claim 9, Kuwahara teaches an offline teaching method performed using an offline teaching device including one or more computers communicably connected to an input device by an operator operating the input device, the offline teaching method (see at least Fig. 1 and [0066]: “The generation part 11d generates the first teaching information 12b, which specifies motions of the examination robot 20C, based on the motion path TR2 (see FIG. 2C) determined by the determination part 11c. Then, the generation part 11d causes the storage 12 to store the generated first teaching information 12b. In response to, for example, an input operation performed by a worker, the outputting part 11e outputs the first teaching information 12b stored in the storage 12 to the robot controller 30 connected to the examination robot 20C.”) comprising: inputting three-dimensional shape data of a workpiece produced by welding to the computer (see at least [0024]: “A teaching method according to this embodiment will be outlined by referring to FIG. 1. FIG. 1 outlines the teaching method according to this embodiment. In the following description, welding work is performed as an example of work performed on a workpiece W, and a welding trace (bead trace) left as a result of welding work is examined. Another possible example of the work performed on the workpiece W is to change roughness of the surface of the workpiece W. Still another possible example of the work is to form a groove on the workpiece W. Still another possible example of the work is to draw a picture on the workpiece W.”; [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0036]: “First, a case where second teaching information is used as result information will be described. As illustrated in FIG. 2A, the second teaching information obtained by the teaching apparatus 10 (see FIG. 1) is information corresponding to the motion path, TR1, of the work robot 20W. The motion path (work path) TR1 is a path taken by a representative point set on the work robot 20W (an example representative point is the leading end of the work robot 20W).”); inputting, to the computer, a scanning section, which is an area to be scanned by a sensor, in which an appearance shape of the workpiece is to be scanned (see at least [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100.
The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”); and disposing a three-dimensional region of an inspection area based on a scanning portion corresponding to the scanning section to be scanned by the sensor on the three-dimensional shape data of the workpiece (see at least Fig. 1 and [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”); and creating a teaching program for causing a robot to scan the three-dimensional region of the inspection area based on the scanning portion corresponding to the scanning section in the three-dimensional shape data (see at least Fig. 1 and [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; [0028]: “In the embodiment of FIG. 1, the examination robot 20C and the work robot 20W are the same type of robots 20, and the same type of robot controllers 30 are used to control motions of the examination robot 20C and the work robot 20W. This configuration, however, is not intended in a limiting sense. Another possible embodiment is that the examination robot 20C and the work robot 20W are different types of robots, and the robot controllers 30 are different types of robot controllers.”; [0029]: “As illustrated in FIG. 1, in the teaching method according to this embodiment, a teaching apparatus 10 obtains “second teaching information” (for example, work-use teaching information) from the robot controller 30 that controls motions of the work robot 20W. The second teaching information is teaching information that specifies motions of the work robot 20W.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”).

Kuwahara fails to explicitly teach causing a welding robot that performs the welding to scan a three-dimensional region. However, Izawa teaches a system and method for using a working robot to do a welding operation that causes a welding robot that performs a welding to scan a three-dimensional region (see at least Fig. 1 and [0045]: “The laser sensor head 17 detects the shape of a region including the welding point on the work W, by means of laser. The laser sensor head 17 is attached to an outer surface of the welding torch 16. More specifically, the laser sensor head 17 is attached on a side of the welding torch 16 along a direction of welding (Direction X). The laser sensor head 17 applies a laser beam to the work W, receives a reflected beam, and thereby detects the shape of the work W as two-dimensional information. More specifically, the laser sensor head 17 scans in Direction Y, as the welding torch 16 moves in the Direction X in FIG. 2, at a predetermined timing (at a predetermined pitch in Direction X), whereby the laser sensor head 17 detects an outer shape of the work W in the ZY plane, at each of the scanning point. For example, at a scanning point X1, the sensor head outputs a square pulse signal representing the outer shape of the work W in the ZY plane. The work W shape information detected by the laser sensor head 17 is fed to the sensor controller 20.”; [0047]: “The personal computer 30 generates instruction-point data, based on the coordinate information about the welding points of the welding torch 16 sent from the sensor controller 20, and gives the generated data to the robot controller 40. The "instruction-point data" is coordinate information which defines a path for the welding torch 16 to move on, with a plurality of three-dimensional points and an attitude of the welding torch 16 at each of the points.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Izawa and provide a means that causes a welding robot that performs a welding to scan a three-dimensional region, with a reasonable expectation of success, in order to utilize a same robot to scan and weld to simplify a setup and minimize cost of having multiple robots.
The combination of Kuwahara and Izawa fails to explicitly teach displaying the at least one three-dimensional region to be scanned by the sensor on the three-dimensional shape data of the workpiece. However, Shimodaira teaches an apparatus and method for controlling a robot that displays at least one three-dimensional region to be scanned by a sensor on the three-dimensional shape data of a workpiece (see at least Figs. 1-2 and [0229]: “The three-dimensional shape data may be generated on the sensor unit side. In this case, an image processing IC or the like realizing a function of generating three-dimensional shape data is provided on the sensor unit side. Alternatively, there may be a configuration in which three-dimensional shape data is not generated by the robot setting apparatus side, and the robot setting apparatus performs image processing on an image captured by the sensor unit side so as to generate three-dimensional shape data such as a three-dimensional image.”; [0233]: “The display unit 3 is a member for displaying a three-dimensional shape of a workpiece acquired in the robot setting apparatus 100 or checking various settings or an operation state, and may employ a liquid crystal monitor, an organic EL display, or a CRT.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara and Izawa to incorporate the teachings of Shimodaira and provide a means to display at least one three-dimensional region to be scanned by a sensor on the three-dimensional shape data of a workpiece, with a reasonable expectation of success, in order to provide a visualization to the user of the workpiece and region.

Regarding claim 10, modified Kuwahara teaches the limitations of claim 1. The combination of Kuwahara and Izawa fails to explicitly teach wherein the at least one three-dimensional region that is displayed is movable on a screen. However, Shimodaira teaches an apparatus and method for controlling a robot wherein at least one three-dimensional region that is displayed is movable on a screen (see at least Figs. 1-2 and [0229]: “The three-dimensional shape data may be generated on the sensor unit side. In this case, an image processing IC or the like realizing a function of generating three-dimensional shape data is provided on the sensor unit side. Alternatively, there may be a configuration in which three-dimensional shape data is not generated by the robot setting apparatus side, and the robot setting apparatus performs image processing on an image captured by the sensor unit side so as to generate three-dimensional shape data such as a three-dimensional image.”; [0233]: “The display unit 3 is a member for displaying a three-dimensional shape of a workpiece acquired in the robot setting apparatus 100 or checking various settings or an operation state, and may employ a liquid crystal monitor, an organic EL display, or a CRT.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Shimodaira and provide at least one displayed three-dimensional region that is movable on a screen, with a reasonable expectation of success, in order to provide a visualization to the user of movement of a workpiece and region.

Claim Rejections - 35 USC § 103

8. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara et al. (US 20190143514, hereinafter Kuwahara) and Izawa et al. (US 20070145027, hereinafter Izawa) and Shimodaira et al. (US 20180250823, hereinafter Shimodaira) in further view of Clever et al. (US 20210162600, hereinafter Clever).

Regarding claim 5, modified Kuwahara teaches the limitations of claim 1. Kuwahara further teaches wherein the generation unit disposes at least one three-dimensional region based on the operator operation such that there is a plurality of three-dimensional regions (see at least [0034]: “Specifically, in the case where second teaching information is used as result information, the work region WR is obtained indirectly from the motion path of the work robot 20W. In the case where shape information indicating a three-dimensional shape of the workpiece W including the work regions WR on the workpiece W is directly obtainable, the shape information may be used as result information.”; [0040]: “A case where shape information of the workpiece W is used as result information will be described by referring to FIG. 2B. As illustrated in FIG. 2B, the shape information obtained by the teaching apparatus 10 (see FIG. 1) is information including a three-dimensional shape of the workpiece W and the shape and position of the work region WR on the workpiece W. A specific example of the shape information is three-dimensional CAD (Computer Aided Design) data of the workpiece W including information indicating the shape and position of the work region WR. In FIG. 2B, the work regions WR have three-dimensional shapes, similarly to the work regions WR illustrated in FIG. 2A. The work regions WR, however, may be flat work regions WR having, for example, circular shapes or rectangular shapes. This will be described later by referring to FIG. 8.”), and the control unit creates the teaching program for scanning the at least one three-dimensional region based on at least one three-dimensional region among all the three-dimensional regions including the at least one three-dimensional region and the operation trajectory of the welding (see at least [0026]: “The examination device 100 of the examination robot 20C obtains a three-dimensional shape of the work region WR by, for example, radiating light to the work region WR and moving the light while picking up an image of the light. Then, the examination device 100 determines whether the three-dimensional shape indicates a normal work result. For reference purposes, an examinable range 101 is indicated by broken lines in FIG. 1. The examinable range 101 is a range in which the examination device 100 is able to examine the work region WR.”; [0027]: “The examinable range 101 corresponds to the range of vision conceivable by the examination device 100. Alternatively, the examinable range 101 may include the range of vision conceivable by the examination device 100 and the range of depth conceivable by the examination device 100. The examination robot 20C makes a motion such that the work region WR on the workpiece W is included in the examinable range 101.”; [0029]: “As illustrated in FIG. 1, in the teaching method according to this embodiment, a teaching apparatus 10 obtains “second teaching information” (for example, work-use teaching information) from the robot controller 30 that controls motions of the work robot 20W. The second teaching information is teaching information that specifies motions of the work robot 20W.”).

Kuwahara fails to explicitly teach displaying the at least one three-dimensional region.
However, Shimodaira teaches an apparatus and method for controlling a robot that displays at least one three-dimensional region (see at least Figs. 1-2 and [0229] and [0233], quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Shimodaira and provide a means to display at least one three-dimensional region, with a reasonable expectation of success, in order to provide a visualization to the user of the workpiece and region.

The combination of Kuwahara and Shimodaira fails to explicitly teach duplicating the region based on the operation. However, Clever teaches a method and system for programming an industrial robot that duplicates a region based on an operation (see at least [0049]: “After marking the image portion representing the workpiece 8 displayed on the display 18 with the rectangular frame 17, the image area inside the rectangular frame 17 is copied and joined to the rectangular frame 17 so that the copied image area is moved together with the frame 17 when moving the frame in the captured image in further programming steps. In order to allow a precise positioning of the marked object 17 the copied image area is preferably displayed on the captured image 12 as a transparent image area, so that the operator can recognize other objects which are located in the workspace and displayed in the digital image 12 on the screen 18, in order to exactly move the frame to a desired position.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara and Shimodaira to incorporate the teachings of Clever and provide a means to duplicate a region based on an operation, with a reasonable expectation of success, in order to allow copying and manipulation of regions for higher flexibility.

Claim Rejections - 35 USC § 103

9. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara et al. (US 20190143514, hereinafter Kuwahara) and Izawa et al. (US 20070145027, hereinafter Izawa) and Shimodaira et al. (US 20180250823, hereinafter Shimodaira) in further view of Einecke et al. (US 20200233413, hereinafter Einecke).

Regarding claim 6, modified Kuwahara teaches the limitations of claim 1. Kuwahara further teaches wherein the at least one three-dimensional region is a plurality of three-dimensional regions (see at least [0043]: “As seen from FIG. 2C, the examination sections and the sections ON1 to ON4 do not overlap. This is because the examination device 100 illustrated in FIG. 1 and the work tool 200 are different from each other in shape. Based on the difference in shape, the teaching apparatus 10 generates examination-use paths such that the work regions WR1 to WR4 are included in the examinable range 101 (see FIG. 1).”), the generation unit disposes one of the plurality of three-dimensional regions based on the operator operation (see at least [0034] and [0040], quoted above), and the control unit creates the teaching program for scanning the at least one three-dimensional region based on the operation trajectory of the welding and the at least one three-dimensional region among all the three-dimensional regions (see at least [0026], [0027], and [0029], quoted above). Kuwahara fails to explicitly teach deleting one of the plurality of three-dimensional regions based on the operator operation.
However, Einecke teaches a method and system for generating a representation based on which an autonomous device operates, and that deletes one of a plurality of three-dimensional regions based on an operation (see at least [0009]: “Examples for such autonomous devices are service robots, e.g. lawnmowers as mentioned above, or vacuum cleaners; industrial robots, e.g. transportation systems or welding robots; or autonomous vehicles, e.g. autonomous cars or drones.”; [0083]: “Once the map is segmented and thus the areas are defined and additional information in form of labels is associated with the respective areas, a representation of the work environment is generated in step S4. It is to be noted that this representation may be the representation that is transferred to the autonomous device 3 in step S7. Alternatively, this representation may be an intermediate representation which is visualized by the input/output device 12 in step S5, for using an augmented reality device so that a human may delete, adapt or add areas and/or information to the intermediate representation. In step S6, an input is received from the human that modifies the generated and visualized intermediate representation. The method then proceeds with steps S2, S3 and S4.”; [0086]: “It is to be noted that generating, deleting or amending areas may also refer to generating, deleting or amending subareas. Such subareas can be included in an area so that a plurality of subareas together form the entire or at least a portion of a larger area. A label associated with the larger area is valid also for the subareas, but the subareas may be additionally labelled with further additional information. For example, an entire work area of an autonomous lawnmower may comprise as separate subareas a front yard zone and a backyard zone. These subareas may be differently labelled in order to cause a behavior of the autonomous lawnmower to be different in the subareas. Such different labels may be used by the autonomous device 3 to define different timings of the operation of the autonomous device in the different subareas.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Einecke and provide a means to delete one of a plurality of three-dimensional regions based on an operation, with a reasonable expectation of success, in order to allow removal and manipulation of regions for higher flexibility.

Claim Rejections - 35 USC § 103

10. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kuwahara et al. (US 20190143514, hereinafter Kuwahara) and Izawa et al. (US 20070145027, hereinafter Izawa) and Shimodaira et al. (US 20180250823, hereinafter Shimodaira) in further view of Takeda (US 20200070281, hereinafter Takeda).

Regarding claim 7, modified Kuwahara teaches the limitations of claim 1. Kuwahara further teaches wherein the at least one three-dimensional region is a plurality of three-dimensional regions (see at least [0043], quoted above), the generation unit disposes a plurality of divided three-dimensional regions (see at least [0034] and [0040], quoted above), and the control unit creates the teaching program for scanning the at least one three-dimensional region based on the operation trajectory of the welding and the at least one three-dimensional region among all the three-dimensional regions (see at least [0026], [0027], and [0029], quoted above). Kuwahara fails to explicitly teach dividing the region based on the operator operation.

However, Takeda teaches an apparatus and method for laser machining that performs teaching for a laser machining system and divides a region based on an operation (see at least Fig. 4 and [0044]: “FIG. 4 is a flowchart showing the details of the welding point group determination process performed in step S2. Below, grouping is performed on a welding point group G0 as shown on the left side of FIG. 6 as an example. First, in step S21, the welding point group G0 is grouped into provisional welding point groups. A single group defines a plurality of welding points on which welding is performed while the robot 10 is operated by a single operation command. In the single group, the robot 10 is operated by the single operation command, and while the scanner 50 performs a scanning operation, each welding point belonging to the group is welded. In the single operation command, the robot 10 operates linearly at a constant speed. The welding point group G0 is provisionally divided into three welding point groups G1 to G3, as shown on the right side of FIG. 6 as an example.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kuwahara to incorporate the teachings of Takeda and provide a means to divide a region based on an operation, with a reasonable expectation of success, in order to allow division and manipulation of regions for higher flexibility.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIEN MINH LE, whose telephone number is (571) 272-3903. The examiner can normally be reached Monday to Friday, 8:30 am to 5:30 pm eastern time. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.M.L./
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656
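Claimed Region Operations at a Glance

The limitations in dispute across claims 5-7 and 10 all involve operator manipulation of the three-dimensional scan regions: duplicating a region (claim 5, via Clever), deleting one (claim 6, via Einecke), dividing one (claim 7, via Takeda), and moving a displayed region on screen (claim 10, via Shimodaira). The Python sketch below is purely illustrative of these operations; every name in it (ScanRegion, RegionEditor, and the method names) is our own shorthand and does not come from the application, the claims, or the cited references.

# Hypothetical sketch of the region operations at issue in claims 5-7 and 10.
# All names here are illustrative, not taken from the application or references.
from dataclasses import dataclass, field, replace
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def _shift(point: Vec3, offset: Vec3) -> Vec3:
    """Translate a point by an offset, component-wise."""
    return tuple(p + o for p, o in zip(point, offset))

@dataclass(frozen=True)
class ScanRegion:
    """An axis-aligned, box-shaped region to be scanned by the sensor."""
    origin: Vec3   # position on the workpiece's three-dimensional shape data
    size: Vec3     # extent of the region along each axis

@dataclass
class RegionEditor:
    """Holds the disposed regions and applies operator operations to them."""
    regions: List[ScanRegion] = field(default_factory=list)

    def dispose(self, region: ScanRegion) -> None:
        # Claim 1: dispose a region based on the operator operation.
        self.regions.append(region)

    def duplicate(self, index: int, offset: Vec3) -> None:
        # Claim 5 (cf. Clever): copy an existing region so a plurality results.
        src = self.regions[index]
        self.regions.append(replace(src, origin=_shift(src.origin, offset)))

    def delete(self, index: int) -> None:
        # Claim 6 (cf. Einecke): remove one of the plurality of regions.
        del self.regions[index]

    def divide(self, index: int, parts: int) -> None:
        # Claim 7 (cf. Takeda): split one region into several along the x axis.
        src = self.regions.pop(index)
        step = src.size[0] / parts
        for i in range(parts):
            origin = (src.origin[0] + i * step, src.origin[1], src.origin[2])
            self.regions.append(ScanRegion(origin, (step, src.size[1], src.size[2])))

    def move(self, index: int, offset: Vec3) -> None:
        # Claim 10 (cf. Shimodaira): the displayed region is movable on screen.
        src = self.regions[index]
        self.regions[index] = replace(src, origin=_shift(src.origin, offset))

# Example: dispose one region, duplicate it, then divide the copy in two.
editor = RegionEditor()
editor.dispose(ScanRegion(origin=(0.0, 0.0, 0.0), size=(40.0, 10.0, 5.0)))
editor.duplicate(0, offset=(0.0, 20.0, 0.0))
editor.divide(1, parts=2)
print(len(editor.regions))  # 3 regions remain

A working teaching device would additionally map each region to sensor poses and generate the scan program from the welding trajectory; the sketch covers only the region bookkeeping that the rejections discuss.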

Prosecution Timeline

Dec 22, 2023
Application Filed
Aug 18, 2025
Non-Final Rejection — §103, §DP
Nov 19, 2025
Response Filed
Feb 12, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566070
DETERMINATION APPARATUS AND DETERMINATION METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12528325
A CONTROL SYSTEM FOR A VEHICLE
2y 5m to grant Granted Jan 20, 2026
Patent 12508704
Marker Detection Apparatus and Robot Teaching System
2y 5m to grant Granted Dec 30, 2025
Patent 12509122
VEHICLE SELECTION DEVICE AND VEHICLE SELECTION METHOD
2y 5m to grant Granted Dec 30, 2025
Patent 12466074
IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, ROBOT-MOUNTED TRANSFER DEVICE, AND SYSTEM
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
92%
With Interview (+23.8%)
2y 12m
Median Time to Grant
Moderate
PTA Risk
Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
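
The headline numbers above follow from simple arithmetic on the examiner statistics shown on this page, assuming (as the displayed figures suggest) that the interview-adjusted probability is just the career allow rate plus the interview lift:

# Reproducing this page's projections from the examiner career data shown above.
# Assumption: the tool adds the interview lift directly to the base allow rate.
granted, resolved = 55, 81                 # 55 granted / 81 resolved cases
interview_lift_pts = 23.8                  # interview lift, in percentage points

base_grant = 100 * granted / resolved      # career allow rate
with_interview = base_grant + interview_lift_pts

print(f"Career allow rate:             {base_grant:.1f}%")    # 67.9 -> shown as 68%
print(f"Grant probability w/interview: {with_interview:.1f}%")  # 91.7 -> shown as 92%

Running this prints 67.9% and 91.7%, which round to the 68% and 92% displayed above.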
