Prosecution Insights
Last updated: April 19, 2026
Application No. 18/884,301

COMPUTER-IMPLEMENTED METHOD FOR CONTROLLING A ROBOT, ROBOT CONTROL METHOD, SYSTEM, ARTICLE MANUFACTURING METHOD, AND RECORDING MEDIUM

Status: Non-Final OA (§101, §102, §103, §112)
Filed: Sep 13, 2024
Examiner: DANG, TRANG THANH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 75%

Examiner Intelligence

Career Allow Rate: 44% (16 granted / 36 resolved; -7.6% vs TC avg)
Interview Lift: strong, +30.7% allowance rate for resolved cases with an interview vs without
Typical Timeline: 3y 3m avg prosecution; 24 applications currently pending
Career History: 60 total applications across all art units
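
The headline figures are simple ratios of the examiner's career counts. Below is a minimal sketch of the apparent arithmetic in Python; the 16/36 counts come from the cards above, and treating the interview lift as additive percentage points is an assumption about how the tool works, not something the page states.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of this examiner's resolved applications that were granted."""
    return granted / resolved

career = allow_rate(16, 36)       # 0.444... -> the 44% "Career Allow Rate" card
with_interview = career + 0.307   # assumed: +30.7% lift applied as percentage points
print(f"{career:.1%}")            # 44.4%
print(f"{with_interview:.1%}")    # 75.1%, matching the 75% "With Interview" figure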

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§102: 21.0% (-19.0% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§112: 28.7% (-11.3% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 36 resolved cases.
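
One consistency check worth noting: each statute card reports both the examiner's rate and a delta versus the Tech Center average, so the implied TC baseline can be backed out. A short sketch (values copied from the cards; the single-baseline reading is an inference, not documented by the page):

# Back out the implied Tech Center baseline from each card:
# baseline = examiner_rate - delta. Values are percentages from above.
cards = {
    "§101": (7.9, -32.1),
    "§102": (21.0, -19.0),
    "§103": (39.8, -0.2),
    "§112": (28.7, -11.3),
}
for statute, (rate, delta) in cards.items():
    print(statute, round(rate - delta, 1))  # every statute implies 40.0

All four cards back out to the same 40.0% baseline, which suggests the tool compares every statute against a single TC-level estimate rather than statute-specific averages.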

Office Action

Rejection bases: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Pursuant to communications filed on 09/13/2024, this is a First Action Non-Final Rejection on the Merits. Claims 1-26 are currently pending in the instant application.

Priority

The applicant's claim to priority of JP2023-166526, filed on 09/27/2023, is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/13/2024 and 03/10/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 26 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to a signal per se, i.e., a recording medium storing a program.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "imaging unit configured to ..." in claims 8, 18, and 24; and "control unit configured to ..." in claim 22.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 and 21-26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "a reference positional relationship" in line 7. There is insufficient antecedent basis for this limitation in the claim, and it is unclear whether "a reference positional relationship" in line 7 refers to "a reference positional relationship" in line 4. This renders the claim indefinite. The examiner assumes "the reference positional relationship" for further examination. Appropriate correction and/or clarification is required.

Claim 3 recites the limitation "a positional relationship" in lines 1-2. There is insufficient antecedent basis for this limitation in the claim, and it is unclear whether "a positional relationship" in lines 1-2 of claim 3 refers to "a positional relationship" in line 2 of claim 1. This renders the claim indefinite. The examiner assumes "the positional relationship" for further examination. Appropriate correction and/or clarification is required.

Claim 3 also recites the limitation "the determining based on the image feature amount". It is unclear whether this refers to "determining whether the first object and the second object are in a reference positional relationship based on the current image" or to "the determining such that the first object and the second object have a target positional relationship". This renders the claim indefinite. The examiner assumes "the determining whether the first object and the second object are in a reference positional relationship based on the image feature amount". Appropriate correction and/or clarification is required.

Claim 22 recites the limitation "a control unit configured to control the robot". However, the claim also recites "the computer" to control the position and posture of the robot according to claim 1. The instant specification states, "The control device 400 serving as the control unit is a computer that controls the entire robot system 1000 and performs various types of work such as the assembling work" (paragraph [0038]). It is unclear whether "a control unit" refers to "the computer". This renders the claim indefinite. The examiner assumes "a control unit" is "the computer" for further examination. Appropriate correction and/or clarification is required.

Claim 25 recites the limitation "controlling the robot by the method according to claim 1 to assemble the first object to the second object." The recited limitation does not set forth any active steps to assemble the first object to the second object, because the method in claim 1 only determines the positional relationship between the first object and the second object based on the current image. This renders the claim indefinite because the scope of the claim is unclear. Appropriate correction and/or clarification is required.

Claims 21, 25, and 26 are also rejected under this section because they depend on a rejected independent claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-7, 9-10, 21-22, and 24-26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kenji et al. (JP2015085450A, hereinafter "Kenji").

Regarding claim 1, Kenji discloses a computer-implemented method for controlling a robot (see at least Figs. 3, 8, 10, 11), the method comprising:

acquiring a current image showing a positional relationship between a first object held by the robot and a second object (see at least Figs. 10, 11, 12A-D, par. [0003, 0104-0105, 0108-0110], steps S304/S309, acquiring captured image PIM101/PIM21 showing a positional relationship between workpiece WK1 held by a robot AM and workpiece WK2);

determining whether the first object and the second object are in a reference positional relationship based on the current image (see at least Figs. 10, 11, 12A-D, par. [0106-0107], step S306, determining whether the workpiece WK1 is in a reference positional relationship, i.e., target position and orientation GC2, based on the captured image PIM101); and

controlling, in a case where a computer determines that the first object and the second object are in a reference positional relationship based on the current image (see at least Figs. 3, 11, par. [0100-0103, 0106, 0112], step S307-Yes, the control unit 120 is configured to determine the workpiece WK1 is at the reference positional relationship based on comparing the captured image PIM101 with the reference image RIM in Fig. 12A, i.e., the attention feature point IP1 is adjacent to the goal feature point GP1, and the attention feature point IP2 is adjacent to the goal feature point GP2), a position and a posture of the robot according to a trajectory of the robot set (see at least Figs. 5, 11, 12A-D, par. [0044], "The control unit 120 then performs processing to detect the feature amount of the object to be assembled based on the captured image, and moves the object to be assembled based on the feature amount of the object to be assembled. It should be noted that moving the assembly target includes a process of outputting control information (control signals) for the robot 300"; par. [0110-0111], step S311, moving a position and a posture of the robot AM according to a target feature amount FA and goal feature amount FB, e.g., the workpiece WK1 is moved in direction YJ2 so that the target feature point IP1 approaches the goal feature point GP1 and the target feature point IP2 approaches the goal feature point GP2) before the determining such that the first object and the second object have a target positional relationship (see at least Figs. 11, 12A-D, par. [0112], step S312-Yes, determining whether the workpiece WK1 and the workpiece WK2 are in a target position and posture relationship).

[Image: Kenji, Fig. 11]
[Image: Kenji, Figs. 12A-D]

Regarding claim 2, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein both the first object and the second object are included in the current image (see at least Figs. 10, 12A-D, par. [0043], the captured image includes the workpiece WK1 and the workpiece WK2).

Regarding claim 3, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein the computer is configured to acquire an image feature amount according to a positional relationship between a portion of the first object and a portion of the second object from the current image, and to perform the determining based on the image feature amount (see at least Figs. 3, 10, par. [0012-0013], "In one aspect of the present invention, the control unit may perform feature amount detection processing for the object to be assembled and the object to be assembled, based on one or more captured images in which the object to be assembled and the object to be assembled are captured, and may move the object to be assembled, based on the feature amount of the object to be assembled and the feature amount of the object to be assembled, so that the relative position and posture relationship between the object to be assembled and the object to be assembled becomes a target relative position and posture relationship. This makes it possible to perform an assembly operation based on the feature amount of the assembly object and the feature amount of the object to be assembled, which are detected from the captured image").

Regarding claim 4, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein in the controlling of the position and the posture of the robot, the position and the posture of the robot are changed according to an operation of the robot instructed by a user after the acquiring the current image (see at least par. [0101], "This target position and orientation is set by the instructor (user) when generating the reference image." This discloses that after acquiring the images, the user will set the operation movement for the robot hand to move according to the reference image to a pre-assembly position where the workpiece WK1 is adjacent to the workpiece WK2 (par. [0091], "...This makes it possible to bring the device into a state immediately before assembly..."). To accommodate misalignment or a change of posture of the workpiece WK2, the robot is instructed to use the second visual servo to move the workpiece WK1 to the final assembling position from the pre-assembly position (par. [0114], "...In other words, even if the position and posture of the object to be assembled when the reference image is generated is misaligned (different) from the position and posture of the object to be assembled during the actual assembly work, the second visual servo will accommodate the misalignment of the object to be assembled, so there is no need to use a different reference image in the first visual servo, and the same reference image can be used each time....")).

Regarding claim 6, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship (see at least Figs. 3, 11, par. [0100-0103, 0106], step S307-Yes, the control unit 120 is configured to determine the workpiece WK1 is at the reference positional relationship based on comparing the captured image PIM101 with the reference image RIM in Fig. 12A), the position and the posture of the robot by position control such that the first object and the second object have the target positional relationship (see at least Figs. 5, 11, 12A-D, par. [0110-0111], step S311, moving the position and the posture of the robot AM according to a target feature amount FA and goal feature amount FB, e.g., the workpiece WK1 is moved so that the target feature point IP1 approaches the goal feature point GP1 and the target feature point IP2 approaches the goal feature point GP2).

Regarding claim 7, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein the first object and the second object are separated from each other in the reference positional relationship (see at least Fig. 12C, par. [0112], the workpiece WK1 is at the position and orientation GC2 such that the attention feature point IP1 is adjacent to the goal feature point GP1, and the attention feature point IP2 is adjacent to the goal feature point GP2), and the first object and the second object are in contact with each other in the target positional relationship (see at least Fig. 12D, par. [0129], the target feature point IP1 coincides with the goal feature point GP1 and the target feature point IP2 approaches the goal feature point GP2).

Regarding claim 9, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein the computer is configured to acquire a reference image showing that a first teaching object held by the robot and a second teaching object are in the reference positional relationship (see at least Fig. 12A, par. [0101], "For example, in FIG. 10, the position GC2 is the target position and orientation, and the reference image RIM in FIG. 12A shows the workpiece WK1 positioned at the target position and orientation GC2. This target position and orientation is set by the instructor (user) when generating the reference image"), and to determine whether the first object and the second object are in the reference positional relationship by using the current image and the reference image (see at least Figs. 11, 12A-D, par. [0106-0107], "Then, the control unit 120 determines whether the assembly object WK1 is in the target position and orientation GC2 (S306), and if it determines that the assembly object WK1 is in the target position and orientation GC2, it transitions to the second visual servo [...] In this way, in the first visual servoing, the robot is controlled while comparing the feature amounts of the assembly target WK1 in the reference image RIM and the first captured image PIM101").

Regarding claim 10, Kenji teaches all the limitations in claim 1 as discussed above. Kenji further teaches wherein the computer is configured to acquire a reference image showing that a first teaching object held by the robot and a second teaching object are in the reference positional relationship (see at least Fig. 12A, par. [0101], "For example, in FIG. 10, the position GC2 is the target position and orientation, and the reference image RIM in FIG. 12A shows the workpiece WK1 positioned at the target position and orientation GC2. This target position and orientation is set by the instructor (user) when generating the reference image"), and to determine whether the first object and the second object are in the reference positional relationship by using the current image and the reference image (see at least Figs. 11, 12A-D, par. [0106-0107], "Then, the control unit 120 determines whether the assembly object WK1 is in the target position and orientation GC2 (S306), and if it determines that the assembly object WK1 is in the target position and orientation GC2, it transitions to the second visual servo [...] In this way, in the first visual servoing, the robot is controlled while comparing the feature amounts of the assembly target WK1 in the reference image RIM and the first captured image PIM101").

Regarding claim 21, Kenji discloses a system configured to execute the method according to claim 1, the system comprising the computer (see at least Figs. 1, 3, 11, par. [0041-0044], a system configured to execute the method as described in Figs. 3 and 11, the system comprising a control unit 120 implemented by various processors (CPUs, etc.) and ASICs (gate arrays, etc.), or by programs).

Regarding claim 22, Kenji teaches all the limitations in claim 21 as discussed above. Kenji further teaches the robot (see at least Figs. 1, 3, par. [0041-0044], robot 300) and a control unit configured to control the robot (see at least Figs. 1, 3, par. [0041-0044], the control unit 120 configured to control the robot 300).

Regarding claim 24, Kenji teaches all the limitations in claim 21 as discussed above. Kenji further teaches an imaging unit configured to capture the current image (see at least Figs. 1, 3, par. [0041-0044], captured image acquisition unit 110 acquires a captured image).

Regarding claim 25, Kenji discloses an article manufacturing method comprising: controlling the robot by the method according to claim 1 to assemble the first object to the second object (see at least Figs. 1, 3, 11, 12A-D, par. [0094-0115], and the claim 1 rejection as discussed above).

Regarding claim 26, Kenji discloses a recording medium storing a program for causing the computer to execute the method according to claim 1 (see at least Fig. 1, par. [0165], and the claim 1 rejection as discussed above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kenji et al. (JP2015085450A, hereinafter "Kenji") as applied to claims 1 and 21 above, and further in view of Takashi (JP2013180380A).

Regarding claim 5, Kenji teaches all the limitations in claim 1 as discussed above. Kenji fails to specifically teach wherein the computer is configured to control the position and the posture of the robot by force control. Takashi teaches, at least at par. [0007-0009, 0076-0080], a method and a system that provide visual servoing control and compliant motion control for controlling the manipulator of the robot based on the captured image of the object held by the robot and load information indicating the force applied to the object. In view of Takashi's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include, with Kenji's method, wherein the computer is configured to control the position and the posture of the robot by force control, with a reasonable expectation of success. This modification would make it possible to move the object such that the object does not receive an excessive force even if the object contacts another object while the object is moved based on the captured image of the object.

Regarding claim 23, Kenji teaches all the limitations in claim 21 as discussed above. Kenji further teaches wherein the robot is an articulated robot (see at least Figs. 1, 3, par. [0042], the robot 300 also has an end effector (hand) 310 and an arm 320), and the articulated robot includes one or more joints (see at least par. [0156], "The arm is a part of the robot 300 and refers to a movable part including one or more joints"). Kenji fails to specifically teach a force sensing sensor provided in at least one joint of the articulated robot. Takashi teaches, at Fig. 2, par. [0021, 0023], a force sensor 20d provided in at least one joint of the articulated robot. In view of Takashi's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include, with Kenji's system, a force sensing sensor provided in at least one joint of the articulated robot, with a reasonable expectation of success. This modification would make it possible to move the object such that the object does not receive an excessive force even if the object contacts another object while the object is moved based on the captured image of the object.

Claims 8, 11-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kenji et al. (JP2015085450A, hereinafter "Kenji") as applied to claims 1 and 2 above, and further in view of Ito (US 20130054025 A1).

Regarding claim 8, Kenji teaches all the limitations in claims 1 and 2 as discussed above. Kenji further teaches an imaging unit configured to capture the current image (Kenji, see at least Figs. 1, 3, par. [0042], an imaging unit 200/CM configured to capture current images). Kenji fails to specifically teach wherein the imaging unit is disposed so as to move according to a change in the position and the posture of the robot. Ito teaches, see at least Fig. 11A, par. [0102-0103], an on-hand camera 117 disposed so as to move according to a change in the position and the posture of the robot arm 101. In view of Ito's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include, with Kenji's method, wherein the imaging unit is disposed so as to move according to a change in the position and the posture of the robot, with a reasonable expectation of success. This modification would make it possible to obtain image data of an installation operation of the robot arm.

Regarding claim 11, Kenji discloses a robot control method comprising: setting a target position and orientation of an object to be assembled when generating and storing a reference image (see at least par. [0018, 0101]); acquiring a reference image showing that the first teaching object and the second teaching object are in the reference positional relationship (see at least Figs. 11, 12A, par. [0100, 0101], step S302, obtaining a reference image (goal image) RIM as shown in FIG. 12(A) in which the workpiece WK1 is positioned at the target position and orientation GC2; this target position and orientation is set by the instructor (user) when generating the reference image); and controlling, by a computer (see at least Fig. 3, control unit 120), in a case where the computer acquires a current image showing a positional relationship between a first object held by a robot and a second object and determines that the first object and the second object are in the reference positional relationship based on the reference image and the current image (see at least Figs. 3, 11, par. [0100-0103, 0106, 0112], step S307-Yes, the control unit 120 is configured to determine the workpiece WK1 is at the reference positional relationship based on comparing the captured image PIM101 with the reference image RIM in Fig. 12A, i.e., the attention feature point IP1 is adjacent to the goal feature point GP1, and the attention feature point IP2 is adjacent to the goal feature point GP2), a position and a posture of the robot holding the first object such that the first object and the second object have the target positional relationship (see at least Figs. 5, 11, 12A-D, par. [0044], "The control unit 120 then performs processing to detect the feature amount of the object to be assembled based on the captured image, and moves the object to be assembled based on the feature amount of the object to be assembled. It should be noted that moving the assembly target includes a process of outputting control information (control signals) for the robot 300"; par. [0110-0111], step S311, moving a position and a posture of the robot AM according to a target feature amount FA and goal feature amount FB, e.g., the workpiece WK1 is moved in direction YJ2 so that the target feature point IP1 approaches the goal feature point GP1 and the target feature point IP2 approaches the goal feature point GP2).

Kenji fails to specifically teach setting a position and a posture of a mechanical device such that a first teaching object held by the mechanical device and a second teaching object have a target positional relationship, and changing the position and the posture of the mechanical device holding the first teaching object such that the first teaching object and the second teaching object that are in the target positional relationship have a reference positional relationship. Ito teaches, see at least Figs. 11A-B, 13A-C, par. [0103, 0104], setting a position and a posture such that a first teaching object 104 held by a robot arm 101 and a second teaching object 116 are in an installation completion state, and capturing and storing image data of the installation completion state by an on-hand camera 117 to use as data indicating a target state at the time of performing the installation operation; and, at least at Figs. 12A-B, 13A-C, par. [0106-0111], changing the position and posture of the robot arm 101 such that the first teaching object 104 held by the robot arm 101 and the second teaching object 116 move from the installation completion state to a reference positional relationship as illustrated in Fig. 13B, capturing and storing image data as teaching data.

In view of Ito's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include, with Kenji's method, setting a position and a posture of a mechanical device such that a first teaching object held by the mechanical device and a second teaching object have a target positional relationship, and changing the position and the posture of the mechanical device holding the first teaching object such that the first teaching object and the second teaching object that are in the target positional relationship have a reference positional relationship, with a reasonable expectation of success. This modification would make it possible to generate teaching data indicating a midway path in each state during the installation operation.

Regarding claim 12, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein in the controlling of the position and the posture of the robot, the position and the posture of the robot are changed according to an operation of the robot set before the determination (Kenji, see at least Figs. 10, 11, 12A, par. [0100], "First, in preparation for the first visual servo, the robot's hand HD grasps the assembly target WK1 and moves it to the target position and orientation (S301), and the imaging unit 200 (camera CM in FIG. 10) captures an image of the assembly target WK1 at the target position and orientation to obtain a reference image (goal image) RIM as shown in FIG. 12(A) (S302)") or an operation of the robot instructed by a user after acquiring the current image (Kenji, see at least par. [0101], "This target position and orientation is set by the instructor (user) when generating the reference image." This discloses that after acquiring the images, the user will set the operation movement for the robot hand to move according to the reference image to a pre-assembly position where the workpiece WK1 is adjacent to the workpiece WK2 (par. [0091], "...This makes it possible to bring the device into a state immediately before assembly..."). To accommodate misalignment or a change of posture of the workpiece WK2, the robot is instructed to use the second visual servo to move the workpiece WK1 to the final assembling position from the pre-assembly position (par. [0114], "...In other words, even if the position and posture of the object to be assembled when the reference image is generated is misaligned (different) from the position and posture of the object to be assembled during the actual assembly work, the second visual servo will accommodate the misalignment of the object to be assembled, so there is no need to use a different reference image in the first visual servo, and the same reference image can be used each time...."); Ito, see at least Fig. 12A, par. [0114], "Note that the processes in steps S1210 and S1211 may be performed by user's manual operation using a robot operation accepting unit, which is not shown in the figure").

Regarding claim 13, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein in the controlling of the position and the posture of the robot, the position and the posture of the robot are changed based on transition of the position and the posture of the mechanical device in the changing (Kenji, see at least Figs. 10, 12A-D, par. [0042-0043], "The robot 300 also has an end effector (hand) 310 and an arm 320 [...] The control unit 120 then performs processing to detect the feature amount of the object to be assembled based on the captured image, and moves the object to be assembled based on the feature amount of the object to be assembled. It should be noted that moving the assembly target includes a process of outputting control information (control signals) for the robot 300").

Regarding claim 14, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein the second teaching object is included in the reference image (Kenji, see at least Fig. 12A, the reference image RIM includes the workpiece WK2), and the second object is included in the current image (Kenji, see at least Fig. 12B, the captured image PIM101 includes the workpiece WK2).

Regarding claim 16, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship (Kenji, see at least Figs. 3, 11, par. [0100-0103, 0106], step S307-Yes, the control unit 120 is configured to determine the workpiece WK1 is at the reference positional relationship based on comparing the captured image PIM101 with the reference image RIM in Fig. 12A), the position and the posture of the robot by position control such that the first object and the second object have the target positional relationship (Kenji, see at least Figs. 5, 11, 12A-D, par. [0108-0113], steps S308-S312, moving the position and the posture of the robot AM according to a target feature amount FA and goal feature amount FB, e.g., the workpiece WK1 is moved so that the target feature point IP1 approaches the goal feature point GP1 and the target feature point IP2 approaches the goal feature point GP2).

Regarding claim 17, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein the first teaching object and the second teaching object are separated from each other in the reference positional relationship (Kenji, see at least Fig. 12A, the workpiece WK1 and the workpiece WK2 are separated from each other when the workpiece WK1 is positioned at the target position and orientation GC2), and the first teaching object and the second teaching object are in contact with each other in the target positional relationship (Kenji, see at least Fig. 2A, par. [0039], "The reference image RIM in FIG. 2A shows an assembly target object WK1R (corresponding to WK1R in FIG. 1) in an assembled state (or immediately before being assembled) with an assembly receiving object WK2").

Regarding claim 18, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein an imaging unit configured to capture the current image is disposed so as to move according to a change in the position and the posture of the robot (Ito, see at least Fig. 11A, par. [0102-0103], an on-hand camera 117 is disposed so as to move according to a change in the position and the posture of the robot arm 101).

Regarding claim 19, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein the position and the posture of the robot are controlled by visual servoing such that the first object and the second object have the reference positional relationship (Kenji, see at least Figs. 11, 12A-D, par. [0094-0114], "For example, in Figure 10, the movement of the assembly object WK1 from position GC1 to position GC2 (movement indicated by arrow YJ1) is performed by a first visual servo using a reference image, and the movement of the assembly object WK1 from position GC2 to position GC3 (movement indicated by arrow YJ2) is performed by a second visual servo using the feature values of the assembly receiving object WK2. It is assumed that positions GC1 to GC3 are the positions of the center of gravity of the assembly target WK1").

Regarding claim 20, the combination of Kenji and Ito teaches all the limitations of claim 11 as discussed above. The combination further teaches wherein the mechanical device is the robot (Kenji, see at least Figs. 1, 3, robot 300).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kenji et al. (JP2015085450A, hereinafter "Kenji"), in view of Ito (US 20130054025 A1) as applied to claim 11 above, and further in view of Takashi (JP2013180380A).

Regarding claim 15, the combination of Kenji and Ito teaches all the limitations in claim 11 as discussed above. The combination further teaches force sensors (not shown) provided to the robot arm 101 and the hand mechanism 102 that output abnormal values (Ito, see at least Fig. 1, par. [0040, 0041]). The combination of Kenji and Ito fails to specifically teach wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship, the position and the posture of the robot by force control such that the first object and the second object have the target positional relationship. Takashi teaches, at least at Fig. 12, par. [0007-0009, 0076-0080], a method to provide compliant motion control for controlling the manipulator of the robot based on the captured image of the object held by the robot and load information indicating the force applied to the object. In view of Takashi's teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include, with the combination of Kenji and Ito, wherein the computer is configured to control, in a case where it is determined that the first object and the second object are in the reference positional relationship, the position and the posture of the robot by force control such that the first object and the second object have the target positional relationship, with a reasonable expectation of success. This modification would make it possible to move the object such that the object does not receive an excessive force even if the object contacts another object while the object is moved based on the captured image of the object.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Motoyoshi (US 20150363935 A1) discloses a system and method for visual servo control that obtains an image in real time to control a robot based on the information of the image and a reference image in which the objects are in the target state; the reference image is also continuously obtained during the visual servo control. Shiraishi et al. (US 20180281197 A1) discloses a system and method for obtaining teaching information based on an image and the movement of the manipulation target object to the teaching position.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRANG DANG, whose telephone number is (703) 756-1049. The examiner can normally be reached Monday-Friday, 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TRANG DANG/
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656
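
For readers skimming the §102 mapping, the technique at issue is two-stage visual servoing: drive the held workpiece to a reference (pre-assembly) pose by comparing image feature points against a reference image, then drive it on to the target (assembled) pose. The sketch below is an illustrative reconstruction of that generic pattern only, not code from the application or from Kenji; all function names are hypothetical stand-ins for a real camera, vision, and robot stack.

# Illustrative two-stage visual-servo loop (generic pattern only; every
# name here is a hypothetical stand-in, not from Kenji or the application).
import numpy as np

def servo_to(goal_features: np.ndarray, capture_image, detect_features,
             move_robot, tol: float = 1.0, gain: float = 0.5) -> None:
    """Move until detected feature points match the goal feature points."""
    while True:
        current = detect_features(capture_image())   # acquire the current image
        error = goal_features - current              # feature-point error (N x 2)
        if np.linalg.norm(error) < tol:              # the "determining" step
            return
        move_robot(gain * error.mean(axis=0))        # small corrective move

def assemble(reference_features, target_features, capture_image,
             detect_features, move_robot) -> None:
    # Stage 1: reach the reference positional relationship (pre-assembly pose).
    servo_to(reference_features, capture_image, detect_features, move_robot)
    # Stage 2: continue to the target positional relationship (assembled pose).
    servo_to(target_features, capture_image, detect_features, move_robot)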

Prosecution Timeline

Sep 13, 2024: Application Filed
Jan 14, 2026: Non-Final Rejection under §101, §102, §103, and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576884: RIGHT-OF-WAY-BASED SEMANTIC COVERAGE AND AUTOMATIC LABELING FOR TRAJECTORY GENERATION IN AUTONOMOUS SYSTEMS
2y 5m to grant; granted Mar 17, 2026
Patent 12559074: AIRCRAFT SYSTEM
2y 5m to grant; granted Feb 24, 2026
Patent 12493302: LONGITUDINAL TRIM CONTROL MOVEMENT DURING TAKEOFF ROTATION
2y 5m to grant; granted Dec 09, 2025
Patent 12461529: ROBOT PATH PLANNING APPARATUS AND METHOD THEREOF
2y 5m to grant; granted Nov 04, 2025
Patent 12429878: Systems and Methods for Dynamic Object Removal from Three-Dimensional Data
2y 5m to grant; granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 75% (+30.7%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 36 resolved cases by this examiner. Grant probability is derived from the career allow rate.
