Prosecution Insights
Last updated: April 19, 2026
Application No. 18/836,469

ROBOT SYSTEM AND CALIBRATION METHOD

Non-Final OA (§103, §112)
Filed
Aug 07, 2024
Examiner
KENIRY, HEATHER J
Art Unit
3657
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Fanuc Corporation
OA Round
1 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (80 granted / 102 resolved; +26.4% vs TC avg, above average)
Interview Lift: +22.1% (resolved cases with interview)
Avg Prosecution: 2y 7m (32 currently pending)
Total Applications: 134 (across all art units)

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 102 resolved cases
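The headline figures above reduce to simple ratio arithmetic. A minimal sketch, assuming the dashboard's plain definitions (allowance rate = granted / resolved, and "+26.4% vs TC avg" read as percentage points); the Tech Center average is derived, not stated:

```python
# Career-level figures recomputed from the stated counts (80 granted / 102 resolved).
granted = 80
resolved = 102

allow_rate = granted / resolved                 # career allowance rate
print(f"Career Allow Rate: {allow_rate:.1%}")   # ~78.4%, displayed as 78%

# Assuming "+26.4% vs TC avg" means percentage points above the Tech Center
# average, the implied TC average is:
tc_avg = allow_rate - 0.264
print(f"Implied TC average: {tc_avg:.1%}")      # ~52.0%
```

The same subtraction against each statute-specific rate (e.g. §103 at 50.8%, +10.8 points) gives the black-line Tech Center estimates shown in the chart.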

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is the first Office action on the merits. Claims 1-9 are currently pending and addressed below. Preliminary amendments filed and received on 08/07/2024 have been accepted and approved.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/07/2024 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Regarding claim 1, “control device” will be interpreted under 112(f) because of the following three-prong analysis:

Prong 1: The claim uses the nonce term “device”.
Prong 2: The claim uses functional language to modify the nonce term.
Prong 3: Sufficient structure for performing the function is not recited within the claim.

The above claim limitation invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, a corresponding 112(b) rejection is presented below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim limitation “control device” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
There is no clear link to any structure such as a processor or computer which is capable of performing the claimed functionality of a “control device”. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Claims 2-8 depend from claim 1 and are therefore also rejected under 35 U.S.C. 112(b).

Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim 1 is rejected under 35 U.S.C.
112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear how the claimed “display” may have a “center of gravity”. According to the specification, this is either a pattern on the surface of the “positioning target object” or an object. In the event that this is a pattern, it is unclear how a center of gravity is determined for a pattern. Is this meant to be the center of the pattern? The center of gravity of the surface/object to which the pattern is attached? Clarification on the record is earnestly solicited. Claims 2-8 inherit the 112(b) rejection as they are dependent on claim 1 and are therefore also rejected under 35 U.S.C. 112(b).

The term “close” in claim 1 is a relative term which renders the claim indefinite. The term “close” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear what the metes and bounds of the claim are in light of the term “close”. This may be within 1 cm, 1 mm, or any other value which may be determined to be “close” in a given moment. Clarification on the record is earnestly solicited. Claims 2-8 inherit the 112(b) rejection as they are dependent on claim 1 and are therefore also rejected under 35 U.S.C. 112(b).

Claim 5 recites the limitation "the calculated center of gravity" in lines 4-5. There is insufficient antecedent basis for this limitation in the claim.

The term “close” in claim 9 is a relative term which renders the claim indefinite. The term “close” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear what the metes and bounds of the claim are in light of the term “close”. This may be within 1 cm, 1 mm, or any other value which may be determined to be “close” in a given moment. Clarification on the record is earnestly solicited. Claims 2-8 inherit the 112(b) rejection as they are dependent on claim 1 and are therefore also rejected under 35 U.S.C. 112(b).

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. It is unclear, based on the currently provided claim language, what may be considered as “the display is not included in the image”. Is this meant to apply when there is no portion of the display in the image? Less than a certain threshold in the image? The whole display is not in the image? How is the center determined if no portion of the display is within the image? Clarification on the record is earnestly solicited.

Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The limitation of “repeating the process” is unclear. Is this a repetition of only the capturing of the image? Of the entire process beginning with the image capture? Is this only repeated a single time? Or is this repeated indefinitely? Is there a threshold reached to indicate the end of the repetition cycle?
Clarification on the record is earnestly solicited.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Hwang et al. (US 20180111271 A1), hereinafter Hwang, in view of Tonogai et al. (US 20190047152 A1), hereinafter Tonogai.

Regarding claim 1, Hwang teaches: 1.
(Original) A robot system comprising; a robot; a control device configured to control the robot; (Paragraph 0037, "Since the mechanical arm positioning method 600 first adjusts the movable end 114 to position the center point of the positioning image at the image center of the comparison image, for example, the center point 822B of the positioning image 820B is overlapped with the image center 802B of the comparison image 800B so that the movable end 114 is collinear with the fixed point A along the direction Z1 perpendicular to the plane of the positioning pattern 400. After that, the movable end 114 is adjusted along the direction Z1 such that the area of the positioning image to be substantially equal to the predetermined area, for example, such that the area A.sub.2 of the positioning image 920A′ to be substantially equal to the predetermined area A.sub.0. As a result, the movable end 114 can be adjusted to the fixed point A from the other moving points P1, P2, P3 in the space with the assistance of the image-capturing module 200. Even more, the computing device 300 can further perform the mechanical arm positioning method 600 automatically to achieve full automation of the positioning of the mechanical arm system 100 through judging the comparison image captured by the image-capturing module 200 to actuate the mechanical arm 110 correspondingly.") … and a camera attached to one of the positioning target object and the robot, wherein the control device is configured to cause the camera to capture an image (Paragraph 0012, "Another aspect of the present invention is related to a mechanical arm system that utilizes the image-capturing module disposed at the movable end of the mechanical arm to capture the positioning pattern so as to generate the comparison image with the image of the positioning pattern. 
In addition, distance relationships between movable end and the fixed point along various axes in the space are determined through comparing the relative position and relative area between the image of the positioning pattern and the comparison image so as to drive the driving member to adjust the movable end to the fixed point. As a result, the movable end of the mechanical arm can be more accurately positioned at the fixed point, and the amount of computation and computation time required for adjusting the mechanical arm are reduced to reduce the burden of the computing device and the length of the computation time. At the same time, the time required for repositioning is reduced.") of a display including a first feature (Paragraph 0031, "A description is provided with reference to FIG. 1 and FIG. 2. The image-capturing module 200 is fixed to the movable end 114, and can freely move in a space with the movable end 114. In other embodiments, the image-capturing module 200 may be further fixed to a position beside the gripping unit 116. The image-capturing module 200 may be configured to capture a positioning pattern 400 in a field of view 220 at different moving points, such as the fixed point A, moving points P1, P2, P3, etc., and generate a comparison image with a positioning image, for example, comparison images 800A-900B and positioning images 820A-920B depicted in FIG. 5A to FIG. 6B. However, the present invention is not limited in this regard, and a detailed description is provided as follows. The positioning image corresponds to the positioning pattern 400. In one embodiment, the positioning pattern 400 may be a two-dimensional QR code or some other suitable two-dimensional patterns.") … and cause the robot to move so that a center of gravity position of the display included in the image acquired by the camera is brought close to a center of the image. (Paragraph 0013, "The invention provides a mechanical arm system. 
The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image. The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") Hwang does not specifically teach a positioning target object or determining an origin of the robot or target object. 
However, Tonogai, in the same field of endeavor of robotics, teaches: … a positioning target object; (Paragraph 0019, "According to an aspect, the image data is changed so as to differentiate the size of the image pattern in accordance with the coordinates of the leading end of the robot arm, and thus, an image pattern with a size appropriate for calibration can be displayed in accordance with a relative positional relationship between the display device and the image capture device. A calibration of a coordinate system of an image capture device and a coordinate system of a robot arm is performed to improve the accuracy of predetermined processing for an object using a robot arm (e.g. gripping, suction, fitting, winding etc. of the object). Accordingly, the calibration accuracy can be improved by changing the image data so as to differentiate the size of the image pattern in accordance with the coordinates of the leading end of the robot arm that acts on an object, and performing a calibration using a plurality of types of captured images that are based on different image patterns.") … from which an origin coordinate of the other one of the positioning target object and the robot can be acquired, (Paragraph 0080, "Next, the coordinates of the display 22 (calibration object) are obtained. Specifically, the coordinates of the display 22 are obtained based on the known shape data (length data) of the display 22, with the leading end coordinates of the robot arm R serving as a reference. If the position and orientation of the display 22 (calibration object) relative to the leading end coordinate system of the robot arm R are changeable, the coordinates of the display 22 are obtained based not only on the shape data of the display 22, but also on a changed position and orientation. 
Next, the coordinates of the display 22 are obtained with the origin coordinates of the robot arm R serving as a reference, based on the coordinates of the display 22 relative to the leading end coordinate system of the robot arm R.") …

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Hwang with the target surface and ability to determine an origin of the robot and other objects relative to the pattern as taught by Tonogai. While Hwang is silent on the plane which contains the calibration pattern being a surface which has the object being worked upon, Tonogai specifically discusses the processing of the object on the surface which contains the calibration pattern. Combining the calibration methods which Hwang uses with the ability to determine relative positioning of the robot and other structures within the environment as taught by Tonogai would ensure that the robot may accurately perform operations on the object being processed.

Regarding claim 2, where all the limitations of claim 1 are discussed above, Hwang further teaches: 2. (Currently Amended) The robot system according to claim 1, wherein the control device is further configured to determine whether or not a whole of the first feature is included in the image acquired by the camera, (Paragraph 0014, "In the foregoing, the computing device is further configured to determine a magnitude relationship between the area of the positioning image and the predetermined area. The driving member is driven such that the movable end to move away from the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning image is larger than the predetermined area.
The driving member is driven to adjust the mechanical arm such that the movable end to move closer to the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning is smaller than the predetermined area.") … based on the first feature in the image (Paragraph 0031, "A description is provided with reference to FIG. 1 and FIG. 2. The image-capturing module 200 is fixed to the movable end 114, and can freely move in a space with the movable end 114. In other embodiments, the image-capturing module 200 may be further fixed to a position beside the gripping unit 116. The image-capturing module 200 may be configured to capture a positioning pattern 400 in a field of view 220 at different moving points, such as the fixed point A, moving points P1, P2, P3, etc., and generate a comparison image with a positioning image, for example, comparison images 800A-900B and positioning images 820A-920B depicted in FIG. 5A to FIG. 6B. However, the present invention is not limited in this regard, and a detailed description is provided as follows. The positioning image corresponds to the positioning pattern 400. In one embodiment, the positioning pattern 400 may be a two-dimensional QR code or some other suitable two-dimensional patterns.") when it is determined that the whole of the first feature is included in the image. (Paragraph 0014, "In the foregoing, the computing device is further configured to determine a magnitude relationship between the area of the positioning image and the predetermined area. The driving member is driven such that the movable end to move away from the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning image is larger than the predetermined area. 
The driving member is driven to adjust the mechanical arm such that the movable end to move closer to the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning is smaller than the predetermined area.") Hwang does not specifically teach determining the origin relative to the display. However, Tonogai, in the same field of endeavor of robotics, teaches: … and acquire the origin coordinate of the other one of the positioning target object and the robot (Paragraph 0080, "Next, the coordinates of the display 22 (calibration object) are obtained. Specifically, the coordinates of the display 22 are obtained based on the known shape data (length data) of the display 22, with the leading end coordinates of the robot arm R serving as a reference. If the position and orientation of the display 22 (calibration object) relative to the leading end coordinate system of the robot arm R are changeable, the coordinates of the display 22 are obtained based not only on the shape data of the display 22, but also on a changed position and orientation. Next, the coordinates of the display 22 are obtained with the origin coordinates of the robot arm R serving as a reference, based on the coordinates of the display 22 relative to the leading end coordinate system of the robot arm R.") … It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Hwang with the ability to determine an origin of the robot and other objects relative to the pattern as taught by Tonogai. Combining the calibration methods which Hwang uses with the ability to determine relative positioning of the robot and other structures within the environment as taught by Tonogai would ensure that the robot may accurately perform operations on the object being processed. 
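The behavior the rejection maps from Hwang (bring the captured pattern's center to the image center by in-plane motion, then adjust stand-off distance until the pattern's apparent area matches a predetermined area) can be sketched as follows. This is a minimal illustration under stated assumptions, not code from either reference: the pattern is modeled as a binary mask, its "center of gravity" is taken as the first image moment (centroid) of the pattern pixels, and the function names, tolerances, and sign conventions are all hypothetical.

```python
import numpy as np

def centroid(mask: np.ndarray) -> tuple[float, float]:
    """Center of gravity (first moment) of the pattern pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def servo_step(mask: np.ndarray, target_area: float, tol_px: float = 1.0,
               area_tol: float = 0.05) -> tuple[float, float, float]:
    """One correction step in the style of Hwang's two-stage positioning:
    first an in-plane move to bring the pattern centroid to the image center,
    then a Z move sized by the area mismatch (pattern too large -> back away)."""
    h, w = mask.shape
    cx, cy = centroid(mask)
    # In-plane error: image center minus centroid, in pixels.
    dx, dy = (w - 1) / 2 - cx, (h - 1) / 2 - cy
    if abs(dx) > tol_px or abs(dy) > tol_px:
        return dx, dy, 0.0          # translate parallel to the pattern plane first
    area = float(mask.sum())
    if abs(area - target_area) / target_area > area_tol:
        # Positive dz = move away when the pattern's image area is too large.
        return 0.0, 0.0, (area - target_area) / target_area
    return 0.0, 0.0, 0.0            # positioned: centered and at target area

# Illustrative use: a 10x10 pattern offset toward the top-left of a 100x100 image.
img = np.zeros((100, 100), dtype=np.uint8)
img[20:30, 20:30] = 1
dx, dy, dz = servo_step(img, target_area=100.0)
print(dx, dy)  # 25.0 25.0 (centroid must move right and down to reach center)
```

Note the sketch also makes the examiner's §112(b) questions concrete: for a filled square the centroid is unambiguous, but for an arbitrary pattern the claimed "center of gravity" could equally mean the centroid of the marked pixels or of the surface carrying them.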
Regarding claim 3, where all the limitations of claim 2 are discussed above, Hwang further teaches: 3. (Currently Amended) The robot system according to claim 2, wherein the control device is further configured to operate the robot so that the center of gravity position of the display included in the image is brought close to the center of the image (Paragraph 0013, "The invention provides a mechanical arm system. The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image. The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") when it is determined that the whole of the first feature is not included in the image acquired by the camera. 
(Paragraph 0014, "In the foregoing, the computing device is further configured to determine a magnitude relationship between the area of the positioning image and the predetermined area. The driving member is driven such that the movable end to move away from the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning image is larger than the predetermined area. The driving member is driven to adjust the mechanical arm such that the movable end to move closer to the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning is smaller than the predetermined area.") Regarding claim 4, where all the limitations of claim 1 are discussed above, Hwang further teaches: 4. (Currently Amended) The robot system according to(Paragraph 0031, "A description is provided with reference to FIG. 1 and FIG. 2. The image-capturing module 200 is fixed to the movable end 114, and can freely move in a space with the movable end 114. In other embodiments, the image-capturing module 200 may be further fixed to a position beside the gripping unit 116. The image-capturing module 200 may be configured to capture a positioning pattern 400 in a field of view 220 at different moving points, such as the fixed point A, moving points P1, P2, P3, etc., and generate a comparison image with a positioning image, for example, comparison images 800A-900B and positioning images 820A-920B depicted in FIG. 5A to FIG. 6B. However, the present invention is not limited in this regard, and a detailed description is provided as follows. The positioning image corresponds to the positioning pattern 400. In one embodiment, the positioning pattern 400 may be a two-dimensional QR code or some other suitable two-dimensional patterns." 
Examiner note: The first "feature" may be considered to be the center of the code pattern and the surrounding pattern may be considered to be a second feature.) Regarding claim 5, where all the limitations of claim 1 are discussed above, Hwang further teaches: 5. (Currently Amended) The robot system according to claim 4, wherein the control device is further configured to cause the camera to capture an image of the display again (Paragraph 0012, "Another aspect of the present invention is related to a mechanical arm system that utilizes the image-capturing module disposed at the movable end of the mechanical arm to capture the positioning pattern so as to generate the comparison image with the image of the positioning pattern. In addition, distance relationships between movable end and the fixed point along various axes in the space are determined through comparing the relative position and relative area between the image of the positioning pattern and the comparison image so as to drive the driving member to adjust the movable end to the fixed point. As a result, the movable end of the mechanical arm can be more accurately positioned at the fixed point, and the amount of computation and computation time required for adjusting the mechanical arm are reduced to reduce the burden of the computing device and the length of the computation time. 
At the same time, the time required for repositioning is reduced.") after moving the robot in a direction away from the display when it is determined that the whole of the first feature is not included in the image acquired by the camera (Paragraph 0008, "In the foregoing, the step of determining whether the area of the positioning image is substantially equal to a predetermined area comprises: determining a magnitude relationship between the area of the positioning image and the predetermined area; adjusting the mechanical arm such that the mechanical arm to move away from the positioning pattern along a direction perpendicular to the plane where the positioning pattern is located if the area of the positioning image is larger than the predetermined area; and adjusting the mechanical arm such that the mechanical arm to move closer to the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning image is smaller than the predetermined area.") and the calculated center of gravity position is located at a position within a predetermined range of the center of the image. (Paragraph 0013, "The invention provides a mechanical arm system. The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image. The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. 
If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") Regarding claim 6, where all the limitations of claim 4 are discussed above, Hwang further teaches: 6. (Currently Amended) The robot system according to claim 4 further configured to calculate the center of gravity position using the whole of the display included in the image. (Paragraph 0013, "The invention provides a mechanical arm system. The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image. The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. 
The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") Regarding claim 8, where all the limitations of claim 3 are discussed above, Hwang further teaches: 8. (Currently Amended) The robot system according to claim 3, wherein the control device is further configured to calculate the center of gravity position (Paragraph 0013, "The invention provides a mechanical arm system. The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image. The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. 
If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") … Hwang is silent on the use of a single feature in the pattern. However, Tonogai, in the same field of endeavor of robotics, teaches: … using only the first feature included in the image. (Paragraph 0075, "FIG. 3 shows an example of an initial setting procedure for the display device D. Initially, the type of calibration pattern to be displayed on the display 22 is determined (S31). FIGS. 4A to 4D show examples of calibration patterns. FIG. 4A shows a dot image pattern constituted by a square frame line and a total of 49 (7 rows by 7 columns) black dots that are arranged at regular intervals in this frame line. Meanwhile, one of the corners is painted in a triangular shape to specify the direction of the square. That is to say, the calibration pattern shown in FIG. 4A is a pattern that includes black dots arranged at predetermined intervals in the frame line, and a polygonal mark marker for specifying the direction of the pattern. FIG. 4B shows image patterns of an AR marker (left) and a two-dimensional barcode (right). FIG. 4C shows an example of a checkerboard pattern (an image pattern in which black squares and white squares are arranged alternately), and FIG. 4D shows an example of an image pattern constituted by triangles. Various image patterns can be used in accordance with the positional relationship between the robot R and the image sensor S, the purpose of calibration, required accuracy, or the like. For example, the number and size of black dots in the aforementioned dot pattern can be set arbitrarily. In an embodiment, the dot image pattern is selected.
The display control unit 20 is configured to output image data for displaying the selected image pattern to the display 22.") It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operation methods as taught by Hwang with the ability to use different types of patterns, including those with a single feature as taught by Tonogai to perform the calibration of the system. This would ensure the system is able to adapt to different types of calibration patterns and still operate efficiently and accurately. Regarding claim 9, Hwang further teaches: 9. (Currently Amended) A calibration method comprising: capturing an image, with a camera (Paragraph 0012, "Another aspect of the present invention is related to a mechanical arm system that utilizes the image-capturing module disposed at the movable end of the mechanical arm to capture the positioning pattern so as to generate the comparison image with the image of the positioning pattern. In addition, distance relationships between movable end and the fixed point along various axes in the space are determined through comparing the relative position and relative area between the image of the positioning pattern and the comparison image so as to drive the driving member to adjust the movable end to the fixed point. As a result, the movable end of the mechanical arm can be more accurately positioned at the fixed point, and the amount of computation and computation time required for adjusting the mechanical arm are reduced to reduce the burden of the computing device and the length of the computation time. 
At the same time, the time required for repositioning is reduced.") attached to one of a robot (Paragraph 0037, "Since the mechanical arm positioning method 600 first adjusts the movable end 114 to position the center point of the positioning image at the image center of the comparison image, for example, the center point 822B of the positioning image 820B is overlapped with the image center 802B of the comparison image 800B so that the movable end 114 is collinear with the fixed point A along the direction Z1 perpendicular to the plane of the positioning pattern 400. After that, the movable end 114 is adjusted along the direction Z1 such that the area of the positioning image to be substantially equal to the predetermined area, for example, such that the area A.sub.2 of the positioning image 920A′ to be substantially equal to the predetermined area A.sub.0. As a result, the movable end 114 can be adjusted to the fixed point A from the other moving points P1, P2, P3 in the space with the assistance of the image-capturing module 200. Even more, the computing device 300 can further perform the mechanical arm positioning method 600 automatically to achieve full automation of the positioning of the mechanical arm system 100 through judging the comparison image captured by the image-capturing module 200 to actuate the mechanical arm 110 correspondingly.") and … (Paragraph 0031, "A description is provided with reference to FIG. 1 and FIG. 2. The image-capturing module 200 is fixed to the movable end 114, and can freely move in a space with the movable end 114. In other embodiments, the image-capturing module 200 may be further fixed to a position beside the gripping unit 116. 
The image-capturing module 200 may be configured to capture a positioning pattern 400 in a field of view 220 at different moving points, such as the fixed point A, moving points P1, P2, P3, etc., and generate a comparison image with a positioning image, for example, comparison images 800A-900B and positioning images 820A-920B depicted in FIG. 5A to FIG. 6B. However, the present invention is not limited in this regard, and a detailed description is provided as follows. The positioning image corresponds to the positioning pattern 400. In one embodiment, the positioning pattern 400 may be a two-dimensional QR code or some other suitable two-dimensional patterns.") … determining whether or not a whole of the first feature is included in the image captured by the camera, … when it is determined that the first feature is included in the image; (Paragraph 0014, "In the foregoing, the computing device is further configured to determine a magnitude relationship between the area of the positioning image and the predetermined area. The driving member is driven such that the movable end to move away from the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning image is larger than the predetermined area. The driving member is driven to adjust the mechanical arm such that the movable end to move closer to the positioning pattern along the direction perpendicular to the plane where the positioning pattern is located if the area of the positioning is smaller than the predetermined area.") controlling the robot to make a center of gravity of the display included in the image close to a center of the image when it is determined that the display is not included in the image; and repeating the processes from capturing the image of the display. (Paragraph 0013, "The invention provides a mechanical arm system. 
The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image. The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") Hwang does not specifically teach a positioning target object or determining an origin of the robot or target object. 
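As a rough, non-authoritative sketch of the two-stage adjustment Hwang describes in the passages quoted above (first center the positioning image in the comparison image by motion parallel to the pattern plane, then match its imaged area to the predetermined area by motion along the plane normal), the loop might look like the following Python. Every function name, tolerance, and step size here is hypothetical and not taken from the reference.

```python
# Hypothetical sketch of Hwang's two-stage positioning loop (paras.
# 0013-0014). All names and thresholds are illustrative only.

def position_movable_end(capture, move_parallel, move_perpendicular,
                         predetermined_area, center_tol=2.0, area_tol=0.02):
    """Iteratively drive the movable end toward the fixed point.

    capture()              -> (pattern_center_xy, pattern_area, image_center_xy)
    move_parallel(dx, dy)  -> translate in the plane of the pattern
    move_perpendicular(dz) -> translate along the plane normal (positive = away)
    """
    while True:
        (cx, cy), area, (ix, iy) = capture()
        # Stage 1: center the positioning image in the comparison image.
        if abs(cx - ix) > center_tol or abs(cy - iy) > center_tol:
            move_parallel(ix - cx, iy - cy)
            continue
        # Stage 2: match the imaged area to the predetermined area.
        ratio = area / predetermined_area
        if abs(ratio - 1.0) > area_tol:
            # Imaged area too large -> too close, back away; too small -> approach.
            move_perpendicular(1.0 if ratio > 1.0 else -1.0)
            continue
        return  # centered and at the predetermined area, i.e. at the fixed point
```

The sign convention (move away when the imaged area exceeds the predetermined area, move closer when it is smaller) follows the quoted paragraph 0014 of Hwang.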
However, Tonogai, in the same field of endeavor of robotics, teaches: … a positioning target object (Paragraph 0019, "According to an aspect, the image data is changed so as to differentiate the size of the image pattern in accordance with the coordinates of the leading end of the robot arm, and thus, an image pattern with a size appropriate for calibration can be displayed in accordance with a relative positional relationship between the display device and the image capture device. A calibration of a coordinate system of an image capture device and a coordinate system of a robot arm is performed to improve the accuracy of predetermined processing for an object using a robot arm (e.g. gripping, suction, fitting, winding etc. of the object). Accordingly, the calibration accuracy can be improved by changing the image data so as to differentiate the size of the image pattern in accordance with the coordinates of the leading end of the robot arm that acts on an object, and performing a calibration using a plurality of types of captured images that are based on different image patterns.") … from which an origin coordinate of the other one of the robot and the positioning target object can be acquired; … acquiring the origin coordinate based on the first feature in the image (Paragraph 0080, "Next, the coordinates of the display 22 (calibration object) are obtained. Specifically, the coordinates of the display 22 are obtained based on the known shape data (length data) of the display 22, with the leading end coordinates of the robot arm R serving as a reference. If the position and orientation of the display 22 (calibration object) relative to the leading end coordinate system of the robot arm R are changeable, the coordinates of the display 22 are obtained based not only on the shape data of the display 22, but also on a changed position and orientation. 
Next, the coordinates of the display 22 are obtained with the origin coordinates of the robot arm R serving as a reference, based on the coordinates of the display 22 relative to the leading end coordinate system of the robot arm R.") … It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Hwang with the target surface and ability to determine an origin of the robot and other objects relative to the pattern as taught by Tonogai. While Hwang is silent on the plane which contains the calibration pattern being a surface on which the object is worked, Tonogai specifically discusses the processing of the object on the surface which contains the calibration pattern. Combining the calibration methods which Hwang uses with the ability to determine relative positioning of the robot and other structures within the environment as taught by Tonogai would ensure that the robot may accurately perform operations on the object being processed. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hwang in view of Tonogai and further in view of Huang et al. (US 9193073 B1), hereinafter Huang. Regarding claim 7, where all the limitations of claim 4 are discussed above, Hwang further teaches: 7. (Currently Amended) The robot system according to further configured to calculate the center of gravity position (Paragraph 0013, "The invention provides a mechanical arm system. The mechanical arm system comprises a mechanical arm, an image-capturing module, and a computing device. The mechanical arm comprises a movable end and at least one driving member. The driving member is configured to move the movable end to a fixed point. The image-capturing module is fixed to the movable end. The image-capturing module is configured to capture a positioning pattern at a moving point so as to generate a comparison image with a positioning image.
The positioning image corresponds to the positioning pattern. The computing device is configured to determine whether a center of the positioning image is located at a center of the comparison image. If not, the driving member is driven to adjust a position of the movable end in parallel with a plane where the positioning pattern is located such that the center of the positioning image to be located at the center of the comparison image. The computing device is further configured to determine whether an area of the positioning image is substantially equal to a predetermined area. If not, the driving member is driven to adjust a position of the movable end along a direction perpendicular to the plane where the positioning pattern is located to change a distance between the image-capturing module and the positioning pattern so as such that the area of the positioning image to be substantially equal to the predetermined area.") … Hwang does not specifically discuss applying weights to different features within the pattern. However, Huang, in the same field of endeavor of robotics, teaches: … by giving a larger weight to the first feature than to the second feature included in the image. (Col 3, Line 58-Col 4, Line 23, "FIG. 2 illustrates a diagram of an encoded calibration plate 20 of a robot calibration apparatus according to an embodiment of the present invention. The encoded calibration plate 20 may have a chessboard pattern. The chessboard pattern may comprise of interchanging black squares 21 and white squares 22. The black squares 21 may have orientation encodings for indicating an orientation of the encoded calibration plate 20 and the white squares 22 may have coordinates encodings for indicating positions on the encoded calibration plate 20. 
An orientation encoding of a black square 21 may include an icon 23 positioned near a corner of the black square 21 to indicate that an origin O of the encoded calibration plate 20 may be close to a corresponding corner of the encoded calibration plate 20. A coordinates encoding in a white square 22 may comprise encoding icons arranged in a matrix to indicate a position of the white square 22 relative to the origin O of the encoded calibration plate 20. In an embodiment, the encoding icons in a white square 22 may comprise of only solid icons 25, or a combination of both hollow icons 24 and solid icons 25 arranged in two columns. As shown in FIG. 2, a first column on one side of the white square 22 may be used to represent an X-coordinate of the encoded calibration plate 20 and a second column on another side of the white square 22 may be used to represent a Y-coordinate of the encoded calibration plate 20. Each of the X-coordinate and Y-coordinate may be represented by a combination of an encoding icon in row A, an encoding icon in row B, and an encoding icon in row C. Each encoding icon may represent a binary bit. Encoding icons from row A may have a binary weight of 2.sup.0, encoding icons from row B may have a binary weight of 2.sup.1, and encoding icons from row C may have a binary weight of 2.sup.2. The hollow icons 24 may represent a bit 0 value and the solid icons 25 may represent a bit 1 value.") It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operating methods as taught by Hwang with the ability to provide different weights to different portions of the calibration pattern as taught by Huang. 
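As a minimal illustration of the claim 7 limitation discussed above (a center-of-gravity position computed by giving the first feature a larger weight than the second feature), a weighted centroid can be sketched as follows. This is a hedged sketch only; the function name, point groupings, and weight values are all hypothetical and appear in neither Hwang nor Huang.

```python
# Hypothetical sketch of a weighted center-of-gravity computation in which
# first-feature points count more than second-feature points. The weights
# 3.0 and 1.0 are illustrative defaults, not values from the references.

def weighted_center_of_gravity(first_feature_pts, second_feature_pts,
                               w_first=3.0, w_second=1.0):
    """Weighted centroid over two groups of (x, y) feature points."""
    total_w = 0.0
    sx = sy = 0.0
    for (x, y) in first_feature_pts:
        sx += w_first * x
        sy += w_first * y
        total_w += w_first
    for (x, y) in second_feature_pts:
        sx += w_second * x
        sy += w_second * y
        total_w += w_second
    return (sx / total_w, sy / total_w)
```

With a 3:1 weighting, a single first-feature point pulls the computed center three times as strongly as a single second-feature point, which is the qualitative behavior the examiner maps onto Huang's weighted encoding scheme.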
Incorporating weights into the calibration code/pattern as taught by Huang would allow the system to determine the positioning of the visual pattern more efficiently even when the image does not include the entire pattern and allow the system to determine the appropriate movements to make in order to reach a calibrated state. Conclusion The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATHER KENIRY whose telephone number is (571)270-5468. The examiner can normally be reached M-F 7:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571) 270-5376.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.J.K./Examiner, Art Unit 3657 /ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657

Prosecution Timeline

Aug 07, 2024
Application Filed
Jan 19, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600035
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, ROBOT SYSTEM, MANUFACTURING METHOD OF PRODUCT, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12583123
ITERATIVE CONTROL OF ROBOT FOR TARGET OBJECT
2y 5m to grant Granted Mar 24, 2026
Patent 12576539
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Mar 17, 2026
Patent 12562076
LEARNING ASSISTANCE SYSTEM, LEARNING ASSISTANCE METHOD, AND LEARNING ASSISTANCE STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12558780
MULTI-PURPOSE ROBOTS AND COMPUTER PROGRAM PRODUCTS, AND METHODS FOR OPERATING THE SAME
2y 5m to grant Granted Feb 24, 2026


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+22.1%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 102 resolved cases by this examiner. Grant probability derived from career allow rate.
