Prosecution Insights
Last updated: April 19, 2026
Application No. 17/309,353

OBJECT GRASPING SYSTEM

Non-Final OA (§103)
Filed: May 20, 2021
Examiner: KENIRY, HEATHER J
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Japan Cash Machine Co. Ltd.
OA Round: 5 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 5-6
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (80 granted / 102 resolved), +26.4% vs TC avg, above average
Interview Lift: +22.1% higher allowance among resolved cases with an interview vs. without
Avg Prosecution: 2y 7m typical timeline; 32 applications currently pending
Total Applications: 134 across all art units (career history)
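
The headline figures above follow from simple arithmetic on the career record. A minimal sketch, assuming the with-interview probability is the base rate plus the interview lift (capped at the displayed 99%) and that "+26.4% vs TC avg" is a plain difference; the exact model the tool uses is not stated.

```python
# Recomputing the headline examiner figures from the career record above.
# How the tool combines the interview lift with the base rate is an
# assumption here (simple addition, capped at the displayed 99%).
granted, resolved = 80, 102
allow_rate = granted / resolved                  # 0.784 -> shown as 78%

interview_lift = 0.221                           # +22.1 points with an interview
with_interview = min(allow_rate + interview_lift, 0.99)

implied_tc_avg = allow_rate - 0.264              # from "+26.4% vs TC avg"

print(f"career allow rate:              {allow_rate:.1%}")      # 78.4%
print(f"grant probability w/ interview: {with_interview:.1%}")  # 99.0%
print(f"implied Tech Center average:    {implied_tc_avg:.1%}")  # 52.0%
```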

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 50.8% (+10.8% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)
Tech Center averages are estimates; based on career data from 102 resolved cases.
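
Assuming each "vs TC avg" figure is a simple difference between the examiner's rate and the Tech Center estimate, the implied baseline works out to roughly 40% for every statute; a quick check:

```python
# Implied Tech Center baseline per statute, assuming "vs TC avg" is a plain
# difference (examiner rate minus TC rate), in percentage points.
rates = {"§101": (13.1, -26.9), "§103": (50.8, +10.8),
         "§102": (14.8, -25.2), "§112": (18.9, -21.1)}

for statute, (examiner_pct, delta_pct) in rates.items():
    print(f"{statute}: examiner {examiner_pct:.1f}%, "
          f"implied TC avg {examiner_pct - delta_pct:.1f}%")
# Each row implies the same ~40.0% Tech Center estimate.
```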

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . DETAILED ACTION This Office action is in response to the amendment filed on 12/05/2025. Claims 9, 15, 21, and 27 are currently pending with claims 9, 21, and 27 being amended, and claims 10-14, 16-20, 22-23, 25-26, and 28 being cancelled. Response to Amendment The amendments to the claims submitted on 12/05/2025 have been received and accepted. They overcome any claim objections set forth in the previous Office action except for those set forth in the claim objection section below. Response to Arguments Applicant’s amendments and arguments, see communications, filed 12/05/2025, with respect to the rejection(s) of claim(s) 9-23 and 25-28 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Shi et al. (US 20140163729 A1). Examiner notes wherein Applicant argues the newly amended limitations, which have not been addressed by the prior art of record. As such, Examiner has augmented the below rejection(s) in view of the prior art of record to address the newly amended limitations. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 9 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki et al. (US 20130238124 A1), hereinafter Suzuki in view of Corkum et al. (US 20180126553 A1), hereinafter Corkum, Leroux et al. (US 7583835 B2), hereinafter Leroux, and Shi et al. (US 20140163729 A1), hereinafter Shi. Regarding claim 9, Suzuki teaches: 9. (currently amended) An object grasping system, comprising: a camera (Paragraphs 0031-0032, "The projector of the sensor unit 101 can irradiate a target object with uniform-luminance light. 
The camera of the sensor unit 101 senses an image of the target object irradiated with the uniform-luminance light, and outputs a two-dimensional image to the sensor information acquisition unit 111. The sensor unit 102 includes a compact projector and a compact camera for sensing a two-dimensional image. The sensor unit 102 is fixed and mounted near the hand whose position and orientation can be controlled (changed) by the angle of each joint of the robot 100. The sensor unit 102 senses a target object gripped by the hand. Assume that the relative positional relationship between the projector and camera of the sensor unit 102 has been obtained in advance by calibration. Although the image processing unit 110 processes an image sensed by the sensor unit 102 in the embodiment, the sensor unit 102 may incorporate an image processing mechanism to output an image processing result."); a grasping unit (Paragraph 0027, " A robot 100 is an articulated robot and operates in response to a control instruction from a robot controller unit 120. A hand serving as an end effector is mounted on the distal end of the robot 100 and can do work for a target object. In the embodiment, a hand with a chuck mechanism capable of gripping a target object is used as the end effector. The end effector may use a motor-driven hand or a suction pad for sucking a target object by air pressure."); and a control unit (Paragraph 0027, " A robot 100 is an articulated robot and operates in response to a control instruction from a robot controller unit 120.") for moving the grasping unit toward -an object (See Figure 10, step S1006, the robot executes the instructions according to the work requested. This includes moving the end effector towards the target object. Paragraph 0063, "For example, if the robot work instruction unit 121 receives the position and orientation of a target object from the position and orientation measurement processing unit 113, it generates an instruction signal to move the hand to the position and orientation in which a target object in the received position and orientation can be gripped, and grip the target object.") … specifying a relative position of the object with respect to the grasping unit based on … taken by the camera (Paragraph 0044, "A plurality of cameras may be arranged by setting up a scaffold for image sensing, the user may hold a camera to sense an image, or a camera mounted on the robot may sense an image while moving the robot. Although an image may be sensed by any method, the relative position and orientation between the camera and the target object 103 in image sensing is obtained and stored in association with the sensed image. When a plurality of cameras are arranged on the scaffold, the relative position and orientation can be obtained from the shape of the scaffold. When the user holds a camera, the relative position and orientation can be obtained from a position and orientation sensor by mounting it on the camera. When the camera mounted on the robot senses an image, the relative position and orientation can be obtained using control information of the robot." See also Figures 4 and 10, which demonstrates that the process of sensing (imaging the environment) is done repeatedly until the object has been successfully gripped and the task is complete. This shows that sensor information is collected and then processed in order to determine information on the target object and determine if the robot has successfully grasped the target object. 
This is done within a loop which anticipates the continuous aspect of the claimed limitation. In figure 4, the process also includes, within the loop, the step S409 (“EXECUTE ROBOT WORK”) which demonstrates that this process is done as the grasping mechanism is brought closer to the target object in order to grasp it. Also see Paragraph 0028, “Assume that calibration work has been performed in advance by a well-known technique for the position and orientation of a sensor unit 101, the positions and orbits of the robot 100 and the hand, and the relative position and orientation between the arm of the robot 100 and a sensor unit 102. This makes it possible to convert the position and orientation of a target object in a pallet 104 that is measured by a position and orientation measurement processing unit 113, and the position and orientation of the target object measured by an extraction state measurement processing unit 115 into those in a work space coordinate system fixed in a space where the pallet 104 is placed. The robot 100 can also be controlled to move the hand to a position and orientation designated in the work space coordinate system.” Which discusses the sensor unit 102 that is used to capture information on the target object which can then be processed for control of the system. Also see Paragraphs 0160-0161, “In step S1002, the position and orientation measurement processing unit 913 obtains (measures) the position and orientation of at least one target object among a plurality of target objects in the sensed image received from the sensor information acquisition unit 911. The obtained position and orientation is used to predict whether extraction of a target object will succeed or to grip a target object by the robot. The processing in step S1002 is the same as that in step S402, and a description thereof will not be repeated. After obtaining the position and orientation of the gripping candidate object, the position and orientation measurement processing unit 913 sends the obtained position and orientation to the prediction unit 914 and the target object selection unit 915. If the position and orientation measurement processing unit 913 obtains the positions and orientations of a plurality of gripping candidate objects, it sends all the obtained positions and orientations to the prediction unit 914 and the target object selection unit 915.”) as the grasping unit moves towards the object (See Figure 10, step S1006, the robot executes the instructions according to the work requested. This includes moving the end effector towards the target object. Paragraph 0063, "For example, if the robot work instruction unit 121 receives the position and orientation of a target object from the position and orientation measurement processing unit 113, it generates an instruction signal to move the hand to the position and orientation in which a target object in the received position and orientation can be gripped, and grip the target object.") … and wherein said control unit is further configured to: (i) detect a plurality of objects; (Paragraph 0070, “In step S402, the position and orientation measurement processing unit 113 obtains (measures) the position and orientation of at least one target object among a plurality of target objects in the sensed image received from the sensor information acquisition unit 111. 
The obtained position and orientation is used to extract and grip a target object by the robot.”) (ii) identify a first object from the plurality of objects (Paragraph 0100, “In the embodiment, one target object is selected from a plurality of target objects whose positions and orientations have been measured by the robot work instruction unit 121. However, the position and orientation measurement processing unit 113 may select one target object and then output its position and orientation to the robot work instruction unit 121.” As well as Paragraph 0115, “In step S411, the target object selection unit 117 determines whether a target object to be gripped next has been selected in step S410. For example, in FIG. 7, the positions and orientations of the three target objects except for the target object 103' have been measured. However, if the position and orientation of even one target object has not been measured except for the target object 103', a target object to be gripped next cannot be selected. As a matter of course, when a target object matching the selection condition is not detected in step S410, a target object to be gripped next cannot be selected. These situations can be determined by referring to the position and orientation measurement result in step S402 and the gripping instruction list.”) … and (iii) grasp the first object before the remaining plurality of objects. (Paragraphs 0118-0119, “In step S412, the target object selection unit 117 sends, to the gripping instruction unit 118, the position and orientation of the selected target object (a position and orientation selected as the position and orientation of a gripping candidate among the positions and orientations obtained in step S402). The gripping instruction unit 118 sends the position and orientation of the selected target object to the robot work instruction unit 121. The robot work instruction unit 121 generates an instruction signal to move the hand of the robot 100 to the position and orientation received from the gripping instruction unit 118, and then sends the generated instruction signal to the robot control unit 122. The gripping instruction unit 118 registers the selected target object in the gripping instruction list. Thereafter, the process returns to step S404.”) Suzuki does not specifically disclose the camera capturing images of the workspace continuously for alterations to the trajectory, determining a distance between the camera and the object based on focal length and image data, or comparing the orientation and position of the grasper with that of the target object. 
However, Corkum, in the same field of endeavor of robotic control, teaches: … while repeatedly … a plurality of continuous images … such that movement of the grasping arm may be adjusted accordingly (Paragraph 0017, "For example, the controller 160 can: locate an object reference frame—including an object coordinate system—relative to a target object identified in an image recorded from the camera 150; orient the object reference frame in real space relative to the end effector 140 based on the position and orientation of the target object identified in the image and based on a known offset between the camera 150 and the end effector 140; project the preplanned trajectory into the object reference frame; implement closed-loop controls to move the end effector 140 along the preplanned trajectory toward the terminus of the preplanned trajectory at which the end effector 140 may accurately engage the target object; refine the location and orientation of the object reference coordinate system as the arm moves the end effector 140 and the camera 150 closer to the target object (which may yield a higher-resolution image of the target object); and repeat this process regularly—such as at a rate of 2 Hz, 20 Hz, or for every 10-millimeter interval traversed by the end effector 140—until the end effector 140 engages the target object. By thus realigning the preplanned trajectory to the target object (e.g., to an object feature or constellation of object features) detected in the field of view of the camera 150 as the end effector 140 approaches the target object, the system 100 can achieve increased locational accuracy of the end effector 140 relative to the target object as the end effector 140 nears the target object while also accommodating wide variances in the location and orientation of the target object from its expected location and orientation and/or accommodating wide variances in the location and orientation of one unit of the target object to a next unit of the target object.") However, Leroux, in the same field of endeavor of robotic grasping, teaches: and calculating a three dimensional distance between the object and camera based on a focal length of the camera and an occupied ratio of an image area corresponding to the target object with respect to an entire image area; (Column 3 Line 45 – Column 4 Line 12, “We can distinguish the relationship: .DELTA..sub.m=T.sub.p.DELTA..sub.p where .DELTA.m and T.sub.p are expressed in mm, .DELTA..sub.p is the movement in pixels of the object between the two images (without dimensions) and T.sub.p represents the size of a pixel on the image of the camera 2. We can thus calculate the coordinates of the object 4: Z=F(D/(.DELTA..sub.pT.sub.p)-1) Where Z is the focal length of the camera 2 at the surface of the object 4; F is the focal length of the lens (in mm); D is the movement of the camera 2 between the taking of two images (in mm); T.sub.p is the size of a pixel of the CCD sensor (in mm). We can then deduce the other coordinates of the object 4: X=X.sub.pT.sub.p(Z+F)/F and Y=Y.sub.pT.sub.p(Z+F)/F, where X is the abscissa of the object in the camera indicator (in mm); Y is the ordinate of the object in the camera indicator (in mm); X.sub.p is the abscissa of the object in the image (in pixels, therefore without dimension); Y.sub.p is the ordinate of the object in the image (in pixels, therefore without dimension); The method may be applied to chains of three or more images. 
Each image has characteristic points that may be paired with a point or no points of the previous image and the following image. If a point can be found by pairing on all of the images of a chain, it is more likely that it truly belongs to the object 4, which increases the reliability of the method.” Please also see figure 1 which demonstrates that this system is operating in a three dimensional environment and the arm has multiple degrees of freedom therefor the distance is a three dimensional distance. Examiner further notes that any distance may be understood to be a three dimensional distance even if one or two of the dimensions have a value of zero.) However, Shi, in the same field of endeavor of robotics, teaches: … that has a posture most similar to the posture of the grasping unit; (Paragraph 0096, "The method 300 commences 301 and flow of the algorithm proceeds to block 302 whereat the computer processor obtains data regarding the grasping device 100. The present algorithm for planning a grasp considers the grasping device data, along with object data, to determine how the grasping device characteristics match up with object dimensions and pose, for determining how to position the grasping device for grasping the object.") … It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic grasping system as taught by Suzuki with the visual servoing methods as taught by Corkum as well as the ability to determine a relative position of the object with respect to the camera using focal length and image data (pixel size) as taught by Leroux. Further, Shi teaches a method of comparing the pose of the end effector with the pose of the object to be grasped in order to determine how closely they match and how to adjust the grasping device for an effective grasp. It would be obvious to incorporate this processing into the method of selecting a gripping candidate as taught by Suzuki in order to select a candidate requiring the least movement before grasping. Using this combined method, the system would be capable of determining the position of objects even if partially occluded. This would ensure highly accurate control of the robotic system and allow the system to react efficiently to unexpected movement or changes in the workspace or to the workpiece position/orientation. Regarding claim 21, Suzuki teaches: 21. (currently amended) An object grasping method, comprising: shooting with a camera a plurality of continuous images (Paragraph 0141, "The sensor unit 901 includes a projector and a camera for sensing a two-dimensional image as two-dimensional information. The sensor unit 901 is fixed above the pallet 904, senses an image of heaped target objects 903, and outputs the sensed image to a sensor information acquisition unit 911. Although an image processing unit 910 processes the image sensed by the sensor unit 901 in the embodiment, the sensor unit 901 may incorporate an image processing mechanism to output an image processing result. Assume that the relative positional relationship between the projector and camera of the sensor unit 901 has been obtained in advance by calibration." And Paragraphs 0160-0161, “In step S1002, the position and orientation measurement processing unit 913 obtains (measures) the position and orientation of at least one target object among a plurality of target objects in the sensed image received from the sensor information acquisition unit 911. 
The obtained position and orientation is used to predict whether extraction of a target object will succeed or to grip a target object by the robot. The processing in step S1002 is the same as that in step S402, and a description thereof will not be repeated. After obtaining the position and orientation of the gripping candidate object, the position and orientation measurement processing unit 913 sends the obtained position and orientation to the prediction unit 914 and the target object selection unit 915. If the position and orientation measurement processing unit 913 obtains the positions and orientations of a plurality of gripping candidate objects, it sends all the obtained positions and orientations to the prediction unit 914 and the target object selection unit 915.” also see Figure 10, step S1001 and the loop that this processing is a part of. This demonstrates that as the robot executes commands to grip the target object, the system continues to sense the object and environment and use that information to determine control of the robot as long as the end has not been reached.) to detect a plurality of objects; (Paragraph 0070, “In step S402, the position and orientation measurement processing unit 113 obtains (measures) the position and orientation of at least one target object among a plurality of target objects in the sensed image received from the sensor information acquisition unit 111. The obtained position and orientation is used to extract and grip a target object by the robot.”) identifying a first object from the plurality of objects (Paragraph 0100, “In the embodiment, one target object is selected from a plurality of target objects whose positions and orientations have been measured by the robot work instruction unit 121. However, the position and orientation measurement processing unit 113 may select one target object and then output its position and orientation to the robot work instruction unit 121.” As well as Paragraph 0115, “In step S411, the target object selection unit 117 determines whether a target object to be gripped next has been selected in step S410. For example, in FIG. 7, the positions and orientations of the three target objects except for the target object 103' have been measured. However, if the position and orientation of even one target object has not been measured except for the target object 103', a target object to be gripped next cannot be selected. As a matter of course, when a target object matching the selection condition is not detected in step S410, a target object to be gripped next cannot be selected. These situations can be determined by referring to the position and orientation measurement result in step S402 and the gripping instruction list.”) … repeatedly identifying a relative three-dimensional position of [[an]] the first object with respect to [[a]] the grasping unit based on the plurality of captured images (Paragraph 0044, "A plurality of cameras may be arranged by setting up a scaffold for image sensing, the user may hold a camera to sense an image, or a camera mounted on the robot may sense an image while moving the robot. Although an image may be sensed by any method, the relative position and orientation between the camera and the target object 103 in image sensing is obtained and stored in association with the sensed image. When a plurality of cameras are arranged on the scaffold, the relative position and orientation can be obtained from the shape of the scaffold. 
When the user holds a camera, the relative position and orientation can be obtained from a position and orientation sensor by mounting it on the camera. When the camera mounted on the robot senses an image, the relative position and orientation can be obtained using control information of the robot." And Paragraphs 0160-0161, “In step S1002, the position and orientation measurement processing unit 913 obtains (measures) the position and orientation of at least one target object among a plurality of target objects in the sensed image received from the sensor information acquisition unit 911. The obtained position and orientation is used to predict whether extraction of a target object will succeed or to grip a target object by the robot. The processing in step S1002 is the same as that in step S402, and a description thereof will not be repeated. After obtaining the position and orientation of the gripping candidate object, the position and orientation measurement processing unit 913 sends the obtained position and orientation to the prediction unit 914 and the target object selection unit 915. If the position and orientation measurement processing unit 913 obtains the positions and orientations of a plurality of gripping candidate objects, it sends all the obtained positions and orientations to the prediction unit 914 and the target object selection unit 915.” See also Figure 10, which demonstrates that the process of sensing (imaging the environment) is done repeatedly until the object has been successfully gripped and the task is complete. Also see Paragraph 0024, “In the first embodiment, the positions and orientations of target objects heaped in a pallet are measured using the first sensor (a projector and camera) for acquiring two-dimensional information (a two-dimensional image) and three-dimensional information (a range image or a two-dimensional image for obtaining three-dimensional point group data) about target objects.”); moving the grasping unit toward the first object (See Figure 10, step S1006, the robot executes the instructions according to the work requested. This includes moving the end effector towards the target object. Paragraph 0063, "For example, if the robot work instruction unit 121 receives the position and orientation of a target object from the position and orientation measurement processing unit 113, it generates an instruction signal to move the hand to the position and orientation in which a target object in the received position and orientation can be gripped, and grip the target object." And Paragraphs 0160-0161, “In step S1002, the position and orientation measurement processing unit 913 obtains (measures) the position and orientation of at least one target object among a plurality of target objects in the sensed image received from the sensor information acquisition unit 911. The obtained position and orientation is used to predict whether extraction of a target object will succeed or to grip a target object by the robot. The processing in step S1002 is the same as that in step S402, and a description thereof will not be repeated. After obtaining the position and orientation of the gripping candidate object, the position and orientation measurement processing unit 913 sends the obtained position and orientation to the prediction unit 914 and the target object selection unit 915. 
If the position and orientation measurement processing unit 913 obtains the positions and orientations of a plurality of gripping candidate objects, it sends all the obtained positions and orientations to the prediction unit 914 and the target object selection unit 915.” also see Figure 10, step S1001 and the loop that this processing is a part of. This demonstrates that as the robot executes commands to grip the target object, the system continues to sense the object and environment and use that information to determine control of the robot as long as the end has not been reached. Also see at least paragraph 0073, “In the embodiment, the coarse position and orientation (represented by a six-dimensional vector s) of the target object to be measured is repeatedly corrected by an iterative operation using the Gauss-Newton method, which is a kind of nonlinear optimization method, so that the three-dimensional geometric model is fitted in the sensed image. Note that the optimization method for obtaining the position and orientation of a target object is not limited the Gauss-Newton method.”) … ; and grasping the first object(Paragraph 0044, "A plurality of cameras may be arranged by setting up a scaffold for image sensing, the user may hold a camera to sense an image, or a camera mounted on the robot may sense an image while moving the robot. Although an image may be sensed by any method, the relative position and orientation between the camera and the target object 103 in image sensing is obtained and stored in association with the sensed image. When a plurality of cameras are arranged on the scaffold, the relative position and orientation can be obtained from the shape of the scaffold. When the user holds a camera, the relative position and orientation can be obtained from a position and orientation sensor by mounting it on the camera. When the camera mounted on the robot senses an image, the relative position and orientation can be obtained using control information of the robot." See also Figure 10, which demonstrates that the process of sensing (imaging the environment) is done repeatedly until the object has been successfully gripped and the task is complete.) Suzuki does not specifically disclose the camera capturing images of the workspace continuously for alterations to the trajectory, determining a distance between the camera and the object based on focal length and image data, or comparing the orientation and position of the grasper with that of the target object. 
However, Corkum, in the same field of endeavor of robotic control, teaches: … based on the continuously identified relative position of the first object such that movement of the grasping unit (Paragraph 0017, "For example, the controller 160 can: locate an object reference frame—including an object coordinate system—relative to a target object identified in an image recorded from the camera 150; orient the object reference frame in real space relative to the end effector 140 based on the position and orientation of the target object identified in the image and based on a known offset between the camera 150 and the end effector 140; project the preplanned trajectory into the object reference frame; implement closed-loop controls to move the end effector 140 along the preplanned trajectory toward the terminus of the preplanned trajectory at which the end effector 140 may accurately engage the target object; refine the location and orientation of the object reference coordinate system as the arm moves the end effector 140 and the camera 150 closer to the target object (which may yield a higher-resolution image of the target object); and repeat this process regularly—such as at a rate of 2 Hz, 20 Hz, or for every 10-millimeter interval traversed by the end effector 140—until the end effector 140 engages the target object. By thus realigning the preplanned trajectory to the target object (e.g., to an object feature or constellation of object features) detected in the field of view of the camera 150 as the end effector 140 approaches the target object, the system 100 can achieve increased locational accuracy of the end effector 140 relative to the target object as the end effector 140 nears the target object while also accommodating wide variances in the location and orientation of the target object from its expected location and orientation and/or accommodating wide variances in the location and orientation of one unit of the target object to a next unit of the target object.") … However, Leroux, in the same field of endeavor of robotic grasping, teaches: calculating a distance between the first object and camera based on a focal length of the camera and an occupied ratio of an image area corresponding to the first object with respect to an entire image area; (Column 3 Line 45 – Column 4 Line 12, “We can distinguish the relationship: .DELTA..sub.m=T.sub.p.DELTA..sub.p where .DELTA.m and T.sub.p are expressed in mm, .DELTA..sub.p is the movement in pixels of the object between the two images (without dimensions) and T.sub.p represents the size of a pixel on the image of the camera 2. We can thus calculate the coordinates of the object 4: Z=F(D/(.DELTA..sub.pT.sub.p)-1) Where Z is the focal length of the camera 2 at the surface of the object 4; F is the focal length of the lens (in mm); D is the movement of the camera 2 between the taking of two images (in mm); T.sub.p is the size of a pixel of the CCD sensor (in mm). We can then deduce the other coordinates of the object 4: X=X.sub.pT.sub.p(Z+F)/F and Y=Y.sub.pT.sub.p(Z+F)/F, where X is the abscissa of the object in the camera indicator (in mm); Y is the ordinate of the object in the camera indicator (in mm); X.sub.p is the abscissa of the object in the image (in pixels, therefore without dimension); Y.sub.p is the ordinate of the object in the image (in pixels, therefore without dimension); The method may be applied to chains of three or more images. 
Each image has characteristic points that may be paired with a point or no points of the previous image and the following image. If a point can be found by pairing on all of the images of a chain, it is more likely that it truly belongs to the object 4, which increases the reliability of the method.” Please also see figure 1 which demonstrates that this system is operating in a three dimensional environment and the arm has multiple degrees of freedom therefor the distance is a three dimensional distance. Examiner further notes that any distance may be understood to be a three dimensional distance even if one or two of the dimensions have a value of zero.) However, Shi, in the same field of endeavor of robotics, teaches: … that has a posture most similar to the posture of a grasping unit used to grasp the objects; (Paragraph 0096, "The method 300 commences 301 and flow of the algorithm proceeds to block 302 whereat the computer processor obtains data regarding the grasping device 100. The present algorithm for planning a grasp considers the grasping device data, along with object data, to determine how the grasping device characteristics match up with object dimensions and pose, for determining how to position the grasping device for grasping the object.") … It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic grasping system as taught by Suzuki with the visual servoing methods as taught by Corkum as well as the ability to determine a relative position of the object with respect to the camera using focal length and image data (pixel size) as taught by Leroux. Further, Shi teaches a method of comparing the pose of the end effector with the pose of the object to be grasped in order to determine how closely they match and how to adjust the grasping device for an effective grasp. It would be obvious to incorporate this processing into the method of selecting a gripping candidate as taught by Suzuki in order to select a candidate requiring the least movement before grasping. Using this combined method, the system would be capable of determining the position of objects even if partially occluded. This would ensure highly accurate control of the robotic system and allow the system to react efficiently to unexpected movement or changes in the workspace or to the workpiece position/orientation. Claim(s) 15 and 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Suzuki in view of Corkum, Leroux, and Shi and in further view of Shekhawat and in further view of Xi et al. (US 20200094414 A1), hereinafter Xi and Bradski et al. (US 20160089791 A1), hereinafter Bradski. Regarding claim 15, where all the limitations of claim 9 are discussed above, Suzuki does not specifically teach a detection unit and a wireless tag that is connected to a specification for the object and allow the system to identify a relative position of the object. However, Xi, in the same field of endeavor of robotic systems with manipulators for object grasping, teaches: 15. (original) The object grasping system according to claim 9, further comprising: a detection unit for detecting a wireless tag (Paragraph 0064, "At procedure 630, the controller 430 may process the captured image to obtain the information of the product. 
In certain embodiments, the captured image may include more than one of the e-package information tags 200 on different surfaces of the object 100, such that the controller 430 may obtain the thorough information for the robotic manipulation for handling the object. Once the information of the product is obtained, at procedure 640, the controller 430 may control a robotic grasping device 420 to perform a robotic manipulation for handling the object 100 based on the information of the product."), wherein the control unit identifies a specification of the object based on information from the wireless tag attached to the object by using the detection unit (Paragraph 0012, "In certain embodiments, the first area of each of the e-package information tags is designed for easy detection and robust pose estimation, and the second area of each of the e-package information tags stores the information of the physical features of the object, where the physical features of the object includes: a dimension of the object; a weight of the object; a weight distribution of the object; a property of the object; product information of the object; and the location and the orientation of the object."), and identifies a relative position of the object with respect to the grasping unit based on the specification. (See Figure 6, step 620, the information about the product (such as location and orientation as seen in paragraph 0012) is obtained and step 640 which controls the robotic grasping device to interact with the object. This implies that the system is able to determine the relative position of the object so that the distance can be covered and the grasping device may interact with the object.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic gripping system and method as taught by Suzuki with the e-package information tags and ability to read those as taught by Xi. This would allow the system to be informed on physical features of the object as well as the location and orientation of the object and thereby perform operations more effectively. The system would be able to grasp in a more ideal position based on the weight distribution of the object for example. Regarding claim 27, where all the limitations of claim 21 are discussed above, Suzuki does not specifically teach a detection unit and a wireless tag that is connected to a specification for the object and allow the system to identify a relative position of the object. However, Xi, in the same field of endeavor of robotic systems with manipulators for object grasping, teaches: 27. (currently amended) The object grasping method according to claim 21, further comprising: detecting a wireless tag via a detection unit (Paragraph 0064, "At procedure 630, the controller 430 may process the captured image to obtain the information of the product. In certain embodiments, the captured image may include more than one of the e-package information tags 200 on different surfaces of the object 100, such that the controller 430 may obtain the thorough information for the robotic manipulation for handling the object. 
Once the information of the product is obtained, at procedure 640, the controller 430 may control a robotic grasping device 420 to perform a robotic manipulation for handling the object 100 based on the information of the product."); identifying, via a control unit, a specification of the object based on information from the wireless tag attached to the object by using the detection unit (Paragraph 0012, "In certain embodiments, the first area of each of the e-package information tags is designed for easy detection and robust pose estimation, and the second area of each of the e-package information tags stores the information of the physical features of the object, where the physical features of the object includes: a dimension of the object; a weight of the object; a weight distribution of the object; a property of the object; product information of the object; and the location and the orientation of the object."), and identifying a relative position of the object with respect to the grasping unit (See Figure 6, step 620, the information about the product (such as location and orientation as seen in paragraph 0012) is obtained and step 640 which controls the robotic grasping device to interact with the object. This implies that the system is able to determine the relative position of the object so that the distance can be covered and the grasping device may interact with the object.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic gripping system and method as taught by Suzuki with the e-package information tags and ability to read those as taught by Xi. This would allow the system to be informed on physical features of the object as well as the location and orientation of the object and thereby perform operations more effectively. The system would be able to grasp in a more ideal position based on the weight distribution of the object for example. Conclusion The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATHER KENIRY whose telephone number is (571)270-5468. The examiner can normally be reached M-F 7:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.J.K./Examiner, Art Unit 3657 /ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657
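
One limitation at issue is numerical enough to illustrate: calculating the camera-to-object distance from the camera's focal length and the occupied ratio of the object's image area with respect to the entire image area. The sketch below shows a generic pinhole-camera version of that calculation; it is not Leroux's cited formula (which instead uses camera displacement between two images), and the object size, sensor width, and function name are assumptions made only for the example.

```python
import math

def distance_from_area_ratio(focal_mm: float, sensor_width_mm: float,
                             object_width_mm: float, area_ratio: float) -> float:
    """Estimate camera-to-object distance (mm) from the fraction of the
    image area the object occupies.

    Pinhole model: apparent_width_on_sensor / focal = object_width / distance.
    The object's apparent width is approximated as sqrt(area_ratio) of the
    sensor width, which assumes the object's shape roughly matches the frame
    (an illustrative simplification, not the cited Leroux method).
    """
    apparent_width_mm = math.sqrt(area_ratio) * sensor_width_mm
    return focal_mm * object_width_mm / apparent_width_mm

# Example (all values hypothetical): a 60 mm wide part filling 4% of the
# frame, seen through an 8 mm lens on a 6.4 mm wide sensor.
print(distance_from_area_ratio(focal_mm=8.0, sensor_width_mm=6.4,
                               object_width_mm=60.0, area_ratio=0.04))
# -> 375.0 (mm)
```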
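
The combination rationale for Shi is to "select a candidate requiring the least movement before grasping", i.e. pick the detected object whose posture is most similar to the grasping unit's current posture. A minimal sketch of such a selection, assuming a scalar pose distance built from translation plus a weighted single-angle rotation term (the weighting and the one-angle orientation are simplifications, not anything disclosed by the references):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # position in mm
    y: float
    z: float
    yaw: float    # single orientation angle in radians (simplification)

def pose_distance(a: Pose, b: Pose, rot_weight_mm_per_rad: float = 100.0) -> float:
    """Scalar 'movement effort' between two poses: Euclidean translation plus
    a weighted smallest-angle rotation term (the weighting is illustrative)."""
    translation = math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
    dyaw = abs((a.yaw - b.yaw + math.pi) % (2 * math.pi) - math.pi)
    return translation + rot_weight_mm_per_rad * dyaw

def select_first_object(gripper: Pose, candidates: list[Pose]) -> Pose:
    """Pick the detected object whose pose is most similar to the gripper's,
    i.e. the candidate requiring the least movement before grasping."""
    return min(candidates, key=lambda c: pose_distance(gripper, c))

# Hypothetical example: three detected objects, gripper hovering above them.
gripper = Pose(0, 0, 300, 0.0)
objects = [Pose(120, 40, 0, 1.2), Pose(30, -10, 0, 0.1), Pose(-200, 90, 0, 2.0)]
print(select_first_object(gripper, objects))   # -> Pose(x=30, y=-10, z=0, yaw=0.1)
```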

Prosecution Timeline

May 20, 2021
Application Filed
Sep 14, 2023
Non-Final Rejection — §103
Mar 25, 2024
Response Filed
Apr 01, 2024
Final Rejection — §103
Oct 10, 2024
Request for Continued Examination
Oct 11, 2024
Response after Non-Final Action
Nov 04, 2024
Non-Final Rejection — §103
May 19, 2025
Response Filed
Jun 02, 2025
Final Rejection — §103
Dec 05, 2025
Request for Continued Examination
Dec 28, 2025
Response after Non-Final Action
Jan 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600035
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, ROBOT SYSTEM, MANUFACTURING METHOD OF PRODUCT, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12583123
ITERATIVE CONTROL OF ROBOT FOR TARGET OBJECT
2y 5m to grant Granted Mar 24, 2026
Patent 12576539
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Mar 17, 2026
Patent 12562076
LEARNING ASSISTANCE SYSTEM, LEARNING ASSISTANCE METHOD, AND LEARNING ASSISTANCE STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12558780
MULTI-PURPOSE ROBOTS AND COMPUTER PROGRAM PRODUCTS, AND METHODS FOR OPERATING THE SAME
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 78%
With Interview: 99% (+22.1%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 102 resolved cases by this examiner. Grant probability derived from career allow rate.
