Prosecution Insights
Last updated: April 19, 2026
Application No. 18/839,267

ROBOT SYSTEM

Final Rejection: §103, §112

Filed: Aug 16, 2024
Examiner: EVANS, KARSTON G
Art Unit: 3657
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Fanuc Corporation
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 91%
Examiner Intelligence

Career Allow Rate: 70% (100 granted / 143 resolved; +17.9% vs TC avg), above average
Interview Lift: +21.3% among resolved cases with interview, a strong lift
Avg Prosecution: 2y 10m typical timeline; 31 applications currently pending
Total Applications: 174 across all art units (career history)

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)

Tech Center averages are estimates; figures based on career data from 143 resolved cases.
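The headline figures above can be cross-checked with a few lines of arithmetic. A minimal sketch in Python (the lift convention is an assumption, since the page does not define whether the +21.3% is measured against the career average or the no-interview subset):

```python
# Cross-checking the dashboard's headline numbers from its raw counts.
granted = 100    # career grants (from "100 granted / 143 resolved")
resolved = 143   # career resolved cases

career_allow_rate = 100 * granted / resolved   # in percent
print(round(career_allow_rate))                # -> 70, matching "Career Allow Rate: 70%"

# Assumption: "Interview Lift" is added to the career allow rate,
# in percentage points, to give the with-interview figure.
interview_lift = 21.3
print(round(career_allow_rate + interview_lift))  # -> 91, matching "91% With Interview"
```

The numbers are internally consistent: 100/143 rounds to 70%, and 70% plus the 21.3-point lift rounds to the 91% with-interview figure shown above.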

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The amendment filed 2/18/2026 has been entered. Claims 1, 3, 4, and 7 are amended. Claims 5 and 6 are cancelled. Claims 1-4 and 7-9 remain pending in the application. Applicant’s amendments to the specification and claims have overcome each and every objection and 112(b) rejection set forth in the Non-Final Office Action mailed 11/19/2025 (except for the remaining 112(b) rejection of claim 3).

Applicant's arguments, see page 7, with respect to the prior art not teaching the amended subject matter have been fully considered, but they are not persuasive. The applicant argues that Atohira teaches moving the workpiece model and the hand model equally and that Atohira does not generate a single composite model by combining the workpiece model and the hand model. However, the applicant’s argument interprets the claim language too narrowly. Under the broadest reasonable interpretation of the claim language, the simulation of the hand model gripping the workpiece model is a composite model because it combines the hand and workpiece models by modeling them together in the simulation.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because each such claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “model storing unit” in claims 1 and 7; “workpiece detecting unit” in claims 1-2; “workpiece model positioning unit” in claims 1, 3, and 4; “path setting unit” in claims 1 and 7-8; “target selecting unit” in claim 2; “hand model positioning unit” in claim 7; “program generating unit” in claims 8-9; and “program executing unit” in claim 9.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. According to paragraph [0015], the control device includes all of the ‘units’ and the control device “includes, for example, a memory, a CPU, an input/output interface, and the like, and may be realized by one or more computer devices that execute appropriate programs.”

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation “the one object workpiece model positioned by the workpiece model positioning unit.” There is insufficient antecedent basis for this limitation in the claim. It is unclear what ‘the one’ object workpiece model positioned by the workpiece model positioning unit is referencing because there is no previous mention of a specific ‘one’ object workpiece model in the claims. For examination purposes, claim 3 is interpreted as reciting “[[the]] one object workpiece model positioned by the workpiece model positioning unit.”

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski (US 20160221187 A1) in view of Johnson (US 20200086487 A1), Bai (US 20220203535 A1), and Atohira (US 20180111268 A1).

Regarding Claim 1, Bradski teaches:

A robot system comprising: a robot; (“The system includes a robotic manipulator” See at least [0006])

a three-dimensional sensor configured to measure a surface shape of a target area in which workpieces can exist; (“a system including one or more sensors … The sensors may scan an environment containing one or more objects in order to capture visual data and/or three-dimensional (3D) depth information.” See at least [0037]; “scans from one or more 2D or 3D sensors with fixed mounts on a mobile base, such as a front navigation sensor 116 and a rear navigation sensor 118, and one or more sensors mounted on a robotic arm, such as sensor 106 and sensor 108, may be integrated to build up a digital model of the environment, including the sides, floor, ceiling, and/or front wall of a truck or other container.” See at least [0043])

a hand provided at a distal end of the robot; (“A robotic device may include a robotic arm that may be equipped with a gripper, such as a suction gripper, in order to move objects to specified locations.” See at least [0004] and fig. 1A)

and a control device configured to generate a removal path for removing at least one of the workpieces by the robot based on the surface shape measured by the three-dimensional sensor, (“a control system configured to perform functions. The functions include identifying one or more characteristics of a physical object within a physical environment.
The functions also include, based on the identified one or more characteristics, determining one or more potential grasp points on the physical object corresponding to points at which the gripper is operable to grip the physical object. Additionally, the functions also include determining a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object.” See at least [0006]; “Data from the scans may then be integrated into a representation of larger areas in order to provide digital environment reconstruction. In additional examples, the reconstructed environment may then be used for identifying objects to pick up, determining pick positions for objects, and/or planning collision-free trajectories for the one or more robotic arms and/or a mobile base.” See at least [0037]; Examiner Interpretation: The path/trajectories to move the object are removal paths.)

wherein the control device has: a model storing unit configured to store a workpiece model obtained by modeling a three-dimensional shape of the workpieces; a workpiece detecting unit configured to detect (“known templates of certain shapes can be used to refine detected features of objects within the environment that appear to match a particular shape.” See at least [0059]; “identified characteristics of the physical object may include a set of geometric characteristics from a particular perspective viewpoint of the physical object, which may then be used to train templates for recognition. In particular, a comparison can be made between the set of geometric characteristics and one or more virtual geometric shapes from the particular perspective viewpoint of the physical object. Based on an output of the comparison indicating that at least one of the geometric characteristics substantially matches a given virtual geometric shape, a virtual object may be generated that is representative of the physical object and associated with the matching virtual geometric shape. As a result, the identified characteristics of the physical object may be adjusted based on characteristics of the virtual object.” See at least [0105-0106]; Examiner Interpretation: The detected features/characteristics of the objects are from the 3D sensors.)

and a path setting unit configured to set the removal path by moving the (“a 3D model of a stack of boxes may be constructed and used as a model to help plan and track progress for loading/unloading boxes to/from a stack or pallet. … the 3D model may be used for collision avoidance. Within examples, planning a collision-free path may involve determining the 3D location of objects and surfaces in the environment. A path optimizer may make use of the 3D information provided by environment reconstruction to optimize paths in the presence of obstacles.” See at least [0064]; “Planning a collision-free path may involve determining the “virtual” location of objects and surfaces in the environment. For example, a path optimizer may make use of the 3D information provided by environment reconstruction to optimize paths in the presence of obstacles, such as bin 610 as shown in FIGS. 6A-6B.” See at least [0136]; “To set up the path optimization problem … (b) collision checking with objects in the environment which may be carried out for a swept volume between each waypoint,” See at least [0141])

Bradski does not explicitly teach, but Johnson teaches: a workpiece detecting unit configured to detect (“detecting the location of the object at 331 includes processing the image of the object through a convolutional neural network to predict one or more parts of the object forming a two-dimensional (2D) position of the object in the image. Next, as part of determining the location 331, such an embodiment determines the 6DOF pose using (i) the 2D position of the object in the image, (ii) pixels of the object, and (iii) a depth map corresponding to the image of the object. In such an embodiment, determining the 6DOF pose using (i) the 2D position of the object, (ii) the depth map corresponding to the image of the object, and (iii) the pixels of the object may include fitting the depth map to a candidate three-dimensional (3D) model of the object, where dimensions of the 3D model match dimensions of the object.” See at least [0057])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Bradski to further include the teachings of Johnson with a reasonable expectation of success to acquire more detailed pose information of objects with limited sensor information. (See at least [0057])

Johnson also does not explicitly teach, but Bai teaches: a workpiece model positioning unit configured to position, in a virtual space, object workpiece models each being a copy of the workpiece model in the positions and the poses of the workpieces detected by the workpiece detecting unit; (“The configuration engine 142 utilizes the vision data 112A, the vision data 171A, and/or the pose data and/or object identifier data 112, in generated a configured simulated environment 143. The configuration engine 142 can also utilize object model(s) database 152 in generating the configured simulated environment 143. For example, object identifiers from pose data and/or object identifier data 112 and/or determined based on vision data 112A or 117A, can be utilized to retrieve corresponding 3D models of objects from the object model(s) database 152. Those 3D models can be included in the configured simulated environment, and can be included at corresponding poses from pose data and/or object identifier data 112 and/or determined based on vision data 112A or 117A.” See at least [0069])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Bradski and Johnson to further include the teachings of Bai with a reasonable expectation of success for “more robust and/or more accurate determination of a sequence of robotic actions in various scenarios.” (See at least [0007])

Bai also does not explicitly teach, but Atohira teaches: and a hand model obtained by modeling a three-dimensional shape of the hand; (“The simulation device includes a robot model arranging section configured to arrange a robot model including a robot hand model in virtual space, wherein the robot model is a three-dimensional model of the robot, and the robot hand model is a three-dimensional model of the robot hand.” See at least [0004]; “The system memory 14 pre-stores a plurality of robot models of a plurality of types of robots including the above-mentioned robot 102.” See at least [0067])

a hand model positioning unit configured to position the hand model in a position and a pose that hold a first workpiece model of the object workpiece models in the virtual space, (“The CPU 12 operates the robot model 102M in the virtual space 200 so as to arrange the robot hand model 116M at the position and orientation defined by the tool coordinate system model C.sub.TM. In this way, the robot model 102M causes the robot hand model 116M to follow the workpiece model W.sub.M within the following operation range in the virtual space 200. … Then, the CPU 12 operates the robot model 102M in the virtual space 200 so as to grip the workpiece model W.sub.M by the robot hand model 116M. If the robot program is properly constructed for the supply method of the workpiece model (e.g., the various parameters such as the convey speed, spacing, and offset amounts of the position and orientation of the workpiece model W.sub.M) determined in step S7, the robot hand model 116M can properly grip the workpiece model W.sub.M in the virtual space 200, as illustrated in FIG. 13.” See at least [0134-0137] and fig. 13 (provided below))

[Image: Atohira, FIG. 13]

and to generate a composite model in which the first workpiece model and the hand model are combined; (“Note that, in step S8 described above, the CPU 12 may carry out a simulation of an operation of conveying the gripped workpiece model W.sub.M to a location in the virtual space 200 different from the conveyer model 104M, after gripping the workpiece model W.sub.M by the robot hand model 116M.” See at least [0146] and fig. 13; Examiner Interpretation: The simulation of the hand model gripping the workpiece model is a composite model combining the models because they are modeled together representing the hand holding the workpiece.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Bradski, Johnson, and Bai to further include the teachings of Atohira with a reasonable expectation of success to ensure the robot will operate properly in real space, reducing effort and time setting up the operation. (See at least [0141])

Regarding Claim 2, Bradski further teaches wherein the control device further has a target selecting unit configured to select any one of the workpieces detected by the workpiece detecting unit as a removal target. (“a facade may be constructed from boxes, for instance to plan in what order the boxes should be picked up. For instance, as shown in FIG. 2C, box 222 may be identified by the robotic device as the next box to pick up.
Box 222 may be identified within a facade representing a front wall of the stack of boxes 220 constructed based on sensor data collected by one or more sensors, such as sensor 106 and 108. A control system may then determine that box 222 is the next box to pick, possibly based on its shape and size, its position on top of the stack of boxes 220, and/or based on characteristics of a target container or location for the boxes.” See at least [0061])

Regarding Claim 3, Bradski further teaches wherein the control device selects the one workpiece (“a facade may be constructed from boxes, for instance to plan in what order the boxes should be picked up. For instance, as shown in FIG. 2C, box 222 may be identified by the robotic device as the next box to pick up. Box 222 may be identified within a facade representing a front wall of the stack of boxes 220 constructed based on sensor data collected by one or more sensors, such as sensor 106 and 108. A control system may then determine that box 222 is the next box to pick, possibly based on its shape and size, its position on top of the stack of boxes 220, and/or based on characteristics of a target container or location for the boxes.” See at least [0061])

Bradski does not explicitly teach, but Bai teaches: the one workpiece model positioned by the workpiece model positioning unit (“The configuration engine 142 utilizes the vision data 112A, the vision data 171A, and/or the pose data and/or object identifier data 112, in generated a configured simulated environment 143. The configuration engine 142 can also utilize object model(s) database 152 in generating the configured simulated environment 143. For example, object identifiers from pose data and/or object identifier data 112 and/or determined based on vision data 112A or 117A, can be utilized to retrieve corresponding 3D models of objects from the object model(s) database 152. Those 3D models can be included in the configured simulated environment, and can be included at corresponding poses from pose data and/or object identifier data 112 and/or determined based on vision data 112A or 117A.” See at least [0069]; “Once the simulated environment is configured to reflect the real environment, the robotic simulator can be used to determine a sequence of robotic actions for use by the real world robot(s) in performing at least part of a robotic task. The robotic task can be one that is specified by a higher-level planning component of the robotic simulator or by real world robot(s), or can be one that is specified based on user interface input. As one non-limiting example, the robotic task can include grasping an object and placing the object in a container.” See at least [0005])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Bradski and Johnson to further include the teachings of Bai with a reasonable expectation of success for “more robust and/or more accurate determination of a sequence of robotic actions in various scenarios.” (See at least [0007])

Regarding Claim 4, Bradski further teaches wherein the workpiece model positioning unit registers the other workpiece model as an obstacle, and the one workpiece model is excluded from being registered as an obstacle. (“a 3D model of a stack of boxes may be constructed and used as a model to help plan and track progress for loading/unloading boxes to/from a stack or pallet. … the 3D model may be used for collision avoidance. Within examples, planning a collision-free path may involve determining the 3D location of objects and surfaces in the environment. A path optimizer may make use of the 3D information provided by environment reconstruction to optimize paths in the presence of obstacles.” See at least [0064]; “Planning a collision-free path may involve determining the “virtual” location of objects and surfaces in the environment. For example, a path optimizer may make use of the 3D information provided by environment reconstruction to optimize paths in the presence of obstacles, such as bin 610 as shown in FIGS. 6A-6B.” See at least [0136]; “To set up the path optimization problem … (b) collision checking with objects in the environment which may be carried out for a swept volume between each waypoint,” See at least [0141]; Examiner Interpretation: The workpiece model of the object being carried is not considered as an object in the environment to be avoided and therefore is not registered as an obstacle.)

Regarding Claim 8, Bradski further teaches wherein the control device further has a program generating unit configured to generate an operation program for moving the robot along the removal path set by the path setting unit. (“the robotic arm motion may go through N poses, each of which may have multiple joint space solutions. Additionally, there may be multiple criteria at each pose or between poses. For example, the system may be configured to minimize the joint rotations necessary to go from one pose to the next. To solve this problem, the system may be configured to set up the problem as a dynamic programming problem. In particular, at each Cartesian goal pose, the system may define the set of joint space solutions corresponding to that pose. Additionally, for each pair of joint space solutions in neighboring goals, the system may compute an optimal path between the joint configurations. Each such path may have a weight or cost assigned to it, based on criteria previously stated above (e.g., how long the path is, how close the joints are to zero at the end). Subsequently, using dynamic programming, the system may select the desirable connected path from the start position to the end goal. As such, dynamic programming may determine the most appropriate path, in the sense that the path has the smallest cost of all the connected paths from start to goal.” See at least [0140])

Regarding Claim 9, Bradski further teaches wherein the control device further has a program executing unit configured to operate the robot in accordance with the operation program generated by the program generating unit. (“Referring back to FIG. 5, at block 510, method 500 involves providing instructions to cause the robotic manipulator to grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.” See at least [0152])

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Bradski (US 20160221187 A1) in view of Johnson (US 20200086487 A1), Bai (US 20220203535 A1), Atohira (US 20180111268 A1), and Oumi (US 20070213874 A1).

Regarding Claim 7, Bradski does not explicitly teach, but Atohira teaches: wherein the model storing unit further stores a robot model obtained by modeling a three-dimensional shape of at least a distal end portion of the robot, the hand model positioning unit positions the robot model together with the hand model, (“The simulation device includes a robot model arranging section configured to arrange a robot model including a robot hand model in virtual space, wherein the robot model is a three-dimensional model of the robot, and the robot hand model is a three-dimensional model of the robot hand.” See at least [0004]; “The system memory 14 pre-stores a plurality of robot models of a plurality of types of robots including the above-mentioned robot 102.” See at least [0067]; “FIG. 4 illustrates an example of an image of virtual space 200 displayed on the display 22 in this manner. In the virtual space 200 illustrated in FIG. 4, the robot model 102M including a robot base model 108M, a revolving drum model 110M, a robot arm model 112M, a wrist model 114M, and a robot hand model 116M is arranged.” See at least [0071] and fig. 4 (provided below); Examiner Interpretation: Arranging the models together as demonstrated by at least [0071] and fig. 4 is equivalent to positioning the models together.)

[Image: Atohira, FIG. 4]

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Bradski to further include the teachings of Atohira with a reasonable expectation of success to ensure the robot will operate properly in real space, reducing effort and time setting up the operation. (See at least [0141])

Atohira also does not explicitly teach, but Oumi teaches: and the path setting unit sets the removal path so that the workpiece model, the hand model, and the robot model do not interfere with each other. (“simulate the workpiece detecting operation and the bin picking motion, relative to the workpiece models 20M arranged in the virtual working environment 22, it is possible to check as to whether the robot model 18M causes mutual interference with neighboring objects (i.e., a collision between the robot model 18M or the objective workpiece model 20Mn held by the robot model 18M and the workpiece models 20M other than the objective workpiece model 20Mn, the container model 36M, etc.) during the bin picking motion (preferably, on the display screen of the display section 14). Therefore, it is possible to optimize the robot operation program 46 by appropriately correcting the data of the position (or the position and orientation) of the robot model 18M (or the hand model 34M) so as to avoid such a mutual interference.” See at least [0040]; “the robot-model operation controlling section 32 causes, on the screen of the display section 14, the robot model 18M and the hand model 34M to appropriately move, and thus to simulate the bin picking motion relative to the objective workpiece model 20Mn (step Q6).” See at least [0053])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Bradski and Atohira to further include the teachings of Oumi with a reasonable expectation of success for “preliminarily checking the mutual interference between the robot 18 and neighboring objects in the actual robot system 12 and, as a result, to prepare the optimum robot operation program 46 quickly at low cost.” (See at least [0042])

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kanunikov (US 20210129334 A1) is pertinent because it discusses a combination of gripper models and object models for robot obstacle avoidance. Oishi (US 20200027205 A1) is pertinent because it discusses combining a workpiece shape model, a pickup tool model, and an image processing setting model.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Karston G Evans whose telephone number is (571)272-8480. The examiner can normally be reached Mon-Fri 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached at (571)270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.G.E./Examiner, Art Unit 3657
/ABBY LIN/Supervisory Patent Examiner, Art Unit 3657
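The reply-period rules in the final-action boilerplate above reduce to a small amount of date arithmetic: a three-month shortened statutory period, a special rule when a first reply lands within two months and the advisory action arrives after the three-month mark, and a hard six-month statutory cap. A minimal sketch of that logic follows; this is an illustration, not legal advice, the function names are hypothetical, and the month arithmetic uses a simplified same-day-of-month convention rather than the USPTO's exact weekend/holiday rules:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Simplified month arithmetic: same day-of-month, clamped to month end.
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

def extension_fee_start(final_mailed: date, reply_filed: date, advisory_mailed: date) -> date:
    """Date from which 37 CFR 1.136(a) extension fees run (simplified sketch)."""
    two_month_mark = add_months(final_mailed, 2)
    shortened_period_end = add_months(final_mailed, 3)
    if reply_filed <= two_month_mark and advisory_mailed > shortened_period_end:
        # Shortened period extends to the advisory action's mailing date.
        return advisory_mailed
    return shortened_period_end

def statutory_deadline(final_mailed: date) -> date:
    # Hard cap: six months from the final action, regardless of extensions.
    return add_months(final_mailed, 6)
```

For this application (final action mailed Mar 19, 2026), a first reply filed May 10 with an advisory action mailed Jul 1 would have extension fees run from Jul 1, while no reply could be later than Sep 19, 2026 under the six-month cap.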

Prosecution Timeline

Aug 16, 2024
Application Filed
Nov 19, 2025
Non-Final Rejection — §103, §112
Feb 18, 2026
Response Filed
Mar 19, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602054
CONTROL DEVICE FOR MOBILE OBJECT, CONTROL METHOD FOR MOBILE OBJECT, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12600037
REMOTE CONTROL ROBOT, REMOTE CONTROL ROBOT CONTROL SYSTEM, AND REMOTE CONTROL ROBOT CONTROL METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12589493
INFORMATION PROCESSING APPARATUS AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12566457
BULK STORE SLOPE ADJUSTMENT VIA TRAVERSAL INCITED SEDIMENT GRAVITY FLOW
2y 5m to grant Granted Mar 03, 2026
Patent 12552023
METHOD FOR CONTROLLING A ROBOT, AND SYSTEM
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
91%
With Interview (+21.3%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
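The projections above appear to follow directly from the examiner's career statistics: 100 grants out of 143 resolved cases gives the ~70% base rate, and the "with interview" figure looks like an additive percentage-point adjustment. A minimal sketch under that assumption (the function name and the additive, 100%-capped model are this page's apparent convention, not a documented formula):

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    # Base grant probability as a percentage of resolved cases.
    return 100.0 * granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    # Assumed model: additive percentage-point lift, capped at 100%.
    return min(base_rate + interview_lift, 100.0)

base = career_allow_rate(100, 143)          # ~69.9%, shown as 70%
adjusted = with_interview(base, 21.3)       # ~91.2%, shown as 91%
print(round(base), round(adjusted))
```

The observed lift (interview vs. no-interview outcomes among the 143 resolved cases) is correlational, so the 91% figure is an estimate, not a guarantee that an interview changes the outcome.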
