Prosecution Insights
Last updated: April 19, 2026
Application No. 18/686,363

TRAINED MODEL GENERATION METHOD, TRAINED MODEL GENERATION DEVICE, TRAINED MODEL, AND HOLDING MODE INFERENCE DEVICE

Non-Final OA: §101, §103, §112
Filed: Feb 23, 2024
Examiner: OSTROW, ALAN LINDSAY
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kyocera Corporation
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (26 granted / 35 resolved; +22.3% vs TC avg; above average)
Interview Lift: +37.7% (strong; allow rate with vs. without an interview, among resolved cases)
Avg Prosecution: 2y 7m typical timeline (30 applications currently pending)
Career History: 65 total applications across all art units
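
The panel above only reports the headline figures, so the following is a minimal sketch of how they relate to each other. Only granted=26, resolved=35, the +22.3-point TC comparison, and the +37.7-point lift come from the report; the with/without-interview split of the 35 resolved cases is hypothetical, chosen only so the arithmetic roughly reproduces the displayed lift.

```python
# Hedged sketch of the examiner-intelligence arithmetic (not the vendor's actual method).

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Share of resolved applications that were allowed, as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate_pct(26, 35)          # ~74.3%, displayed as 74%
tc_avg = career - 22.3                   # implied Tech Center baseline from "+22.3% vs TC avg"
print(f"Career allow rate: {career:.1f}% (TC avg ~{tc_avg:.1f}%)")

# Interview lift = allow rate with an interview minus allow rate without one.
# Hypothetical split of the 26 grants / 35 resolved cases, for illustration only.
with_int = allow_rate_pct(granted=11, resolved=11)     # 100.0%
without_int = allow_rate_pct(granted=15, resolved=24)  # 62.5%
print(f"Interview lift: {with_int - without_int:+.1f} percentage points")  # +37.5, vs +37.7 shown
```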

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 35 resolved cases
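
For readers who want to reuse the per-statute figures, here is a small sketch of the table as data. The 40.0% Tech Center baseline is not stated in the report; it is back-derived from the displayed deltas (each examiner rate minus its "vs TC avg" delta equals 40.0), and the meaning of the rate itself (e.g., how often a rejection under that statute is overcome) is left unspecified above.

```python
# Sketch: the statute-specific performance table as a data structure.
# TC_AVERAGE is back-derived from the deltas shown in the report (e.g., 14.0 - (-26.0) = 40.0).

TC_AVERAGE = 40.0  # implied baseline, identical across all four statutes above

examiner_rates = {"§101": 14.0, "§103": 57.7, "§102": 15.8, "§112": 10.4}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVERAGE
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```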

Office Action

§101 §103 §112
DETAILED ACTION

Status of Claims

Claims 1-7 are currently pending and have been examined in this application. This Non-final communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 2/23/2023 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 5 and 7 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because the claim purports to invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fails to recite a combination of elements as required by that statutory provision and thus cannot rely on the specification to provide the structure, material or acts to support the claimed function. As such, the claim recites a function that has no limits and covers every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claim.

Regarding claim 5, Applicant claims a “trained model device”. The claim then proceeds to describe the trained model that the device will generate, but does not provide sufficient description as to the structure of the claimed device.

Regarding claim 7, Applicant claims a “holding mode inference device”. The claim then proceeds to describe the use of a trained model and further describes the components of the trained model, but does not provide sufficient description as to the structure of the claimed device.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 1-7 recite the limitation “the inference image”, which seems to have two different meanings as presented in the claims.

Examiner’s Note: In light of the specification and claims, the use of the term “the inference image” seems to have two different meanings as presented in the claims. The examiner is interpreting, based on the specification, that “the inference image” of the first holding mode inference model and “the inference image” of the second holding mode inference model are intended to be different images captured by the camera mounted on the end effector. However, a review of claims 1 and 5-7 does not differentiate the meaning of the term “the inference image”, which as a result makes it difficult to differentiate between the first holding mode inference model and the second holding mode inference model, as described in the claims. Since the term “the inference image” is indefinite as used in the claims, the examiner is interpreting, based on the specification, that the intent of the “second holding mode inference model” is to verify, calibrate, or correct the data that was collected during the use of the “first holding mode inference model”. This interpretation will be used when applying prior art to the limitations. The examiner strongly suggests providing a clearer differentiation for the term “the inference image” as used in the claims, in order to further differentiate between the “first holding mode inference model” and the “second holding mode inference model”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to data per se or mere information in the form of a trained model. As the courts' definitions of machines, manufactures and compositions of matter indicate, a product must have a physical or tangible form in order to fall within one of these statutory categories. Digitech, 758 F.3d at 1348, 111 USPQ2d at 1719. Thus, the Federal Circuit has held that a product claim to an intangible collection of information, even if created by human effort, does not fall within any statutory category. Digitech, 758 F.3d at 1350, 111 USPQ2d at 1720 (claimed "device profile" comprising two sets of data did not meet any of the categories because it was neither a process nor a tangible product).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Wellman (US 20170021499 A1) as modified by Ogawa (US 20180272535 A1).

Claim 1: Wellman teaches the following limitations:

A trained model generation method comprising: generating a trained model by performing learning using learning data comprising a learning image of a learning target object corresponding to a holding target object to be held by a robot, (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)

the trained model comprising: a class inference model configured to infer, based on an inference image of the holding target object, (Wellman – [0073] The attribute detection module 710 can interact with any number and/or type of sensors to determine attributes of an item to be grasped. For example, the attribute detection module 710 can receive information from imaging devices or optical sensors to determine physical characteristics, such as size, shape, position, orientation, and/or surface characteristics (e.g., how porous and/or slippery the item is based on the surface appearance). Any suitable optical technology can be utilized, …)

a classification result obtained by classifying the holding target object into a predetermined holding category; (Wellman – [0020] … Based on the detected attributes, the controller 32 may access (as at 49) the item database 37, such as to access a record for the inventory item 40. The record can include information about attributes of the item, such as weight, size, shape, or other physical characteristics of the item. Based on the record from the item database 37 and/or the detected attributes from the sensor package 16, the controller 32 may access (as at 48) an item gripping database 36 to access an item grasping strategy stored for that item or items with similar characteristics. The controller 32 can provide instructions to the robotic arm 12 for gripping the item 40 based on the gripping strategy accessed from the gripping database at 36 (e.g., at 52).)

a first holding mode inference model configured to infer, based on the classification result and the inference image, a first holding mode for the holding target object; and (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)
Wellman does not explicitly teach the following limitations; however, Ogawa teaches:

a second holding mode inference model configured to infer, based on the first holding mode and the inference image, a second holding mode for the holding target object. (Ogawa – [0059] … The first camera and the sensor are arranged on the manipulator. The manipulator control unit controls the manipulator so that the movable part is moved to a position corresponding to a directed value. The calibration processing unit acquires a first error in a first direction based on an image photographed by the first camera, acquire a second error in a second direction intersecting with the first direction based on a detection result obtained by the sensor, and acquire a directed calibration value with respect to the directed value based on the first error and the second error.; [0264] Next, the arithmetic processing unit 101 performs a test for grasp. Specifically, for example, the arithmetic processing unit 101 executes a plurality of grasping methods as a test to check success or failure in grasp, and deletes data of a failed grasping method. …)

Examiner Note: Second error in a second direction corresponds to the second holding mode.

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Wellman to provide a second data collection set for the purpose of correcting, verifying, or calibrating the object data related to identifying a grasping location as taught in Ogawa. Refining the object related grasping data by providing a second set of data increases the accuracy of each grasping attempt while also improving the quality of the available object data related to grasping locations and object classification.

Claim 2: Wellman teaches the following limitations:

The trained model generation method according to claim 1, wherein the learning data further comprises information regarding a class into which the target object is classified. (Wellman – [0020] … Based on the detected attributes, the controller 32 may access (as at 49) the item database 37, such as to access a record for the inventory item 40. The record can include information about attributes of the item, such as weight, size, shape, or other physical characteristics of the item. Based on the record from the item database 37 and/or the detected attributes from the sensor package 16, the controller 32 may access (as at 48) an item gripping database 36 to access an item grasping strategy stored for that item or items with similar characteristics. The controller 32 can provide instructions to the robotic arm 12 for gripping the item 40 based on the gripping strategy accessed from the gripping database at 36 (e.g., at 52).)

Claim 5: Wellman teaches the following limitations:

A trained model generation device configured to generate a trained model by performing learning using learning data comprising a learning image of a learning target object corresponding to a holding target object to be held by a robot, (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)

the trained model comprising: a class inference model configured to infer, based on an inference image of the holding target object, (Wellman – [0073] The attribute detection module 710 can interact with any number and/or type of sensors to determine attributes of an item to be grasped. For example, the attribute detection module 710 can receive information from imaging devices or optical sensors to determine physical characteristics, such as size, shape, position, orientation, and/or surface characteristics (e.g., how porous and/or slippery the item is based on the surface appearance). Any suitable optical technology can be utilized, …)

a classification result obtained by classifying the holding target object into a predetermined holding category; (Wellman – [0020] … Based on the detected attributes, the controller 32 may access (as at 49) the item database 37, such as to access a record for the inventory item 40. The record can include information about attributes of the item, such as weight, size, shape, or other physical characteristics of the item. Based on the record from the item database 37 and/or the detected attributes from the sensor package 16, the controller 32 may access (as at 48) an item gripping database 36 to access an item grasping strategy stored for that item or items with similar characteristics. The controller 32 can provide instructions to the robotic arm 12 for gripping the item 40 based on the gripping strategy accessed from the gripping database at 36 (e.g., at 52).)

a first holding mode inference model configured to infer, based on the classification result and the inference image, a first holding mode for the holding target object; and (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)

Wellman does not explicitly teach the following limitations; however, Ogawa teaches:

a second holding mode inference model configured to infer, based on the first holding mode and the inference image, a second holding mode for the holding target object. (Ogawa – [0059] … The first camera and the sensor are arranged on the manipulator. The manipulator control unit controls the manipulator so that the movable part is moved to a position corresponding to a directed value. The calibration processing unit acquires a first error in a first direction based on an image photographed by the first camera, acquire a second error in a second direction intersecting with the first direction based on a detection result obtained by the sensor, and acquire a directed calibration value with respect to the directed value based on the first error and the second error.; [0264] Next, the arithmetic processing unit 101 performs a test for grasp. Specifically, for example, the arithmetic processing unit 101 executes a plurality of grasping methods as a test to check success or failure in grasp, and deletes data of a failed grasping method. …)

Examiner Note: Second error in a second direction corresponds to the second holding mode.

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Wellman to provide a second data collection set for the purpose of correcting, verifying, or calibrating the object data related to identifying a grasping location as taught in Ogawa. Refining the object related grasping data by providing a second set of data increases the accuracy of each grasping attempt while also improving the quality of the available object data related to grasping locations and object classification.

Claim 6: Wellman teaches the following limitations:

A trained model comprising: a class inference model configured to infer, based on an inference image of a holding target object to be held by a robot, (Wellman – [0073] The attribute detection module 710 can interact with any number and/or type of sensors to determine attributes of an item to be grasped. For example, the attribute detection module 710 can receive information from imaging devices or optical sensors to determine physical characteristics, such as size, shape, position, orientation, and/or surface characteristics (e.g., how porous and/or slippery the item is based on the surface appearance). Any suitable optical technology can be utilized, …)

a classification result obtained by classifying the holding target object into a predetermined holding category; (Wellman – [0020] … Based on the detected attributes, the controller 32 may access (as at 49) the item database 37, such as to access a record for the inventory item 40. The record can include information about attributes of the item, such as weight, size, shape, or other physical characteristics of the item. Based on the record from the item database 37 and/or the detected attributes from the sensor package 16, the controller 32 may access (as at 48) an item gripping database 36 to access an item grasping strategy stored for that item or items with similar characteristics. The controller 32 can provide instructions to the robotic arm 12 for gripping the item 40 based on the gripping strategy accessed from the gripping database at 36 (e.g., at 52).)

a first holding mode inference model configured to infer, based on the classification result and the inference image, a first holding mode for the holding target object; and (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)

Wellman does not explicitly teach the following limitations; however, Ogawa teaches:

a second holding mode inference model configured to infer, based on the first holding mode and the inference image, a second holding mode for the holding target object. (Ogawa – [0059] … The first camera and the sensor are arranged on the manipulator. The manipulator control unit controls the manipulator so that the movable part is moved to a position corresponding to a directed value. The calibration processing unit acquires a first error in a first direction based on an image photographed by the first camera, acquire a second error in a second direction intersecting with the first direction based on a detection result obtained by the sensor, and acquire a directed calibration value with respect to the directed value based on the first error and the second error.; [0264] Next, the arithmetic processing unit 101 performs a test for grasp. Specifically, for example, the arithmetic processing unit 101 executes a plurality of grasping methods as a test to check success or failure in grasp, and deletes data of a failed grasping method. …)

Examiner Note: Second error in a second direction corresponds to the second holding mode.

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Wellman to provide a second data collection set for the purpose of correcting, verifying, or calibrating the object data related to identifying a grasping location as taught in Ogawa. Refining the object related grasping data by providing a second set of data increases the accuracy of each grasping attempt while also improving the quality of the available object data related to grasping locations and object classification.

Claim 7: Wellman teaches the following limitations:

A holding mode inference device configured to infer, using a trained model, a mode in which a robot holds a target object, (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)

the trained model comprising: a class inference model configured to infer, based on an inference image of a holding target object to be held by the robot, (Wellman – [0073] The attribute detection module 710 can interact with any number and/or type of sensors to determine attributes of an item to be grasped. For example, the attribute detection module 710 can receive information from imaging devices or optical sensors to determine physical characteristics, such as size, shape, position, orientation, and/or surface characteristics (e.g., how porous and/or slippery the item is based on the surface appearance). Any suitable optical technology can be utilized, …)

a classification result obtained by classifying the holding target object into a predetermined holding category; (Wellman – [0020] … Based on the detected attributes, the controller 32 may access (as at 49) the item database 37, such as to access a record for the inventory item 40. The record can include information about attributes of the item, such as weight, size, shape, or other physical characteristics of the item. Based on the record from the item database 37 and/or the detected attributes from the sensor package 16, the controller 32 may access (as at 48) an item gripping database 36 to access an item grasping strategy stored for that item or items with similar characteristics. The controller 32 can provide instructions to the robotic arm 12 for gripping the item 40 based on the gripping strategy accessed from the gripping database at 36 (e.g., at 52).)

a first holding mode inference model configured to infer, based on the classification result and the inference image, a first holding mode for the holding target object; and (Wellman – [0017] … The grasping strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating grasping strategies that have been successful or unsuccessful for such items in the past. Entries or information in the database may be originated and/or updated based on human input for grasping strategies, determined characteristics of a particular item, and/or machine learning related to grasping attempts of other items sharing characteristics with the particular item. Embodiments herein include aspects directed to generating and/or accessing such databases.)

Wellman does not explicitly teach the following limitations; however, Ogawa teaches:

a second holding mode inference model configured to infer, based on the first holding mode and the inference image, a second holding mode for the holding target object. (Ogawa – [0059] … The first camera and the sensor are arranged on the manipulator. The manipulator control unit controls the manipulator so that the movable part is moved to a position corresponding to a directed value. The calibration processing unit acquires a first error in a first direction based on an image photographed by the first camera, acquire a second error in a second direction intersecting with the first direction based on a detection result obtained by the sensor, and acquire a directed calibration value with respect to the directed value based on the first error and the second error.; [0264] Next, the arithmetic processing unit 101 performs a test for grasp. Specifically, for example, the arithmetic processing unit 101 executes a plurality of grasping methods as a test to check success or failure in grasp, and deletes data of a failed grasping method. …)

Examiner Note: Second error in a second direction corresponds to the second holding mode.

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Wellman to provide a second data collection set for the purpose of correcting, verifying, or calibrating the object data related to identifying a grasping location as taught in Ogawa. Refining the object related grasping data by providing a second set of data increases the accuracy of each grasping attempt while also improving the quality of the available object data related to grasping locations and object classification.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Wellman (US 20170021499 A1) as modified by Ogawa (US 20180272535 A1) in view of Claussen (US 20200262064 A1).

Wellman in combination with Ogawa does not explicitly teach the following limitations; however, Claussen teaches:

Claim 3: The trained model generation method according to claim 1, wherein the trained model comprises a third holding mode inference model configured to infer, based on the classification result received from the class inference model and an image of the target object, a mode in which the robot is not to hold the target object, (Claussen – [0032] It will be appreciated that in a practical scenario the object could have shifted after an unsuccessful grasp. In such a situation, to perform another grasp at the new location, one would perform a registration of the object and grasp locations based on the new location of the object. (As will be appreciated by one skilled in the art, the object registration would compute the difference in location between the initial grasp attempt and the current location of the object). Without limitation, this registration, for example, can be performed based on a 2D red, green, and blue (RGB) image of the camera which can operate at a relatively short distance to the object …; [0033] Without limitation, processor 22 may be configured to assign a penalty to the selected grasp location, which resulted in the unsuccessful grasp of the respective object. As should be now appreciated, this ensures that the disclosed system is not stuck repeating an unfavorable operation (i.e., repeating grasps at an unsuccessful grasp location).)

and the second holding mode inference model is configured to infer, further based on an inference result of the third holding mode inference model, a mode in which the robot holds the target object and output the inferred mode. (Claussen – [0031] When the commanded grasp at the selected grasp location results in an unsuccessful grasp of the respective object of the one or more objects of the objects 20 in the environment of the robotic manipulator, in one non-limiting embodiment, from the previously calculated respective values indicative of grasp quality for the candidate grasp locations, processor 18 may be configured to select a further grasp location likely to result in a successful grasp of the respective object of the one or more objects of the objects 20 in the environment of the robotic manipulator. That is, in case the grasp fails, (e.g., the respective object being picked up slips from gripper 16), the disclosed imaging-based system essentially in real-time would detect such an issue. …)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Wellman and Ogawa to provide a method for determining whether to attempt a grasp of an object or not attempt a grasp as taught in Claussen. Having the ability to make accurate determinations regarding whether to perform a grasping attempt reduces the amount of wasted motion and increases robot motion efficiency during the grasping process.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wellman (US 20170021499 A1) as modified by Ogawa (US 20180272535 A1) in view of Yazawa (JP2020021212A).

Wellman in combination with Ogawa does not explicitly teach the following limitations; however, Yazawa teaches:

Claim 4: The trained model generation method according to claim 3, wherein the first holding mode inference model is configured to output a position to be touched when the target object is held, and the third holding mode inference model is configured to output a position not to be touched when the target object is held. (Yazawa – [0051] … Alternatively, a label indicating a gripping position in a region for one object is given. Specifically, a label indicating the gripping position of the object is assigned to pixels inside a predetermined range from the center in the first region. Further, the teacher data is generated by assigning a label indicating that the position is not the gripping position of the object to pixels outside a predetermined range from the center in the first region corresponding to the first image. A process of generating teacher data will be described with reference to FIG. 10. … By generating the teacher data to which the label indicating the gripping position is assigned, the estimation accuracy of the learning model can be improved. …; [see also Figure 10])

Examiner Note: Yazawa creates a label map utilizing numbers, colors, or tones contrasting regions that can be gripped with regions which cannot be gripped.

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Wellman and Ogawa to provide accurate feedback to the robot system regarding regions of the target object which can be grasped or should not be grasped as taught in Yazawa. Providing accurate mapping of the grasping regions on target objects allows the robot control system to accurately determine viable grasping locations on target objects, thereby reducing the amount of wasted motion and potential damage to parts, which in turn increases robot motion efficiency during the grasping process.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure or directed to the state of the art and is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:

Tellex (US 20200368899 A1) describes a method for creating a probabilistic object model to identify, localize, and manipulate the object, the probabilistic object model using light fields to enable efficient inference for object detection and localization while incorporating information from every pixel observed from across multiple camera locations.

Valpola (US 20130266205 A1) describes a system for recognizing physical objects. In the method an object is gripped with a gripper, which is attached to a robot arm or mounted separately. Using an image sensor, a plurality of source images of an area comprising the object is captured while the object is moved with the robot arm. The camera is configured to move along the gripper, attached to the gripper or otherwise able to monitor the movement of the gripper. Moving image elements are extracted from the plurality of source images by computing a variance image from the source images and forming a filtering image from the variance image.

Nagarajan (US 10981272 B1) describes a system and method for robot grasp learning. In some implementations, grasp data describing grasp attempts by robots is received. A set of the grasp attempts that represent unsuccessful grasp attempts is identified. Based on the set of grasp attempts representing unsuccessful grasp attempts, a grasp model is generated based on sensor data for the unsuccessful grasp attempts.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN LINDSAY OSTROW, whose telephone number is (703) 756-1854. The examiner can normally be reached M-F 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN LINDSAY OSTROW/
Examiner, Art Unit 3657

/ADAM R MOTT/
Supervisory Patent Examiner, Art Unit 3657
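
The rejected independent claims describe a staged pipeline: a class inference model feeds a first holding mode inference model, whose output (with the inference image) feeds a second holding mode inference model, and claim 3 adds a third model for modes in which the robot is not to hold the object. The sketch below only illustrates that claimed data flow; the class names, signatures, and the way the third model's output is consumed are assumptions for illustration, not the applicant's or examiner's implementation.

```python
# Illustrative sketch of the claimed inference pipeline; all names and call
# signatures are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable, Optional

Image = Any        # stand-in for an inference image of the holding target object
HoldingMode = Any  # stand-in for a holding mode (e.g., a grasp pose or strategy)


@dataclass
class TrainedModel:
    class_inference_model: Callable            # image -> holding category
    first_holding_mode_model: Callable         # (category, image) -> first holding mode
    second_holding_mode_model: Callable        # (first mode, image, ...) -> second holding mode
    third_holding_mode_model: Optional[Callable] = None  # claim 3: (category, image) -> do-not-hold mode

    def infer_holding_mode(self, image: Image) -> HoldingMode:
        category = self.class_inference_model(image)
        first_mode = self.first_holding_mode_model(category, image)
        if self.third_holding_mode_model is not None:
            # Claim 3: the second model also considers where the robot should NOT hold.
            do_not_hold = self.third_holding_mode_model(category, image)
            return self.second_holding_mode_model(first_mode, image, do_not_hold)
        return self.second_holding_mode_model(first_mode, image)
```

Note that the examiner's §112(b) interpretation (the second model verifying, calibrating, or correcting the first model's result) and the §103 mapping both hinge on whether the two models operate on the same "inference image" or on different images, which is exactly the ambiguity the sketch leaves as a single `image` parameter.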

Prosecution Timeline

Feb 23, 2024
Application Filed
Sep 11, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583119
TRANSFER SYSTEM AND TRANSFER METHOD
2y 5m to grant · Granted Mar 24, 2026
Patent 12576525
ROBOT SYSTEM
2y 5m to grant · Granted Mar 17, 2026
Patent 12569989
ESTIMATION DEVICE, ESTIMATION METHOD, ESTIMATION PROGRAM, AND ROBOT SYSTEM
2y 5m to grant · Granted Mar 10, 2026
Patent 12539611
ROBOT CONTROL APPARATUS, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
2y 5m to grant · Granted Feb 03, 2026
Patent 12491627
INFORMATION PROCESSING APPARATUS AND COOKING SYSTEM
2y 5m to grant · Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+37.7%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 35 resolved cases by this examiner. Grant probability derived from career allow rate.
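
The report does not state how the interview-adjusted figure is produced. One plausible reading, shown below strictly as a sketch under that assumption, is the career allow rate plus the examiner's interview lift, capped at 99%; only the 74%, +37.7, and 99% values come from the projections above.

```python
# Hedged sketch of one way the projection tiles could be reproduced; the
# additive model and the 99% cap are assumptions, not the vendor's method.

def projected_grant_probability(career_allow_rate: float,
                                interview_lift: float = 0.0,
                                cap: float = 99.0) -> float:
    """Career allow rate plus interview lift, in percentage points, capped at `cap`."""
    return min(career_allow_rate + interview_lift, cap)

base = projected_grant_probability(74.3)                  # -> 74.3, shown as 74%
with_interview = projected_grant_probability(74.3, 37.7)  # -> 99.0 (capped)
print(f"Grant probability: {base:.0f}%, with interview: {with_interview:.0f}%")
```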
