Prosecution Insights
Last updated: April 19, 2026
Application No. 18/209,477

MACHINE LEARNING DEVICE, ROBOT SYSTEM, AND MACHINE LEARNING METHOD FOR LEARNING OBJECT PICKING OPERATION

Non-Final OA: §101, §102, §103
Filed: Jun 14, 2023
Examiner: DETERDING, GWYNEVERE AMELIA
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Preferred Networks Inc.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants 100% — above average
Career Allow Rate: 100% (2 granted / 2 resolved), +45.0% vs TC avg
Interview Lift: +100.0% (based on resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 14 currently pending
Career History: 16 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)

Tech Center averages are estimates; based on career data from 2 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-28 are presented for examination.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on June 14, 2023, August 2, 2023, August 7, 2023, December 18, 2023, February 16, 2024, and November 20, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copies have been filed in parent Application No. 15/223,141, filed on July 29, 2016.

Claim Objections

Claims 5 and 19 are objected to because of the following informalities: "a number of times of successes" should read "a number of times of success" or "a number of successes." Appropriate correction is required.

Specification

The disclosure is objected to because of the following informalities:
- Page 7, line 11: "the machine learning device illustrated as FIG. 1" should read "the machine learning device illustrated in FIG. 1"
- Page 13, line 29: "The machine learning device 20 illustrated as FIG. 1" should read "The machine learning device 20 illustrated in FIG. 1"
- Page 18, line 7: "As illustrated as FIG. 2" should read "As illustrated in FIG. 2"
- Page 18, line 23: "as illustrated as FIG. 3" should read "as illustrated in FIG. 3"
- Page 19, lines 27-28: "execute the detection mode is executed using" should read "execute the detection mode using"
- Page 20, line 12: "as illustrated as FIG. 1" should read "as illustrated in FIG. 1"
- Page 25, line 20: "setting an an optimal" should read "setting an optimal"
- Page 25, line 25: "as depicted as FIG. 1" should read "as depicted in FIG. 1"
- Page 25, line 30: "The robot system 10 preferably share or exchange" should read "The plurality of robots 14 preferably share or exchange"
- Page 27, lines 30-32: "machine learning device illustrated as FIG. 1" should read "machine learning device illustrated in FIG. 1" and "machine learning device 20 depicted as FIG. 1" should read "machine learning device 20 depicted in FIG. 1"
- Page 31, line 28: "as depicted as FIG. 5" should read "as depicted in FIG. 5"

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG").

Claim 1

Step 1: The claim recites an estimation method, and is therefore directed to the statutory category of processes.

Step 2A Prong 1: The claim recites: "generate information for picking up one of the plurality of objects": This limitation could encompass mentally generating information for picking up one of the plurality of objects.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites "obtaining, by at least one processor, data in relation to a plurality of objects including at least one of information in relation to shapes of the plurality of objects, or information after processing the information in relation to the shapes of the plurality of objects"; however, this limitation amounts to the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).
The claim also recites that the at least one processor causes a neural network to generate the information for picking up one of the plurality of objects, by inputting the data in relation to the plurality of objects into the neural network. However, this amounts to mere instructions to apply the judicial exception on a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)), given that it is merely reciting a processor and a neural network for performing the judicial exception of generating information for picking up an object.

Step 2B: The claim does not contain significantly more than the judicial exception. The obtaining data limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d)(II): buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The inputting data into the neural network limitation, and the causing by the at least one processor, a neural network to [perform the judicial exception] limitation amount to mere instructions to apply the judicial exception (MPEP § 2106.05(f)) for the same reasons given above. As an ordered whole, the claim is directed to a mentally performable process of generating information for picking up an object. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 2

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 1 above.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites "wherein the information after processing the information in relation to the shapes includes at least one of position information of the plurality of objects, orientation information of the plurality of objects, or image information of the plurality of objects." However, this limitation merely further limits the information obtaining step, and still amounts to the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).

Step 2B: The claim does not contain significantly more than the judicial exception. The obtaining data limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d)(II): buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)).

Claim 3

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exception as claim 1 above.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites "wherein the information in relation to the shapes of the plurality of objects includes at least one of image information of the plurality of objects, three-dimensional position information of the plurality of objects, or distance information from a measuring device to surfaces of the plurality of objects." However, this limitation merely further limits the information obtaining step, and still amounts to the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).

Step 2B: The claim does not contain significantly more than the judicial exception.
The obtaining data limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d)(II): buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)).

Claim 4

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the neural network is updated by reinforcement learning using a reward calculated based on information in relation to a picking operation of an object": This limitation could encompass mentally updating a neural network by reinforcement learning using a reward calculated based on information in relation to a picking operation of an object, such as by mentally updating weights of the neural network.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 5

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the information in relation to the picking operation of the object includes at least one of success or failure of the picking operation of the object, a number of times of successes of picking operations of objects, a time taken for picking up or transporting the object, a force acting on a hand unit picking up or transporting the object, an achievement level of a post-process after the picking operation of the object, a change in state of the object, or energy for picking up or transporting the object": This limitation merely further limits the information used in updating the neural network by reinforcement learning, which is still mentally performable given at least one of these options for calculating a reward.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 6

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the information in relation to the picking operation of the object includes information for changing positions of a plurality of objects": This limitation merely further limits the information used in updating the neural network by reinforcement learning, which is still mentally performable given this information for calculating a reward.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 7

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the neural network is a value function in the reinforcement learning": This limitation merely further limits the step of updating the neural network by reinforcement learning, which is still mentally performable when the neural network is a value function in the reinforcement learning.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 8

Step 1: A process, as above.
Step 2A Prong 1: The claim recites: "wherein the value function represents a value of control information of a robot picking up or transporting the object": This limitation merely further limits the step of updating the neural network by reinforcement learning, which is still mentally performable when the value function represents a value of control information of a robot picking up or transporting the object.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 9

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the neural network is updated to minimize an error calculated based on a label for picking up an object and an output of the neural network": This limitation could encompass mentally updating the neural network to minimize an error calculated based on a label for picking up an object and an output of the neural network, such as by mentally updating weight values.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 10

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the neural network outputs at least one of position information of the object or information in relation to a success rate of picking up the object": This limitation merely further limits the output of the neural network used to update the neural network, and updating the neural network is still mentally performable given at least one of these options.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.
Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 11

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "determining…whether the information for picking up the one of the plurality of objects is abnormal": This limitation encompasses mentally determining whether the information for picking up the one of the plurality of objects is abnormal.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the determining is performed by the at least one processor. However, this limitation amounts to mere instructions to apply the judicial exception on a generic computer (MPEP § 2106.05(f)).

Step 2B: The claim does not contain significantly more than the judicial exception. The processor limitation amounts to mere instructions to apply the judicial exception on a generic computer (MPEP § 2106.05(f)), as stated above.

Claim 12

Step 1: A process, as above.

Step 2A Prong 1: The claim recites: "wherein the neural network is updated by using data obtained from simulations": This limitation encompasses mentally updating the neural network using data obtained from simulations, such as by mentally updating weights.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 13

Step 1: A process, as above.
Step 2A Prong 1: The claim recites: "wherein the information for picking up the one of the plurality of objects includes at least one of robot control information, position information of a hand picking up or transporting the one of the plurality of objects, orientation information of the hand, take-out direction information of the hand, position information of the one of the plurality of objects, success rate information of picking up an object, or control information of a measuring device": This limitation merely further limits the "generate information" step, which is still mentally performable given at least one of the recited options.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. No further additional elements are recited; see analysis of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. See analysis of claim 1.

Claim 14

Step 1: The claim recites an estimation device, and is therefore directed to the statutory category of machines.

Step 2A Prong 1: The claim recites: "estimate information for picking up one of the plurality of objects": This limitation encompasses mentally estimating information for picking up one of the plurality of objects.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites "obtain data in relation to a plurality of objects including at least one of information in relation to shapes of the plurality of objects, or information after processing the information in relation to the shapes of the plurality of objects"; however, this limitation amounts to the insignificant extra-solution activity of mere data gathering (MPEP § 2106.05(g)).
The claim also recites "at least one memory" and "at least one processor coupled to the at least one memory" that estimates the information for picking up one of the plurality of objects by inputting the data in relation to the plurality of objects into a neural network. However, this amounts to mere instructions to apply the judicial exception on a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)), given that it is merely reciting a processor, memory, and a neural network for performing the judicial exception of estimating information for picking up an object.

Step 2B: The claim does not contain significantly more than the judicial exception. The obtaining data limitation, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d)(II): buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The memory, processor, and inputting data into a neural network limitations amount to mere instructions to apply the judicial exception (MPEP § 2106.05(f)) for the same reasons given above. As an ordered whole, the claim is directed to a mentally performable process of estimating information for picking up an object. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 13-17, and 27-28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Domae et al. (JP2013052490A) ("Domae").

Regarding claim 1, Domae discloses "An estimation method, comprising: obtaining, by at least one processor, data in relation to a plurality of objects including at least one of information in relation to shapes of the plurality of objects ([0012]: The sensor 2 measures the workpieces 7 randomly piled in the supply box 8 and transmits the measurement data to the information processing unit 5), or information after processing the information in relation to the shapes of the plurality of objects ([0012]: In the information processing unit 5, first, the measurement feature extraction unit 51 extracts the appearance and shape features (called measurement features) of the workpiece from the measurement data); and causing, by the at least one processor, a neural network to generate information for picking up one of the plurality of objects by inputting the data in relation to the plurality of objects into the neural network" ([0023]: The measurement feature identification unit 52 estimates whether the extracted measurement feature is easy to grasp based on past measurement features and whether or not they were successfully extracted (successful or not). This can be solved as a pattern identification problem in which a newly extracted measurement feature is classified into one of two classes, extraction success or failure, using a group of past measurement features in a multidimensional feature vector space.
In this case, a neural network, a support vector machine (SVM), a k-nearest neighbor classifier, or a Bayesian classification may be used as a classifier to solve the problem; the examiner notes that the ease of grasping estimated by the neural network corresponds to "information for picking up one of the plurality of objects" because it is used to determine how the robot will pick up the object, see [0013]: "The grasping operation calculation unit 53 calculates the grasping position and orientation using a method according to the ease of grasping of the estimated measurement features", and [0018]: The hand 3 grasps and removes the randomly piled workpieces 7).

Regarding claim 2, the rejection of claim 1 is incorporated. Domae further discloses "wherein the information after processing the information in relation to the shapes includes at least one of position information of the plurality of objects, orientation information of the plurality of objects, or image information of the plurality of objects" ([0021]: "For example, in the case of a two-dimensional sensor, the measurement features may include the spatial arrangement of the texture on the workpiece surface, the complexity of the texture, the amount of edges, the area or main axis direction of the region surrounded by the edges, and the direction of the local edges. In the case of a three-dimensional sensor, the main normal direction of the surface, the distribution of the normal direction of the surface, a histogram of this, flatness, etc. may also be used"; the examiner notes that the "main axis direction" and "main normal direction" correspond to orientation information of the plurality of objects).

Regarding claim 3, the rejection of claim 1 is incorporated.
Domae further discloses "wherein the information in relation to the shapes of the plurality of objects includes at least one of image information of the plurality of objects, three-dimensional position information of the plurality of objects, or distance information from a measuring device to surfaces of the plurality of objects" ([0016]-[0017]: "The sensor 2 measures the workpieces 7 randomly piled up in a supply box 8, and may be installed on the robot 4 or on another fixed jig, movable slider, or the like. It measures two-dimensional images of the workpieces in a bulk pile or three-dimensional data. The format of the three-dimensional information measured by the sensor 2 may be point cloud data using X, Y, and Z, total three-dimensional position information, or a range image format in which each pixel has height information of the measurement target").

Regarding claim 13, the rejection of claim 1 is incorporated. Domae further discloses "wherein the information for picking up the one of the plurality of objects includes at least one of robot control information, position information of a hand picking up or transporting the one of the plurality of objects, orientation information of the hand, take-out direction information of the hand, position information of the one of the plurality of objects, success rate information of picking up an object, or control information of a measuring device" ([0006]: "…wherein the measurement feature identification unit estimates the extraction success rate of the measurement features extracted by the measurement feature identification unit based on past measurement features and extraction success/failure information stored in the DB"; the examiner notes that "extraction success rate" corresponds to "success rate information of picking up an object").
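For technical context on the pattern-identification scheme Domae describes (a newly extracted measurement feature classified into a grasp-success or grasp-failure class using past feature vectors in a multidimensional feature space), a minimal sketch of one of the listed classifier options, a k-nearest-neighbor vote, might look like the following. The feature values, labels, and function names are illustrative assumptions, not taken from the reference.

```python
import math

def knn_grasp_classifier(past_features, past_success, new_feature, k=3):
    """Classify a new measurement feature as grasp success (True) or
    failure (False) by a majority vote of its k nearest past features."""
    # Rank past feature vectors by Euclidean distance to the new one.
    ranked = sorted(zip(past_features, past_success),
                    key=lambda pair: math.dist(pair[0], new_feature))
    votes = [success for _, success in ranked[:k]]
    return sum(votes) > k / 2

# Hypothetical past measurement features (e.g., edge amount, flatness),
# each paired with whether the corresponding pick succeeded.
past = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.2), (0.85, 0.75)]
labels = [True, True, False, False, True]
easy_to_grasp = knn_grasp_classifier(past, labels, (0.82, 0.80))
```

Domae treats the neural network, SVM, k-NN, and Bayesian options as interchangeable classifiers for this two-class problem; the choice only changes how the success/failure decision boundary is drawn in the feature space.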
Regarding claim 14, Domae discloses "An estimation device, comprising: at least one memory ([0006]: This invention is provided with a DB that stores at least measurement features of workpieces and success or failure of picking); and at least one processor coupled to the at least one memory ([0006]: an information processing unit that calculates from the measurement data the gripping position and posture and at least one of operation control information of an opening/closing amount, a gripping force, and an operation speed) and configured to: obtain data in relation to a plurality of objects including at least one of information in relation to shapes of the plurality of objects ([0012]: The sensor 2 measures the workpieces 7 randomly piled in the supply box 8 and transmits the measurement data to the information processing unit 5), or information after processing the information in relation to the shapes of the plurality of objects ([0012]: In the information processing unit 5, first, the measurement feature extraction unit 51 extracts the appearance and shape features (called measurement features) of the workpiece from the measurement data); and estimate information for picking up one of the plurality of objects by inputting the data in relation to the plurality of objects into a neural network ([0023]: The measurement feature identification unit 52 estimates whether the extracted measurement feature is easy to grasp based on past measurement features and whether or not they were successfully extracted (successful or not). This can be solved as a pattern identification problem in which a newly extracted measurement feature is classified into one of two classes, extraction success or failure, using a group of past measurement features in a multidimensional feature vector space.
In this case, a neural network, a support vector machine (SVM), a k-nearest neighbor classifier, or a Bayesian classification may be used as a classifier to solve the problem; the examiner notes that the ease of grasping estimated by the neural network corresponds to "information for picking up one of the plurality of objects" because it is used to determine how the robot will pick up the object, see [0013]: "The grasping operation calculation unit 53 calculates the grasping position and orientation using a method according to the ease of grasping of the estimated measurement features", and [0018]: The hand 3 grasps and removes the randomly piled workpieces 7).

Regarding claim 15, Domae discloses "A learning method, comprising: obtaining, by at least one processor, data in relation to a plurality of objects including at least one of information in relation to shapes of the plurality of objects ([0012]: The sensor 2 measures the workpieces 7 randomly piled in the supply box 8 and transmits the measurement data to the information processing unit 5), or information after processing the information in relation to the shapes of the plurality of objects ([0012]: In the information processing unit 5, first, the measurement feature extraction unit 51 extracts the appearance and shape features (called measurement features) of the workpiece from the measurement data); and learning, by the at least one processor, a neural network to output information for picking up one of the plurality of objects by inputting the data in relation to the plurality of objects into the neural network" ([0023]: The measurement feature identification unit 52 estimates whether the extracted measurement feature is easy to grasp based on past measurement features and whether or not they were successfully extracted (successful or not).
This can be solved as a pattern identification problem in which a newly extracted measurement feature is classified into one of two classes, extraction success or failure, using a group of past measurement features in a multidimensional feature vector space. In this case, a neural network, a support vector machine (SVM), a k-nearest neighbor classifier, or a Bayesian classification may be used as a classifier to solve the problem; the examiner notes that the ease of grasping estimated by the neural network corresponds to "information for picking up one of the plurality of objects" because it is used to determine how the robot will pick up the object, see [0013]: "The grasping operation calculation unit 53 calculates the grasping position and orientation using a method according to the ease of grasping of the estimated measurement features", and [0018]: The hand 3 grasps and removes the randomly piled workpieces 7; the examiner additionally notes that given that the estimated ease of grasping is based on past measurement features and whether or not they were successfully extracted, the neural network classifier would be learned using this training data).

Regarding claim 16, the rejection of claim 15 is incorporated. Claim 16 is a learning method claim corresponding to estimation method claim 2, and the remainder of the rejection follows the same rationale given for claim 2 above.

Regarding claim 17, the rejection of claim 15 is incorporated. Claim 17 is a learning method claim corresponding to estimation method claim 3, and the remainder of the rejection follows the same rationale given for claim 3 above.

Regarding claim 27, the rejection of claim 15 is incorporated. Claim 27 is a learning method claim corresponding to estimation method claim 13, and the remainder of the rejection follows the same rationale given for claim 13 above.
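The examiner's reasoning here, that a classifier estimating ease of grasping from past measurement features and their success/failure outcomes "would be learned using this training data," describes ordinary supervised training. As an illustrative sketch only (hypothetical data and names; a single logistic neuron standing in for the neural network), learning a pick-success predictor from labeled feature vectors might look like:

```python
import math
import random

def train_pick_success_net(features, labels, epochs=500, lr=0.5):
    """Fit a single logistic neuron mapping a measurement-feature vector to
    a pick-success probability, reducing the error between each label and
    the network output by gradient descent."""
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in features[0]]
    b = 0.0
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in zip(features, labels):
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = out - y  # cross-entropy gradient for a logistic output
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical training data: past feature vectors labeled with
# pick success (1) or failure (0).
X = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.2)]
y = [1, 1, 0, 0]
predict = train_pick_success_net(X, y)
```

After training, `predict` returns a success probability for a new feature vector, which parallels how the learned classifier in the rejection would score a newly extracted measurement feature.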
Regarding claim 28, Domae discloses "A learning device, comprising: at least one memory ([0006]: This invention is provided with a DB that stores at least measurement features of workpieces and success or failure of picking); and at least one processor coupled to the at least one memory ([0006]: an information processing unit that calculates from the measurement data the gripping position and posture and at least one of operation control information of an opening/closing amount, a gripping force, and an operation speed) and configured to: obtain data in relation to a plurality of objects including at least one of information in relation to shapes of the plurality of objects ([0012]: The sensor 2 measures the workpieces 7 randomly piled in the supply box 8 and transmits the measurement data to the information processing unit 5), or information after processing the information in relation to the shapes of the plurality of objects ([0012]: In the information processing unit 5, first, the measurement feature extraction unit 51 extracts the appearance and shape features (called measurement features) of the workpiece from the measurement data); and learn a neural network to output information for picking up one of the plurality of objects by inputting the data in relation to the plurality of objects into the neural network" ([0023]: The measurement feature identification unit 52 estimates whether the extracted measurement feature is easy to grasp based on past measurement features and whether or not they were successfully extracted (successful or not). This can be solved as a pattern identification problem in which a newly extracted measurement feature is classified into one of two classes, extraction success or failure, using a group of past measurement features in a multidimensional feature vector space.
In this case, a neural network, a support vector machine (SVM), a k-nearest neighbor classifier, or a Bayesian classification may be used as a classifier to solve the problem; the examiner notes that the ease of grasping estimated by the neural network corresponds to “information for picking up one of the plurality of objects” because it is used to determine how the robot will pick up the object, see [0013]: “The grasping operation calculation unit 53 calculates the grasping position and orientation using a method according to the ease of grasping of the estimated measurement features”, and [0018]: The hand 3 grasps and removes the randomly piled workpieces 7; the examiner additionally notes that given that the estimated ease of grasping is based on past measurement features and whether or not they were successfully extracted, the neural network classifier would be learned using this training data). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 4-10, 12, 18-24, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Domae in view of Maeda et al. (NPL: View-based Programming with Reinforcement Learning for Robotic Manipulation) (“Maeda”). Regarding claim 4, the rejection of claim 1 is incorporated. Domae does not appear to explicitly disclose the further limitations of the claim. However, Maeda discloses “wherein… [a] neural network is updated by reinforcement learning using a reward calculated based on information in relation to a picking operation of an object” (Maeda, VI-A: “In the end of an episode, the neural network is retrained by BPM using desired actor and critic outputs ((5) and (11)) obtained in the episode in addition to the teaching signals obtained in the demonstration in supervised learning. 
Nmax episodes are repeated in reinforcement learning to obtain an improved neural network that can achieve manipulation tasks in wider task conditions” and Maeda, VI-B: “Design of appropriate reward rt is necessary for successful reinforcement learning. In order to carry the object to the goal, the Euclidean distance between the object and the goal is useful to design the reward function”; the examiner notes that “distance between the object and the goal” corresponds to “information in relation to a picking operation of an object”). Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae to include updating the neural network by reinforcement learning using a reward calculated based on information in relation to a picking operation of an object as disclosed by Maeda, and one would have been motivated to do so for the purpose of improving the performance of the object-manipulating robot by allowing the robot to adapt to a wider range of changes in task conditions (see Maeda, III-2). Regarding claim 5, the rejection of claim 4 is incorporated. Domae as modified by Maeda further discloses “wherein the information in relation to the picking operation of the object includes at least one of success or failure of the picking operation of the object, a number of times of successes of picking operations of objects, a time taken for picking up or transporting the object, a force acting on a hand unit picking up or transporting the object, an achievement level of a post-process after the picking operation of the object, a change in state of the object, or energy for picking up or transporting the object” (Maeda, VI-B: “Design of appropriate reward rt is necessary for successful reinforcement learning. 
In order to carry the object to the goal, the Euclidean distance between the object and the goal is useful to design the reward function”; the examiner notes that “distance between the object and the goal” corresponds to “an achievement level of a post-process after the picking operation of the object” because it reflects how far the object is from a goal location after being picked up and transported by the robot). Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae/Maeda so that the information in relation to the picking operation of the object includes an achievement level of a post-process after the picking operation of the object, as disclosed by Maeda, and one would have been motivated to do so for the purpose of improving the performance of the object-manipulating robot by allowing the robot to adapt to a wider range of changes in task conditions (see Maeda, III-2). Regarding claim 6, the rejection of claim 4 is incorporated. Domae further discloses “changing positions of a plurality of objects” (Domae, [0018]: “The hand 3 grasps and removes the randomly piled workpieces 7”) and Maeda further discloses “wherein the information in relation to the picking operation of the object includes information for changing… [a position] of … [the object]” (Maeda, VI-B: “Design of appropriate reward rt is necessary for successful reinforcement learning. In order to carry the object to the goal, the Euclidean distance between the object and the goal is useful to design the reward function”; the examiner notes that “distance between the object and the goal” corresponds to “information for changing a position of the object”). 
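The Euclidean-distance reward design quoted from Maeda (VI-B) above can be sketched minimally as follows. The negative-distance form and the `scale` parameter are illustrative assumptions; the reference says only that the distance is "useful to design the reward function".

```python
import math

def distance_reward(object_pos, goal_pos, scale=1.0):
    """Reward r_t that increases as the carried object approaches the goal.

    A negated Euclidean distance is one minimal choice: the reward is
    maximal (zero) exactly when the object reaches the goal.
    NOTE: hypothetical sketch, not Maeda's actual reward function.
    """
    d = math.sqrt(sum((o - g) ** 2 for o, g in zip(object_pos, goal_pos)))
    return -scale * d
```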
Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae/Maeda so that the information in relation to the picking operation of the object includes information for changing positions of a plurality of objects, in the manner disclosed by Maeda using the plurality of objects disclosed by Domae, and one would have been motivated to do so for the purpose of improving the performance of the object-manipulating robot by allowing the robot to adapt to a wider range of changes in task conditions (see Maeda, III-2). Regarding claim 7, the rejection of claim 4 is incorporated. Domae as modified by Maeda further discloses “wherein the neural network is a value function in the reinforcement learning” (Maeda, V: “The output of the neural network is [at,V(st)], where at =Δxt is the action of the robot and V(st) is the state value. Thus the neural network is composed of two parts: mapping from states to robot motions (“actor”) and mapping from states to state values (“critic”). The latter is added for actor-critic reinforcement learning, which is described in the next section”; the examiner notes that V(st) is a value function in the reinforcement learning, and is an output of the neural network). Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. 
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae/Maeda to have the neural network be a value function in the reinforcement learning, as disclosed by Maeda, and one would have been motivated to do so for the purpose of improving the performance of the object-manipulating robot by allowing the robot to adapt to a wider range of changes in task conditions (see Maeda, III-2). Regarding claim 8, the rejection of claim 7 is incorporated. Domae as modified by Maeda further discloses “wherein the value function represents a value of control information of a robot picking up or transporting the object” (Maeda, V: “The output of the neural network is [at,V(st)], where at =Δxt is the action of the robot and V(st) is the state value. Thus the neural network is composed of two parts: mapping from states to robot motions (“actor”) and mapping from states to state values (“critic”). The latter is added for actor-critic reinforcement learning, which is described in the next section” and Maeda, VII-B: “Programming of picking the object up and placing it to a goal position by the robot hand was tested”; the examiner notes that the value function represents the value of states which are mapped to robot motions, and the robot motions include picking up and transporting the object). Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. 
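The network structure quoted from Maeda (V) above, a single network whose output is [a_t, V(s_t)] with an "actor" head mapping states to robot motions and a "critic" head mapping states to state values, can be sketched as a toy two-headed model. The linear heads, class name, and random initialization are illustrative assumptions, not the reference's architecture.

```python
import random

class ActorCriticNet:
    """Toy two-headed network: a shared state input, an 'actor' head
    producing a motion increment a_t, and a 'critic' head producing a
    scalar state value V(s_t). Weights are illustrative only."""

    def __init__(self, state_dim, action_dim, seed=0):
        rng = random.Random(seed)
        self.w_actor = [[rng.uniform(-0.1, 0.1) for _ in range(state_dim)]
                        for _ in range(action_dim)]
        self.w_critic = [rng.uniform(-0.1, 0.1) for _ in range(state_dim)]

    def forward(self, state):
        # Actor head: state -> robot motion; critic head: state -> value
        action = [sum(w * s for w, s in zip(row, state)) for row in self.w_actor]
        value = sum(w * s for w, s in zip(self.w_critic, state))
        return action, value  # [a_t, V(s_t)]
```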
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae/Maeda to have the value function represent a value of control information of a robot picking up or transporting the object, as disclosed by Maeda, and one would have been motivated to do so for the purpose of improving the performance of the object-manipulating robot by allowing the robot to adapt to a wider range of changes in task conditions (see Maeda, III-2). Regarding claim 9, the rejection of claim 1 is incorporated. Domae does not appear to explicitly disclose the further limitations of the claim. However, Maeda further discloses “wherein… [a] neural network is updated to minimize an error calculated based on a label for picking up an object and an output of the neural network” (Maeda, V: “We train the neural network with backpropagation with momentum (BPM) [9]. The training signals for the input of the neural network are state values (st) in the demonstration. The training signals for the actor part of the output of the neural network are robot motions (at) in the demonstration. The training signals for the critic part of the output are state values (V(st)), which are calculated as follows…The trained neural network (the actor part) can be used to control the robot hand to play back the demonstrations [1]”; the examiner notes that the training signals obtained from the demonstration of the pick-and-place operation (see Maeda, VII-B: “In view-based supervised learning, a human operator demonstrated pick-and-place with keyboard commands”) correspond to “a label for picking up an object”, and that backpropagation involves minimizing an error between the desired output (from the training signals) and the actual output of the neural network). Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. 
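The supervised-training idea cited for claim 9, updating the network to minimize an error between a demonstration label and the network output, can be sketched for a single linear unit. The plain gradient step and learning rate here are illustrative stand-ins for the backpropagation-with-momentum (BPM) training the reference actually uses.

```python
def sgd_step(weights, inputs, label, lr=0.1):
    """One gradient step on the squared error between a linear unit's
    output and a demonstration label (the supervised training signal).
    NOTE: hypothetical sketch; Maeda trains a full network with BPM.
    """
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - label
    # Gradient of 0.5 * error**2 with respect to each weight is error * x
    new_weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    return new_weights, error ** 2
```

Repeating this step drives the squared error toward zero, which is the sense in which the network is "updated to minimize an error" between label and output.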
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae to include updating the neural network to minimize an error calculated based on a label for picking up an object and an output of the neural network, as disclosed by Maeda, and one would have been motivated to do so for the purpose of ensuring that the neural network produces the desired output related to picking up an object, given an input image (see Maeda, III-1). Regarding claim 10, the rejection of claim 9 is incorporated. Domae as modified by Maeda further discloses “wherein the neural network outputs at least one of position information of the object or information in relation to a success rate of picking up the object” (Domae, [0006]: “…wherein the measurement feature identification unit estimates the extraction success rate of the measurement features extracted by the measurement feature identification unit based on past measurement features and extraction success/failure information stored in the DB”; the examiner notes that “extraction success rate” corresponds to “information in relation to a success rate of picking up the object”, and that the measurement feature identification unit can use a neural network to estimate the extraction success, see rejection of claim 1). Regarding claim 12, the rejection of claim 1 is incorporated. Domae does not appear to explicitly disclose the further limitations of the claim. However, Maeda further discloses “wherein… [a] neural network is updated by using data obtained from simulations” (Maeda, VII: “Using our proposed method, we performed learning experiments in the virtual environment presented in Section IV…” and Maeda, VII-B: “Next, view-based reinforcement learning was performed. 
In the reinforcement learning, the object was initially located at a shifted position from which the initial neural network was not able to carry it to the goal (Fig. 12). After Nmax episodes in reinforcement learning, an updated neural network was obtained. It can drive the virtual hand to the goal not only from the initial position of the object in the human demonstration but also from the shifted position (Fig. 13)”). Maeda and the instant application both relate to object-manipulating robots using neural networks and reinforcement learning and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method disclosed by Domae to include updating the neural network by using data obtained from simulations, as disclosed by Maeda, and one would have been motivated to do so for the purpose of obtaining an improved neural network that can achieve manipulation tasks in wider task conditions (see Maeda, VI-A). Regarding claim 18, the rejection of claim 15 is incorporated. Claim 18 is a learning method claim corresponding to estimation method claim 4 and the remainder of the rejection follows the same rationale given for claim 4 above. Regarding claim 19, the rejection of claim 18 is incorporated. Claim 19 is a learning method claim corresponding to estimation method claim 5 and the remainder of the rejection follows the same rationale given for claim 5 above. Regarding claim 20, the rejection of claim 18 is incorporated. Claim 20 is a learning method claim corresponding to estimation method claim 6 and the remainder of the rejection follows the same rationale given for claim 6 above. Regarding claim 21, the rejection of claim 18 is incorporated. Claim 21 is a learning method claim corresponding to estimation method claim 7 and the remainder of the rejection follows the same rationale given for claim 7 above. 
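The episodic scheme quoted from Maeda above, retraining the network at the end of each of Nmax simulated episodes, reduces to a loop of the following shape. Every name and callable here is a hypothetical placeholder, not an API from the reference.

```python
def train_by_episodes(policy, run_episode, retrain, n_max):
    """Run n_max episodes; after each one, retrain on the collected
    (state, action, reward) trajectory, mirroring the end-of-episode
    retraining described in Maeda VI-A. All callables are placeholders."""
    for _ in range(n_max):
        trajectory = run_episode(policy)       # roll out one episode in simulation
        policy = retrain(policy, trajectory)   # update at the episode's end
    return policy
```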
Regarding claim 22, the rejection of claim 21 is incorporated. Claim 22 is a learning method claim corresponding to estimation method claim 8 and the remainder of the rejection follows the same rationale given for claim 8 above. Regarding claim 23, the rejection of claim 15 is incorporated. Claim 23 is a learning method claim corresponding to estimation method claim 9 and the remainder of the rejection follows the same rationale given for claim 9 above. Regarding claim 24, the rejection of claim 23 is incorporated. Claim 24 is a learning method claim corresponding to estimation method claim 10 and the remainder of the rejection follows the same rationale given for claim 10 above. Regarding claim 26, the rejection of claim 15 is incorporated. Claim 26 is a learning method claim corresponding to estimation method claim 12 and the remainder of the rejection follows the same rationale given for claim 12 above. Claims 11 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Domae in view of Asada (US20120226382). Regarding claim 11, the rejection of claim 1 is incorporated. Domae does not appear to explicitly disclose the further limitations of the claim. However, Asada discloses “determining, by the at least one processor, whether… information for picking up… one of… [a] plurality of objects is abnormal” (Asada, [0076-0078]: “A procedure of the processing to calculate the position of the robot in the data processing unit 63 having the configuration explained above is explained with reference to FIGS. 7 and 8… When the data processing unit 63 determines in step S11 that the position data is normal (YES in step S11), the data processing unit 63 sets the first flag to "0" and updates the first position to the position data (step S12). The data processing unit 63 shifts to the next step S14. 
On the other hand, when the data processing unit 63 determines in step S11 that the position data is abnormal (NO in step S11), the data processing unit 63 sets the first flag to "1" and does not update the first position (step S13)”; the examiner notes that “position data” corresponds to “information for picking up one of a plurality of objects” because it is position data of a robot that picks up and moves workpieces, see Asada, [0032]: “The workpiece W selected as the workpiece target is carried to a predetermined position on a workbench 15 located in a movable range of the robot 12. To accomplish this, the workpiece W is lifted to predetermined height by the robot 12”). Asada and the instant application both relate to robots that manipulate workpieces and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method of Domae to include determining, by the at least one processor, whether the information for picking up the one of the plurality of objects is abnormal, as disclosed by Asada, and one would have been motivated to do so for the purpose of ensuring that the information for picking up the one of the plurality of objects is accurate and reliable, and that noisy data is not used (see Asada, [0015]). Regarding claim 25, the rejection of claim 15 is incorporated. Claim 25 is a learning method claim corresponding to estimation method claim 11, and the remainder of the rejection follows the same rationale given for claim 11 above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to GWYNEVERE A DETERDING whose telephone number is (571)272-7657. The examiner can normally be reached Mon-Fri. 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /G.A.D./Examiner, Art Unit 2125 /KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125

Prosecution Timeline

Jun 14, 2023
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
