Prosecution Insights
Last updated: April 19, 2026

Application No. 18/165,641
Systems and Methods for Object Orientation and Manipulation Via Machine Learning Based Control

Status: Final Rejection (§103)
Filed: Feb 07, 2023
Examiner: CAIN, AARON G
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mitsubishi Electric Research Laboratories Inc.
OA Round: 4 (Final)

Grant Probability: 40% (Moderate)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 66%
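As a quick consistency check (a sketch, not part of the report's own tooling): the gap between the with-interview grant probability and the baseline matches the examiner's roughly +26% interview lift.

```python
# Headline figures from above: 40% baseline grant probability,
# 66% with an examiner interview.
base = 0.40
with_interview = 0.66

# The difference is the "interview lift" in percentage points.
lift = with_interview - base
print(f"interview lift: {lift:+.0%}")  # -> +26%
```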

Examiner Intelligence

Career Allow Rate: 40% (grants 40% of resolved cases; 52 granted / 130 resolved; -12.0% vs TC avg)
Interview Lift: +26.1% (strong lift for resolved cases with an interview)
Typical timeline: 3y 3m avg prosecution; 42 currently pending
Career history: 172 total applications across all art units
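The career allow rate above can be reproduced directly from the granted/resolved counts; a minimal sketch (assuming "vs TC avg" is a simple percentage-point difference):

```python
# Derive the career allow rate from the counts shown above.
granted, resolved = 52, 130
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.0%}")  # -> 40%

# "-12.0% vs TC avg" then implies a Tech Center average near 52%.
tc_avg = allow_rate + 0.12
print(f"implied TC average: {tc_avg:.0%}")  # -> 52%
```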

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 130 resolved cases
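The per-statute deltas above are mutually consistent: backing the Tech Center average out of each row (again assuming "vs TC avg" is a percentage-point difference) gives the same 40% baseline in every case.

```python
# Per-statute figures (percent) and their reported deltas vs the TC average,
# taken from the table above.
examiner = {"101": 4.3, "103": 57.4, "102": 19.7, "112": 17.7}
delta = {"101": -35.7, "103": 17.4, "102": -20.3, "112": -22.3}

# Implied TC average per statute: examiner value minus reported delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute backs out to the same 40.0 baseline
```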

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the application filed 01/02/2026. Claims 1-18 are presently pending and are presented for examination.

Response to Arguments

Applicant’s arguments, see pages 8-10, filed 01/02/2026, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 102(a)(1) as being anticipated by Buchi et al. US 6056108 A (“Buchi”) have been fully considered and are persuasive. The amendments to the claims have overcome the rejection. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made under 35 U.S.C. 103 in view of Buchi in combination with Kalashnikov et al. US 20210237266 A1 (“Kalashnikov”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 5-7, 9-11, 13, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Buchi et al. US 6056108 A (“Buchi”) in combination with Kalashnikov et al. US 20210237266 A1 (“Kalashnikov”).

Regarding Claim 1. Buchi teaches a robotic assembly, comprising: a supporting surface configured to support an object (FIG. 1, number 50); a set of actuators fixedly mounted relative to the supporting surface, wherein each actuator of the set of actuators is configured to apply an impulse to the supporting surface with energy governed by a corresponding control command of a set of control commands (FIG. 1; one embodiment of a parts feeder constructed in accordance with the invention includes a framed, flexible planar support onto which parts are dispensed, and which defines a selection/transformation zone in which parts are detected by a machine vision system. The positional states of parts are individually or collectively changed by a transformer to reorient and/or redistribute parts so as to present parts in the desired orientations and locations to a robot for selection. The transformer applies impulsed energy to selective locations on the support, based on the positional states of the parts detected by the machine vision system [Column 3, lines 48-59]); a memory configured to store a learned function trained with machine learning to map a location and an orientation of the object at the supporting surface to a plurality of first control commands of the set of control commands, wherein the learned function is autonomously trained using a plurality of instances of training pose data of one or more training objects and corresponding response data collected through randomly sampled control commands, and wherein the randomly sampled control commands comprise randomized actuator identifiers and impulse energy levels of the set of actuators (“[a] neural network "trained" by the machine vision system may be used to implement this experimentally-based method. In addition, the machine vision system can be first trained to recognize the stable states of a part by performing a preliminary training operation in which the user successively positions the part on the membrane in each of its various stable states, and instructs the vision system to recognize each stable state.
Alternatively, the machine vision system can be configured to learn a part's stable states at the same time the appropriate tap points and impulse energies are being learned for each stable state, by allowing the training process to continue for a time statistically-determined to be sufficient for the part to be transformed to, and recognized by the machine vision system as being in, each of its stable states” [Column 12, lines 7-35]); and a processor configured to: accept a plurality of instances of pose data of the object at the supporting surface; obtain a first current location and a first current orientation of the object based on a first instance of the plurality of instances of the pose data of the object; execute the learned function to map the first current location and the first current orientation of the object to a plurality of second control commands of the set of control commands (There are at least two methods for determining the impulse application points and corresponding magnitudes to transform a part from one pose to another: (a) a theoretically-based method; and (b) an experimentally-based method. The experimentally-based method is based on tapping the part at different locations with different energies and observing the changes in the part's stable state, as detected by the machine vision system. The procedure is repeated several times. If the expected or desired success rate is not achieved, tap-points and energy levels are systematically varied to find an optimum set of parameters. This process is repeated until a change in stable state is detected and stored by the machine vision system (memory configured to store the learned function), and if the expected or desired test rate is not achieved, tap-points and energy levels are systematically varied to find an optimum set of parameters [Column 11, lines 46-67, Column 12, lines 1-20]. A neural network “trained” by the machine vision system may be used to implement this experimentally-based method. In addition, the machine vision system can be first trained to recognize the stable states of a part by performing a preliminary training operation in which the user successively positions the part on the membrane in each of its various stable states, and instructs the vision system to recognize each stable state. Alternatively, the machine vision system can be configured to learn a part's stable states at the same time the appropriate tap points and impulse energies are being learned for each stable state, by allowing the training process to continue for a time statistically-determined to be sufficient for the part to be transformed to, and recognized by the machine vision system as being in, each of its stable states [Column 12, lines 21-35]); submit the plurality of second control commands to the set of actuators to apply a corresponding specific distribution of energy at the first current location of the object at the supporting surface to increase a likelihood of changing the first current orientation to a target orientation of the object; obtain a second current orientation of the object based on a second instance of the plurality of instances of the pose data of the object; and command a robotic manipulator to manipulate the supported object based on a match between the second orientation of the object and the target orientation (Once parts have been dispensed into the transformation zone, the user has the option to repetitively scatter and selectively transform the parts until all parts have been removed by robot 100. By transforming individual parts, a certain number of parts dispensed into the transformation zone can be picked and removed by robot 100 in a predictable time with a minimum of wear on the parts [Column 12, lines 36-42]. Machine vision system 60 then analyzes the transformation zone for parts having the desired orientation, cluttered regions, and parts that can be transformed to the desired stable state (i.e., isolated individual parts).
Robot 100 is first controlled to remove all the parts having the desired orientation. While robot 100 places parts, transformer 80 repetitively resolves cluttered areas and transforms isolated parts to the desired stable state [Column 12, lines 52-60]. The machine vision system is also configured in accordance with the invention to detect the orientation of parts on support 50 and to compare the detected orientations with a predetermined part orientation, which typically corresponds to the desired orientation of the part for pick-up by robot 100. The output of machine vision system 60 is used to control transformer 80 so that transformer 80 may reorient the part to the desired orientation. Finally, machine vision system 60 preferably provides feedback for controlling robot 100 to remove parts on support 50 having the desired orientation [Column 8, lines 10-20]).

Buchi does not teach: the learned function is autonomously trained in a self-supervised manner. However, Kalashnikov teaches: the learned function is autonomously trained in a self-supervised manner (paragraphs 3 and 139). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that the learned function is autonomously trained in a self-supervised manner, as taught by Kalashnikov, so as to allow the system to train the robot without requiring the supervision of a person.

Regarding Claim 2. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi also teaches: wherein the object at the supporting surface has a plurality of stable orientations, and wherein one or more of the plurality of stable orientations include the target orientation of the object (Several prior art designs for programmable parts feeders have been proposed, in which programmed vibration is used to drive parts into a stable orientation. These methods are useful for bringing parts into "low-energy" positions and orientations, where their respective centers of mass are as low as possible, but other methods are then required to further orient the parts relative to a selection plane [Column 1, lines 44-59]. For example, a Delco internal push-button part is similar to a cube and has eight stable states [Column 9, lines 48-53], so this is both prior art and taught by Buchi).

Regarding Claim 5. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi also teaches: wherein the processor is further configured to train the learned function during a training stage, wherein for the training stage, the processor is configured to: collect the target orientation of the object on the supporting surface (FIG. 1 depicts an embodiment of a part feeding system 20 constructed in accordance with the present invention. Part feeding system 20 includes a parts dispenser 22 for supplying parts to a transformation unit 23 comprising a support 50 and a transformer 80. Support 50 provides a framed, substantially planar pick-up surface defining a transformation/selection zone where at least one positional parameter defining the positional state of parts is detected and analyzed by a machine vision system 60 including an image sensor 61 [Column 4, lines 51-65], which reads on collecting the target orientation of the object on the supporting surface); submit to the set of actuators multiple sets of random control commands to apply different distributions of energy at different locations of the supporting surface (The impulse generators can move the membrane in a random transformation mode [Column 8, lines 40-62]. FIGS. 8A and 8B show the positioning of clumped parts on a support before (FIG. 8A) and after (FIG. 8B) operation in this "random" transformation mode [Column 10, lines 21-34]); detect, using the imaging system, an orientation change of the object for each of the different distributions of energy (The experimentally-based method is based on tapping the part at different locations with different energies and observing the changes in the part's stable state, as detected by the machine vision system [Column 11, lines 60-63]); and train parameters of the learned function to produce the sets of control commands to the set of actuators increasing the likelihood of the different distributions of energy at the current location of the object to change a current orientation of the object to the target orientation (There are several solutions for reorienting and scattering all the parts in the transformation zone according to different user needs. The most flexible approach is to utilize a single impulse generator (FIG. 7A) that is movable relative to membrane 52 in a plane parallel to membrane 52. To improve throughput and maintain the flexibility of a single impulse generator, an array of impulse generators (FIG. 7B) can be moved relative to membrane 52 in order to minimize the average distance the impulse generators must be moved in order for one of the impulse generators to reach a desired part, as well as in the random transformation mode [Column 8, lines 40-50]).

Regarding Claim 6. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1.
Buchi also teaches: wherein the robotic assembly is communicatively coupled with a training system for training the learned function using machine learning (A neural network “trained” by the machine vision system may be used to implement this experimentally-based method [Column 12, lines 21-35]), wherein the training system comprises: a training supporting surface (Inherent; any supporting surface used for training is a training supporting surface); a training imaging system configured to image the training supporting surface (Inherent; any imaging system configured to image the training supporting surface is a training imaging system); a set of training actuators, each training actuator is configured to apply an impulse to the training supporting surface with energy governed by a corresponding training control command, such that a specific set of training control commands submitted to the set of the training actuators causes a corresponding specific distribution of energy applied to the training supporting surface (Inherent; any set of actuators used for training the features of claim 1 is a set of training actuators); an input interface configured to accept a target orientation of the object at the training supporting surface (The desired assembly orientation reads on a target orientation [Column 2, lines 66-67, Column 3, lines 1-15]); a training processor and a training memory having instructions stored thereon that cause the training processor to train the learned function ([Column 12, lines 21-35]), wherein, to train the learned function, the training processor is configured to: submit to the set of training actuators different multiple sets of training control commands to apply different distributions of energy at different locations of the training supporting surface; detect, using the training imaging system, a change of the orientation of the object for each of the distributions of energy; and train parameters of the learned function to produce a set of output
commands for the set of actuators such that the set of output commands increases a probability of the distributions of energy at a candidate current location of the object to change a candidate current orientation of the object to the target orientation (Once parts have been dispensed into the transformation zone, the user has the option to repetitively scatter and selectively transform the parts until all parts have been removed by robot 100. By transforming individual parts, a certain number of parts dispensed into the transformation zone can be picked and removed by robot 100 in a predictable time with a minimum of wear on the parts [Column 12, lines 36-42]. Machine vision system 60 then analyzes the transformation zone for parts having the desired orientation, cluttered regions, and parts that can be transformed to the desired stable state (i.e., isolated individual parts). Robot 100 is first controlled to remove all the parts having the desired orientation. While robot 100 places parts, transformer 80 repetitively resolves cluttered areas and transforms isolated parts to the desired stable state [Column 12, lines 52-60]. The machine vision system is also configured in accordance with the invention to detect the orientation of parts on support 50 and to compare the detected orientations with a predetermined part orientation, which typically corresponds to the desired orientation of the part for pick-up by robot 100. The output of machine vision system 60 is used to control transformer 80 so that transformer 80 may reorient the part to the desired orientation. 
Finally, machine vision system 60 preferably provides feedback for controlling robot 100 to remove parts on support 50 having the desired orientation [Column 8, lines 10-20]); and output the parameters of the learned function (The output of machine vision system 60 preferably is used in the control of various aspects of the feeding system operation to facilitate the selection and transformation of the parts. For example, as discussed above, the machine vision system output may be used to control dispenser 22 to maintain the optimum number of parts in the transformation zone. Machine vision system 60 preferably is configured to detect whether parts are clumped or tangled on support 50, and transformer 80 is controlled responsive to such detection to separate the clumped or tangled parts. Machine vision system 60 is also configured in accordance with the invention to detect the orientation of parts on support 50 and to compare the detected orientations with a predetermined part orientation, which typically corresponds to the desired orientation of the part for pick-up by robot 100. The output of machine vision system 60 is used to control transformer 80 so that transformer 80 may reorient the part to the desired orientation [Column 8, lines 1-20]).

Regarding Claim 7. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi also teaches: wherein the set of actuators comprises one or more transducers selected from a group comprising electromagnetic linear solenoids, electromagnetic rotary solenoids, hydraulic cylinders, pneumatic cylinders, piezoelectric transducers, transducers based on impingement of a fluid on the supporting surface, and transducers based on direct impingement of a fluid on the objects (One embodiment of impulse generator 82 includes an electrical coil and plunger arrangement that can generate a single impulse or multiple impulses of programmable frequency and energy [Column 8, lines 63-66], which reads on an electromagnetic linear solenoid as the impulse generator).

Regarding Claim 9. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi also teaches: further comprising: an imaging system configured to image the supporting surface and generate the plurality of instances of the pose data of the object at the supporting surface (The machine vision system 60 of FIG. 1); and the robotic manipulator, wherein the robotic manipulator is configured to grasp the object at one or more contact surfaces (The manipulator is shown at 100 in FIG. 1 and is configured to grasp the object, which inherently requires grasping one or more contact surfaces [Column 13, lines 20-35]).

Regarding Claim 10. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi also teaches: wherein the application of the corresponding specific distribution of energy at the first current location of the object at the supporting surface changes the first current location to a second current location of the object at the supporting surface, wherein the object at the supporting surface is in the second orientation at the second current location (The "positional state" of a part includes both its location relative to the transformation zone and relative to other parts in the transformation zone, and also its pose (i.e., the position and orientation of the part's coordinate system relative to the feeder coordinate system or relative to a robot selector coordinate system) [Column 4, lines 45-65]. The surface area of membrane 52 defines a selection/transformation zone (also shown by dashed lines 56 in FIG. 1) in which parts received from dispenser 22 are analyzed by machine vision system 60 and their orientation selectively transformed, if necessary, based on the analysis [Column 7, lines 14-19]. In general, a part can be transformed from its current pose to a desired pose [Column 9, lines 48-49]).

Regarding Claim 11. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi also teaches: wherein the supporting surface comprises: a semi-flexible layer adapted to dampen one or more impulse forces from the set of actuators and transmit at least some of the damped one or more impulse forces to the object; a part retaining corral on the semi-flexible layer adapted to retain the object within the supporting surface (the invention includes a framed, flexible planar support onto which parts are dispensed [Column 3, lines 43-58]. The flexible membrane 52 may also have a dual layer configuration, with a tensioned bottom layer and a top layer resting on the bottom layer (not shown). Preferably, the top layer is secured to side walls 54 or other framing members without being placed in tension. Because the top layer is not placed in tension, it facilitates damping of the vibration generated in the membrane when impulse energy is applied by the impulse generator, as discussed below in more detail. A dual layer membrane made with polyurethane having no embedded fabric has approximately the same damping characteristics as a single layer membrane made with polyurethane having embedded fabric [Column 6, lines 66-67, Column 7, lines 1-13]. Note that the “framed” language regarding the flexible membrane reads on a part retaining corral on the semi-flexible layer); and an actuator mounting plate underneath the semi-flexible layer, wherein the set of actuators are mounted on the actuator mounting plate (FIG. 7B shows an embodiment in which an actuator array of impulse generators can be moved relative to membrane 52 in order to minimize the average distance the impulse generators must be moved in order for one of the impulse generators to reach a desired part, with the actuators positioned on some form of surface. FIG. 1 shows another such surface (not numbered) beneath the actuators, which reads on a mounting plate underneath the actuators).

Regarding Claim 13. Buchi teaches a method for orienting an object supported at a supporting surface of an assembly in a first target orientation (FIG. 1, number 50), the method comprising: i.) receiving pose data of the object at the supporting surface; ii.) obtaining a first current location and a first current orientation of the object based on the pose data of the object (There are at least two methods for determining the impulse application points and corresponding magnitudes to transform a part from one pose to another: (a) a theoretically-based method; and (b) an experimentally-based method. The experimentally-based method is based on tapping the part at different locations with different energies and observing the changes in the part's stable state, as detected by the machine vision system.
The procedure is repeated several times. If the expected or desired success rate is not achieved, tap-points and energy levels are systematically varied to find an optimum set of parameters. This process is repeated until a change in stable state is detected and stored by the machine vision system (memory configured to store the learned function) [Column 11, lines 46-67, Column 12, lines 1-20]. A neural network “trained” by the machine vision system may be used to implement this experimentally-based method. In addition, the machine vision system can be first trained to recognize the stable states of a part by performing a preliminary training operation in which the user successively positions the part on the membrane in each of its various stable states, and instructs the vision system to recognize each stable state. Alternatively, the machine vision system can be configured to learn a part's stable states at the same time the appropriate tap points and impulse energies are being learned for each stable state, by allowing the training process to continue for a time statistically-determined to be sufficient for the part to be transformed to, and recognized by the machine vision system as being in, each of its stable states [Column 12, lines 21-35]); iii.) executing a learned function to map the first current location and the first current orientation of the object to a plurality of first control commands of a set of control commands, wherein the learned function is trained with machine learning to map a location and an orientation of the object at the supporting surface to a plurality of second control commands of the set of control commands, wherein the learned function is autonomously trained using a plurality of instances of training pose data of one or more training objects and corresponding response data collected through randomly sampled control commands, and wherein the randomly sampled control commands comprise randomized actuator identifiers and impulse energy levels of the set of actuators that are fixedly mounted relative to the supporting surface (“[a] neural network "trained" by the machine vision system may be used to implement this experimentally-based method. In addition, the machine vision system can be first trained to recognize the stable states of a part by performing a preliminary training operation in which the user successively positions the part on the membrane in each of its various stable states, and instructs the vision system to recognize each stable state. Alternatively, the machine vision system can be configured to learn a part's stable states at the same time the appropriate tap points and impulse energies are being learned for each stable state, by allowing the training process to continue for a time statistically-determined to be sufficient for the part to be transformed to, and recognized by the machine vision system as being in, each of its stable states” [Column 12, lines 7-35]); iv.)
submitting the plurality of first control commands to the set of actuators to apply a corresponding specific distribution of energy at the first current location of the object at the supporting surface to increase a likelihood of changing the first current orientation to a target orientation of the object, wherein each actuator of the set of actuators is configured to apply an impulse to the supporting surface with energy governed by a corresponding control command of the set of control commands (Once parts have been dispensed into the transformation zone, the user has the option to repetitively scatter and selectively transform the parts until all parts have been removed by robot 100. By transforming individual parts, a certain number of parts dispensed into the transformation zone can be picked and removed by robot 100 in a predictable time with a minimum of wear on the parts [Column 12, lines 36-42]. Machine vision system 60 then analyzes the transformation zone for parts having the desired orientation, cluttered regions, and parts that can be transformed to the desired stable state (i.e., isolated individual parts). Robot 100 is first controlled to remove all the parts having the desired orientation. While robot 100 places parts, transformer 80 repetitively resolves cluttered areas and transforms isolated parts to the desired stable state [Column 12, lines 52-60]. The machine vision system is also configured in accordance with the invention to detect the orientation of parts on support 50 and to compare the detected orientations with a predetermined part orientation, which typically corresponds to the desired orientation of the part for pick-up by robot 100. The output of machine vision system 60 is used to control transformer 80 so that transformer 80 may reorient the part to the desired orientation. 
Finally, machine vision system 60 preferably provides feedback for controlling robot 100 to remove parts on support 50 having the desired orientation [Column 8, lines 10-20]); and repeating steps i) to iv.) iteratively until the object is oriented in the target orientation (The procedure as described can be repeated several times, or until a change in stable state is detected and stored by the machine vision system [Column 12, lines 1-20]). Buchi does not teach: the learned function is autonomously trained in a self-supervised manner. However, Kalashnikov teaches: the learned function is autonomously trained in a self-supervised manner (paragraphs 3 and 139). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi with the learned function is autonomously trained in a self-supervised manner as taught by Kalashnikov so as to allow the system to train the robot without requiring the supervision of a person. Regarding Claim 17. Buchi in combination with Kalashnikov teaches the method of claim 13. Buchi also teaches: further comprising training the learned function during a training stage, the training stage comprising: collecting the target orientation of the object on the supporting surface (FIG. 1 depicts an embodiment of a part feeding system 20 constructed in accordance with the present invention. Part feeding system 20 includes a parts dispenser 22 for supplying parts to a transformation unit 23 comprising a support 50 and a transformer 80. 
Support 50 provides a framed, substantially planar pick-up surface defining a transformation/selection zone where at least one positional parameter defining the positional state of parts is detected and analyzed by a machine vision system 60 including an image sensor 61 [Column 4, lines 51-65], which reads on collecting the target orientation of the object on the supporting surface); submitting to the set of actuators multiple sets of random control commands to apply different distributions of energy at different locations of the supporting surface (The impulse generators can move the membrane in a random transformation mode [Column 8, lines 40-62]. FIGS. 8A and 8B show the positioning of clumped parts on a support before (FIG. 8A) and after (FIG. 8B) operation in this "random" transformation mode [Column 10, lines 21-34]); detecting, using an imaging system, an orientation change of the object for each of the different distributions of energy (The experimentally-based method is based on tapping the part at different locations with different energies and observing the changes in the part's stable state, as detected by the machine vision system [Column 11, lines 60-63); and training parameters of the learned function to produce the sets of control commands to the set of actuators increasing the likelihood of the different distributions of energy at the current location of the object to change a current orientation of the object to the target orientation (There are several solutions for reorienting and scattering all the parts in the transformation zone according to different user needs. The most flexible approach is to utilize a single impulse generator (FIG. 7A) that is movable relative to membrane 52 in a plane parallel to membrane 52. To improve throughput and maintain the flexibility of a single impulse generator, an array of impulse generators (FIG. 
7B) can be moved relative to membrane 52 in order to minimize the average distance the impulse generators must be moved in order for one of the impulse generators to reach a desired part, as well as in the random transformation mode [Column 8, lines 40-50]).

Regarding Claim 18. Buchi in combination with Kalashnikov teaches the method of claim 13. Buchi also teaches: further comprising training the learned function using machine learning (A neural network “trained” by the machine vision system may be used to implement this experimentally-based method [Column 12, lines 21-35]), wherein the training of the learned function comprises: imaging a training supporting surface of a training system (Inherent; any supporting surface used for training is a supporting surface); accepting a second target orientation of the object at the training supporting surface (The desired assembly orientation reads on a target orientation [Column 2, lines 66-67, Column 3, lines 1-15]. The parts feeder can be designed to accept either a desired assembly orientation or a close-to desired orientation [Column 3, lines 27-37], which means more than one (a second) target orientation is possible); submitting to a set of training actuators different multiple sets of training control commands to apply different distributions of energy at different locations of the training supporting surface; detecting a change of the orientation of the object for each of the different distributions of energy; training parameters of the learned function to produce a set of output commands for the set of actuators such that the set of output commands increases a probability of the distributions of energy at a candidate current location of the object to change a candidate current orientation of the object to the second target orientation (Once parts have been dispensed into the transformation zone, the user has the option to repetitively scatter and selectively transform the parts until all parts have been removed by 
robot 100. By transforming individual parts, a certain number of parts dispensed into the transformation zone can be picked and removed by robot 100 in a predictable time with a minimum of wear on the parts [Column 12, lines 36-42]. Machine vision system 60 then analyzes the transformation zone for parts having the desired orientation, cluttered regions, and parts that can be transformed to the desired stable state (i.e., isolated individual parts). Robot 100 is first controlled to remove all the parts having the desired orientation. While robot 100 places parts, transformer 80 repetitively resolves cluttered areas and transforms isolated parts to the desired stable state [Column 12, lines 52-60]. The machine vision system is also configured in accordance with the invention to detect the orientation of parts on support 50 and to compare the detected orientations with a predetermined part orientation, which typically corresponds to the desired orientation of the part for pick-up by robot 100. The output of machine vision system 60 is used to control transformer 80 so that transformer 80 may reorient the part to the desired orientation. Finally, machine vision system 60 preferably provides feedback for controlling robot 100 to remove parts on support 50 having the desired orientation [Column 8, lines 10-20]); and outputting the parameters of the learned function (The output of machine vision system 60 preferably is used in the control of various aspects of the feeding system operation to facilitate the selection and transformation of the parts. For example, as discussed above, the machine vision system output may be used to control dispenser 22 to maintain the optimum number of parts in the transformation zone. Machine vision system 60 preferably is configured to detect whether parts are clumped or tangled on support 50, and transformer 80 is controlled responsive to such detection to separate the clumped or tangled parts. 
Machine vision system 60 is also configured in accordance with the invention to detect the orientation of parts on support 50 and to compare the detected orientations with a predetermined part orientation, which typically corresponds to the desired orientation of the part for pick-up by robot 100. The output of machine vision system 60 is used to control transformer 80 so that transformer 80 may reorient the part to the desired orientation [Column 8, lines 1-20]).

Claim(s) 3-4 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Buchi et al. US 6056108 A (“Buchi”) in combination with Kalashnikov et al. US 20210237266 A1 (“Kalashnikov”) as applied to claims 1 and 13 above, and further in view of Sundermeyer et al. US 20220288783 A1 (“Sundermeyer”).

Regarding Claim 3. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi does not teach: wherein the learned function is a classifier learned with one or a combination of k-nearest neighbors algorithm (k-NN), a support-vector machine (SVM), an rN (radius-neighborhood), a random forest, a relevance vector machine (RVM), a reinforcement learning (RL), a backward propagation minimizing a loss function. However, Sundermeyer teaches: wherein the learned function is a classifier learned with one or a combination of k-nearest neighbors algorithm (k-NN), a support-vector machine (SVM), an rN (radius-neighborhood), a random forest, a relevance vector machine (RVM), a reinforcement learning (RL), a backward propagation minimizing a loss function (Machine learning model which may include, among other possibilities, k-nearest neighbor (KNN) [paragraph 540]). 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that the learned function is a classifier learned with one or a combination of k-nearest neighbors algorithm (k-NN), a support-vector machine (SVM), an rN (radius-neighborhood), a random forest, a relevance vector machine (RVM), a reinforcement learning (RL), a backward propagation minimizing a loss function as taught by Sundermeyer because there are a limited number of learned functions that can be used to implement the system of Buchi, which is silent as to the type of learned function used, and applying one of the algorithms described in Sundermeyer would be an obvious design choice with a high chance of success.

Regarding Claim 4. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi does not teach: wherein the learned function is a classifier learned with a radius neighbors (rN) selector of a k-nearest neighbors (k-NN) learning. However, Sundermeyer teaches: wherein the learned function is a classifier learned with a radius neighbors (rN) selector of a k-nearest neighbors (k-NN) learning (Machine learning model which may include, among other possibilities, k-nearest neighbor (KNN) [paragraph 540]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that the learned function is a classifier learned with a radius neighbors (rN) selector of a k-nearest neighbors (k-NN) learning as taught by Sundermeyer because there are a limited number of learned functions that can be used to implement the system of Buchi, which is silent as to the type of learned function used, and applying one of the algorithms described in Sundermeyer would be an obvious design choice with a high chance of success.

Regarding Claim 15. Buchi in combination with Kalashnikov teaches the method of claim 13. 
Buchi does not teach: wherein the learned function is a classifier learned with one or a combination of k-nearest neighbors algorithm (k-NN), a support-vector machine (SVM), an rN (radius-neighborhood), a random forest, a relevance vector machine (RVM), a reinforcement learning (RL), or a backward propagation minimizing a loss function. However, Sundermeyer teaches this limitation (Machine learning model which may include, among other possibilities, k-nearest neighbor (KNN) [paragraph 540]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that the learned function is a classifier learned with one or a combination of k-nearest neighbors algorithm (k-NN), a support-vector machine (SVM), an rN (radius-neighborhood), a random forest, a relevance vector machine (RVM), a reinforcement learning (RL), or a backward propagation minimizing a loss function as taught by Sundermeyer because there are a limited number of learned functions that can be used to implement the system of Buchi, which is silent as to the type of learned function used, and applying one of the algorithms described in Sundermeyer would be an obvious design choice with a high chance of success.

Regarding Claim 16. Buchi in combination with Kalashnikov teaches the method of claim 13. Buchi does not teach: wherein the learned function is a classifier learned with a radius neighbors (rN) selector of a k-nearest neighbors (k-NN) learning. However, Sundermeyer teaches: wherein the learned function is a classifier learned with a radius neighbors (rN) selector of a k-nearest neighbors (k-NN) learning (Machine learning model which may include, among other possibilities, k-nearest neighbor (KNN) [paragraph 540]). 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that the learned function is a classifier learned with a radius neighbors (rN) selector of a k-nearest neighbors (k-NN) learning as taught by Sundermeyer because there are a limited number of learned functions that can be used to implement the system of Buchi, which is silent as to the type of learned function used, and applying one of the algorithms described in Sundermeyer would be an obvious design choice with a high chance of success.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Buchi et al. US 6056108 A (“Buchi”) in combination with Kalashnikov et al. US 20210237266 A1 (“Kalashnikov”) as applied to claim 7 above, and further in view of Seitel et al. US 20150041282 A1 (“Seitel”).

Regarding Claim 8. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 7. Buchi does not expressly teach: wherein the set of actuators comprise a combination of actuators of different types from the group. However, Seitel teaches: wherein the set of actuators comprise a combination of actuators of different types from the group (A system in which actuators for a parts feeder bowl can include a pneumatic cylinder or double cylinder, either through a motor driven screw, a hydraulic cylinder, a solenoid, or any device capable of raising the guide bar segment 134. All actuator types can be utilized for all embodiments of the unjamming system, within the spirit and scope of the claims [paragraph 51], meaning any combination of actuators of different types can be used). 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that the set of actuators comprise a combination of actuators of different types from the group as taught by Seitel because these different types of actuators are well-known in the art as means to energize a surface on which the parts are reoriented, and combining the different types of actuators is an obvious design choice with a high chance of success.

Claims 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Buchi et al. US 6056108 A (“Buchi”) in combination with Kalashnikov et al. US 20210237266 A1 (“Kalashnikov”) as applied to claims 1 and 13 above, and further in view of Kinoshita et al. US 20180345500 A1 (“Kinoshita”).

Regarding Claim 12. Buchi in combination with Kalashnikov teaches the robotic assembly of claim 1. Buchi does not teach: wherein each command of the set of control commands comprises instructions specifying one or more of a timing and a duration of an impulse force to be applied by one or more actuators of the set of actuators or an identifier of at least one actuator of the set of actuators. However, Kinoshita teaches: wherein each command of the set of control commands comprises instructions specifying one or more of a timing and a duration of an impulse force to be applied by one or more actuators of the set of actuators or an identifier of at least one actuator of the set of actuators (A similar apparatus and robot system, shown in FIG. 1, wherein the control apparatus may use control parameters that may include a frequency of a vibration signal to be supplied to the vibration actuator, amplitude of the vibration signal, and a vibration continuing time [paragraph 31]). 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that each command of the set of control commands comprises instructions specifying one or more of a timing and a duration of an impulse force to be applied by one or more actuators of the set of actuators or an identifier of at least one actuator of the set of actuators as taught by Kinoshita, in part because all of the impulses generated must have a start and end time, and because having specific timing and duration of the impulse force allows for better control over the manipulation of the platform and the objects on the platform.

Regarding Claim 14. Buchi in combination with Kalashnikov teaches the method of claim 13. Buchi also teaches: wherein the method further comprises: obtaining a second current orientation of the object based on a second instance of the pose data of the object; comparing the second current orientation of the object with the target orientation to determine a match (The machine vision system compares the detected orientation of a selected part with a predetermined orientation [Claim 13, Column 8, lines 7-20]); and terminating repetition of the steps i.) to iv.) based on the determined match (The procedure as described can be repeated several times, or until a change in stable state is detected and stored by the machine vision system [Column 12, lines 1-20]). Buchi does not teach: wherein multiple instances of the pose data are received at discrete time intervals, wherein the first current location and the first current orientation of the object are obtained based on a first instance of the pose data. However, Kinoshita teaches: wherein multiple instances of the pose data are received at discrete time intervals, wherein the first current location and the first current orientation of the object are obtained based on a first instance of the pose data (A similar apparatus and robot system, shown in FIG. 
1, wherein the control apparatus may use control parameters that may include a frequency of a vibration signal to be supplied to the vibration actuator, amplitude of the vibration signal, and a vibration continuing time [paragraph 31]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Buchi such that multiple instances of the pose data are received at discrete time intervals, wherein the first current location and the first current orientation of the object are obtained based on a first instance of the pose data as taught by Kinoshita, in part because all of the impulses generated must have a start and end time, and because having specific timing and duration of the impulse force allows for better control over the manipulation of the platform and the objects on the platform.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON G CAIN whose telephone number is (571)272-7009. The examiner can normally be reached Monday to Friday, 7:30am - 4:30pm EST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON G CAIN/Examiner, Art Unit 3656
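For readers mapping the rejection to the technology: the training stage recited in claims 13 and 17 (submit random control commands, detect the resulting orientation change with an imaging system, then fit a function from the needed change back to a command) can be caricatured in a few lines of Python. This is an illustrative sketch only; the toy surface physics, the nearest-neighbor stand-in for the "learned function", and every identifier below are hypothetical, drawn from neither Buchi, Kalashnikov, nor the application.

```python
import random

def simulated_surface(orientation_deg, command):
    """Toy stand-in for the vibrating support plus machine-vision feedback:
    an impulse rotates the part in proportion to its energy.  Purely
    illustrative; not Buchi's membrane physics."""
    return (orientation_deg + 180.0 * command["energy"]) % 360.0

def collect_training_data(n_samples, seed=0):
    """Training stage: submit random control commands and record the
    orientation change the imaging system detects for each one."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        before = rng.uniform(0.0, 360.0)           # imaged current orientation
        command = {
            "actuator_id": rng.randrange(4),       # cf. claim 12: actuator identifier,
            "duration_s": rng.uniform(0.01, 0.1),  # timing/duration of the impulse
            "energy": rng.uniform(0.0, 1.0),
        }
        after = simulated_surface(before, command)  # imaged resulting orientation
        change = (after - before) % 360.0           # detected orientation change
        samples.append((change, command))
    return samples

def learned_function(samples, current_deg, target_deg):
    """Nearest-neighbor stand-in for the learned function: return the stored
    command whose observed change best matches the change still needed."""
    needed = (target_deg - current_deg) % 360.0
    _, command = min(samples, key=lambda s: abs(s[0] - needed))
    return command

samples = collect_training_data(500)
cmd = learned_function(samples, current_deg=10.0, target_deg=100.0)
result = simulated_surface(10.0, cmd)
error = min((result - 100.0) % 360.0, (100.0 - result) % 360.0)
print(f"residual orientation error: {error:.2f} deg")
```

In the claimed loop, a nonzero residual would drive repetition of steps i.) to iv.) until the target orientation is reached, which is the iterative behavior the rejection reads onto Buchi's repeated transformation procedure.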

Prosecution Timeline

Feb 07, 2023
Application Filed
Feb 05, 2025
Non-Final Rejection — §103
May 12, 2025
Response Filed
Jun 30, 2025
Final Rejection — §103
Oct 02, 2025
Request for Continued Examination
Oct 13, 2025
Response after Non-Final Action
Oct 27, 2025
Non-Final Rejection — §103
Nov 25, 2025
Interview Requested
Dec 02, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Examiner Interview Summary
Jan 02, 2026
Response Filed
Feb 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573302
METHOD FOR INFRASTRUCTURE-SUPPORTED ASSISTING OF A MOTOR VEHICLE
2y 5m to grant Granted Mar 10, 2026
Patent 12558790
METHOD AND COMPUTING SYSTEMS FOR PERFORMING OBJECT DETECTION
2y 5m to grant Granted Feb 24, 2026
Patent 12552019
MACHINE LEARNING METHOD AND ROBOT SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12544144
DENTAL ROBOT AND ORAL NAVIGATION METHOD
2y 5m to grant Granted Feb 10, 2026
Patent 12541205
MOVEMENT CONTROL SUPPORT DEVICE AND METHOD
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
40%
Grant Probability
66%
With Interview (+26.1%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
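On how the projection figures relate to each other: the "with interview" probability appears to be the career allow rate plus the interview lift, treated as additive percentage points. That additive relationship is our reading of the dashboard, not a documented formula, and the variable names below are ours:

```python
career_allow_rate = 0.40   # examiner's career allow rate (52 / 130 resolved)
interview_lift = 0.261     # lift observed in resolved cases with an interview
with_interview = career_allow_rate + interview_lift
print(f"{with_interview:.1%}")  # prints "66.1%", displayed rounded as 66%
```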
