Prosecution Insights
Last updated: April 19, 2026
Application No. 18/394,287

ROBOTICS DEVICE CONTROL SIGNALS BASED ON MOVEMENT DATA

Non-Final OA — §101, §102, §103
Filed: Dec 22, 2023
Examiner: WOOD, BLAKE ANDREW
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ETH ZÜRICH
OA Round: 3 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 2y 12m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 72% (above average) — 102 granted / 142 resolved; +19.8% vs TC avg
Interview Lift: +16.7% (strong) — allowance rate for resolved cases with vs. without interview
Avg Prosecution: 2y 12m typical timeline; 39 applications currently pending
Career History: 181 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 142 resolved cases
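The examiner statistics above can be cross-checked from the raw career counts. The sketch below is illustrative only: the Tech Center baseline is inferred from the stated +19.8% delta (it is not given directly), and the report computes its +16.7% interview lift on the with- vs. without-interview subsets, which this simplified comparison only approximates.

```python
# Cross-check of the examiner dashboard figures above (illustrative).

granted, resolved = 102, 142          # career counts from the report

allow_rate = granted / resolved       # 102/142 ~= 0.718, shown as 72%
print(f"Career allow rate: {allow_rate:.1%}")

# The report states +19.8% vs the Tech Center average, implying a
# TC baseline of roughly 52% (inferred, not stated in the report).
tc_avg = allow_rate - 0.198
print(f"Implied TC average: {tc_avg:.1%}")

# The 88% "With Interview" figure against the 72% career rate is
# broadly consistent with the reported +16.7% interview lift.
with_interview = 0.88
print(f"Lift vs. career rate: {with_interview - allow_rate:+.1%}")
```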

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01 January 2026 has been entered.

Response to Amendment

Claims 1, 2, 6, 14, 15, and 18 have been newly amended. Claim 21 has been newly added. No claims have been newly canceled. Claims 1-21 remain pending in the present application.

Response to Arguments

Applicant's arguments filed 19 December 2025 have been fully considered and are partially persuasive.

Regarding claim 1, Applicant asserts that the previously applied prior art fails to teach every limitation of newly amended claim 1. Specifically, Applicant asserts that the combination of Ott and Kim fails to teach at least the newly added limitation of "receiving, from a user, a set of weights that prioritize at least one movement data point of the set of data points over another movement data point of the set of movement data points." Applicant notes that "Ott specifically teaches away from this claim element," citing Ott at Page 399, Paragraph 4, which recites in part: "[d]ue to the large number of markers we avoid a hard priority order between the markers." The examiner agrees, and as such, the previous 35 U.S.C. § 103 rejection of claim 1 has been withdrawn. A new ground of rejection, however, is made in view of Hayaishi. See the 35 U.S.C. § 102 rejection of at least claim 1, as well as the arguments pertaining to Applicant's interpretation of Hayaishi below, for further details.

Further regarding claim 1, Applicant asserts that Kim fails to teach at least the limitation of "receiving, from a user, a set of weights that prioritize at least one movement data point of the set of data points over another movement data point of the set of movement data points." Specifically, Applicant argues that "Kim discloses using 'weighted distances' to control gaps between the robot and the model, 'by setting the weight of the portion where the user wants the distance to be close to a large value as compared with other weights, the robot link string information is set so that the distance becomes close.' … In other words, Kim's weights are applied to differences between the robot and model, [sic] whereas the weights recited by claim 1 'prioritize at least one movement data point of the set of movement data points,' the set of movement data points being within the 'movement data corresponding to a desired motion for a robotics device.' Thus, weighting the motion data rather than weighting a model input and a robotic output as with Kim."

The examiner respectfully disagrees. First, the examiner notes that Kim was not used in the previous office action to teach the contested limitation above. Rather, the examiner used Hayaishi to teach the "set of weights" as claimed. The examiner believes that Applicant's arguments were mistakenly directed towards Kim, rather than Hayaishi. In order to promote compact prosecution, the examiner is interpreting Applicant's arguments as such.
Second, regarding Hayaishi, Applicant asserts that "Kim's [Hayaishi's] weights are applied to differences between the robot and model, [sic] whereas the weights recited by claim 1 'prioritize at least one movement data point of the set of movement data points,' the set of movement data points being within the 'movement data corresponding to a desired motion for a robotics device.' Thus, weighting the motion data rather than weighting a model input and a robotic output as with Kim [Hayaishi]."

The examiner asserts, however, that, under the broadest reasonable interpretation of the claim as presented, the "weights" must merely "prioritize at least one movement data point … over another movement data point…," that is, there is nothing in the claim that requires the motion data specifically to be weighted. Rather, the weights must merely be associated with prioritizing at least one movement data point over another. Hayaishi, in at least [0079], teaches:

Receiving a set of weights (0079, Here, Wj is a weight, and may be stored in, for example, the storage unit 12. The weights Wj are different in at least one value. Otherwise, there is no need to set weights. In addition, the weight Wj may be changed appropriately by the user or the like. In such a case, for example, by setting the weight of the portion where the user wants the distance to be close to a large value as compared with other weights, the robot link string information is set so that the distance becomes close.)
that prioritize at least one movement data point of the set of movement data points over another movement data point of the set of movement data points (0079, For example, the tip of the link row of the model 5 is included in a plurality of locations specified by the identifying unit 14, and the location in the link row of the robot 6 corresponding to the identified location that is the tip of the link row of the model 5 is In the case of the tip of the link row of the robot 6, by setting the weight of the distance of the tip of each link row of the model 5 and the robot 6 larger than the weight of the other distances, the robot link string information can be calculated so that the tip and the tip of the link string of the robot 6 are close to each other [emphasis added]. Specifically, the weight of the distance of the tip of each link row of the model 5 and the robot 6 may be set to "3", and the other weight may be set to "1".).

Regarding claim 20, Applicant asserts that the Kalouche reference fails to anticipate claim 20. Specifically, Applicant asserts that Kalouche fails to disclose at least the limitation of "determining a correlation between a first set of reference points corresponding to one or more locations on the first robotics device and a second set of reference points corresponding to one or more locations on a second robotics device, wherein the second robotics device has a second set of characteristics different from the first set of characteristics." In support of this, Applicant argues the following:

Kalouche does not disclose this element for several reasons. First, in the citation of ¶ [0052] of Kalouche referenced above, Kalouche states that indirect mapping may be used if the "robot's dimensions are on a different scale compared to the operator's [sic] body" Kalouche at ¶ [0052]. Where Kalouche discloses an operator, that operator is always human. One need look no further than the title of Kalouche's Application.
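The weighted-distance reading of Hayaishi [0079] that the examiner relies on can be illustrated with a short sketch. Everything here is hypothetical apart from the 3-vs-1 weighting, which mirrors the citation: the function name and the 2-D link points are invented for illustration and do not come from Hayaishi.

```python
import math

def weighted_distance_cost(model_pts, robot_pts, weights):
    """Sum of weighted Euclidean distances between corresponding
    model and robot link points (cf. Hayaishi [0079])."""
    return sum(
        w * math.dist(m, r)
        for m, r, w in zip(model_pts, robot_pts, weights)
    )

# Illustrative 2-D link points; the tip (last point) gets weight 3,
# the others weight 1, per the "3" vs "1" example in [0079].
model = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5)]
robot = [(0.0, 0.1), (1.0, 0.2), (2.0, 1.0)]
weights = [1.0, 1.0, 3.0]

cost = weighted_distance_cost(model, robot, weights)  # ~1.8 here
# A candidate pose that brings the tip closer scores better even if
# the other links drift slightly, because the tip's error is tripled.
```

Under this reading, the weights prioritize certain point-pairs in the cost being minimized, which is the basis of the examiner's mapping to the claimed "set of weights."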
"TELEOPERATING OF ROBOTS WITH TASKS BY MAPPING TO HUMAN [sic] OPERATOR POSE," Kalouche at Title. See also, "The disclosure relates generally to teleoperation of robots and specifically to teleoperation of robots based on a pose of a human [sic] operator," Kalouche at ¶ [0002]. "Alternatively, an exoskeleton can be worn to control a robot, which may allow for more intuitive and direct control of a robot arm with a morphology that is similar to the arm of a human [sic] operator." Id. at ¶ [0004]. "recorded data of how the human operator guided the robot, Id. at ¶ [0014]. "The imitation learning engine 150 implements an algorithm to learn how a robot can perform different tasks based on the examples from human operators [sic]." Id. at ¶ [0033], etc. The description of Kalouche is replete with discussion of the operator being a human. Yet, Kalouche is silent on the operator ever being a robot. While Kalouche does conflate the terms "subject" and "operator" (see, Kalouche at ¶ [0021]), and that the subject may be many things, including a robot (¶ [0016]) the disclosure never explicitly states – nor implies – that the operator may be anything other than a human. As such, particularly the portion of ¶ [0052] cited by the Office cannot disclose "determining a correlation between a first set of reference points corresponding to one or more locations on the first robotics device [sic] and a second set of reference points corresponding to one or more locations on a second robotics device, wherein the second robotics device has a second set of characteristics different from the first set of characteristics." Taking the instances in which indirect mapping may be used, in turn. First, "Indirect mapping may be used if 1) the robot's dimensions are on a different scale compared to the operator's body." Id. As discussed above, the operator of Kalouche is only ever described as being human. 
Therefore, this instance must be interpreted based on Kalouche's own specification as applying when a robot's dimensions are different compared to those of a human [sic] operator. Second, "2) the robot has a different configuration or number of joints compared to the operator's body." Id. As above, based on Kalouche's own specification, this operator must be a human as well. Third, "3) it is desired to have varying level of control sensitivity in joint or end-effector space." Id. This portion of Kalouche is not describing "the second robotics device has a second set of characteristics different from the first set of characteristics" of claim 20, but rather a different desired level of control of a robot.

Lastly, in the Response to Arguments section of the Office action, the Office inserts disclosure where none exists. "indirect mapping may be employed if the robot 135 does not have an anthropomorphic design or similarly dimensioned arms, legs, and/or fingers [to the subject]." Office action at 3. The Office's insertion of "to the subject" into the citation where none exists amounts to the Office taking improper official notice.

The examiner respectfully disagrees for at least the following reasons. Regarding Applicant's assertions that "where Kalouche discloses an operator that operator is always human," and "the disclosure [of Kalouche] never explicitly states – nor implies – that the operator may be anything other than a human," the examiner notes that while Kalouche does make frequent reference to the "operator" being a human (i.e., in the title, and in ¶¶ [0002], [0004], [0014], etc. as pointed out by Applicant), these disclosures do not preclude the "operator" from being a robot.
Specifically, the examiner disagrees that "where Kalouche discloses an operator that operator is always human," (emphasis added) and "the disclosure [of Kalouche] never explicitly states – nor implies – that the operator may be anything other than a human," (emphasis added) as Kalouche explicitly states that "[a] subject herein refers to any moving objects that have more than one pose. The moving objects include, among other objects, animals, people, and robots. Although embodiments herein are described with reference to humans as the subject, note that the present invention can be applied essentially in the same manner to any other object or animal having more than one pose. In several instances, the subject may also be referred to as an operator" (emphasis added). Kalouche at ¶ [0016]. Kalouche further explicitly recites that "[f]or the sake of clarity, it is understood that the subject and the operator are referred to interchangeably…" (emphasis added). Id. at ¶ [0021]. In view of these explicit disclosures, the examiner asserts that Kalouche does appropriately disclose wherein the subject/operator is a robot. The examiner believes that it is incorrect to reduce the above cited portions of Kalouche to mean "that the subject may be many things, including a robot" or merely "conflat[ing] the terms subject and operator," as to do so disregards the "as a whole" consideration of the prior art (MPEP 2141.02(VI), "[a] prior art reference must be considered in its entirety, i.e., as a whole…"). 
Even if, arguendo, Kalouche was not considered to explicitly disclose the above, it would at the least implicitly disclose wherein the operator is a robot, as Kalouche's disclosure that "[a] subject herein refers to any moving objects that have more than one pose … [t]he moving objects include, among other objects, animals, people, and robots…" at the least implies that the inventors of Kalouche had considered situations in which the subject/operator was a robot, rather than a human.

Regarding Applicant's specific arguments against ¶ [0052] of Kalouche, the examiner believes that the above arguments pertaining to Kalouche's disclosure of the subject/operator being a robot render moot Applicant's specific arguments against the instances in which indirect mapping is used. Specifically, the examiner asserts that the basis of these specific arguments, that "the operator is always a human," has been shown by the examiner to not be the case.

Regarding Applicant's assertion that "[t]he Office's insertion of 'to the subject' into the citation where none exists amounts to the Office taking improper official notice," the examiner notes that, in fact, no official notice was taken. Specifically, the examiner asserts that the inclusion of "to the subject" was the examiner's interpretation of the citation, rather than the examiner taking official notice. The examiner notes that there was no statement alleging that the "fact" is common knowledge or otherwise well-known, nor was the "fact" included in the rejection of claim 20 itself, and it was merely added to the citation in order to provide Applicant further context for the examiner's interpretation of the reference. Further, even if, arguendo, the addition of "to the subject" was the Office taking official notice, Applicant's traversal of such official notice would be an improper traversal.
MPEP 2144.03(C) states that "[t]o adequately traverse a finding based on official notice, an applicant must specifically point out the supposed errors in the examiner's action, which would include stating why the noticed fact is not considered to be common knowledge or well-known in the art. A mere request by the applicant that the examiner provide documentary evidence in support of an officially-noticed fact is not a proper traversal" (emphasis added). Applicant provides no statement as to why the noticed fact is not considered to be common knowledge or otherwise well-known, merely requesting that "[i]f the examiner is relying on personal knowledge to support the finding of what is known in the art, Applicant respectfully requests that the examiner provide an affidavit or declaration setting forth specific factual statements and explanation to support the finding, pursuant to MPEP 2144.03(C)."

Further still, the examiner asserts that a person having ordinary skill in the art, reading Kalouche, would have arrived at the same interpretation as the examiner. The examiner notes that the first sentence of ¶ [0052] of Kalouche recites: "In a second control mode, indirect mapping may be employed if the robot 135 does not have an anthropomorphic design or similarly dimensioned arms, legs, and/or fingers." Particularly, the examiner notes that this sentence is grammatically incomplete, specifically due to the fact that it is not explicitly stated what the "arms, legs, and/or fingers" are "similarly dimensioned to," therefore requiring the use of context clues to determine what "similarly dimensioned to" is referencing. Noting the fact that the "not hav[ing] an anthropomorphic design" and the "arms, legs, and/or fingers" are in reference to the "robot 135," the robot itself is precluded from being what the "arms, legs, and/or fingers" are "similarly dimensioned to," as it is unclear how a robot could "not … [be] similarly dimensioned" to itself.
Moving to the next sentence in ¶ [0052] of Kalouche, which recites: "Indirect mapping may use a linear or non-linear function to map an estimate of the limbs and joint angles of the operator to the segments and joint angles of the robot 135" (emphasis added), it becomes clear that what the "arms, legs, and/or fingers" are "not … similarly dimensioned [to]" is the operator (i.e., the subject as explicitly disclosed in at least ¶¶ [0016] and [0021] of Kalouche).

Lastly, the examiner asserts that even if, arguendo, Kalouche is not considered to disclose wherein the subject/operator is "the first robotics device," Kalouche very explicitly suggests wherein the subject/operator is a robot. Specifically, the examiner points to at least ¶ [0016] of Kalouche, which recites, inter alia, "… the present invention can be applied essentially in the same manner to any other object or animal having more than one pose." This, with consideration that Kalouche also discloses in ¶ [0016] that "A subject herein refers to any moving objects that have more than one pose. The moving objects include, among other objects, animals, people, and robots" (emphasis added), provides an indication to the examiner that the inventors of Kalouche had already considered wherein the "operator/subject" was a robot. Hence, Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because they are directed towards an abstract idea without significantly more.

101 Analysis – Step 1

Claims 1-19 and 21 are directed towards a “computer-implemented method” (i.e., a process). Claim 20 is directed towards “at least one non-transitory computer-readable medium…” (i.e., a manufacture).
Therefore, claims 1-21 are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.

Independent claim 1 includes limitations that recite a mental process (emphasized below) and will be used as the representative claim for the remainder of the 35 U.S.C. 101 rejection. Claim 1 recites:

A computer-implemented method of determining control signals for robotics devices, the method comprising:
Receiving movement data corresponding to a desired motion for a robotics device;
Receiving robotics device data for the robotics device, wherein the robotics device data includes control data and a set of reference points corresponding to a set of locations on the robotics device;
Determining a correlation between a set of movement data points in the movement data and the set of reference points;
Receiving a set of weights that prioritize at least one movement data point of the set of movement data points over another movement data point of the set of movement data points; and
Determining, using the control data, at least one control signal to change a state of the robotics device based on the desired motion for the robotics device, wherein the at least one control signal is determined based on a distance between the at least one prioritized movement data point in the set of movement data points and at least one reference point in the set of reference points.

The examiner submits that the foregoing bolded limitation(s) constitute a “mental process”, because under its broadest reasonable interpretation, the claim covers actions capable of being performed in the human mind.
Specifically, the examiner asserts that “determining a correlation between a set of movement data points … and the set of reference points” and “determining … at least one control signal…” amount to a mere mental judgement as to a correlation between points, and a mere mental judgement as to how the robotics device should move, respectively. Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking the use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations”, while the bolded portions continue to represent the “abstract idea”):

A computer-implemented method of determining control signals for robotics devices, the method comprising:
Receiving movement data corresponding to a desired motion for a robotics device;
Receiving robotics device data for the robotics device, wherein the robotics device data includes control data and a set of reference points corresponding to a set of locations on the robotics device;
Determining a correlation between a set of movement data points in the movement data and the set of reference points;
Receiving a set of weights that prioritize at least one movement data point of the set of movement data points over another movement data point of the set of movement data points; and
Determining, using the control data, at least one control signal to change a state of the robotics device based on the desired motion for the robotics device, wherein the at least one control signal is determined based on a distance between the at least one prioritized movement data point in the set of movement data points and at least one reference point in the set of reference points.

For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the limitations of “receiving movement data…,” “receiving robotics device data…,” and “receiving a set of weights,” the examiner asserts that these amount to insignificant, extra-solution activity in the form of mere data gathering. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.
Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine if it is more than what is well-understood, routine, and conventional activity in the field.
The additional limitations of “receiving movement data…,” “receiving robotics device data…,” and “receiving a set of weights,” are well-understood, routine, and conventional activities because MPEP § 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, independent claim 1 is not patent eligible.

Regarding independent claim 14, the examiner notes that while independent claim 14 is not fully commensurate in scope to independent claim 1, it contains no additional limitations which would cause the claims to be patent eligible. Specifically, the examiner notes that, similar to the analysis of independent claim 1 above, independent claim 14 contains mental processes in the forms of “determining characteristics of the robotics device…,” “determining a mapping between the target motion for the robotics device and a trajectory for the set of reference points…,” and “generating a control signal,” which all amount to mental determinations capable of being performed in the human mind. Additionally, independent claim 14 contains insignificant, extra-solution activity in the form of “receiving movement data…” and “receiving a set of weights…,” which, as shown above, are well-understood, routine, and conventional activities when claimed in a generic manner. Hence, independent claim 14 is not patent eligible.
Regarding independent claim 20, the examiner notes that independent claim 20 contains mental processes in the form of “determining a correlation…,” and “generating a modified control signal…,” which amount to mere mental determinations as to the correlation and what type of control should be performed, respectively. Further, independent claim 20 contains insignificant, extra-solution activity in the form of “receiv[ing] a desired motion sequence…,” which, as shown above, is well-understood, routine, and conventional activity when claimed in a generic manner. Hence, independent claim 20 is not patent eligible.

Regarding dependent claim 2, dependent claim 2 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 2 merely provides description of what the “movement data” comprises. Hence, dependent claim 2 is not patent eligible. Dependent claim 15 is similar in scope to dependent claim 2, and is similarly not patent eligible.

Regarding dependent claim 3, dependent claim 3 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 3 merely provides description of how the mental process of “determining a control signal” is performed, i.e., by minimizing the distance between the movement data point and the reference point. Hence, dependent claim 3 is not patent eligible.

Regarding dependent claim 4, dependent claim 4 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 4 merely recites an additional mental process in the form of “determining a trajectory,” which the examiner asserts amounts to a mere mental judgment as to what the trajectory is. Hence, dependent claim 4 is not patent eligible. Dependent claim 16 is similar in scope to dependent claim 4, and is similarly not patent eligible.
Regarding dependent claim 5, dependent claim 5 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 5 merely recites the source of “the movement data points”. Hence, dependent claim 5 is not patent eligible. Dependent claim 17 is similar in scope to dependent claim 5, and is similarly not patent eligible.

Regarding dependent claim 6, dependent claim 6 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 6 merely provides description of how the mental process of “determining a control signal” is performed. Hence, dependent claim 6 is not patent eligible. Dependent claim 18 is similar in scope to dependent claim 6, and is similarly not patent eligible.

Regarding dependent claim 7, dependent claim 7 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 7 merely recites what kind of robotic device the robotics device is. Hence, dependent claim 7 is not patent eligible. Dependent claim 19 is similar in scope to dependent claim 7, and is similarly not patent eligible.

Regarding dependent claim 8, dependent claim 8 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 8 merely recites what kind of data is included with the received robotics device data. Hence, dependent claim 8 is not patent eligible.

Regarding dependent claim 9, dependent claim 9 does not include additional limitations that would cause the claim to be patent eligible.
Specifically, dependent claim 9 merely recites an additional mental process in the form of “modifying the control signal,” which the examiner asserts is able to be performed in the human mind as it amounts to merely mentally adjusting the “control signal” that was previously determined in the human mind, i.e., mentally creating a first desired control signal, and then mentally updating it based on environmental conditions. Hence, dependent claim 9 is not patent eligible.

Regarding dependent claim 10, dependent claim 10 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 10 merely provides further description of the “determining a control signal” by requiring that it be performed by “determining error values,” which the examiner asserts could similarly be performed in the human mind, as it amounts to merely making a mental judgement or determination as to the differences between the movement data points and the reference data points. Hence, dependent claim 10 is not patent eligible.

Regarding dependent claim 11, dependent claim 11 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 11 merely provides description regarding the “determined control signal,” requiring it be determined in real time, which does not change the “determining” appreciably, as mentally determining something would similarly be performed “in real time”. Hence, dependent claim 11 is not patent eligible.

Regarding dependent claim 12, dependent claim 12 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 12 merely describes what the “state” of the robotics device includes. Hence, dependent claim 12 is not patent eligible.

Regarding dependent claim 13, dependent claim 13 does not include additional limitations that would cause the claim to be patent eligible.
Specifically, dependent claim 13 merely describes what the “state” of the robotics device includes. Hence, dependent claim 13 is not patent eligible.

Regarding dependent claim 21, dependent claim 21 does not include additional limitations that would cause the claim to be patent eligible. Specifically, dependent claim 21 merely recites additional mental processes in the form of “determining a possible state of the robot,” “determining a trajectory associated with at least one movement data point,” and “determining a bi-level optimization of the robotics device movements…,” which all amount to mental determinations as to the possible state, a trajectory, and an optimization scheme. Hence, dependent claim 21 is not patent eligible.

The examiner recommends that, in order to overcome the above rejections, Applicant include a discrete control step in the independent claims, e.g., "operating the robotics device using the determined at least one control signal…" or the like. This discrete control step would not be capable of being performed in the human mind, and, if incorporated into the independent claims, would bring the claims into patent eligibility.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-8, 10, 11, 13-15, and 17-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hayaishi (JP6529039B2), hereafter Hayaishi. 
Regarding claim 1, Hayaishi discloses a computer-implemented method of determining control signals for robotics devices, the method comprising: Receiving movement data corresponding to a desired motion for a robotics device (0022, The model link string information may be obtained, for example, by the motion capture technology. Motion capture may be performed, for example, by detecting the position of a marker attached to the model 5, or may be performed using a captured image of the model 5. The motion capture performed using a marker may be, for example, an optical motion capture, a mechanical motion capture, or a magnetic motion capture. In addition, when performing motion capture using a captured image, the accuracy of motion capture using the captured image may be improved, for example, by measuring the distance to the model 5 or the like. For example, KINECT (registered trademark) or RealSense may be used as a device for performing motion capture using a captured image.); Receiving robotics device data for the robotics device, wherein the robotics device data includes control data and a set of reference points corresponding to a set of locations on the robotics device (0025, The storage unit 12 stores joint movement range information which is information indicating the movement range of joints in the link row of the robot 6. It is preferable that the joint movement range information includes information on all joints whose movement range is limited. For example, when the angle of the joint Pi between the links 101 and 102 in the link row of the robot 6 is represented by θ1i, θ2i, and θ3i as shown in FIG. 5, the joint movement range information is one of θ1i, θ2i, and θ3i. The information may be information indicating a range regarding one or more angles having a limited range of motion. 
0032, The calculation unit 15 is configured so that the objective function corresponding to each distance between the plurality of locations whose coordinate values are specified by the identification unit 14 and the plurality of locations in the link row of the robot 6 respectively corresponding to the plurality Robot link string information, which is information on the position of each link included in the link string of 6, is calculated. Here, each of a plurality of points in the link row of the robot 6 corresponding to a plurality of reference points in the link row of the model 5 may also be referred to as a reference point. For example, a plurality of locations in the link array of the robot 6 corresponding to a plurality of locations in the link array of the model 5 identified by the identification unit 14 have, for example, a predetermined ratio between end points of the link array of the robot 6…); Determining a correlation between a set of movement data points in the movement data and the set of reference points (0032, The calculation unit 15 is configured so that the objective function corresponding to each distance between the plurality of locations whose coordinate values are specified by the identification unit 14 and the plurality of locations in the link row of the robot 6 respectively corresponding to the plurality Robot link string information, which is information on the position of each link included in the link string of 6, is calculated. 
Here, each of a plurality of points in the link row of the robot 6 corresponding to a plurality of reference points in the link row of the model 5 may also be referred to as a reference point.); Receiving a set of weights that prioritize at least one movement data point of the set of movement data points over another movement data point of the set of movement data points (0079, Further, in the present embodiment, the objective function used by the calculation unit 15 in optimization may be one corresponding to each weighted distance. That is, weights are multiplied as coefficients by the distances between the plurality of reference points in the link string of the model 5 and the plurality of reference points in the link string of the robot 6 respectively corresponding to the plurality of reference points included in the objective function It may be Specifically, the objective function E used in the above-mentioned specific example may be changed to the following equation. Here, Wj is a weight, and may be stored in, for example, the storage unit 12. The weights Wj are different in at least one value. Otherwise, there is no need to set weights. In addition, the weight Wj may be changed appropriately by the user or the like. In such a case, for example, by setting the weight of the portion where the user wants the distance to be close to a large value as compared with other weights, the robot link string information is set so that the distance becomes close.); and Determining, using the control data, at least one control signal to change a state of the robotics device based on the desired motion for the robotics device (0066, The output unit 17 outputs the robot link string information received from the noise removing unit 16 to the robot 6 (step S107). As a result, the robot 6 controls the motor or the like of each joint so that each joint of the link array has an angle included in the robot link array information. 
By doing so, the shape of the link row of the model 5 and the shape of the link row of the robot 6 become similar. Also, by repeating such an operation, the robot 6 can be controlled to imitate the operation of the model 5, and the robot 6 can be operated as a mirror robot.), wherein the at least one control signal is determined based on a distance between the at least one prioritized movement data point in the set of movement data points and at least one reference point in the set of reference points (0079, For example, the tip of the link row of the model 5 is included in a plurality of locations specified by the identifying unit 14, and the location in the link row of the robot 6 corresponding to the identified location that is the tip of the link row of the model 5 is In the case of the tip of the link row of the robot 6, by setting the weight of the distance of the tip of each link row of the model 5 and the robot 6 larger than the weight of the other distances, The robot link string information can be calculated so that the tip and the tip of the link string of the robot 6 are close to each other. Specifically, the weight of the distance of the tip of each link row of the model 5 and the robot 6 may be set to "3", and the other weight may be set to "1". Since the tip of the link row of the model 5 and the tip of the link row of the robot 6 are usually at a position of interest, the motion of the robot 6 is approximated by the motion of the model 5 by becoming a positional relationship approximating both…).

Regarding claim 2, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the movement data comprises motion capture data, animation data, or sensor data (0022, The model link string information may be obtained, for example, by the motion capture technology. 
Motion capture may be performed, for example, by detecting the position of a marker attached to the model 5, or may be performed using a captured image of the model 5. The motion capture performed using a marker may be, for example, an optical motion capture, a mechanical motion capture, or a magnetic motion capture.).

Claim 15 is similar in scope to claim 2, and is similarly rejected.

Regarding claim 3, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the at least one control signal is determined based on minimizing the distance between the at least one movement data point and the at least one reference point (0033, Distances between a plurality of locations specified by the identifying unit 14 and a plurality of locations in the link row of the robot 6 respectively corresponding to the plurality of locations are a specified reference point and a link of the robot 6 corresponding to the reference point It is the distance to the reference point in the column. The objective function according to each distance may be a function whose value increases as each distance increases. Specifically, the objective function may be the sum of squares of each distance. In that case, mapping using the least squares method will be performed. 0034, Further, to calculate robot link string information so that the objective function becomes smaller may be to calculate robot link string information so as to minimize the objective function, Examiner's note: the examiner asserts that the minimizing of the objective function, which is a sum relating to the distance between reference points on the model and reference points on the robot, amounts to a minimizing of the distances). 
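The least-squares mapping that the examiner reads out of Hayaishi at [0033]-[0034] can be illustrated with a minimal sketch. The 2-link planar arm, the function names, and the grid-search solver below are illustrative assumptions, not Hayaishi's disclosed implementation:

```python
import math

def fk(theta1, theta2, l1=1.0, l2=1.0):
    # Forward kinematics of a planar 2-link arm: returns the elbow and tip
    # positions, standing in for "reference points" on the robot.
    ex, ey = l1 * math.cos(theta1), l1 * math.sin(theta1)
    tx = ex + l2 * math.cos(theta1 + theta2)
    ty = ey + l2 * math.sin(theta1 + theta2)
    return [(ex, ey), (tx, ty)]

def objective(angles, targets):
    # Sum of squares of the distances between each robot reference point and
    # the corresponding model reference point (cf. Hayaishi [0033]).
    return sum((px - tx) ** 2 + (py - ty) ** 2
               for (px, py), (tx, ty) in zip(fk(*angles), targets))

def solve(targets, steps=360):
    # Coarse grid search for joint angles minimizing the objective, a
    # stand-in for the optimization described in Hayaishi [0034].
    best_e, best_th = float("inf"), (0.0, 0.0)
    for i in range(steps):
        for j in range(steps):
            th = (-math.pi + 2 * math.pi * i / steps,
                  -math.pi + 2 * math.pi * j / steps)
            e = objective(th, targets)
            if e < best_e:
                best_e, best_th = e, th
    return best_th
```

Because the objective is a sum of squared reference-point distances, driving it toward its minimum necessarily drives each of those distances down, which is the sense of the examiner's note that minimizing the objective function "amounts to a minimizing of the distances."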
Regarding claim 5, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the set of movement data points, the set of reference points, or both are identified by a user (0022, The model link string information may be obtained, for example, by the motion capture technology. Motion capture may be performed, for example, by detecting the position of a marker attached to the model 5, or may be performed using a captured image of the model 5. The motion capture performed using a marker may be, for example, an optical motion capture, a mechanical motion capture, or a magnetic motion capture. Examiner's note: the marker positions, i.e., the movement data points, would be chosen by the user when attaching the markers to the model).

Claim 17 is similar in scope to claim 5, and is similarly rejected.

Regarding claim 6, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the at least one control signal is determined at least in part based on the set of weights (0079, In the case of the tip of the link row of the robot 6, by setting the weight of the distance of the tip of each link row of the model 5 and the robot 6 larger than the weight of the other distances, The robot link string information can be calculated so that the tip and the tip of the link string of the robot 6 are close to each other. Specifically, the weight of the distance of the tip of each link row of the model 5 and the robot 6 may be set to "3", and the other weight may be set to "1". Since the tip of the link row of the model 5 and the tip of the link row of the robot 6 are usually at a position of interest, the motion of the robot 6 is approximated by the motion of the model 5 by becoming a positional relationship approximating both).

Claim 18 is similar in scope to claim 6, and is similarly rejected. 
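The prioritization effect of the weights Wj in Hayaishi's [0079] can be seen in a deliberately reduced one-dimensional sketch (the scenario and function name are illustrative assumptions): when a single robot coordinate x must compromise between two model targets, the minimizer of the weighted objective w1·(x − t1)² + w2·(x − t2)² is the weighted mean, so raising one weight pulls the solution toward that target.

```python
def weighted_compromise(t1, t2, w1, w2):
    # Minimizer of w1*(x - t1)**2 + w2*(x - t2)**2, obtained by setting the
    # derivative 2*w1*(x - t1) + 2*w2*(x - t2) to zero and solving for x.
    return (w1 * t1 + w2 * t2) / (w1 + w2)
```

With Hayaishi's example weights of "3" for the tip and "1" elsewhere, `weighted_compromise(0.0, 1.0, 1, 3)` returns 0.75: the compromise lands three times closer to the prioritized target than to the other.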
Regarding claim 7, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the robotics device is a legged robotics device, an under-actuated robotics device, a simulation of a robotics device, or a combination thereof (0035, robot 6 is a humanoid robot, Examiner's note: a humanoid robot would be a legged robot).

Claim 19 is similar in scope to claim 7, and is similarly rejected.

Regarding claim 8, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the robotics device data comprises dimensions of the robotics device, mass distribution of the robotics device, a set of possible states of the robotics device, degrees of freedom of at least one component of the robotics device, or combinations thereof (0027, The storage unit 12 may store information other than the joint movement range information. For example, link length information indicating the length of each link in the link string of the robot 6 may be stored in the storage unit 12. It is known as forward kinematics of a robot that the position of the link row of the robot 6 can be specified by using the link length information and the angle of each joint in the link row of the robot 6. Further, for example, information on the direction of the axis of each joint in the link row of the robot 6 may be stored in the storage unit 12.).

Regarding claim 10, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein determining the at least one control signal comprises determining error values associated with distances between the set of movement data points and the set of reference points (0063, The objective function E is, for example, the square of the distance between the reference point ps1 and the reference point PS1, the square of the distance between the reference point ps2 and the reference point PS2, the square of the distance between the reference point ps3 and the reference point PS3 in FIG. 
The square of the distance between the reference point ps4 and the reference point PS4 is added. If both end points of the link string are also reference points, their distances may be included in the objective function.).

Regarding claim 11, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the at least one control signal is determined in real time (0037, The output unit 17 outputs information on the angle of each joint of the link row of the robot 6 according to the robot link row information. The robot link string information is robot link string information from which high-frequency noise components have been removed by the noise removing unit 16. When the mapping process is performed in real time, the output unit 17 may repeat the information in time series and may output the information at fixed or indefinite time intervals. In this case, for example, the robot 6 operates in accordance with the movement of the model 5.).

Regarding claim 13, Hayaishi discloses the computer-implemented method of claim 1, and further discloses wherein the state of the robotics device relates to a linear movement or an angular movement of at least one component of the robotics device (0037, When the robot 6 has two or more link rows, the output unit 17 may output information for each of the link rows of the robot 6. When the robot link string information is information on the angle of each joint of the link string of the robot 6, the output unit 17 may output the robot link string information as it is. When the robot link string information is information indicating the position of the link string of the robot 6, the output unit 17 converts the robot link string information into information on the angle of each joint of the link string of the robot 6 and outputs Do. For example, in FIG. 5, when the positions of the links 101 and 102 are known, a method of specifying the angle with respect to each axis of the joint Pi is already known. 
It can be converted into information on the angle of each joint of the six link rows. The information on the angle of each joint of the link row of the robot 6 may be information indicating the angle of each joint, and as a result, it may be information on which the angle of each joint can be known.).

Regarding claim 14, Hayaishi discloses a computer-implemented method of retargeting of robotics device movements, the method comprising: Receiving movement data corresponding to a target motion for a robotics device (0022, The model link string information may be obtained, for example, by the motion capture technology. Motion capture may be performed, for example, by detecting the position of a marker attached to the model 5, or may be performed using a captured image of the model 5. The motion capture performed using a marker may be, for example, an optical motion capture, a mechanical motion capture, or a magnetic motion capture. In addition, when performing motion capture using a captured image, the accuracy of motion capture using the captured image may be improved, for example, by measuring the distance to the model 5 or the like. For example, KINECT (registered trademark) or RealSense may be used as a device for performing motion capture using a captured image.); Determining characteristics of the robotics device, wherein the characteristics comprise control data (0025, The storage unit 12 stores joint movement range information which is information indicating the movement range of joints in the link row of the robot 6. It is preferable that the joint movement range information includes information on all joints whose movement range is limited. For example, when the angle of the joint Pi between the links 101 and 102 in the link row of the robot 6 is represented by θ1i, θ2i, and θ3i as shown in FIG. 5, the joint movement range information is one of θ1i, θ2i, and θ3i. 
The information may be information indicating a range regarding one or more angles having a limited range of motion.), a set of reference points corresponding to a set of locations on the robotics device (0032, The calculation unit 15 is configured so that the objective function corresponding to each distance between the plurality of locations whose coordinate values are specified by the identification unit 14 and the plurality of locations in the link row of the robot 6 respectively corresponding to the plurality Robot link string information, which is information on the position of each link included in the link string of 6, is calculated. Here, each of a plurality of points in the link row of the robot 6 corresponding to a plurality of reference points in the link row of the model 5 may also be referred to as a reference point. For example, a plurality of locations in the link array of the robot 6 corresponding to a plurality of locations in the link array of the model 5 identified by the identification unit 14 have, for example, a predetermined ratio between end points of the link array of the robot 6…), and dimensions of the robotics device (0027, The storage unit 12 may store information other than the joint movement range information. For example, link length information indicating the length of each link in the link string of the robot 6 may be stored in the storage unit 12. It is known as forward kinematics of a robot that the position of the link row of the robot 6 can be specified by using the link length information and the angle of each joint in the link row of the robot 6. 
Further, for example, information on the direction of the axis of each joint in the link row of the robot 6 may be stored in the storage unit 12.); Determining a mapping between the target motion for the robotics device and a trajectory for the set of reference points, wherein the mapping is based on the control data and the dimensions of the robotics device (0032, The calculation unit 15 is configured so that the objective function corresponding to each distance between the plurality of locations whose coordinate values are specified by the identification unit 14 and the plurality of locations in the link row of the robot 6 respectively corresponding to the plurality Robot link string information, which is information on the position of each link included in the link string of 6, is calculated. Here, each of a plurality of points in the link row of the robot 6 corresponding to a plurality of reference points in the link row of the model 5 may also be referred to as a reference point. 0027, The storage unit 12 may store information other than the joint movement range information. For example, link length information indicating the length of each link in the link string of the robot 6 may be stored in the storage unit 12. It is known as forward kinematics of a robot that the position of the link row of the robot 6 can be specified by using the link length information and the angle of each joint in the link row of the robot 6. 
Further, for example, information on the direction of the axis of each joint in the link row of the robot 6 may be stored in the storage unit 12.), and wherein the mapping minimizes a distance between the set of reference points and a set of movement data points corresponding to the target motion (0033, Distances between a plurality of locations specified by the identifying unit 14 and a plurality of locations in the link row of the robot 6 respectively corresponding to the plurality of locations are a specified reference point and a link of the robot 6 corresponding to the reference point It is the distance to the reference point in the column. The objective function according to each distance may be a function whose value increases as each distance increases. Specifically, the objective function may be the sum of squares of each distance. In that case, mapping using the least squares method will be performed. 0034, Further, to calculate robot link string information so that the objective function becomes smaller may be to calculate robot link string information so as to minimize the objective function, Examiner's note: the examiner asserts that the minimizing of the objective function, which is a sum relating to the distance between reference points on the model and reference points on the robot, amounts to a minimizing of the distances); Receiving a set of weights that prioritize at least one movement data point of the set of movement data points over another movement data point of the set of movement data points (0079, Further, in the present embodiment, the objective function used by the calculation unit 15 in optimization may be one corresponding to each weighted distance. 
That is, weights are multiplied as coefficients by the distances between the plurality of reference points in the link string of the model 5 and the plurality of reference points in the link string of the robot 6 respectively corresponding to the plurality of reference points included in the objective function It may be Specifically, the objective function E used in the above-mentioned specific example may be changed to the following equation. Here, Wj is a weight, and may be stored in, for example, the storage unit 12. The weights Wj are different in at least one value. Otherwise, there is no need to set weights. In addition, the weight Wj may be changed appropriately by the user or the like. In such a case, for example, by setting the weight of the portion where the user wants the distance to be close to a large value as compared with other weights, the robot link string information is set so that the distance becomes close.); and Generating a control signal based on the mapping, and the at least one prioritized movement data point in the set of movement data points (0066, The output unit 17 outputs the robot link string information received from the noise removing unit 16 to the robot 6 (step S107). As a result, the robot 6 controls the motor or the like of each joint so that each joint of the link array has an angle included in the robot link array information. By doing so, the shape of the link row of the model 5 and the shape of the link row of the robot 6 become similar. Also, by repeating such an operation, the robot 6 can be controlled to imitate the operation of the model 5, and the robot 6 can be operated as a mirror robot. 
0079, For example, the tip of the link row of the model 5 is included in a plurality of locations specified by the identifying unit 14, and the location in the link row of the robot 6 corresponding to the identified location that is the tip of the link row of the model 5 is In the case of the tip of the link row of the robot 6, by setting the weight of the distance of the tip of each link row of the model 5 and the robot 6 larger than the weight of the other distances, The robot link string information can be calculated so that the tip and the tip of the link string of the robot 6 are close to each other. Specifically, the weight of the distance of the tip of each link row of the model 5 and the robot 6 may be set to "3", and the other weight may be set to "1". Since the tip of the link row of the model 5 and the tip of the link row of the robot 6 are usually at a position of interest, the motion of the robot 6 is approximated by the motion of the model 5 by becoming a positional relationship approximating both…).

Claim 20 is rejected under 35 U.S.C. 102(a)(1) as anticipated by or, in the alternative, under 35 U.S.C. 103 as obvious over Kalouche (US 20210205986 A1), hereafter Kalouche. 
Regarding claim 20, Kalouche discloses at least one computer-readable medium carrying instructions that, when executed by a computing system, cause the computing system to: Receive a desired motion sequence for a first operator having a first set of characteristics, wherein the first set of characteristics comprises dimensions of the first operator (0039, tracking module tracks the poses of the subject in subsequent images captured by the image capturing device, pose estimation module is able to estimate a pose of a subject in real-time as images are captured by the image capturing device, 0037, operator may manually set the subject's body part dimensions); Determining a correlation between a first set of reference points corresponding to one or more locations on the first operator and a second set of reference points corresponding to one or more locations on a second robotics device, wherein the second robotics device has a second set of characteristics different from the first set of characteristics (0052, indirect mapping may be employed if the robot 135 does not have an anthropomorphic design or similarly dimensioned arms, legs, and/or fingers, indirect mapping may use a linear or non-linear function to map an estimate of the limbs and joint angles of the operator to the segments and joint angles of the robot 135, indirect mapping may be used if the robot's dimensions are on a different scale compared to the operator's body, the robot has a different configuration or number of joints compared to the operator's body, or it is desired to have varying levels of control sensitivity in joint or end-effector space); and Generating a modified control signal based on the correlation, the second set of characteristics, and the first set of characteristics, wherein the modified control signal is to control the second robotics device to generate the desired motion sequence (0030, robotic system controller 145 receives the generated body pose information from its 
corresponding operator system 110 and accordingly determines a set of mapping parameters and kinematic parameters to control the motion of the robot, 0052, indirect mapping may be employed if the robot 135 does not have an anthropomorphic design or similarly dimensioned arms, legs, and/or fingers, indirect mapping may use a linear or non-linear function to map an estimate of the limbs and joint angles of the operator to the segments and joint angles of the robot 135, indirect mapping may be used if the robot's dimensions are on a different scale compared to the operator's body, the robot has a different configuration or number of joints compared to the operator's body, or it is desired to have varying levels of control sensitivity in joint or end-effector space). The examiner notes that Kalouche, under one interpretation, appears to further disclose wherein the first operator is a first robotics device (0016, A subject herein refers to any moving objects that have more than one pose. The moving objects include, among other objects, animals, people, and robots. Although embodiments herein are described with reference to humans as the subject, note that the present invention can be applied essentially in the same manner to any other object or animal having more than one pose. In several instances, the subject may also be referred to as an operator, 0021, In the embodiment of FIG. 1, the operator system 110 is controlled by the operator, who may be the subject of one or more captured images. For the sake of clarity, it is understood that the subject and the operator are referred to interchangeably, but it is also understood that, in some embodiments, the subject in the captured images may be a separate subject from the operator of the operator system 110.). 
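The "indirect mapping" that the rejection relies on from Kalouche at [0052] — a linear function from operator joint angles to robot joint angles where the two bodies differ in dimensions — can be sketched as follows. The per-joint scale/offset parameterization and the joint-limit clamp are illustrative assumptions rather than Kalouche's disclosed implementation:

```python
def clamp(x, lo, hi):
    # Keep a mapped angle inside the robot joint's movement range.
    return max(lo, min(hi, x))

def indirect_map(operator_angles, scales, offsets, limits):
    # Per-joint linear map, robot_angle = scale * operator_angle + offset,
    # clamped to the robot's joint limits (cf. the linear mapping function
    # of Kalouche [0052] and joint movement ranges generally).
    return [clamp(s * a + o, lo, hi)
            for a, s, o, (lo, hi) in zip(operator_angles, scales, offsets, limits)]
```

For example, a scale of 0.5 halves an operator's joint swing for a smaller robot, and the clamp keeps a joint with a restricted movement range within its limits.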
However, the examiner notes that it appears to be unclear whether the above portion of Kalouche provides sufficient disclosure to render the claim anticipated, as Kalouche repeatedly makes reference to the “operator” being a human (see the Response to Arguments section above for further details). As such, in order to expedite prosecution, the examiner asserts that claim 20 is alternatively rejected under 35 U.S.C. 103 as being obvious in view of Kalouche, as at least [0016] of Kalouche provides a suggestion that the operator/subject may be a robot (0016, A subject herein refers to any moving objects that have more than one pose. The moving objects include, among other objects, animals, people, and robots. Although embodiments herein are described with reference to humans as the subject, note that the present invention can be applied essentially in the same manner to any other object or animal having more than one pose. In several instances, the subject may also be referred to as an operator). The examiner asserts that the above-quoted portion provides an explicit suggestion that the “subject/operator” of Kalouche may be a robot, rendering the claim obvious.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. 
In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi, and further in view of Ott et al. ("Motion Capture based Human Motion Recognition and Imitation by Direct Marker Control"), hereafter Ott.

Regarding claim 4, Hayaishi discloses the computer-implemented method of claim 1, but fails to disclose the method further comprising: Determining a trajectory associated with the at least one movement data point, wherein the at least one control signal is based on the trajectory. Ott, however, in an analogous field of endeavor, does teach determining a trajectory associated with the at least one movement data point, wherein the at least one control signal is based on the trajectory (Page 403, Col. 2, Paragraph 3, measured data from the motion capture system used as a desired trajectory for the controller). Hayaishi and Ott are analogous because they are in a similar field of endeavor, e.g., motion capture-based robotics control systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the trajectory association of Ott in order to provide a means of providing a discrete trajectory from a plurality of movement points. The motivation to combine is to provide a means by which the plurality of movement points can be better evaluated. 
Claim 16 is similar in scope to claim 4, and is similarly rejected.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi in view of Xie (US 20220379470 A1), hereafter Xie.

Regarding claim 9, Hayaishi discloses the computer-implemented method of claim 1, but fails to disclose it further comprising: modifying the at least one control signal in response to a changed environmental condition.

Xie, however, in an analogous field of endeavor, does teach modifying the at least one control signal in response to a changed environmental condition (0019: the robot controller is used to regulate the motion status of the humanoid robot so as to correct the planned motion trajectory of the humanoid robot in real time, so that the corrected planned trajectory matches the real motion condition of the humanoid robot and reduces the disturbance of the environment in which the robot is located). Hayaishi and Xie are analogous because they are in a similar field of endeavor, e.g., robot control systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the planned motion correction of Xie in order to provide a means of ensuring the robot can operate properly. The motivation to combine is to reduce the disturbance caused by the environment in which the robot is located (see at least 0019 of Xie).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi in view of Kalouche.

Regarding claim 12, Hayaishi discloses the computer-implemented method of claim 1, but fails to explicitly teach wherein the state of the robotics device comprises a position and an orientation of the at least one reference point at a time point.
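The claim 9 limitation mapped to Xie above (real-time correction of the control signal when the environment disturbs the robot away from its planned motion) can be sketched minimally. This is a generic feedback-correction illustration, not Xie's controller; the function and parameter names are hypothetical:

```python
def corrected_control(nominal_u, planned_state, measured_state, gain=0.5):
    """Modify a nominal control signal when a changed environmental
    condition pushes the measured state away from the planned state.

    nominal_u: the command from the original plan
    gain: how aggressively the disturbance is compensated (illustrative)
    """
    disturbance = planned_state - measured_state
    return nominal_u + gain * disturbance
```

When the measured state matches the plan, the nominal command passes through unchanged; a mismatch produces a corrected command, which is the behavior the rejection reads onto paragraph 0019 of Xie.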
Kalouche, however, in an analogous field of endeavor, does teach wherein the state of the robotics device comprises a position and an orientation of the at least one reference point at a time point (0069: a robot kinematics module generates kinematic parameters of the robot, and the kinematic parameters correspond to a position and an orientation for each segment and/or joint of the robot). Hayaishi and Kalouche are analogous because they are in a similar field of endeavor, e.g., motion capture-based robot control systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the state of the robotics device of Kalouche in order to provide a better means of describing the control system as a whole. The motivation to combine is to ensure that the state of the robot is known as accurately as possible while being controlled.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Hayaishi in view of Ott and Nishimura (US 20220080585 A1), hereafter Nishimura.

Regarding claim 21, Hayaishi discloses the computer-implemented method of claim 14, but fails to explicitly disclose it further comprising: determining a possible state of the robotics device; determining a trajectory associated with at least one movement data point of the set of movement data points; and determining a bi-level optimization of the robotics device movements based on the possible state of the robotics device and the trajectory.

Ott, however, in an analogous field of endeavor, does teach determining a trajectory associated with at least one movement data point of the set of movement data points (Page 403, Col. 2, Paragraph 3: measured data from the motion capture system is used as a desired trajectory for the controller). Hayaishi and Ott are analogous because they are in a similar field of endeavor, e.g., motion capture-based robotics control systems.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the trajectory association of Ott in order to provide a means of deriving a discrete trajectory from a plurality of movement points. The motivation to combine is to provide a means by which the plurality of movement points can be better evaluated.

The combination of Hayaishi and Ott, however, fails to teach determining a possible state of the robotics device, and determining a bi-level optimization of the robotics device movements based on the possible state of the robotics device and the trajectory.

Nishimura, however, in an analogous field of endeavor, does teach determining a possible state of the robotics device (0024: "the sensor data 135 is input into a perception system 115, which performs tasks such as image segmentation and object detection, trajectory prediction, and tracking… perception system 114 outputs an initial state, an initial nominal control trajectory 145, and a KL divergence bound 150 for robot 100"); and determining a bi-level optimization of the robotics device movements based on the possible state of the robotics device and a trajectory (0024: "those inputs are processed by a robot control system 120 that executes the RAT iLQR algorithm, an algorithm for solving the bilevel optimization problem mentioned above"). Hayaishi, Ott, and Nishimura are analogous because they are in a similar field of endeavor, e.g., robot control systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the state determination and bi-level optimization solution of Nishimura in order to provide a means of better optimizing the motion path of the robot.
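The bi-level structure attributed to Nishimura (an outer problem selecting a parameter, an inner problem solving the resulting control problem for that parameter) can be illustrated with a toy sketch. This is a generic bi-level pattern under assumed names, not the RAT iLQR algorithm of Nishimura:

```python
def solve_inner(target, effort_weight):
    """Inner level: closed-form minimizer of (u - target)**2 + w * u**2."""
    return target / (1.0 + effort_weight)

def worst_case_cost(u, possible_states):
    """Tracking error of command u against each possible device state."""
    return max((u - s) ** 2 for s in possible_states)

def solve_bilevel(possible_states, target, weights=(0.0, 0.1, 0.5, 1.0)):
    """Outer level: pick the effort weight whose inner solution has the
    smallest worst-case cost across the possible states."""
    best = None
    for w in weights:
        u = solve_inner(target, w)
        cost = worst_case_cost(u, possible_states)
        if best is None or cost < best[0]:
            best = (cost, w, u)
    return best  # (worst-case cost, chosen weight, chosen command)
```

The essential point for the rejection's mapping is only the nesting: the outer search ranges over a parameter, and each candidate is scored by solving a full inner control problem against the possible states of the device.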
The motivation to combine is to ensure that the robot is controlled to move in as optimal a way as possible.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BLAKE A WOOD, whose telephone number is (571) 272-6830. The examiner can normally be reached M-F, 8:00 AM to 4:30 PM Eastern.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Worden, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BLAKE A WOOD/
Examiner, Art Unit 3658

Prosecution Timeline

Dec 22, 2023
Application Filed
Jun 11, 2025
Non-Final Rejection — §101, §102, §103
Jun 16, 2025
Interview Requested
Jul 01, 2025
Applicant Interview (Telephonic)
Jul 02, 2025
Examiner Interview Summary
Sep 12, 2025
Response Filed
Oct 09, 2025
Final Rejection — §101, §102, §103
Dec 19, 2025
Response after Non-Final Action
Jan 06, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600269
Vehicle and Method for Adjusting a Position of a Display in the Vehicle
2y 5m to grant Granted Apr 14, 2026
Patent 12588955
COMPUTER-ASSISTED SURGERY SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12591256
WORK UNIT REPLACEMENT SYSTEM AND WORK UNIT REPLACEMENT STATION
2y 5m to grant Granted Mar 31, 2026
Patent 12591255
MOBILE ROBOT AND CONTROL METHOD THEREFOR
2y 5m to grant Granted Mar 31, 2026
Patent 12569985
RUNTIME ASSESSMENT OF SUCTION GRASP FEASIBILITY
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
88%
With Interview (+16.7%)
2y 12m
Median Time to Grant
High
PTA Risk
Based on 142 resolved cases by this examiner. Grant probability derived from career allow rate.
