DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/28/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). A certified copy of the priority document has been placed in the file wrapper as of 07/26/2025. As such, the effective filing date of the instant application is considered to be 03/29/2021, coinciding with the filing date of the People’s Republic of China application to which foreign priority is claimed.
Response to Amendments
Claims 1, 8, and 14 have been amended. No claims have been added or canceled. The previous rejections have been withdrawn, and a second non-final rejection is presented below.
Response to Arguments
Applicant’s arguments filed 10/28/2025 have been fully considered. The arguments are moot in view of the withdrawal of the previous rejection and the presentation of the new grounds of rejection in the non-final rejection below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Miao et al. (CN109093626, referred to as Miao) in view of Cai et al. (CN110598285A, referred to as Cai).
Regarding claim 1: Miao discloses: A computer-implemented control method for a legged robot, comprising: obtaining a motion parameter of a driving mechanism of a target part of the robot, wherein the motion parameter of the driving mechanism is a driving angle of the driving mechanism of the target part; obtaining an end pose of the target part by processing the motion parameter of the driving mechanism according to a preset forward kinematics solving model, wherein the end pose is a posture angle of a rotation arm of the target part, and the forward kinematics solving model [is a neural network model trained by a preset training sample set constructed according to a preset inverse kinematics function relationship, and wherein the preset inverse kinematics function relationship takes the end pose of the target part as input and the motion parameter of the driving mechanism of the target part as output;] and ([pg. 2, lines 12-15] The joint angles of the quadruped robot to be controlled are calculated through forward kinematics, and the position coordinates of each foot end in the fuselage coordinate system are obtained, and the position coordinates of each foot end in the fuselage coordinate system are taken as The position coordinates of each foot end.) controlling the target part of the robot to move based on the end pose. ([pg. 2, lines 1-3] The target position of each joint of the quadruped robot to be controlled is calculated based on the control amount of the body attitude angle, so as to control each joint of the quadruped robot to be controlled to move according to the target position of each joint.)
Miao does not explicitly disclose: is a neural network model trained by a preset training sample set constructed according to a preset inverse kinematics function relationship, and wherein the preset inverse kinematics function relationship takes the end pose of the target part as input and the motion parameter of the driving mechanism of the target part as output;
Miao does not disclose the following limitations; however, Cai, in an analogous field of endeavor, teaches: is a neural network model trained by a preset training sample set constructed according to a preset inverse kinematics function relationship, and wherein the preset inverse kinematics function relationship takes the end pose of the target part as input and the motion parameter of the driving mechanism of the target part as output; ([pg. 2, lines 3-8] Establish a manipulator joint space prediction model that takes the pose space of the end joints of the manipulator as input and outputs the joint space of the manipulator, and predicts the joint space of the manipulator based on the pose space of the end joint of the manipulator and the prediction model of the manipulator joint space; The joint space includes joint variables that represent the angle information of each joint of the manipulator except the end joints; the prediction model of the manipulator joint space is obtained by training the neural network through an adaptive particle swarm algorithm and an LM algorithm. [pg. 2, lines 14-25] Obtain the motion trajectory of the end joint of the robot in Cartesian space; According to the motion trajectory of the robot end joint in Cartesian space, the pose space of the robot end joint is obtained.
Optionally, the training process of the manipulator joint space prediction model includes: Acquiring the pose space of the robot end joint according to a preset motion trajectory of the robot end joint in Cartesian space, and training the neural network with the obtained pose space of the robot end joint as a training set; Obtaining the optimal initial value of the weight and threshold of the neural network through the adaptive particle swarm algorithm; The optimal initial value is optimized by the LM algorithm until the convergence condition is satisfied, and the optimal weight and threshold of the neural network are obtained, so as to obtain the robot hand joint space prediction model. [pg. 7, lines 23-25] S2.1. Obtain the output error ek through forward propagation of the BP neural network. If ek ⟨ε or k⟩ TBP, it means that the algorithm has converged, and go to step S3, otherwise, go to S2.2; S2.2. Update the weight and threshold of the BP neural network)
Miao and Cai are analogous art to the claimed invention since both are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the specific inverse kinematics solving method of Cai.
The motivation for the modification would have been to apply the trajectory determination method taught in Cai to the walking robot with feedforward control disclosed in Miao.
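For illustration only (not part of the claim mapping): the recited scheme of constructing training samples from a preset inverse kinematics relationship and training a neural network as the forward kinematics solving model can be sketched as follows. The 1-DOF inverse_kinematics relationship, the network size, and all constants below are hypothetical, and the plain gradient-descent fit stands in for the APSO/LM training of Cai.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Sample end poses within a preset range (here: a posture angle in radians).
poses = rng.uniform(-1.0, 1.0, size=(200, 1))

# 2. Inverse kinematics: end pose -> driving angle (hypothetical relationship).
def inverse_kinematics(pose):
    return np.arcsin(0.8 * pose)          # driving angle of the mechanism

angles = inverse_kinematics(poses)

# 3. Training samples for the FORWARD model: driving angle -> end pose.
X, Y = angles, poses

# 4. One-hidden-layer network (tanh hidden, linear output), full-batch descent.
W1 = rng.normal(0, 0.5, (8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(0, 0.5, (1, 8)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(W1 @ x.T + b1)            # hidden layer activations
    return (W2 @ h + b2).T, h             # linear output layer

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

lr = 0.05
loss0 = mse(forward(X)[0], Y)
for _ in range(500):
    pred, h = forward(X)
    err = (pred - Y).T / len(X)           # error term (gradient up to a constant)
    W2 -= lr * (err @ h.T); b2 -= lr * err.sum(axis=1, keepdims=True)
    dh = (W2.T @ err) * (1 - h ** 2)      # backpropagate through tanh
    W1 -= lr * (dh @ X);   b1 -= lr * dh.sum(axis=1, keepdims=True)
loss1 = mse(forward(X)[0], Y)
```

After training, the network approximates the forward map (driving angle to end pose) that inverts the sampled inverse kinematics relationship.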
Regarding claim 2: The combination of Miao and Cai teaches: The method of claim 1,
Cai further teaches: wherein the forward kinematics solving model is trained by: determining a range of the end pose of the target part; obtaining a first amount of sampling points of the end pose by sampling within the range of the end pose; calculating the motion parameter of the driving mechanism corresponding to each of the sampling points of the end pose according to the preset inverse kinematics function relationship; constructing the preset training sample set, wherein the training sample set includes the first amount of training samples, each of the training samples includes a set of the sampling points of the end pose and the corresponding motion parameters of the driving mechanism; and training the neural network model in an initial state using the preset training sample set, and using the trained neural network model as the forward kinematics solving model. ([pg. 2-3, lines 30-4] the LM algorithm includes: S1. Initialize the LM algorithm parameters. Use the optimal initial value of the weight and threshold of the neural network as the initial value of the LM algorithm. Initialize the parameters of the LM algorithm in the entire search space. Set the error target. Set the error target ε and the proportionality coefficient μ. As well as the adjustment factor β, the number of iterations k = 0 and the maximum number of iterations TBP, etc., the optimal solution finally found by the APSO algorithm is taken as the initial value u0 of the LM algorithm. S2. Determine whether the convergence condition is satisfied. If yes, go to S3; otherwise, update the weight and threshold of the BP neural network through the LM algorithm and go to S2. Specifically, step S2 includes: S2.1. Obtain the output error ek through forward propagation of the BP neural network. If ek <ε or k> TBP, it means that the algorithm has converged, and go to step S3, otherwise, go to S2.2; S2.2. Update the weight and threshold of the BP neural network through the formula, and calculate ek + 1. 
If ek + 1> ek, μ = μβ, μk + 1 = μk, and go to S2.1, Otherwise, μ = μ / β, k = k + 1, go to S2.1; Among them, μ is the proportionality coefficient, I is the identity matrix; uk is the input signal of the k-th iteration; ek is the error generated, Ak is the Jacobi matrix, and its expression is: The expression of the error ek is where l is the number of samples; dk is the actual output of the sample; ok is the ideal output. S3. Stop iteration, and use the current weight and threshold as the optimal weight and threshold of the neural network.)
As previously stated, Miao and Cai are analogous art to the claimed invention since both are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the specific inverse kinematics solving method of Cai.
The motivation for the modification would have been to apply the trajectory determination method taught in Cai to the walking robot with feedforward control disclosed in Miao.
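As a sketch of the LM iteration quoted from Cai (raise the proportionality coefficient μ by the adjustment factor β when the output error grows, otherwise accept the step and relax μ), the following illustrates the update u_{k+1} = u_k − (J^T J + μI)^{-1} J^T e on a hypothetical two-parameter curve fit. It is illustrative only; the model, data, and constants are not from the record.

```python
import numpy as np

# Fit y = u0 * exp(u1 * x) to synthetic data with a Levenberg-Marquardt loop.
x = np.linspace(0, 1, 50)
true_u = np.array([2.0, -1.5])
y = true_u[0] * np.exp(true_u[1] * x)

def residual(u):
    return u[0] * np.exp(u[1] * x) - y

def jacobian(u):
    # Partial derivatives of the residual with respect to u0 and u1.
    return np.column_stack([np.exp(u[1] * x),
                            u[0] * x * np.exp(u[1] * x)])

u = np.array([1.0, 0.0])          # initial value (cf. the APSO-supplied u0)
mu, beta = 1e-2, 10.0             # proportionality coefficient and adjustment factor
e = residual(u)
err = float(e @ e)
for k in range(100):
    J = jacobian(u)
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
    u_new = u - step
    e_new = residual(u_new)
    err_new = float(e_new @ e_new)
    if err_new > err:             # error grew: reject step, mu = mu * beta
        mu *= beta
    else:                         # error fell: accept step, mu = mu / beta
        u, e, err = u_new, e_new, err_new
        mu /= beta
    if err < 1e-12:               # convergence condition (cf. ek < epsilon)
        break
```

The μ-adaptation interpolates between gradient descent (large μ) and Gauss-Newton (small μ), matching the update rule described in the quoted steps S2.1-S2.2.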
Regarding claim 3: The combination of Miao and Cai teaches: The method of claim 1,
Cai further teaches: wherein obtaining the end pose of the target part by processing the motion parameter of the driving mechanism according to the preset forward kinematics solving model comprises: inputting the motion parameter of the driving mechanism into the forward kinematics solving model for processing to produce an output, and using the output as the end pose of the target part. ([pg. 2, lines 3-8] Establish a manipulator joint space prediction model that takes the pose space of the end joints of the manipulator as input and outputs the joint space of the manipulator, and predicts the joint space of the manipulator based on the pose space of the end joint of the manipulator and the prediction model of the manipulator joint space; The joint space includes joint variables that represent the angle information of each joint of the manipulator except the end joints; the prediction model of the manipulator joint space is obtained by training the neural network through an adaptive particle swarm algorithm and an LM algorithm. [pg. 2, lines 14-25] Obtain the motion trajectory of the end joint of the robot in Cartesian space; According to the motion trajectory of the robot end joint in Cartesian space, the pose space of the robot end joint is obtained. Optionally, the training process of the manipulator joint space prediction model includes: Acquiring the pose space of the robot end joint according to a preset motion trajectory of the robot end joint in Cartesian space, and training the neural network with the obtained pose space of the robot end joint as a training set; Obtaining the optimal initial value of the weight and threshold of the neural network through the adaptive particle swarm algorithm; The optimal initial value is optimized by the LM algorithm until the convergence condition is satisfied, and the optimal weight and threshold of the neural network are obtained, so as to obtain the robot hand joint space prediction model. [pg. 7, lines 23-25] S2.1. 
Obtain the output error ek through forward propagation of the BP neural network. If ek ⟨ε or k⟩ TBP, it means that the algorithm has converged, and go to step S3, otherwise, go to S2.2; S2.2. Update the weight and threshold of the BP neural network)
As previously stated, Miao and Cai are analogous art to the claimed invention since both are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the specific inverse kinematics solving method of Cai.
The motivation for the modification would have been to apply the trajectory determination method taught in Cai to the walking robot with feedforward control disclosed in Miao.
Regarding claim 4: The combination of Miao and Cai teaches: The method of claim 1,
Cai further teaches: wherein obtaining the end pose of the target part by processing the motion parameter of the driving mechanism according to the preset forward kinematics solving model comprises: converting the forward kinematics solving model into a matrix expression; and substituting the motion parameter of the driving mechanism into the matrix expression for calculation to obtain an operation result, and using the operation result as the end pose of the target part. ([pg. 2, lines 3-8] Establish a manipulator joint space prediction model that takes the pose space of the end joints of the manipulator as input and outputs the joint space of the manipulator, and predicts the joint space of the manipulator based on the pose space of the end joint of the manipulator and the prediction model of the manipulator joint space; The joint space includes joint variables that represent the angle information of each joint of the manipulator except the end joints; the prediction model of the manipulator joint space is obtained by training the neural network through an adaptive particle swarm algorithm and an LM algorithm. [pg. 2, lines 14-25] Obtain the motion trajectory of the end joint of the robot in Cartesian space; According to the motion trajectory of the robot end joint in Cartesian space, the pose space of the robot end joint is obtained. 
Optionally, the training process of the manipulator joint space prediction model includes: Acquiring the pose space of the robot end joint according to a preset motion trajectory of the robot end joint in Cartesian space, and training the neural network with the obtained pose space of the robot end joint as a training set; Obtaining the optimal initial value of the weight and threshold of the neural network through the adaptive particle swarm algorithm; The optimal initial value is optimized by the LM algorithm until the convergence condition is satisfied, and the optimal weight and threshold of the neural network are obtained, so as to obtain the robot hand joint space prediction model. [pg. 7, lines 23-25] S2.1. Obtain the output error ek through forward propagation of the BP neural network. If ek ⟨ε or k⟩ TBP, it means that the algorithm has converged, and go to step S3, otherwise, go to S2.2; S2.2. Update the weight and threshold of the BP neural network)
As previously stated, Miao and Cai are analogous art to the claimed invention since both are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the specific inverse kinematics solving method of Cai.
The motivation for the modification would have been to apply the trajectory determination method taught in Cai to the walking robot with feedforward control disclosed in Miao.
Regarding claim 5: The combination of Miao and Cai teaches: The method of claim 1,
Cai further teaches: wherein the forward kinematics solving model includes an input layer, a hidden layer, and an output layer; converting the forward kinematics solving model into the matrix expression comprises: converting a processing process of the input layer into the matrix expression as an equation of:
[equation image: media_image1.png]
where, p is a sequence number of the motion parameter of the driving mechanism, 1 ≤ p ≤ P, P is an amount of the motion parameters of the driving mechanism, θ_p is the p-th motion parameter of the driving mechanism, x_p, g_p, and y_p are processing parameters of the input layer in the forward kinematics solving model that correspond to the motion parameter θ_p, a_p is a processing result of the input layer that corresponds to the motion parameter θ_p, and A_{P×1} is a processing result of the input layer; converting a processing process from the input layer to the hidden layer into the matrix expression as an equation of:
C_{N×1} = B_{N×1} + W_{N×P} A_{P×1}
where, N is an amount of neurons in the hidden layer, W_{N×P} is a first weight matrix in the forward kinematics solving model, B_{N×1} is a first bias matrix in the forward kinematics solving model, and C_{N×1} is a processing result from the input layer to the hidden layer; converting a processing process of the hidden layer into the matrix expression as an equation of:
[equation image: media_image6.png]
where, exp is a natural exponential function, and D_{N×1} is a processing result of the hidden layer; converting a processing process from the hidden layer to the output layer into the matrix expression as an equation of:
E_{Q×1} = B_{Q×1} + W_{Q×N} D_{N×1} = (e_1, e_2, ..., e_Q)^T
where, q is a parameter sequence number of the end pose, 1 ≤ q ≤ Q, Q is a parameter amount of the end pose, W_{Q×N} is a second weight matrix in the forward kinematics solving model, B_{Q×1} is a second bias matrix in the forward kinematics solving model, E_{Q×1} is a processing result from the hidden layer to the output layer, and e_q is the q-th element in the processing result E_{Q×1}; and converting a processing process of the output layer into the matrix expression as an equation of:
[equation images: media_image11.png through media_image13.png]
where, θ'_q is the q-th parameter of the end pose, x_q, g_q, and y_q are processing parameters of the output layer in the forward kinematics solving model that correspond to the parameter θ'_q, and F_{Q×1} is a processing result of the output layer. ([pg. 2, lines 3-8] Establish a manipulator joint space prediction model that takes the pose space of the end joints of the manipulator as input and outputs the joint space of the manipulator, and predicts the joint space of the manipulator based on the pose space of the end joint of the manipulator and the prediction model of the manipulator joint space; The joint space includes joint variables that represent the angle information of each joint of the manipulator except the end joints; the prediction model of the manipulator joint space is obtained by training the neural network through an adaptive particle swarm algorithm and an LM algorithm. [pg. 2, lines 14-25] Obtain the motion trajectory of the end joint of the robot in Cartesian space; According to the motion trajectory of the robot end joint in Cartesian space, the pose space of the robot end joint is obtained. Optionally, the training process of the manipulator joint space prediction model includes: Acquiring the pose space of the robot end joint according to a preset motion trajectory of the robot end joint in Cartesian space, and training the neural network with the obtained pose space of the robot end joint as a training set; Obtaining the optimal initial value of the weight and threshold of the neural network through the adaptive particle swarm algorithm; The optimal initial value is optimized by the LM algorithm until the convergence condition is satisfied, and the optimal weight and threshold of the neural network are obtained, so as to obtain the robot hand joint space prediction model. [pg. 7, lines 23-25] S2.1. Obtain the output error ek through forward propagation of the BP neural network. If ek < ε or k > TBP, it means that the algorithm has converged, and go to step S3, otherwise, go to S2.2; S2.2. Update the weight and threshold of the BP neural network. [pg. 8, lines 20-24] As shown in FIG. 6, the input of the BP neural network of this embodiment is four joint variables (θ1, θ2, θ3, θ4) of the manipulator, so the input layer of the BP neural network of this embodiment has four input nodes; the number of hidden layer nodes is set to 40, and the output layer is 4 nodes, corresponding to the 4 degrees of freedom of the end joint pose of the robot. The input layer to the hidden layer selects the tansig function, and the hidden layer to the output layer selects the purelin function.)
As previously stated, Miao and Cai are analogous art to the claimed invention since both are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the specific inverse kinematics solving method of Cai.
The motivation for the modification would have been to apply the trajectory determination method taught in Cai to the walking robot with feedforward control disclosed in Miao.
Examiner's Note: Cai discloses a similar input layer calculation of the angle. Applying any mathematical formula, including that of the claimed invention, would have been an obvious design choice for one of ordinary skill in the art because it facilitates a known mathematical means for deriving the angle. Since the disclosure fails to provide novel or unexpected results from the use of the claimed formula, the use of any mathematical means, including that of the claimed invention, would be an obvious matter of design choice within the skill of the art.
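The layer-by-layer matrix expressions recited in claim 5 correspond to a standard forward pass with a tansig hidden layer and a purelin (linear) output layer, consistent with the transfer functions quoted from Cai at pg. 8. A minimal sketch follows for illustration only; the dimensions and random weights are hypothetical (the 40 hidden nodes mirror Cai's embodiment).

```python
import numpy as np

P, N, Q = 3, 40, 4                        # inputs, hidden neurons, outputs
rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, (P, 1))            # processing result of the input layer, A_{Px1}
W1, B1 = rng.normal(size=(N, P)), rng.normal(size=(N, 1))   # first weight/bias
W2, B2 = rng.normal(size=(Q, N)), rng.normal(size=(Q, 1))   # second weight/bias

C = B1 + W1 @ A                           # C_{Nx1} = B_{Nx1} + W_{NxP} A_{Px1}
D = 2.0 / (1.0 + np.exp(-2.0 * C)) - 1.0  # tansig hidden layer (equals tanh(C))
E = B2 + W2 @ D                           # E_{Qx1} = B_{Qx1} + W_{QxN} D_{Nx1}
```

The tansig expression 2/(1 + exp(−2n)) − 1 is algebraically identical to tanh(n), which is why the hidden-layer equation is stated in terms of the natural exponential function.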
Regarding claim 6: The combination of Miao and Cai teaches: The method of claim 1,
Cai further teaches: wherein the target part is a link transmission mechanism including a first rotation arm, a swing member driven by a first driving mechanism, a first link member, and a second rotation arm; two ends of the swing member are rotatably connected to the first rotation arm and the first link member, respectively, and an end of the first rotation arm away from the swing member and an end of the first link member away from the swing member are both movably connected to the second rotation arm; the motion parameter of the driving mechanism is a driving angle of the first driving mechanism; and the end pose is a posture angle of the second rotation arm. ([] The inverse solution of the geometric algebra method is used to determine the conversion relationship between the end position of the bucket cleaning manipulator and the joint variables, so as to obtain the second function relationship of the joint space of the manipulator through the position space of the end joint of the manipulator. The specific solution process is as follows: As shown in Figure 2, the coordinate at the point V of the end of the manipulator is [x, y, z], and its attitude angle is δ = θ2 + θ3 + θ4. Let the base coordinate be [xo, yo, zo], the point C of the first telescopic joint is [xc, yc, zc], the point F of the second telescopic joint is [xf, yf, zf], and the point Q of the third telescopic joint is [xq, yq, zq]. The geometric relationship shown in Figure 2 can be used to derive the calculation formulas for the joint variables of the mine cleaning robot: Among them, CQ and CV are the geometric parameters of the mine cleaning robot, α, β, and γ are the angles of the CF, CQ, CV, and x0 axes, respectively, and are expressed as: Then the third telescopic joint Q point coordinate [xq, yq, zq] is expressed as: By analogy, the coordinates of each joint point of the manipulator can be obtained according to the above method. Further, the acquisition process of the training set includes: According to the second function relationship, a preset joint variable sequence of the motion trajectory of the end joint of the robot in Cartesian space is obtained. Polynomial interpolation is performed on the obtained joint variable sequence to obtain a curve of the robot joint variable over time. Multiple sets of data are selected above, and the pose space of the end joints of the manipulator is obtained according to the first functional relationship, and the obtained pose space of the end joints of the manipulator is used as the training set. Specifically, as shown in FIG. 4, a preset motion trajectory of the end joint of the robot arm in Cartesian space is obtained, and the joint space corresponding to the pose space of the end joint of the robot arm is obtained according to the process of inverse solution of the geometric algebra method, thereby obtaining the sequence of joint variable values on the segment motion trajectory, as shown in Figure 5.)
As previously stated, Miao and Cai are analogous art to the claimed invention since both are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the specific inverse kinematics solving method of Cai.
The motivation for the modification would have been to apply the trajectory determination method taught in Cai to the walking robot with feedforward control disclosed in Miao.
Regarding claim 7: The combination of Cai and Miao teaches: The method of claim 6,
Cai further teaches: wherein the inverse kinematics function relationship is as an equation of: converting a processing process of the input layer into the matrix expression as equations of:
PNG
media_image14.png
45
310
media_image14.png
Greyscale
a=x-xc;
PNG
media_image15.png
18
17
media_image15.png
Greyscale
b=zA-zc; and
PNG
media_image16.png
58
280
media_image16.png
Greyscale
where, 0r is a posture angle of the second rotation arm around a preset x-axis,6"' is a posture angle of the second rotation arm around a preset y-axis, R,
PNG
media_image17.png
15
28
media_image17.png
Greyscale
isa corresponding rotation matrix of rotating at the posture angle
PNG
media_image18.png
14
16
media_image18.png
Greyscale
around the x-axis,R,
PNG
media_image19.png
16
21
media_image19.png
Greyscale
) is a corresponding rotation matrix of rotating at the posture angle 00Y around the y-axis, A is a connection point between the first driving mechanism and the swing member, B is a connection point between the first link member and the swing member, and C is a connection point between the first link member and the second rotation ar,xi is a coordinate component of the point A on the x-axis, zi is a coordinate component of the point A on a preset z-axis, xc is a coordinate component of the point C on the x-axis,zc is a coordinate component of the point C on the z-axis, lA is a length from the point A to the point B, Inc is a length from the point B to the point C,6o is an initial included angle between the swing member and a horizontal plane, is an initial position vector of the point A,'co is an initial position vector of the point C, and 9 is a driving angle of the first driving mechanism. ([pg. 2, lines 3-8] Establish a manipulator joint space prediction model that takes the pose space of the end joints of the manipulator as input and outputs the joint space of the manipulator, and predicts the joint space of the manipulator based on the pose space of the end joint of the manipulator and the prediction model of the manipulator joint space; The joint space includes joint variables that represent the angle information of each joint of the manipulator except the end joints; the prediction model of the manipulator joint space is obtained by training the neural network through an adaptive particle swarm algorithm and an LM algorithm. [pg. 2, lines 14-25] Obtain the motion trajectory of the end joint of the robot in Cartesian space; According to the motion trajectory of the robot end joint in Cartesian space, the pose space of the robot end joint is obtained. 
Optionally, the training process of the manipulator joint space prediction model includes: Acquiring the pose space of the robot end joint according to a preset motion trajectory of the robot end joint in Cartesian space, and training the neural network with the obtained pose space of the robot end joint as a training set; Obtaining the optimal initial value of the weight and threshold of the neural network through the adaptive particle swarm algorithm; The optimal initial value is optimized by the LM algorithm until the convergence condition is satisfied, and the optimal weight and threshold of the neural network are obtained, so as to obtain the robot hand joint space prediction model. [pg. 7, lines 23-25] S2.1. Obtain the output error ek through forward propagation of the BP neural network. If ek ⟨ε or k⟩ TBP, it means that the algorithm has converged, and go to step S3, otherwise, go to S2.2; S2.2. Update the weight and threshold of the BP neural network [pg. 8, lines 20-24] As shown in FIG. 6, the input of the BP neural network of this embodiment is four joint variables (θ1, θ2, θ3,
θ4) of the manipulator, so the input layer of the BP neural network of this embodiment has four input nodes, the number of hidden layer nodes is set to 40, and the output layer has 4 nodes, corresponding to the 4 degrees of freedom of the end joint pose of the robot. The input layer to the hidden layer uses the tansig function, and the hidden layer to the output layer uses the purelin function.)
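For orientation only, the 4-40-4 BP network architecture quoted above (four joint-variable inputs, 40 hidden nodes with the tansig activation, four linear purelin outputs corresponding to the end-joint pose) can be sketched as follows. This is an illustrative placeholder, not Cai's or the applicant's implementation: the random weights here stand in for values that, per the cited disclosure, would be initialized by the adaptive particle swarm algorithm and refined by the LM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HIDDEN, N_OUT = 4, 40, 4  # joint variables -> end-joint pose DOF

# Placeholder weights and thresholds (biases); in the cited disclosure these
# would come from adaptive PSO initialization followed by LM optimization.
W1 = rng.standard_normal((N_HIDDEN, N_IN))   # input -> hidden weights
b1 = rng.standard_normal(N_HIDDEN)           # hidden-layer thresholds
W2 = rng.standard_normal((N_OUT, N_HIDDEN))  # hidden -> output weights
b2 = rng.standard_normal(N_OUT)              # output-layer thresholds

def forward(theta):
    """Map four joint variables (θ1..θ4) to a 4-DOF end-joint pose estimate."""
    h = np.tanh(W1 @ theta + b1)  # tansig activation, input -> hidden
    return W2 @ h + b2            # purelin (linear) activation, hidden -> output

pose = forward(np.array([0.1, -0.2, 0.3, 0.05]))
print(pose.shape)  # (4,)
```

The convergence test quoted at S2.1 would then compare the network's output error e_k against the tolerance ε, or the iteration count k against the cap T_BP, before updating these weights and thresholds.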
Examiner's Note: Cai discloses a similar input-layer calculation of θ. Applying any mathematical formula, including that of the claimed invention, would have been an obvious design choice to one of ordinary skill in the art because it employs known mathematical means for deriving the angle. Because the disclosure provides no evidence of novel or unexpected results from the use of the claimed formula, the use of any mathematical means, including that of the claimed invention, would be an obvious matter of design choice within the skill of the art.
As previously stated, Miao and Cai are analogous art to the claimed invention since they are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, with a reasonable expectation for success, to modify the walking robot of Miao to enable the specific inverse kinematic solving method of Cai.
The motivation for the modification would have been to combine the trajectory determination method taught in Cai with the walking robot with feedforward control disclosed in Miao.
Regarding claim 14: Rejected using the same rationale as claim 1.
Regarding claim 15: Rejected using the same rationale as claim 2.
Regarding claim 16: Rejected using the same rationale as claim 3.
Regarding claim 17: Rejected using the same rationale as claim 4.
Regarding claim 18: Rejected using the same rationale as claim 5.
Regarding claim 19: Rejected using the same rationale as claim 6.
Regarding claim 20: Rejected using the same rationale as claim 7.
Claims 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over Miao et al. (CN109093626, referred to as Miao) in view of Cai et al. (CN110598285A, referred to as Cai) and further in view of Blankespoor et al. (US9833899).
Regarding claim 8: Rejected using the same rationale as claim 1; however, claim 8 is further directed to a “non-transitory computer-readable storage medium”, which is not explicitly disclosed by Cai.
Miao likewise does not explicitly disclose a “non-transitory computer-readable storage medium”; however, Blankespoor, from an analogous field of endeavor, teaches: a non-transitory computer-readable storage medium ([col. 1, lines 57-60] a non-transitory computer-readable storage medium having stored thereon instructions, that when executed by one or more processors, cause a legged robotic device to carry out operations.)
Miao, Cai, and Blankespoor are analogous art to the claimed invention since they are from the similar field of robotic trajectory determination and kinematics. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, with a reasonable expectation of success, to modify the walking robot of Miao to incorporate the non-transitory storage medium of Blankespoor.
The motivation for the modification would have been to provide a conventional storage medium for the method applied to the walking robot taught in Miao and Cai.
Regarding claim 9: Rejected using the same rationale as claim 2.
Regarding claim 10: Rejected using the same rationale as claim 3.
Regarding claim 11: Rejected using the same rationale as claim 4.
Regarding claim 12: Rejected using the same rationale as claim 5.
Regarding claim 13: Rejected using the same rationale as claim 6.
Conclusion
The prior art made of record and not relied upon, considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATTICUS A CAMERON whose telephone number is 703-756-4535. The examiner can normally be reached M-F 8:30 am - 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden can be reached on 571-272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ATTICUS A CAMERON/
Examiner, Art Unit 3658

/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658