DETAILED ACTION
Claims 12-22 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/08/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim 12 recites “a forward propagation module configured to propagate data from one or more sensors applied at an input of the neural network... a fusion system configured to determine a fusion output… a backpropagation module configured to update the weights associated with the neural network online…”
The claimed elements (a forward propagation module, a fusion system, and a backpropagation module) invoke 35 U.S.C. 112(f) because they recite generic placeholders (“module” and “system”) coupled with functional language, without reciting sufficient structure in the claim for performing the claimed functions. The specification identifies the corresponding structure for these elements in [0260]: “… the embodiments of the invention may be implemented in various ways by way of hardware, software, or a combination of hardware and software, in particular in the form of program code… executed by one or more processors in a computing device. These computer program instructions may also be stored in a computer-readable medium.”
Claim Objections
Claims 13 and 17 are objected to because of the following informalities:
In claim 13, “wherein said variable is a state vector comprising information in relation to the position and/or the movement of an object detected by the perception system” should read “wherein said variable is a state vector comprising information in relation to the position and the movement of an object detected by the perception system.”
In claim 17, “by the estimation device and/or the fusion outputs delivered by the fusion system” should read “by the estimation device and the fusion outputs delivered by the fusion system.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-22 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention.
Claims 12 and 22 recite the limitation “a fusion system configured to determine a fusion output by implementing at least one sensor fusion algorithm based on at least some of said predicted values.” There is insufficient antecedent basis for the limitation “said predicted values” in the claims. For examination purposes, the examiner has interpreted “said predicted values” to be “predicted values.”
Claim 17 recites the limitation “further comprising a replay buffer configured to store the outputs predicted by the estimation device and/or the fusion outputs delivered by the fusion system.” There is insufficient antecedent basis for the limitations “the outputs” and “the fusion outputs” in the claim. For examination purposes, the examiner has interpreted “the outputs… the fusion outputs…” to be “outputs… fusion outputs….”
Claim 19 recites the limitation “the encoder.” There is insufficient antecedent basis for this limitation in the claim. For examination purposes, the examiner has interpreted “the encoder” to be “the recurrent neural network encoder.”
Claim 21 recites the limitation “the loss function between the value predicted for this input sample and the fusion output.” There is insufficient antecedent basis for the limitation “the value predicted for this input sample and the fusion output” in the claim. For examination purposes, the examiner has interpreted “the loss function between the value predicted for this input sample and the fusion output” to be “the loss function between the improved predicted value of said fusion output and said predicted output” (consistent with “the loss function” recited in claim 12).
Claims 13-16, 18 and 20 are also rejected due to their dependency on a rejected claim.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 12-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 12-21 recite a device. Claim 22 recites a method. Therefore, claims 12-21 are directed to a machine, and claim 22 is directed to a process.
With respect to claim 12:
2A Prong 1: The claim recites a judicial exception.
a fusion system configured to determine a fusion output by implementing at least one sensor fusion algorithm based on at least some of said predicted values (mental process – evaluation or judgement, a human using a pen and paper can determine an output by using an algorithm based on predicted values)
a backpropagation module configured to update the weights associated with the neural network online by determining a loss function representing an error between an improved predicted value of said fusion output and said predicted output by performing a gradient descent backpropagation (mathematical concept - mathematical calculation and mental process – evaluation or judgement, a loss function representing an error and gradient descent are mathematical calculations, and a human using a pen and paper can update/adjust the weight)
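For illustration only (the formulas below restate the examiner's characterization and are not claim language), the identified mathematical concept can be written as a squared-error loss and a gradient-descent weight update:

L(w) = \big( y_{\mathrm{fusion}} - \hat{y}(w) \big)^2, \qquad w \leftarrow w - \eta \, \nabla_{w} L(w)

where y_{\mathrm{fusion}} is the improved predicted value of the fusion output, \hat{y}(w) is the predicted output of the network with weights w, and \eta is a step size.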
2A Prong 2: The judicial exception is not integrated into a practical application.
the vehicle comprising a perception system using a set of sensors, each sensor providing data (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting; providing data using a set of sensors)
the perception system comprising an estimation device configured to estimate a variable comprising at least one feature in relation to one or more objects detected in an environment of the vehicle, the estimation device comprising an online learning module using a neural network to estimate said variable, the neural network being associated with a set of weights, the learning module comprising (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a neural network to estimate the variable)
a forward propagation module configured to propagate data from one or more sensors applied at an input of the neural network, so as to provide a predicted output comprising an estimation of said variable (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using the neural network to propagate data to provide an output)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the vehicle comprising a perception system using a set of sensors, each sensor providing data (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting, and WURC: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 – MPEP 2106.05(d)(II)(i); providing data using a set of sensors)
the perception system comprising an estimation device configured to estimate a variable comprising at least one feature in relation to one or more objects detected in an environment of the vehicle, the estimation device comprising an online learning module using a neural network to estimate said variable, the neural network being associated with a set of weights, the learning module comprising (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a neural network to estimate the variable)
a forward propagation module configured to propagate data from one or more sensors applied at an input of the neural network, so as to provide a predicted output comprising an estimation of said variable (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using the neural network to propagate data to provide an output)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 13:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein said variable is a state vector comprising information in relation to the position and/or the movement of an object detected by the perception system (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 12 recites “using a neural network to estimate said variable,” which is mere instructions to apply an exception – MPEP 2106.05(f). Specifying the details of the variable does not cause the limitation to integrate the exception into a practical application)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein said variable is a state vector comprising information in relation to the position and/or the movement of an object detected by the perception system (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 12 recites “using a neural network to estimate said variable,” which is mere instructions to apply an exception – MPEP 2106.05(f). Specifying the details of the variable does not cause the limitation to be significantly more than the judicial exception.)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 14:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein said state vector further comprises information in relation to one or more detected objects (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 12 recites “using a neural network to estimate said variable” and claim 13 recites “said variable is a state vector” which is mere instructions to apply an exception – MPEP 2106.05(f). Specifying the details of the state variable does not cause the limitation to integrate the exception into a practical application)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein said state vector further comprises information in relation to one or more detected objects (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 12 recites “using a neural network to estimate said variable” and claim 13 recites “said variable is a state vector” which is mere instructions to apply an exception – MPEP 2106.05(f). Specifying the details of the state variable does not cause the limitation to be significantly more than the judicial exception.)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 15:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein said state vector further comprises trajectory parameters of a target object (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 12 recites “using a neural network to estimate said variable” and claim 13 recites “said variable is a state vector” which is mere instructions to apply an exception – MPEP 2106.05(f). Specifying the details of the state variable does not cause the limitation to integrate the exception into a practical application)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein said state vector further comprises trajectory parameters of a target object (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 12 recites “using a neural network to estimate said variable” and claim 13 recites “said variable is a state vector” which is mere instructions to apply an exception – MPEP 2106.05(f). Specifying the details of the state variable does not cause the limitation to be significantly more than the judicial exception.)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 16:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein said improved predicted value is determined by applying a Kalman filter (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of applying a Kalman filter to the predicted value)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein said improved predicted value is determined by applying a Kalman filter (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of applying a Kalman filter to the predicted value)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 17:
2A Prong 2: The judicial exception is not integrated into a practical application.
further comprising a replay buffer configured to store the outputs predicted by the estimation device and/or the fusion outputs delivered by the fusion system (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting; storing the outputs)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
further comprising a replay buffer configured to store the outputs predicted by the estimation device and/or the fusion outputs delivered by the fusion system (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting, and WURC: storing and retrieving information in memory, Versata Dev. Group, Inc. - MPEP 2106.05(d)(II)(iv); storing the outputs)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 18:
2A Prong 2: The judicial exception is not integrated into a practical application.
further comprising a recurrent neural network encoder configured to encode and compress the data prior to storage in the replay buffer, and a decoder configured to decode and decompress the data extracted from the replay buffer (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using an RNN to encode and a decoder to decode)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
further comprising a recurrent neural network encoder configured to encode and compress the data prior to storage in the replay buffer, and a decoder configured to decode and decompress the data extracted from the replay buffer (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using an RNN to encode and a decoder to decode)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 19:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the encoder is a recurrent neural network encoder and the decoder is a recurrent neural network decoder (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 18 recites “a recurrent neural network encoder… a decoder,” which is mere instructions to apply an exception. Specifying the specific model does not cause the limitation to integrate the exception into a practical application.)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the encoder is a recurrent neural network encoder and the decoder is a recurrent neural network decoder (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; Claim 18 recites “a recurrent neural network encoder… a decoder,” which is mere instructions to apply an exception. Specifying the specific model does not cause the limitation to be significantly more than the judicial exception.)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 20:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the replay buffer is prioritized (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting; Claim 17 recites “a replay buffer configured to store the outputs,” which is insignificant extra-solution activity. Specifying the buffer is prioritized does not cause the limitation to integrate the exception into a practical application.)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the replay buffer is prioritized (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting, and WURC: storing and retrieving information in memory, Versata Dev. Group, Inc. - MPEP 2106.05(d)(II)(iv); Claim 17 recites “a replay buffer configured to store the outputs,” which is insignificant extra-solution activity. Specifying the buffer is prioritized does not cause the limitation to be significantly more than the judicial exception.)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 21:
2A Prong 2: The judicial exception is not integrated into a practical application.
wherein the device is configured to implement a condition for testing input data applied at input of a neural network (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a neural network for testing)
input data being deleted from the replay buffer when the loss function between the value predicted for this input sample and the fusion output is lower than a predefined threshold (insignificant extra-solution activity – MPEP 2106.05(g), (1) well-known or (2) insignificant; updating the buffer (deleting/overwriting data) in a certain condition)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the device is configured to implement a condition for testing input data applied at input of a neural network (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a neural network for testing)
input data being deleted from the replay buffer when the loss function between the value predicted for this input sample and the fusion output is lower than a predefined threshold (insignificant extra-solution activity – MPEP 2106.05(g), (1) well-known or (2) insignificant; updating the buffer (deleting/overwriting data) in a certain condition, and WURC: iii. Electronic recordkeeping, or iv. Storing and retrieving information in memory - MPEP 2106.05(d)(II) (iii) or (iv))
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 22:
2A Prong 1: The claim recites a judicial exception.
determining a fusion output by implementing at least one sensor fusion algorithm based on at least some of said predicted values (mental process – evaluation or judgement, a human using a pen and paper can determine an output by using an algorithm based on predicted values)
updating the weights associated with the neural network online by determining a loss function representing an error between an improved predicted value of said fusion output and said predicted output by performing a gradient descent backpropagation (mathematical concept - mathematical calculation and mental process – evaluation or judgement, a loss function representing an error and gradient descent are mathematical calculations, and a human using a pen and paper can update/adjust the weight)
2A Prong 2: The judicial exception is not integrated into a practical application.
the vehicle comprising a perception system using a set of sensors, each sensor providing data (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting; providing data using a set of sensors)
the control method comprising: estimating a variable comprising at least one feature in relation to one or more objects detected in an environment of the vehicle, wherein the estimating implements online learning step a neural network to estimate said variable, the neural network being associated with a set of weights, wherein the online learning comprises (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a neural network to estimate the variable)
propagating data from one or more sensors, applied at an input of the neural network, so as to provide a predicted output comprising an estimation of said variable (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using the neural network to propagate data to provide an output)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
the vehicle comprising a perception system using a set of sensors, each sensor providing data (insignificant extra-solution activity – MPEP 2106.05(g), (3) data gathering and outputting; providing data using a set of sensors, and WURC: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 – MPEP 2106.05(d)(II)(i))
the control method comprising: estimating a variable comprising at least one feature in relation to one or more objects detected in an environment of the vehicle, wherein the estimating implements online learning step a neural network to estimate said variable, the neural network being associated with a set of weights, wherein the online learning comprises (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using a neural network to estimate the variable)
propagating data from one or more sensors, applied at an input of the neural network, so as to provide a predicted output comprising an estimation of said variable (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception; high level recitation of using the neural network to propagate data to provide an output)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 12-17 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Fang ("Camera and LiDAR Fusion for On-road Vehicle Tracking with Reinforcement Learning," June 9, 2019) in view of Gou (US 20190303765 A1).
In regard to claim 12, Fang teaches: A control device implemented in a vehicle, the vehicle comprising a perception system using a set of sensors, each sensor providing data, (Fang, p. 1726, IV. Experiments "A panoramic Ladybug3 camera and 360 LiDAR array are equipped on the host vehicle [a vehicle] to capture a full field of view"; p. 1723, Abstract "Given the input states of different sensors, our approach chooses one input with a higher expected cumulative reward as the observation of a Kalman filter to iteratively predict the target position… The expected cumulative reward is estimated with a convolutional neural network, trained with a modified DQN algorithm, which takes inputs from both LiDAR and a camera. [a set of sensors, each sensor providing data]"; sensors providing image or LiDAR data; the system with sensors is a perception system)
the perception system comprising an estimation device configured to estimate a variable comprising at least one feature in relation to one or more objects detected in an environment of the vehicle, (Fang, p. 1724, III. Methodology "S={st} is a set of descriptions of the target vehicle state [objects detected in an environment], at each time t... We designed a convolutional neural network Q-network as shown in Fig. 3(b) to map a given st=(mc(μ¯t), ml(μ¯t)) to expected Q-values, Qc = Qπ(s,ac) and Ql = Qπ(s,al), [estimate a variable Q-value, comprising feature s in relation to objects] if actions ac and al are respectively taken."; the whole system with sensors, DQN and fusion tracking is the perception system comprising an estimation device)
[Embedded image: media_image1.png (greyscale)]
the estimation device comprising an online learning module using a neural network to estimate said variable, the neural network being associated with a set of weights, the learning module comprising: (Fang, p. 1726, C. CNN Q-Network "We adopted the training framework (Algorithm 1) as DQN [12]. A deep convolutional neural network [an online learning module using a neural network to estimate Q(s, a; θ)] is first initialized with a random weight θ as the prediction network, and a target network θ− is introduced, sharing the same parameters."; p. 1725, Fig. 3. System framework of our fusion algorithm. "(b) Architecture of CNN Q-network; Input: image patch and OGM patch around target position u; Output: scores of different actions in current state. [see Fig. 3 (b) Qc and Ql, estimate said variable]")
… a fusion system configured to determine a fusion output by implementing at least one sensor fusion algorithm based on at least some of said predicted values; and (Fang, p. 1723, Abstract "We formulate camera and LiDAR fusion tracking as a sequential decision-making process."; B. Reinforcement Learning of a Fusion-Based Tracking "if Qc > Ql, [a fusion algorithm based on q-values (Qc and Ql) to output 1:0 or 0:1 (Qc or Ql, a fusion output), see Fig. 3] the circuit of camera tracking result is activated, and {μct, Σct} is forwarded to a Kalman filter-based tracking module. Instead, a LiDAR tracking result {μlt, Σlt} is used.")
Fang does not teach, but Gou teaches: a forward propagation module configured to propagate data from one or more sensors applied at an input of the neural network, so as to provide a predicted output comprising an estimation of said variable; (Gou, [0246] "1. Do a feedforward pass for the current state s to get predicted q values [a predicted output comprising an estimation of said variable q-value] for all actions. 2. Do a feedforward pass for the next state s′ and calculate maximum overall network outputs max_a′Q(s′, a′)."; [0128] "Input component 210 may include... Additionally, or alternatively, input component 210 may include a sensor for sensing information")
… a backpropagation module configured to update the weights associated with the neural network online by determining a loss function representing an error between an improved predicted value of said fusion output and said predicted output by performing a gradient descent backpropagation. (Gou, [0246] "an implementation of a DQN network is depicted… a loss function may be represented by the following equation: … r + γmax Q(s', a') - Q(s,a)… [a loss function representing an error between an improved predicted value of said fusion output (r + γmax Q(s', a')) and said predicted output (Q(s,a)), i.e. the difference between the target q-value and predicted q-value] and Q may be a neural network [the neural network online]… 4. Update the weights using backpropagation. [a gradient descent backpropagation]")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Fang to incorporate the teachings of Gou by including training iterations with forward and backward propagations. Doing so would allow the model to analyze the collected data to make additional changes. (Gou, [0248] "the cycle may repeat using the changed/modified model 2020 and/or parameters 2060b to perform more iterations in environment 2010, to collect more inputs 2060 (e.g., results 2060b), to analyze the collected data (e.g., by visual analytical framework 2050), to make additional changes 2070, and/or the like.")
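To make the mapping concrete, the following is a minimal illustrative Python sketch (all names and values are hypothetical; this is not code from Fang or Gou) of an online update in which a forward pass yields a predicted value, a TD-style target plays the role of the improved predicted value, and the weights are updated by gradient descent on a squared-error loss:

import numpy as np

# Illustrative only: a linear "Q-network" with weights w. Placeholder values
# stand in for sensor-derived features and the fusion/TD target.
rng = np.random.default_rng(0)
w = rng.normal(size=4)                     # the set of weights associated with the network

def forward(x, w):
    """Forward propagation: input features x -> predicted output."""
    return float(x @ w)

x = rng.normal(size=4)                     # features derived from sensor data (placeholder)
q_pred = forward(x, w)                     # predicted output of the network

r, gamma, q_next = 1.0, 0.99, 0.5          # reward, discount, max next-state value (placeholders)
q_target = r + gamma * q_next              # "improved" value in the role of the TD target

loss = (q_target - q_pred) ** 2            # loss function representing the error
grad = -2.0 * (q_target - q_pred) * x      # dL/dw for this linear model (backpropagation)
w -= 0.01 * grad                           # online gradient-descent update of the weights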
In regard to claim 13, Fang teaches: wherein said variable is a state vector (Fang, p. 1725, Fig. 3. System framework of our fusion algorithm. "(b) Architecture of CNN Q-network; Input: image patch and OGM patch around target position u; Output: scores of different actions in current state. [see Fig. 3 (b) [Qc Ql] said variable, a state vector]"; in light of spec. [0091] 'determine a consolidated object state vector (fusion output)...') comprising information in relation to the position and/or the movement of an object detected by the perception system. (Fang, p. 1724, III. Methodology "S={st} is a set of descriptions of the target vehicle state [objects detected by the system], at each time t... we use μt and Σt to denote the vehicle's mean position [position and movement] and the covariance error model of the estimation, respectively... We designed a convolutional neural network Q-network as shown in Fig. 3(b) to map a given st=(mc(μ¯t), ml(μ¯t)) to expected Q-values, Qc = Qπ(s,ac) and Ql = Qπ(s,al) [information s in relation to the position and movement μ]")
In regard to claim 14, Fang teaches: wherein said state vector (Fang, p. 1725, Fig. 3. System framework of our fusion algorithm. "(b)... Output: scores of different actions in current state. [see Fig. 3 (b) [Qc Ql] said state vector]") further comprises information in relation to one or more detected objects. (Fang, p. 1724, III. Methodology "S={st} is a set of descriptions of the target vehicle state [detected objects]... We designed a convolutional neural network Q-network as shown in Fig. 3(b) to map a given st=(mc(μ¯t), ml(μ¯t)) to expected Q-values, Qc = Qπ(s,ac) and Ql = Qπ(s,al) [information s in relation to the objects]")
In regard to claim 15, Fang teaches: wherein said state vector (Fang, p. 1725, Fig. 3. System framework of our fusion algorithm. "(b)... Output: scores of different actions in current state. [see Fig. 3 (b) [Qc Ql] said state vector]") further comprises trajectory parameters of a target object. (Fang, p. 1724, III. Methodology "S={st} is a set of descriptions of the target vehicle state [a target object], at each time t... we use μt and Σt to denote the vehicle's mean position [trajectory parameters] and the covariance error model of the estimation, respectively... We designed a convolutional neural network Q-network as shown in Fig. 3(b) to map a given st=(mc(μ¯t), ml(μ¯t)) to expected Q-values, Qc = Qπ(s,ac) and Ql = Qπ(s,al)"; the concept of tracking encompasses position, movement, and trajectory)
In regard to claim 16, Fang teaches: wherein said improved predicted value is determined by applying a Kalman filter. (Fang, p. 1726, D. Training Using DQN "yt = Est+1 [r(st) + γmax_a_t+1 Q(st+1, at+1; θ−i)] [yt, said improved predicted value, which is based on s_t+1 and r, which is determined by applying a Kalman filter]... (5)... With st,at, the vehicle positions are estimated by a camera-based/LiDAR-based tracker, a state transition is performed with Kalman filter generating st+1, and a reward is calculated.")
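For readability, the target value quoted from Fang above, with its subscripts restored, is:

y_t = \mathbb{E}_{s_{t+1}}\left[ r(s_t) + \gamma \max_{a_{t+1}} Q\big(s_{t+1}, a_{t+1}; \theta_i^{-}\big) \right]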
In regard to claim 17, Fang teaches: further comprising a replay buffer configured to store the outputs predicted by the estimation device and/or the fusion outputs delivered by the fusion system. (Fang, p. 1726, D. Training Using DQN "Experience replay memory [replay buffer] D of size N is also declared… With st,at, the vehicle positions are estimated [outputs predicted by the estimation device] by a camera-based/LiDAR-based tracker, a state transition is performed with Kalman filter generating st+1, and a reward [outputs predicted by the fusion system] is calculated. Then, one transition can be stored in experience replay. [storing the outputs]")
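As an illustration of the experience replay mechanism quoted above (a Python sketch with hypothetical names; not code from Fang):

import random
from collections import deque

N = 10000                                       # replay memory D of size N
replay = deque(maxlen=N)                        # oldest transitions are discarded when full

transition = ("s_t", "a_t", "reward", "s_t+1")  # placeholder transition tuple
replay.append(transition)                       # store one transition in experience replay
batch = random.sample(list(replay), k=1)        # later, sample stored transitions for training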
In regard to claim 22, Fang teaches: A control method implemented in a vehicle, the vehicle comprising a perception system using a set of sensors, each sensor providing data, the control method comprising: (Fang, p. 1726, IV. Experiments "A panoramic Ladybug3 camera and 360 LiDAR array are equipped on the host vehicle [a vehicle] to capture a full field of view"; p. 1723, Abstract "Given the input states of different sensors, our approach chooses one input with a higher expected cumulative reward as the observation of a Kalman filter to iteratively predict the target position… The expected cumulative reward is estimated with a convolutional neural network, trained with a modified DQN algorithm, which takes inputs from both LiDAR and a camera. [a set of sensors, each sensor providing data]"; sensors providing image or LiDAR data; the system with sensors is a perception system)
estimating a variable comprising at least one feature in relation to one or more objects detected in an environment of the vehicle, (Fang, p. 1724, III. Methodology "S={st} is a set of descriptions of the target vehicle state [objects detected in an environment], at each time t... We designed a convolutional neural network Q-network as shown in Fig. 3(b) to map a given st=(mc(μ¯t), ml(μ¯t)) to expected Q-values, Qc = Qπ(s,ac) and Ql = Qπ(s,al), [estimate a variable Q-value, comprising feature s in relation to objects] if actions ac and al are respectively taken."; the whole system with sensors, DQN and fusion tracking is the perception system comprising an estimation device)
wherein the estimating implements online learning step a neural network to estimate said variable, the neural network being associated with a set of weights, (Fang, p. 1726, C. CNN Q-Network "We adopted the training framework (Algorithm 1) as DQN [12]. A deep convolutional neural network [an online learning module using a neural network to estimate Q(s, a; θ)] is first initialized with a random weight θ as the prediction network, and a target network θ− is introduced, sharing the same parameters."; p. 1725, Fig. 3. System framework of our fusion algorithm. "(b) Architecture of CNN Q-network; Input: image patch and OGM patch around target position u; Output: scores of different actions in current state. [see Fig. 3 (b) Qc and Ql, estimate said variable]")
… determining a fusion output by implementing at least one sensor fusion algorithm based on at least some of said predicted values; and (Fang, p. 1723, Abstract "We formulate camera and LiDAR fusion tracking as a sequential decision-making process."; B. Reinforcement Learning of a Fusion-Based Tracking "if Qc > Ql, [a fusion algorithm based on q-values (Qc and Ql) to output 1:0 or 0:1 (Qc or Ql, a fusion output), see Fig. 3] the circuit of camera tracking result is activated, and {μct, Σct} is forwarded to a Kalman filter-based tracking module. Instead, a LiDAR tracking result {μlt, Σlt} is used.")
Fang does not teach, but Gou teaches: wherein the online learning comprises: propagating data from one or more sensors, applied at an input of the neural network, so as to provide a predicted output comprising an estimation of said variable; (Gou, [0246] "1. Do a feedforward pass for the current state s to get predicted q values [a predicted output comprising an estimation of said variable q-value] for all actions. 2. Do a feedforward pass for the next state s′ and calculate maximum overall network outputs max_a′Q(s′, a′)."; [0128] "Input component 210 may include... Additionally, or alternatively, input component 210 may include a sensor for sensing information (e.g., a global positioning system... )")
… updating the weights associated with the neural network online by determining a loss function representing an error between an improved predicted value of said fusion output and said predicted output by performing a gradient descent backpropagation. (Gou, [0246] "an implementation of a DQN network is depicted… a loss function may be represented by the following equation: … r + γmax Q(s', a') - Q(s,a)… [a loss function representing an error between an improved predicted value of said fusion output (r + γmax Q(s', a')) and said predicted output (Q(s,a)), i.e. the difference between the target q-value and predicted q-value] and Q may be a neural network [the neural network online]… 4. Update the weights using backpropagation. [a gradient descent backpropagation]")
The rationale for combining the teachings of Fang and Gou is the same as set forth in the rejection of claim 12.
Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Fang and Gou as applied to claim 17, and further in view of Andersen ("The Dreaming Variational Autoencoder for Reinforcement Learning Environments," October 2, 2018).
In regard to claim 18, Fang and Gou do not teach, but Andersen teaches: further comprising a recurrent neural network encoder configured to encode and compress the data prior to storage in the replay buffer, and a decoder configured to decode and decompress the data extracted from the replay buffer. (Andersen, p. 4, Fig. 1 "DVAE can also use LSTM [recurrent encoder and recurrent decoder] to better learn longer sequences in continuous state-spaces."; p. 12, 6.3 Deep Line Wars Environment Modeling using DVAE "we expand the DVAE algorithm with LSTM to improve the interpretation of animations, illustrated Figure 1."; p. 5 Algorithm 1 "The Dreaming Variational Autoencoder... "; see Fig. 1, the recurrent structure of the encoder Q and the decoder P; see Algorithm 1, line 19 [the data (st) extracted from the replay buffer], line 21 [Q(z|X) encode the data (X including st)], line 22 [P(X|z) decode the data (X including st, extracted from the buffer)], line 23 [prior to storage in the replay buffer])
[Embedded image: media_image2.png (greyscale)]
[Embedded image: media_image3.png (greyscale)]
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Fang and Gou to incorporate the teachings of Andersen by including a Dreaming Variational Autoencoder with an LSTM structure. Doing so would allow the model to better learn longer sequences in continuous state-spaces. (Andersen, p. 4, Fig. 1 "DVAE can also use LSTM to better learn longer sequences in continuous state-spaces.")
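A minimal sketch of such a recurrent encode/compress and decode/decompress arrangement (an illustration in PyTorch-style Python under the assumptions noted in the comments; not Andersen's DVAE implementation):

import torch
import torch.nn as nn

# Illustrative only: an LSTM encoder compresses a sensor sequence to a small
# latent vector before it is stored in the replay buffer; an LSTM decoder
# reconstructs the sequence when it is read back out. Dimensions are arbitrary.

class RNNEncoder(nn.Module):
    def __init__(self, in_dim=16, latent_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, latent_dim, batch_first=True)

    def forward(self, seq):                          # seq: (batch, time, in_dim)
        _, (h, _) = self.lstm(seq)
        return h[-1]                                 # compressed latent, (batch, latent_dim)

class RNNDecoder(nn.Module):
    def __init__(self, latent_dim=4, out_dim=16):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, out_dim, batch_first=True)

    def forward(self, z, steps):                     # z: (batch, latent_dim)
        z_seq = z.unsqueeze(1).repeat(1, steps, 1)   # feed the latent at every time step
        out, _ = self.lstm(z_seq)
        return out                                   # reconstructed sequence

encoder, decoder = RNNEncoder(), RNNDecoder()
seq = torch.randn(1, 8, 16)                          # one 8-step sequence of 16-dim features
latent = encoder(seq)                                # encode/compress prior to buffering
reconstruction = decoder(latent, steps=8)            # decode/decompress after retrieval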
In regard to claim 19, Fang and Gou do not teach, but Andersen teaches: wherein the encoder is a recurrent neural network encoder and the decoder is a recurrent neural network decoder. (Andersen, p. 4, Fig. 1 "DVAE can also use LSTM [recurrent encoder and recurrent decoder] to better learn longer sequences in continuous state-spaces."; p. 12, 6.3 Deep Line Wars Environment Modeling using DVAE "we expand the DVAE algorithm with LSTM to improve the interpretation of animations, illustrated Figure 1."; see Fig. 1, the recurrent structure of the encoder Q and the decoder P)
The rationale for combining the teachings of Fang, Gou and Andersen is the same as set forth in the rejection of claim 18.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Fang and Gou as applied to claim 17, and further in view of Luciw (US 20190061147 A1).
In regard to claim 20, Fang and Gou do not teach, but Luciw teaches: wherein the replay buffer is prioritized. (Luciw, [0056] "A prioritization method can also be applied to pruning the memory. Instead of preferentially sampling the experiences with the highest priorities from experience memory D")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Fang and Gou to incorporate the teachings of Luciw by including TD-error as the basis for prioritization. Doing so would increase learning efficiency and eventual performance. (Luciw, [0051] "Using TD-error as the basis for prioritization for Double DQN increases learning efficiency and eventual performance.")
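A minimal sketch of TD-error-based prioritization with lowest-priority pruning (an illustration with hypothetical names; Luciw's implementation may differ):

import heapq
import itertools

tie = itertools.count()          # tie-breaker so entries with equal priority still compare
memory = []                      # min-heap of (priority, tie, experience) entries

def add(experience, td_error, capacity=1000):
    """Store an experience with |TD-error| as its priority; prune lowest first."""
    heapq.heappush(memory, (abs(td_error), next(tie), experience))
    if len(memory) > capacity:
        heapq.heappop(memory)    # remove the lowest-priority experience from memory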
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Fang and Gou as applied to claim 17, in view of Luciw, and further in view of Liu (TW 201315187 A).
In regard to claim 21, Fang and Gou do not teach, but Luciw teaches: wherein the device is configured to implement a condition for testing input data applied at input of a neural network, input data being deleted from the replay buffer when the loss function between the value predicted for this input sample and the fusion output... (Luciw, [0051] "an approximation of expected learning progress is the temporal difference (TD) error... (6) Using TD-error as the basis for prioritization [the loss function between the value predicted for this input sample and the fusion output] for Double DQN increases learning efficiency and eventual performance. However, other prioritization methods could be used, such as prioritization by dissimilarity."; [0056] "A prioritization method can also be applied to pruning the memory. Instead of preferentially sampling the experiences with the highest priorities from experience memory D, the experiences with the lowest priorities are preferentially removed from experience memory D. Erasing memories is more final than assigning priorities, but can be necessary depending on the application."; experiences include input data st)
The rationale for combining the teachings of Fang, Gou and Luciw is the same as set forth in the rejection of claim 20.
Fang, Gou and Luciw do not teach, but Liu teaches: … is lower than a predefined threshold. (Liu, p. 14 "the content may be removed when the high speed access memory reaches a predetermined fullness level and/or the priority of the published content is prioritized below a threshold value. [a predefined threshold] In an embodiment, when the high speed access memory reaches a predetermined level of overflow (eg, when the memory is full, 95% full, or 90% full), the lowest priority content of the high speed access can be cleared.")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Fang, Gou and Luciw to incorporate the teachings of Liu by including user-defined policies for removing the content from the memory. Doing so would allow the system to free up previously unavailable memory according to the policies. (Liu, p. 14 "it may be implemented to remove content from high speed access functions and/or to publish (or delete content from the list) predefined, user-defined… policies."; p. 20 "The request may include an instruction to remove the content if the available storage or memory in the selected SRF 220-2 is insufficient.")
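Read together, the cited references suggest the pruning rule recited in claim 21; a minimal illustrative Python sketch (hypothetical values only) of deleting buffered input data whose loss falls below a predefined threshold:

THRESHOLD = 1e-3                                    # predefined threshold (illustrative value)
buffered = [{"sample": "x1", "loss": 0.42},         # hypothetical buffered entries with their
            {"sample": "x2", "loss": 1e-5}]         # most recent loss values
buffered = [e for e in buffered if e["loss"] >= THRESHOLD]   # the low-loss entry is deleted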
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SU-TING CHUANG whose telephone number is (408)918-7519. The examiner can normally be reached Monday - Thursday 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SU-TING CHUANG/Examiner, Art Unit 2146