DETAILED ACTION
This action is in response to the application filed on 06/27/2023. Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 1 recites
determining a loss function based on the first training data, wherein the loss function defines a region of zero loss based on a minimum and a maximum of the first training data (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060]. This limitation is also a mental process because, given the equation, a human can mentally calculate the loss function)
calculating, using the loss function, a loss of the ML model based on the prediction output (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060])
Therefore, claim 1 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 1 further recites additional elements of
obtaining a dataset that includes (i) first training data obtained using two or more ground truth sensing systems and (ii) second training data obtained using a prediction sensing system configured to implement the ML model (this element does not integrate the abstract idea into a practical application because it is mere data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)))
calculating, using the ML model, a prediction output based on the second training data (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
updating the ML model based on the calculated loss (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 1 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination, because
obtaining a dataset that includes (i) first training data obtained using two or more ground truth sensing systems and (ii) second training data obtained using a prediction sensing system configured to implement the ML model is well-understood, routine, and conventional activity. The courts have recognized that “Receiving or transmitting data over a network, e.g., using the Internet to gather data” is a computer function that is well-understood, routine, and conventional (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)).
calculating, using the ML model, a prediction output based on the second training data amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
updating the ML model based on the calculated loss amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 1 is subject-matter ineligible.
Regarding Claim 2:
Subject Matter Eligibility Analysis Step 1:
Claim 2 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 2 is dependent on claim 1, the Subject Matter Eligibility Analysis from claim 1 can be applied to claim 2. Therefore, claim 2 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 2 further recites additional elements of
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system, and the prediction sensing system includes at least one audio sensor (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 2 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 2 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system, and the prediction sensing system includes at least one audio sensor merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 2 is subject-matter ineligible.
Regarding Claim 3:
Subject Matter Eligibility Analysis Step 1:
Claim 3 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 3 is dependent on claim 1, the Subject Matter Eligibility Analysis from claim 1 can be applied to claim 3. Therefore, claim 3 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 3 further recites additional elements of
the ML model is configured to detect features in an audio stream of data (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 3 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 3 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ML model is configured to detect features in an audio stream of data merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 3 is subject-matter ineligible.
Regarding Claim 4:
Subject Matter Eligibility Analysis Step 1:
Claim 4 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 4 recites
determining the loss function includes defining the region of zero loss between a lower bound based on the minimum of the first training data and an upper bound based on the maximum of the first training data (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060]. This limitation is also a mental process because, given the equation, a human can mentally calculate the loss function)
Therefore, claim 4 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 4 does not recite any additional elements; therefore, claim 4 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
Since there are no additional elements, claim 4 does not provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 4 is subject-matter ineligible.
Regarding Claim 5:
Subject Matter Eligibility Analysis Step 1:
Claim 5 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 5 recites
determining a correctness function based on the first training data (this limitation is a mathematical concept because an equation for the correctness function is given in the specification [0066]. This limitation is also a mental process because, given the equation, a human can mentally calculate the correctness function)
Therefore, claim 5 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 5 further recites additional elements of
the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 5 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 5 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 5 is subject-matter ineligible.
Regarding Claim 6:
Subject Matter Eligibility Analysis Step 1:
Claim 6 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 6 recites
calculating an accuracy of the ML model using the correctness function (this limitation is a mathematical concept because an equation for the correctness function is given in the specification [0066]. This limitation is also a mental process because, given the equation, a human can mentally calculate the correctness function)
Therefore, claim 6 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 6 further recites additional elements of
updating the ML model further based on the calculated accuracy (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 6 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 6 do not provide significantly more than the abstract idea itself, taken alone and in combination because
updating the ML model further based on the calculated accuracy amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 6 is subject-matter ineligible.
Regarding Claim 7:
Subject Matter Eligibility Analysis Step 1:
Claim 7 recites a method, which is directed to a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 7 is dependent on claim 1, the Subject Matter Eligibility Analysis from claim 1 can be applied to claim 7. Therefore, claim 7 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 7 further recites additional elements of
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 7 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 7 do not provide significantly more than the abstract idea itself, taken alone and in combination because
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 7 is subject-matter ineligible.
Regarding Claim 8:
Subject Matter Eligibility Analysis Step 1:
Claim 8 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 8 recites
determining a loss function based on the first training data, wherein the loss function defines a region of zero loss based on a minimum and a maximum of the first training data (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060]. This limitation is also a mental process because, given the equation, a human can mentally calculate the loss function)
calculating, using the loss function, a loss of the ML model based on the prediction output (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060])
Therefore, claim 8 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 8 further recites additional elements of
a loss function circuitry (this element does not integrate the abstract idea into a practical application because it is a generic computing component on which to perform the abstract idea (see MPEP 2106.05(f)))
an ML circuitry configured to implement the ML model (this element does not integrate the abstract idea into a practical application because it is a generic computing component on which to perform the abstract idea (see MPEP 2106.05(f)))
receiving a dataset that includes first training data obtained using two or more ground truth sensing systems (this element does not integrate the abstract idea into a practical application because it is mere data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)))
receiving second training data obtained using a prediction sensing system configured to implement the ML model (this element does not integrate the abstract idea into a practical application because it is mere data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)))
calculating, using the ML model, a prediction output based on the second training data (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
updating the ML model based on the calculated loss (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 8 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 8 do not provide significantly more than the abstract idea itself, taken alone and in combination, because
a loss function circuitry uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
an ML circuitry configured to implement the ML model uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
receiving a dataset that includes first training data obtained using two or more ground truth sensing systems is well-understood, routine, and conventional activity. The courts have recognized that “Receiving or transmitting data over a network, e.g., using the Internet to gather data” is a computer function that is well-understood, routine, and conventional (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)).
receiving second training data obtained using a prediction sensing system configured to implement the ML model is likewise well-understood, routine, and conventional data gathering for the same reason (buySAFE, 765 F.3d at 1355).
calculating, using the ML model, a prediction output based on the second training data amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
updating the ML model based on the calculated loss amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 8 is subject-matter ineligible.
Regarding Claim 9:
Subject Matter Eligibility Analysis Step 1:
Claim 9 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 9 is dependent on claim 8, the Subject Matter Eligibility Analysis from claim 8 can be applied to claim 9. Therefore, claim 9 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 9 further recites additional elements of
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system, and the prediction sensing system includes at least one audio sensor (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 9 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 9 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system, and the prediction sensing system includes at least one audio sensor merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 9 is subject-matter ineligible.
Regarding Claim 10:
Subject Matter Eligibility Analysis Step 1:
Claim 10 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 10 is dependent on claim 8, the Subject Matter Eligibility Analysis from claim 8 can be applied to claim 10. Therefore, claim 10 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 10 further recites additional elements of
the ML model is configured to detect features in an audio stream of data (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 10 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 10 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ML model is configured to detect features in an audio stream of data merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 10 is subject-matter ineligible.
Regarding Claim 11:
Subject Matter Eligibility Analysis Step 1:
Claim 11 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 11 recites
the loss function circuitry is configured to define the region of zero loss between a lower bound based on the minimum of the first training data and an upper bound based on the maximum of the first training data (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060]. This limitation is also a mental process because, given the equation, a human can mentally calculate the loss function)
Therefore, claim 11 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 11 does not recite any additional elements; therefore, claim 11 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
Since there are no additional elements, claim 11 does not provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 11 is subject-matter ineligible.
Regarding Claim 12:
Subject Matter Eligibility Analysis Step 1:
Claim 12 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 12 recites
correctness function circuitry configured to determine a correctness function based on the first training data (this limitation is a mathematical concept because an equation for the correctness function is given in the specification [0066]. This limitation is also a mental process because, given the equation, a human can mentally calculate the correctness function)
Therefore, claim 12 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 12 further recites additional elements of
the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 12 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 12 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 12 is subject-matter ineligible.
Regarding Claim 13:
Subject Matter Eligibility Analysis Step 1:
Claim 13 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 13 recites
the correctness function circuitry is configured to calculate an accuracy of the ML model using the correctness function (this limitation is a mathematical concept because an equation for the correctness function is given in the specification [0066]. This limitation is also a mental process because, given the equation, a human can mentally calculate the correctness function)
Therefore, claim 13 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 13 further recites additional elements of
the ML circuitry is configured to update the ML model further based on the calculated accuracy (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 13 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 13 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ML circuitry is configured to update the ML model further based on the calculated accuracy amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 13 is subject-matter ineligible.
Regarding Claim 14:
Subject Matter Eligibility Analysis Step 1:
Claim 14 recites a system, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 14 is dependent on claim 8, the Subject Matter Eligibility Analysis from claim 8 can be applied to claim 14. Therefore, claim 14 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 14 further recites additional elements of
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 14 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 14 do not provide significantly more than the abstract idea itself, taken alone and in combination because
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 14 is subject-matter ineligible.
Regarding Claim 15:
Subject Matter Eligibility Analysis Step 1:
Claim 15 recites a computing device, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 15 recites
determining a loss function based on the first training data, wherein the loss function defines a region of zero loss based on a minimum and a maximum of the first training data (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060]. This limitation is also a mental process because, given the equation, a human can mentally calculate the loss function)
calculating, using the loss function, a loss of the ML model based on the prediction output (this limitation is a mathematical concept because an equation for the loss function is given in the specification [0060])
Therefore, claim 15 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 15 further recites additional elements of
obtaining a dataset that includes (i) first training data obtained using two or more ground truth sensing systems and (ii) second training data obtained using a prediction sensing system configured to implement the ML model (this element does not integrate the abstract idea into a practical application because it is mere data gathering, which is an insignificant extra-solution activity (see MPEP 2106.05(g)))
calculating, using the ML model, a prediction output based on the second training data (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
updating the ML model based on the calculated loss (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the abstract idea (see MPEP 2106.05(f)))
Therefore, claim 15 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 15 do not provide significantly more than the abstract idea itself, taken alone and in combination, because
obtaining a dataset that includes (i) first training data obtained using two or more ground truth sensing systems and (ii) second training data obtained using a prediction sensing system configured to implement the ML model is well-understood, routine, and conventional activity. The courts have recognized that “Receiving or transmitting data over a network, e.g., using the Internet to gather data” is a computer function that is well-understood, routine, and conventional (buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014)).
calculating, using the ML model, a prediction output based on the second training data amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
updating the ML model based on the calculated loss amounts to mere instructions to apply the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 15 is subject-matter ineligible.
Regarding Claim 16:
Subject Matter Eligibility Analysis Step 1:
Claim 16 recites a computing device, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 16 is dependent on claim 15, the Subject Matter Eligibility Analysis from claim 15 can be applied to claim 16. Therefore, claim 16 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 16 further recites additional elements of
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system, and the prediction sensing system includes at least one audio sensor (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 16 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 16 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system, and the prediction sensing system includes at least one audio sensor merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 16 is subject-matter ineligible.
Regarding Claim 17:
Subject Matter Eligibility Analysis Step 1:
Claim 17 recites a computing device, which is directed to a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 17 is dependent on claim 15, the Subject Matter Eligibility Analysis from claim 15 can be applied to claim 17. Therefore, claim 17 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 17 further recites additional elements of
the ML model is configured to detect features in an audio stream of data (this element does not integrate the abstract idea into a practical application because it merely indicates a field of use in which to apply the judicial exception (see MPEP 2106.05(h)))
Therefore, claim 17 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 17 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the ML model is configured to detect features in an audio stream of data merely specifies a field of use in which to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 17 is subject-matter ineligible.
Regarding Claim 18:
Subject Matter Eligibility Analysis Step 1:
Claim 18 recites a system, which directs to a machine and is one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 18 recites
determining the loss function includes defining the region of zero loss between a lower bound based on the minimum of the first training data and an upper bound based on the maximum of the first training data (this limitation is a mathematical concept since an equation is given for the loss function within the specification [0060]. This limitation is also a mental process since a human can mentally calculate the loss function given the equation)
Therefore, claim 18 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 18 does not recite any additional elements, therefore claim 18 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
Since there are no additional elements, claim 18 does not provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 18 is subject-matter ineligible.
Regarding Claim 19:
Subject Matter Eligibility Analysis Step 1:
Claim 19 recites a computing device, which directs to a machine and is one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 19 recites
… to determine a correctness function based on the first training data (this limitation is a mathematical concept since an equation is given for the correctness function within the specification [0066]. This limitation is also a mental process since a human can mentally calculate the correctness function given the equation)
Therefore, claim 19 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 19 further recites additional elements of
the processing device is further configured to execute instructions stored in memory … (this element does not integrate the abstract idea into a practical application because it is a generic computing component on which to perform the abstract idea (see MPEP 2106.05(f)))
the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss (this element does not integrate the abstract idea into a practical application because it amounts to mere instructions to apply the exception (see MPEP 2106.05(f)))
Therefore, claim 19 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 19 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the processing device is further configured to execute instructions stored in memory … uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss is an instruction to apply to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f))
Therefore, claim 19 is subject-matter ineligible.
Regarding Claim 20:
Subject Matter Eligibility Analysis Step 1:
Claim 20 recites a system, which directs to a machine and is one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Since claim 20 is dependent on claim 15, the Subject Matter Eligibility Analysis from claim 15 can be applied to claim 20. Therefore, claim 20 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 20 further recites additional elements of
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval (this element does not integrate the abstract idea into a practical application because it directs to a field of use limitation in which to apply a judicial exception (see MPEP 2106.05(h)))
Therefore, claim 20 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements in claim 20 do not provide significantly more than the abstract idea itself, taken alone and in combination because
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval specifies a field of use limitation to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h))
Therefore, claim 20 is subject-matter ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 4-6, 8, 11-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kurian et al. (US 20220114489 A1) (hereafter referred to as Kurian) in view of Kokkinis et al. (US 20210089967 A1) (hereafter referred to as Kokkinis) in further view of Vapnik et al. (The Nature of Statistical Learning Theory) (hereafter referred to as Vapnik).
Regarding Claim 1:
Kurian teaches
obtaining a dataset that includes … (ii) second training data obtained using a prediction sensing system configured to implement the ML model (Kurian, page 1, paragraph 0001, “determining measurement data from a first sensor; determining approximations of ground truths based on a second sensor; and training the machine-learning method based on the measurement data and the approximations of ground truths”. Examiner notes that the measurement data from Kurian is being mapped to the second training data and, according to FIG. 3, Computer system 300 is being mapped to the prediction sensing system).
calculating, using the ML model, a prediction output based on the second training data (Kurian, page 2, paragraph 0038, “To train a (artificial) neural network to learn the mapping f(x)=y, where x may denote the input signal and y may denote the target signal, the cross entropy between the network prediction f(x) and the target variable y may be minimized”. Examiner notes that “the input signal (i.e. the measurement data, for example input radar signal)” (Kurian, page 2, paragraph 37) is mapped to the second training data)
calculating, using the loss function, a loss of the ML model based on the prediction output (Kurian, page 3, paragraph 0048, “the afore mentioned cross entropy (or cross entropy loss)”)
updating the ML model based on the calculated loss (Kurian, page 3, paragraph 0057, “the machine-learning method may be trained based on the measurement data and the approximations of ground truths, wherein approximations of ground truths of lower-approximation quality have a lower effect on the training than approximations of ground truths of higher-approximation quality”)
Kurian does not teach, but Kokkinis does teach
obtaining a dataset that includes (i) first training data obtained using two or more ground truth sensing systems… (Kokkinis, page 6, paragraph 0063, “training data for many individual source-sensor pairs can be produced and therefore the technology allows the expansion of the feature domain and obtaining of features that are tailored to the multi-sensor environment that one is encountering”)
Kurian and Kokkinis are considered analogous to the claimed invention because they both obtain data through a system. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Kurian to obtain the first training data through multiple sensors. Doing so is advantageous because “there is a need for creating intelligent training dictionaries that enable the rapid and useful convergence of iterative machine learning techniques. An exemplary embodiment presents new methods to improve training dictionaries by taking into account multi-sensor and multi-resolution information that is available in many applications” (Kokkinis, page 2, paragraph 0014).
Kurian and Kokkinis do not teach, but Vapnik further teaches
determining a loss function based on the first training data, wherein the loss function defines a region of zero loss based on a minimum and a maximum of the first training data (Vapnik, page 182-183)
[media_image1.png and media_image2.png: Greyscale excerpts from Vapnik reproducing the ε-insensitive loss function definition and Figure 6.1]
Examiner notes that the loss is 0 when |y - f(x,a)| <= ε. Figure 6.1 also shows that there is a region where the loss is 0, which contains a minimum and maximum.
Kurian, Kokkinis, and Vapnik are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian and Kokkinis by using the ε-insensitive loss function from Vapnik as the loss function of Kurian and Kokkinis. One of ordinary skill in the art would have known to apply the known technique of the ε-insensitive loss function to calculate loss. Therefore, applying Vapnik’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D): applying a known technique to a known device ready for improvement to yield predictable results).
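For clarity of the record, the examiner's reading of Vapnik's ε-insensitive loss may be sketched as follows. This is an illustrative sketch only; the function name and variable names are the examiner's, not Vapnik's, and the numeric values are hypothetical:

```python
def epsilon_insensitive_loss(y, fx, eps):
    """Vapnik's epsilon-insensitive loss: the loss is zero whenever
    |y - f(x, a)| <= eps, defining a "region of zero loss" of width 2*eps
    around the target; outside that band the loss grows linearly."""
    return max(abs(y - fx) - eps, 0.0)

# Inside the band (|1.0 - 1.3| = 0.3 <= 0.5): zero loss
print(epsilon_insensitive_loss(1.0, 1.3, 0.5))  # -> 0.0
# Outside the band (|1.0 - 2.0| = 1.0 > 0.5): linear loss
print(epsilon_insensitive_loss(1.0, 2.0, 0.5))  # -> 0.5
```

The band between y − ε and y + ε corresponds to the region of zero loss bounded by a minimum (lower bound) and a maximum (upper bound), consistent with Figure 6.1 of Vapnik.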
Regarding Claim 4:
Kurian, Kokkinis, and Vapnik teach the method of claim 1 and further teach
determining the loss function includes defining the region of zero loss between a lower bound based on the minimum of the first training data and an upper bound based on the maximum of the first training data (Vapnik, page 182-183)
[media_image1.png and media_image2.png: Greyscale excerpts from Vapnik reproducing the ε-insensitive loss function definition and Figure 6.1]
Examiner notes that the loss is 0 when |y - f(x,a)| <= ε. Figure 6.1 also shows that there is a region where the loss is 0, which contains a minimum and maximum.
Based on claim 1, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 5:
Kurian, Kokkinis, and Vapnik teach the method of claim 1 and further teach
determining a correctness function based on the first training data, wherein the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss (Vapnik, page 182-183)
[media_image1.png: Greyscale excerpt from Vapnik reproducing the ε-insensitive loss function definition]
Examiner notes if |y - f(x,a)| <= ε, then output is 0 , which means it is within the region of zero loss. If not, then the output is |y - f(x,a)| - ε, which means it is not within the region of zero loss.
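The correctness function as characterized above may be sketched as follows. This is an illustrative sketch only; the function name, the "first value"/"second value" encoding (1 and 0), and the tolerance parameter are hypothetical and are not drawn from the cited references:

```python
def correctness(y, fx, eps, tol):
    """Outputs a first value (1) if the prediction fx is within a
    predetermined tolerance tol of the region of zero loss
    (|y - fx| <= eps), and a second value (0) otherwise."""
    return 1 if abs(y - fx) <= eps + tol else 0

# Within tolerance of the zero-loss band: first value
print(correctness(1.0, 1.4, 0.5, 0.1))  # -> 1
# Outside the tolerance: second value
print(correctness(1.0, 2.0, 0.5, 0.1))  # -> 0
```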
Based on claim 1, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 6:
Kurian, Kokkinis, and Vapnik teach the method of claims 1 and 5 and further teach
calculating an accuracy of the ML model using the correctness function and updating the ML model further based on the calculated accuracy (Vapnik, page 182-183)
[media_image1.png: Greyscale excerpt from Vapnik reproducing the ε-insensitive loss function definition]
Examiner notes if |y - f(x,a)| <= ε, then output is 0 , which means it is within the region of zero loss. If not, then the output is |y - f(x,a)| - ε, which means it is not within the region of zero loss.
Based on claim 1, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 8:
Kurian teaches
a loss function circuitry… (Kurian, Figure 2, Examiner notes that the training circuit is where the machine learning model lies, which is where the loss function is being used)
an ML circuitry… (Kurian, Figure 2, Examiner notes that the training circuit is where the machine learning model lies)
the ML includes (i) receiving second training data obtained using a prediction sensing system configured to implement the ML model (Kurian, page 1, paragraph 0001, “determining measurement data from a first sensor; determining approximations of ground truths based on a second sensor; and training the machine-learning method based on the measurement data and the approximations of ground truths”. Examiner notes that the measurement data from Kurian is being mapped to the second training data and, according to FIG. 3, Computer system 300 is being mapped to the prediction sensing system).
(ii) calculating, using the ML model, a prediction output based on the second training data (Kurian, page 2, paragraph 0038, “To train a (artificial) neural network to learn the mapping f(x)=y, where x may denote the input signal and y may denote the target signal, the cross entropy between the network prediction f(x) and the target variable y may be minimized”. Examiner notes that “the input signal (i.e. the measurement data, for example input radar signal)” (Kurian, page 2, paragraph 37) is mapped to the second training data)
the loss function circuitry is further configured to calculate, using the loss function, a loss of the ML model based on the prediction output (Kurian, page 3, paragraph 0048, “the afore mentioned cross entropy (or cross entropy loss)”)
the ML circuitry is configured to update the ML model based on the calculated loss (Kurian, page 3, paragraph 0057, “the machine-learning method may be trained based on the measurement data and the approximations of ground truths, wherein approximations of ground truths of lower-approximation quality have a lower effect on the training than approximations of ground truths of higher-approximation quality”)
Kurian does not teach, but Kokkinis does teach
loss function circuitry configured to (i) receive a dataset that includes first training data obtained using two or more ground truth sensing systems… (Kokkinis, page 6, paragraph 0063, “training data for many individual source-sensor pairs can be produced and therefore the technology allows the expansion of the feature domain and obtaining of features that are tailored to the multi-sensor environment that one is encountering”)
Kurian and Kokkinis are considered analogous to the claimed invention because they both obtain data through a system. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Kurian to obtain the first training data through multiple sensors. Doing so is advantageous because “there is a need for creating intelligent training dictionaries that enable the rapid and useful convergence of iterative machine learning techniques. An exemplary embodiment presents new methods to improve training dictionaries by taking into account multi-sensor and multi-resolution information that is available in many applications” (Kokkinis, page 2, paragraph 0014).
Kurian and Kokkinis do not teach, but Vapnik further teaches
loss function circuitry configured to … (ii) determine a loss function based on the first training data, wherein the loss function defines a region of zero loss based on a minimum and a maximum of the first training data (Vapnik, page 182-183)
[media_image1.png and media_image2.png: Greyscale excerpts from Vapnik reproducing the ε-insensitive loss function definition and Figure 6.1]
Examiner notes that the loss is 0 when |y - f(x,a)| <= ε. Figure 6.1 also shows that there is a region where the loss is 0, which contains a minimum and maximum.
Kurian, Kokkinis, and Vapnik are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian and Kokkinis by using the ε-insensitive loss function from Vapnik as the loss function of Kurian and Kokkinis. One of ordinary skill in the art would have known to apply the known technique of the ε-insensitive loss function to calculate loss. Therefore, applying Vapnik’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D): applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 11:
Kurian, Kokkinis, and Vapnik teach the system of claim 8 and further teach
to determine the loss function, the loss function circuitry is configured to define the region of zero loss between a lower bound based on the minimum of the first training data and an upper bound based on the maximum of the first training data (Vapnik, page 182-183)
[media_image1.png and media_image2.png: Greyscale excerpts from Vapnik reproducing the ε-insensitive loss function definition and Figure 6.1]
Examiner notes that the loss is 0 when |y - f(x,a)| <= ε. Figure 6.1 also shows that there is a region where the loss is 0, which contains a minimum and maximum.
Based on claim 8, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 12:
Kurian, Kokkinis, and Vapnik teach the system of claim 8 and further teach
correctness function circuitry configured to determine a correctness function based on the first training data, wherein the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss (Vapnik, page 182-183)
[media_image1.png: Greyscale excerpt from Vapnik reproducing the ε-insensitive loss function definition]
Examiner notes if |y - f(x,a)| <= ε, then output is 0 , which means it is within the region of zero loss. If not, then the output is |y - f(x,a)| - ε, which means it is not within the region of zero loss.
Based on claim 8, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 13:
Kurian, Kokkinis, and Vapnik teach the system of claims 8 and 12 and further teach
the correctness function circuitry is configured to calculate an accuracy of the ML model using the correctness function and updating the ML model further based on the calculated accuracy (Vapnik, page 182-183)
[media_image1.png: Greyscale excerpt from Vapnik reproducing the ε-insensitive loss function definition]
Examiner notes if |y - f(x,a)| <= ε, then output is 0 , which means it is within the region of zero loss. If not, then the output is |y - f(x,a)| - ε, which means it is not within the region of zero loss.
Based on claim 8, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 15:
Kurian teaches
the computing device including a processing device configured to execute instructions stored in memory (Kurian, page 4, paragraph 0078, “FIG. 3 shows a computer system 300 with a plurality of computer hardware components configured to carry out steps of a computer-implemented method for path planning according to various embodiments. The computer system 300 may include a processor 302, a memory 304, and a non-transitory data storage 306.)
obtaining a dataset that includes … (ii) second training data obtained using a prediction sensing system configured to implement the ML model (Kurian, page 1, paragraph 0001, “determining measurement data from a first sensor; determining approximations of ground truths based on a second sensor; and training the machine-learning method based on the measurement data and the approximations of ground truths”. Examiner notes that the measurement data from Kurian is being mapped to the second training data and, according to FIG. 3, Computer system 300 is being mapped to the prediction sensing system).
calculating, using the ML model, a prediction output based on the second training data (Kurian, page 2, paragraph 0038, “To train a (artificial) neural network to learn the mapping f(x)=y, where x may denote the input signal and y may denote the target signal, the cross entropy between the network prediction f(x) and the target variable y may be minimized”. Examiner notes that “the input signal (i.e. the measurement data, for example input radar signal)” (Kurian, page 2, paragraph 37) is mapped to the second training data)
calculating, using the loss function, a loss of the ML model based on the prediction output (Kurian, page 3, paragraph 0048, “the afore mentioned cross entropy (or cross entropy loss)”)
updating the ML model based on the calculated loss (Kurian, page 3, paragraph 0057, “the machine-learning method may be trained based on the measurement data and the approximations of ground truths, wherein approximations of ground truths of lower-approximation quality have a lower effect on the training than approximations of ground truths of higher-approximation quality”)
Kurian does not teach, but Kokkinis does teach
obtaining a dataset that includes (i) first training data obtained using two or more ground truth sensing systems… (Kokkinis, page 6, paragraph 0063, “training data for many individual source-sensor pairs can be produced and therefore the technology allows the expansion of the feature domain and obtaining of features that are tailored to the multi-sensor environment that one is encountering”)
Kurian and Kokkinis are considered analogous to the claimed invention because they both obtain data through a system. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified Kurian to obtain the first training data through multiple sensors. Doing so is advantageous because “there is a need for creating intelligent training dictionaries that enable the rapid and useful convergence of iterative machine learning techniques. An exemplary embodiment presents new methods to improve training dictionaries by taking into account multi-sensor and multi-resolution information that is available in many applications” (Kokkinis, page 2, paragraph 0014).
Kurian and Kokkinis do not teach, but Vapnik further teaches
determining a loss function based on the first training data, wherein the loss function defines a region of zero loss based on a minimum and a maximum of the first training data (Vapnik, page 182-183)
[media_image1.png and media_image2.png: Greyscale excerpts from Vapnik reproducing the ε-insensitive loss function definition and Figure 6.1]
Examiner notes that the loss is 0 when |y - f(x,a)| <= ε. Figure 6.1 also shows that there is a region where the loss is 0, which contains a minimum and maximum.
Kurian, Kokkinis, and Vapnik are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian and Kokkinis by using the ε-insensitive loss function from Vapnik as the loss function of Kurian and Kokkinis. One of ordinary skill in the art would have known to apply the known technique of the ε-insensitive loss function to calculate loss. Therefore, applying Vapnik’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D): applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 18:
Kurian, Kokkinis, and Vapnik teach the system of claim 15 and further teach
determining the loss function includes defining the region of zero loss between a lower bound based on the minimum of the first training data and an upper bound based on the maximum of the first training data (Vapnik, page 182-183)
[media_image1.png and media_image2.png: Greyscale excerpts from Vapnik reproducing the ε-insensitive loss function definition and Figure 6.1]
Examiner notes that the loss is 0 when |y - f(x,a)| <= ε. Figure 6.1 also shows that there is a region where the loss is 0, which contains a minimum and maximum.
Based on claim 15, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Regarding Claim 19:
Kurian, Kokkinis, and Vapnik teach the system of claim 15 and further teach
determining a correctness function based on the first training data, wherein the correctness function is configured to (i) output a first value in response to the prediction output being within a predetermined tolerance of the region of zero loss and (ii) output a second value in response to the prediction output not being within the predetermined tolerance of the region of zero loss (Vapnik, page 182-183)
[media_image1.png: Greyscale excerpt from Vapnik reproducing the ε-insensitive loss function definition]
Examiner notes if |y - f(x,a)| <= ε, then output is 0 , which means it is within the region of zero loss. If not, then the output is |y - f(x,a)| - ε, which means it is not within the region of zero loss.
Based on claim 15, Kurian, Kokkinis, and Vapnik are analogous and it would be obvious to one having ordinary skill in the art to combine the prior arts.
Claim(s) 2, 7, 9, 14, 16, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kurian, Kokkinis, and Vapnik in view of Arandjelović et al. (Look, Listen, and Learn) (hereafter referred to as Arandjelović).
Regarding Claim 2, Kurian, Kokkinis, and Vapnik teach the method of claim 1. Arandjelović does teach
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system and the prediction sensing system includes at least one audio sensor (Arandjelović, page 2, paragraph 2.1, “To tackle the AVC task, we propose the network structure ... It has three distinct parts: the vision and the audio subnetworks which extract visual and audio features, respectively, and the fusion network which takes these features into account to produce the final decision on whether the visual and audio signals correspond”. Examiner notes that since visual features are taken, there is a camera system. There is also “the L3-Net audio subnetwork trained on Flickr-SoundNet is used to extract features from 1 second audio clips” (Arandjelović , page 4, paragraph 3.3), which shows that the audio sensor is part of the prediction system. In page 3, Figure 2 illustrates both the visual and audio parts of the network).
Kurian, Kokkinis, Vapnik, and Arandjelović are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis, and Vapnik by incorporating the camera and audio system from Arandjelović into the machine learning framework from Kurian. One of ordinary skill in the art would have known to apply the known technique of using multiple different sensors. Therefore, applying Arandjelović’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D): applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 7, Kurian, Kokkinis, and Vapnik teach the method of claim 1. Arandjelović does teach
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval (Arandjelović, page 2, paragraph 2.1, “To tackle the AVC task, we propose the network structure ... It has three distinct parts: the vision and the audio subnetworks which extract visual and audio features, respectively, and the fusion network which takes these features into account to produce the final decision on whether the visual and audio signals correspond”. Examiner notes that since visual features are taken, there is a camera system. There is also “the L3-Net audio subnetwork trained on Flickr-SoundNet is used to extract features from 1 second audio clips” (Arandjelović , page 4, paragraph 3.3), which shows that the audio sensor is part of the prediction system. In page 3, Figure 2 illustrates both the visual and audio parts of the network).
Kurian, Kokkinis, Vapnik, and Arandjelović are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis, and Vapnik by incorporating the camera and audio system from Arandjelović into the multiple ground truth sensing systems from Kurian and Kokkinis. One of ordinary skill in the art would have known to apply the known technique of using multiple different sensors. Therefore, applying Arandjelović’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D): applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 9, Kurian, Kokkinis, and Vapnik teach the system of claim 8. Arandjelović does teach
the ground truth sensing systems include at least one of a camera system, a radar system, and a photocell system and the prediction sensing system includes at least one audio sensor (Arandjelović, page 2, paragraph 2.1, “To tackle the AVC task, we propose the network structure ... It has three distinct parts: the vision and the audio subnetworks which extract visual and audio features, respectively, and the fusion network which takes these features into account to produce the final decision on whether the visual and audio signals correspond”. Examiner notes that since visual features are taken, there is a camera system. There is also “the L3-Net audio subnetwork trained on Flickr-SoundNet is used to extract features from 1 second audio clips” (Arandjelović , page 4, paragraph 3.3), which shows that the audio sensor is part of the prediction system. In page 3, Figure 2 illustrates both the visual and audio parts of the network).
Kurian, Kokkinis, Vapnik and Arandjelović are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis and Vapnik by incorporating a camera and audio system from Arandjelović into the machine learning framework from Kurian. One of ordinary skill in the art would have known to apply the known technique of using multiple different sensors. Therefore, applying Arandjelović’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 14, Kurian, Kokkinis, and Vapnik teach the system of claim 8. Arandjelović further teaches
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval (Arandjelović, page 2, paragraph 2.1, “To tackle the AVC task, we propose the network structure ... It has three distinct parts: the vision and the audio subnetworks which extract visual and audio features, respectively, and the fusion network which takes these features into account to produce the final decision on whether the visual and audio signals correspond”. Examiner notes that since visual features are taken, there is a camera system. There is also “the L3-Net audio subnetwork trained on Flickr-SoundNet is used to extract features from 1 second audio clips” (Arandjelović, page 4, paragraph 3.3), which shows that the audio sensor is part of the prediction system. In page 3, Figure 2 illustrates both the visual and audio parts of the network).
Kurian, Kokkinis, Vapnik and Arandjelović are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis and Vapnik by incorporating a camera and audio system from Arandjelović into the multiple ground truth sensing systems from Kurian and Kokkinis. One of ordinary skill in the art would have known to apply the known technique of using multiple different sensors. Therefore, applying Arandjelović’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 20, Kurian, Kokkinis, and Vapnik teach the system of claim 15. Arandjelović further teaches
(i) the two or more ground truth sensing systems are configured to visually detect an object in a first time interval and (ii) the prediction sensing system is configured to detect audio features associated with the object in the first time interval (Arandjelović, page 2, paragraph 2.1, “To tackle the AVC task, we propose the network structure ... It has three distinct parts: the vision and the audio subnetworks which extract visual and audio features, respectively, and the fusion network which takes these features into account to produce the final decision on whether the visual and audio signals correspond”. Examiner notes that since visual features are taken, there is a camera system. There is also “the L3-Net audio subnetwork trained on Flickr-SoundNet is used to extract features from 1 second audio clips” (Arandjelović, page 4, paragraph 3.3), which shows that the audio sensor is part of the prediction system. In page 3, Figure 2 illustrates both the visual and audio parts of the network).
Kurian, Kokkinis, Vapnik and Arandjelović are considered analogous to the claimed invention because all of them determine a loss based on datasets. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis and Vapnik by incorporating a camera and audio system from Arandjelović into the multiple ground truth sensing systems from Kurian and Kokkinis. One of ordinary skill in the art would have known to apply the known technique of using multiple different sensors. Therefore, applying Arandjelović’s technique would yield the predictable result of calculating the loss between datasets (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kurian, Kokkinis, and Vapnik in view of Kun et al. (US 20120143363 A1) (hereafter referred to as Kun).
Regarding Claim 3, Kurian, Kokkinis, and Vapnik teach the method of claim 1. Kun further teaches
the ML model is configured to detect features in an audio stream of data (Kun, page 1, paragraph 0018, “the apparatus comprises: an audio stream dividing section for dividing the input audio stream into a series of slices; a feature extracting section for extracting short-term features and long-term features for each slice; and classifying section for obtaining a classification result of the input audio stream based on the extracted short-term features and the long-term features.”)
Kurian, Kokkinis, Vapnik and Kun are considered analogous to the claimed invention because they deal with audio data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis and Vapnik by incorporating the apparatus from Kun into the machine learning model from Kurian. One of ordinary skill in the art would have known to apply the known technique of detecting features from audio data. Therefore, applying Kun’s technique would yield the predictable result of detecting features from a stream of audio data (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 10, Kurian, Kokkinis, and Vapnik teach the system of claim 8. Kun further teaches
the ML model is configured to detect features in an audio stream of data (Kun, page 1, paragraph 0018, “the apparatus comprises: an audio stream dividing section for dividing the input audio stream into a series of slices; a feature extracting section for extracting short-term features and long-term features for each slice; and classifying section for obtaining a classification result of the input audio stream based on the extracted short-term features and the long-term features.”)
Kurian, Kokkinis, Vapnik and Kun are considered analogous to the claimed invention because they deal with audio data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis and Vapnik by incorporating the apparatus from Kun into the machine learning model from Kurian. One of ordinary skill in the art would have known to apply the known technique of detecting features from audio data. Therefore, applying Kun’s technique would yield the predictable result of detecting features from a stream of audio data (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).
Regarding Claim 17, Kurian, Kokkinis, and Vapnik teach the system of claim 15. Kun further teaches
the ML model is configured to detect features in an audio stream of data (Kun, page 1, paragraph 0018, “the apparatus comprises: an audio stream dividing section for dividing the input audio stream into a series of slices; a feature extracting section for extracting short-term features and long-term features for each slice; and classifying section for obtaining a classification result of the input audio stream based on the extracted short-term features and the long-term features.”)
Kurian, Kokkinis, Vapnik and Kun are considered analogous to the claimed invention because they deal with audio data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to modify Kurian, Kokkinis and Vapnik by incorporating the apparatus from Kun into the machine learning model from Kurian. One of ordinary skill in the art would have known to apply the known technique of detecting features from audio data. Therefore, applying Kun’s technique would yield the predictable result of detecting features from a stream of audio data (MPEP 2141 (III)(D) Applying a known technique to a known device ready for improvement to yield predictable results).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Huan Song et al. (US 20210201159 A1) discloses training a machine learning system based on two sensor data. Keshwani et al. (US 20200410677 A1) discloses calculating a prediction accuracy of a convolutional neural network. Gfeller et al. (US 12165663 B2) discloses a machine learning model that takes audio data as input and determines a loss function based on ground truth characteristics; Gfeller was filed within the grace period and is cited for its relevancy to the invention as a whole.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN VO whose telephone number is (571)272-9622. The examiner can normally be reached Monday - Friday from 7-3 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.V./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148