DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This communication is in response to the Applicant’s submission filed 11 January 2023, where:
Claims 1-10 are pending.
Claims 1-10 are rejected.
Foreign priority is claimed to DE 10 2022 200 547.3, filed 18 January 2022. A certified copy of this paper was filed on 21 April 2023. Accordingly, receipt is acknowledged of the certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
3. An information disclosure statement was submitted on 22 February 2023. The submission complies with the provisions of 37 CFR 1.97. Accordingly, the Examiner considered the information disclosure statement.
Claim Objections
4. Claims 1, 2, 3, and 10 are objected to because of the following informalities:
Claim 1, line 1, recites “fusion (y);” where the variable “y” is not defined in the claim body.
Claim 2, line 2, recites “p(y|x),” where the variable “x” is not defined in the claim body.
Claim 3, line 2, recites “p(y|x),” where the variable “x” is not defined in the claim body.
Claim 10, line 2, recites “fusion (y);” where the variable “y” is not defined in the claim body.
Appropriate correction is required.
Claim Rejections - 35 U.S.C. § 112
5. The following is a quotation of 35 U.S.C. § 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
6. Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the Applicant) regards as the invention.
Claim 9, lines 4-5, recites “the other predictions of the plurality of predictions;” there is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 U.S.C. § 101
7. 35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
8. Claims 1-10 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites a computer-implemented method, which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(a)] ascertaining the fusion (y) based on a product of probabilities of the respective classifications and/or regression results and based on an a priori probability of the fusion (y).” The activity of “[(a)] ascertaining the fusion (y)” is a limitation that can practically be performed in the human mind, including, for example, by observations, evaluations, judgments, and opinions, and accordingly is a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The limitation also recites mathematical relationships, mathematical formulas or equations, and mathematical calculations, and accordingly is a mathematical concept, (MPEP § 2106.04(a)(2) sub I), which is also one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim adds further detail to the abstract idea of “[(a)] ascertaining the fusion (y),” in that “[(a.1)] the a priori probability for ascertaining the fusion being raised to a power, an exponent of the power being a number (N) of elements of the plurality of predictions minus 1,” and this detail is merely more specific to the abstract idea. Accordingly, claim 1 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “computer-implemented method,” which is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application. Accordingly, claim 1 is directed to an abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include a “computer-implemented method,” which is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Therefore, claim 1 is subject-matter ineligible.
Claim 2 depends from claim 1. The claim recites more details or specifics to the abstract idea of “[(a)] ascertaining the fusion (y),” “[(a.1.1)] wherein the fusion (y) is ascertained based on an equation
(image: media_image1.png)
where p(y|xi) is an i-th element of the plurality of predictions and p(y) is the a priori probability,” and accordingly, is merely more specific to the abstract idea. Therefore, claim 2 is subject-matter ineligible.
Claim 3 depends from claim 1. The claim recites more details or specifics to the abstract idea of “[(a)] ascertaining the fusion (y),” “[(a.1.1)] wherein the fusion (y) being ascertained based on an equation
(image: media_image2.png)
where p(y|xi) is an i-th element of the plurality of predictions and p(y) is the a priori probability,” and accordingly, is merely more specific to the abstract idea. Therefore, claim 3 is subject-matter ineligible.
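[Editorial note: the equations of claims 2 and 3 are reproduced only as images. The surrounding claim language — a product of the predictions p(y|xi), with the a priori probability p(y) raised to the power N − 1 — is consistent with a fusion of the form p(y|x1, …, xN) ∝ Πi p(y|xi) / p(y)^(N−1). A minimal Python sketch of that form follows, offered solely as an illustration of the recited mathematics; the function name and the final normalization step are assumptions, not taken from the claims.]

```python
import numpy as np

def fuse(posteriors, prior):
    """Fuse N per-sensor class posteriors p(y|x_i) into one distribution.

    Multiplying the N posteriors counts the prior p(y) N times, so the
    prior is divided out N - 1 times, matching the claimed exponent.
    """
    posteriors = np.asarray(posteriors, dtype=float)  # shape (N, num_classes)
    prior = np.asarray(prior, dtype=float)            # shape (num_classes,)
    n = posteriors.shape[0]
    unnormalized = posteriors.prod(axis=0) / prior ** (n - 1)
    return unnormalized / unnormalized.sum()          # normalize to sum to 1
```

With N = 1 the exponent is zero and the single posterior is returned unchanged, which is one way to see that the construction merely removes the duplicated prior.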
Claims 4 and 5 depend from claim 1. The claims recite more details or specifics of the abstract idea of “[(a)] ascertaining the fusion (y),” (claim 4: “[(a.1.1)] wherein the a priori probability of the fusion (y) is ascertained based on a relative frequency with respect to a training data set;” and claim 5: “[(a.1.1)] wherein the a priori probability is ascertained using a model, the model being ascertained based on a training data set”), and accordingly are merely more specific to the abstract idea. The plain meaning of a “model” is that of a predictive model involving statistical algorithms and machine learning techniques to forecast future outcomes. The broadest reasonable interpretation of the term “model” is a statistical model reliant on mathematical concepts such as a Gaussian mixture distribution, maximum likelihood estimation, probability densities, etc., which is not inconsistent with the Applicant’s disclosure. (MPEP § 2111; see Specification at p. 4, line 24 through p. 5, line 14). Thus, claims 4 and 5 are subject-matter ineligible.
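[Editorial note: the relative-frequency prior of claim 4 amounts to a simple count over training labels. The following Python sketch is an illustration only; the function and variable names are hypothetical.]

```python
from collections import Counter

def empirical_prior(train_labels):
    """Estimate the a priori probability p(y) of each class as its
    relative frequency in a training data set."""
    counts = Counter(train_labels)
    total = len(train_labels)
    return {label: count / total for label, count in counts.items()}
```

For example, a training set with three “car” labels and one “pedestrian” label yields priors of 0.75 and 0.25, respectively.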
Claim 6 depends directly or indirectly from claim 1. The claim recites more details or specifics of the abstract idea of “[(a)] ascertaining the fusion (y),” “[(a.1.2)] wherein the i-th element of the plurality of predictions being ascertained by a machine learning system,” and accordingly, is merely more specific to the abstract idea. Also, under Step 2A Prong Two and Step 2B, the “machine learning system” is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea. Thus, claim 6 is subject-matter ineligible.
Claim 7 depends directly or indirectly from claim 1. The claim recites more details or specifics of the additional element “a machine learning system,” “[(a.1.2.1)] wherein the machine learning system includes a neural network,” and accordingly, is merely more specific to the abstract idea. Thus, claim 7 is subject-matter ineligible.
Claim 8 depends directly or indirectly from claim 1. The claim recites more details or specifics to the additional element of the “machine learning system,” “[(a.1.2.1)] wherein the predictions are each ascertained by a different machine learning system and each machine learning system ascertains a prediction for only one sensor signal,” and accordingly, is merely more specific to the abstract idea. Thus, claim 8 is subject-matter ineligible.
Claim 9 depends directly or indirectly from claim 1. The claim recites more details or specifics to the abstract idea of “[(a)] ascertaining the fusion (y),” “[(a.1.1)] wherein a prediction of the plurality of predictions is left out of account for ascertaining the fusion when the prediction deviates by greater than a predefined threshold value from the other predictions of the plurality of predictions,” and accordingly, is merely more specific to the abstract idea. Thus, claim 9 is subject-matter ineligible.
Claim 10 recites a non-transitory machine-readable storage medium, which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(a)] ascertaining the fusion (y) based on a product of probabilities of the respective classifications and/or regression results and based on an a priori probability of the fusion (y).” The activity of “[(a)] ascertaining the fusion (y)” is a limitation that can practically be performed in the human mind, including, for example, by observations, evaluations, judgments, and opinions, and accordingly is a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The limitation also recites mathematical relationships, mathematical formulas or equations, and mathematical calculations, and accordingly is a mathematical concept, (MPEP § 2106.04(a)(2) sub I), which is also one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim adds further detail to the abstract idea of “[(a)] ascertaining the fusion (y),” in that “[(a.1)] the a priori probability for ascertaining the fusion being raised to a power, an exponent of the power being a number (N) of elements of the plurality of predictions minus 1,” and this detail is merely more specific to the abstract idea. Accordingly, claim 10 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “non-transitory machine-readable storage medium on which is stored a computer program,” and a “processor,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not serve to integrate the abstract idea into a practical application. Accordingly, claim 10 is directed to an abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include a “non-transitory machine-readable storage medium on which is stored a computer program,” and a “processor,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea. Therefore, claim 10 is subject-matter ineligible.
Claim Rejections – 35 U.S.C. § 103
9. The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
11. Claims 1, 5, 9, and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over US Published Application 20210406560 to Park et al. [hereinafter Park] in view of Hugh Durrant-Whyte, "Multi Sensor Data Fusion," University of Sydney (2001) [hereinafter Durrant].
Regarding claims 1 and 10, Park teaches [a] computer-implemented method for ascertaining a fusion (y) of a plurality of predictions, each prediction of the plurality of predictions characterizing a respective classification and/or a regression result relating to a sensor signal (Park ¶ 0053 teaches a “fusion DNN 120 may generate the fused output 122 [(that is, ascertaining a fusion (y) of a plurality of predictions)] using the outputs of one or more layers of the individual DNN(s) [(that is, each prediction of the plurality of predictions characterizing a respective classification . . . relating to a sensor signal)], the 3D signals 104, 108, 112, etc., the location prior image(s) 114, the velocity image(s) 116, and/or the instance appearance image(s) 118 [(that is, a sensor signal)]”) of claim 1, and [a] non-transitory machine-readable storage medium on which is stored a computer program (Park ¶ 0194 teaches “computer-storage media may include both volatile and nonvolatile media and/or removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions [(that is, a non-transitory machine-readable storage medium on which is stored a computer program)]”) for ascertaining a fusion (y) of a plurality of predictions, each prediction of the plurality of predictions characterizing a respective classification and/or a regression result relating to a sensor signal (see above, Park ¶ 0053), the computer program, when executed by a processor, causing the processor to perform (Park ¶ 0196 teaches “The CPU(s) 906 may be configured to execute at least some of the computer-readable instructions to control one or more components of the computing device 900 to perform one or more of the methods and/or processes described herein”) of claim 10, the method comprising:
[(a)] ascertaining the fusion (y) based on a product of probabilities of the respective classifications and/or regression results (Park, Fig. 1A, teaches multi-sensor fusion [Examiner annotations in dashed-line text boxes]:
(annotated image: media_image3.png)
Park ¶ 0027 teaches “sensor data may be used to compute outputs internal to the DNNs themselves, such as feature outputs (F1-Fn) 102A-102N for DNNs [(that is, “DNN outputs” are based on a product of probabilities of the respective classifications)] that use camera signals, RADAR feature outputs (FRADAR) 106 for RADAR sensors, ultrasonic feature outputs (FUSS) 110 for ultrasonic sensors, LiDAR feature outputs (FLiDAR) for LiDAR sensors, and/or other feature outputs for other sensor types”; Park ¶ 0069 teaches “the fusion DNN 120 may compute the fused output 122 using the first 3D signal, the second 3D signal, and/or one or more other 3D signals. In some embodiments, as described herein”)
Park ¶ 0043 teaches the “fusion DNN 120 and/or one or more of the DNNs or machine learning models used to generate the 3D signal(s) may include, for example, and without limitation, any type of machine learning model, such as a machine learning model(s) using . . . Naïve Bayes, . . . and/or other types of machine learning models) . . . ,
Though Park teaches a fusion DNN to produce a fused output, Park, however, does not explicitly teach -
[(a) ascertaining the fusion (y)] . . . and based on an a priori probability of the fusion (y),
[(a.1)] the a priori probability for ascertaining the fusion being raised to a power, an exponent of the power being a number (N) of elements of the plurality of predictions minus 1.
But Durrant teaches –
[(a) ascertaining the fusion (y)] . . . and based on an a priori probability of the fusion (y),
[(a.1)] the a priori probability for ascertaining the fusion being raised to a power, an exponent of the power being a number (N) of elements of the plurality of predictions minus 1 (Durrant at p. 15, “2.2.2 Data Fusion using Bayes Theorem,” second paragraph & equation 19, teaches “Equation 19 is known as the independent likelihood pool [8]. [Equation 19 reads:
(image: media_image4.png)]
In practice, the conditional probabilities P(zi | x) are stored a priori as functions of both zi and x [(that is, a priori probability of the fusion (y))]. When an observation sequence Zn = {z1, z2, . . . , zn} is made, the observed values are instantiated in this probability distribution and likelihood functions Λi(x) are constructed, which are functions only of the unknown state x. The product of these likelihood functions with the prior information P(x), appropriately normalised, provides a posterior distribution P(x | Zn), which is a function of x only for a specific observation sequence {z1, z2, . . . , zn}”).
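[Editorial note: Durrant's independent likelihood pool combines the prior P(x) with the product of per-sensor likelihoods. The short Python sketch below (names assumed, for illustration only) shows that, after normalization, this form yields the same distribution as the product of per-sensor posteriors divided by the prior raised to the power N − 1, i.e., the form recited in the claims.]

```python
import numpy as np

def likelihood_pool(likelihoods, prior):
    """Independent likelihood pool: posterior proportional to
    P(x) * prod_i Lambda_i(x), then normalized."""
    post = np.asarray(prior, float) * np.prod(np.asarray(likelihoods, float), axis=0)
    return post / post.sum()

def posterior_product(posteriors, prior):
    """Equivalent claimed form: prod_i p(y|x_i) / p(y)^(N-1), normalized."""
    posteriors = np.asarray(posteriors, float)
    prior = np.asarray(prior, float)
    un = posteriors.prod(axis=0) / prior ** (posteriors.shape[0] - 1)
    return un / un.sum()
```

The equivalence follows from Bayes' rule: each per-sensor posterior p(y|xi) is proportional to p(y) times the likelihood, so the per-sensor normalization constants cancel under the final normalization.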
Park and Durrant are from the same or similar field of endeavor. Park teaches a deep neural network (DNN) deployed to fuse data from a plurality of individual machine learning models in which machine learning models use Naïve Bayes machine learning models. Durrant teaches data fusion using Bayes theorem for combining information from a number of sensors.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Park, which pertains to a deep neural network that may use Naïve Bayes machine learning models for data fusion, with the data fusion using Bayes theorem of Durrant.
The motivation to do so is because “[d]ata fusion is the process of combining information from a number of different sources to provide a robust and complete description of an environment or process of interest. Data fusion is of special significance in any application where large amounts of data must be combined, fused and distilled to obtain information of appropriate quality and integrity on which decisions can be made.” (Durrant at p. 4, “1. Introduction,” first paragraph).
Regarding claim 5, the combination of Park and Durrant teaches all of the limitations of claim 1, as described above in detail.
Park teaches -
[(a.1.1)] wherein the a priori probability is ascertained using a model, the model being ascertained based on a training data set (Park ¶ 0078 teaches “a method 700 for training a multi-sensor fusion network to compute a fused output using a plurality of input channels”; Park ¶ 0185 teaches “server(s) 878 may be used to train machine learning models (e.g., neural networks) based on training data. The training data may be generated by the vehicles, and/or may be generated in a simulation (e.g., using a game engine) [(that is, the model being ascertained based on a training data set)]”).
Regarding claim 9, the combination of Park and Durrant teaches all of the limitations of claim 1, as described above in detail.
Park teaches -
[(a.1.1)] wherein a prediction of the plurality of predictions is left out of account for ascertaining the fusion when the prediction deviates by greater than a predefined threshold value from the other predictions of the plurality of predictions (Park ¶ 0128 teaches a “confidence value enables the system to make further decisions regarding which detections should be considered [(that is, a prediction of the plurality of predictions is left out of account for ascertaining the fusion)] as true positive detections rather than false positive detections [(that is, “false positive predictions” are the other predictions of the plurality of predictions)]. For example, the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections [(that is, when the prediction deviates by greater than a predefined threshold value from the other predictions of the plurality of predictions)]”).
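[Editorial note: claim 9's exclusion rule — leaving a prediction out of account when it deviates from the other predictions by more than a predefined threshold — can be sketched as follows. This is an illustration only; measuring deviation against the median of the remaining predictions is one of several possible readings of "deviates . . . from the other predictions," chosen here because the median is not skewed by the outlier being tested.]

```python
import numpy as np

def filter_predictions(predictions, threshold):
    """Leave a prediction out of account for the fusion when it deviates
    by more than `threshold` from the other predictions (here measured
    against the median of the remaining predictions)."""
    predictions = np.asarray(predictions, dtype=float)
    kept = []
    for i, p in enumerate(predictions):
        others = np.delete(predictions, i)  # all predictions except the i-th
        if abs(p - np.median(others)) <= threshold:
            kept.append(float(p))
    return kept
```

For example, with predictions [0.50, 0.52, 0.51, 5.0] and a threshold of 1.0, the outlying value 5.0 is excluded and the three mutually consistent predictions are retained.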
12. Claim 4 is rejected under 35 U.S.C. § 103 as being unpatentable over US Published Application 20210406560 to Park et al. [hereinafter Park] in view of Hugh Durrant-Whyte, "Multi Sensor Data Fusion," University of Sydney (2001) [hereinafter Durrant] and Wong et al., “Sparse Bayesian extreme learning committee machine for engine simultaneous fault diagnosis,” Neurocomputing (2016) [hereinafter Wong].
Regarding claim 4, the combination of Park and Durrant teaches all of the limitations of claim 1, as described above in detail.
Though Park and Durrant teach the features of using training data to train the deep neural networks for data fusion, the combination of Park and Durrant, however, does not explicitly teach -
[(a.1.1)] wherein the a priori probability of the fusion (y) is ascertained based on a relative frequency with respect to a training data set.
But Wong teaches –
[(a.1.1)] wherein the a priori probability of the fusion (y) is ascertained based on a relative frequency with respect to a training data set (Wong, right column of p. 332, “1. Introduction,” first partial paragraph, teaches “a high dimensional input will decrease the accuracy of the fault classifier, so a proper feature selection algorithm is proposed to reduce the input dimension of the fault classifier. An effective statistical algorithm called Sample Entropy (SampEn) has recently been introduced to provide one statistical feature to describe the signal regularity [(that is, “signal regularity” is based on a relative frequency)] in different [intrinsic mode functions (IMFs)] [20,21] [(that is, “Sample Entropy” is based on a relative frequency with respect to a training data set)]”).
Park, Durrant, and Wong are from the same or similar field of endeavor. Park teaches a deep neural network (DNN) deployed to fuse data from a plurality of individual machine learning models in which machine learning models use Naïve Bayes machine learning models. Durrant teaches data fusion using Bayes theorem for combining information from a number of sensors. Wong teaches diagnosis of sensor signals is a multi-signal fusion problem.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Park and Durrant, which pertains to a deep neural network using Naïve Bayes machine learning models to implement data fusion using Bayes theorem, with the sample entropy of Wong.
The motivation to do so is because “[sample entropy (SampEn)] is less sensitive to noise and suitable for short-length time series data, which can fit the characteristics and patterns of engine signals in this application. These properties make SampEn an appealing tool for selecting one feature from each [intrinsic mode functions (IMFs)]. Hence, the input dimension of the fault classifier can be reduced. By reviewing the open literature, it is an original idea to apply [empirical mode decomposition (EMD)] together with SampEn to extract representative features from simultaneous-fault signal patterns of engines.” (Wong, left column of p. 332, “1. Introduction,” first partial paragraph).
Conclusion
13. The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
(Salazar et al., "Late Fusion for Improving Intrusion Detection in a Network Traffic Dataset," IEEE (2021)) teaches a method for an intrusion detection system based on late fusion of classifiers. The proposed method was tested in the analysis of a network traffic dataset considering up to 14 classes of anomalous traffic.
(US Published Application 20060082490 to Chen et al.) teaches multi-sensor data fusion system capable of adaptively weighting the contributions from each one of a plurality of sensors using a plurality of data fusion methods.
(US Published Application 20150363706 to Huber et al.) teaches multi-sensor data fusion in a distributed sensor environment for object identification and classification. Embodiments of the invention are sensor-agnostic and capable of handling a large number of sensors of different types via a gateway which transmits sensor measurements to a fusion engine according to predefined rules.
14. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH whose telephone number is (571) 272-5964. Normally, the Examiner is available on Monday-Thursday 0730-1730.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.L.S./
Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122