DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Amendments
This Office Action is in response to the amendment filed on December 16, 2025.
Claims 1, 8-9, 15, and 20 have been amended.
Claim 18 has been cancelled.
Claim 21 has been added.
The objections and rejections from the prior correspondence that are not restated herein are withdrawn.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on September 22, 2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's arguments filed on December 16, 2025 have been fully considered.
Applicant's arguments regarding the Drawings Objections have been fully considered but are not persuasive. Applicant argues that the drawings have been amended to address the drawing objections.
The examiner respectfully disagrees. In the previous Office Action, the examiner indicated that various reference characters in the drawings are not mentioned in the specification. For example, specification paragraph [0051], in reference to Fig. 5, recites computer-controlled machine 10, control system 12, sensor 16, actuator 14, classifier 24, conversion unit 28, memory 32, processor 30, non-volatile storage 26, and receiving unit 22. However, Fig. 5 on page 6 of the drawings recites different reference characters for these same components. Specifically, Fig. 5 recites computer-controlled machine 500, control system 502, sensor 506, actuator 504, classifier 514, conversion unit 518, memory 522, processor 520, non-volatile storage 516, and receiving unit 512. The newly submitted drawings do not overcome the objection because the reference characters in Fig. 5 still do not appear in, or match, the reference characters in the specification for these components. Additionally, Figures 6-11 have issues similar to the one described above for Fig. 5 because they recite various combinations of these reference characters that are not mentioned in the specification. Therefore, the drawing objections are maintained.
Applicant’s arguments regarding the 35 U.S.C. § 112(b) rejections have been fully considered but are not persuasive. Applicant argues that the claims have been amended to facilitate prosecution and that the rejection is now moot, and the claims are in condition for allowance.
The examiner respectfully disagrees. In the previous Office Action, the examiner indicated that claim 1 recites “the classifier”, which lacks antecedent basis. It remains unclear if "the classifier" is referring to a configuration or component relating to the hyper model’s configuration for classifying or a completely different configuration, component, or different model. A term such as “a classifier” must be introduced earlier in the claims prior to reciting “the classifier”. Therefore, the 35 U.S.C. § 112(b) rejections are maintained.
Applicant’s arguments regarding the 35 U.S.C. § 101 rejections have been fully considered but are not persuasive. Applicant argues that the rejection is now moot in view of the claim amendments. Applicant argues that the 101 rejection is improper under the USPTO's most recent subject-matter-eligibility guidance issued under Director Squires. Applicant argues that the instant application discloses machine-learning improvements that constitute technological improvements to computer functionality. Applicant argues that the examiner’s characterization of certain recited operations, such as generating a frequency spectrum, normalizing data, classifying corruption types, or updating model weights, as mathematical concepts or mental processes is inconsistent with the claim as a whole. Applicant argues that the specification explains that the system operates on real-world sensor signals (e.g., radar, sonar, cameras) and adapts the classifier's operational parameters based on real-time corruption conditions, which are capabilities that cannot be performed mentally and that materially improve classifier robustness in safety-critical applications such as autonomous driving or robotics.
The examiner respectfully disagrees. The specification and claims do not provide a clear description of how classifying corruptions and updating the BN statistics and weights of the classifier provide a technological improvement in safety-critical applications such as autonomous driving or robotics, and thus do not impose meaningful limits on the judicial exception. The claims as a whole are directed to dynamically updating a classifier’s operational parameters (i.e., weights and BN statistics) in response to corruptions, which is an improvement of the abstract idea itself and not an improvement in computer functionality or a technological improvement in machine learning systems. According to MPEP § 2106.05(a), the judicial exception alone cannot provide the improvement; the improvement must be provided by one or more additional elements. Specifically, amended independent claims 1, 9, and 15 recite the abstract idea of:
generating/generate a frequency spectrum associated with the input data, wherein the generating includes creating the frequency spectrum by applying a frequency domain transformation on the input data; (Mathematical concept – generating a frequency spectrum by applying a frequency domain transformation on the input data involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
normalizing/normalize the frequency spectrum to generate a normalized frequency spectrum; (Mathematical concept – normalizing the frequency spectrum involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
(Claims 1 and 9) […] classify corruptions utilizing at least a Fourier transform of the input data; (Mathematical concepts – classifying corruptions utilizing a Fourier transform involves mathematical calculations (see paragraph [0049]) – see MPEP § 2106.04(a)(2)(I))
(Claim 15) classify, utilizing at least a Fourier transform of the input data, a corruption associated with the input data based on an output of the hyper model; (Mathematical concepts – classifying a corruption utilizing a Fourier transform involves mathematical calculations (see paragraph [0049]) – see MPEP § 2106.04(a)(2)(I))
updating/update one or more weights associated with the classifier based on the corruption associated with the input data; (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
(Claims 1 and 9) in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network; (Mathematical concepts – updating BN statistics in response to the corruption and corresponding BN statistic involves mathematical calculations (see paragraphs [0024] and [0049]) – see MPEP § 2106.04(a)(2)(I))
If claim limitations, under their broadest reasonable interpretation, involve mathematical calculations or cover performance of the limitations in the mind but for the recitation of generic computer components, then the claim limitations fall within the mathematical concepts or mental processes groupings of abstract ideas. Accordingly, the claim “recites” an abstract idea.
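As a purely illustrative sketch of the mathematical character of the "generating a frequency spectrum" and "normalizing" limitations (hypothetical function names and values; not the claimed implementation), these operations reduce to ordinary calculations:

```python
import cmath

def frequency_spectrum(x):
    # Naive discrete Fourier transform: magnitude of each frequency bin.
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def normalize(spectrum):
    # One common normalization choice: scale entries to sum to 1.
    total = sum(spectrum) or 1.0
    return [s / total for s in spectrum]

# One period of a discrete sine: energy concentrates in bins 1 and 3.
signal = [0.0, 1.0, 0.0, -1.0]
spec = normalize(frequency_spectrum(signal))
```

Each step is a closed-form arithmetic computation over the input values, which is the basis for treating these limitations as mathematical concepts.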
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
(Claim 1) receiving an input data from a sensor, wherein the input data is indicative of image information, radar information, sonar information, or sound information; (Adding insignificant extra-solution activity of mere data gathering to the judicial exception – see MPEP § 2106.05(g).)
(Claim 9) an input interface configured to […] (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
(Claims 9 and 15) receive input data from a sensor, wherein the sensor includes a camera, a radar, a sonar, or a microphone; (Adding insignificant extra-solution activity of mere data gathering to the judicial exception – see MPEP § 2106.05(g).)
(Claim 9) a processor in communication with the input interface, (Mere recitation of a generic computer component – see MPEP § 2106.05(b)(I))
(Claim 1) sending […]
(Claim 9) send […]
(Claim 15) inputting […]
[…] the normalized frequency spectrum to a hyper model configured to […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP § 2106.05(g).)
utilizing the normalized frequency spectrum as input to the hyper model […] (Adding insignificant extra-solution activity of mere data gathering to the judicial exception – see MPEP § 2106.05(g).)
outputting/output a classification associated with the input data […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP § 2106.05(g).)
[…] utilizing the classifier with updated weights. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
(Claim 1) receiving an input data from a sensor, wherein the input data is indicative of image information, radar information, sonar information, or sound information; (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
(Claim 9) an input interface configured to […] (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
(Claims 9 and 15) receive input data from a sensor, wherein the sensor includes a camera, a radar, a sonar, or a microphone; (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
(Claim 9) a processor in communication with the input interface, (Mere recitation of a generic computer component – see MPEP § 2106.05(b)(I))
(Claim 1) sending […]
(Claim 9) send […]
(Claim 15) inputting […]
[…] the normalized frequency spectrum to a hyper model configured to […] (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
utilizing the normalized frequency spectrum as input to the hyper model […] (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
outputting/output a classification associated with the input data […] (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
[…] utilizing the classifier with updated weights. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
The dependent claims either recite further descriptions of the abstract idea, and/or additional elements that individually or in combination do not integrate the abstract idea into a practical application or amount to significantly more than the judicial exception, as shown in the detailed 35 U.S.C. § 101 rejections below.
Applicant’s arguments regarding the 35 U.S.C. § 103 rejections have been fully considered but are not persuasive. Applicant argues that the prior art combination fails to disclose the following amended claim 1 limitations:
sending the normalized frequency spectrum to a hyper model configured to classify corruptions utilizing at least a Fourier transform of the input data;
in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network;
The examiner respectfully disagrees. The prior art of record does teach the amended limitations of claims 1, 9, and 15, as shown in detail below in the 35 U.S.C. § 103 rejections. More specifically, the prior art of record teaches:
sending the normalized frequency spectrum to a hyper model configured to classify corruptions utilizing at least a Fourier transform of the input data; (BUNAZAWA [0130] teaches: "The distribution of a frequency component obtained by subjecting the time-series data of the gear rotation speed Ngear to fast Fourier transform may be normalized, and the normalized feature quantity may be used as input (i.e., sending) variables fed to the map. […] the normalized feature quantity may be used as input variables fed to the map." BUNAZAWA [0138] teaches: "The input variables fed to the map defined by the map data DM may include the sound NZ." BUNAZAWA [0086] teaches: "Specifically, a fully-connected feed-forward neural network having a single middle layer is used as the map." BUNAZAWA [0145] teaches: "The neural network is not limited to a fully-connected feed-forward neural network. For example, a one-dimensional convolutional neural network may be used. […] For example, the state of the gear may be identified using classification by a support vector machine.” BUNAZAWA [0146] teaches: "In the process of S105, the number of middle layers in the neural network is one. However, the number of middle layers may be two or more.” BUNAZAWA [0040] teaches: “The above-described configuration is capable of determining whether the gear is in a damaged state. It is thus possible to detect anomalies (i.e., to classify corruptions) of causes in different categories.” Examiner's note: Paragraph [0025] of the specification states that “a shallow 3-layer fully connected neural network can identify 16 corruption types” and paragraph [0026] states “The hyper model 305 may identify the type of input corruption, e.g., motion blur or Gaussian noise”, and thus the hyper model can be defined as a 3-layer fully connected network, per the specification.
Under the broadest reasonable interpretation, “a hyper model” can be interpreted as BUNAZAWA's map data DM model, which is a fully-connected feed-forward neural network that can have a single middle layer, or two or more middle layers. Furthermore, BUNAZAWA [0130] teaches that the distribution of a frequency component is obtained by subjecting the time-series data to a fast Fourier transform and normalizing the result (i.e., a normalized frequency spectrum). The normalized data is then used as input (i.e., sending) for the map data DM model. The map data DM model is capable of determining gear state anomalies using classification (i.e., is configured to classify corruptions), for example by a support vector machine, with the time-series data used as input after normalization following the fast Fourier transform.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN and BUNAZAWA before them, to include BUNAZAWA's normalized Fourier transform and map in SHEN's training method against image corruption. One would have been motivated to make such a combination in order to improve the accuracy of determination as to whether there is an anomaly (i.e., corruption) in the transmission based on various input variables such as vibration or sounds from sensors (i.e., input data) (BUNAZAWA [0010], [0031], and [0033]).
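For illustration of the mapped arrangement (a normalized spectrum fed to a small fully connected network whose output identifies a corruption class, cf. specification paragraphs [0025]-[0026]), the following sketch uses hypothetical placeholder weights and layer sizes; it is not drawn from the application or the cited art:

```python
def dense(x, weights, bias):
    # Fully connected layer: y[i] = sum_j W[i][j] * x[j] + b[i].
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(v):
    return [max(0.0, z) for z in v]

def hyper_model(norm_spectrum, params):
    # Tiny fully connected network: normalized spectrum in,
    # index of the predicted corruption class out.
    hidden = relu(dense(norm_spectrum, params["w1"], params["b1"]))
    scores = dense(hidden, params["w2"], params["b2"])
    return scores.index(max(scores))

# Placeholder identity weights for a two-bin spectrum and two classes.
params = {"w1": [[1.0, 0.0], [0.0, 1.0]], "b1": [0.0, 0.0],
          "w2": [[1.0, 0.0], [0.0, 1.0]], "b2": [0.0, 0.0]}
predicted_class = hyper_model([0.2, 0.8], params)
```

The sketch shows only the structural point at issue: the normalized frequency-domain feature serves directly as the network input, and the network output designates a corruption category.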
in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network; (BENZ [pg. 1, Abstract] teaches: "We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures. For example, on ImageNet-C, statistics adaptation improves the top1 accuracy of ResNet50 (i.e., machine-learning network) from 39.2% to 48.7%." BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly." Examiner’s note: Under BRI, “in response to the corruption updating one or more network BN statistics” can be interpreted as Figure 1, which describes that when the model outputs an incorrect classification such as predicting that a “German Shepherd” is a “Beaver”, the BN statistics are rectified (i.e., updating) in order to output a corrected classification.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, and BENZ before them, to include BENZ's estimating and adapting of BN statistics in SHEN/BUNAZAWA's training method against image corruption. One would have been motivated to make such a combination in order to improve model robustness under corruptions (BENZ [pg. 4, section 3.3. Motivation for rectifying batch normalization]).
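The BN-statistics adaptation BENZ describes (estimating mean and variance from a small batch of corrupted samples and normalizing with those statistics, without retraining) can be sketched as follows; the function names and data values are illustrative only, not taken from BENZ:

```python
def bn_adapt(batch):
    # Estimate per-feature mean and variance from a small batch of
    # (possibly corrupted) samples, as in test-time statistics adaptation.
    n = len(batch)
    dims = len(batch[0])
    mean = [sum(x[d] for x in batch) / n for d in range(dims)]
    var = [sum((x[d] - mean[d]) ** 2 for x in batch) / n
           for d in range(dims)]
    return mean, var

def bn_apply(x, mean, var, eps=1e-5):
    # Normalize one sample with the adapted statistics.
    return [(x[d] - mean[d]) / (var[d] + eps) ** 0.5 for d in range(len(x))]

corrupted_batch = [[1.0, 2.0], [3.0, 4.0]]   # illustrative samples
mean, var = bn_adapt(corrupted_batch)
normalized = bn_apply([2.0, 3.0], mean, var)
```

The salient point is that the model's stored BN statistics are replaced by statistics estimated from the corrupted inputs themselves; no weight retraining is involved.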
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: SENSOR 506, ACTUATOR 504, CONTROL SYSTEM 502, RECEIVING UNIT 512, CONVERSION UNIT 518, CLASSIFIER 514, PROCESSOR 520, NON-VOLATILE STORAGE 516, and MEMORY 522. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-17 and 19-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 1:
The claim recites "the classifier". However, the term lacks antecedent basis and it is unclear if "the classifier" is referring to a configuration or component relating to the hyper model’s configuration for classifying or a completely different configuration or component. Therefore, the claim is rendered indefinite under 35 U.S.C. 112(b). For purposes of examination, the examiner will construe "the classifier" as a separate classifier from the hyper model's classifying configuration.
Regarding Claims 2-8, the dependent claims inherit the deficiencies of their respective parent claims and are likewise rejected.
Regarding Claim 9, the claim recites a similar limitation as claim 1 and is rejected under 35 U.S.C. 112(b) as indefinite using a similar rationale as for claim 1 above.
Regarding Claims 10-11, the dependent claims inherit the deficiencies of their respective parent claims and are likewise rejected.
Regarding Claim 12, the claim recites a similar limitation as claim 3 and is rejected under 35 U.S.C. 112(b) as indefinite using a similar rationale as for claim 3 above.
Regarding Claims 13-14, the dependent claims inherit the deficiencies of their respective parent claims and are likewise rejected.
Regarding Claim 15, the claim recites similar limitations as claims 1 and 9 and is rejected under 35 U.S.C. 112(b) as indefinite using a similar rationale as for claims 1 and 9 above.
Regarding Claims 16-17 and 19-21, the dependent claims inherit the deficiencies of their respective parent claims and are likewise rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 and 19-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-8 and 21 are directed to a process. Claims 9-17 and 19-20 are directed to a machine or an article of manufacture.
With respect to claims 1, 9, and 15:
2A Prong 1: The claim recites an abstract idea. Specifically:
generating/generate a frequency spectrum associated with the input data, wherein the generating includes creating the frequency spectrum by applying a frequency domain transformation on the input data; (Mathematical concept – generating a frequency spectrum by applying a frequency domain transformation on the input data involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
normalizing/normalize the frequency spectrum to generate a normalized frequency spectrum; (Mathematical concept – normalizing the frequency spectrum involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
(Claims 1 and 9) […] classify corruptions utilizing at least a Fourier transform of the input data; (Mathematical concepts – Classifying corruptions utilizing a Fourier transform involves mathematical calculations (see paragraph [0049]) – see MPEP § 2106.04(a)(2)(I))
(Claim 15) classify, utilizing at least a Fourier transform of the input data, a corruption associated with the input data based on an output of the hyper model; (Mathematical concepts – Classifying a corruption utilizing a Fourier transform involves mathematical calculations (see paragraph [0049]) – see MPEP § 2106.04(a)(2)(I))
updating/update one or more weights associated with the classifier based on the corruption associated with the input data; (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
(Claims 1 and 9) in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network; (Mathematical concepts – updating BN statistics in response to the corruption and corresponding BN statistic involves mathematical calculations (see paragraphs [0024] and [0049]) – see MPEP § 2106.04(a)(2)(I))
If claim limitations, under their broadest reasonable interpretation, involve mathematical calculations or cover performance of the limitations in the mind but for the recitation of generic computer components, then the claim limitations fall within the mathematical concepts or mental processes groupings of abstract ideas. Accordingly, the claim “recites” an abstract idea.
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
(Claim 1) receiving an input data from a sensor, wherein the input data is indicative of image information, radar information, sonar information, or sound information; (Adding insignificant extra-solution activity of mere data gathering to the judicial exception – see MPEP § 2106.05(g).)
(Claim 9) an input interface configured to […] (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
(Claims 9 and 15) receive input data from a sensor, wherein the sensor includes a camera, a radar, a sonar, or a microphone; (Adding insignificant extra-solution activity of mere data gathering to the judicial exception – see MPEP § 2106.05(g).)
(Claim 9) a processor in communication with the input interface, (Mere recitation of a generic computer component – see MPEP § 2106.05(b)(I))
(Claim 1) sending […]
(Claim 9) send […]
(Claim 15) inputting […]
[…] the normalized frequency spectrum to a hyper model configured to […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP § 2106.05(g).)
utilizing the normalized frequency spectrum as input to the hyper model […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP § 2106.05(g).)
outputting/output a classification associated with the input data […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP § 2106.05(g).)
[…] utilizing the classifier with updated weights. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
(Claim 1) receiving an input data from a sensor, wherein the input data is indicative of image information, radar information, sonar information, or sound information; (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
(Claim 9) an input interface configured to […] (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
(Claims 9 and 15) receive input data from a sensor, wherein the sensor includes a camera, a radar, a sonar, or a microphone; (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
(Claim 9) a processor in communication with the input interface, (Mere recitation of a generic computer component – see MPEP § 2106.05(b)(I))
(Claim 1) sending […]
(Claim 9) send […]
(Claim 15) inputting […]
[…] the normalized frequency spectrum to a hyper model configured to […] (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
utilizing the normalized frequency spectrum as input to the hyper model […] (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
outputting/output a classification associated with the input data […] (Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) – see MPEP § 2106.05(d)(II)(i) – Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).)
[…] utilizing the classifier with updated weights. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
With respect to claim 2:
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein generating the frequency spectrum is only associated with a first channel of the input data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein generating the frequency spectrum is only associated with a first channel of the input data (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 3:
2A Prong 1: The claim recites an abstract idea. Specifically:
wherein the frequency domain transformation on the input data includes utilizing a wavelength transform (Mathematical concept – utilizing a wavelength transform to perform the frequency domain transformation on the input data involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
Additionally, the claim does not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 4:
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the corruption includes Gaussian noise, shot noise, motion blur, zoom blur, compression, or brightness changes (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the corruption includes Gaussian noise, shot noise, motion blur, zoom blur, compression, or brightness changes (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 5:
2A Prong 1: The claim recites an abstract idea. Specifically:
wherein the frequency domain transformation on the input data utilizes a Fourier transform (Mathematical concept – utilizing a Fourier transform for the frequency domain transformation involves mathematical calculations– see MPEP § 2106.04(a)(2)(I))
Additionally, the claim does not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 6:
2A Prong 1: The claim recites an abstract idea. Specifically:
[…] classify a clean image (Mental process – classifying a clean image can be practically performed in the human mind, or by a human using a pen and paper as a physical aid – see MPEP § 2106.04(a)(2)(III))
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the hyper model is configured to (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the hyper model is configured to (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 7:
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the classifier is a pre-trained classifier (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the classifier is a pre-trained classifier (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 8:
2A Prong 1: The claim recites an abstract idea. Specifically:
updating the one or more weights (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
in response to utilizing a look-up table defining BN statistics associated with the corruption (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
in response to utilizing a look-up table defining BN statistics associated with the corruption (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 10:
2A Prong 1: The claim recites an abstract idea. Specifically:
update the one or more weights associated with the classifier […] (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
[…] or directly updating the one or more weights (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
utilizing a look-up table (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
utilizing a look-up table (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 11:
2A Prong 1: The claim recites an abstract idea. Specifically:
wherein the frequency spectrum includes a Fourier transform of the input data (Mathematical concept – the claim involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
Additionally, the claim does not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 12:
2A Prong 1: The claim recites an abstract idea. Specifically:
wherein the frequency spectrum includes a wavelength transform of the input data (Mathematical concept – the claim involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
Additionally, the claim does not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 13:
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the hyper model is a three-layer fully connected neural network. (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the hyper model is a three-layer fully connected neural network. (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 14:
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the three fully connected layers include a size of 1024 neurons, 512 neurons, and 16 neurons. (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the three fully connected layers include a size of 1024 neurons, 512 neurons, and 16 neurons. (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 16:
2A Prong 1: The claim recites an abstract idea. Specifically:
update one or more weights associated with the classifier based on a lookup table identifying information associated with the corruption (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
Additionally, the claim does not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 17:
2A Prong 1: The claim recites an abstract idea. Specifically:
update one or more weights associated with the classifier (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
Additionally, the claim does not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 19:
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the hyper model includes three layers (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the hyper model includes three layers (Generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP § 2106.05(h).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 20:
2A Prong 1: The claim recites an abstract idea. Specifically:
update one or more weights of the classifier (Mathematical concept – updating weights involves mathematical calculations – see MPEP § 2106.04(a)(2)(I))
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
utilizing a look-up table defining batch norm statistics associated with the corruption. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
utilizing a look-up table defining batch norm statistics associated with the corruption. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
With respect to claim 21:
2A Prong 2: The additional elements recited in the claim(s) do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:
wherein the hypermodel is pre-trained utilizing natural samples without any corruption. (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
2B: The claim(s) do(es) not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the hypermodel is pre-trained utilizing natural samples without any corruption. (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claims 15-17 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
With respect to claim 15:
The claim does not fall within at least one of the four categories of patent-eligible subject matter because the broadest reasonable interpretation of "A computer-program product storing instructions" encompasses software per se.
A claim whose BRI covers non-statutory embodiments embraces subject matter that is not eligible for patent protection and is therefore directed to non-statutory subject matter. See MPEP 2106.03(II). The computer-program product is not embodied on a medium for its functionality to be realized; therefore, the claim is directed to software per se. Accordingly, claim 15 fails to recite statutory subject matter under 35 U.S.C. 101.
It is suggested that claim 15 be amended to recite "A non-transitory computer-readable storage medium storing a computer-program product…" to overcome this rejection.
With respect to claims 16-17 and 19-20:
The dependent claims inherit the deficiencies of their respective parent claims and are likewise rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over SHEN (“Gradient-Free Adversarial Training Against Image Corruption for Learning-based Steering”) in view of BUNAZAWA (US 20220044502 A1) and BENZ (“Revisiting Batch Normalization for Improving Corruption Robustness”), hereafter SHEN, BUNAZAWA, and BENZ respectively.
Regarding Claim 1:
SHEN teaches:
A computer-implemented method for training a machine-learning network, comprising:
receiving an input data from a sensor, wherein the input data is indicative of image information, radar information, sonar information, or sound information; (SHEN [pg. 3, section 3 Background and Setup] teaches: "given a single image as input (e.g. captured by a front-facing camera on a self-driving car)," Examiner's note: SHEN explicitly teaches receiving input data from a sensor (i.e., front-facing camera), which is indicative of image information.)
generating a frequency spectrum associated with the input data, wherein the generating includes creating the frequency spectrum by applying a frequency domain transformation on the input data; (SHEN [pg. 2, Figure 1] teaches: "A frequency-space branch is added to the backbone when frequency-related perturbations (e.g., blur, noise) need to be handled." SHEN [pg. 19, A.5] teaches: "Formally, we do a standard 2-D Fourier Transform, and using the absolute value of each complex number and form another channel of the image,")
SHEN is not relied upon for teaching:
normalizing the frequency spectrum to generate a normalized frequency spectrum;
sending the normalized frequency spectrum to a hyper model configured to classify corruptions utilizing at least a Fourier transform of the input data;
utilizing the normalized frequency spectrum as input to the hyper model in order to classify a corruption associated with the input data;
updating one or more weights associated with the classifier based on the corruption associated with the input data; and
in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network; and
outputting a classification associated with the input data utilizing the classifier with updated weights.
However, BUNAZAWA teaches: normalizing the frequency spectrum to generate a normalized frequency spectrum; (BUNAZAWA [0130] teaches: "The distribution of a frequency component obtained by subjecting the time-series data of the gear rotation speed Ngear to fast Fourier transform may be normalized, and the normalized feature quantity may be used as input variables fed to the map.")
sending the normalized frequency spectrum to a hyper model configured to classify corruptions utilizing at least a Fourier transform of the input data; (BUNAZAWA [0130] teaches: "The distribution of a frequency component obtained by subjecting the time-series data of the gear rotation speed Ngear to fast Fourier transform may be normalized, and the normalized feature quantity may be used as input (i.e., sending) variables fed to the map." BUNAZAWA [0138] teaches: "The input variables fed to the map defined by the map data DM may include the sound NZ." BUNAZAWA [0086] teaches: "Specifically, a fully-connected feed-forward neural network having a single middle layer is used as the map." BUNAZAWA [0145] teaches: "The neural network is not limited to a fully-connected feed-forward neural network. For example, a one-dimensional convolutional neural network may be used. […] For example, the state of the gear may be identified using classification by a support vector machine.” BUNAZAWA [0146] teaches: "In the process of S105, the number of middle layers in the neural network is one. However, the number of middle layers may be two or more.” BUNAZAWA [0040] teaches: “The above-described configuration is capable of determining whether the gear is in a damaged state. It is thus possible to detect anomalies (i.e., to classify corruptions) of causes in different categories.” Examiner's note: Paragraph [0025] of the specification states that “a shallow 3-layer fully connected neural network can identify 16 corruption types” and paragraph [0026] states “The hyper model 305 may identify the type of input corruption, e.g., motion blur or Gaussian noise”, and thus the hyper model can be defined as a 3-layer fully connected network, per the specification.
Under the broadest reasonable interpretation, “a hyper model” can be interpreted as BUNAZAWA's map data DM model, which is a fully-connected feed-forward neural network that can have a single middle layer, or two or more middle layers. Furthermore, BUNAZAWA [0130] teaches that the distribution of a frequency component is obtained by subjecting the time-series data to a fast Fourier transform and normalizing it (i.e., a normalized frequency spectrum). The normalized data is then used as input (i.e., sent) to the map data DM model. The map data DM model is capable of determining gear state anomalies using classification (i.e., is configured to classify corruptions) by a support vector machine, using the time-series data as input after it has been subjected to normalization via the fast Fourier transform.)
utilizing the normalized frequency spectrum as input to the hyper model in order to classify a corruption associated with the input data; (Examiner's note: As taught above by BUNAZAWA [0040], [0086], [0130], [0138], [0145] and [0146], under broadest reasonable interpretation, "to classify a corruption" can be interpreted as the map data DM model being capable of determining gear state anomalies using classification (i.e., configured to classify corruptions) by a support vector machine by using the time-series data as input after being subjected to normalization by the fast Fourier transform.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN and BUNAZAWA before them, to include BUNAZAWA's normalized Fourier transform and map in SHEN's training method against image corruption. One would have been motivated to make such a combination in order to improve the accuracy of determination as to whether there is an anomaly (i.e., corruption) in the transmission based on various input variables such as vibration or sounds from sensors (i.e., input data) (BUNAZAWA [0010], [0031], and [0033]).
SHEN in view of BUNAZAWA is not relied upon for teaching:
updating one or more weights associated with the classifier based on the corruption associated with the input data; and
in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network; and
outputting a classification associated with the input data utilizing the classifier with updated weights.
However, BENZ teaches: updating one or more weights associated with the classifier based on the corruption associated with the input data; (BENZ [pg. 2, section 1 Introduction] teaches: "As indicated in Figure 1, we investigate and find that such influence on the model performance can be at least partially mitigated by estimating and adapting the statistics with a few representation samples from the corruption domain." BENZ [pg. 1, Abstract] teaches: "We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures." BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly." BENZ [pg. 6, section 6.2. Impact of mean and variance] teaches: "Rectifying the BN statistics involves the manipulation of two parameters, namely the mean μ and variance σ²." Examiner's note: paragraph [0046] of the instant application states: "The system may update the classifier 309 with the corresponding BN statistics. Upon updating the model of the classifier 309, the classifier 309 may output the corresponding classification 311." Additionally, paragraph [0031] of the instant application states: "Model adaptation may include updating model's parameters, or even architecture of the model. The system may update BN statistics to adapt the model to update weights based on the corruption." Therefore, under BRI in light of the specification, "updating the one or more weights" can be interpreted as rectifying the BN statistics of BENZ's classifier.)
in response to the corruption and corresponding batch norm (BN) statistic, updating one or more network BN statistics associated with the machine-learning network; (BENZ [pg. 1, Abstract] teaches: "We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures. For example, on ImageNet-C, statistics adaptation improves the top1 accuracy of ResNet50 (i.e., machine-learning network) from 39.2% to 48.7%." BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly." Examiner’s note: Under BRI, “in response to the corruption updating one or more network BN statistics” can be interpreted as Figure 1, which describes that when the model outputs an incorrect classification such as predicting that a “German Shepherd” is a “Beaver”, the BN statistics are rectified (i.e., updating) in order to output a corrected classification.)
outputting a classification associated with the input data utilizing the classifier with updated weights. (BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly.")
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, and BENZ before them, to include BENZ's estimating and adapting of BN statistics in SHEN/BUNAZAWA's training method against image corruption. One would have been motivated to make such a combination in order to improve model robustness under corruptions (BENZ [pg. 4, section 3.3. Motivation for rectifying batch normalization]).
Regarding Claim 3:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. SHEN further teaches:
wherein the frequency domain transformation on the input data includes utilizing a wavelength transform. (SHEN [pg. 19, A.5] teaches: "Formally, we do a standard 2-D Fourier Transform, and using the absolute value of each complex number and form another channel of the image," Examiner's note: a "wavelength transformation" can be considered a Fourier transform or any other frequency domain transformation.)
Regarding Claim 4:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. SHEN further teaches:
wherein the corruption includes Gaussian noise, shot noise, motion blur, zoom blur, compression, or brightness changes. (SHEN [pg. 2, Figure 1] teaches: "A frequency-space branch is added to the backbone when frequency-related perturbations (e.g., blur, noise) need to be handled.")
Regarding Claim 5:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. SHEN further teaches:
wherein the frequency domain transformation on the input data utilizes a Fourier transform. (SHEN [pg. 19, A.5] teaches: "Formally, we do a standard 2-D Fourier Transform, and using the absolute value of each complex number and form another channel of the image,")
Regarding Claim 6:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. SHEN further teaches:
wherein the hyper model is configured to classify a clean image. (SHEN [pg. 2, section I. Introduction] teaches: "Finally, we propose a comprehensive robustness evaluation standard under four different scenarios: clean data, single-perturbation data, multi-perturbation data, and previously unseen data." Examiner's note: under BRI, the "hyper model" can be interpreted as the map data DM model from BUNAZAWA, which determines the states of input data of interest (i.e., normal or damaged). Therefore, under BRI, "the hyper model is configured to classify a clean image" can be interpreted as the combination of the map data DM (i.e., hyper model) that makes an evaluation using SHEN's clean data.)
Regarding Claim 7:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. BENZ further teaches:
wherein the classifier is a pre-trained classifier. (BENZ [pg. 4, Figure 2] teaches: "[…] ResNet50 pretrained on ImageNet." Examiner’s note: ResNet50 is a classifier model.)
Regarding Claim 21:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. BENZ further teaches:
wherein the hypermodel is pre-trained utilizing natural samples without any corruption. (BENZ [pg. 1, Abstract] teaches: "The performance of DNNs trained on clean images has been shown to decrease when the test images have common corruptions." BENZ [pg. 1, Figure 1] teaches a clean image. BENZ [pg. 4, section 4. Experimental Setup] teaches: "We evaluate the performance of rectifying the BN statistics on various models trained on the corresponding clean dataset (i.e., utilizing natural samples without any corruption)." Examiner’s note: BUNAZAWA’s DM model is also a pre-trained model, and a person of ordinary skill in the art could apply BENZ’s training using a clean dataset in BUNAZAWA’s model.)
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over SHEN in view of BUNAZAWA and BENZ as applied to claim 1 above, and further in view of YAMAKAJI (US 20220335276 A1), hereafter YAMAKAJI.
Regarding Claim 2:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. SHEN in view of BUNAZAWA and BENZ are not relied upon for teaching, but YAMAKAJI teaches:
wherein generating the frequency spectrum is only associated with a first channel of the input data. (YAMAKAJI [0121] teaches: "[…] a method of performing Fourier transform on each channel and then making conversion to one channel in a fully connected layer, or a method of merely weighting each channel in advance so that the input signal 20 to be inputted to the input layer 11 has one channel, can be used." Examiner’s note: under BRI, “a first channel of the input data” can be interpreted as inputting the input signal as one channel, the input signal being the result of a Fourier transform.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, and YAMAKAJI before them, to include YAMAKAJI's channel input layer processing in SHEN/BUNAZAWA/BENZ's training method against image corruption. One would have been motivated to make such a combination in order to reduce calculation amount and increase calculation speed based on input size (YAMAKAJI [0147]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over SHEN in view of BUNAZAWA and BENZ, as applied to claim 1 above, and further in view of SEGU (US 20230122207 A1), hereafter SEGU.
Regarding Claim 8:
SHEN in view of BUNAZAWA and BENZ teaches the elements of claim 1 as outlined above. BENZ further teaches:
wherein updating the one or more weights is in response to utilizing […] BN statistics associated with the corruption. (BENZ [pg. 1, Abstract] teaches: "We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures." BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly." BENZ [pg. 6, section 6.2. Impact of mean and variance] teaches: "Rectifying the BN statistics involves the manipulation of two parameters, namely the mean μ and variance σ²." Examiner's note: paragraph [0031] of the instant application recites "Model adaptation may include updating model's parameters, or even architecture of the model. The system may update BN statistics to adapt the model to update weights based on the corruption." Therefore, "updating the one or more weights" can be interpreted as rectifying the parameters of the classifier.)
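Examiner's note (illustration only): the rectification of BN statistics discussed in BENZ — estimating statistics on a few representation samples from the corruption domain and adapting the model's stored mean μ and variance σ² — can be sketched as follows. The blending parameter `alpha` and the function name are the examiner's own assumptions for illustration, not BENZ's exact method.

```python
import numpy as np

def rectify_bn_statistics(mu_train, var_train, activations, alpha=0.5):
    """Blend training-time BN statistics with statistics estimated from a
    few activation samples drawn from the corruption domain.

    alpha = 1.0 keeps the original statistics unchanged;
    alpha = 0.0 fully adopts the corruption-domain estimates.
    """
    mu_est = activations.mean(axis=0)   # per-feature mean over the samples
    var_est = activations.var(axis=0)   # per-feature variance over the samples
    mu_new = alpha * mu_train + (1.0 - alpha) * mu_est
    var_new = alpha * var_train + (1.0 - alpha) * var_est
    return mu_new, var_new

# e.g., 32 representation samples with 4 features each
samples = np.random.randn(32, 4) + 2.0
mu, var = rectify_bn_statistics(np.zeros(4), np.ones(4), samples, alpha=0.5)
print(mu.shape, var.shape)  # (4,) (4,)
```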
SHEN in view of BUNAZAWA and BENZ is not relied upon for teaching, but SEGU teaches: […] a look-up table defining BN statistics associated with the corruption. (SEGU [0008] teaches: "The computing system includes one or more processors and one or more non-transitory computer-readable media that collectively store [...] plurality of different batch normalization layers respectively associated with a plurality of source domains;” SEGU [0040] teaches: “At training time, the multi-source batch normalization layer can collect and apply domain-specific batch statistics (μ_d^b, (σ_d^b)²), while accordingly updating the domain population statistics as moving average of the statistics for every batch b.” SEGU [0022] teaches: “During inference, a computing system can determine a target set of batch normalization statistics for a target sample associated with a target domain.” SEGU [0045] teaches: "[…] separate batch normalization statistics are kept for each domain, […]". Examiner's note: SEGU teaches maintaining a stored collection of batch normalization statistics for each domain, and each domain represents any possible scenario from collected samples as discussed in SEGU [0003]. During inference, the computing system determines (i.e., looks up) a target set of batch normalization statistics for a target sample (i.e., image) associated with a target domain. Under BRI, “a look-up table defining batch norm statistics associated with the corruption” can be interpreted as the multi-source batch normalization layer that collects and applies domain-specific batch statistics.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, and SEGU before them, to include SEGU’s multi-source batch normalization layer that collects and applies domain-specific batch statistics in SHEN/BUNAZAWA/BENZ's training method against image corruption. One would have been motivated to conserve data collection resources, thereby “reducing the consumption of computing resources such as processor usage, memory usage, and/or network bandwidth.” (SEGU [0030]).
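Examiner's note (illustration only): a look-up table of domain-specific batch normalization statistics of the kind SEGU describes can be sketched as a mapping keyed by corruption domain, from which statistics are looked up at inference time. The dictionary structure, function names, and domain labels below are the examiner's own hypothetical illustration, not SEGU's implementation.

```python
import numpy as np

# Hypothetical look-up table: corruption domain -> (mean, variance)
bn_table = {}

def store_domain_stats(domain, activations):
    """Record per-feature BN statistics for one corruption domain."""
    bn_table[domain] = (activations.mean(axis=0), activations.var(axis=0))

def normalize(x, domain, eps=1e-5):
    """Normalize activations using the statistics looked up for `domain`."""
    mu, var = bn_table[domain]   # the look-up step
    return (x - mu) / np.sqrt(var + eps)

acts = np.array([[1.0, 2.0], [3.0, 6.0]])
store_domain_stats("gaussian_noise", acts)
z = normalize(acts, "gaussian_noise")
print(np.round(z.mean(axis=0), 6))  # per-feature mean is ~0 after normalization
```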
Claims 9, 11-13, 15-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI.
Regarding Claim 9:
The claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. However, the combination of SHEN, BUNAZAWA, and BENZ is not relied upon for teaching, but YAMAKAJI teaches:
an input interface configured to receive input data from a sensor […] (YAMAKAJI [0039] teaches: "[0039] The input unit 37 is formed by a keyboard, a mouse, a microphone, a camera, or the like.")
a processor in communication with the input interface, wherein the processor is programmed to: (YAMAKAJI [0037] teaches: "[0037] As shown in FIG. 1, the hardware 100 includes a central processing unit (CPU) 30, and an input/output interface 35 is connected to the CPU".)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, and YAMAKAJI before them, to include YAMAKAJI's input interface and CPU in SHEN/BUNAZAWA/BENZ's training method against image corruption. One would have been motivated to make such a combination in order to receive and process different types of input data, such as images from a camera or sound from a microphone.
Regarding Claim 11:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 9 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
Regarding Claim 12:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 9 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.
Regarding Claim 13:
SHEN in view of BUNAZAWA, BENZ and YAMAKAJI teaches the elements of claim 9 as outlined above. BUNAZAWA further teaches:
wherein the hyper model is a three-layer fully connected neural network (BUNAZAWA [0086] teaches: “In the present embodiment, the map is a function approximator. Specifically, a fully-connected feed-forward neural network having a single middle layer is used as the map (i.e., fully connected neural network hyper model).” BUNAZAWA [0146] teaches: "In the process of S105, the number of middle layers in the neural network is one. However, the number of middle layers may be two or more (i.e., three-layer).”)
Regarding Claim 15:
The claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. However, the combination of SHEN, BUNAZAWA, and BENZ is not relied upon for teaching, but YAMAKAJI teaches:
A computer-program product storing instructions which, when executed by a computer, cause the computer to: (YAMAKAJI [0037] teaches: "the CPU 30 loads a program stored in a hard disk drive (HDD) 33 or a solid state drive (SSD, not shown) onto a random access memory (RAM) 32, and executes the program while performing reading and writing as necessary. Thus, the CPU 30 performs various processes to cause the hardware 100 to operate as a device having a predetermined function.")
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, and YAMAKAJI before them, to include YAMAKAJI's stored program and CPU in SHEN/BUNAZAWA/BENZ's training method against image corruption. One would have been motivated to make such a combination in order to execute the program and operate the device having a predetermined function (i.e., the training method against image corruption).
Regarding Claim 16:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 15 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 8 and is rejected for similar reasons as claim 8 using similar teachings and rationale.
Regarding Claim 17:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 15 as outlined above. BENZ further teaches:
wherein the instructions cause the computer to update one or more weights associated with the classifier. (BENZ [pg. 2, section 1 Introduction] teaches: "As indicated in Figure 1, we investigate and find that such influence on the model performance can be at least partially mitigated by estimating and adapting the statistics with a few representation samples from the corruption domain." Examiner’s note: the computer-program product with instructions that cause the computer to execute a predetermined function (i.e., update one or more weights) is taught by the combination of SHEN/BUNAZAWA/BENZ/YAMAKAJI above in claim 15.)
Regarding Claim 19:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 15 as outlined above. BUNAZAWA further teaches:
wherein the hyper model includes three layers. (BUNAZAWA [0086] teaches: “In the present embodiment, the map is a function approximator. Specifically, a fully-connected feed-forward neural network having a single middle layer is used as the map (i.e., fully connected neural network hyper model).” BUNAZAWA [0146] teaches: "In the process of S105, the number of middle layers in the neural network is one. However, the number of middle layers may be two or more (i.e., three-layer).”)
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI as applied to claim 9 above, and further in view of SEGU.
Regarding Claim 10:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 9 as outlined above. BENZ further teaches:
wherein the processor is programmed to update the one or more weights associated with the classifier […] (BENZ [pg. 1, Abstract] teaches: "We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures." BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly." BENZ [pg. 6, section 6.2. Impact of mean and variance] teaches: "Rectifying the BN statistics involves the manipulation of two parameters, namely the mean μ and variance σ²." Examiner's note: paragraph [0031] of the instant application recites "Model adaptation may include updating model's parameters, or even architecture of the model. The system may update BN statistics to adapt the model to update weights based on the corruption." Therefore, "to update the one or more weights associated with the classifier" can be interpreted as rectifying the parameters of the classifier. Furthermore, the processor is taught by YAMAKAJI [0039] as outlined above in claim 9.)
or directly updating the one or more weights. (BENZ [pg. 3, section 3.1. Revisiting classical batch normalization] discusses directly updating the BN statistics of the model (i.e., updating the weights) by using a moving average: "population statistics μ_p and σ_p² are estimated over the whole training dataset through moving average.")
However, SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI is not relied upon for teaching, but SEGU teaches: […] utilizing a look-up table (SEGU [0008] teaches: "The computing system includes one or more processors and one or more non-transitory computer-readable media that collectively store [...] plurality of different batch normalization layers respectively associated with a plurality of source domains;” SEGU [0040] teaches: “At training time, the multi-source batch normalization layer can collect and apply domain-specific batch statistics (μ_d^b, (σ_d^b)²), while accordingly updating the domain population statistics as moving average of the statistics for every batch b.” SEGU [0022] teaches: “During inference, a computing system can determine a target set of batch normalization statistics for a target sample associated with a target domain.” SEGU [0045] teaches: "[…] separate batch normalization statistics are kept for each domain, […]". Examiner's note: SEGU teaches maintaining a stored collection of batch normalization statistics for each domain, and each domain represents any possible scenario from collected samples as discussed in SEGU [0003]. During inference, the computing system determines (i.e., looks up) a target set of batch normalization statistics for a target sample (i.e., image) associated with a target domain. Under BRI, “utilizing a look-up table” can be interpreted as the multi-source batch normalization layer that collects and applies domain-specific batch statistics.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, YAMAKAJI, and SEGU before them, to include SEGU’s multi-source batch normalization layer that collects and applies domain-specific batch statistics in SHEN/BUNAZAWA/BENZ/YAMAKAJI's training method against image corruption. One would have been motivated to conserve data collection resources, thereby “reducing the consumption of computing resources such as processor usage, memory usage, and/or network bandwidth.” (SEGU [0030]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI, as applied to claim 13 above, and further in view of CHEN (US 20160293167 A1), hereafter CHEN.
Regarding Claim 14:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 13 as outlined above. However, SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI is not relied upon for teaching, but CHEN teaches:
wherein the three fully connected layers include a size of 1024 neurons, 512 neurons, and 16 neurons. (CHEN [0063] teaches: "Table 1 shows baseline results for various configurations of fully-connected networks: with variable number of layers (top), with variable context sizes (middle) and with variable number of nodes (bottom.)" CHEN [0127] teaches: "[0127] Alternatively, a different number of layers (e.g., 2, 3, 5, 8, etc.) or a different number of nodes per layer (e.g., 16, 32, 64, 128, 512, 1024, etc.) may be used.")
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, YAMAKAJI, and CHEN before them, to include CHEN's fully-connected layer configuration in SHEN/BUNAZAWA/BENZ/YAMAKAJI's training method against image corruption. One would have been motivated to make such a combination in order to use a fully-connected model to increase accuracy and significantly decrease error rates (CHEN [0045]).
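Examiner's note (illustration only): a three-layer fully connected network with layer sizes of 1024, 512, and 16 neurons, as recited in claim 14, can be sketched as a simple forward pass. The input dimension (64 here), the weight initialization, and the use of ReLU on the hidden layers are the examiner's own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three fully connected layers sized 1024, 512, and 16 neurons; the input
# dimension (64) is an assumed placeholder.
sizes = [64, 1024, 512, 16]
weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass with ReLU on the two hidden layers."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)   # ReLU on hidden layers only
    return x

out = forward(rng.standard_normal(64))
print(out.shape)  # (16,)
```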
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI, as applied to claim 15 above, and further in view of SEGU.
Regarding Claim 20:
SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI teaches the elements of claim 15 as outlined above. BENZ further teaches:
wherein the instructions cause the computer to update one or more weights of the classifier utilizing […] batch norm statistics associated with the corruption. (BENZ [pg. 1, Abstract] teaches: "We find that simply estimating and adapting the BN statistics on a few (32 for instance) representation samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures." BENZ [pg. 1, Figure 1] teaches: "An image under corruption changes the prediction from “German Shepherd” to “Beaver”. After rectifying the BN statistics, the corrupted image is classified correctly." BENZ [pg. 6, section 6.2. Impact of mean and variance] teaches: "Rectifying the BN statistics involves the manipulation of two parameters, namely the mean μ and variance σ²." Examiner's note: paragraph [0031] of the instant application recites "Model adaptation may include updating model's parameters, or even architecture of the model. The system may update BN statistics to adapt the model to update weights based on the corruption." Therefore, "updating the one or more weights" can be interpreted as rectifying the parameters of the classifier.)
However, SHEN in view of BUNAZAWA, BENZ, and YAMAKAJI is not relied upon for teaching, but SEGU teaches: […] a look-up table defining batch norm statistics associated with the corruption. (SEGU [0008] teaches: "The computing system includes one or more processors and one or more non-transitory computer-readable media that collectively store [...] plurality of different batch normalization layers respectively associated with a plurality of source domains;” SEGU [0040] teaches: “At training time, the multi-source batch normalization layer can collect and apply domain-specific batch statistics (μ_d^b, (σ_d^b)²), while accordingly updating the domain population statistics as moving average of the statistics for every batch b.” SEGU [0022] teaches: “During inference, a computing system can determine a target set of batch normalization statistics for a target sample associated with a target domain.” SEGU [0045] teaches: "[…] separate batch normalization statistics are kept for each domain, […]". Examiner's note: SEGU teaches maintaining a stored collection of batch normalization statistics for each domain, and each domain represents any possible scenario from collected samples as discussed in SEGU [0003]. During inference, the computing system determines (i.e., looks up) a target set of batch normalization statistics for a target sample (i.e., image) associated with a target domain. Under BRI, “a look-up table defining batch norm statistics associated with the corruption” can be interpreted as the multi-source batch normalization layer that collects and applies domain-specific batch statistics.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHEN, BUNAZAWA, BENZ, YAMAKAJI, and SEGU before them, to include SEGU’s multi-source batch normalization layer that collects and applies domain-specific batch statistics in SHEN/BUNAZAWA/BENZ/YAMAKAJI's training method against image corruption. One would have been motivated to conserve data collection resources, thereby “reducing the consumption of computing resources such as processor usage, memory usage, and/or network bandwidth.” (SEGU [0030]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alvaro S Laham Bauzo whose telephone number is (571)272-5650. The examiner can normally be reached Mon-Fri 7:30 AM - 11:00 AM | 1:00 PM - 5:30 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached on (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.S.L./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146