DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/26/2026 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, it must be
determined whether the claim is directed to one of the four statutory categories of
invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the
claim does fall within one of the statutory categories, the second step in the analysis is
to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A
analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined
whether or not the claims recite a judicial exception (e.g., mathematical concepts,
mental processes, certain methods of organizing human activity). If it is determined in
Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the
second prong (Step 2A, Prong 2), where it is determined whether or not the claims
integrate the judicial exception into a practical application. If it is determined at step 2A,
Prong 2 that the claims do not integrate the judicial exception into a practical
application, the analysis proceeds to determining whether the claim is a patent-eligible
application of the exception (Step 2B). If an abstract idea is present in the claim, any
element or combination of elements in the claim must be sufficient to ensure that the
claim integrates the judicial exception into a practical application, or else amounts to
significantly more than the abstract idea itself. Applicant is advised to consult the 2019
PEG for more details of the analysis.
Step 1
According to the first part of the analysis, in the instant case, claims 1-7 and 21-26 are directed to one or more processors, claims 8-13 and 27-31 are directed to a system, and claims 14-20 and 32-37 are directed to a medium, each comprising means for using one or more neural networks to generate data labels based, at least in part, on a first version of training data and a modified version of the first version of training data. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Step 2A, Prong 1
Following the determination of whether or not the claims fall within one of the four
categories (Step 1), it must be determined if the claims recite a judicial exception (e.g.
mathematical concepts, mental processes, certain methods of organizing human
activity) (Step 2A, Prong 1). In this case, the claims are determined to recite a judicial
exception as explained below.
Regarding Claims 1, 8, and 14, these claims recite:
apply one or more augmentations to one or more first images of training data to generate one or more modified images, wherein the training data comprises one or more first labels corresponding to the one or more first images, the one or more first labels generated using one or more first neural networks;
generate, using one or more second neural networks, one or more pseudo labels corresponding to the one or more modified images, wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations; and
generate neural network training data comprising the one or more first images, the one or more first labels corresponding to the one or more first images and generated using the one or more first neural networks, the one or more modified images, and the one or more pseudo labels corresponding to the one or more modified images and generated using the one or more second neural networks.
The claims recite a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), “Claims can recite a mental process even if they are claimed as being performed on a computer.” These limitations recite using one or more neural networks as a tool (as disclosed at specification [0061]-[0066] and Fig. 1) to perform an abstract idea. See MPEP 2106.05(f). Thus, the claims recite an abstract idea.
Regarding Claims 21 and 32, these claims recite:
use one or more first neural networks to generate one or more first labels corresponding to one or more first images of training data to be used to train one or more second neural networks, wherein the one or more first neural networks are trained based, at least in part, on the one or more first images of the training data and one or more modified images; and generate, using the one or more second neural networks, one or more pseudo labels corresponding to the one or more modified images, wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations.
The claims recite a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), “Claims can recite a mental process even if they are claimed as being performed on a computer.” These limitations recite using one or more neural networks as a tool (as disclosed at specification [0061]-[0066] and Fig. 1) to perform an abstract idea. See MPEP 2106.05(f). Thus, the claims recite an abstract idea.
Regarding Claim 27, this claim recites:
generate a modified version of training data by applying one or more augmentations to a first version of training data, wherein the first version of training data comprises one or more first labels corresponding to the first version of training data, the one or more first labels generated using one or more first neural networks; generate, using one or more second neural networks, one or more second labels for the modified version of training data independent of the one or more first labels, wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations; and generate neural network training data using the one or more second labels and the modified version of training data.
The claim recites a mental process. As set forth in MPEP 2106.04(a)(2)(III)(C), “Claims can recite a mental process even if they are claimed as being performed on a computer.” This limitation recites using one or more neural networks as a tool (as disclosed at specification [0061]-[0066] and Fig. 1) to perform an abstract idea. See MPEP 2106.05(f). Thus, the claim recites an abstract idea.
Step 2A, Prong 2
Following the determination that the claims recite a judicial exception, it must be
determined if the claims recite additional elements that integrate the exception into a
practical application of the exception (Step 2A, Prong 2). In this case, after considering
all claim elements individually and as an ordered combination, it is determined that the
claims do not include additional elements that integrate the exception into a practical
application of the exception as explained below.
In Prong Two, a claim is evaluated as a whole to determine whether the recited judicial exception is integrated into a practical application of that exception. A claim is not “directed to” a judicial exception, and thus is patent eligible, if the claim as a whole integrates the recited judicial exception into a practical application of that exception. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. MPEP 2106.04(d). Here, the claims recite an abstract idea, and the claims as a whole do not integrate the recited judicial exception into a practical application of the exception.
Regarding Claims 1, 8, 14, 21, 27, and 32, these claims recite using one or more neural networks as a tool to perform an abstract idea, which is not indicative of integration into a practical application. MPEP 2106.05(f).
MPEP § 2106.05(f): Mere Instructions to Apply an Exception. Do the additional element(s) amount to merely the words “apply it” (or an equivalent), or are they mere instructions to implement an abstract idea or other exception on a computer? (Yes)
Step 2B
Based on the determination in Step 2A of the analysis that the claims are
directed to a judicial exception, it must be determined if the claims contain any element
or combination of elements sufficient to ensure that the claim amounts to significantly
more than the judicial exception (Step 2B). In this case, after considering all claim
elements individually and as an ordered combination, it is determined that the claims do
not include additional elements that are sufficient to amount to significantly more than
the judicial exception for the same reasons given above in the Step 2A, Prong 2
analysis. Furthermore, each additional element identified above as being insignificant
extra-solution activity is also well-understood, routine, and conventional, as described below.
Regarding Claims 1, 8, 14, 21, 27, and 32: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and a field of use/technological environment, which do not amount to significantly more than the abstract idea. The underlying concept merely receives information, analyzes it, and stores the results of the analysis. This concept is not meaningfully different from concepts found by the courts to be abstract (see Electric Power Group, collecting information, analyzing it, and displaying certain results of the collection and analysis; see CyberSource, obtaining and comparing intangible data; see Digitech, organizing information through mathematical correlations; see Grams, diagnosing an abnormal condition by performing clinical tests and thinking about the results; see Cyberfone, using categories to organize, store, and transmit information; see SmartGene, comparing new and stored information and using rules to identify options).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as a combination, do not amount to significantly more than the abstract idea. For example, claim 1 recites “apply…”, “generate…”, and “generate…”; claim 21 recites “use…” and “generate…”; and claim 27 recites “generate…”, “generate…”, and “generate…”. These elements are recited at a high level of generality and are well-understood, routine, and conventional activities in the computer art. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. Looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims do not amount to significantly more than the abstract idea itself.
Dependent Claims: Step 2A, Prong 2 and Step 2B
Regarding Claims 2, 19, 31, and 32
Claims 2, 19, 31, and 32 merely recite additional elements that define parameters of the neural network and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding Claims 3, 23, 24, 26, 28, 29, 30, 33, 34, and 36
Claims 3, 23, 24, 26, 28, 29, 30, 33, 34, and 36 merely recite additional elements that define the labels and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding Claim 4
Claim 4 merely recites additional elements that train the one or more first neural networks using both a task loss and a consistency loss, which perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding Claim 5
Claim 5 merely recites additional elements that define the images and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding Claim 6
Claim 6 merely recites additional elements that define the training neural network parameters and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding Claim 7
Claim 7 merely recites additional elements that define the modified images and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, this claim also does not amount to significantly more than the abstract idea itself. This claim is not patent eligible.
Regarding Claims 9, 11, 13, 15, 17, 18, 20, 22, and 35
Claims 9, 11, 13, 15, 17, 18, 20, 22, and 35 merely recite additional elements that define the training of the neural network and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding Claims 10 and 16
Claims 10 and 16 merely recite additional elements that define images of the training data and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding Claims 12 and 37
Claims 12 and 37 merely recite additional elements that define the neural network parameters and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Regarding Claims 25 and 29
Claims 25 and 29 merely recite additional elements that define the training of the neural network and that perform generic functions; looking at the elements as a combination does not add anything more than the elements analyzed individually. Therefore, these claims also do not amount to significantly more than the abstract idea itself. These claims are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-11, 14-15, 19, 21-25, 27, 29-33, and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Zhang), US 2021/0073660, in view of Uchida et al. (Uchida), US 2019/0294955.
In regard to claim 1, Zhang discloses One or more processors (Fig. 1, [0008] [0096] [0121]: a processor system with one or more microprocessors) comprising:
apply one or more augmentations to one or more first images of training data to generate one or more modified images ([0009]-[0016] [0096] [0109]-[0113]: apply data augmentation to the training data to generate the augmented data, such as augmented images), wherein the training data comprises one or more first labels corresponding to the one or more first images, the one or more first labels generated using one or more first neural networks ([0009]-[0016] [0096] [0109]-[0113]: the training data has labels corresponding to the images, such as “dog”, and the labels are generated by the machine learnable model (a neural network) providing the class labels);
generate, using one or more second neural networks, one or more pseudo labels corresponding to the one or more modified images ([0009]-[0039] [0096]-[0098] [0109]-[0113]: generate the predicted target labels corresponding to the augmented images using a neural network; the machine learnable models could be different); and
generate neural network training data comprising the one or more first images, the one or more first labels corresponding to the one or more first images and generated using the one or more first neural networks ([0009]-[0010]: generating the training data with images and corresponding labels using the neural network), the one or more modified images, and the one or more pseudo labels corresponding to the one or more modified images and generated using the one or more second neural networks ([0011]-[0016] [0096] [0109]-[0113]: generating new data instances of training data with augmented images and predicted target labels using another neural network; the ML models and neural networks could be different or even the same).
But Zhang fails to explicitly disclose “wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations.”
Uchida discloses wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations (Fig. 6, [0052]-[0059]: the second neural network is generated by averaging weights of the first neural network (which can be considered a prior version of the second neural network, with its weights updated by backpropagation); the averaged weights are input to the second neural network, the weights are updated based on the corresponding gradients, and the operation is repeated. Note: Applicant is invited to further define the difference between the first and second neural networks to help move prosecution forward; please call to discuss if necessary).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Uchida’s method of information processing using neural networks into Zhang’s invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Uchida’s training of the second neural network using the first neural network’s averaged weights would provide more up-to-date neural networks in Zhang’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more updated neural networks based on the first neural network would improve the performance and training efficiency of the neural network.
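For technical context only (illustrative, and not part of the claims or the cited references): the “averaging weights of a plurality of prior versions … over multiple training iterations” limitation resembles a well-known exponential-moving-average (“mean teacher”) style update, in which a teacher network's weights are a running average of successive student weights. A minimal sketch follows, in which all names and the 0.99 decay value are assumptions:

```python
# Illustrative sketch of weight averaging over training iterations
# (exponential moving average, "mean teacher" style). All names and
# the decay value are assumptions, not taken from the record.

def ema_update(teacher_weights, student_weights, decay=0.99):
    """Blend the teacher's weights with the student's current weights."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_weights, student_weights)]

# Over multiple iterations the teacher becomes an average of many
# prior versions of the (continually updated) student network.
teacher = [0.0, 0.0]
for step in range(3):
    student = [float(step + 1)] * 2  # stand-in for weights after an SGD step
    teacher = ema_update(teacher, student)
```

Under this sketch, the averaged (teacher) network changes slowly relative to any single training iteration, which is the usual rationale for using it to generate pseudo labels.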
In regard to claim 2, Zhang and Uchida disclose The one or more processors of claim 1; the rejection is incorporated herein.
But Zhang fails to explicitly disclose “wherein at least one parameter of the one or more second neural networks comprises an average of two or more parameters corresponding to the one or more first neural networks.”
Uchida discloses wherein at least one parameter of the one or more second neural networks comprises an average of two or more parameters corresponding to the one or more first neural networks (Fig. 6, [0053]-[0059]: the average weights obtained from the weights of the first neural network are the parameters of the second neural network).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Uchida’s method of information processing using neural networks into Zhang’s invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Uchida’s training of the second neural network using the first neural network’s averaged weights would provide more up-to-date neural networks in Zhang’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more updated neural networks based on the first neural network would improve the performance and training efficiency of the neural network.
In regard to claim 3, Zhang and Uchida disclose The one or more processors of claim 1; the rejection is incorporated herein.
Zhang discloses wherein the one or more first labels indicate objects in the one or more first images ([0008]-[0020] [0094]-[0100]: images and objects in the images with class labels).
In regard to claim 5, Zhang and Uchida disclose The one or more processors of claim 1; the rejection is incorporated herein.
Zhang discloses wherein the one or more modified images comprises images resulting from application of image transformations to the one or more first images of the training data ([0008]-[0020] [0093]-[0100] [0102]-[0114]: the data instance with images, and the augmented data instances with augmented images corresponding to the data instance).
In regard to claim 6, Zhang and Uchida disclose The one or more processors of claim 1; the rejection is incorporated herein.
Zhang discloses wherein the circuitry is further to increase, relative to a previous round of training, a number of the one or more augmentations used to generate the one or more modified images (Fig. 1-3, [0009]-[0020] [0028]-[0032] [0093]-[0100] [0102]-[0114]: generating a new data instance x* based on the received data instance x of the training data and modified versions of the data instance x with the augmentation variable s, where s can change and increase the variances).
In regard to claim 7, Zhang and Uchida disclose The one or more processors of claim 1; the rejection is incorporated herein.
Zhang discloses wherein the one or more modified images comprises multiple different versions of a same datum from the one or more first images (Fig. 1-3, [0009]-[0020] [0093]-[0100] [0102]-[0114]: generating a new data instance x* based on the received data instance x of the training data and modified versions of the data instance x with the augmentation variable s, where s can take various values).
In regard to claim 8, this claim is a system claim corresponding to the one or more processors of claim 1 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 1.
In regard to claim 9, Zhang and Uchida disclose The system of claim 8; the rejection is incorporated herein.
Zhang discloses wherein the one or more processors are to train the one or more second neural networks further based, at least in part, on a set of labels generated by one or more other neural networks (Fig. 1-3, [0009]-[0020] [0093]-[0100] [0102]-[0114]: train the ML 300 (second neural network) continuously based on the predicted target labels y* generated with a function (f) that represents a first neural network).
In regard to claim 10, Zhang and Uchida disclose The system of claim 8; the rejection is incorporated herein.
Zhang discloses wherein the one or more first images comprises a datum and the one or more modified images comprises a modified version of the datum ([0008]-[0020] [0093]-[0100] [0102]-[0114]: the data instance with images, and the augmented data instances with augmented images corresponding to the data instance), and wherein the one or more processors are further to generate a label of the modified version of the datum independently from the datum ([0008]-[0020] [0093]-[0100] [0102]-[0114]: assigning a class label to the new instances of data).
In regard to claim 11, Zhang and Uchida disclose The system of claim 8; the rejection is incorporated herein.
Zhang discloses wherein the one or more processors are to train the one or more second neural networks further based, at least in part, on one or more previous versions of the one or more second neural networks (Fig. 1-3, [0009]-[0020] [0028]-[0032] [0093]-[0100] [0102]-[0114]: train the ML 300 (second neural network) continuously and iteratively based on the predicted target labels y* generated with a function (f)).
In regard to claim 14, this claim is a medium claim corresponding to the one or more processors of claim 1 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 1.
In regard to claim 15, Zhang and Uchida disclose The machine-readable medium of claim 14; the rejection is incorporated herein.
Zhang discloses wherein the set of instructions, if performed by one or more processors, further cause the one or more processors to train the one or more second neural networks by using a set of labels generated by one or more previous neural networks (Fig. 1-3, [0009]-[0020] [0093]-[0100] [0102]-[0114]: train the ML 300 (second neural network) continuously based on the predicted target labels y* generated with a function (f) that represents a first neural network).
In regard to claim 19, Zhang and Uchida disclose The machine-readable medium of claim 15; the rejection is incorporated herein.
Zhang discloses wherein the set of labels are classifications of image content ([0008]-[0020] [0093]-[0100] [0102]-[0114]: assigning class labels to the new instances of data with image content).
In regard to claim 21, Zhang discloses One or more processors (Fig. 1, [0013]-[0021]) comprising: circuitry to use one or more first neural networks to generate one or more first labels corresponding to one or more first images of training data to be used to train one or more second neural networks, wherein the one or more first neural networks are trained based, at least in part, on the one or more first images of the training data and one or more modified images (Fig. 1-3, [0009]-[0020] [0093]-[0100] [0102]-[0114]: generating a data instance including images, with predicted target labels, using an ML model that represents a first neural network, which is trained based on the data instance of the training data and modified versions of the data instance produced by augmentation; the data instance and corresponding labels can be used to train a second neural network); and
generate, using the one or more second neural networks, one or more pseudo labels corresponding to the one or more modified images ([0009]-[0039] [0096]-[0098] [0109]-[0113]: generate the predicted target labels corresponding to the augmented images using a neural network; the machine learnable models could be different).
But Zhang fails to explicitly disclose “wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations.”
Uchida discloses wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations (Fig. 6, [0052]-[0059]: the second neural network is generated by averaging weights of the first neural network (which can be considered a prior version of the second neural network, with its weights updated by backpropagation); the averaged weights are input to the second neural network, the weights are updated based on the corresponding gradients, and the operation is repeated).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Uchida’s method of information processing using neural networks into Zhang’s invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Uchida’s training of the second neural network using the first neural network’s averaged weights would provide more up-to-date neural networks in Zhang’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more updated neural networks based on the first neural network would improve the performance and training efficiency of the neural network.
In regard to claim 22, Zhang and Uchida disclose The one or more processors of claim 21; the rejection is incorporated herein.
Zhang discloses wherein the one or more first neural networks is a convolutional neural network ([0008]-[0020]: a deep convolutional neural network).
In regard to claim 23, Zhang and Uchida disclose The one or more processors of claim 21; the rejection is incorporated herein.
Zhang discloses wherein the training data comprises audio data ([0009]-[0020] [0094]: audio fragments).
In regard to claim 24, Zhang and Uchida disclose The one or more processors of claim 23; the rejection is incorporated herein.
Zhang discloses wherein the one or more modified images comprises modified audio data resulting from applying modifications to the audio data ([0009]-[0020] [0094]-[0103]: audio fragments which are augmented).
In regard to claim 25, Zhang and Uchida disclose The one or more processors of claim 21; the rejection is incorporated herein.
Zhang discloses wherein the one or more second neural networks is trained to infer the one or more first labels based, at least in part, on a plurality of different modifications of a same datum (Fig. 1-3, [0009]-[0020] [0093]-[0100] [0102]-[0114]: ML 300 is trained with reference to the augmented data instances corresponding to the data instance, with the target prediction label as input).
In regard to claim 27, Zhang discloses A system (Fig. 1, [0008] [0096] [0121]: a processor system with one or more microprocessors) comprising:
generate a modified version of training data by applying one or more augmentations to a first version of training data, wherein the first version of training data comprises one or more first labels corresponding to the first version of training data, the one or more first labels generated using one or more first neural networks ([0009]-[0016] [0096] [0109]-[0113]: apply data augmentation to the training data to generate the augmented data; the training data has labels corresponding to the first images, such as “dog”, and the labels are generated by the machine learnable model (a neural network) providing the class labels);
generate, using one or more second neural networks, one or more second labels for the modified version of training data independent of the one or more first labels ([0009]-[0039] [0096]-[0098] [0109]-[0113]: generate the predicted target labels corresponding to the augmented images using a neural network); and
generate neural network training data using the one or more second labels and the modified version of training data ([0011]-[0016] [0096] [0109]-[0113]: generating new data instances of training data with augmented images and predicted target labels using a neural network).
But Zhang fail to explicitly disclose “wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations;”
Uchida disclose wherein the one or more second neural networks is generated by averaging weights of a plurality of prior versions of the one or more second neural networks over multiple training iterations; (Fig. 6, [0052]-[0059] the second NN is generate by average weights of the first NN (which can be considered as previous version of the second NN and its weights are updated by backpropagation) input to the second NN and the weights are updated based on the corresponding gradients and the operation is repeated)
It would have been obvious to one having ordinary skill in the art before the effective filing data of the claimed invention was made to incorporate Uchida’s method of information processing using NN into Zhang’s invention as they are related to the same field endeavor of data processing using ML models. The motivation to combine these arts, as proposed above, at least because Uchida’s training second NN using first NN’s average weights would help to provide more updated NNs into Zhang’s system. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing data of the claimed invention was made that providing more updated NNs based on the first NN would improve performance and training efficiency of the NN.
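For context, the weight averaging over multiple training iterations recited above is commonly implemented in the art as a running average of parameter snapshots saved from prior versions of a network. The following sketch is illustrative only, all names are hypothetical, and it is not taken from Zhang or Uchida.

```python
# Illustrative sketch only (not from Zhang or Uchida): forming a "second"
# network's weights by averaging the weights of prior versions of a network
# saved over multiple training iterations.
def average_weights(snapshots):
    """Uniformly average each named parameter across all saved versions."""
    n = len(snapshots)
    return {name: sum(s[name] for s in snapshots) / n for name in snapshots[0]}

# Toy example: three snapshots of a one-parameter model taken during training.
snapshots = [{"w": 1.0}, {"w": 2.0}, {"w": 3.0}]
averaged = average_weights(snapshots)
# → {"w": 2.0}
```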
In regard to claim 29, Zhang and Uchida disclose the system of claim 27; the rejection is incorporated herein.
Zhang discloses wherein the one or more second neural networks are trained to infer the one or more first labels corresponding to the first version of training data using results from applying one or more modifications to a same datum. (Figs. 1-3, [0009]-[0020] [0093]-[0100] [0102]-[0114] ML 300 is trained with reference to the augmented data instances corresponding to the data instance with the target prediction label input)
In regard to claims 30-31, claims 30-31 are system claims corresponding to the one or more processors claims 5 and 2 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 5 and 2.
In regard to claim 32, claim 32 is a medium claim corresponding to the one or more processors claim 21 above and, therefore, is rejected for the same reasons set forth in the rejections of claim 21.
In regard to claim 33, Zhang and Uchida disclose the machine-readable medium of claim 32; the rejection is incorporated herein.
Zhang discloses wherein the training data comprises video data. ([0122] sensor data obtained from a video camera)
In regard to claim 35, claim 35 is a medium claim corresponding to the system claim 29 above and, therefore, is rejected for the same reasons set forth in the rejections of claim 29.
In regard to claim 36, Zhang and Uchida disclose the machine-readable medium of claim 32; the rejection is incorporated herein.
Zhang discloses wherein the one or more modified images comprises a modified version of the one or more first images of the training data ([0008]-[0020] [0093]-[0100] [0102]-[0114] the data instance with images and the augmented data instances with augmented images corresponding to the data instance) and wherein the one or more processors are further to use the one or more first neural networks to generate a label of the one or more modified images independently from the one or more first images. ([0009]-[0016] [0096] [0109]-[0113] apply data augmentation to the training data to generate the augmented data; the training data have labels corresponding to the images, such as "dog", etc.; generate the predicted target labels corresponding to the augmented images using a NN)
Claims 4, 12-13, 16-18, 20, 26, 34, 37 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Zhang) US 2021/0073660 and Uchida et al. (Uchida) US 2019/0294955 as applied to claim 1, further in view of Albright et al. (Albright) US 2019/0130218 A1
In regard to claim 4, Zhang and Uchida disclose the one or more processors of claim 1; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the circuitry is to use the one or more first neural networks using both task loss and consistency loss."
Albright discloses wherein the circuitry is to use the one or more first neural networks using both task loss and consistency loss. ([0026]-[0031] determining that the error is below a certain threshold value, which corresponds to a consistency loss, and using a score indicating a likelihood that the object represents a type, which can represent a task loss compared to 100%)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of calculating loss into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's method of calculating loss on training data would help to provide an additional loss-calculation method in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that calculating loss on training data would help to train the ML models to classify the training data.
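For context, using both a task loss and a consistency loss, as recited in claim 4, is commonly implemented in the art as a weighted sum of a supervised term and a term penalizing disagreement between predictions on an input and an augmented copy of it. The sketch below is illustrative only, all names are hypothetical, and it is not taken from the cited references.

```python
# Illustrative sketch only (not from the cited references): a training
# objective combining a supervised task loss with a consistency loss
# between predictions on an input and its augmented version.
def combined_loss(pred, label, pred_augmented, consistency_weight=0.5):
    task_loss = (pred - label) ** 2                  # supervised (task) term
    consistency_loss = (pred - pred_augmented) ** 2  # agreement term
    return task_loss + consistency_weight * consistency_loss

# task = (0.8 - 1.0)^2 = 0.04; consistency = (0.8 - 0.6)^2 = 0.04
# total = 0.04 + 0.5 * 0.04 = 0.06
total = combined_loss(pred=0.8, label=1.0, pred_augmented=0.6)
```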
In regard to claim 12, Zhang and Uchida disclose the system of claim 11; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the one or more previous versions of the one or more second neural networks comprise a neural network with weights formed based at least in part on weights of multiple previous versions of the one or more second neural networks."
Albright discloses wherein the one or more previous versions of the one or more second neural networks comprise a neural network with weights formed based at least in part on weights of multiple previous versions of the one or more second neural networks. ([0018] [0022] [0026]-[0029] [0042]-[0045] training NNs by adjusting weights associated with the NNs and by weighting the more relevant features to reduce the error)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN with weights into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's method of training the NN with weights would help to adapt the training data in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that adapting the training data with weights would help to train the ML models to classify the training data.
In regard to claim 13, Zhang and Uchida disclose the system of claim 8; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the one or more processors are to train the one or more second neural networks by at least varying a number of consistency regularization terms for the one or more second neural networks as the number of training iterations for the one or more second neural networks increases."
Albright discloses wherein the one or more processors are to train the one or more second neural networks by at least varying a number of consistency regularization terms for the one or more second neural networks as the number of training iterations for the one or more second neural networks increases. ([0018] [0022] [0026]-[0029] [0042]-[0045] training NNs by iterating until the error rate is below a value based on an aggregate metric, including weights, score, etc.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN with weights into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's training of the NN until a condition is reached would help to provide a training criterion in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a training criterion would help to train the ML models to classify the training data.
In regard to claim 16, Zhang and Uchida disclose the machine-readable medium of claim 15; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein one or more first labels corresponding to the one or more first images comprises one or more incorrect labels."
Albright discloses wherein one or more first labels corresponding to the one or more first images comprises one or more incorrect labels. ([0002] [0015]-[0016] [0020] the initial data includes misrepresented or unlabeled data; therefore, misclassifications need to be overcome)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's method of training the NN with incorrectly labeled data would help to provide more training data in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing incorrectly labeled training data would help to train the ML models to classify the training data.
In regard to claim 17, Zhang and Uchida disclose the machine-readable medium of claim 15; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the set of instructions, if performed by one or more processors, further cause the one or more processors to train the one or more second neural networks by adjusting a number of consistency regularization terms for the one or more second neural networks."
Albright discloses wherein the set of instructions, if performed by one or more processors, further cause the one or more processors to train the one or more second neural networks by adjusting a number of consistency regularization terms for the one or more second neural networks. ([0018] [0022] [0026]-[0029] [0042]-[0045] training NNs by adjusting the weights associated with the features of the NNs and by weighting the more relevant features to reduce the error)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN with weights into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's training of the NN until a condition is reached would help to provide a training criterion in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a training criterion would help to train the ML models to classify the training data.
In regard to claim 18, Zhang, Uchida and Albright disclose the machine-readable medium of claim 17; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the set of instructions, if performed by one or more processors, further cause the one or more processors to increase the number of consistency regularization terms as training iterations for the one or more second neural networks increases."
Albright discloses wherein the set of instructions, if performed by one or more processors, further cause the one or more processors to increase the number of consistency regularization terms as training iterations for the one or more second neural networks increases. ([0018] [0022] [0026]-[0029] [0042]-[0045] training NNs by iterating until the error rate is below a value based on an aggregate metric, including weights, score, etc.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN with weights into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's training of the NN until a condition is reached would help to provide a training criterion in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a training criterion would help to train the ML models to classify the training data.
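For context, increasing consistency regularization as training iterations increase, as recited in claim 18, is commonly implemented in the art as a ramp-up schedule on the regularization weight. The sketch below is illustrative only, all names are hypothetical, and it is not taken from the cited references.

```python
# Illustrative sketch only (not from the cited references): linearly ramping
# up the consistency-regularization weight as the training iteration grows.
def consistency_weight(iteration, ramp_iterations=100, max_weight=1.0):
    """Grows from 0 to max_weight over ramp_iterations, then stays flat."""
    return max_weight * min(iteration / ramp_iterations, 1.0)

weights = [consistency_weight(i) for i in (0, 50, 100, 200)]
# → [0.0, 0.5, 1.0, 1.0]
```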
In regard to claim 20, Zhang and Uchida disclose the machine-readable medium of claim 15; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the set of instructions, if performed by one or more processors, that cause the one or more processors to train the one or more second neural networks further cause the one or more processors to be trained based, at least in part, on a neural network comprising weights determined from averaging weights from one or more other neural networks."
Albright discloses wherein the set of instructions, if performed by one or more processors, that cause the one or more processors to train the one or more second neural networks further cause the one or more processors to be trained based, at least in part, on a neural network comprising weights determined from averaging weights from one or more other neural networks. ([0018] [0022]-[0027] training the NNs based on various weights; the weights can be adjusted according to the implementation choice)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN with weights into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's training of the NN until a condition is reached would help to provide a training criterion in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a training criterion would help to train the ML models to classify the training data.
In regard to claim 26, Zhang and Uchida disclose the one or more processors of claim 21; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the one or more first labels corresponding to the one or more first images of training data comprises one or more erroneous labels."
Albright discloses wherein the one or more first labels corresponding to the one or more first images of training data comprises one or more erroneous labels. ([0002] [0015]-[0016] [0020] the initial data includes misrepresented or unlabeled data; therefore, misclassifications need to be overcome)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's method of training the NN with incorrectly labeled data would help to provide more training data in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing incorrectly labeled training data would help to train the ML models to classify the training data.
In regard to claim 34, Zhang and Uchida disclose the machine-readable medium of claim 32; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the one or more first images of the training data comprises one or more mislabeled data."
Albright discloses wherein the one or more first images of the training data comprises one or more mislabeled data. ([0002] [0015]-[0016] [0020] the initial data includes misrepresented or unlabeled data; therefore, misclassifications need to be overcome)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's method of training the NN with incorrectly labeled data would help to provide more training data in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing incorrectly labeled training data would help to train the ML models to classify the training data.
In regard to claim 37, Zhang and Uchida disclose the machine-readable medium of claim 32; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the one or more processors are to use the one or more first neural networks to adjust consistency regularizations terms for the one or more second neural networks."
Albright discloses wherein the one or more processors are to use the one or more first neural networks to adjust consistency regularization terms for the one or more second neural networks. ([0018] [0022] [0026]-[0029] [0042]-[0045] training NNs by adjusting error rates, weights, score, etc., until the error rate is below a value based on an aggregate metric)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Albright's method of training the NN with weights into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Albright's training of the NN until a condition is reached would help to provide a training criterion in Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a training criterion would help to train the ML models to classify the training data.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (Zhang) US 2021/0073660 and Uchida et al. (Uchida) US 2019/0294955 as applied to claim 1, further in view of Nair et al. (Nair) US 11514948 B1
In regard to claim 28, Zhang and Uchida disclose the system of claim 27; the rejection is incorporated herein.
But Zhang and Uchida fail to explicitly disclose "wherein the first version of training data comprises data from natural language processing (NLP) algorithms."
Nair discloses wherein the first version of training data comprises data from natural language processing (NLP) algorithms. (col. 2, line 54 - col. 3, line 17 and col. 3, line 60 - col. 4, line 8: the audio data from speaking the first language to the second language with model-based dubbing, audio extraction and analysis, language translation, etc.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Nair's generation of a translated version of a source video into Uchida and Zhang's invention, as they are related to the same field of endeavor of data processing using ML models. The motivation to combine these references, as proposed above, is at least that Nair's method of generating a translated version of a source video would help to provide more modified versions of data to Uchida and Zhang's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing a translated version of a source video would help to provide more training data and therefore help to train the ML models to identify translated versions of data.
Response to Arguments
Applicant's arguments filed on 1/26/2026 regarding claims 1-37 have been considered but are moot because the arguments do not apply to the current rejection.
With respect to the arguments related to 35 U.S.C. § 101, § 102, and § 103, please see the rejections above, which are based on the amendment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.
U.S. Patent Documents: PATENT | DATE | INVENTOR(S) | TITLE
US 20190332875 A1 | 2019-10-31 | Vallespi-Gonzalez et al. | Traffic Signal State Classification For Autonomous Vehicles
Vallespi-Gonzalez et al. disclose Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, the disclosed technology can include receiving sensor data and map data. The sensor data can include information associated with an environment detected by sensors of a vehicle. The map data can include information associated with traffic signals in the environment. Further, an input representation can be generated based on the sensor data and the map data. The input representation can include regions of interest associated with images of the traffic signals. States of the traffic signals in the environment can be determined, based on the input representation and a machine-learned model. Traffic signal state data that includes a determinative state of the traffic signals can be generated based on the states of the traffic signals… see abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA whose telephone number is (571)270-3045. The examiner can normally be reached Monday-Friday 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XUYANG XIA/Primary Examiner, Art Unit 2143