DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The rejection of claims 1-3 and 5-11 under 35 U.S.C. § 101 is withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 7-8, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Namiki et al. (Namiki) US 2022/0343631 in view of Kolouri et al. (Kolouri) US 2020/0130177 and Li et al. (Li) US 2020/0019758.
In regard to claim 1, Namiki discloses A method, the method comprising: ([0002] [0006]-[0034] [0061]-[0073] a learning method for an object recognition model (basic algorithm) with the object recognition apparatus for domain adaptation)
extracting features from input samples using the target model and generating an output result of classifying the input samples into classes using the features, wherein the target model comprises a feature extractor extracting the features and a classifier generating the output result; (Fig. 2, Fig. 4, Fig. 7 [0062]-[0070] [0076]-[0082] [0104]-[0105] the feature extraction unit extracts features from input image data using the object recognition apparatus, S22-S24, performs the class classification based on the extracted features, and outputs a result classifying the input data sets; the apparatus (model) includes a feature extraction unit extracting the features and a feature identification unit (class classifier) that performs the class classification and outputs the result)
calculating a classification loss representing differences between the output results and labels corresponding to the input samples; (Fig. 2, Fig. 4, Fig. 7 [0065]-[0070] [0076]-[0082] [0104]-[0105] calculating a classification loss that represents the differences between the output identification results and the correct answer labels corresponding to the input data sets)
calculating another loss based on a pair of features extracted from a pair of the input samples belonging to the same class; (Fig. 2, Fig. 3, [0062]-[0082] [0093]-[0099] calculating a loss based on the common features, which inherently belong to the same class, from the input pairs to the common features 12c; for example, a new vehicle in class c as part of the input to the two feature extractors, "the same class is inferred to have the highest confidence score in the two class classification results")
and
updating the parameters of the target model based at least on the another loss to increase similarity of an updated pair of features extracted from the pair of the input samples belonging to the same class. (Fig. 2, Fig. 4, Fig. 7 [0062]-[0071] [0076]-[0082] [0104]-[0105] backpropagation updates the parameters of the class classifier and the feature extractor of the model based on the losses, including a loss (La) based on the common features, which inherently belong to the same class, from the input pairs, to increase similarity between the pair of extracted features and the output identification results for the input pairs)
But Namiki fails to explicitly disclose "the method of transfer learning from a source model to a target model of a transfer learning apparatus, calculating a Sample-Based Regularization (SBR) loss that regularizes intra-class feature similarity by at least determining distance between metric between feature vectors of the pair of features related to a target task of the target model; repeatedly updating parameters of the target model based at least on the classification loss to reduce the difference between the output results and the labels, and the SBR loss to increase similarity between the features."
Kolouri discloses the method of transfer learning from a source model to a target model of a transfer learning apparatus, ([0005] [0055]-[0058] a method of transfer learning from a model of a source domain to a model of a target domain of a transfer learning apparatus)
calculating a Sample-Based Regularization (SBR) loss that regularizes intra-class feature similarity by at least determining distance between metric between feature vectors of the pair of features related to a target task of the target model; ([0055]-[0058] [0071]-[0077] [0105]-[0112] minimizing the sample-based loss using λ, a regularization parameter, to minimize the dissimilarity measure, that is, to increase the class-specific feature similarity, as determined by a distance metric between the feature vectors of the pair of features related to a learning task of the target model)
repeatedly updating parameters of the target model based at least on the classification loss to reduce the difference between the output results and the labels, and the SBR loss to increase similarity between the features. ([0025]-[0037] [0100]-[0112] iteratively updating the parameters of the model based on the classification loss to minimize the dissimilarity between the outputs and the labels, and on the sample-based loss with a regularization parameter to increase similarity between the features)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kolouri's transfer learning model into Namiki's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Kolouri's calculation of a sample-based regularization loss would provide additional loss functions to Namiki's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing additional loss functions when training an ML model would help to improve prediction accuracy and training efficiency.
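As an illustrative aside only (not drawn from Namiki or Kolouri, and not part of the record), a combined objective of the kind mapped above — a classification loss plus a λ-weighted regularizer over a same-class feature pair — could be sketched as follows; all function names and values are hypothetical:

```python
import math

def cross_entropy(probs, label):
    # Classification loss: negative log-probability of the correct class.
    return -math.log(probs[label])

def sbr_loss(feat_a, feat_b):
    # Sample-based regularization term: squared Euclidean distance
    # between feature vectors of a same-class sample pair.
    return sum((a - b) ** 2 for a, b in zip(feat_a, feat_b))

def total_loss(probs, label, feat_a, feat_b, lam=0.1):
    # Combined objective: classification loss plus a lambda-weighted
    # intra-class feature-similarity regularizer.
    return cross_entropy(probs, label) + lam * sbr_loss(feat_a, feat_b)

# A same-class pair with similar features contributes a small penalty.
combined = total_loss([0.7, 0.2, 0.1], 0, [1.0, 2.0], [1.1, 2.1], lam=0.1)
print(combined)
```

Minimizing such a combined loss pulls same-class feature vectors together while still fitting the labels, consistent with the claim mapping above.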
But Namiki and Kolouri fail to explicitly disclose “repeatedly updating the parameters until the classification loss or the SBR loss converges to a predetermined threshold.”
Li discloses repeatedly updating the parameters until the classification loss or the SBR loss converges to a predetermined threshold. ([0016]-[0022] [0047]-[0050] training is repeated until it converges, where the classification loss reaches a desired threshold minimum of loss)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Li's meta-learning model into Kolouri and Namiki's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Li's training stop criteria would provide additional training control to Kolouri and Namiki's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing training stopping criteria would help to improve prediction accuracy and training efficiency.
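As an illustrative aside (hypothetical sketch, not taken from Li), the stopping criterion mapped above — repeating updates until a monitored loss reaches a predetermined threshold — can be expressed as a simple loop:

```python
def train_until_converged(step_fn, threshold=1e-3, max_iters=10_000):
    # Repeat parameter updates until the monitored loss reaches the
    # predetermined threshold (with an iteration cap as a safeguard).
    loss = float("inf")
    iters = 0
    while loss > threshold and iters < max_iters:
        loss = step_fn()  # one update; returns the current loss
        iters += 1
    return loss, iters

# Toy step function: the loss halves on each update.
state = {"loss": 1.0}
def step():
    state["loss"] *= 0.5
    return state["loss"]

final_loss, iters = train_until_converged(step, threshold=0.01)
print(final_loss, iters)
```

With the toy step above, the loop stops after seven updates, once the loss first falls at or below 0.01.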
In regard to claim 2, Namiki, Kolouri, and Li disclose the method of claim 1; the rejection is incorporated herein.
Namiki discloses further including: reducing gradient due to the classification loss by multiplying a hyper-parameter using a gradient reduction layer during backward propagation of the gradient toward the feature extractor. ([0066]-[0070] [0107]-[0110] using a GRL layer at the time of the backpropagation of the gradient toward the feature extractor to reduce the gradient due to the classification loss by updating the parameters of the class classifier and the feature extractors (multiplying a hyper-parameter, i.e., a weight; it is well known to a person of ordinary skill in the art to adjust the weight of a layer of the model to update the parameters))
In regard to claim 5, Namiki, Kolouri, and Li disclose the method of claim 1; the rejection is incorporated herein.
Namiki discloses wherein the updating the parameters updates the parameters of the classifier based on the classification loss (Fig. 2, Fig. 4, Fig. 7 [0065]-[0070] [0076]-[0082] [0104]-[0105] updating the parameters of the class classifier based on the classification loss) and updates the parameters of the feature extractor based on the classification loss and the SBR loss. (Fig. 2, Fig. 4, Fig. 7 [0065]-[0070] [0076]-[0082] [0104]-[0105] updating the parameters of the feature extractor of the model based on the losses, Lt, Lc, Ls, etc.)
In regard to claims 7-8, claims 7-8 are apparatus claims corresponding to the method claims 1-2 above and, therefore, are rejected for the same reasons set forth in the rejections of claims 1-2.
In regard to claim 10, claim 10 is an apparatus claim corresponding to the method claim 1 above and, therefore, is rejected for the same reasons set forth in the rejections of claim 1.
In regard to claim 11, claim 11 is a medium claim corresponding to the method claim 1 above and, therefore, is rejected for the same reasons set forth in the rejections of claim 1.
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Namiki et al. (Namiki) US 2022/0343631, Kolouri et al. (Kolouri) US 2020/0130177, and Li et al. (Li) US 2020/0019758 as applied to claim 1, and further in view of Shin et al. (Shin) US 2020/0160212.
In regard to claim 3, Namiki, Kolouri, and Li disclose the method of claim 1; the rejection is incorporated herein.
But Namiki and Kolouri, Li fail to explicitly disclose “wherein the target model is implemented based on a deep neural network and initialized using a structure and parameters of a pre-trained, deep neural network-based source model, wherein parameters of the feature extractor are initialized based on the parameters of the source model, and parameters of the classifier are initialized to random values.”
Shin discloses wherein the target model is implemented based on a deep neural network and initialized using a structure and parameters of a pre-trained, deep neural network-based source model, ([0009]-[0018] [0037]-[0056] [0088] a deep learning model with a deep NN, initialized using a structure and parameters of a pre-trained source model) wherein parameters of the feature extractor are initialized based on the parameters of the source model, and parameters of the classifier are initialized to random values. ([0009]-[0021] [0043]-[0056] a feature map of the pre-trained model is input to the model, and a random model structure and dataset are used for the model)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Shin's ML source model into Li, Kolouri, and Namiki's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Shin's ML source model with its target-model initialization method would provide an additional model initialization method to Li, Kolouri, and Namiki's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an additional model initialization method when training an ML model would help to improve transfer learning efficiency.
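As an illustrative aside (a hypothetical sketch, not taken from Shin), the initialization scheme recited in claim 3 — feature extractor copied from the pre-trained source model, classifier initialized to random values — can be expressed as:

```python
import random

def init_target_model(source_extractor_params, num_classes, feat_dim):
    # Feature extractor: copy the pre-trained source model's parameters.
    extractor = [list(layer) for layer in source_extractor_params]
    # Classifier: fresh random weights for the new target task.
    classifier = [[random.uniform(-0.1, 0.1) for _ in range(feat_dim)]
                  for _ in range(num_classes)]
    return extractor, classifier

source = [[0.5, -0.2], [0.3, 0.8]]   # hypothetical pre-trained extractor weights
extractor, classifier = init_target_model(source, num_classes=3, feat_dim=2)
print(extractor == source, len(classifier))
```

The extractor starts from the source task's learned representation, while the classifier carries no bias from the source task's label space.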
In regard to claim 9, claim 9 is an apparatus claim corresponding to method claim 3 above and, therefore, is rejected for the same reasons set forth in the rejection of claim 3.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Namiki et al. (Namiki) US 2022/0343631, Kolouri et al. (Kolouri) US 2020/0130177, and Li et al. (Li) US 2020/0019758 as applied to claim 1, and further in view of Li et al. (Li2) US 2020/0043504.
In regard to claim 6, Namiki, Kolouri, and Li disclose the method of claim 1; the rejection is incorporated herein.
Namiki discloses a distance metric between an output result of the feature extractor for an input sample included in a mini-batch for the same class and an average of output results of the feature extractor for all input samples included in the mini-batch. ([0062]-[0082] [0093]-[0099] a distance between the output of the feature extractor for an input sample in the mini-batch for the same class and the output of the feature extractor across the mini-batch, "the same class is inferred to have the highest confidence score in the two class classification results")
But Namiki, Kolouri, and Li fail to explicitly disclose "the distance metric is Euclidean distance."
Li2 discloses the distance metric is Euclidean distance. ([0089]-[0101] [0122]-[0132] the metric is Euclidean distance)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Li2's feature extractor and classifier training model into Li, Kolouri, and Namiki's invention, as they are related to the same field of endeavor of model training and learning. The motivation to combine these references, as proposed above, is at least that Li2's feature extractor and classifier training model with its loss calculation would provide a loss calculation method to Li, Kolouri, and Namiki's system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing such a loss calculation method in training would help to improve feature identification efficiency.
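As an illustrative aside (hypothetical names and values; not taken from any cited reference), the claim 6 limitation — the Euclidean distance between a sample's extracted features and the average of the feature-extractor outputs over a same-class mini-batch — can be sketched as:

```python
import math

def class_mean(features):
    # Mean feature vector over all same-class samples in the mini-batch.
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

def euclidean_distance(a, b):
    # Euclidean distance metric between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

batch = [[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]]  # same-class mini-batch features
mean = class_mean(batch)
dist = euclidean_distance(batch[0], mean)
print(mean, dist)
```

Penalizing this per-sample distance to the class mean draws same-class features toward a common centroid.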
Response to Arguments
Applicant’s arguments with respect to claims 1-3, 5-11 filed on 2/3/2026 have been considered but are moot because the arguments do not apply to the current rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.
PATENT PUB. #      PUB. DATE   INVENTOR(S)  TITLE
US 20190138860 A1  2019-05-09  Liu et al.   FONT RECOGNITION USING ADVERSARIAL NEURAL NETWORK TRAINING
Liu et al. disclose a font recognition system that employs a multi-task learning framework and adversarial training to improve font classification and remove negative side effects caused by intra-class variances of glyph content. For example, in one or more embodiments, the font recognition system adversarially trains a font recognition neural network by minimizing font classification loss while at the same time maximizing glyph classification loss. By employing an adversarially trained font classification neural network, the font recognition system can improve overall font recognition by removing the negative side effects from diverse glyph content (see abstract).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA whose telephone number is (571)270-3045. The examiner can normally be reached Monday-Friday 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
XUYANG XIA
Primary Examiner
Art Unit 2143
/XUYANG XIA/Primary Examiner, Art Unit 2143