Prosecution Insights
Last updated: April 19, 2026
Application No. 18/478,852

IMAGE RECOGNITION MODEL TRAINING METHOD AND APPARATUS

Final Rejection (§101, §103)

Filed: Sep 29, 2023
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 8m
Grant Probability With Interview: 98%
Examiner Intelligence

Career Allow Rate: 69% (above average; 385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% (strong), measured over resolved cases with interview
Typical Timeline: 3y 8m average prosecution; 34 applications currently pending
Career History: 591 total applications across all art units
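The dashboard's headline figures are internally consistent; a quick check in Python, using only the numbers quoted above (the implied Tech Center average is derived, not stated):

```python
# Consistency check of the examiner statistics quoted above; every input
# comes from the dashboard itself, nothing here is new data.
granted, resolved, pending, total = 385, 557, 34, 591

allow_rate = granted / resolved            # career allowance rate
implied_tc_avg = allow_rate - 0.071        # dashboard lists "+7.1% vs TC avg"

assert resolved + pending == total         # 557 resolved + 34 pending = 591 filed
assert round(allow_rate * 100) == 69       # displayed as "69%"
print(f"allow rate {allow_rate:.1%}, implied TC average {implied_tc_avg:.1%}")
```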

Statute-Specific Performance

§101: 16.5%  (-23.5% vs TC avg)
§103: 40.3%  (+0.3% vs TC avg)
§102: 24.6%  (-15.4% vs TC avg)
§112: 13.6%  (-26.4% vs TC avg)

Chart legend: black line = Tech Center average estimate. Based on career data from 557 resolved cases.

Office Action

§101 §103
DETAILED ACTION

Summary of rejections:

1. Claims 1, 10 and 11, 19 are rejected under 35 U.S.C. 103 as unpatentable over CAO et al. (CN 112800467 A) with machine translation (of the previous Office action) plus machine translation II, in view of Qiu et al. (Iterative Teaching by Data Hallucination) and PAN et al. (WO 2022/222458 A1) with machine translation.

2. Claims 4, 5 and 13 are rejected under 35 U.S.C. 103 over CAO/Qiu/PAN as applied to claims 1, 10 and 11, 19, further in view of HU (US 2024/0078438 A1).

3. Claim 14 is rejected under 35 U.S.C. 103 over CAO/Qiu/PAN and HU as applied to claims 4, 5 and 13, further in view of GAO et al. (CN 114998592 A) with machine translation.

4. Claims 1, 10 and 11, 19 are additionally rejected under 35 U.S.C. 103 over CAO (with machine translations) in view of Qiu, further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation), Zhang et al. (mixup: Beyond Empirical Risk Minimization), and PAN, as applied in the primary rejection of claims 1, 10 and 11, 19.

5. Claims 4, 5 and 13 are additionally rejected under 35 U.S.C. 103 over the CAO/Qiu/Zhu/Zhang/PAN combination as applied to claims 1, 10 and 11, 19, further in view of HU.

6. Claim 14 is additionally rejected under 35 U.S.C. 103 over CAO/Qiu/Zhu/Zhang/PAN and HU as applied to claims 4, 5 and 13, further in view of GAO.

7. Claims 6, 7, 8, 9 and 15, 16, 17, 18 are additionally rejected under 35 U.S.C. 103 over CAO/Qiu/Zhu/Zhang/PAN as applied to claims 1, 10 and 11, 19, further in view of WANG et al. (CN 115497141 A) with machine translation.

8. Claims 7 and 16 are additionally rejected under 35 U.S.C. 103 over CAO/Qiu/Zhu/Zhang/PAN and WANG as applied to claims 6-9 and 15-18, further in view of BEN HU et al. (CN 112801883 A) with machine translation.

9. Claims 8, 9 and 17, 18 are additionally rejected under 35 U.S.C. 103 over CAO/Qiu/Zhu/Zhang/PAN, WANG, and BEN HU as applied above, further in view of Yuan et al. (Multiview Scene Image Inpainting Based on Conditional Generative Adversarial Networks).

Claims 2, 3 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 21 is allowed.

Response to Amendment

The amendment was received 1/15/2026. Claim 20 is canceled; claims 1-19 and 21 are pending.

[Image: media_image1.png]
[Image: Priority (media_image2.png)]
[Image: Primary Rejection (media_image3.png)]
[Image: Backup Rejection (media_image4.png)]

35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 and 21 are not rejected under 35 U.S.C. 101 because the claimed invention is directed to an improvement in the computer field, not to an abstract idea without significantly more (streamlined analysis), per applicant's disclosure:

TECHNICAL FIELD

[0001] Implementations of the present specification generally relate to the field of artificial intelligence technologies, and in particular, to an image data processing method and apparatus, an image recognition model training method and apparatus, and an image recognition method and apparatus.

[0046] A federated learning solution based on mixed data desensitization is provided according to the implementations of the present specification. In the federated learning solution, when training an image recognition model by using local training sample image data, a first member device first performs data desensitization processing on the training sample image data based on frequency domain transform, then performs image mixing on the desensitized image data by using an image mixing processing method based on, e.g., Mixup data augmentation, and subsequently trains the image recognition model by using the desensitized image data that is image mixing processed, thereby improving the data privacy protection capability in federated learning. In addition, during image recognition model training, a hyperparameter selection model is further used to adaptively select an appropriate image mixing parameter (e.g., a number of images participating in image mixing) based on first desensitized image data, so as to ensure not only that a plurality of pieces of desensitized image data can be fused in an image recognition model training process, but also that model training performance is not significantly affected.
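Paragraph [0046] describes a two-step pipeline: frequency-domain desensitization of each training image, then parameterized mixing of the desensitized images and their labels. A minimal sketch of that pipeline, assuming an FFT low-pass as a stand-in for the unspecified frequency-domain transform (the references discuss wavelets) and Dirichlet weights for N-way mixing; all function names and constants here are illustrative, not from the specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def desensitize(img, keep=0.25):
    """Illustrative frequency-domain desensitization: transform to the
    frequency domain, zero the high-frequency coefficients, invert.
    (The spec only says "frequency domain transform"; this FFT low-pass
    is an assumed stand-in, not the applicant's actual method.)"""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(f)
    dh, dw = int(h * keep / 2), int(w * keep / 2)
    mask[h // 2 - dh:h // 2 + dh, w // 2 - dw:w // 2 + dw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def mix_images(images, labels, n_mix):
    """Mix n_mix desensitized images (and their one-hot labels) with
    random convex weights; n_mix plays the role of claim 1's first
    hyperparameter, "a number of images participating in the image
    mixing processing"."""
    idx = rng.choice(len(images), size=n_mix, replace=False)
    w = rng.dirichlet(np.ones(n_mix))          # convex weights summing to 1
    mixed_x = sum(wi * images[i] for wi, i in zip(w, idx))
    mixed_y = sum(wi * labels[i] for wi, i in zip(w, idx))
    return mixed_x, mixed_y

# Toy batch: four 8x8 "images", four one-hot labels.
imgs = [rng.random((8, 8)) for _ in range(4)]
labels = list(np.eye(4))
desens = [desensitize(im) for im in imgs]
x2, y2 = mix_images(desens, labels, n_mix=2)   # "second" desensitized image/label data
assert np.isclose(y2.sum(), 1.0)               # mixed label stays a distribution
```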
Response to Arguments

Rejections Under 35 USC 101: Applicant's arguments, see remarks, page 12, filed 1/15/2026, with respect to 35 USC 101 have been fully considered and are persuasive. The 35 USC 101 rejection of claim 20 has been withdrawn.

Rejections Under 35 USC 103: Applicant's arguments, see remarks, pages 13-14, filed 1/15/2026, with respect to the rejections of claims 1, 10, 11, 19 and 20 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made under 35 USC 103:

Claims 1, 10 and 11, 19 are rejected under 35 U.S.C. 103 as unpatentable over CAO et al. (CN 112800467 A) with machine translation (of the previous Office action) plus machine translation II, in view of Qiu et al. (Iterative Teaching by Data Hallucination) and PAN et al. (WO 2022/222458 A1) with machine translation. Claims 1, 10 and 11, 19 are additionally rejected under 35 U.S.C. 103 over CAO in view of Qiu, further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation), Zhang et al. (mixup: Beyond Empirical Risk Minimization), and PAN, as applied in the primary rejection of claims 1, 10 and 11, 19, wherein PAN teaches each member hospital (client "1", "2", ... "N") locally training a model:

[Image: media_image5.png]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 10 and 11, 19 are rejected under 35 U.S.C. 103 as unpatentable over CAO et al. (CN 112800467 A) with machine translation (of the previous Office action) plus machine translation II, in view of Qiu et al. (Iterative Teaching by Data Hallucination) and PAN et al. (WO 2022/222458 A1) with machine translation.

MPEP 904.03, Conducting the Search [R-07.2022], 4th paragraph: the best reference should always be the one used in rejecting the claims. Sometimes the best reference (here, Qiu et al.) will have a publication date (12 Apr 2023) less than a year prior to the application filing date (29 September 2023), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference exists which cannot be so overcome [Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation), 07 June 2022, and Zhang et al. (mixup: Beyond Empirical Risk Minimization), 27 Apr 2018] and which, though inferior, is an adequate basis for rejection, the claims should be additionally rejected thereon.

[Image: media_image6.png]
[Image: media_image7.png]

Re claim 1: CAO teaches An image recognition model training method, the method comprising: iteratively performing, by a first member ("client" "computing", pg. 5, 8th txt blk) device having local ("user", pg. 6, 9th txt blk) training data, a model training process ("for subsequent training of the model", pg. 7, last txt blk), the (subsequent) model training processing including: obtaining current training sample image ("form", pg. 5, 13th txt blk) data (serving as a specimen of a person, animal, or thing, photographed, painted, sculptured, or otherwise made visible) and label ("desensitization information…as the electronic label", pg. 9, 1st txt blk) data (i.e., desensitization information = label) of the current training sample image (form) data; performing data desensitization processing (obtaining said desensitization info label) on the current training sample image (form) data based on frequency domain ("wavelet", pg. 7, 11th txt blk) transform to obtain first desensitized image (label) data of the current training sample image (form) data; performing image mixing processing (resulting in "combined" "image form data", pg. 8, 7th txt blk) on the first (wavelet) desensitized image data (or "non-sensitive data", pg. 8, 7th txt blk) based on mixup data augmentation by using a first hyperparameter to obtain second (combined) desensitized image data and second (combined) label data that is label mixing processed (i.e., combined) and corresponding (i.e., equivalent) to the second desensitized (combined) image data, the first hyperparameter indicating a number of images (or two different images comprised by "different image form data", pg. 8, 5th txt blk) participating in the image mixing processing (resulting in the combined image form data); locally training (subsequently), by the first member device (via "at least one processor"-"client end", machine translation II, pg. 12, 10th txt blk), a current image recognition model ("identification", pg. 5, 11th txt blk) by using the second desensitized (combined) image (form) data and the second (combined) label data (i.e., desensitization information = label); and providing a local ("online training", machine translation II, pg. 7, 4th txt blk) model ("updating") training result (or a determined "updating time" "time stamp", pg. 7, 5th txt blk; "realizing the online training of the", pg. 7, 4th txt blk; the identification model is updating) of the locally training to a second ("cloud", pg. 7, 8th txt blk) member device configured to maintain (via "cloud storage", pg. 10, 4th txt blk) an (identification) image recognition model, for the second member (cloud) device to update the (identification) image recognition model by using local ("online training", machine translation II, pg. 7, 4th txt blk) model training results (resulting in said determined training stamp and "timestamp" "compared" "summary information", pg. 7, 8th txt blk) from a plurality of first (client) member ("combination", pg. 14, 3rd txt blk) devices including the local model training result of the first member (client) device (comprising said device combination); and receiving an updated (identification) image recognition model from the second member (cloud) device for a next (subsequent) round of model (time-stamp) training.

CAO does not teach the following differences of claim 1:
A) iteratively (performing, by a first member device having local training data, a model training process)…
B) mixup data augmentation by using a first hyperparameter…
C) the first hyperparameter indicating (a number)…
D) locally (training), by (the first member device, a current image recognition model by using the second desensitized image data and the second label data)…
E) the locally (training) (the current image recognition model).

Qiu teaches the differences of claim 1: ("The entire process is") iteratively ("executed", 4.2 Performative Teaching, 2nd para, 4th sentence) (performing, by a first member device having local training data, a model training process)… mixup data augmentation (via a "Mixup" "data augmentation space", 4.1 Mixup-based Teaching; corresponds to fig. 2: "Representation Space") by using a first hyperparameter ("to parameterize the teacher by…a hyperparameter", 3.3 Parameterized Teaching Policy & 3.3.1 Data Transformation)… the first (teacher) hyperparameter (α) indicating (cats with cat names: fig. 1(b)) (a number)… locally training, by (the first member device, a current image recognition model by using the second desensitized image data and the second label data)… the locally (training) (the current image recognition model).

[Image: media_image8.png]

Since CAO teaches training, one of skill in the art of training can make CAO's be as Qiu's:

[Image: media_image9.png]

predictably recognizing the change "achieving significant empirical performance gain" (Qiu, 2nd page, left col, last sentence), or a gain in performative DHT (Data Hallucination Teaching) accuracy, or a gain in the performance of CAO's deep learning, via Qiu's Table 2:

[Image: media_image10.png]

CAO of the combination of CAO/Qiu does not teach the remaining difference of claim 1 of: locally training, by (the first member device, a current image recognition model by using the second desensitized image data and the second label data)… the locally (training) (the current image recognition model).
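For reference, the formulation the Qiu/Zhang mixup discussion turns on draws λ ~ Beta(α, α) and interpolates both images and labels (Zhang et al.'s published mixup). Note the distinction: claim 1's "first hyperparameter" counts the images participating in mixing, whereas α here is mixup's interpolation hyperparameter. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """mixup (Zhang et al.): draw lambda ~ Beta(alpha, alpha) and form
    convex combinations of two examples and their one-hot labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy example: two 8x8 "images" with one-hot labels for two classes.
xa, xb = rng.random((8, 8)), rng.random((8, 8))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
xm, ym = mixup(xa, ya, xb, yb, alpha=0.2)
assert np.isclose(ym.sum(), 1.0)  # mixed label remains a probability vector
```

With small α (e.g. 0.2), Beta(α, α) concentrates near 0 and 1, so most mixed examples stay close to one of the two originals, which is the regime Zhang et al. report as working well.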
PAN teaches the remaining difference of claim 1, in the context of desensitization (a sensitivity/privacy problem) as faced by applicants. PAN, pg. 2, penultimate txt blk:

--The data desensitization module 2 is used to desensitize each image data to remove the private information contained in each image data. Among them, the data desensitization module 2 specifically includes a privacy information extraction sub-module and a removal sub-module; the privacy information extraction sub-module is used to extract the OCR text for each image data to obtain the privacy information contained in each image data. Privacy information includes but is not limited to patient name, patient number, image number and hospital name; the removal sub-module is used to remove the private information contained in each image data.--

("each hospital private task model is", pg. 3, 7th txt blk) locally training (resulting in "trained locally", pg. 3, 7th txt blk), by (a same correlated hospital members set via "this hospital and…other hospitals…with the same system", page 5, 3rd txt blk) (the first member device, a current image recognition model by using the second desensitized image data and the second label data)… the locally (training) (the current image recognition model).

Since CAO of the combination of CAO/Qiu teaches desensitization as faced by applicants, one of skill in the art of desensitization can make CAO's of the combination of CAO/Qiu be as PAN's, seeing that the change "solves the problem that most of the existing models cannot be updated based on the latest clinical data after the construction is completed, solves the safety problem that may be caused by using the data of each hospital to train the artificial intelligence model at the same time, and solves the existing medical image-oriented model. It is not suitable for multiple tasks and multiple image types, that is, the problem of no generality, which solves the problem that the existing federated learning methods in the medical field rarely make full use of unlabeled data." (PAN, pg. 7, 4th txt blk)

[Image: media_image11.png]
[Image: media_image12.png]

Re claim 10 (Currently Amended): CAO of the combination of CAO/Qiu/PAN teaches The method according to claim 1, further comprising: aggregating, by a second member (cloud) device, local model ("online training", machine translation II, CAO, pg. 7, 4th txt blk) training results ("continuously executing the subsequent update process on the local model", pg. 7, 8th txt blk; "to it is the latest updated model", pg. 10, 10th txt blk) of the current (identification) image recognition (deep learning) model received from at least two first member (combination) devices including the first member device (comprising said combination devices) to (subsequently) update the current image recognition (deep learning) model, and sending (from the cloud) an updated (and time-stamped) image recognition model to the at least two first member (client) devices to perform (said subsequent) local model training.

Claim 11 is rejected like claim 1. Re claim 11 (Currently Amended): CAO of the combination of CAO/Qiu/PAN teaches A computing system comprising: at least one processor (CAO: figs. 6, 7); at least one storage device (CAO: figs. 6, 7) coupled to the at least one processor; and a computer program (CAO: figs. 6, 7) stored in the at least one storage device, which when executed by the at least one processor, enable the at least one processor to, individually or collectively, implement acts including: iteratively (or performatively) performing, by a first member (client) device having local training data, a model training process, the model training processing including: obtaining current training sample image data (i.e., a training image) and (cat) label data of the current training sample image data; performing data desensitization processing (obtaining said cat label) on the current training sample (cat) image data based on frequency domain (wavelet) transform to obtain first desensitized image data (including the associated/desensitized cat label) of the current training sample (cat) image data; performing image mixing (or information combining) processing on the first desensitized (cat) image data (into one image form data) based on mixup data augmentation (space) by using a first hyperparameter (α) to obtain second (one) desensitized (combined) image (form) data and second label data (since the one desensitized form cat image data is in the same function, status, or role of, i.e., as, the cat label) that is label mixing processed and corresponding to the second desensitized image data, the first hyperparameter (α) indicating a number of (cat) images participating in the image mixing processing (in said Mixup augmentation space); locally training, by the first member device, a current (identification) image recognition model by using the second (combined) desensitized (cat) image data and the second label data (being in the same function, status, or role of the desensitized cat image); and providing a local model training (updating-time-stamp) result of the locally training the current (identification) image recognition model to a second member (cloud) device configured to maintain (i.e., store) an (identification) image recognition model, for the second member (cloud) device to update (via a time-stamp) the (identification) image recognition model by using local model training results (or a determined time and compared times) from a plurality of first member (client) devices including the local model training result of the first member (client) device (including other combined devices); and receiving an updated image recognition model from the second member (cloud) device for a next (subsequent) round of model training.

Claim 19 is rejected like claim 10. 19. (Currently Amended) The computing system according to claim 11, wherein the acts further include: aggregating, by a second member device, local model training results of the current image recognition model received from at least two first member devices including the first member device to update the current image recognition model, and sending an updated image recognition model to the at least two first member devices to perform local model training.

Claims 4, 5 and 13 are rejected under 35 U.S.C. 103 as unpatentable over CAO et al. (CN 112800467 A) with machine translation (of the previous Office action) plus machine translation II, in view of Qiu et al. (Iterative Teaching by Data Hallucination) and PAN et al. (WO 2022/222458 A1) with machine translation, as applied in claims 1, 10 and 11, 19, further in view of HU (US 2024/0078438 A1):

[Image: media_image13.png]

Re claim 4 (Currently Amended): CAO of the combination of CAO/Qiu/PAN teaches The method according to claim 1, wherein the providing the (update) local model training result of the current (identification) image recognition model to the second member (cloud) device includes: providing the (updated) local model training result of the current (identification) image recognition model to the second member (cloud) device in response to that a second determined threshold is satisfied.
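The aggregation recited in claims 10/19 and the round-interval sending condition of claims 5/13/14 follow the familiar federated-averaging pattern. A minimal sketch, illustrative only (not the applicant's or any cited reference's actual implementation):

```python
import numpy as np

def aggregate(local_results):
    """Second-member-device role (claims 10/19): combine local model
    training results from at least two first member devices; simple
    unweighted averaging (FedAvg-style) is assumed here."""
    return np.mean(np.stack(local_results), axis=0)

def should_send(current_round, last_sent_round, interval_threshold):
    """Claim 5's condition: the round interval since the previous local
    training result was sent reaches a second threshold number of rounds."""
    return current_round - last_sent_round >= interval_threshold

# Two clients' weight vectors, averaged by the "cloud" member device.
w_a, w_b = np.array([1.0, 3.0]), np.array([3.0, 5.0])
assert np.allclose(aggregate([w_a, w_b]), [2.0, 4.0])

# A client sends only once enough training rounds have elapsed.
assert should_send(current_round=7, last_sent_round=4, interval_threshold=3)
assert not should_send(current_round=6, last_sent_round=4, interval_threshold=3)
```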
CAO of the combination of CAO/Qiu/PAN does not teach "in response to that a second determined threshold is satisfied". HU teaches: in response to that a second determined threshold is satisfied (via [0047]):

[Image: media_image14.png]

Thus one of skill in the art of clients can make CAO's of the combination of CAO/Qiu/PAN be as HU's, predictably recognizing the change increasing "the accuracy of learning models" (HU, [0054], 2nd sentence).

Re claim 5 (Currently Amended): CAO of the combination of CAO/Qiu/PAN/HU teaches The method according to claim 4, wherein the second determined (increasing accuracy) threshold comprises: a round interval between a current number of training rounds of the image recognition model and a number of training rounds of the image recognition model when a previous local training result was sent reaches a second threshold number of rounds.

Claim 13 is rejected like claim 4. Re claim 13 (Currently Amended): CAO of the combination of CAO/Qiu/PAN/HU teaches The computing system according to claim 11, wherein the providing the local model training result of the current image recognition model to the second member device includes: providing the local model training result of the current image recognition model to the second member device in response to that a second determined threshold is satisfied.

Claim 14 is rejected under 35 U.S.C. 103 as unpatentable over CAO/Qiu/PAN as applied in claims 1, 10 and 11, 19, further in view of HU (US 2024/0078438 A1) as applied in the rejection of claims 4, 5 and 13, further in view of GAO et al. (CN 114998592 A) with machine translation:

[Image: media_image3.png]

Re claim 14 (Currently Amended): CAO of the combination of CAO/Qiu/PAN/HU teaches The computing system according to claim 13, wherein the second determined threshold (federated learning model update) comprises: a round interval between a current number of training rounds of the (identification) image recognition model and a number of training rounds of the (identification) image recognition model when a previous local training result (via said time-stamp) was sent (to the cloud) reaches a second threshold number of rounds.

CAO of the combination of CAO/Qiu/PAN/HU does not teach: a round interval between a current number of training rounds … and a number of training rounds … reaches a second threshold number of rounds. GAO teaches, in the context of structure, fig. 2: 120: a round interval (or "round" "period", pg. 11, 5th txt blk) between a current ("plurality", pg. 11, 5th txt blk) number of training rounds … and a (plural) number of training rounds … reaches a second ("preset", pg. 11, 5th txt blk) threshold number of rounds. Since Qiu of the combination of CAO/Qiu/PAN/HU teaches a threshold, one of skill in the art of thresholds can make Qiu's of the combination of CAO/Qiu/PAN/HU be as GAO's, predictably recognizing the change resulting in "better model performance" (GAO, pg. 11, 5th txt blk).

Claims 1, 10 and 11, 19 are additionally rejected under 35 U.S.C. 103 as unpatentable over CAO et al. (CN 112800467 A) with machine translation plus machine translation II, in view of Qiu et al. (Iterative Teaching by Data Hallucination), further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation) and Zhang et al. (mixup: Beyond Empirical Risk Minimization) and PAN et al. (WO 2022/222458 A1) with machine translation, as applied in the primary rejection of claims 1, 10 and 11, 19:

[Image: media_image16.png]

Claim 1 is rejected like claim 1 as rejected the first time, above. Re claim 1
(Currently Amended), Cao teaches An image recognition model training method, the method comprising: iteratively performing, by a first member (“client” “computing”, pg. 5, 8th txt blk) device having local (“user”, pg. 6, 9th txt blk) training data, a model training process (“for subsequent training of the model”, pg. 7, last txt blk), the (subsequent) model training processing including: obtaining current training sample5051 image5253 (“form”, pg. 5, 13th txt blk) data (serving as a specimen of a person, animal, or thing, photographed, painted, sculptured, or otherwise made visible) and label (“desensitization information…as the electronic label”54, pg. 9, 1st txt blk) data (i.e., desensitization information=label) of the current training sample image (form) data; performing data desensitization processing (obtaining said desensitization info label) on the current training sample image (form) data based on frequency domain (“wavelet”, pg. 7, 11th txt blk) transform to obtain first desensitized image (label) data of the current training sample image (form) data; performing image mixing processing (resulting in “combined” “image form data”, pg. 8, 7th txt blk) on the first (wavelet) desensitized image data (or “non-sensitive data”, pg. 8, 7th txt blk) based on mixup data augmentation by using a first hyperparameter to obtain second (combined) desensitized image data and second (combined) label data that is label mixing processed (i.e., combined) and corresponding (i.e., equivalent) to the second desensitized (combined) image data, the first hyperparameter indicating a number of images (or two different images comprised by “different image form data”, pg. 8, 5th txt blk) participating in the image mixing processing (resulting the combined image form data); locally training (subsequently), by the first member device, a current image recognition model (“identification”, pg. 
5, 11th txt blk) by using the second desensitized (combined) image (form) data and the second (combined) label data (i.e., desensitization information=label); and providing a local model (“updating”) training result (or a determined “updating time” “time stamp”, pg. 7, 5th txt blk, “realizing the online training of the”, pg. 7, 4th txt blk, identification model is updating) of the locally training the current (identification) image recognition model to a second (“cloud”, pg. 7, 8th txt blk) member device configured to maintain (via “cloud storage”, pg. 10, 4th txt blk) an (identification) image recognition model, for the second member (cloud) device to update the (identification) image recognition model by using local model training results (resulting in said determined training stamp and “timestamp” “compared” “summary information”, pg. 7, 8th txt blk) from a plurality of first (client) member (“combination”, pg. 14, 3rd txt blk) devices including the local model training result of the first member (client) device (comprising said device combination); and receiving an updated (identification) image recognition model from the second member (cloud) device for a next (subsequent) round of model (time-stamp) training. Cao does not teach the difference of claim 1 of: A) iteratively… B) mixup data augmentation by using a first hyperparameter… C) the first hyperparameter indicating… D) locally… E) locally. Qiu teaches the difference of claim 1, except for “locally”: A) (“The entire process is”) iteratively (“executed”, 4.2 Performative Teaching, 2nd para, 4th S)… B) mixup data augmentation (via a “Mixup” “data augmenta-tion space”, 4.1 Mixup-based Teaching: corresponds to fig. 2: “Representation Space”) by using a first hyperparameter (“to parameterize the teacher by…a hyperparameter”, 3.3 Parameterized Teaching Policy & 3.3.1 Data Transformation)… C) the first (teacher) hyperparameter (α) indicating (cats with cat names: fig. 1(b))… D) locally… E) locally…: . 
[figure media_image8.png omitted] Since Cao teaches training, one of skill in the art of training can make Cao’s be as Qiu’s: [figure media_image9.png omitted] predictably recognizing the change “achieving significant empirical performance gain”, Qiu, 2nd page, lcol, last S, or gain in performative DHT (Data Hallucination Teaching) accuracy or a gain in the performance of Cao’s deep learning via Qiu’s Table 2: [figure media_image10.png omitted] Additionally, Qiu of the combination of CAO, Qiu teaches in another alternative vanilla disclosure just difference A): A) iteratively (“iteratively”, 1st page, rcol)… B) mixup data augmentation by using a first hyperparameter… C) the first hyperparameter indicating… D) locally… E) locally… (via: [figure media_image17.png omitted]) Qiu of the combination of CAO, Qiu does not teach in the other alternative vanilla disclosure the remaining differences B) C) D) E) of the difference of claim 1 of: B) mixup data augmentation by using a first hyperparameter… C) the first hyperparameter indicating… D) locally… E) locally. Zhu teaches just difference B) of the difference of claim 1 of: B) mixup data augmentation (“Mixup” “data augmentation”, pg. 2959, rcol, 2nd para) by using a first hyperparameter… C) the first hyperparameter indicating… D) locally… E) locally. [figure media_image18.png omitted] Since Qiu of the combination of CAO, Qiu teaches mixup, one of skill in the art of mixup can make Qiu’s of the combination of Cao, Qiu be as Zhu’s, predictably recognizing the change “improve recognition performance”, Zhu, Abstract.
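For reference, the mixup augmentation at issue (Zhang's scheme, in which the hyperparameter α parameterizes a Beta-distributed mixing weight; α = 0.4 appears elsewhere in this record) can be sketched as follows. This is an illustrative sketch, not the applicant's claimed method; the function name and toy data are assumptions:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, seed=0):
    """Zhang-style mixup: blend two labeled samples with lam ~ Beta(alpha, alpha)."""
    lam = np.random.default_rng(seed).beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # mixed image
    y = lam * y1 + (1.0 - lam) * y2   # mixed (soft) label, mirroring the image mix
    return x, y

# Two toy "images" with one-hot labels for two classes.
a, b = np.zeros((4, 4)), np.ones((4, 4))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
xm, ym = mixup(a, ya, b, yb)
```

Under this reading, exactly two images participate in each mix, and the same weight is applied to images and labels, which is how a "first hyperparameter indicating a number of images participating in the image mixing processing" maps onto Zhang's two-sample interpolation.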
[figure media_image19.png omitted] [figure media_image20.png omitted] The combination of Cao, Qiu, Zhu does not teach the remaining difference of claim 1 of: B) using a first hyperparameter… C) the first hyperparameter indicating… D) locally… E) locally. Zhang teaches differences B) and C) of claim 1, except for differences D) and E): B) using a first hyperparameter (page 3, below)… C) the first hyperparameter indicating (page 3, below)… D) locally… E) locally. [figure media_image21.png omitted] Since Zhu of the combination of CAO, Qiu, Zhu teaches mixup, one of skill in the art of mixup can make Zhu’s (1)(2) of the combination of CAO, Qiu, Zhu be as Zhang’s, predictably recognizing the change “leads to improved performance”, pg. 5, 2nd, 1st S: [figure media_image22.png omitted] Zhu of the combination of CAO, Qiu, Zhu does not teach the last remaining difference of claim 1: D) locally… E) locally. PAN already teaches/makes obvious the last remaining difference of claim 1 in the previous/primary rejection of claim 1: D) locally… E) locally. Re 10. (Currently Amended), CAO of the combination of Cao, Qiu, Zhu, Zhang, PAN teaches The method according to claim 1, further comprising: aggregating, by a second member (cloud) device, local model training results (“continuously executing the subsequent update process on the local model”, pg. 7, 8th txt blk, “to it is the latest updated model”, pg.
10, 10th txt blk) of the current (identification) image recognition (deep learning) model received from at least two first member (combination) devices including the first member device (comprising said combination devices) to (subsequently) update the current image recognition (deep learning) model, and sending (from the cloud) an updated (& time stamped) image recognition model to the at least two first member (client) devices to perform (said subsequent) local model training. Claim 11 is rejected like claim 1: 11. (Currently Amended), CAO of the combination of Cao, Qiu, Zhu, Zhang, PAN teaches A computing system, comprising: at least one processor (Cao: fig. 6,7); at least one storage device (Cao: fig. 6,7) coupled to the at least one processor; and a computer program (Cao: fig. 6,7) stored in the at least one storage device, which when executed by the at least one processor, enable the at least one processor to, individually or collectively, implement acts including: iteratively (or performatively) performing, by a first member (client) device having local training data, a model training process, the model training processing including: obtaining current training sample image data (i.e., a training image) and (cat) label data of the current training sample image data; performing data desensitization processing (obtaining said cat label) on the current training sample (cat) image data based on frequency domain (wavelet) transform to obtain first desensitized image data (including associated/desensitized cat label) of the current training sample (cat) image data; performing image mixing (or information combining) processing on the first desensitized (cat) image data (into one image form data) based on mixup data augmentation (space) by using a first hyperparameter (α) to obtain second (one) desensitized (combined) image (form) data and second label data (since the one desensitized form cat image data is in the same function, status or role of--i.e., as--the cat
label) that is label mixing processed and corresponding to the second desensitized image data, the first hyperparameter (α) indicating a number of (cat) images participating in the image mixing processing (in said Mixup augmentation space); locally training, by the first member device, a current (identification) image recognition model by using the second (combined) desensitized (cat) image data and the second label data (being in the same function, status, or role of the desensitized cat image); and providing a local model training (updating-time-stamp) result of the locally training the current (identification) image recognition model to a second member (cloud) device configured to maintain (i.e., store) an image (identification) recognition model, for the second member (cloud) device to update (via a time-stamp) the (identification) image recognition model by using local model training results (or a determined time and compared times) from a plurality of first member (client) devices including the local model training result of the first member (client) device (including other combined devices); and receiving an updated image recognition model from the second member (cloud) device for a next (subsequent) round of model training. Claim 19 is rejected like claim 10: 19. (Currently) The computing system according to claim 11, wherein the acts further include: aggregating, by a second member device, local model training results of the current image recognition model received from at least two first member devices including the first member device to update the current image recognition model, and sending an updated image recognition model to the at least two first member devices to perform local model training. Claim(s) 4,5 and 13 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (CN 112800467 A) with machine translation in view of Qiu et al. (Iterative Teaching by Data Hallucination) further in view of Zhu et al.
(Imbalanced driving scene recognition with class focal loss and data augmentation) and Zhang et al. (mixup: BEYOND EMPIRICAL RISK MINIMIZATION) and PAN et al. (WO 2022/222458 A1) with machine translation as applied in the primary rejection of claims 1,10 and 11,19 further in view of HU (US 2024/0078438 A1): [figure media_image23.png omitted] Re 4. (Currently Amended), Cao of the combination of Cao, Qiu, Zhu, Zhang, PAN teaches The method according to claim 1, wherein the providing the local (update) model training result of the current (identification) image recognition model to the second member (cloud) device includes: providing the local (updated) model training result of the current (identification) image recognition model to the second member (cloud) device in response to that a second determined threshold is satisfied. Cao of the combination of Cao, Qiu, Zhu, Zhang does not teach “in response to that a second determined threshold is satisfied”. HU teaches: in response to that a second determined threshold is satisfied (via [0047]: [figure media_image14.png omitted]). Thus one of skill in the art of clients can make Cao’s of the combination of Cao, Qiu, Zhu, Zhang, PAN be as HU’s, predictably recognizing the change increasing “the accuracy of learning models”, Hu [0054] 2nd S. Re claim 5. (Currently Amended), Cao of the combination of Cao, Qiu, Zhu, Zhang, PAN, HU teaches The method according to claim 4, wherein the second determined (increasing accuracy) threshold comprises: a round interval between a current number of training rounds of the image recognition model and a number of training rounds of the image recognition model when a previous training result was sent reaches a second threshold number of rounds. Claim 13 is rejected like claim 4: 13.
(Currently Amended) The computing system according to claim 11, wherein the providing the model training result of the current image recognition model to the second member device includes: providing the model training result of the current image recognition model to the second member device in response to that a second determined threshold is satisfied. Claim(s) 14 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (CN 112800467 A) with machine translation in view of Qiu et al. (Iterative Teaching by Data Hallucination) further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation) and Zhang et al. (mixup: BEYOND EMPIRICAL RISK MINIMIZATION) and PAN et al. (WO 2022/222458 A1) with machine translation as applied in the primary rejection of claims 1,10 and 11,19 further in view of HU (US 2024/0078438 A1) as applied in the rejection of claims 4,5 and 13, further in view of GAO et al. (CN 114998592 A) with machine translation: [figure media_image24.png omitted] Re claim 14, CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, HU teaches The computing system according to claim 13, wherein the second determined threshold (federated learning model update) comprises: a round interval between a current number of training rounds of the (identification) image recognition model and a number of training rounds of the (identification) image recognition model when a previous training result (via said time-stamp) was sent (to the cloud) reaches a second threshold number of rounds. Cao of the combination of CAO, Qiu, Zhu, Zhang, PAN, HU does not teach: a round interval between a current number of training rounds … and a number of training rounds … reaches a second threshold number of rounds. GAO teaches in the context of structure, fig. 2:120: a round interval (or “round” “period”, pg. 11, 5th txt blk) between a current (“plurality”, pg.
11, 5th txt blk) number of training rounds … and a (plural) number of training rounds … reaches a second (“preset”, pg. 11, 5th txt blk) threshold number of rounds. Since Qiu of the combination of CAO, Qiu, Zhu, Zhang, PAN, HU teaches a threshold, one of skill in the art of thresholds can make Qiu’s of the combination of CAO, Qiu, Zhu, Zhang, PAN, HU be as GAO’s, predictably recognizing the change resulting in “better model performance”, Gao, pg. 11, 5th txt blk. Claim(s) 6,7,8,9 and 15,16,17,18 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (CN 112800467 A) with machine translation in view of Qiu et al. (Iterative Teaching by Data Hallucination) further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation) and Zhang et al. (mixup: BEYOND EMPIRICAL RISK MINIMIZATION) and PAN et al. (WO 2022/222458 A1) with machine translation as applied in the primary rejection of claims 1,10 and 11,19 further in view of WANG et al. (CN 115497141 A) with machine translation: MPEP 2120 I. CHOICE OF PRIOR ART; BEST AVAILABLE Prior art rejections should ordinarily be confined strictly to the best available art (WANG et al. (CN 115497141 A)). Exceptions may properly be made, for example, where: (A) the propriety of a 35 U.S.C. 102 or 103 rejection depends on a particular interpretation of a claim; (B) a claim is met by a prior art disclosure which does not disclose the inventive concept involved; (C) for cases examined under the first inventor to file provisions of the AIA, the most pertinent disclosure (WANG et al. (CN 115497141 A)) could be shown not to be prior art by invoking an exception in a 37 CFR 1.130 affidavit or declaration of attribution or prior public disclosure; (D) for cases examined under pre-AIA law, an obviousness rejection is based on prior art that qualifies only under pre-AIA 35 U.S.C.
102(e), (f), or (g) so that the rejection could be overcome by establishing that the prior art is disqualified under pre-AIA 35 U.S.C. 103(c); or (E) for cases examined under pre-AIA law, the most pertinent disclosure could be antedated by a 37 CFR 1.131 affidavit or declaration of prior invention. In the interest of compact prosecution, such rejections (claim 7 above) should be backed up by the best other art rejections available. Keep in mind the best backup rejection(s) could be based on alternate embodiments from the same "best available" reference(s) (WANG et al. (CN 115497141 A)). For example, if an anticipation rejection could be overcome by invoking an exception in a 37 CFR 1.130(b) declaration, it would be appropriate to make an additional obviousness rejection over another disclosure in the same reference (WANG et al. (CN 115497141 A)). Merely cumulative rejections, i.e., those which would clearly fall if the primary rejection (claim 7 above) were not sustained, should be avoided: [figure media_image25.png omitted] Re 6. (Original), CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN teaches The method according to claim 1, wherein the first hyperparameter is k (or 0.4), and a maximum weight coefficient for image mixing (to said one-form-image) is Wmax; and the performing image mixing processing (to said one-form-image) on the first (wavelet) desensitized image data based on (performative) data augmentation by using the first hyperparameter (α=0.4) includes: performing (k−1) times of scrambling processing on an image data set (i.e., “image form data” “domain”, pg. 7, 11th txt blk) of the first (wavelet) desensitized image data to obtain k image data sets (in the “frequency domain and time domain”, pg.
7, 11th txt blk); constructing an image hypermatrix with a size of m*k based on the k image (domain) data sets, wherein a first column in the image hypermatrix corresponds to the image (domain) data set of the first (wavelet) desensitized image data in an original form before the scrambling processing, and m is an amount of image data in the image (domain) data set; randomly generating a weight coefficient for each piece (or “part”, pg. 9, 2nd txt blk) of image data (comprised by “user data”, pg. 5, last txt blk) in the image hypermatrix; normalizing weight coefficients of the (user) image data in the image hypermatrix, so that a sum of weight coefficients of each row of (user) image data is 1, and the weight coefficient of each (part-)piece of (user) image data is not greater than Wmax; and performing weighted summation on each row of (user) image data in the image hypermatrix to obtain a mixed image hypermatrix with a size of m*1, wherein the (user) image data in the mixed image hypermatrix is (wavelet) desensitized (user) image data augmentation processed (performatively). CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN does not teach, as indicated in bold above: --a maximum weight coefficient … is Wmax… performing (k−1) times of scrambling processing… constructing an image hypermatrix with a size of m*k … a first column in the image hypermatrix corresponds to … before the scrambling processing, and m is an amount of … randomly generating a weight coefficient … in the image hypermatrix; normalizing weight coefficients … in the image hypermatrix, so that a sum of weight coefficients of each row … is 1, and the weight coefficient … is not greater than Wmax; and performing weighted summation on each row of … in the image hypermatrix to obtain a mixed image hypermatrix with a size of m*1, … in the mixed image hypermatrix …--.
WANG teaches, as indicated in bold above: --a maximum weight coefficient … is Wmax… performing (k−1) times of scrambling processing… constructing an image hypermatrix with a size of m*k … a first column in the image hypermatrix (pg. 3, 1st txt blk) corresponds to … before the scrambling processing, and m is an amount of … randomly generating (pg. 3, 1st txt blk) a weight coefficient … in the image hypermatrix; normalizing weight coefficients (via a “weight coefficient normalization module 1540”, pg. 21, 3rd txt blk) … in the image hypermatrix, so that a sum of weight coefficients of each row … is 1, and the weight coefficient … is not greater than Wmax; and performing weighted summation (pg. 21, 4th txt blk) on each row of … in the image hypermatrix to obtain a mixed image hypermatrix (pg. 21, 4th txt blk) with a size of m*1, … in the mixed image hypermatrix …--. Since CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN teaches wavelets, one of skill in the art of wavelets can make CAO’s of the combination of CAO, Qiu, Zhu, Zhang, PAN be as WANG’s, predictably recognizing the change “to ensure that the whole process from the source obtained from the image data to the image data processing is safe and reliable, improving the security of the privacy protection data to be processed.”, WANG, pg. 10, 4th txt blk. Re claim 7. (Original), CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN teaches The method according to claim 1, wherein the performing (wavelet) data desensitization processing on the current training sample image (i.e., training image) data based on frequency domain (wavelet) transform includes: performing local frequency domain (wavelet) transform processing on the current training sample image (or training image) data to obtain at least one feature graph (closest mapping is a “function” “transform”, pg.
7, 10th txt blk), wherein each feature graph of the at least one feature graph includes a plurality of elements and corresponds to a data block in the current training sample image data, and each element corresponds to a frequency in a frequency domain (this limitation in italics is directed to two graphs, which is outside the broadest reasonable interpretation); respectively constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs corresponding to the frequencies; and selecting at least one target frequency component channel feature graph from the frequency component channel feature graphs to obtain (via said wavelet) desensitized image data of the current training sample image (i.e., training image) data, wherein the selected target frequency component channel feature graph includes a channel feature for (deep-learning) image recognition. CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN does not teach the difference of claim 7: at least one feature graph … constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs corresponding to the frequencies; and selecting at least one target frequency component channel feature graph from the frequency component channel feature graphs … wherein the selected target frequency component channel feature graph includes a channel feature. WANG teaches the difference of claim 7: at least one feature graph (comprised by “ ‘feature map’-‘sub-graph data’ ”, pg. 9, 6th txt blk) … constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs (pg. 11, 8th txt blk) corresponding to the frequencies; and selecting at least one target frequency component channel feature graph (pg. 12, last txt blk, wherein “graph” and “map” are used “interchangeably”, pg.
9, 6th txt blk) from the frequency component channel feature graphs … wherein the selected target frequency component channel feature graph includes a channel feature. Since CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN teaches “extracting the feature of the image” (pg. 7, last txt blk), one of skill in the art of image features can make CAO’s of the combination of CAO, Qiu, Zhu, Zhang, PAN be as WANG’s, predictably recognizing the change “to ensure that the whole process from the source obtained from the image data to the image data processing is safe and reliable, improving the security of the privacy protection data to be processed.”, WANG, pg. 10, 4th txt blk. Re 8. (Original), CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG teaches The method according to claim 7, further comprising: after the selecting the at least one target frequency component channel feature graph from the frequency component channel feature graphs, performing a first shuffling processing on the target frequency component channel feature graph to obtain a first shuffled feature graph; and performing normalization processing on the first shuffled feature graph to obtain the first desensitized image data of the current training sample image data (“or any combination thereof”, pg. 12, 5th txt blk). Re 9. (Original), CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG teaches The method according to claim 8, further comprising: after the performing normalization processing on the first shuffled feature graph, performing channel mixing processing on the first shuffled feature graph that is normalization processed; performing a second shuffling processing on the first shuffled feature graph that is channel mixing processed (“after channel mixing processing”, pg. 14, 3rd txt blk), to obtain a second shuffled feature graph; and performing normalization processing on the second shuffled feature graph to obtain the first desensitized image data of the current training sample image data.
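The two-stage pipeline recited in claims 8 and 9 above (shuffle the selected channel feature graphs, normalize, channel-mix, shuffle again, normalize again) can be sketched roughly as below. The per-channel standardization, the random mixing matrix, and all names are assumptions for illustration, not the applicant's disclosed implementation:

```python
import numpy as np

def normalize(x, eps=1e-8):
    # per-channel zero-mean / unit-variance normalization
    mu = x.mean(axis=(1, 2), keepdims=True)
    sd = x.std(axis=(1, 2), keepdims=True)
    return (x - mu) / (sd + eps)

def shuffle_channels(x, rng):
    # randomly permute the channel (feature-graph) axis
    return x[rng.permutation(x.shape[0])]

def desensitize(channels, seed=0):
    rng = np.random.default_rng(seed)
    x = shuffle_channels(channels, rng)       # first shuffling processing
    x = normalize(x)                          # first normalization processing
    mix = rng.random((x.shape[0], x.shape[0]))
    mix /= mix.sum(axis=1, keepdims=True)     # channel mixing weights
    x = np.einsum("ij,jhw->ihw", mix, x)      # channel mixing processing
    x = shuffle_channels(x, rng)              # second shuffling processing
    return normalize(x)                       # second normalization processing

feats = np.random.default_rng(1).random((6, 8, 8))  # 6 frequency-channel graphs
out = desensitize(feats)
```

The point of the sketch is only the claimed ordering: shuffle, normalize, mix, shuffle, normalize, with the output keeping the channel count and spatial shape of the input.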
Re 15., CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG teaches, as in the rejection of claim 6, The computing system according to claim 11, wherein the first hyperparameter is k, and a maximum weight coefficient for image mixing is Wmax; and the performing image mixing processing on the first desensitized image data based on data augmentation by using the first hyperparameter includes: performing (k−1) times of scrambling processing on an image data set of the first desensitized image data to obtain k image data sets; constructing an image hypermatrix with a size of m*k based on the k image data sets, wherein a first column in the image hypermatrix corresponds to the image data set of the first desensitized image data in an original form before the scrambling processing, and m is an amount of image data in the image data set; randomly generating a weight coefficient for each piece of image data in the image hypermatrix; normalizing weight coefficients of the image data in the image hypermatrix, so that a sum of weight coefficients of each row of image data is 1, and the weight coefficient of each piece of image data is not greater than Wmax; and performing weighted summation on each row of image data in the image hypermatrix to obtain a mixed image hypermatrix with a size of m*1, wherein the image data in the mixed image hypermatrix is desensitized image data augmentation processed.
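The mixing procedure recited in claims 6 and 15 above ((k−1) scrambles of the image set, an m×k hypermatrix whose first column is the original order, per-row random weights normalized to sum to 1 and capped at Wmax, and a weighted row sum yielding an m×1 mixed hypermatrix) can be sketched as follows. The names, k = 4, Wmax = 0.6, and the redraw-based cap enforcement are illustrative assumptions, one of several possible constructions (and requiring k·Wmax ≥ 1):

```python
import numpy as np

def mix_hypermatrix(images, k=4, w_max=0.6, seed=0):
    """(k-1) scrambles -> m*k hypermatrix -> capped row weights -> m*1 mix."""
    rng = np.random.default_rng(seed)
    m = len(images)
    cols = [np.asarray(images, dtype=float)]      # first column: original order
    for _ in range(k - 1):                        # (k-1) scrambling passes
        cols.append(cols[0][rng.permutation(m)])
    hyper = np.stack(cols, axis=1)                # shape (m, k, H, W)
    mixed = []
    for row in hyper:                             # one weighted sum per row
        while True:                               # redraw until the Wmax cap holds
            w = rng.random(k)
            w /= w.sum()                          # weights sum to 1
            if w.max() <= w_max:
                break
        mixed.append(np.tensordot(w, row, axes=1))
    return np.stack(mixed)                        # m*1 mixed hypermatrix

imgs = [np.full((2, 2), float(i)) for i in range(5)]
out = mix_hypermatrix(imgs, k=4, w_max=0.6)
```

Each output image is a convex combination of k input images, so pixel values stay within the range of the inputs.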
Re 16., CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG teaches, as in the rejection of claim 7, The computing system according to claim 11, wherein the performing data desensitization processing on the current training sample image data based on frequency domain transform includes: performing local frequency domain transform processing on the current training sample image data to obtain at least one feature graph, wherein each feature graph of the at least one feature graph includes a plurality of elements and corresponds to a data block in the current training sample image data, and each element corresponds to a frequency in a frequency domain; respectively constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs corresponding to the frequencies; and selecting at least one target frequency component channel feature graph from the frequency component channel feature graphs to obtain desensitized image data of the current training sample image data, wherein the selected target frequency component channel feature graph includes a channel feature for image recognition. Re 17., CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG teaches, as in the rejection of claim 8, The computing system according to claim 16, wherein the acts further include: after the selecting the at least one target frequency component channel feature graph from the frequency component channel feature graphs, performing a first shuffling processing on the target frequency component channel feature graph to obtain a first shuffled feature graph; and performing normalization processing on the first shuffled feature graph to obtain the first desensitized image data of the current training sample image data.
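The claim 7/16 construction discussed above (a local frequency-domain transform applied per data block, same-frequency elements regrouped across blocks into frequency component channel feature graphs, and target graphs selected from among them) can be sketched as below. The block size, the FFT standing in for the wavelet/frequency transform of the record, and the energy-based selection rule are all illustrative assumptions:

```python
import numpy as np

def frequency_channel_graphs(img, block=4):
    """Split img into block*block tiles, transform each, regroup by frequency."""
    h, w = img.shape
    bh, bw = h // block, w // block
    tiles = img[:bh * block, :bw * block].reshape(bh, block, bw, block).swapaxes(1, 2)
    spect = np.abs(np.fft.fft2(tiles))            # stand-in local frequency transform
    # channel graph f: its (i, j) element is frequency f of the (i, j)-th block
    return spect.reshape(bh, bw, block * block).transpose(2, 0, 1)

def select_targets(graphs, n_keep=3):
    # keep the n_keep highest-energy frequency channels as "target" graphs
    energy = graphs.reshape(graphs.shape[0], -1).sum(axis=1)
    return graphs[np.argsort(energy)[::-1][:n_keep]]

img = np.random.default_rng(0).random((16, 16))
graphs = frequency_channel_graphs(img)   # 16 channel graphs, each 4x4
targets = select_targets(graphs)         # reduced, "desensitized" representation
```

For a non-negative image the DC coefficient dominates every other coefficient in each block, so the DC channel graph is always among the selected targets under this toy selection rule.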
Re 18., CAO of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG teaches The computing system according to claim 17, wherein the acts further include: after the performing normalization processing on the first shuffled feature graph, performing channel mixing processing on the first shuffled feature graph that is normalization processed; performing a second shuffling processing on the first shuffled feature graph that is channel mixing processed, to obtain a second shuffled feature graph; and performing normalization processing on the second shuffled feature graph to obtain the first desensitized image data of the current training sample image data. Claim(s) 7,16 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (CN 112800467 A) with machine translation in view of Qiu et al. (Iterative Teaching by Data Hallucination) further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation) and Zhang et al. (mixup: BEYOND EMPIRICAL RISK MINIMIZATION) and PAN et al. (WO 2022/222458 A1) with machine translation as applied in the primary rejection of claims 1,10 and 11,19 further in view of WANG et al. (CN 115497141 A) with machine translation, as applied in the additional rejection of claims 6,7,8,9 and 15,16,17,18, further in view of BEN HU et al. (CN 112801883 A) with machine translation: [figure media_image26.png omitted] Re claim 7, WANG of the combination of CAO, Qiu, Zhu, Zhang, WANG additionally teaches in another disclosure claim 7 of: 7. (Original) The method according to claim 1, wherein the performing data desensitization processing on the current training sample image data based on frequency domain transform (“not limited to…local wavelet transform”, pg.
10, 6th txt blk) includes: performing local frequency domain transform processing on the current training sample image data to obtain at least one feature graph, wherein each feature graph of the at least one feature graph includes a plurality of elements and corresponds to a data block in the current training sample image data, and each element corresponds to a frequency in a frequency domain; respectively constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs corresponding to the frequencies; and selecting at least one target frequency component channel feature graph from the frequency component channel feature graphs to obtain desensitized image data of the current training sample image data, wherein the selected target frequency component channel feature graph includes a channel feature for image recognition. BEN HU also teaches the difference of claim 7 of: at least one feature (or “characteristic”, pg. 1, last txt blk) graph … constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs (“to obtain the final upper sampling characteristic graph”, pg. 7, 12th txt blk) corresponding to the frequencies; and selecting (“directly”, pg. 9, 8th txt blk) at least one target frequency component channel feature graph from the frequency component channel feature graphs … wherein the (directly) selected target frequency component channel feature graph includes a channel feature (comprised by “the ith channel characteristic graph”, pg. 6, 10th txt blk). Since WANG (CAO) of the combination of CAO, Qiu, Zhu, Zhang, WANG teaches wavelets, one of skill in the art of wavelets can make WANG’s (CAO’s figs. 4,5) of the combination of CAO, Qiu, Zhu, Zhang, WANG be as BEN HU’s (fig.
13b) predictably recognizing the change “making the upper sampling characteristic graph more accurate”, BEN HU: pg. 14, 2nd txt blk: [figure media_image27.png omitted] Claim 16 is rejected like claim 7: 16. (Original) The computing system according to claim 11, wherein the performing data desensitization processing on the current training sample image data based on frequency domain transform includes: performing local frequency domain transform processing on the current training sample image data to obtain at least one feature graph, wherein each feature graph of the at least one feature graph includes a plurality of elements and corresponds to a data block in the current training sample image data, and each element corresponds to a frequency in a frequency domain; respectively constructing, by using elements corresponding to frequencies in the at least one feature graph, frequency component channel feature graphs corresponding to the frequencies; and selecting at least one target frequency component channel feature graph from the frequency component channel feature graphs to obtain desensitized image data of the current training sample image data, wherein the selected target frequency component channel feature graph includes a channel feature for image recognition. Claim(s) 8,9 and 17,18 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over CAO et al. (CN 112800467 A) with machine translation in view of Qiu et al. (Iterative Teaching by Data Hallucination) further in view of Zhu et al. (Imbalanced driving scene recognition with class focal loss and data augmentation) and Zhang et al. (mixup: BEYOND EMPIRICAL RISK MINIMIZATION) and PAN et al. (WO 2022/222458 A1) with machine translation as applied in the primary rejection of claims 1,10 and 11,19 further in view of WANG et al.
(CN 115497141 A) with machine translation, as applied in the additional rejection of claims 6,7,8,9 and 15,16,17,18, further in view of BEN HU et al. (CN 112801883 A) with machine translation as applied in the additional rejection of claims 7,16 further in view of Yuan et al. (Multiview Scene Image Inpainting Based on Conditional Generative Adversarial Networks): [figure media_image28.png omitted] Re claim 8. (Original), BEN HU of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG, BEN HU teaches The method according to claim 7, further comprising: after the selecting (directly) the at least (first) one target frequency component channel feature (or characteristic) graph from the frequency component channel feature graphs (comprising said first characteristic graph), performing a first shuffling processing on the target frequency component channel (characteristic) feature graph to obtain a first shuffled feature graph; and performing normalization processing on the first shuffled feature graph to obtain the first (wavelet) desensitized image data of the current training sample image data (i.e., training image). BEN HU of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG, BEN HU does not teach: --performing a first shuffling processing … a first shuffled; and performing normalization processing on the first shuffled--. Yuan teaches the difference of claim 8: --performing a first shuffling processing (fig. 3: “Channel Shuffle”, twice) … a first shuffled (fig. 3: “Channel Shuffle”, twice); and performing (convolutional “Batch”, pg.
319, reproduced below) normalization processing on the first shuffled -- [images: media_image29.png, media_image30.png omitted]

Since BEN HU of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG, BEN HU teaches a camera & feature maps, one of skill in the art of cameras & feature maps can make BEN HU’s (CNN) of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG, BEN HU be as Yuan’s (encoder-decoder: i.e., generator), predictably recognizing the change “to fully utilize…information”, Yuan (pg. 316, rcol, 1st para, middle), in each camera feature map: [image: media_image31.png omitted]

Re claim 9. (Original), BEN HU of the combination of CAO, Qiu, Zhu, Zhang, PAN, WANG, BEN HU, Yuan teaches The method according to claim 8, further comprising: after the performing (batch) normalization processing on the first (channel) shuffled feature graph, performing channel mixing processing (fig. 3: “Channel Shuffle”, twice) on the first shuffled feature graph that is (batch) normalization processed; performing a second shuffling processing (fig. 3: “Channel Shuffle”, twice) on the first shuffled feature graph (fig. 3: “Channel Shuffle”, twice) that is channel mixing processed (fig. 3: “Channel Shuffle”, twice), to obtain a second shuffled feature graph (fig. 3: “Channel Shuffle”, twice); and performing (batch) normalization (coder-convolutional or decoder-deconvolutional) processing on the second shuffled feature graph (fig. 3: “Channel Shuffle”, twice) to obtain the first (wavelet) desensitized image data of the current training sample image data (or training image).

Claim 17 rejected like claim 8: 17.
The computing system according to claim 16, wherein the acts further include: after the selecting the at least one target frequency component channel feature graph from the frequency component channel feature graphs, performing a first shuffling processing on the target frequency component channel feature graph to obtain a first shuffled feature graph; and performing normalization processing on the first shuffled feature graph to obtain the first desensitized image data of the current training sample image data.

Claim 18 rejected like claim 9: 18. The computing system according to claim 17, wherein the acts further include: after the performing normalization processing on the first shuffled feature graph, performing channel mixing processing on the first shuffled feature graph that is normalization processed; performing a second shuffling processing on the first shuffled feature graph that is channel mixing processed, to obtain a second shuffled feature graph; and performing normalization processing on the second shuffled feature graph to obtain the first desensitized image data of the current training sample image data.

Allowable Subject Matter

Claims 2, 3 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims: [image: media_image32.png omitted]

The following is a statement of reasons for the indication of allowable subject matter: Claims 2, 3 and 12 are allowable for the same reasons as in the Office action of 10/21/2025, starting page 78.

Claim 21 is allowed. The following is an examiner’s statement of reasons for allowance: Claim 21 is allowed for the same reasons as in said Office action of 10/21/2025, starting page 78. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee.
Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action:

Zhang (Explanation-driven Frameworks for Data Augmentation and Supervised Contrastive Learning in Image Classification): Zhang teaches various training mixing augmentations with augmentation parameters “λ” and selecting a model training parameter “200”, pg. 34, CIFAR-100, 4th S: “We used the best epoch model among the 200 epoch validation process to select the hyperparameter - starting epoch.” as the closest to the claimed “select, from a candidate hyperparameter set, a first hyperparameter” of claim 21.

CAO et al. (CN 113221747 A) with SEARCH machine translation: CAO (same inventor as CAO {CN 112800467 A} applied in the rejection of claim 1) teaches, pg. 15, 4th txt blk: “identifying the user privacy data by the updated privacy identification model and the local trained active learning model in the terminal device.” as the closest to the claimed “locally training, by the first member device, a current image recognition model” of claim 1.

MA et al.
(US 2024/0144652 A1): MA teaches selecting from an image combining (mixing/blending) set, fig. 8:830: “SELECTING, FROM A SUBSET…COMBINING THE PLURALITY OF IMAGES” via [0060]: “For example, the SAGE component 180 may be configured to obtain a plurality of images, compute a corresponding saliency map, select a rearrangement offset that maximizes an overall saliency, generate a new mixed image and a new mixed label, and augment the dataset with the new mixed image and the new mixed label.” as the closest to the claimed “select, from a candidate hyperparameter set, a first hyperparameter indicating a number of images participating in image mixing processing” of claim 21.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9 AM-5 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENNIS ROSARIO/ Examiner, Art Unit 2676
/Henok Shiferaw/ Supervisory Patent Examiner, Art Unit 2676

1 data: information, wherein information is defined: Computers. A) important or useful facts obtained as output from a computer by means of processing input data with a program. B) data at any stage of processing (input, output, storage, transmission, etc.). (Dictionary.com)
2 data: information, wherein information is defined: Computers. A) important or useful facts obtained as output from a computer by means of processing input data with a program. B) data at any stage of processing (input, output, storage, transmission, etc.). (Dictionary.com)
3 sample: specimen (Dictionary.com)
4 Applicant’s disclosure: --[0042]The subject matter described in the present specification is now discussed with reference to example implementations.
It should be understood that these implementations are merely discussed to enable a person skilled in the art to better understand and implement the subject matter described in the present specification, and are not intended to limit the protection scope, applicability, or examples described in the claims. Functions and arrangements of the discussed elements can be changed without departing from the protection scope of the content in the present specification. Based on a requirement, examples can be omitted or replaced, or various processes or components can be added. For example, the described method can be performed in an order different from the described order, and steps can be added, omitted, or combined. In addition, features described relative to some examples can also be combined in other examples.--, wherein scope is defined: Linguistics, Logic. the range of words or elements of an expression (claim 1) over which a modifier (e.g., a patent examiner) or operator (e.g., me) has control. (Dictionary.com)
5 image: a physical likeness or representation of a person, animal, or thing, photographed, painted, sculptured, or otherwise made visible, wherein representation is defined: the act of representing, wherein representing is defined: to serve as an example or specimen of; exemplify (Dictionary.com).
6 identity: image=sample OR sample=image, wherein identity is defined: Logic., an assertion that two terms (sample & image) refer to the same thing (specimen).
7 as: in the role, function, or status of (Dictionary.com)
8 THE CLAIMED INVENTION AS A WHOLE regarding “locally training”: The data-information problem faced by applicants is in applicant’s disclosure: [0002]A service processing solution based on an image recognition model is widely used in a large number of applications, for example, a face-scanning payment service based on a facial recognition model.
Image data related to data privacy information (for example, user privacy information) is usually distributed in different data owners or different regions and countries. To protect the data privacy information, data sharing of the private data is not allowed between data owners or between different regions. However, to provide a user with a better service by using sufficient data, information between data needs to be adequately mined for a specific task to train an image recognition model. Therefore, a federated learning method is proposed. In the method, private image data of a plurality of data owners can be used to train an image recognition model while the data does not leave a domain.

[0003]Inventors recognized that in the federated learning method, after locally completing image recognition model training, each data owner needs to share gradient information or weight information with a model owner for aggregation, which causes information leakage of the gradient information or the weight information. Techniques of this specification avoid reconstructing original image data from shared gradient information or weight information.

Applicant’s data-information solution includes “local training” (maps to the claimed “locally training”): [0046]A federated learning solution based on mixed data desensitization is provided according to the implementations of the present specification.
In the federated learning solution, when training an image recognition model by using local training sample image data, a first member device first performs data desensitization processing on the training sample image data based on frequency domain transform, then performs image mixing on desensitized image data by using an image mixing processing method based on, e.g., Mixup data augmentation, and subsequently trains the image recognition model by using the desensitized image data that is image mixing processed, thereby improving a data privacy protection capability in federated learning. In addition, during image recognition model training, a hyperparameter selection model is further used to adaptively select an appropriate image mixing parameter (e.g., a number of images participating in image mixing) based on first desensitized image data, so as to ensure not only that a plurality of pieces of desensitized image data can be fused in an image recognition model training process, but also that model training performance is not significantly affected.

I don’t see “image mixing processed” in claim 1; instead I see “label mixing processed” (it is not clear how “label mixing processed”: fig. 8:820 factors into the data-information solution). This absence of applicant’s “image mixing processed” data-information solution is an indication of obviousness.
9 “first” gives order to the sequential elements (“second”) of claim 1: Placeholders: “second” can be “first” and “first” can be “second” under the broadest reasonable interpretation of claim 1.
10 “having” can be a participle participating in the action of the claimed “iteratively performing”
11 “having” can be an adjective further limiting “iteratively performing”
12 (italics) represent claim limitations already taught
13 ellipses (…) represent claim limitations already taught
14 participle
15 “first” gives order to the sequential elements (“second”) of claim 1: Placeholders: “second” can be “first” and “first” can be “second” under the broadest reasonable interpretation of claim 1.
16 “using” can be a participle participating in the action of the claimed “locally training”.
17 “using” can be an adjective further modifying the claimed “locally training”.
18 “first” gives order to the sequential elements (“second”) of claim 1: Placeholders: “second” can be “first” and “first” can be “second” under the broadest reasonable interpretation of claim 1.
19 “having” can be a participle participating in the action of the claimed “iteratively performing”
20 “having” can be an adjective further limiting “iteratively performing”
21 (italics) represent claim limitations already taught
22 ellipses (…) represent claim limitations already taught
23 “first” gives order to the sequential elements (“second”) of claim 1: Placeholders: “second” can be “first” and “first” can be “second” under the broadest reasonable interpretation of claim 1.
24 “using” can be a participle participating in the action of the claimed “locally training”.
25 “using” can be an adjective further modifying the claimed “locally training”.
26 CLAIM SCOPE I: “locally” is an adverb modifying the verb (train) of the claimed “training”: locally train. CLAIM SCOPE I is the broadest reasonable interpretation of claim 1.
27 CLAIM SCOPE II: “locally” is an adverb modifying this entire claimed clause: --local training, by the first local member device, a local current image recognition model by using the second local desensitized image data and the second local label data--.
CLAIM SCOPE II is not the broadest reasonable interpretation of claim 1.
28 “by” is a prepositional modifier modifying verbs, nouns (“training”), adjectives: locally 1st member device training
29 “first” gives order to the sequential elements (“second”) of claim 1: Placeholders: “second” can be “first” and “first” can be “second” under the broadest reasonable interpretation of claim 1.
30 CLAIM SCOPE III: “using” can be a participle participating in the action of the claimed “locally training”.
31 CLAIM SCOPE IV: “using” can be an adjective further modifying the claimed “locally training”.
32 “using” and “locally training” are in a grammatical relationship, and thus it is hard to grammatically address just one of them (such as “locally training…a current image recognition model”) while ignoring the other (such as “using the second desensitized image data”).
33 (italics) represent claim limitations already taught
34 CLAIM SCOPE I: “locally” is an adverb modifying the verb (train) of the claimed “training”: locally train. CLAIM SCOPE I is the broadest reasonable interpretation of claim 1.
35 CLAIM SCOPE II: “locally” is an adverb modifying this entire claimed clause: --local training, by the first local member device, a local current image recognition model by using the second local desensitized image data and the second local label data--. CLAIM SCOPE II is not the broadest reasonable interpretation of claim 1.
36 “by” is a prepositional modifier modifying verbs, nouns (“training”), adjectives: locally 1st member device training
37 system: any assemblage or set of correlated members. (Dictionary.com)
38 “first” gives order to the sequential elements (“second”) of claim 1: Placeholders: “second” can be “first” and “first” can be “second” under the broadest reasonable interpretation of claim 1.
39 CLAIM SCOPE III: “using” can be a participle participating in the action of the claimed “locally training”.
40 CLAIM SCOPE IV: “using” can be an adjective further modifying the claimed “locally training”.
41 “using” and “locally training” are in a grammatical relationship, and thus it is hard to grammatically address just one of them (such as “locally training…a current image recognition model”) while ignoring the other (such as “using the second desensitized image data”).
42 (italics) represent claim limitations already taught
43 aggregating: to combine or be combined into a body, etc (Dictionary.com)
44 update: Computers., to incorporate new or more accurate information in (a database, program, procedure, etc.), wherein incorporate is defined: to form or combine into one body or uniform substance, as ingredients, wherein ingredients is defined: something that enters as an element into a mixture, wherein more is defined: additional. (Dictionary.com)
45 Identity: aggregate=update & update=aggregate
46 image: a description of something in speech or writing, wherein description is defined: the act or method of describing, wherein describing is defined: label (Dictionary.com)
47 “when” is an unsatisfied contingent limitation, or a limitation not established as true or sure, in method claim 5: “Therefore "[t]he Examiner did not need to present evidence of the obviousness of the [ ] method steps of claim 1 that are not required to be performed under a broadest reasonable interpretation of the claim” MPEP 2111.04 II.
CONTINGENT LIMITATIONS, last para: [image: media_image15.png omitted]
48 when: at any time, wherein at is defined: (used to indicate a state or condition), wherein condition is defined: that (i.e., “a previous training result was sent reaches a second threshold number of rounds”) on which (i.e., “a previous training result was sent reaches a second threshold number of rounds”) something else (“a round interval between a current number of training rounds of the image recognition model and a number of training rounds of the image recognition model”) is contingent, wherein contingent is defined: dependent for existence, occurrence, character, etc., on something (i.e., “a previous training result was sent reaches a second threshold number of rounds”) not yet certain, wherein certain is defined: established as true or sure; unquestionable; indisputable (Dictionary.com)
49 period: the present time, wherein present is defined: current (Dictionary.com)
50 sample: specimen (Dictionary.com)
51 Applicant’s disclosure: [0042]The subject matter described in the present specification is now discussed with reference to example implementations. It should be understood that these implementations are merely discussed to enable a person skilled in the art to better understand and implement the subject matter described in the present specification, and are not intended to limit the protection scope, applicability, or examples described in the claims. Functions and arrangements of the discussed elements can be changed without departing from the protection scope of the content in the present specification. Based on a requirement, examples can be omitted or replaced, or various processes or components can be added. For example, the described method can be performed in an order different from the described order, and steps can be added, omitted, or combined. In addition, features described relative to some examples can also be combined in other examples.
52 image: a physical likeness or representation of a person, animal, or thing, photographed, painted, sculptured, or otherwise made visible, wherein representation is defined: the act of representing, wherein representing is defined: to serve as an example or specimen of; exemplify (Dictionary.com).
53 identity: image=sample OR sample=image, wherein identity is defined: Logic., an assertion that two terms (sample & image) refer to the same thing (specimen).
54 as: in the role, function, or status of (Dictionary.com)
55 aggregating: to combine or be combined into a body, etc (Dictionary.com)
56 update: Computers., to incorporate new or more accurate information in (a database, program, procedure, etc.), wherein incorporate is defined: to form or combine into one body or uniform substance, as ingredients, wherein ingredients is defined: something that enters as an element into a mixture, wherein more is defined: additional. (Dictionary.com)
57 Identity: aggregate=update & update=aggregate
58 image: a description of something in speech or writing, wherein description is defined: the act or method of describing, wherein describing is defined: label (Dictionary.com)
59 “when” is an unsatisfied contingent limitation, or a limitation not established as true or sure, in method claim 5: “Therefore "[t]he Examiner did not need to present evidence of the obviousness of the [ ] method steps of claim 1 that are not required to be performed under a broadest reasonable interpretation of the claim” MPEP 2111.04 II.
CONTINGENT LIMITATIONS, last para: [image: media_image15.png omitted]
60 when: at any time, wherein at is defined: (used to indicate a state or condition), wherein condition is defined: that (i.e., “a previous training result was sent reaches a second threshold number of rounds”) on which (i.e., “a previous training result was sent reaches a second threshold number of rounds”) something else (“a round interval between a current number of training rounds of the image recognition model and a number of training rounds of the image recognition model”) is contingent, wherein contingent is defined: dependent for existence, occurrence, character, etc., on something (i.e., “a previous training result was sent reaches a second threshold number of rounds”) not yet certain, wherein certain is defined: established as true or sure; unquestionable; indisputable (Dictionary.com)
61 period: the present time, wherein present is defined: current (Dictionary.com)
62 domain: Mathematics. the (frequency) set of values assigned to the independent variables of a function. (Dictionary.com)
63 at least: According to the lowest possible assessment (“one”), no less than (“one”), wherein -est (of lowest) is defined: a suffix forming the superlative degree of adjectives and adverbs, wherein superlative is defined: of the highest kind, quality, or order. (Dictionary.com)
64 Markush language: claim 7 is limited by one graph and not two graphs
65 graph: a diagram representing a system of connections or interrelations among two or more things by a number of distinctive dots, lines, bars, etc. (Dictionary.com)
66 each: every one of two or more considered individually or one by one.
(Dictionary.com)
67 Markush language:
68 MPEP 2117 Markush Claims [R-01.2024], 2nd para, 4th S: “Although the term "Markush claim" is used throughout the MPEP, any claim that recites alternatively usable members (i.e., “each”), regardless of format, should be treated as a Markush claim.”
69 respectively: (of two or more things) referring or applying to two or more things previously mentioned in a parallel or sequential way. (Dictionary.com)
70 Markush language
71 i.e., the (frequency) set
72 at least: According to the lowest possible assessment (“one”), no less than (“one”), wherein -est (of lowest) is defined: a suffix forming the superlative degree of adjectives and adverbs, wherein superlative is defined: of the highest kind, quality, or order. (Dictionary.com)
73 Markush language: claim 7 is limited by one graph and not two graphs
74 graph: a diagram representing a system of connections or interrelations among two or more things by a number of distinctive dots, lines, bars, etc. (Dictionary.com)
75 i.e., the (frequency) set
76 at least: According to the lowest possible assessment (“one”), no less than (“one”), wherein -est (of lowest) is defined: a suffix forming the superlative degree of adjectives and adverbs, wherein superlative is defined: of the highest kind, quality, or order. (Dictionary.com)
77 Markush language: claim 7 is limited by one graph and not two graphs
78 graph: a diagram representing a system of connections or interrelations among two or more things by a number of distinctive dots, lines, bars, etc. (Dictionary.com)
79 i.e., the (frequency) set
80 at least: According to the lowest possible assessment (“one”), no less than (“one”), wherein -est (of lowest) is defined: a suffix forming the superlative degree of adjectives and adverbs, wherein superlative is defined: of the highest kind, quality, or order.
(Dictionary.com)
81 Markush language: claim 7 is limited by one graph and not two graphs
82 graph: a diagram representing a system of connections or interrelations among two or more things by a number of distinctive dots, lines, bars, etc. (Dictionary.com)
83 i.e., the (frequency) set
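For orientation, the desensitization sequence recited in claims 16 and 8-9 of the quoted action (local frequency domain transform into per-frequency channel feature graphs, selection of target channels, shuffling, then normalization) can be sketched in pure Python. This is a minimal illustration under assumed parameters: the 2x2 block transform, the choice of which channels to keep, and the helper names are ours, not the applicant's or any cited reference's actual implementation.

```python
import random

def block_transform_2x2(img):
    """Toy 'local frequency domain transform': each non-overlapping 2x2
    data block yields 4 coefficients, one per frequency, gathered into
    per-frequency channel feature maps."""
    h, w = len(img), len(img[0])
    chans = {f: [] for f in range(4)}
    for i in range(0, h, 2):
        rows = {f: [] for f in range(4)}
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            rows[0].append((a + b + c + d) / 4)  # DC component
            rows[1].append((a - b + c - d) / 4)  # horizontal frequency
            rows[2].append((a + b - c - d) / 4)  # vertical frequency
            rows[3].append((a - b - c + d) / 4)  # diagonal frequency
        for f in range(4):
            chans[f].append(rows[f])
    return chans  # "frequency component channel feature graphs"

def desensitize(img, keep=(1, 2, 3), seed=0):
    """Select target frequency channels (here: drop the DC channel),
    shuffle the channel order, and normalize each kept channel."""
    chans = block_transform_2x2(img)
    kept = [chans[f] for f in keep]          # channel selection
    rng = random.Random(seed)
    rng.shuffle(kept)                        # "first shuffling processing"
    out = []
    for ch in kept:                          # per-channel normalization
        flat = [v for row in ch for v in row]
        mu = sum(flat) / len(flat)
        sd = (sum((v - mu) ** 2 for v in flat) / len(flat)) ** 0.5 or 1.0
        out.append([[(v - mu) / sd for v in row] for row in ch])
    return out

img = [[1, 0, 2, 0],
       [0, 3, 0, 1],
       [4, 0, 1, 0],
       [0, 2, 0, 5]]
desens = desensitize(img)
print(len(desens), len(desens[0]), len(desens[0][0]))
```

Dropping the DC channel while keeping higher-frequency channels is one plausible reading of the claimed selection of "a channel feature for image recognition"; the claims themselves do not fix which channels are selected.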
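Similarly, the Mixup-style image mixing described in applicant's [0046], with "a number of images participating in image mixing" as the tunable hyperparameter of claim 21, can be sketched as follows. The Dirichlet-style weight draw (Gamma draws normalized to sum to 1, which reduces to Beta mixing when k=2), the alpha value, and all names here are illustrative assumptions, not the applicant's disclosed method.

```python
import random

def mixup(samples, labels, k=2, alpha=0.2, seed=0):
    """Mix k flattened images and their one-hot labels with random
    convex weights (Dirichlet-style; reduces to Beta mixing for k=2)."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(samples)), k)   # which k images participate
    w = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(w)
    w = [x / total for x in w] if total > 0 else [1.0 / k] * k
    mixed_x = [sum(w[m] * samples[idx[m]][p] for m in range(k))
               for p in range(len(samples[0]))]
    mixed_y = [sum(w[m] * labels[idx[m]][c] for m in range(k))
               for c in range(len(labels[0]))]
    return mixed_x, mixed_y

# toy "desensitized image data" (flattened) and one-hot labels
xs = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
ys = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
mx, my = mixup(xs, ys, k=2)
print(mx, my)
```

Because the weights are convex, the mixed label remains a valid probability distribution over classes, which is the property that lets label mixing and image mixing stay consistent during training.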

Prosecution Timeline

Sep 29, 2023
Application Filed
Oct 16, 2025
Non-Final Rejection — §101, §103
Dec 30, 2025
Interview Requested
Jan 06, 2026
Examiner Interview Summary
Jan 06, 2026
Applicant Interview (Telephonic)
Jan 15, 2026
Response Filed
Feb 12, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
Granted Jan 06, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
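The headline projection figures above appear to follow from the examiner's career counts by simple arithmetic. A sketch, under our assumption (not documented by the tool) that the interview lift is applied as additive percentage points to the career allow rate:

```python
# Career counts shown above for this examiner
granted, resolved = 385, 557
base_rate = granted / resolved            # career allow rate (~69%)
interview_lift = 0.286                    # reported +28.6% interview lift
# Assumed model: lift adds percentage points to the base rate, capped at 100%
with_interview = min(base_rate + interview_lift, 1.0)
print(round(base_rate * 100), round(with_interview * 100))
```

Under that additive reading, 385/557 rounds to the displayed 69% grant probability, and adding 28.6 points reproduces the displayed 98% with-interview figure.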
