Prosecution Insights
Last updated: April 19, 2026
Application No. 18/579,178

METHOD AND DEVICE FOR DATA PROCESSING

Non-Final OA: §102, §103
Filed: Jan 12, 2024
Examiner: LEE, BENEDICT E
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 87% — above average
Career Allow Rate: 87% (92 granted / 106 resolved; +24.8% vs TC avg)
Interview Lift: +14.8% (moderate, ~+15%; among resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline); 16 currently pending
Total Applications: 122 across all art units
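The headline figures in this panel are simple ratios of the career counts shown. A quick check using only numbers from this page reproduces them; note the implied Tech Center average is an inference from the stated delta, not a value given anywhere on the page:

```python
# Sanity-check the headline examiner statistics from the raw career
# counts shown on this page: 92 granted out of 106 resolved.
granted, resolved = 92, 106

allow_rate = granted / resolved
assert round(allow_rate * 100) == 87      # matches the 87% Career Allow Rate

# Reading the "+24.8% vs TC avg" delta as percentage points implies a
# Tech Center average near 62% (an inference, not a page value).
implied_tc_avg = allow_rate - 0.248
print(f"allow rate: {allow_rate:.1%}; implied TC average: {implied_tc_avg:.1%}")
# -> allow rate: 86.8%; implied TC average: 62.0%
```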

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 31.8% (-8.2% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Tech Center average shown is an estimate • Based on career data from 106 resolved cases
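Each per-statute figure above is paired with a delta against the Tech Center average. Backing that baseline out of the numbers on this page (a quick consistency check, no outside data) shows all four deltas reference the same estimate:

```python
# Back out the Tech Center baseline implied by each statute's rate and
# its stated delta (all values copied from the panel above, in percent).
rates = {
    "§101": (7.6, -32.4),
    "§103": (50.7, +10.7),
    "§102": (31.8, -8.2),
    "§112": (7.3, -32.7),
}
baselines = {s: round(r - d, 1) for s, (r, d) in rates.items()}
print(baselines)  # every statute implies the same 40.0% TC average estimate
```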

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed in parent Application No. CN202110802048.4, filed on 07/15/2021.

Claim Objections

Claim 6 is objected to because of the following informalities: the conjunctions "and/or" create uncertain scope (see MPEP § 2173.05(h)). Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 20 and 25 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Kawaguchi (U.S. 11,514,307 B2).

Regarding claim 1, Kawaguchi discloses a method for data processing adapted to a generative adversarial network obtained by model distillation, wherein the method for data processing comprises:

obtaining an image to be processed (per Fig. 2, Kawaguchi discloses image data Dt1a in his first learning data Dt1; Kawaguchi col. 7 lines 13–18: "First learning data Dt1 stores image data Dt1a of classification targets and correct answer class data Dt1b (also referred to as a correct answer label) in association with each other."); and

processing the image with a first generator to obtain a processed image (a processed image construed as image data Dt2a while conducting an iterative process to advance unprocessed image data into an input image in the learning data generator 20, see his col. 13 lines 35–41; per Fig. 9 step S22, Kawaguchi discloses learning data generator 20 where image data Dt2a is input to his large-scale classifier Dm1, id. col. 13 lines 42–48: "[l]earning data generator 20 (first classification processor 21) inputs image data Dt2a to large-scale classifier Dm1 already subjected to the learning process");

wherein the generative adversarial network comprises the first generator, a second generator and a discriminator (a discriminator construed as the compression rule generator, as Kawaguchi discloses classification to analyze correlation between trained data, see his col. 17 lines 31–38: "[a]nalyzes degrees of correlation between candidate classes a, b, c using the data tables"), the model distillation is a process of alternately training the first generator (construed as small-scale classifier Dm2) and the second generator (construed as large-scale classifier Dm1), and a model scale of the first generator is smaller than a model scale of the second generator (per Fig. 3, Kawaguchi discloses large-scale classifier Dm1 and small-scale classifier Dm2 are built in his neural network, id. col. 7 lines 29–41: "Large-scale classifier Dm1 and small-scale classifier Dm2 are composed by including the neural network, for example.").

Regarding claim 20, Kawaguchi discloses an electronic device comprising at least one processor (Fig. 4, 101: a CPU) and a memory (Fig. 4, 103: a RAM); the memory stores computer-executable instructions; and the computer-executable instructions stored in the memory, when executed by the at least one processor, cause the at least one processor to carry out a method for data processing adapted to a generative adversarial network obtained by model distillation. The method steps are mapped to Kawaguchi in the same manner as claim 1 above.

Regarding claim 25, Kawaguchi discloses a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by a processor, carry out a method for data processing adapted to a generative adversarial network obtained by model distillation. The method steps are mapped to Kawaguchi in the same manner as claim 1 above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2–3, 7, and 21–22 are rejected under 35 U.S.C. § 103 as being unpatentable over Kawaguchi in view of Huang (CN112258381A).

Regarding claim 2, Kawaguchi fails to specifically disclose the method for data processing, wherein a process of alternately training the first generator and the second generator in the generative adversarial network comprises: determining a loss value of the second generator based on sample data and the discriminator, the sample data comprising a sample image and a reference image corresponding to the sample image; adjusting the second generator based on the loss value of the second generator; determining a distillation loss between the adjusted second generator and the first generator based on the sample image, the adjusted second generator and the first generator; and adjusting the first generator based on the distillation loss.

In related art, Huang discloses:

determining a loss value of the second generator based on sample data and the discriminator, the sample data comprising a sample image and a reference image corresponding to the sample image (Huang discloses a first discriminant loss value as the first discriminator is trained with the first image and the first sample image; Huang ¶0077: "The first discriminant loss value is the loss value calculated when training the first discriminator based on the first image and the first sample image.");

adjusting the second generator based on the loss value of the second generator (per Fig. 2 at step 205, Huang discloses adjusting parameters of the second generator based on the loss value; id. ¶0104: "[t]he parameters of the second generator are adjusted based on the restoration loss value.");

determining a distillation loss between the adjusted second generator and the first generator based on the sample image, the adjusted second generator and the first generator (per Fig. 2 at step 205, Huang discloses a restoration loss value between a first generator and a second generator; id. ¶0105: "That is, the restoration loss value is related to the first generator GB and the second generator GA."); and

adjusting the first generator based on the distillation loss (per Fig. 2 at step 205, Huang discloses that the parameters of the first generator are adjusted based on the loss value; id.: "[t]he parameters of the first generator GB can be adjusted based on the similarity loss value").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Huang into the teachings of Kawaguchi to prevent failures in which the deformation between sample images is large. Id. ¶0005.

Regarding claim 3, Kawaguchi as modified by Huang discloses the method for data processing, wherein determining the loss value of the second generator based on the sample data and the discriminator comprises:

processing the sample image with the second generator to obtain an output image of the second generator (per Fig. 3, Huang discloses a second generator to obtain a second image into which the first image is transformed; Huang ¶0100: "[a]nd the first image B1 is transformed by the second generator GA to obtain the second image A1.");

performing, with the discriminator, a true or false discrimination between the reference image corresponding to the sample image and the output image of the second generator to determine an adversarial loss of the second generator (Huang discloses a recognition result used to determine whether its error from the true value is large, such that the second sample image is recognized by the discriminator; id. ¶0094: "[t]he error between the recognition result and the true result is large, indicating that the first discriminator DB cannot recognize the second category of images");

determining a reconstruction loss of the second generator based on a difference between the reference image corresponding to the sample image and the output image of the second generator (Huang discloses reconstructing an image based on latent variables of the other image; id. ¶0140: "The decoder D is used to reconstruct the image Y corresponding to image X based on the latent variables Z."); and

determining the loss value of the second generator based on the adversarial loss and the reconstruction loss (Huang discloses training on multiple sample images and adjusting the parameters of the trained model based on the loss value; id. ¶0141: "The parameters of the AE model are then adjusted based on the restoration loss value to obtain the trained AE model.").

Claims 7 and 21 are rejected in the same manner as claim 2. Claim 22 is rejected in the same manner as claim 3.

[Figure: media_image1.png]

Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over Kawaguchi in view of Zeng (CN113112020A).

Regarding claim 9, Kawaguchi fails to specifically disclose the method for data processing, wherein the first generator is a student generator, and the second generator is a teacher generator. In related art, Zeng discloses this arrangement (per Fig. 1 at step S103, Zeng discloses training teacher and student networks by inputting a generated image; Zeng ¶0072: "Input the generated image into the trained teacher network and student network, and perform knowledge distillation on the student network"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Zeng into the teachings of Kawaguchi to build an efficient network model in which the generative network is compressed via knowledge distillation. Id. ¶0006.

Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over Kawaguchi in view of Li et al. (U.S. 11,429,860 B2).

Regarding claim 10, Kawaguchi fails to specifically disclose the method, wherein the teacher generator comprises a first teacher generator and a second teacher generator, a model capacity of the first teacher generator is greater than a model capacity of the student generator, and a model depth of the second teacher generator is greater than a model depth of the student generator. In related art, Li discloses this arrangement (per Fig. 1, Li discloses that a teacher DNN model has a larger model size than the student one; Li col. 6 lines 14–38: "[b]y training an initialized 'student' DNN model to approximate a trained giant-sized teacher DNN model having a larger model size (e.g. number of parameters) than the student, wherein the giant-sized teacher DNN model comprises an ensemble of other DNN models."). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Li into the teachings of Kawaguchi to reduce the dimensions of the DNN. Id. col. 1 lines 33–42.

Allowable Subject Matter

Claims 4–6, 8, 11–14, and 23–24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Perincherry et al. (U.S. 11,321,587 B2) discloses a system and a method that receive a dataset having a first label and a first context.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENEDICT LEE, whose telephone number is (571) 270-0390. The examiner can normally be reached 10:00-16:00 (EST).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R. Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENEDICT E LEE/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665
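For readers unfamiliar with the claimed training scheme, the alternating loop recited in claims 2–3 (update the larger "second generator" against a discriminator signal and a reconstruction target, then distill it into the smaller "first generator") can be sketched in miniature. This is an illustrative toy under invented names, not the applicant's or any cited reference's implementation: each "generator" is a single scalar gain and the discriminator signal is a stand-in scoring function.

```python
# Toy sketch of the alternating distillation training recited in claims 2-3.
# Illustrative only: the "generators" are scalar gains, not real networks.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

class ToyGenerator:
    def __init__(self, gain):
        self.gain = gain
    def __call__(self, image):
        return [self.gain * px for px in image]

def discriminator_score(output, reference):
    # Stand-in adversarial signal: distance of the output from "real" data.
    return mse(output, reference)

def train_step(student, teacher, sample, reference, lr=0.05):
    # 1) Loss value of the second generator: adversarial + reconstruction.
    out = teacher(sample)
    loss_teacher = discriminator_score(out, reference) + mse(out, reference)
    # 2) Adjust the second generator: analytic gradient of the two
    #    MSE-like terms with respect to the scalar gain.
    grad = sum(2 * (o - r) * s for o, r, s in zip(out, reference, sample)) / len(sample)
    teacher.gain -= lr * 2 * grad
    # 3) Distillation loss between the *adjusted* teacher and the student.
    distill = mse(teacher(sample), student(sample))
    # 4) Adjust the first generator toward the teacher
    #    (a relaxation step standing in for a gradient step).
    student.gain += 0.5 * (teacher.gain - student.gain)
    return loss_teacher, distill

student, teacher = ToyGenerator(0.1), ToyGenerator(0.5)
sample, reference = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # target mapping: gain 2
for _ in range(100):
    loss_teacher, distill = train_step(student, teacher, sample, reference)
# Both gains approach 2 and the distillation loss vanishes.
```

The Office Action's mapping reads Kawaguchi's small-scale classifier Dm2 onto the first generator and large-scale classifier Dm1 onto the second; the sketch above mirrors only the loss structure of the claim language, not those references.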

Prosecution Timeline

Jan 12, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications involving similar technology granted by this examiner

Patent 12567243
METHOD FOR OPTIMIZING DATA TO BE USED TO TRAIN OBJECT RECOGNITION MODEL, METHOD FOR BUILDING OBJECT RECOGNITION MODEL, AND METHOD FOR RECOGNIZING AN OBJECT
2y 5m to grant • Granted Mar 03, 2026
Patent 12561958
METHOD OF TRAINING SEMICONDUCTOR PROCESS IMAGE GENERATOR
2y 5m to grant • Granted Feb 24, 2026
Patent 12561215
GRAPH MACHINE LEARNING FOR CASE SIMILARITY
2y 5m to grant • Granted Feb 24, 2026
Patent 12548170
METHOD, DEVICE AND SYSTEM FOR REAL-TIME MULTI-CAMERA TRACKING OF A TARGET OBJECT
2y 5m to grant • Granted Feb 10, 2026
Patent 12541999
METHOD FOR EMOTION RECOGNITION BASED ON HUMAN-OBJECT TIME-SPACE INTERACTION BEHAVIOR
2y 5m to grant • Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview (+14.8%): 99%
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 106 resolved cases by this examiner. Grant probability derived from career allow rate.
