Prosecution Insights
Last updated: April 19, 2026
Application No. 18/321,248

APPARATUS AND METHOD FOR GENERATING TRAINING DATA

Final Rejection — §101, §102, §103
Filed: May 22, 2023
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Korea Institute Of Science And Technology
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% — above average (385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% for resolved cases with interview — a strong (~+29%) boost
Typical Timeline: 3y 8m average prosecution; 34 currently pending
Career History: 591 total applications across all art units
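The headline figures in this panel are simple ratios of the counts shown above; a quick Python sketch reproduces them. The 0.071 offset is the "+7.1% vs TC avg" figure from the panel, so the implied Tech Center average is an estimate derived from this page, not a separately published number.

```python
# Illustrative arithmetic behind the examiner-intelligence figures above.
granted, resolved = 385, 557
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")        # ~69.1%

# "+7.1% vs TC avg" implies a Tech Center average of roughly:
implied_tc_average = career_allow_rate - 0.071
print(f"Implied TC average: {implied_tc_average:.1%}")      # ~62.0%

# 591 total applications minus 557 resolved leaves the pending docket.
total_applications = 591
print(f"Currently pending: {total_applications - resolved}")  # 34
```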

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 557 resolved cases
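The dashboard does not state exactly which per-statute metric these percentages track, but the "vs TC avg" deltas are plain differences, and each one implies a Tech Center average estimate of about 40% for every statute (e.g., 16.5% + 23.5% = 40%). A minimal sketch of that comparison, treating the metric itself as opaque:

```python
# Reproduce the per-statute deltas shown above. The underlying metric is
# whatever the dashboard tracks per statute; only the comparison is shown.
examiner_rate = {"§101": 16.5, "§103": 40.3, "§102": 24.6, "§112": 13.6}  # percent
tc_avg_estimate = 40.0  # implied by the displayed deltas (e.g., 16.5 + 23.5)
for statute, rate in examiner_rate.items():
    delta = rate - tc_avg_estimate
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```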

Office Action

§101 §102 §103
DETAILED ACTION

Summary of this action:
Claims 1,2,3,4,6,5,10 and 9 and 19 are objected to because of informalities.
Claims 1,2,3,4,6,5,10 and 9 and 11,12,13,14,16,15,20 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1,9 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation) and DAVID et al. (US 2022/0012595 A1).
Claims 2,3,4,6,5 are rejected under 35 U.S.C. 103 over Gu in view of Liu and David as applied in the rejection of claims 1,9, further in view of LIN et al. (CN 110555458 A) with machine translation.
Claim 10 is rejected under 35 U.S.C. 103 over Gu in view of Liu and David as applied in claims 1,9, further in view of Jung (Learning to Avoid Errors in GANs by Manipulating Input Spaces).
Claims 11,19 are rejected under 35 U.S.C. 103 over Gu in view of Liu.
Claims 12,13,14,16,15 are rejected under 35 U.S.C. 103 over Gu in view of Liu as applied in claims 11,19, further in view of LIN with machine translation.
Claim 20 is rejected under 35 U.S.C. 103 over Gu in view of Liu as applied in claims 11,19, further in view of Jung.

Response to Amendment

The amendment was received 12/5/2025. Claims 7,8 and 17,18 are canceled. Claims 1,2,3,4,6,5,10 and 9 and 11,12,13,14,16,15,20 and 19 are pending.

Priority

Receipt is acknowledged of certified copies (KOREA, REPUBLIC OF 10-2022-0188728, 12/29/2022) of papers (a translation, filed 12/05/2025) required by 37 CFR 1.55. Applicant can now rely upon the certified copy (filed 06/16/2023) of the foreign priority application (KOREA, REPUBLIC OF 10-2022-0188728, 12/29/2022) to overcome the earlier rejection (claims 1-20 rejected under 35 U.S.C. 102(a)(1) as being anticipated by the best reference, Kim et al. (NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency), in the Office action of 08/07/2025, starting page 20) because a translation (filed 12/05/2025) of said application has now been made of record in accordance with 37 CFR 1.55.
When an English language translation (filed 12/05/2025) of a non-English language foreign application (KOREA, REPUBLIC OF 10-2022-0188728, 12/29/2022) is required, the translation (filed 12/05/2025) must be that of the certified copy of the foreign application as filed (filed 06/16/2023), submitted together with a statement (filed 12/05/2025: Oath or Declaration, "CERTIFICATION") that the translation of the certified copy (filed 06/16/2023) is accurate. See MPEP §§ 215 and 216.

Claim Objections

Claims 1,2,3,4,6,5,10 and 9 and 19 are objected to because of the following informalities: Claim 1 has no period; thus claims 2,3,4,6,5,10 are objected to for depending on claim 1. Claim 9 depends on canceled claim 8 and is interpreted to depend on claim 1. Claim 19 depends on canceled claim 18 and is interpreted to depend on claim 11. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1,2,3,4,6,5,10 and 9 and 11,12,13,14,16,15,20 and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 0: Broadest Reasonable Interpretation is set out in the footnotes of this Office action.

Step 1: The claimed invention is directed to a machine and a method: claim 1 is to a machine and claim 11 is to a method.

Step 2A, prong 1: The claims recite math (--one sample vector from a first neural network generator … a second neural network generator … a feature map … a lightweight neural network target model … generator[1] … parameter[2] … distribution[3] … distribution value--) without significantly more:

1. An apparatus for generating training data, comprising: at least one processor; and a memory to store instructions for executing the at least one processor, wherein upon being executed by the at least one processor, the instructions allow the at least one processor to: output a first image for one sample vector from a first neural network[4] generator included in the apparatus, and generate a second image from a second neural network generator included in the apparatus based on the first image and a feature map extracted from a convolution block for each stage of a lightweight neural network target model[5] for the first image, generate a third image from a third neural network generator included in the apparatus based on at least one of the first image or the second image, wherein the third neural network generator generates a fourth image by applying a scaling parameter which adjusts an output channel distribution to the third image close to a channel distribution value of original training data of the lightweight target neural network model, and wherein the generating of the third image comprises generating a fourth image by applying a scaling parameter which adjusts an output channel distribution to the third image

Step 2A, prong 2: This judicial exception is not integrated into a practical application because the additional elements ("processor," "memory," "neural network," "image," "output channel," "channel") considered with the math do not improve computer-electronics technology or a technical field[6] ("lightweight deep learning"[7]) in view of applicant's disclosure.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because each additional element ("processor," "memory," "neural network," "image," "output channel," "channel"), considered individually or with the math (--one sample vector from a first neural network generator … a second neural network generator … a feature map … a lightweight neural network target model … generator[8] … parameter[9] … distribution[10] … distribution value--), adheres to conventional practices as indicated in the background of applicant's specification[11].

Response to Arguments

Claim Objections: Applicant's arguments, see remarks, page 7, filed 12/05/2025, with respect to the objection of claims 1-10 have been fully considered and are persuasive. The objection of claims 1-10 has been withdrawn.

Rejections under 35 USC 101: Applicant's arguments filed 12/05/2025 have been fully considered but they are not persuasive. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies ("learned," "aligns," "a specific multi-stage neural-network pipeline," "learned … within neural networks," "learned … controls channel-utilization efficiency and aligns … a multi-stage adaptive process, in which intermediate outputs (e.g., the second image and feature maps) are used to correct and refine later outputs," "learned … internal operation of the lightweight neural network model itself," "neural-network training," "guide … learned … statistically … preserving performance," "This ordered combination … compressing or lightening a neural network") are not recited in the rejected claims. Applicant's remarks, pages 8-10, state:

Moreover, as amended, claim 1 now requires the at least one processor to: output a first image from a first neural network generator for a sample vector; generate a second image from a second neural network generator based on the first image and a feature map extracted from convolution blocks of a lightweight neural network target model; and generate a third image from a third neural network generator based on at least one of the first or second images, wherein the third image is produced by applying a learned scaling parameter that adjusts the output channel distribution of the third image so that its channel distribution aligns with that of the original training data of the lightweight neural network target model. These steps define a specific multi-stage neural-network pipeline that coordinates multiple neural network generators with the internal convolutional structure of a lightweight neural network model. This architecture is designed to preserve original data characteristics and stabilize performance even when the target model is compressed or "lightweight." Far from being a mental process or a mere mathematical formula, the claim recites particular operations of convolution blocks, feature maps, and learned scaling parameters within neural networks, technical elements that only exist and operate within computer technology. Even assuming arguendo that some mathematical concept is implicated, the claimed features are integrated into a practical application. The claimed apparatus operates on actual images and feature maps generated by neural network generators and convolution blocks, not on abstract numbers.
The learned scaling parameter is not a generic coefficient but a structural optimization parameter that controls channel-utilization efficiency and aligns channel distributions of generated images with those of the original training data, thereby addressing the technical problem of performance degradation in lightweight neural network models. Further, the generation of the third image via the third neural network generator forms part of a multi-stage adaptive process, in which intermediate outputs (e.g., the second image and feature maps) are used to correct and refine later outputs. This adaptive, multi-stage architecture is analogous to the rule-based and learning-based improvements found eligible in McRO, where structured procedures for generating improved outputs rendered the claims patent-eligible. Here, the claims specify how the neural network generators, convolution blocks, and learned scaling parameter interact to improve the internal operation of the lightweight neural network model itself, rather than merely using a computer as a tool to perform arithmetic. The claims therefore fit within the category of technological improvements over conventional neural-network training techniques as contemplated in decisions such as Enfish, McRO, DDR Holdings, and in USPTO Examples 39 and 47. At Step 2B, the claims recite significantly more than any alleged abstract idea. The additional elements are not "well-understood, routine, and conventional" in combination: using multiple neural network generators (first, second, and third) in cooperation with a lightweight neural network target model; employing feature maps extracted from specific convolution blocks of the lightweight neural network model to guide generation of the second image; and applying a learned scaling parameter specifically to adjust the channel distribution of the third image such that it statistically matches the original training data distribution, thereby preserving performance of the lightweight model. This ordered combination provides a specific, non-conventional solution to the technical problem of maintaining inference accuracy when compressing or lightening a neural network. The Office has not identified, and Applicant is unaware of, any evidence that this particular arrangement of neural network generators, convolution blocks, and channel-distribution-based scaling parameters was routine or conventional in the art.

Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Rejections under 35 USC 102/103: Applicant's arguments, see remarks, pages 11,12, filed 12/05/2025, with respect to 35 USC 102(a)(1) have been fully considered and are persuasive. The rejection made in the Office action of 08/07/2025, page 20 (claims 1-20 rejected under 35 U.S.C. 102(a)(1) as being anticipated by the best reference, Kim et al. (NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency)) has been withdrawn. Thus all 35 USC 102 and 103 rejections in the Office action of 8/7/2025 are withdrawn, and Gu (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising), as applied in the Office action of 08/07/2025, is being applied to the current claims (12/05/2025) under 35 USC 103 as the primary reference in the following rejections, wherein David (US 20220012595 A1) teaches adjusting an input distribution close to the original training data corresponding to currently amended claim 1:

[0026] The seed input data may be tuned to more closely match or converge to the original training data (e.g., having an input space distribution that more closely resembles that of the original training data, compared to a uniform input space distribution). In one embodiment, where the type or distribution of training data is unknown (e.g., not clear if it is image, text, or audio data, or if the distribution of data in the input space is Gaussian or constant), the target model may be probed to discern the type or distribution of training data. Ideally, minor adjustments in samples of the correct type or distribution (e.g., same as or substantially similar to the training dataset) will typically result in small changes to the model output (stable model), whereas minor adjustments in samples of the incorrect type or distribution may result in relatively large changes to the model output (unstable model). Accordingly, some embodiments may probe the model with multiple slightly different samples, e.g., varied according to a Gaussian, uniform, or other distributions and/or for each of a plurality of different data types. The data type and/or distribution for which the model is most stable (e.g., where relatively small changes in the input space cause relatively small changes in the output space) may be used as the data type and/or distribution of the seed input data. This mechanism may be performed in an initial test probe of the target model, e.g., prior to divergent behavior probes that test student-mentor output differences for ultimately extrapolating the divergent probe training dataset. For example, probing with random seed input data may be a first iteration, after which the seed data are incrementally adjusted to maximize or increase divergent student-mentor outputs, in each subsequent iteration, to generate dynamically adjusted divergent probe training data.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1,9 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation) and DAVID et al. (US 2022/0012595 A1):

Claim 1 is rejected like broader claim 11 below.

Re claim 1 (Currently Amended): Gu of the combination (illustrated above) of Gu, Liu teaches an apparatus for generating training data, comprising: at least one ("GPU", pg. 78, C. Implementation Details, 3rd para, last S) processor; and a memory to store instructions for executing the at least one processor, wherein upon being executed by the at least one processor, the instructions allow the at least one processor to: output a first image (fig. 1: "X" or "Y") for one sample vector from a first neural network generator included in the apparatus, and generate a second image (fig. 1: "X" or "Y") from a second neural network generator (fig. 2: "G") included in the apparatus based on the first image and a ("decoder", pg. 78, C. Implementation Details, 1st para, 4th S) feature map extracted (decoded) from a convolution block (via fig. 2: "Convolution layer": "convolution layer" "feature map"[12], pg. 78, C. Implementation Details, 2nd para, 4th S) for each stage ("of 4 stages", pg. 78, C. Implementation Details, 1st para, 1st S) of a ("very", pg. 74, lcol, 1st full para, 4th S) lightweight[13] (&) neural network[14] (&) target[15] model (fig. 2: "AdaIN Code Generator"; see the rejection of claim 11) for the first image, generate a third image from a third neural network generator included in the apparatus based on at least one of the first image or the second image (see the rejection of claim 11), wherein the third neural network generator generates a fourth image by applying a scaling parameter (see the rejection of claim 11) which adjusts[16] (since "adjusts" is further modified by "close to", Gu of the combination of Gu, Liu does not teach the modification of "adjusts … close to") an output channel distribution to the third image close[17] to a channel distribution value of original training data of the lightweight target neural network model (see the rejection of claim 11), and wherein the generating of the third image comprises generating a fourth image by applying a scaling parameter which adjusts an output channel distribution to the third image (see the rejection of claim 11).

Gu of the combination (illustrated above) of Gu, Liu does not teach the difference of claim 1 of: --adjusts (an output channel distribution)[18] …[19] close to … original training data--. David teaches the difference of claim 1: adjusts (an output channel distribution)[20] …[21] close to … original training data (via [0026]: "The seed input data may be tuned[22] to more closely match or converge to the original training data (e.g., having an input space distribution that more closely resembles that of the original training data, compared to a uniform input space distribution).").
Since Gu of the combination of Gu, Liu teaches a distribution, one of skill in the art of distributions can make Gu's of the combination (illustrated above) of Gu, Liu be as David's, seeing in the change "accelerating training, and improving accuracy for the same number of training iterations, as compared to training using a random or equally distributed training dataset." (David, [0007], last S).

Re claim 9: Gu of the combination (illustrated above) of Gu, Liu, David teaches the apparatus for generating training data according to claim 8, wherein the (channel-wise) scaling parameter is learned such that a (UN) channel distribution value of the third image is close (or "common"[23], Liu: pg. 4647, lcol, 2nd para, penult S, as shown in fig. 7) to a channel distribution value (as shown in Liu's fig. 7: "Z-score" is close or similar to "UN (proposed)" in terms of "Frequency" and box-plot "Values") of original training data of the lightweight target neural network model.

Claims 2,3,4,6,5 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation) and DAVID et al. (US 2022/0012595 A1) as applied in the rejection of claims 1,9, further in view of LIN et al. (CN 110555458 A) with machine translation:

Re claim 2 (Currently Amended): Gu of the combination (illustrated above) of Gu, Liu, David teaches the apparatus for generating training data according to claim 1, wherein the lightweight (AdaIN) target neural network model includes at least one first convolution block (via Gu's fig. 2: "Convolution layer") to generate the feature map, and wherein the second neural network generator (via Gu's fig. 2: "G") includes at least one second convolution block (via Gu's fig. 2: "Convolution layer") to generate a feature enhancement map. Gu of the combination (illustrated above) of Gu, Liu, David does not teach the difference of claim 2: a feature enhancement map. Lin teaches "the multi-channel feature enhancement map", 3rd page, 1st S. Since Gu of the combination (illustrated above) of Gu, Liu, David teaches a feature map, it would have been obvious to make the feature map as Lin's, predictably recognizing the change as being enhanced or increased in quality.

Re claim 3 (Original): Gu of the combination (illustrated above) of Gu, Liu, David, Lin teaches the apparatus for generating training data according to claim 2, wherein the feature map of the first convolution block (via Gu's fig. 2: "Convolution layer") mapped with the second convolution block (via Gu's fig. 2: "Convolution layer") is combined (or "concatenated", Gu: pg. 78, C. Implementation Details, 1st para, 4th S) with the feature enhancement map of the second convolution block.

Re claim 4 (Currently Amended): Gu of the combination (illustrated above) of Gu, Liu, David, Lin teaches the apparatus for generating training data according to claim 3, wherein the first convolution block (via Gu's fig. 2: "Convolution layer") mapped with the second convolution block (via Gu's fig. 2: "Convolution layer") includes a remaining first convolution block (via Gu's fig. 2: "Convolution layer") except the first convolution block (via Gu's fig. 2: "Convolution layer") of a last stage ("of 4 stages", Gu: pg. 78, C. Implementation Details, 1st para, 1st S) of the ("very", Gu: pg. 74, lcol, 1st full para, 4th S) lightweight target neural network model (Gu: fig. 2: "AdaIN Code Generator").

Re claim 6 (Currently Amended): Gu of the combination (illustrated above) of Gu, Liu, David, Lin teaches the apparatus for generating training data according to claim 4, wherein the feature map (via Gu's fig. 2: "Convolution layer") of the first convolution block (via Gu's fig. 2: "Convolution layer") of the last stage is used as an input value (represented as arrows in fig. 2) of the second neural network generator (Gu's fig. 2: "G").

Re claim 5 (Currently Amended)[24]: Gu of the combination (illustrated above) of Gu, Liu, David, Lin teaches the apparatus for generating training data according to claim 3, wherein in case of the at least one second convolution block (via Gu's fig. 2: "Convolution layer") being a plurality of second convolution blocks (via Gu's fig. 2: "Convolution layer"), the feature enhancement map (via Gu's fig. 2: "Convolution layer") of a previous second convolution block (via Gu's fig. 2: "Convolution layer") in combination with the feature map (via Gu's fig. 2: "Convolution layer") of the first convolution block (via Gu's fig. 2: "Convolution layer") corresponding to the previous second convolution block (via Gu's fig. 2: "Convolution layer") is included in an input value (represented as arrows in fig. 2) of a next second convolution block (via Gu's fig. 2: "Convolution layer").

5. (Currently Amended) The apparatus for generating training data according to claim 3, wherein in case of the at least one second convolution block being a plurality of second convolution blocks, the feature enhancement map of a previous second convolution block in combination with the feature map of the first convolution block corresponding to the previous second convolution block is included in an input value of a next second convolution block.

5.[25] The apparatus for generating training data according to claim 3, wherein in case of the at least one second convolution block being a plurality of second convolution blocks, the feature enhancement map of a previous second convolution block in combination with the feature map of the first convolution block corresponding to the previous second convolution block is included in an input value of a next second convolution block.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation) and DAVID et al. (US 2022/0012595 A1) as applied in claims 1,9, further in view of Jung (Learning to Avoid Errors in GANs by Manipulating Input Spaces):

Re claim 10 (Currently Amended): Gu of the combination (illustrated above) of Gu, Liu, David, Lin teaches the apparatus for generating training data according to claim 1, wherein the first neural network generator (Gu: fig. 2: "G") iteratively generates the first image for a first sample vector a preset number of times, and upon the preset number of times being exceeded, iteratively generates the first image for a second sample vector after the first neural network generator (fig. 2: "G") is initialized.
Gu of the combination (illustrated above) of Gu, Liu, David, Lin does not teach the difference of claim 10: iteratively generates the first image for a first sample vector a preset number of times, and upon the preset number of times being exceeded, iteratively generates the first image for a second sample vector after the first neural network generator is initialized.

Jung teaches the difference of claim 10: 10. (Currently Amended) The apparatus for generating training data according to claim 1, wherein the first neural network generator (fig. 2: "G") iteratively generates the first image (resulting in "better image" "iterative improvements" in the description of fig. 2; fig. 5: series of better, iteratively improved face images) for a first ("sampled", pg. 2: 2 Proposed Method, 2nd S) sample vector ("z(i)": "Algorithm 2 … Nz is the number of components per noise vector", page 32) a preset number (or the first iteration, i.e., iteration 1 out of NR iterations, via "if first iteration then", page 32, Algorithm 2, line 3) of (NR) times, and upon the preset number (1) of (NR) times being exceeded (via Algorithm 2, line 6: otherwise, if not the first iteration, i.e., "else"), iteratively generates (via Algorithm 2, line 19: "Update the generator by descending its stochastic gradient") the first image for a second sample vector (via Algorithm 2, line 4: "z(i)") after the first neural network generator (fig. 2: "G") is initialized (via fig. 1(a): "Initial situation").

Since Gu of the combination (illustrated above) of Gu, Liu, David, Lin teaches a generator, one of skill in the art of generators can make Gu's of the combination (illustrated above) of Gu, Liu, David, Lin be as Jung's, predictably recognizing the change as generating better, improved images.

Claims 11,19 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation):

Re claim 11 (Currently Amended): Gu teaches a method for generating training data, performed by an apparatus for generating training data, including at least one processor and a memory to store instructions for executing the at least one processor, the method comprising: outputting a first image for one sample vector (or "mean[26] vector", pg. 75, A. Switchable Generator Using AdaIN Layers, 2nd para, 4th S) from a first neural network generator included in the apparatus; and generating a second image from a second neural network generator included in the apparatus based on the first image and a feature map extracted from a convolution block for each stage of a lightweight[27] ("dubbed AdaIN code generator", pg. 74, lcol, 2nd para, 4th S) (&) target[28] (&) neural network[29][30] model (or "AdaIN layers"-fig. 2: layers-"feature maps"[31], pg. 78, C. Implementation Details, 1st para, last S; fig. 5: -target-map-function: "G") for the first image (fig. 5: "y" or "x"), generating a third image from a third neural network generator included in the apparatus based on at least one of the first image or the second image (via fig. 1), and wherein the generating of the third image (via fig. 1) comprises generating a fourth image (via fig. 1) by applying a scaling (via "globally-scaled noise reduction", pg. 83, description of fig. 11) parameter ("scaled … reduction" is not the same as the claimed "scaling parameter") which adjusts an output[32] (via "adjusting … the outputs", pg. 74, C. Adaptive Instance Normalization (AdaIN), 1st para, 2nd S) (&) channel[33] distribution[34] (via "output" "feature"[35], pg. 78, C. Implementation Details, 1st para, last S, and "channel feature", pg. 74, C. Adaptive Instance Normalization (AdaIN), 2nd para, 1st S) to the third image (via fig. 1).

Gu does not teach the difference of claim 11 of: --a scaling parameter which--. Liu teaches the difference of claim 11 of: --a scaling parameter which-- via page 4644: "parameter of … scaling" (feature/distribution scaling parameter). Since Gu teaches scaling, one of skill in the art of scaling can make Gu's be as Liu's, seeing in the change "improving the aesthetic visual quality of images" (Liu, abstract, 1st S).

Claim 19 is rejected like claim 9: 19. (Currently Amended) The method for generating training data according to claim 18, wherein the scaling parameter is learned such that a channel distribution value of the third image is close to a channel distribution value of original training data of the lightweight target neural network model.

Claims 12,13,14,16,15 are rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation) as applied in claims 11,19, further in view of LIN et al. (CN 110555458 A) with machine translation:

Claim 12 is rejected like claim 2: 12. (Currently Amended) The method for generating training data according to claim 11, wherein the lightweight target neural network model includes at least one first convolution block to generate the feature map, and wherein the second neural network generator includes at least one second convolution block to generate a feature enhancement map.

Claim 13[36] is rejected like claim 3: 13. (Currently Amended) The method for generating training data according to claim 12, wherein the feature map of the first convolution block mapped with the second convolution block is combined with the feature enhancement map of the second convolution block.

Claim 14 is rejected like claim 4: 14. (Currently Amended) The method for generating training data according to claim 13, wherein the first convolution block mapped with the second convolution block includes a remaining first convolution block except the first convolution block of a last stage of the lightweight target neural network model.

Claim 16 is rejected like claim 6: 16. (Currently Amended) The method for generating training data according to claim 14, wherein the feature map of the first convolution block of the last stage is used as an input value of the second neural network generator.

Claim 15 is rejected like claim 5: 15. (Original) The method for generating training data according to claim 13, wherein in case of the at least one second convolution block being a plurality of second convolution blocks, the feature enhancement map of a previous second convolution block in combination with the feature map of the first convolution block corresponding to the previous second convolution block is included in an input value of a next second convolution block.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Gu et al. (AdaIN-Based Tunable CycleGAN for Efficient Unsupervised Low-Dose CT Denoising) in view of Liu et al. (Very Lightweight Photo Retouching Network With Conditional Sequential Modulation) as applied in claims 11,19, further in view of Jung (Learning to Avoid Errors in GANs by Manipulating Input Spaces):

Claim 20 is rejected like claim 10: 20. (Currently Amended) The method for generating training data according to claim 11, wherein the generating of the first image comprises iteratively generating the first image for a first sample vector a preset number of times, and upon the preset number of times being exceeded, initializing the first neural network generator and iteratively generating the first image for a second sample vector.

Conclusion

The prior art "nearest to the subject matter defined in the claims" (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are relevant to the subject matter claimed and disclosed in this application. They are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.

Citation: ALOIS et al. (DE 102021202293 A1) with SEARCH machine translation.
Relevance: ALOIS teaches, pg. 8, 4th text block: "The first trained function is adjusted in such a way that the result matches the original training data as closely as possible." — the closest to the claimed "adjusts an output channel distribution to the third image close to a channel distribution value of original training data" of claim 1.

Citation: Zhang et al. (Lifting up Imbalanced Data Classification to SVM Based Ensemble Level on Oversampling Feature Spaces).
Relevance: Zhang teaches, pg. 297, lcol, 5th full para: --We randomly selected data among the minority class Min choice S, and the KB-GD algorithm was used to oversample the original training data in the feature space to the majority class of the data, which were respectively mapped to three different feature spaces.-- and pg. 299, rcol, "10)": --One approach is to solve the problem of imbalanced data classification by oversampling the data and adjusting the classification hyperplane boundary to close to the majority class.-- — the closest to the claimed "adjusts an output channel distribution to the third image close to a channel distribution value of original training data" of claim 1.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENNIS ROSARIO/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Footnotes

1. generator: Mathematics. A. an element or one of a set of elements from which a specified mathematical object can be formed by applying certain operations. B. an element, as a line, that generates a figure. (Dictionary.com)
2. parameter: Mathematics. A. a constant or variable term in a function that determines the specific form of the function but not its general nature, as a in f(x) = ax, where a determines only the slope of the line described by f(x). B. one of the independent variables in a set of parametric equations. (Dictionary.com)
3. distribution: Mathematics. a generalized function used especially in solving differential equations. (Dictionary.com)
4. neural network: Also called neural net. Computers. a hardware or software system in which weighted connections between data nodes are refined to produce increasingly accurate results in information processing, as in pattern recognition or problem solving, with the goal of algorithmic computing that requires minimal human intervention. (Dictionary.com)
5. Applicant's disclosure: [0058] The lightweight target model 40 refers to a model that is a target to be lightweight, and may be a trained classifier based on the original training data to be reproduced.
6. Applicant's disclosure: 1. Field [0002] The disclosed embodiments relate to an apparatus and method for generating training data.
More particularly, the disclosed embodiments relate to technology for generating training data similar to original training data without access to the original training data. wherein "for" is defined: (lightweight deep learning) intended to belong to, or be used in connection with (generating training data similar to original training data without access to the original training data) (Dictionary.com). 2. Description of the Related Art [0005] Lightweight deep learning is essential for deep learning in mobile, edge or cloud environments. Here, lightweight deep learning refers to technology that generates a compression model with a similar level of performance to the original model and a smaller amount of computational resources. wherein "refers to" is defined: to relate to; apply to; mean or denote. (Dictionary.com)
7. deep learning: Computers. an advanced type of machine learning that uses multilayered neural networks to establish nested hierarchical models for data processing and analysis, as in image recognition or natural language processing, with the goal of self-directed information processing, wherein neural network is defined: Also called neural net. Computers. a hardware or software system in which weighted connections between data nodes are refined to produce increasingly accurate results in information processing, as in pattern recognition or problem solving, with the goal of algorithmic computing that requires minimal human intervention. (Dictionary.com)
8. generator: Mathematics. A. an element or one of a set of elements from which a specified mathematical object can be formed by applying certain operations. B. an element, as a line, that generates a figure. (Dictionary.com)
9. parameter: Mathematics. A. a constant or variable term in a function that determines the specific form of the function but not its general nature, as a in f(x) = ax, where a determines only the slope of the line described by f(x). B. one of the independent variables in a set of parametric equations. (Dictionary.com)
10. distribution: Mathematics. a generalized function used especially in solving differential equations. (Dictionary.com)
11. background: one's origin, education, experience, etc., in relation to one's present character, status, etc., wherein experience is defined: knowledge or practical wisdom gained from what one has observed, encountered, or undergone, wherein practical is defined: of or relating to practice or action, wherein practice is defined: custom, wherein custom is defined: convention, wherein convention is defined: conventionalism, wherein conventionalism is defined: adherence to or advocacy of conventional attitudes or practices (Dictionary.com)
12. map: Mathematics. function, wherein function is defined: Mathematics. Also called correspondence, map, mapping, transformation. a relation between two sets in which one element of the second (data) set is assigned to each element of the first (data) set, as the expression (data set) y = (data set) x²; operator, wherein data set is defined: Computers. a collection of data records for computer processing, wherein record is defined: Computers. a group of related fields, or a single field, treated as a unit and comprising part of a file or data set, for purposes of input, processing, output, or storage by a computer, wherein file is defined: a collection of papers, records, etc., arranged in convenient order, wherein order is defined: formal disposition or array, wherein array is defined: Computers. a block of related data elements, each of which is usually identified by one or more subscripts (see equations (1) & (2): x1 & y1). (Dictionary.com)
13. coordinate adjective
14. coordinate adjective
15. coordinate adjective
16. "adjusts" is further modified by the adverb "close to"
17. "close to" is an adverb further modifying the claimed "adjusts"
18. (italics) represent claim limitations already taught
19. ellipses (…) represent claim limitations already taught
20. (italics) represent claim limitations already taught
21. ellipses (…) represent claim limitations already taught
22. tune: to adjust for proper functioning or for the desired results. (Dictionary.com)
23. common: Mathematics. bearing a similar relation to two or more entities. (Dictionary.com)
24. Is claim 5 currently amended? It looks the same.
25. original claim 5, 5/22/2023
26. mean: maths. another name for average. See also geometric mean, wherein average is defined: the typical or normal amount, quality, degree, etc, wherein typical is defined: being or serving as a representative example of a particular type; characteristic, wherein example is defined: a specimen or instance that is typical of the group or set of which it forms part; sample (Dictionary.com)
27. coordinate adjective, wherein coordinate is defined: Grammar. of the same rank in grammatical construction, as Jack and Jill in the phrase Jack and Jill, or got up and shook hands in the sentence He got up and shook hands. (Dictionary.com)
28. coordinate adjective
29. coordinate adjective
30. Also called: neural net. an analogous network of electronic components, esp one in a computer designed to mimic the operation of the human brain, wherein mimic is defined: to imitate (a person, a manner, etc), esp for satirical effect; ape, wherein imitate is defined: to try to follow the manner, style, character, etc, of or take as a model (Dictionary.com)
31. map: a maplike delineation, representation, or reflection of anything, wherein representation is defined: the act of representing, wherein represent is defined: to serve as an example or specimen of; exemplify, wherein example is defined: a pattern or model, as of something to be imitated or avoided. (Dictionary.com): "the generator makes … the fake image", pg. 78, rcol, 2nd para, 3rd S.
32. coordinate adjective
33. coordinate adjective
34. distribution: an act or instance of distributing. (Dictionary.com)
35. feature: a prominent or conspicuous part or characteristic, wherein part is defined: an allotted portion; share, wherein allotted is defined: divided or distributed by share or portion; parceled out. (Dictionary.com)
36. looks the same as original claim 13
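For readers outside the art unit, the limitation argued throughout this action — a learned scaling parameter that adjusts a generated image's output channel distribution toward the channel statistics of the target model's original training data — can be pictured with a small, hypothetical PyTorch-style sketch. This is illustrative only; it is not the applicant's implementation or the approach of any cited reference, and every name in it is made up.

```python
# A minimal, hypothetical sketch (PyTorch) of a learned per-channel scaling
# parameter that nudges a generated image's channel distribution toward
# reference channel statistics of some original training data. Illustrative
# only; not the applicant's or any cited reference's actual implementation.
import torch
import torch.nn as nn

class ChannelScaler(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # Learned scaling parameter, one value per output channel.
        self.scale = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); scale each channel.
        return x * self.scale.view(1, -1, 1, 1)

def channel_distribution_loss(x, ref_mean, ref_std):
    # Penalize the gap between the scaled image's per-channel mean/std and
    # the reference statistics of the original training data.
    mean = x.mean(dim=(0, 2, 3))
    std = x.std(dim=(0, 2, 3))
    return ((mean - ref_mean) ** 2).sum() + ((std - ref_std) ** 2).sum()

# Usage sketch: "third_image" would come from the third generator; the
# reference statistics would be per-channel values of the original data.
third_image = torch.randn(8, 3, 32, 32)
ref_mean, ref_std = torch.zeros(3), torch.ones(3)
scaler = ChannelScaler(num_channels=3)
optimizer = torch.optim.Adam(scaler.parameters(), lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    fourth_image = scaler(third_image)          # claim language: "fourth image"
    loss = channel_distribution_loss(fourth_image, ref_mean, ref_std)
    loss.backward()
    optimizer.step()
```

Matching per-channel mean and standard deviation is only one way to read "close to a channel distribution value"; the claims do not specify a particular distance measure.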

Prosecution Timeline

May 22, 2023 — Application Filed
Aug 04, 2025 — Non-Final Rejection — §101, §102, §103
Dec 05, 2025 — Response Filed
Jan 23, 2026 — Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184 — METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING — granted Mar 24, 2026 (2y 5m to grant)
Patent 12585733 — SYSTEMS AND METHODS OF SENSOR DATA FUSION — granted Mar 24, 2026 (2y 5m to grant)
Patent 12536786 — IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT — granted Jan 27, 2026 (2y 5m to grant)
Patent 12518519 — PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD — granted Jan 06, 2026 (2y 5m to grant)
Patent 12518404 — SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT — granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 98% (+28.6%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
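The note above says the grant probability comes from the career allow rate. A minimal sketch of how the with-interview figure follows from numbers already on this page; the dashboard's actual model is not disclosed, so treating the interview lift as a straight additive bump is an assumption.

```python
# Illustrative projection from figures already shown on this page.
granted, resolved = 385, 557
grant_probability = granted / resolved          # ~0.69, displayed as 69%
interview_lift = 0.286                          # "+28.6%" interview lift
with_interview = min(grant_probability + interview_lift, 1.0)
print(f"Grant probability: {grant_probability:.0%}")   # 69%
print(f"With interview:    {with_interview:.0%}")      # ~98%
```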
