Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The following action is in response to the communication(s) received on 12/12/2025.
As of the claims filed 12/12/2025:
Claims 1, 2, 5, 6, 9, 13, 14, and 17 have been amended.
Claims 1-20 are pending.
Claims 1, 5, and 13 are independent claims.
Response to Arguments
Applicant’s arguments filed 12/12/2025 have been fully considered, but are not fully persuasive.
The amendments to the Specification have overcome the objection regarding new matter. Thus, the objection has been withdrawn.
Applicant asserts that the claims do not recite “means” or “step” and thus should not be interpreted under 35 U.S.C. § 112(f). Examiner respectfully submits that, although the claims do not recite “means” or “step,” they are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder (e.g., “module”) coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier (see MPEP § 2181(I), the “3-prong analysis”).
Regarding the rejections under 35 U.S.C. § 112(b):
The amended limitations have overcome the indefiniteness rejections regarding “…are at least partially decorrelated” and how the modules “are trained”. Thus, those rejections have been withdrawn for the affected claims and their dependent claims.
The amendments to claims 9 and 17 have overcome the new matter rejections. Thus, those rejections have been withdrawn.
However, claims 9 and 17 remain rejected under 35 U.S.C. § 112(b) as they still do not recite descriptions for the variables.
The amendments have overcome the eligibility rejection under 35 U.S.C. § 101. Thus, the rejection has been withdrawn.
Regarding the prior art rejection under 35 U.S.C. § 102:
Applicant asserts that Peng does not teach the independent feature sets in the correlation loss of claims 1, 5, and 13 (p.13 ¶3-4). Examiner respectfully submits that independence of the feature sets is not recited in the claims; rather, the claims merely require that the correlation loss is related to the first and second feature sets, which is taught by Peng (Peng, fig.1 middle; fci and fds correspond to the first and second intermediate data sets, respectively; the data used to generate fci corresponds to the first feature set; the data used to generate fds corresponds to the second feature set).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Such claim limitation(s) is/are:
Claims 1-6, 13, and 14: “A dual neck autoencoder module for…”
Claims 1, 2, 5, and 13: “an encoder module…”
Claims 1, 2, 5, and 13: “a decoder module…”
Claims 1, 2, 4-6, 11, 13, 14, and 19: “first bottleneck module…”
Claims 1, 2, 4-6, 11, 13, 14, and 19: “second bottleneck module…”
Claims 6, 10, 14, and 18: “training module…”
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(a) and (b):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), first paragraph:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
In addition, Claims 9 and 17 recite a function with undefined variables, and the scope of those variables is unclear:
[image: media_image1.png]
Thus, claims 9 and 17 are indefinite.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8, 10-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Peng et al., "Domain Agnostic Learning with Disentangled Representations" (hereinafter Peng).
Regarding Claim 1, Peng teaches: A dual neck autoencoder module for reducing adversarial attack transferability, the dual neck autoencoder module comprising: an encoder module configured to receive input data and at least partially compress said input data; (Peng, fig.1 left,
[image: media_image2.png]
[p.3 left ¶3] The feature generator G maps the input image to a feature vector fG, which has many highly entangled factors.) (Note: the DADA architecture corresponds to the dual neck autoencoder module; G corresponds to the encoder module; mapping the input into the feature vector fG corresponds to compressing the input data)
a decoder module; (Peng, fig.1 right,
[image: media_image2.png]
) (Note: the top and bottom reconstructors collectively correspond to the decoder module)
and a first bottleneck module and a second bottleneck module coupled, in parallel, between the encoder module and the decoder module, the first bottleneck module configured to partially decompress a first feature set of the input data to produce a first intermediate data set and the second bottleneck module configured to partially decompress a second feature set of the input data to produce a second intermediate data set…(Peng, fig.1 middle,
[image: media_image3.png]
) (Note: fci and fds correspond to the first and second intermediate data sets, respectively; the data used to generate fci corresponds to the first feature set; the data used to generate fds corresponds to the second feature set; the part of D that makes fci corresponds to the first bottleneck module; the part of D that makes fds corresponds to the second bottleneck module)
…that is at least partially decorrelated based, at least in part, on a correlation loss (Peng, p.3 bottom right, “In the second step, we fix the class identifier and train the disentangler D to fool the class identifier by generating class-irrelevant features fci. This can be achieved by minimizing the negative entropy of the predicted class distribution:
[image: media_image4.png]
where the first term and the second term indicate minimizing the entropy on the source domain and on heterogeneous target, respectively. The above adversarial training process forces the corresponding disentangler to extract class-irrelevant features.
Domain Disentanglement To tackle the domain agnostic learning task, disentangling class-irrelevant features is not enough, as it fails to align the source domain with the target. To achieve better alignment, we further propose to disentangle the learned features into domain-specific and domain-invariant and to thus align the source with the target domain in the domain-invariant latent space. This is achieved by exploiting adversarial domain classification in the resulting latent space. Specifically, we leverage a domain identifier DI, which takes the disentangled feature (fdi or fds ) as input and outputs the domain label lf (source or target). The objective function of the domain identifier is as follows:
[image: media_image5.png]
Then the disentangler is trained to fool the domain identifier DI to extract domain-invariant features”) (Note: the measure of domain invariance in the disentangler corresponds to the correlation loss)
the decoder module configured to decompress the first and second intermediate data sets to generate a first estimate based, at least in part, on the first intermediate data set from the first bottleneck module, (Peng, fig.1 right,
[image: media_image3.png]
) (Note: the top right f^G corresponds to the first estimate; fci corresponds to the first intermediate data set)
and a second estimate based, at least in part, on the second intermediate data set from the second bottleneck module,
(Peng, fig.1 right,
[image: media_image3.png]
) (Note: the bottom right f^G corresponds to the second estimate; fds corresponds to the second intermediate data set)
wherein the first estimate and the second estimate correspond to restructured data sets.
(Peng, fig.1 right,
[image: media_image3.png]
) (Note: both f^G are produced by the reconstructor, thus corresponding to restructured data sets)
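For orientation, the data flow mapped above onto Claim 1 (an encoder, two parallel bottleneck modules, a shared decoder producing two estimates, and a correlation loss over the intermediate data sets) can be sketched as a minimal forward pass. This is an illustrative sketch only; the layer sizes, ReLU activations, and the cross-correlation penalty are assumptions for illustration, not taken from the claims or from Peng:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative (hypothetical) sizes: 16-d input, 4-d compressed code,
# 8-d intermediate data sets.
W_enc = rng.normal(size=(16, 4))   # encoder: partially compresses input
W_b1 = rng.normal(size=(4, 8))     # first bottleneck: partial decompression
W_b2 = rng.normal(size=(4, 8))     # second bottleneck: partial decompression
W_dec = rng.normal(size=(8, 16))   # shared decoder: full decompression

x = rng.normal(size=(32, 16))      # a batch of input data

code = relu(x @ W_enc)             # encoder output
z1 = relu(code @ W_b1)             # first intermediate data set
z2 = relu(code @ W_b2)             # second intermediate data set
est1 = z1 @ W_dec                  # first estimate (restructured data set)
est2 = z2 @ W_dec                  # second estimate (restructured data set)

# One plausible "correlation loss": squared Frobenius norm of the
# cross-correlation between the centered intermediate data sets; it is
# minimized when z1 and z2 are decorrelated.
c1 = z1 - z1.mean(axis=0)
c2 = z2 - z2.mean(axis=0)
corr_loss = np.sum((c1.T @ c2 / len(x)) ** 2)

# Reconstruction (mean square error) terms of a cost function, one per
# bottleneck, as in the Claim 4 limitation.
mse1 = np.mean((est1 - x) ** 2)
mse2 = np.mean((est2 - x) ** 2)
cost = mse1 + mse2 + corr_loss
print(est1.shape, est2.shape)
```

In such a sketch, minimizing the correlation-loss term drives the two intermediate data sets toward being at least partially decorrelated, which is the property the claim language recites.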
Regarding Claim 2, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Peng further teaches: The dual neck autoencoder module of claim 1, wherein the encoder module, the decoder module, the first bottleneck module and the second bottleneck module are trained using training data provided to the dual neck autoencoder, the training comprising minimizing a cost function that comprises a correlation loss function, the correlation loss function related to the first feature set produced by the first bottleneck module, and the second feature set produced by the second bottleneck module. (Peng, fig.1 leftmost “input”, p.3 right,
[image: media_image6.png]
[eq.2] [image: media_image7.png]
[eq.3] [image: media_image8.png]
[eq.4] [image: media_image9.png]
[Algorithm 1, lines 4-6; 9; 15] [image: media_image10.png]
) (Note: D being updated by Eq 1-4 corresponds to the cost function which comprises a correlation loss function; Eq 1 contains fG, which contains the first feature set and the second feature set)
Regarding Claim 3, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 1. Peng further teaches: The dual neck autoencoder module of claim 1, wherein each module comprises an artificial neural network. (Peng, p.2 left ¶2, We implement a neural network to estimate the mutual information between the disentangled feature distributions…)
Regarding Claim 4, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 2. Peng further teaches: The dual neck autoencoder module of claim 2, wherein the cost function comprises a first mean square error associated with the first bottleneck module, and a second mean square error associated with the second bottleneck module. (Peng, p.3 right,
[image: media_image11.png]
) (Note: each part of the disentangled representations corresponds to a mean square error associated with the first and second bottleneck modules, respectively)
Regarding Claim 5, Peng teaches: A method for reducing adversarial attack transferability, the method comprising: receiving, by a dual neck autoencoder module, input data, the dual neck autoencoder module comprising an encoder module configured to at least partially compress said input data, (Peng, fig.1 left,
[image: media_image2.png]
[p.3 left ¶3] The feature generator G maps the input image to a feature vector fG, which has many highly entangled factors.) (Note: the DADA architecture corresponds to the dual neck autoencoder module; G corresponds to the encoder module; mapping the input into the feature vector fG corresponds to compressing the input data)
a decoder module, (Peng, fig.1 right,
[image: media_image2.png]
) (Note: the top and bottom reconstructors collectively correspond to the decoder module)
and a first bottleneck module and a second bottleneck module coupled, in parallel, between the encoder module and the decoder module, wherein the first bottleneck module partially decompresses a first feature set of the input data to produce a first intermediate data set and the second bottleneck module partially decompresses a second feature set of the input data to produce a second intermediate data set… (Peng, fig.1 middle,
[image: media_image3.png]
) (Note: fci and fds correspond to the first and second intermediate data sets, respectively; the data used to generate fci corresponds to the first feature set; the data used to generate fds corresponds to the second feature set; the part of D that makes fci corresponds to the first bottleneck module; the part of D that makes fds corresponds to the second bottleneck module)
…that is at least partially decorrelated based, at least in part, on a correlation loss; (Peng, p.3 bottom right, “In the second step, we fix the class identifier and train the disentangler D to fool the class identifier by generating class-irrelevant features fci. This can be achieved by minimizing the negative entropy of the predicted class distribution:
[image: media_image4.png]
where the first term and the second term indicate minimizing the entropy on the source domain and on heterogeneous target, respectively. The above adversarial training process forces the corresponding disentangler to extract class-irrelevant features.
Domain Disentanglement To tackle the domain agnostic learning task, disentangling class-irrelevant features is not enough, as it fails to align the source domain with the target. To achieve better alignment, we further propose to disentangle the learned features into domain-specific and domain-invariant and to thus align the source with the target domain in the domain-invariant latent space. This is achieved by exploiting adversarial domain classification in the resulting latent space. Specifically, we leverage a domain identifier DI, which takes the disentangled feature (fdi or fds ) as input and outputs the domain label lf (source or target). The objective function of the domain identifier is as follows:
[image: media_image5.png]
Then the disentangler is trained to fool the domain identifier DI to extract domain-invariant features”) (Note: the measure of domain invariance in the disentangler corresponds to the correlation loss)
and generating, by the decoder module, a first estimate based, at least in part, on a first intermediate data set from the first bottleneck module, (Peng, fig.1 right,
[image: media_image3.png]
) (Note: the top right f^G corresponds to the first estimate; fci corresponds to the first intermediate data set)
and a second estimate based, at least in part, on a second intermediate data set from the second bottleneck module, (Peng, fig.1 right,
[image: media_image3.png]
) (Note: the bottom right f^G corresponds to the second estimate; fds corresponds to the second intermediate data set)
wherein the first estimate and the second estimate correspond to restructured data sets. (Peng, fig.1 right,
[image: media_image3.png]
) (Note: both f^G are produced by the reconstructor, thus corresponding to restructured data sets)
Regarding Claim 6, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 5. Peng further teaches:
The method of claim 5, further comprising training, by a training module, the dual neck autoencoder module, the training comprising minimizing a cost function that comprises a correlation loss function, the correlation loss function related to the first feature set produced by the first bottleneck module, and the second feature set produced by the second bottleneck module. (Peng, p.3 right,
[image: media_image6.png]
[eq.2] [image: media_image7.png]
[eq.3] [image: media_image8.png]
[eq.4] [image: media_image9.png]
[Algorithm 1, lines 4-6; 9; 15] [image: media_image10.png]
) (Note: Algorithm 1 corresponds to the training module; D being updated by Eq. 1-4 corresponds to the cost function which comprises a correlation loss function; Eq. 1 contains fG, which contains the first feature set and the second feature set)
Regarding Claim 7, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 5. Peng further teaches: The method of claim 5, further comprising determining an output, by a classifier module, based, at least in part, on the first estimate and based, at least in part, on the second estimate. (Peng, fig.1 right,
[image: media_image3.png]
(Note: the class identifier corresponds to the classifier module)
[p.3, bottom right, eq. (2)]
[image: media_image12.png]
[Algorithm 1, line 5, line 17]
[image: media_image13.png]
) (Note: C being updated, while not converged, using G and D corresponds to the output being based on the first estimate and the second estimate.)
Regarding Claim 8, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 5. Peng further teaches: The method of claim 5, wherein each module comprises an artificial neural network. (Peng, p.2 left ¶2, We implement a neural network to estimate the mutual information between the disentangled feature distributions…)
Regarding Claim 10, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 6. Peng further teaches: The method of claim 6, further comprising generating, by the training module, training data based, at least in part, on a surrogate adversarial model. (Peng, p.3 bottom left, “Adversarial training via a domain identifier aligns the source domain and the heterogeneous target domain in the fdi space”;
[Fig. 1]
[image: media_image14.png]
) (Note: the source domain and heterogeneous target domain in the fdi space corresponds to the surrogate adversarial model)
Regarding Claim 11, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 6. Peng further teaches:
The method of claim 6, wherein the cost function comprises a first mean square error associated with the first bottleneck module, and a second mean square error associated with the second bottleneck module. (Peng, p.3 right,
[image: media_image11.png]
) (Note: each part of the disentangled representations corresponds to a mean square error associated with the first and second bottleneck modules, respectively)
Regarding Claim 12, Peng respectively teaches and incorporates the claimed limitations and rejections of Claim 7. Peng further teaches: The method of claim 7, wherein the training comprises optimizing a classification based objective. (Peng, Algorithm 1, line 5, “Update…C by Eq.2”; [p.3, bottom right, Eq.2]
[image: media_image12.png]
) (Note: eq.2 corresponds to a classification-based objective)
Independent Claim 13 recites: A dual neck autoencoder system for reducing adversarial attack transferability, the system comprising: a computing device comprising a processor, a memory, an input/output circuitry, and a data store (Peng, p.5 left 1st ¶, “We employ the popular neural networks (e.g. LeNet, AlexNet, or ResNet) as our feature generator G. The detailed training procedure is presented in Algorithm 1.”; Note: training a neural network requires a computing device comprising a processor, memory, i/o circuitry, and a data store), the device being configured to perform precisely the method of Claim 5. Thus, Claim 13 is rejected for the reasons set forth for Claim 5.
Claims 14-20, dependent on Claim 13, also recite the device configured to perform precisely the methods of Claims 6-12, respectively, and thus are rejected for the reasons set forth for those claims.
Conclusion
Claims 9 and 17 have been searched, but no prior art that teaches the limitations recited therein has been uncovered.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEP HAN whose telephone number is (703)756-1346. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached on (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.H./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122