DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 4-22-2025 have been fully considered but they are not persuasive.
Regarding the 101 rejection, applicant’s representative [hereinafter applicant] submits “claim 1 does not recite a mental process that can be practically performed in the human mind, because, for example, the human mind, with or without a pen and paper, cannot perform a "generative portion to use the one or more parameters to detect one or more features of one or more images to generate one or more other images, wherein the generative portion is to use the second encoder portion to generate features depicted in the one or more other images that do not correspond to the one or more identical features," as recited by claim 1.” The examiner’s position is that the argument fails to provide any articulated reason why the limitation allegedly cannot be performed mentally, with or without a pen and paper. Please note that the argued limitation is presented in the claim as intended use, and intended use of the claimed invention must be evaluated to determine whether or not the recited purpose or intended use results in a structural difference. Moreover, the intended use of the generative portion is part of the “one or more neural networks,” which is also presented in the claim as intended use of the circuitry; thereby, one intended use depends on another intended use. Please see MPEP 2103 I.C. As explained above, the one or more neural networks and their portions are not required to be part of the circuitry, only to be used by the circuitry, resulting in no structural modification to the circuitry. Lastly, presenting a picture to a kindergarten class and asking the students to draw something from the picture can be practically performed in the human mind with a pen and paper [the examiner assumes that students will select different features]. As to the arguments directed to steps 2A and 2B, they point to the intended use, which suggests but does not require the limitation; thereby failing to integrate the abstract idea into a practical application.
As to the 112 rejection, the scope of the structures [if any] performing the steps, and the scope of what may be equated to the parameters, remain unclear, and the specification fails to clarify these issues.
As to the art rejection, please note that the Siamese structure shown in Figs. 2 and 3 shows multiple encoders, each directed to different features. Thereby, Yixiao still reads on the amended limitations.
The rest of the arguments fail for the same reasons as shown above. The rejection of record still applies.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-22 and 24-27 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because the claims purport to invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fail to recite a combination of elements as required by that statutory provision and thus cannot rely on the specification to provide the structure, materials, or acts to support the claimed function. As such, the claims recite a function that has no limits and covers every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claims. Under the broadest reasonable interpretation, the claims describe one circuit (i.e., a single element) identifying similarities between a first and second image. The encoder and discriminator portions of the neural network are not a part of (i.e., in combination with) the circuit and are merely the information that it is based upon, because a neural network is just a computer program or mathematical algorithm, i.e., it does not have a structure. Hence, the claims are directed to a single function of identifying similar features between pictures. Therefore, they are single means claims and are subject to an enablement rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph. In re Hyatt, 708 F.2d 712, 714-715, 218 USPQ 195, 197 (Fed. Cir. 1983). The rest of the claims share the deficiency by virtue of dependency.
MPEP 2164.08(a) points out:
A single means claim, i.e., where a means recitation does not appear in combination with another recited element of means, is subject to an enablement rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph. In re Hyatt, 708 F.2d 712, 714-715, 218 USPQ 195, 197 (Fed. Cir. 1983) (A single means claim which covered every conceivable means for achieving the stated purpose was held nonenabling for the scope of the claim because the specification disclosed at most only those means known to the inventor.). When claims depend on a recited property, a fact situation comparable to Hyatt is possible, where the claim covers every conceivable structure (means) for achieving the stated property (result) while the specification discloses at most only those known to the inventor.
Claims 1-22 and 24-27 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The examiner was unable to find sufficient support to determine the scope of the parameters. Please indicate where the support can be found. The rest of the claims share the deficiency by virtue of dependency.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-22 and 24-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The examiner was unable to find sufficient support to determine the scope of the parameters; thereby, the scope of the parameters is unclear. Please clarify. The rest of the claims share the deficiency by virtue of dependency.
Claims 1-22 and 24-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The circuitry identifies identical features in at least a first and second image; however, the generative and discriminative portions deal with one or more images and one or more other images. It is unclear what the connection is between the first and second image and the one or more images and one or more other images. Please clarify. The rest of the claims share the deficiency by virtue of dependency.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-22 and 24-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a series of steps; thereby, they recite a process. STEP 1: Yes.
The steps require identifying identical features.
The claim(s) recite(s) identifying whether one or more of the same features appear in at least a first and second image based, at least in part, on one or more neural networks sharing parameters if there is training, detecting a feature to generate an image, and detecting features in the one or more images and the one or more other images to identify the one or more identical features. As drafted, under the broadest reasonable interpretation, this covers performance of the limitation in the mind and/or by human behavior. Step 2A: Yes. The additional elements are portions or types of artificial intelligence models, which mimic the human mind; processors, circuitry, and a computer-readable medium. That is, other than the recitation of generic computer components, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, this claim encompasses a user manually/visually identifying same/similar features appearing in at least a first and second image. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. In particular, the claim recites one additional element: using one or more neural networks having an encoder and a discriminator. The neural networks having an encoder and a discriminator are recited at a high level of generality (i.e., a generic processor performing the generic computer functions of encoding and discriminating), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of neural networks having an encoder and a discriminator to determine similarity between a first and second image amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The rest of the dependent claims recite further data manipulations, which fail to integrate the abstract idea into a practical application. The claims are not patent eligible. Step 2B: No (MPEP 07-05-16).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-22 and 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Yixiao Ge et al. (XP081433406), 6 October 2018 (referred to as YIXIAO).
As to claim 1, one or more processors comprising: circuitry to identify whether one or more identical features appear in at least a first and second image using one or more neural networks (see page 2, par. 1; section 3.2), the one or more neural networks comprising a first encoder portion, a second encoder portion, a generative portion, and a discriminative portion, the first encoder portion comprising one or more parameters shared by the generative and discriminative portions and updated during training of the generative and discriminative portions, the generative portion to use the one or more parameters to detect one or more features of one or more images to generate one or more other images, wherein the generative portion is to use the second encoder portion to generate features depicted in the one or more other images that do not correspond to the one or more identical features [please note that each encoder brings different features to the generator] (see fig. 2 and 3; sections 2 and 3 showing generator), and the discriminative portion to use the one or more parameters of the generative portion to detect features in the one or more images and the one or more other images to identify the one or more identical features (see fig. 1 and 3; section 3.3 showing identity discriminator, pose discriminator). Although Yixiao may use different language, the figures show the same inventive concept. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention that Yixiao requires circuitry and processors to perform the teachings, thereby enabling the teachings.
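For illustration of the claimed arrangement only (the class names and toy arithmetic below are hypothetical and are taken neither from Yixiao nor from applicant's disclosure), a first encoder whose parameters are shared by both a generative portion and a discriminative portion can be sketched as:

```python
# Hypothetical sketch: a shared "first encoder" whose parameters are
# used by both a generative portion and a discriminative portion.
# This is illustrative only; it is not Yixiao's code or the claimed circuitry.

class SharedEncoder:
    """First encoder portion; its parameters would be updated during
    training of the generative and discriminative portions."""
    def __init__(self):
        self.parameters = {"w": 1.0}  # toy shared parameter

    def encode(self, image):
        # Toy "feature": scale each pixel value by the shared weight.
        return [p * self.parameters["w"] for p in image]

class Generator:
    """Generative portion: uses the shared parameters (via the first
    encoder) plus a second encoder's output to produce another image."""
    def __init__(self, shared, second_encoder):
        self.shared = shared
        self.second = second_encoder

    def generate(self, image):
        appearance = self.shared.encode(image)  # shared parameters
        other = self.second(image)              # second encoder portion
        return [a + o for a, o in zip(appearance, other)]

class Discriminator:
    """Discriminative portion: reuses the same shared parameters to
    compare features of a real image and a generated image."""
    def __init__(self, shared):
        self.shared = shared

    def same_features(self, img_a, img_b, tol=1e-6):
        fa, fb = self.shared.encode(img_a), self.shared.encode(img_b)
        return all(abs(x - y) < tol for x, y in zip(fa, fb))

shared = SharedEncoder()
second = lambda image: [0.0 for _ in image]  # trivial second encoder
gen = Generator(shared, second)
disc = Discriminator(shared)

real = [0.2, 0.5, 0.9]
fake = gen.generate(real)
print(disc.same_features(real, fake))  # True: identical features detected
```

The point of the sketch is only that a single set of parameters (in `SharedEncoder`) is consumed by both the generative and discriminative portions, mirroring the shared-parameter language of claim 1.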
As to claim 2, YIXIAO discloses the one or more processors of claim 1, wherein the first encoder portion encodes information indicative of appearance of one or more features (page 3, second paragraph) {The quality of learned person features has a direct impact on the generated person images. We can therefore tell what aspects of person appearances are captured by the features. For instance, for “input 1” in Figure 4(b), its generated frontal images do not show colored pattern on the upper body but only the general colors and shapes of the upper and lower bodies}.
As to claim 3, YIXIAO discloses the one or more processors of claim 2, wherein the first encoder, second encoder, generative, and discriminative portions are jointly trained by using a loss function comprising losses of the generative and discriminative portions (figures 1 and 3, page 4, paragraph 1) {Generator} {the entire framework is jointly trained in an end-to-end manner}.
As to claim 4, YIXIAO discloses the one or more processors of claim 2, wherein the generative portion comprises one or more parameters of a second encoder portion to encode positional or geometric information (figure 2 {pose encoder}; by definition, the pose of a person constitutes geometric information).
As to claim 5, YIXIAO discloses the one or more processors of claim 2, wherein the generative portion generates image data comprising a plurality of representations of the one or more detected features, each of the plurality of representations comprising a variation in appearance of the one or more features (Figure 2) {Fake images generated by generator} (Figure 4(a)) {Features that vary in appearance} (section 3.1) {image generator G takes the encoded person features and target pose map as inputs, and aims at generating another image of the same person specified by the target pose}.
As to claim 6, YIXIAO discloses the one or more processors of claim 1, wherein the one or more identical features comprise a person depicted in at least the one or more images (Figure 2, page 3, last paragraph) {For each branch of the network, it takes a person image and a target pose landmark map as inputs. The image encoder E at each branch first transforms the input person image into feature representations}.
Claims 7-8 are the respective system claims of processor claims 1 and 3. Therefore, claims 7-8 are rejected for the same reasons as shown above.
As to claim 9, YIXIAO discloses wherein training the generative and discriminative portions comprises minimizing generative loss (Section 3.4 and equation 4) {A reconstruction loss is introduced to minimize the L1 differences between the generated image and its corresponding real image, which is shown to be helpful for more stable convergence of training the generator} and discriminative loss (Section 3.4, equation 2).
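For illustration only (the function names and equal weighting below are hypothetical and do not reproduce Yixiao's actual equations 2 and 4), jointly minimizing a generative L1 reconstruction loss and a discriminative loss can be sketched as:

```python
# Hypothetical sketch of a joint training objective combining a
# generative (L1 reconstruction) loss and a discriminative loss.
# Illustrative only; not Yixiao's equations or the claimed training.

def l1_reconstruction_loss(generated, real):
    # Generative portion's loss: mean absolute (L1) difference
    # between generated and real image pixels.
    return sum(abs(g - r) for g, r in zip(generated, real)) / len(real)

def discriminative_loss(score_real, score_fake):
    # Toy discriminator objective: real images should score near 1,
    # generated ones near 0.
    return (1.0 - score_real) ** 2 + score_fake ** 2

def joint_loss(generated, real, score_real, score_fake, lam=1.0):
    # Single end-to-end objective: minimizing it trains both the
    # generative and discriminative portions together.
    return (l1_reconstruction_loss(generated, real)
            + lam * discriminative_loss(score_real, score_fake))

# A perfect reconstruction with an ideal discriminator yields zero loss.
print(joint_loss([0.5, 0.5], [0.5, 0.5], 1.0, 0.0))  # 0.0
```

The weighting term `lam` stands in for whatever balance the reference strikes between the two losses; it is an assumption of this sketch, not a value from the record.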
As to claim 10, YIXIAO discloses the system of claim 7, wherein the at least one encoder portion is an appearance encoder to encode features associated with one or more of clothing, color, and texture (Figures 2 and 3, page 2, last paragraph) {Transferred the appearance from the source image onto the target image while preserving the target shape and clothing segmentation layout}.
As to claims 11 and 18, YIXIAO discloses the system of claim 7, wherein the generative portion uses the one or more parameters of the second encoder portion to encode features associated with one or more of size, pose, background, viewpoint, and lighting (Figure 3) {Pose encoder}.
As to claim 12, YIXIAO discloses the system of claim 7, wherein the one or more other images comprise variations in appearance of the one or more detected features (page 3, Section 3) {Generative Adversarial Network (FD-GAN) aims at learning identity-related and pose-unrelated person representations, in order to handle large pose variations across images in person reID}.
As to claims 13 and 20, YIXIAO discloses the system of claim 7, wherein the generative portion is trained to perform self-identity generation and cross-identity generation (page 2, second paragraph) {Feature Distilling Generative Adversarial Network (FD-GAN) maintains identity feature consistency under pose variation without increasing the inference complexity (illustrated in Figure 1). It adopts a Siamese structure for feature learning. Each branch consists of an image encoder and an image generator}.
Claims 14 and 16 are the respective non-transitory computer-readable medium claims of processor claims 1 and 3. Therefore, claims 14 and 16 are rejected for the same reasons as shown above.
As per claim 15, YIXIAO discloses wherein the generative portion and the first encoder portion share appearance codes generated by the encoder portion (page 3, second paragraph) {The quality of learned person features has a direct impact on the generated person images. We can therefore tell what aspects of person appearances are captured by the features. For instance, for “input 1” in Figure 4(b), its generated frontal images do not show colored pattern on the upper body but only the general colors and shapes of the upper and lower bodies}.
As to claim 17, YIXIAO discloses the system of claim 7, wherein the at least one encoder portion is a generative encoder to encode features associated with one or more of clothing, color, and texture (Figures 2 and 3, page 2, last paragraph) {Transferred the appearance from the source image onto the target image while preserving the target shape and clothing segmentation layout}.
As to claim 19, YIXIAO discloses wherein the generative portion generates a plurality of images (Figure 2) {Fake images generated by generator} (Figure 4(a)) {Features that vary in appearance} (Section 3.1) {image generator G takes the encoded person features and target pose map as inputs, and aims at generating another image of the same person specified by the target pose}, permitting the discriminator portion to be trained to recognize fine-grained identity features (Figures 1 and 3 and Section 3.3) {Discriminators: identity discriminator, pose discriminator}.
As to claim 21, YIXIAO discloses the processor of claim 1, wherein the first encoder portion comprises an appearance encoder that encodes appearance-related output (Figures 1 and 3) {Encoder, image encoder} and wherein the discriminative portion uses one or more outputs of the encoder portion to identify the one or more identical features (Figures 1 and 3 and Section 3.3) {Discriminators, identity discriminator, pose discriminator}.
As to claim 22, YIXIAO discloses wherein the identifying of the one or more identical features comprises an identification of whether a person identified in the first image is also depicted in the second image (page 2, paragraph 1) {Person re-identification}.
As to claim 24, YIXIAO discloses wherein the one or more neural networks separately encode information about appearance and shape of the one or more features (Figure 4(b), page 4, second paragraph) {Generated frontal images do not show colored pattern on the upper body but only the general colors and shapes of the upper and lower bodies, which might demonstrate that the learned image encoder focuses on embedding the overall appearances of persons but fails to capture the distinguishable details in appearance}.
As to claim 25, YIXIAO discloses wherein the one or more images comprise variations of appearance of the one or more detected features (Figure 2) {Fake images generated by generator} (Figure 4(a)) {Features that vary in appearance} (section 3.1) {image generator G takes the encoded person features and target pose map as inputs, and aims at generating another image of the same person specified by the target pose}.
As to claim 26, YIXIAO discloses wherein the circuitry is to identify whether one or more of the identical features appear in at least the first and second image using variations of appearance of the one or more detected features in the one or more images (Figure 2) {Fake images generated by generator} (Figure 4(a)) {Features that vary in appearance} (section 3.1) {Image generator G takes the encoded person features and target pose map as inputs, and aims at generating another image of the same person specified by the target pose}.
As to claim 27, YIXIAO discloses wherein the one or more images are input into the first encoder portion, the second encoder portion, and the discriminative portion during training of the one or more neural networks (Figures 1-3, Sections 2 and 3; page 4, paragraph 1) {Generator} {The entire framework is jointly trained in an end-to-end manner}.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARCOS L TORRES whose telephone number is (571)272-7926. The examiner can normally be reached 10:00 AM - 6:00 PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alison Slater can be reached at (571)270-0375. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MARCOS L. TORRES
Primary Examiner
Art Unit 2647
/MARCOS L TORRES/Primary Examiner, Art Unit 2647