Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This communication is a non-Final Office Action on the merits. Claims 5, 7, 12, 14, 16, 18-20, 24-25, 28, and 31-48 are canceled. Claims 1-4, 6, 8-11, 13, 15, 17, 21-23, 26-27, 29-30, and 49, after the preliminary amendment, are presently pending and are considered below.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 7/12/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 1 is objected to because of the following informalities: Claim 1 recites: “representation based on parameters of the encoder, the input plant image one of a plurality of plant images, each plant image representing a plant having a plurality of connected plant structures, and the predicted structural representation representing predicted connections between the plant structures of the input plant image,” in which the bolded portion appears intended to read: “the input plant image being one of a plurality of plant images.”
Claim 30 is objected to for the same reason as claim 1.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or
nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-4, 29-30, and 49 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0147838 A1, Gu et al. (hereinafter Gu) in view of US 10,922,788 B1, Yu et al. (hereinafter Yu), and further in view of US 2021/0158041 A1, Chowdhary et al. (hereinafter Chowdhary).
As to claim 1, Gu discloses a method for training a machine learning model for generating a structural representation of a plant, the method performed by a processor (Fig 1) and comprising:
transforming, by an encoder, an input plant image to a predicted structural representation based on parameters of the encoder (pars 0023-0024, 0027, 0030, 0040-0041, 0098, encoder transforming the input image into a structural representation), the input plant image one of a plurality of plant images (par 0026, more than one image), each plant image representing a plant having a plurality of connected plant structures (Fig 2; pars 0005, 0024-0027, 0042, 0049, intra-modal and inter-modal representation of a data object), and the predicted structural representation representing predicted connections between the plant structures of the input plant image (Fig 2; pars 0005-0006, 0021, 0024-0025, representing the inter-relationship of the structure);
transforming, by a decoder, the predicted structural representation to a reconstructed plant image based on parameters of the decoder (Fig 4; pars 0013, 0051, 0073-0074, 0078, decoder of a transformer with weight matrices (e.g., parameters)).
Gu does not expressly disclose that the objects being transformed by the encoder include plants, or a discriminator for classifying the reconstructed plant image as having been generated based on one of: the ground truth training dataset or parameters of the encoder to generate a classification based on parameters of the discriminator; and training parameters of the encoder based on the classification.
Yu, in the same or similar field of endeavor, further teaches that the objects may include plants (col 8, line 61; col 9, line 3) and a discriminator for classifying the reconstructed plant image as having been generated based on one of: the ground truth training dataset or parameters of the encoder to generate a classification based on parameters of the discriminator (Fig 3; col 2, line 28 - col 3, line 6; col 3, lines 15-44; col 6, lines 1-44, classification based on the ground truth image or reconstructed image); and training parameters of the encoder based on the classification (Fig 2; col 2, line 28 - col 3, line 14; col 13, lines 12-28).
Therefore, considering Gu’s and Yu’s teachings as a whole, it would have been obvious to one of skill in the art before the effective filing date of the claimed invention to incorporate Yu’s teachings in Gu’s method to provide an adversarial autoencoder including an encoding transformer and a discriminator for plant representation and classification.
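By way of illustration only, the following is a minimal sketch (PyTorch) of an adversarial autoencoder of the kind discussed above: an encoder producing a structural representation, a decoder producing a reconstructed image, and a discriminator classifying images as ground truth or generated. All module names, layer choices, and dimensions are hypothetical and are not drawn from Gu or Yu.

```python
# Illustrative sketch only (PyTorch). All names, layer choices, and shapes
# are hypothetical; a 64x64 RGB input image is assumed throughout.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Transforms an input plant image into a predicted structural representation."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),  # parameters of the encoder
        )

    def forward(self, image):
        return self.net(image)

class Decoder(nn.Module):
    """Transforms the predicted structural representation to a reconstructed plant image."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 16 * 16)  # parameters of the decoder
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, structure):
        return self.net(self.fc(structure).view(-1, 32, 16, 16))

class Discriminator(nn.Module):
    """Classifies an image as drawn from the ground truth dataset (1) or generated (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 32 * 32, 1),  # parameters of the discriminator
        )

    def forward(self, image):
        return self.net(image)
```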
As to claim 3, Gu as modified discloses the method according to claim 1 wherein transforming the predicted structural representation to the reconstructed plant image comprises transforming the predicted structural representation to the first reconstructed plant image based on a texture representation separate from the predicted structural representation (Gu: Fig 1; pars 0024, 0027, 0031-0032; Yu: col 8, line 61; col 9, line 3).
As to claim 4, Gu as modified discloses the method according to claim 3 comprising transforming, by a texture encoder, the input plant image to the texture representation (Gu: pars 0024, 0027, 0030, 0034, different encoders for transforming different elements, including texture elements), the texture representation comprising a relatively lower-dimensional encoding than the input plant image; the method optionally further comprising training parameters of the texture encoder based on the input plant image, the reconstructed plant image, and the classification (Gu: pars 0023-0024, 0027, 0030, 0040-0041, 0098; Yu: Figs 2-3; col 2, line 28 - col 3, line 6; col 3, lines 15-44; col 6, lines 1-44). Although Gu as modified does not expressly teach that the texture representation may comprise a relatively lower-dimensional encoding than the input plant image, it would have been obvious to one of skill in the art that a simpler encoder (with a lower-dimensional output) may be adequate to transform relatively simpler texture elements compared with those needed for the full plant image.
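By way of illustration only, a minimal sketch of such a lower-dimensional texture encoding follows; the layer sizes are hypothetical and are chosen only to show that the texture code has far fewer dimensions than the input image.

```python
# Illustrative sketch only; the "texture encoder" and its sizes are hypothetical.
# It maps a 64x64 RGB input image (64*64*3 = 12288 values) to a 16-dimensional
# texture code, i.e., a relatively lower-dimensional encoding than the image.
import torch.nn as nn

texture_encoder = nn.Sequential(
    nn.Flatten(),                            # image tensor -> 12288-dim vector
    nn.Linear(64 * 64 * 3, 256), nn.ReLU(),
    nn.Linear(256, 16),                      # low-dimensional texture representation
)
```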
5. (Cancelled)
As to claim 29, Gu as modified discloses a method for generating a structural representation of a plant by a machine learning model (Gu: Figs 1, 9; pars 0034, 0036-0037), the method performed by a processor and comprising: transforming, by an encoder, an input plant image to a predicted structural representation based on parameters of the encoder, the parameters of the encoder trained according to claim 1 (see the rejection of claim 1).
As to claim 30, it recites similar limitations as claim 1 with a broader scope. It is therefore rejected for the same reasons as set forth for claim 1.
As to claim 49, it recites a system encompassing the method of claim 1. It is rejected for the same reasons as set forth for claim 1.
Claims 6, 21, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Gu in view of Yu, and further in view of CN 115294555 A, Guo et al. (hereinafter Guo); see the attached Google-translated publication.
As to claim 6, Gu as modified discloses the method according to claim 1 wherein training parameters of the encoder comprises training parameters of the encoder based on the input plant image and the reconstructed plant image (see the rejections of claims 1 and 4) but does not expressly teach optionally wherein training the parameters of the encoder comprises determining a value of an objective function based on a difference between the input plant image and the reconstructed plant image; determining a gradient of the objective function relative to parameters of the encoder based on the difference; and updating parameters of the encoder based on the gradient.
Guo, in the same or similar field of endeavor, further teaches that training the parameters of the encoder comprises determining a value of an objective function based on a difference between the input plant image and the reconstructed plant image (claim 3; pgs 3, 5-6, loss function of the encoder); determining a gradient of the objective function relative to parameters of the encoder based on the difference; and updating parameters of the encoder based on the gradient (pgs 4, 6-7). Note that minimizing a cost or objective function via gradient descent is a commonly used technique in the encoder training process.
Therefore, considering the teachings of Gu as modified and Guo as a whole, it would have been obvious to one of skill in the art before the effective filing date of the claimed invention to incorporate Guo’s teachings in the method of Gu as modified, optionally facilitating training of the encoder parameters by minimizing a training objective function.
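By way of illustration only, a minimal sketch of such a gradient-descent training step follows; the reconstruction objective and function names are hypothetical and are not drawn from Guo.

```python
# Illustrative sketch only (PyTorch); a generic reconstruction objective with a
# plain gradient-descent update of the encoder parameters. Names are hypothetical.
import torch

def training_step(encoder, decoder, image, lr=1e-3):
    # 1. Determine the value of the objective function based on the difference
    #    between the input plant image and the reconstructed plant image.
    reconstruction = decoder(encoder(image))
    loss = torch.mean((image - reconstruction) ** 2)
    # 2. Determine the gradient of the objective function relative to the
    #    parameters of the encoder based on that difference.
    encoder.zero_grad()
    loss.backward()
    # 3. Update the parameters of the encoder based on the gradient.
    with torch.no_grad():
        for p in encoder.parameters():
            p -= lr * p.grad
    return loss.item()
```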
7. (Cancelled)
12. (Cancelled)
14. (Cancelled)
16. (Cancelled)
18-20. (Cancelled)
As to claim 21, Gu as modified discloses the method according to claim 1 wherein training parameters of the encoder comprises determining a value of an objective function based on one or more structural classifications generated by classifying the predicted structural representation, by one or more structure discriminators, as having been drawn from the ground truth training dataset or generated by the encoder, the value of the objective function based on the one or more structural classifications (Yu: Figs 3-4; col 3, lines 14-44; col 4, lines 37-65; col 6, lines 45-59; Guo: claim 3, pgs 3, 5-6).
24-25. (Cancelled)
As to claim 26, Gu as modified discloses the method according to claim 1 wherein training parameters of the encoder comprises determining a value of an objective function based on one or more reconstruction classifications generated by classifying the reconstructed plant image, by one or more reconstruction discriminators, as having been generated by the decoder or drawn from the plurality of plant images, the value of the objective function based on the one or more reconstruction classifications (Yu: Figs 3-4; col 5, lines 18-67; col 12, lines 4-63; Guo: claim 3, pgs 3, 5-6).
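By way of illustration only, a minimal sketch of an objective combining the classification-based terms of claims 21 and 26 above follows; the discriminator interfaces and target labels are hypothetical and are not drawn from Yu or Guo.

```python
# Illustrative sketch only (PyTorch); a generic objective whose value is based
# on discriminator classifications. All names and labeling conventions are
# hypothetical.
import torch
import torch.nn.functional as F

def classification_objective(structure_disc, reconstruction_disc,
                             predicted_structure, reconstructed_image):
    # Structural classification term (claim 21): a structure discriminator
    # classifies the predicted structural representation as drawn from the
    # ground truth training dataset or generated by the encoder.
    s_logits = structure_disc(predicted_structure)
    structural_term = F.binary_cross_entropy_with_logits(
        s_logits, torch.ones_like(s_logits))  # encoder scored against "ground truth"
    # Reconstruction classification term (claim 26): a reconstruction
    # discriminator classifies the reconstructed plant image as generated by
    # the decoder or drawn from the plurality of plant images.
    r_logits = reconstruction_disc(reconstructed_image)
    reconstruction_term = F.binary_cross_entropy_with_logits(
        r_logits, torch.ones_like(r_logits))
    # The value of the objective function is based on both classifications.
    return structural_term + reconstruction_term
```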
28. (Cancelled)
Allowable Subject Matter
Claims 2, 8-11, 13, 15, 17, 22-23, and 27 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Reasons for Allowance
The prior art of record (Gu, Yu, and Guo), taken alone or in combination, neither discloses nor teaches the functions and features recited in claims 2, 8, 11, 22, and 27, respectively. Claims 9-10 depend from claim 8. Claims 13, 15, and 17 depend from claim 11. Claim 23 depends from claim 22.
Examiner’s Note
Examiner has cited particular columns, line numbers, paragraphs, and/or figures in the reference(s) as applied to the claims for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to fully consider the reference(s) in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or cited by the Examiner.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUN SHEN whose telephone number is (571)270-7927. The examiner can normally be reached on Mon-Fri 8:30-5:50 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUN SHEN/
Primary Examiner, Art Unit 2662