DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
Claim 1 is objected to because of the following informalities: Line 3 of claim 1 recites, in part, “each image showing one or more herbs,” which appears to contain a minor informality. The Examiner suggests amending the claim to --each image of the one or more images showing at least one of the one or more herbs,-- in order to improve the clarity and precision of the claim(s). Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Line 5 of claim 1 recites, in part, “identifying at least one herb” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --identifying a herb-- in order to maintain consistency with, for example, line 7 of claim 1, lines 3 - 4 of claim 3, line 5 of claim 3 and line 3 of claim 6 and to improve the clarity and precision of the claim(s). Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Line 8 of claim 1 recites, in part, “wherein the predefined class corresponds to a type” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --wherein the at least one predefined class corresponds to a type-- in order to maintain consistency with line 7 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Lines 10 - 11 of claim 1 recite, in part, “features, predict the type of herb” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --features, and predict the type of herb-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Lines 13 - 14 of claim 1 recite, in part, “in the image based on the combination” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --in the input image based on the combination-- in order to maintain consistency with line 5 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 1 is objected to because of the following informalities: Line 15 of claim 1 recites, in part, “type of herb from the extracted features” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --type of herb from the extracted image features-- in order to maintain consistency with lines 9 - 10 of claim 1 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 3 is objected to because of the following informalities: Lines 2 - 3 of claim 3 recite, in part, “a grouping module adapted to: identifying a parent class” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --a grouping module adapted to: identify[[ing]] a parent class-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 4 is objected to because of the following informalities: Line 4 of claim 4 recites, in part, “and wherein the herb is grouped” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --and wherein the identified herb is grouped-- in order to maintain consistency with, for example, line 7 of claim 1, lines 3 - 4 of claim 3, line 5 of claim 3 and line 3 of claim 6 and to improve the clarity and precision of the claim(s). Appropriate correction is required.
Claim 5 is objected to because of the following informalities: Line 4 of claim 5 recites, in part, “to infer a herb in the image” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --to infer a herb in the received image-- in order to maintain consistency with line 3 of claim 5 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 6 is objected to because of the following informalities: Lines 2 - 3 of claim 6 recite, in part, “apply a Multimodal AI model to processing the input image and grouping the identified herb” which appears to contain grammatical errors and/or minor informalities. The Examiner suggests amending the claim to --apply a Multimodal AI model to process the input image and group[[ing]] the identified herb-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: Line 3 of claim 11 recites, in part, “compare the extracted features” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --compare the extracted image features-- in order to maintain consistency with lines 9 - 10 of claim 1 and line 6 of claim 11 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: Lines 6 - 7 of claim 11 recite “matching the extracted image features to the set of predefined features, classify the herb into the determined predefined sub class, and;” which appears to contain multiple grammatical errors, inconsistent claim terminology and/or minor informalities. The Examiner suggests amending the claim to --matching of the extracted image features to the set of predefined features, and classify the identified herb into the determined predefined sub class-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Lines 3 - 4 of claim 13 recite, in part, “each image showing one or more herbs,” which appears to contain a minor informality. The Examiner suggests amending the claim to --each image of the one or more images showing at least one of the one or more herbs,-- in order to improve the clarity and precision of the claim(s). Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Line 5 of claim 13 recites, in part, “identifying at least one herb” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --identifying a herb-- in order to maintain consistency with, for example, line 7 of claim 13, line 2 of claim 15, line 3 of claim 15 and lines 2 - 3 of claim 18 and to improve the clarity and precision of the claim(s). Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Line 8 of claim 13 recites, in part, “the predefined class corresponds to a type” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --the at least one predefined class corresponds to a type-- in order to maintain consistency with line 7 of claim 13 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Lines 12 - 13 of claim 13 recite, in part, “features, outputting the type of herb” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --features, and outputting the type of herb-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Lines 13 - 14 of claim 13 recite, in part, “in the image based on the combination” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --in the input image based on the combination-- in order to maintain consistency with line 5 of claim 13 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 13 is objected to because of the following informalities: Lines 14 - 15 of claim 13 recite, in part, “the type of herb from the extracted features” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --the type of herb from the extracted image features-- in order to maintain consistency with lines 9 - 10 of claim 13 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 14 is objected to because of the following informalities: Lines 1 - 2 of claim 14 recite, in part, “claim 13, comprises the step of grouping” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --claim 13, wherein the step of grouping-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 16 is objected to because of the following informalities: Lines 4 - 5 of claim 16 recite, in part, “wherein the herb is grouped” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --wherein the identified herb is grouped-- in order to maintain consistency with, for example, line 7 of claim 13, line 2 of claim 15, line 3 of claim 15 and lines 2 - 3 of claim 18 and to improve the clarity and precision of the claim(s). Appropriate correction is required.
Claim 17 is objected to because of the following informalities: Lines 3 - 4 of claim 17 recite, in part, “to infer a herb in the image” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --to infer a herb in the received image-- in order to maintain consistency with line 3 of claim 17 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 20 is objected to because of the following informalities: Line 7 of claim 20 recites, in part, “comparing the extracted features” which appears to contain inconsistent claim terminology and/or a minor informality. The Examiner suggests amending the claim to --comparing the extracted image features-- in order to maintain consistency with lines 9 - 10 of claim 13 and line 10 of claim 20 and to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 20 is objected to because of the following informalities: Lines 10 - 11 of claim 20 recite “matching the extracted image features to the set of predefined features, classifying the herb into the determined predefined sub class, and;” which appears to contain multiple grammatical errors, inconsistent claim terminology and/or minor informalities. The Examiner suggests amending the claim to --matching of the extracted image features to the set of predefined features, and classifying the identified herb into the determined predefined sub class-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 20 is objected to because of the following informalities: Lines 12 - 13 of claim 20 recite, in part, “when the predefine sub class from the predicting step and the group step are substantially identical,” which appears to contain inconsistent claim terminology, grammatical and/or typographical errors and/or minor informalities. The Examiner suggests amending the claim to --when the predefined sub class from the predicting step and the one of 60 predefined sub classes from the grouping step are substantially identical,-- in order to maintain consistency with lines 2 - 3 of claim 20 and to improve the clarity and precision of the claims. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an image gateway arranged to receive”, “a classification engine arranged to: process”, “an output module arranged to output”, “a grouping module adapted to: identifying”, “an inference module adapted to process” and “a prediction module adapted to perform” in claims 1 - 6 and 9 - 12.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 - 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the input image" in line 5. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation "the type of herb recognised in the image" (emphasis added) in lines 13 - 14. There is insufficient antecedent basis for this limitation in the claim. The Examiner suggests amending the limitation to --the type of herb recognised in the input image--.
Claim 2 recites the limitation "the identified herbs" (emphasis added) in line 2. There is insufficient antecedent basis for this limitation in the claim.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which identified parent class “the identified parent class” recited on line 4 is referencing. Is it referring to the “parent class” identified on line 3 of claim 3 or the parent class identified on line 2 of claim 4? Additionally, it is unclear as to whether the “parent class” identified on line 3 of claim 3 and the parent class identified on line 2 of claim 4 are the same parent class or different parent classes. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the “parent class” identified on line 3 of claim 3 and the parent class identified on line 2 of claim 4 as the same parent class.
Claim 5 recites the limitation "the input image received from the image gateway" (emphasis added) in lines 2 - 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 5 recites the limitation "the received image showing one or more herbs" (emphasis added) in lines 3 - 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the correct predefined sub class" (emphasis added) in line 5. There is insufficient antecedent basis for this limitation in the claim. The Examiner suggests amending line 5 of claim 11 to --determine the predefined sub class that is correct based on a substantial--.
Claim 11 recites the limitation(s) "the predefine sub class from the predicting step and the group step" (emphasis added) in lines 8 - 9. There is insufficient antecedent basis for this/these limitation(s) in the claim. The Examiner suggests amending the aforementioned limitation(s) of claim 11 to --the predefined sub class from the prediction module and the one of 60 predefined sub classes--.
Claim 12 recites the limitation "the recognised herb" in lines 3 - 4. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 recites the limitation "the input image" in line 5. There is insufficient antecedent basis for this limitation in the claim.
Claim 13 recites the limitation "the type of herb recognised in the image" (emphasis added) in line 13. There is insufficient antecedent basis for this limitation in the claim. The Examiner suggests amending the limitation to --the type of herb recognised in the input image--.
Claim 14 recites the limitation "the identified herbs" (emphasis added) in lines 2 - 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which identified parent class “the identified parent class” recited on line 4 is referencing. Is it referring to the “parent class” identified on lines 2 - 3 of claim 15 or the parent class identified on lines 2 - 3 of claim 16? Additionally, it is unclear as to whether the “parent class” identified on lines 2 - 3 of claim 15 and the parent class identified on lines 2 - 3 of claim 16 are the same parent class or different parent classes. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the “parent class” identified on lines 2 - 3 of claim 15 and the parent class identified on lines 2 - 3 of claim 16 as the same parent class.
Claim 17 recites the limitation "the received image showing one or more herbs" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 20 recites the limitation "the correct predefined sub class" (emphasis added) in line 9. There is insufficient antecedent basis for this limitation in the claim. The Examiner suggests amending line 9 of claim 20 to --determining the predefined sub class that is correct based on a substantial--.
Claims 3, 6 - 10, 15, 18 and 19 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being dependent upon a rejected base claim; the rejection of these claims would be withdrawn if their respective base claims overcome the rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 - 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, an abstract idea, without significantly more. The claims are directed towards predicting/recognizing a type of herb in an image, which is an abstract idea.
The claims recite, at a high level of generality, processing the input image by identifying at least one herb of the one or more herbs, grouping the identified herb into at least one predefined class, wherein the predefined class corresponds to a type of herb, performing feature extraction on the input image to extract image features, and predicting the type of herb based on processing the extracted image features.
The limitation of “processing the input image by identifying at least one herb of the one or more herbs”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “a classification engine arranged to:” (see claim 1) and “computer implemented” (see claim 13) nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of the aforementioned generic computer components, the processing the input image by identifying at least one herb of the one or more herbs encompasses a user observing an image and performing an evaluation, judgment and/or opinion to mentally locate (identify) any possible herb or botanical material depicted in the image. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
Similarly, the limitation of “grouping the identified herb into at least one predefined class, wherein the predefined class corresponds to a type of herb”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “a classification engine arranged to:” (see claim 1) and “computer implemented” (see claim 13) nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of the aforementioned generic computer components, the grouping the identified herb into at least one predefined class encompasses a user observing possible herb or botanical material depicted in an image and performing an evaluation, judgment and/or opinion to mentally decide on a rough category for the possible herb or botanical material, such as fruit, seed, grass, flower, etc. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
Relatedly, the limitation of “performing feature extraction on the input image to extract image features”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “a classification engine arranged to:” (see claim 1) and “computer implemented” (see claim 13) nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of the aforementioned generic computer components, the extracting image features from an input image encompasses a user observing an image and performing an evaluation, judgment and/or opinion to mentally identify one or more characteristics, attributes and/or properties of the image. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
Additionally, the limitation of “predicting the type of herb based on processing the extracted image features”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components. That is, other than reciting “a classification engine arranged to:” (see claim 1) and “computer implemented” (see claim 13) nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the recitation of the aforementioned generic computer components, the predicting the type of herb based on the extracted image features encompasses a user thinking about one or more characteristics, attributes and/or properties of an image depicting possible herb or botanical material depicted in an image and performing an evaluation, judgment and/or opinion mentally by comparing the one or more characteristics, attributes and/or properties against those of known types of herbs and deciding on a known type of herb that is most likely depicted in the image. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III). Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements of: “an image gateway”, “receiv[ing] an input dataset comprising one or more images, each image showing one or more herbs”, “a classification engine arranged to:”, “an output module”, “output[ting] the type of herb recognised in the image” and a “computer implemented method”.
The limitations of “an image gateway”, “a classification engine arranged to:”, “an output module” and a “computer implemented method” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Furthermore, the claims as a whole merely describe how to generally “apply” the concept of predicting/recognizing a type of herb in an image in a computer environment. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. See MPEP § 2106.05(f).
Further, the limitations of “receiv[ing] an input dataset comprising one or more images, each image showing one or more herbs” and “output[ting] the type of herb recognised in the image” are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP § 2106.05(g). In addition, all uses of the recited judicial exception require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claims. These limitations amount to necessary data gathering. See MPEP § 2106.05. Additionally, the elements of the aforementioned limitations amount to recording and transmitting digital images and/or information by use of conventional or generic technology in a nascent but well-known environment and are well-understood, routine, conventional activity. See MPEP § 2106.05(d).
Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Accordingly, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements of: “an image gateway”, “receiv[ing] an input dataset comprising one or more images, each image showing one or more herbs”, “a classification engine arranged to:”, “an output module”, “output[ting] the type of herb recognised in the image” and a “computer implemented method” do not add a meaningful limitation to the abstract idea because they merely perform insignificant pre/post extrasolution activity, mere data gathering and output, and/or amount to no more than mere instructions to apply the abstract idea using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible.
Furthermore, with regards to dependent claims 6, 10, 12, 18 and 20, the limitations of “apply[ing] a Multimodal AI model” to process the image and group the identified herb, “utilise an EfficientNet model for feature extraction and prediction” and “a multiclass single label image classification model to recognize the type of herb and output the recognised herb”, as drafted, are processes that, under their broadest reasonable interpretations, cover performance of the limitations in the mind but for the recitation of generic computer components, a Multimodal AI model, an EfficientNet model and a multiclass single label image classification model. That is, other than reciting “a Multimodal AI model”, “an EfficientNet model” and “a multiclass single label image classification model”, nothing in the claim elements precludes the steps from practically being performed in the mind. The Examiner asserts that the claims do not provide any details nor limit how the models operate or how their functions are performed, and that the plain meanings of processing, grouping, extracting, predicting, recognizing and outputting encompass mental observations, evaluations, judgments, and/or opinions. Under their broadest reasonable interpretations when read in light of the specification, processing, grouping, extracting, predicting, recognizing and outputting encompass mental processes practically performed in the human mind by observation(s), evaluation(s), judgment(s) and/or opinion(s).
For example, the claimed processing, grouping, extracting, predicting, recognizing and outputting encompass a user observing possible herb or botanical material depicted in an image and performing an observation(s), evaluation(s), judgment(s) and/or opinion(s) to mentally decide on a rough category of the possible herb or botanical material, to mentally identify one or more characteristics, attributes and/or properties of the image and to mentally determine a known type of herb that most resembles the one or more characteristics, attributes and/or properties of the image. See MPEP § 2106.04(a)(2)(I) and § 2106.04(a)(2)(III). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. The claims are not patent eligible.
Moreover, with regards to dependent claims 6, 10, 12, 18 and 20, the limitations of “apply[ing] a Multimodal AI model” to process the image and group the identified herb, “utilise an EfficientNet model for feature extraction and prediction” and “a multiclass single label image classification model to recognize the type of herb and output the recognised herb”, provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP § 2106.05(f). MPEP § 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Moreover, the aforementioned models are used to generally apply the abstract idea without placing any limits on how the aforementioned models function. See MPEP § 2106.05(f). Additionally, the recitations of “a Multimodal AI model”, “an EfficientNet model” and “a multiclass single label image classification model” merely indicate a field of use or technological environment in which the judicial exception is performed. Although the additional elements of “a Multimodal AI model”, “an EfficientNet model” and “a multiclass single label image classification model” limit the identified judicial exception of predicting/recognizing a type of herb in an image, these types of limitations merely confine the use of the abstract idea to a particular technological environment (machine learning) and thus fail to add an inventive concept to the claims. See MPEP § 2106.05(h). The claims are not patent eligible.
In addition, with regards to dependent claims 2 - 12 and 14 - 20, the Examiner asserts that these claims are also directed to the abstract idea of predicting/recognizing a type of herb in an image and merely further limit the abstract idea claimed in independent claims 1 and 13, for example by further identifying how the identified herb is grouped into classes, by further identifying additional generic computer components, and/or by further identifying additional insignificant pre/post extrasolution activity that is performed. However, a more detailed abstract idea remains an abstract idea, and none of the limitations of the dependent claims, considered as an ordered combination, provide eligibility because, taken as a whole, the claims merely instruct the practitioner to apply the abstract idea using generic computer components. The claims are not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 - 6 and 12 - 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xu et al. U.S. Publication No. 2023/0042208 A1.
- With regards to claims 1 and 13, Xu et al. disclose a system and a computer implemented method for recognition of one or more herbs (Xu et al., Abstract, Figs. 1 & 4 - 6, Pg. 2 ¶ 0020 - 0021 and 0033, Pg. 3 ¶ 0037 - 0040, Pg. 3 ¶ 0046 - Pg. 4 ¶ 0048, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0060 - 0062, Pg. 7 ¶ 0075 - 0077 [The Examiner notes that, in view of page 8 lines 6 - 27 of the instant specification, the broadest reasonable interpretation of a herb(s) encompasses, at least, “plant elements as well as non-botanic substances such as animal products, fungi, mineral products, or other non-plant (i.e., non-botanic) products or element[s]” and “Chinese medicinal compounds, or any materials or substances suitable for use in TCM practice, or medicinal compounds, or any materials or substances suitable for use in any therapeutic or dietary treatments or programs”]) comprising: an image gateway (Xu et al., Figs. 1, 2 & 4 - 6, Pg. 3 ¶ 0036 - 0037 and 0043, Pg. 7 ¶ 0073 - 0077) arranged to receive an input dataset comprising one or more images, each image showing one or more herbs, (Xu et al., Abstract, Figs. 1, 2 & 4 - 6, Pg. 3 ¶ 0036 - 0037 and 0040 - 0043, Pg. 5 ¶ 0055 - Pg. 6 ¶ 0059, Pg. 7 ¶ 0065 - 0067) a classification engine (Xu et al., Abstract, Figs. 1, 2 & 4 - 6, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0058, Pg. 6 ¶ 0062, Pg. 7 ¶ 0065 - 0067 and 0075 - 0077) arranged to: process the input image by identifying at least one herb of the one or more herbs, (Xu et al., Abstract, Figs. 2 & 4, Pg. 3 ¶ 0040 and 0043 - 0047, Pg. 4 ¶ 0051 - 0053) group the identified herb into at least one predefined class, wherein the predefined class corresponds to a type of herb, (Xu et al., Abstract, Figs. 2 & 4, Pg. 3 ¶ 0040 and 0043 - 0047, Pg. 4 ¶ 0051 - 0053) perform feature extraction on the input image to extract image features, (Xu et al., Pg. 3 ¶ 0044 - 0047, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0055 - 0058, Pg. 6 ¶ 0061 - 0062) predict the type of herb based on processing the extracted image features, (Xu et al., Pg. 4 ¶ 0053, Pg. 5 ¶ 0055 - 0057, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065) and; an output module (Xu et al., Figs. 1, 2 & 4, Pg. 3 ¶ 0036 - 0037, Pg. 5 ¶ 0055 - 0056, Pg. 6 ¶ 0060 - 0063) arranged to output the type of herb recognised in the image based on the combination of grouping into a predefined class and predicting the type of herb from the extracted features. (Xu et al., Pg. 3 ¶ 0036 - 0037, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0055 - 0058, Pg. 6 ¶ 0061 - 0063, Pg. 7 ¶ 0065)
- With regards to claims 2 and 14, Xu et al. disclose a system and a computer implemented method for recognition of one or more herbs of claims 1 and 13, respectively, wherein the classification engine is configured to group the identified herbs into multiple tier hierarchical classification. (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063 [“the step of identifying the preliminary category of the object based on the first image may include: identifying the genus information of the object. For example, after the above-mentioned preliminary category identification processing of the object, it is possible to only identify the genus information of the object (for example, peach, cherry or rose, etc.), but the species information of plant (that is, precise category of plant) cannot be accurately identified. For example, the object is only identified as belonging to the genus Peach, but it is not possible to determine which peach species that the object belongs to. In this embodiment, the feature portion of the object may be determined based on the pre-established correspondence between the genus of the plant and a corresponding feature portion thereof. For example, for peach plants, it is possible to make further judgment based on the parts or features of its fruit, petal shape, calyx, overall shape (for example, whether the plant is a tree or a shrub), whether branches are hairy, or whether there are hairs on the front and back of leaves, so as to further determine the precise category of the peach plant.”])
- With regards to claims 3 and 15, Xu et al. disclose a system and a computer implemented method for recognition of one or more herbs of claims 2 and 14, respectively, wherein the classification engine comprises a grouping module (Xu et al., Figs. 1, 5 & 6, Pg. 3 ¶ 0035 - 0038, 0043 and 0047, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0061 - 0063, Pg. 7 ¶ 0065, 0073 and 0075 - 0077) adapted to: identifying a parent class and at least one sub class for the identified herb, (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063 [The Examiner notes that, in view of page 17 lines 1 - 12 of the instant specification, the broadest reasonable interpretation of a parent class encompasses, at least, classifications including plant material and non-botanic material classes.]) and group the identified herb into the parent class and the at least one sub class. (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063)
- With regards to claims 4 and 16, Xu et al. disclose a system and a computer implemented method for recognition of one or more herbs of claims 3 and 15, respectively, wherein the grouping module is adapted to first identify a parent class from a plurality of parent classes (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063) and subsequently identify a sub class from a plurality of sub classes within the identified parent class, (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063) and wherein the herb is grouped into the identified sub class. (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063)
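By way of illustration only, and not as part of the grounds of rejection, the two-stage grouping addressed above — first selecting a parent class from a plurality of parent classes, then selecting a sub class only from within the identified parent class — may be sketched as follows. The taxonomy, herb names and scoring function below are hypothetical stand-ins and do not represent the claimed invention or the system of Xu et al.:

```python
import math  # stdlib only; no trained model is used in this sketch

# Hypothetical hierarchical taxonomy: parent classes, each with its own sub classes.
TAXONOMY = {
    "root_and_rhizome": ["ginseng", "ginger", "licorice"],
    "flower": ["chrysanthemum", "honeysuckle"],
}

def score(features, label):
    # Stand-in similarity score between extracted "features" and a label;
    # a real system would use a trained classifier here.
    return sum(features.get(ch, 0.0) for ch in label)

def classify(features):
    # Stage 1: identify a parent class from the plurality of parent classes.
    parent = max(TAXONOMY, key=lambda p: score(features, p))
    # Stage 2: identify a sub class only from within the identified parent class.
    sub = max(TAXONOMY[parent], key=lambda s: score(features, s))
    return parent, sub

parent, sub = classify({"g": 1.0, "i": 0.5, "n": 0.5})
```

Constraining the second stage to the sub classes of the identified parent class is what distinguishes such a hierarchical scheme from a flat classifier over all sub classes.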
- With regards to claims 5 and 17, Xu et al. disclose a system and a computer implemented method for recognition of one or more herbs of claims 4 and 16, respectively, comprising an inference module (Xu et al., Figs. 1, 5 & 6, Pg. 3 ¶ 0035 - 0038, 0043 and 0047, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0061 - 0063, Pg. 7 ¶ 0065, 0073 and 0075 - 0077) adapted to process the input image received from the image gateway by applying an inference process to the received image showing one or more herbs to infer a herb in the image. (Xu et al., Pg. 3 ¶ 0044 - 0047, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0055 - 0058, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065)
- With regards to claims 6 and 18, Xu et al. disclose a system and a computer implemented method for recognition of one or more herbs of claims 4 and 16, respectively, wherein the grouping module is adapted to apply a Multimodal AI model to processing the input image and grouping the identified herb. (Xu et al., Pg. 3 ¶ 0038 - 0039, Pg. 3 ¶ 0046 - Pg. 4 ¶ 0048, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0061 - 0063 [Xu et al. discloses, for example, that location information may be utilized when identifying a category of a plant.])
- With regards to claim 12, Xu et al. disclose a system for recognition of one or more herbs of claim 1, wherein the classification engine comprises applying a multiclass single label image classification model to recognise the type of herb and output the recognised herb. (Xu et al., Figs. 1, 2 & 4, Pg. 3 ¶ 0036 - 0040 and 0046 - 0047, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0061 - 0063, Pg. 7 ¶ 0065)
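For illustration only, a multiclass single label image classification model of the kind recited in claim 12 assigns exactly one label per image by taking the argmax of a probability distribution over all classes. The sketch below is a hypothetical, minimal rendering of that behaviour (the class names and logit values are invented), not the model of the instant application or of Xu et al.:

```python
import math

def softmax(logits):
    # Numerically stable softmax: converts per-class scores into probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_single_label(logits, labels):
    # Multiclass, single label: probabilities over all classes, one argmax output.
    probs = softmax(logits)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

labels = ["ginseng", "ginger", "licorice"]  # hypothetical herb classes
label, p = predict_single_label([2.0, 0.5, 0.1], labels)
```

The "single label" property follows from the argmax: however many classes the model scores, exactly one recognised herb is output.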
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 7 - 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. U.S. Publication No. 2023/0042208 A1.
- With regards to claim 7, Xu et al. disclose a system for recognition of one or more herbs of claim 6, wherein each parent class comprises sub classes. (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063) Xu et al. fail to disclose expressly wherein each parent class comprises between 40 and 80 sub classes; however, it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges, or measurements” involves only routine skill in the art. See MPEP § 2144.05. The normal desire of scientists or artisans to improve upon what is already generally known provides the motivation to determine where in a disclosed set of ranges the optimum lies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu et al. such that each parent class comprises between 40 and 80 sub classes, for a variety of reasons, such as to produce very precise classifications for a number of related target objects, to generate acceptable classifications for a very large number of unrelated target objects efficiently and/or in response to producing greater accuracy rates for classification of an existing dataset of target objects. Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that objects classified by their system include but are not limited to animals, people, scenery, natural objects, buildings, commodities, food, medicines, and/or daily necessities, etc., and that, for an image of a plant, their system aims to identify genus and species information of the plant; see at least page 2 paragraph 0033 and page 4 paragraphs 0048 and 0053 of Xu et al. Also, see MPEP § 2144.05. This modification could be completed according to well-known techniques in the art and would likely yield predictable results, in that each of the parent classes of Xu et al. would comprise between 40 and 80 sub classes into which an imaged object could be classified. Therefore, it would have been obvious to combine Xu et al. with parent classes comprising between 40 and 80 sub classes to obtain the invention as specified in claim 7.
- With regards to claim 8, Xu et al. disclose a system for recognition of one or more herbs of claim 6, wherein the identified herb is initially grouped into one predefined parent class and one predefined sub class. (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063) Xu et al. fail to disclose expressly utilizing ten predefined parent classes each comprising 60 predefined sub classes; however, it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges, or measurements” involves only routine skill in the art. See MPEP § 2144.05. The normal desire of scientists or artisans to improve upon what is already generally known provides the motivation to determine where in a disclosed set of ranges the optimum lies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu et al. to utilize ten predefined parent classes that each comprise 60 predefined sub classes, for a variety of reasons, such as to produce very precise classifications for a number of related target objects, to generate acceptable classifications for a very large number of unrelated target objects efficiently and/or in response to producing greater accuracy rates for classification of an existing dataset of target objects. Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that objects classified by their system include but are not limited to animals, people, scenery, natural objects, buildings, commodities, food, medicines, and/or daily necessities, etc., and that, for an image of a plant, their system aims to identify genus and species information of the plant; see at least page 2 paragraph 0033 and page 4 paragraphs 0048 and 0053 of Xu et al. Also, see MPEP § 2144.05. This modification could be completed according to well-known techniques in the art and would likely yield predictable results, in that the base device of Xu et al. would classify objects into one of ten predefined parent classes and one of 60 predefined sub classes. Therefore, it would have been obvious to combine Xu et al. with ten predefined parent classes each comprising 60 predefined sub classes to obtain the invention as specified in claim 8.
- With regards to claim 9, Xu et al. disclose a system for recognition of one or more herbs of claim 8, wherein the classification engine comprises a prediction module (Xu et al., Figs. 1, 5 & 6, Pg. 3 ¶ 0035 - 0038, 0043 and 0047, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0061 - 0063, Pg. 7 ¶ 0065, 0073 and 0075 - 0077) adapted to perform feature extraction (Xu et al., Pg. 3 ¶ 0044 - 0047, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0055 - 0058, Pg. 6 ¶ 0061 - 0062) and predict the type of herb. (Xu et al., Pg. 4 ¶ 0053, Pg. 5 ¶ 0055 - 0057, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065)
- With regards to claim 19, Xu et al. disclose a computer implemented method for recognition of one or more herbs of claim 16, wherein each parent class comprises sub classes. (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063) Xu et al. fail to disclose expressly wherein each parent class comprises between 40 and 80 sub classes; however, it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges, or measurements” involves only routine skill in the art. See MPEP § 2144.05. The normal desire of scientists or artisans to improve upon what is already generally known provides the motivation to determine where in a disclosed set of ranges the optimum lies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu et al. such that each parent class comprises between 40 and 80 sub classes, for a variety of reasons, such as to produce very precise classifications for a number of related target objects, to generate acceptable classifications for a very large number of unrelated target objects efficiently and/or in response to producing greater accuracy rates for classification of an existing dataset of target objects. Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that objects classified by their system include but are not limited to animals, people, scenery, natural objects, buildings, commodities, food, medicines, and/or daily necessities, etc., and that, for an image of a plant, their system aims to identify genus and species information of the plant; see at least page 2 paragraph 0033 and page 4 paragraphs 0048 and 0053 of Xu et al. Also, see MPEP § 2144.05. This modification could be completed according to well-known techniques in the art and would likely yield predictable results, in that each of the parent classes of Xu et al. would comprise between 40 and 80 sub classes into which an imaged object could be classified. Therefore, it would have been obvious to combine Xu et al. with parent classes comprising between 40 and 80 sub classes to obtain the invention as specified in claim 19.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. U.S. Publication No. 2023/0042208 A1 as applied to claim 9 above, and further in view of Rao et al. U.S. Publication No. 2022/0222469 A1.
- With regards to claim 10, Xu et al. disclose a system for recognition of one or more herbs of claim 9, wherein the prediction module is configured to utilise a deep convolutional neural network for feature extraction and prediction. (Xu et al., Pg. 3 ¶ 0047, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0058, Pg. 6 ¶ 0060 - 0062) Xu et al. fail to disclose explicitly an EfficientNet model. Pertaining to analogous art, Rao et al. disclose wherein the prediction module is configured to utilise an EfficientNet model for feature extraction and prediction. (Rao et al., Figs. 1 - 6, Pg. 5 ¶ 0061, Pg. 9 ¶ 0095 - 0097) Xu et al. and Rao et al. are combinable because they are both directed towards image processing systems that utilize neural networks to classify imaged objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the modified teachings of Xu et al. with the teachings of Rao et al. This modification would have been prompted in order to substitute the EfficientNet model of Rao et al. for the deep convolutional neural network of Xu et al. The EfficientNet model of Rao et al. could be substituted in place of the deep convolutional neural network of Xu et al. using well-known techniques in the art and would likely yield predictable results, in that in the combination an EfficientNet model would be utilized to identify a category of an object, a plant, in the captured image(s). Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that other training and identification models may be utilized and that the second object category identification model may be the same as or different from the first object category identification model; see at least page 4 paragraphs 0051 - 0052 and page 6 paragraph 0062 of Xu et al.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the modified base device of Xu et al. would utilize an EfficientNet model to identify the category of the object, the plant, in the captured image(s). Therefore, it would have been obvious to combine Xu et al. with Rao et al. to obtain the invention as specified in claim 10.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. U.S. Publication No. 2023/0042208 A1 as applied to claim 9 above, and further in view of Webb et al. U.S. Publication No. 2023/0252791 A1 in view of Schwartz et al. U.S. Publication No. 2021/0319263 A1.
- With regards to claim 11, Xu et al. disclose a system for recognition of one or more herbs of claim 9, wherein the prediction module is further adapted to: determine the correct predefined sub class based on the extracted image features, (Xu et al., Pg. 4 ¶ 0053, Pg. 5 ¶ 0055 - 0057, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065) and classify the herb into the determined predefined sub class. (Xu et al., Pg. 4 ¶ 0053, Pg. 5 ¶ 0055 - 0057, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065) Xu et al. fail to disclose explicitly comparing the extracted features with a set of predefined features corresponding to a predefined sub class, determining the correct predefined sub class based on a substantial matching of the extracted image features to the set of predefined features, and; wherein the type of herb is identified when the predefined sub class from the predicting step and the group step are substantially identical. Pertaining to analogous art, Webb et al. disclose determining the correct predefined sub class based on the extracted image features, (Webb et al., Figs. 2 - 4D, 6 & 8A - 9, Pg. 2 ¶ 0042 - 0045 and 0048, Pg. 4 ¶ 0050, Pg. 5 ¶ 0058 - 0061, Pg. 8 ¶ 0076 - 0077, Pg. 9 ¶ 0086 - 0087) and; wherein the type of herb is identified when the predefined sub class from the predicting step and the group step are substantially identical. (Webb et al., Figs. 2 - 4D, 6 & 8A - 9, Pg. 3 ¶ 0042 - 0045 and 0048, Pg. 5 ¶ 0060 - 0061, Pg. 7 ¶ 0072, Pg. 9 ¶ 0081 and 0086 - 0087 [“if ML1 and ML2 provide object detections that are consistent with each other, such objects may be identified as target objects”]) Webb et al. fail to disclose explicitly comparing the extracted features with a set of predefined features corresponding to a predefined sub class, and determining the correct predefined sub class based on a substantial matching of the extracted image features to the set of predefined features. Pertaining to analogous art, Schwartz et al. disclose comparing the extracted features with a set of predefined features corresponding to a predefined sub class, (Schwartz et al., Abstract, Figs. 1 - 3, Pg. 2 ¶ 0019 - 0021, Pg. 3 ¶ 0025 - 0028, Pg. 5 ¶ 0037) and determining the correct predefined sub class based on a substantial matching of the extracted image features to the set of predefined features. (Schwartz et al., Abstract, Figs. 1 - 3, Pg. 2 ¶ 0019 - 0021, Pg. 3 ¶ 0025 - 0028, Pg. 5 ¶ 0037) Xu et al. and Webb et al. are combinable because they are both directed towards image processing systems that utilize a plurality of machine learning models to classify objects, such as plants, in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the modified teachings of Xu et al. with the teachings of Webb et al. This modification would have been prompted in order to enhance the modified base device of Xu et al. with the well-known and applicable technique Webb et al. applied to a comparable device. Identifying the type of herb when the predefined sub class from the predicting step and the group step are substantially identical, as taught by Webb et al., would enhance the modified base device of Xu et al. by improving its ability to accurately and reliably identify plants in images and provide end-users with factual information concerning plants, since it would need both classifications to be in agreement before considering an imaged plant to be confidently classified. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the type of herb would be identified when the predefined sub class from the predicting step and the group step are substantially identical, so as to improve the ability of the modified base device of Xu et al. to accurately and reliably identify plants in images and provide end-users with factual information concerning plants. In addition, Xu et al. in view of Webb et al. and Schwartz et al. are combinable because they are all directed towards image processing systems that utilize machine learning models to classify imaged objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Xu et al. in view of Webb et al. with the teachings of Schwartz et al. This modification would have been prompted in order to substitute the classification process of Schwartz et al. for the classification technique utilized by the first or second object category identification model of Xu et al. The classification process of Schwartz et al. could be substituted in place of the classification technique utilized by the first or second object category identification model of Xu et al. using well-known techniques in the art and would likely yield predictable results, in that in the combination the features extracted by the first or second object category identification model of Xu et al. would be compared to a set of predefined features corresponding to a predefined sub class in order to identify the correct category of an object, a plant, in the captured image(s). Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that other training and identification models may be utilized and that the second object category identification model may be the same as or different from the first object category identification model; see at least page 4 paragraphs 0051 - 0052 and page 6 paragraph 0062 of Xu et al.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the combined base device would compare features extracted by the first or second object category identification model of Xu et al. to a set of predefined features corresponding to a predefined sub class in order to identify the correct category of an object, a plant, in the captured image(s). Therefore, it would have been obvious to combine Xu et al. with Webb et al. and Schwartz et al. to obtain the invention as specified in claim 11.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. U.S. Publication No. 2023/0042208 A1 as applied to claim 19 above, and further in view of Webb et al. U.S. Publication No. 2023/0252791 A1, Schwartz et al. U.S. Publication No. 2021/0319263 A1, and Rao et al. U.S. Publication No. 2022/0222469 A1.
With regard to claim 20, Xu et al. disclose a computer implemented method for recognition of one or more herbs of claim 19, wherein the identified herb is initially grouped into one predefined parent class and one predefined sub class, (Xu et al., Pg. 4 ¶ 0048, Pg. 4 ¶ 0053 - Pg. 5 ¶ 0056, Pg. 6 ¶ 0061 - 0063) and wherein the steps of feature extraction and predicting the type of herb are performed by utilizing a deep convolutional neural network (Xu et al., Pg. 3 ¶ 0047, Pg. 4 ¶ 0051 - 0053, Pg. 5 ¶ 0058, Pg. 6 ¶ 0060 - 0062) and wherein the step of predicting the type of herb from the extracted image features comprises: determining the correct predefined sub class based on the extracted image features, (Xu et al., Pg. 4 ¶ 0053, Pg. 5 ¶ 0055 - 0057, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065) classifying the herb into the determined predefined sub class, (Xu et al., Pg. 4 ¶ 0053, Pg. 5 ¶ 0055 - 0057, Pg. 6 ¶ 0061 - 0062, Pg. 7 ¶ 0065) and wherein the method is implemented by a multiclass single label image classification model. (Xu et al., Figs. 1, 2 & 4, Pg. 3 ¶ 0036 - 0040 and 0046 - 0047, Pg. 4 ¶ 0051 - 0053, Pg. 6 ¶ 0061 - 0063, Pg. 7 ¶ 0065) Xu et al. fail to disclose expressly utilizing ten predefined parent classes each comprising 60 predefined sub classes; however, it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges or measurements” involves only routine skill in the art. See MPEP § 2144.05. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu et al. to include ten predefined parent classes each comprising 60 predefined sub classes. Additionally, Xu et al. 
fail to disclose explicitly an EfficientNet model comparing the extracted features with a set of predefined features corresponding to a predefined sub class, determining the correct predefined sub class based on a substantial matching of the extracted image features to the set of predefined features; and wherein the type of herb is identified when the predefined sub class from the predicting step and the group step are substantially identical. Pertaining to analogous art, Webb et al. disclose determining the correct predefined sub class based on the extracted image features, (Webb et al., Figs. 2 - 4D, 6 & 8A - 9, Pg. 2 ¶ 0042 - 0045 and 0048, Pg. 4 ¶ 0050, Pg. 5 ¶ 0058 - 0061, Pg. 8 ¶ 0076 - 0077, Pg. 9 ¶ 0086 - 0087) and wherein the type of herb is identified when the predefined sub class from the predicting step and the group step are substantially identical. (Webb et al., Figs. 2 - 4D, 6 & 8A - 9, Pg. 3 ¶ 0042 - 0045 and 0048, Pg. 5 ¶ 0060 - 0061, Pg. 7 ¶ 0072, Pg. 9 ¶ 0081 and 0086 - 0087 [“if ML1 and ML2 provide object detections that are consistent with each other, such objects may be identified as target objects”]) Webb et al. fail to disclose explicitly an EfficientNet model comparing the extracted features with a set of predefined features corresponding to a predefined sub class, and determining the correct predefined sub class based on a substantial matching of the extracted image features to the set of predefined features. Pertaining to analogous art, Schwartz et al. disclose comparing the extracted features with a set of predefined features corresponding to a predefined sub class, (Schwartz et al., Abstract, Figs. 1 - 3, Pg. 2 ¶ 0019 - 0021, Pg. 3 ¶ 0025 - 0028, Pg. 5 ¶ 0037) and determining the correct predefined sub class based on a substantial matching of the extracted image features to the set of predefined features. (Schwartz et al., Abstract, Figs. 1 - 3, Pg. 2 ¶ 0019 - 0021, Pg. 3 ¶ 0025 - 0028, Pg. 5 ¶ 0037) Schwartz et al. 
fail to disclose explicitly an EfficientNet model. Pertaining to analogous art, Rao et al. disclose wherein the steps of feature extraction and predicting are performed by utilizing an EfficientNet model. (Rao et al., Figs. 1 - 6, Pg. 5 ¶ 0061, Pg. 9 ¶ 0095 - 0097) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu et al. to include between 40 and 80 sub classes in each parent class. The Examiner asserts that it has been held that when the general conditions of a claim are disclosed in the prior art, discovering the optimum “ranges or measurements” involves only routine skill in the art. The normal desire of scientists or artisans to improve upon what is already generally known provides the motivation to determine where in a disclosed set of percentage ranges is the optimum combination of percentages. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize parent classes comprising between 40 and 80 sub classes for a variety of reasons, such as to produce very precise classifications for a number of related target objects, to generate acceptable classifications for a very large number of unrelated target objects efficiently, and/or to produce greater accuracy rates for classification of an existing dataset of target objects. Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that objects classified by their system include but are not limited to animals, people, scenery, natural objects, buildings, commodities, food, medicines, and/or daily necessities, etc., and that for an image of a plant their system aims to identify genus and species information of the plant, see at least page 2 paragraph 0033 and page 4 paragraphs 0048 and 0053 of Xu et al. Also, see MPEP § 2144.05. 
This modification could be completed according to well-known techniques in the art and would likely yield predictable results, in that each of the parent classes of Xu et al. would comprise between 40 and 80 sub classes into which an imaged object could be classified. Additionally, Xu et al. and Webb et al. are combinable because they are both directed towards image processing systems that utilize a plurality of machine learning models to classify objects, such as plants, in images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the modified teachings of Xu et al. with the teachings of Webb et al. This modification would have been prompted in order to enhance the modified base device of Xu et al. with the well-known and applicable technique Webb et al. applied to a comparable device. Identifying the type of herb when the predefined sub class from the predicting step and the group step are substantially identical, as taught by Webb et al., would enhance the modified base device of Xu et al. by improving its ability to accurately and reliably identify plants in images and provide end-users with factual information concerning plants, since it would require both classifications to be in agreement before considering an imaged plant to be confidently classified. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the type of herb would be identified when the predefined sub class from the predicting step and the group step are substantially identical so as to improve the ability of the modified base device of Xu et al. to accurately and reliably identify plants in images and provide end-users with factual information concerning plants. In addition, Xu et al. in view of Webb et al. and Schwartz et al. 
are combinable because they are all directed towards image processing systems that utilize machine learning models to classify imaged objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Xu et al. in view of Webb et al. with the teachings of Schwartz et al. This modification would have been prompted in order to substitute the classification process of Schwartz et al. for the classification technique utilized by the first or second object category identification model of Xu et al. The classification process of Schwartz et al. could be substituted in place of the classification technique utilized by the first or second object category identification model of Xu et al. using well-known techniques in the art and would likely yield predictable results, in that in the combination the features extracted by the first or second object category identification model of Xu et al. would be compared to a set of predefined features corresponding to a predefined sub class in order to identify the correct category of an object, a plant, in the captured image(s). Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that other training and identification models may be utilized and that the second object category identification model may be the same as or different from the first object category identification model, see at least page 4 paragraphs 0051 - 0052 and page 6 paragraph 0062 of Xu et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the combined base device would compare features extracted by the first or second object category identification model of Xu et al. to a set of predefined features corresponding to a predefined sub class in order to identify the correct category of an object, a plant, in the captured image(s). 
Moreover, Xu et al. in view of Webb et al. in view of Schwartz et al. and Rao et al. are combinable because they are all directed towards image processing systems that utilize machine learning models to classify imaged objects. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Xu et al. in view of Webb et al. in view of Schwartz et al. with the teachings of Rao et al. This modification would have been prompted in order to substitute the EfficientNet model of Rao et al. for the deep convolutional neural network of Xu et al. or Schwartz et al. The EfficientNet model of Rao et al. could be substituted in place of the deep convolutional neural network of Xu et al. or Schwartz et al. using well-known techniques in the art and would likely yield predictable results, in that in the combination an EfficientNet model would be utilized to extract the features from the captured image(s) that are utilized to classify the imaged object. Furthermore, this modification would have been prompted by the teachings and suggestions of Xu et al. that other training and identification models may be utilized and that the second object category identification model may be the same as or different from the first object category identification model, see at least page 4 paragraphs 0051 - 0052 and page 6 paragraph 0062 of Xu et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the combined base device would utilize an EfficientNet model to extract the features from the captured image(s) that are utilized to classify the imaged object. Therefore, it would have been obvious to combine Xu et al., as modified to include ten predefined parent classes each comprising 60 predefined sub classes, with Webb et al., Schwartz et al. and Rao et al. to obtain the invention as specified in claim 20.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Croxford et al. U.S. Publication No. 2020/0034615 A1; which is directed towards an image processing method and system for detecting and recognizing objects in image data, wherein a plurality of classifiers implement hierarchical classification to recognize objects in images in a coarse to fine manner.
Ralls U.S. Publication No. 2018/0322353 A1; which is directed towards image processing systems and methods for identifying plant species, wherein image data, location data and additional metadata are utilized to identify a species of plant in a captured image.
Xu et al. U.S. Publication No. 2023/0044040 A1; which is directed towards an image processing method and system for recognizing plants, wherein trained neural networks are utilized to recognize plants in captured images.
Seeland et al., “Image-based classification of plant genus and family for trained and untrained plant species”, BMC Bioinformatics, Vol. 20, No. 4, Jan. 2019, pages 1 - 13; which is directed towards automated plant identification from images on the species, genus and family levels.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC RUSH/Primary Examiner, Art Unit 2677