DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
In the Response to Restriction Requirement, Applicant stated “Applicant elects Group II, Claims 26-36”.
Per form paragraph 8.25.02 (Election without Traverse Based on Incomplete Reply): Applicant’s election of Group II, Claims 26-36, in the reply filed on 01/06/2025 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)).
Examiner’s Objections
Claims 26, 27, 29, 30, 32, and 35 are objected to because of informalities, as set forth below.
Claim 26 is objected to because of the following informalities:
Claim language “generating one or more field camera-specific training images from the hyperspectral training image, the field camera-specific training images having an equivalent image resolution to that of a field camera used to generate the field images” should read “generating one or more field camera-specific training images from the hyperspectral training image, the field camera-specific training images having an equivalent image resolution to that of [[a]] the field camera used to generate [[the]] field images”. Claim language “labelling a sub-set of identified crop features with the one or more crop feature attributes; and storing the labelled classified crop features in a database as a training data set for the machine learning model” should read “labelling a sub-set of identified crop features with [[the]] one or more crop feature attributes; and storing [[the]] labelled classified crop features in a database as a training data set for the machine learning model”. These changes would provide the appropriate antecedent basis.
Claim 27 is objected to because of the following informalities:
Claim language “for each identified crop feature, one or more primary crop feature attributes based on the pixel attributes of the respective identified crop feature” should read “for the each identified crop feature, one or more primary crop feature attributes based on [[the]] pixel attributes of [[the]] respective identified crop feature” in order to provide the appropriate antecedent basis.
Claim 29 is objected to because of the following informalities:
Claim language “determining, for each identified crop feature, one or more secondary crop feature attributes based at least in part on the primary crop features attributes and ground control data for the crops and/or image” should read “determining, for the each identified crop feature, one or more secondary crop feature attributes based at least in part on the primary crop features attributes and ground control data for the crops and/or image” in order to provide the appropriate antecedent basis.
Claim 30 is objected to because of the following informalities:
Claim language “wherein the step of generating one or more field camera-specific training images comprises” should read “wherein the step of generating the one or more field camera-specific training images comprises”. Claim language “modifying the pixel values of the hyperspectral image based on the spectral response of the field camera” should read “modifying the pixel values of the hyperspectral image based on [[the]] spectral response of the field camera”. Claim language “generating the one or more field camera-specific training images from the modified pixel values of the hyperspectral image” should read “generating the one or more field camera-specific training images from [[the]] modified pixel values of the hyperspectral image”. These changes would provide the appropriate antecedent basis.
Claim 32 is objected to because of the following informalities:
Claim language “identifying, for each point in time, one or more crop features of each crop in the field camera-specific training images; and labelling, for each point in time, a sub-set of identified crop features with the one or more crop feature attributes including a respective time stamp” should read “identifying, for each point in time, the one or more crop features of each crop in the field camera-specific training images; and labelling, for each point in time, [[a]] the sub-set of identified crop features with the one or more crop feature attributes including a respective time stamp” in order to provide the appropriate antecedent basis.
Claim 35 is objected to because of the following informalities:
Claim language “training a machine learning model to identify crop features and determine crop feature attributes of crop features in images of crops generated by a field camera using the field-camera specific training images and training data set” should read “training [[a]] the machine learning model to identify crop features and determine the crop feature attributes of the crop features in the images of crops generated by [[a]] the field camera using the field-camera specific training images and the training data set” in order to provide the appropriate antecedent basis.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 26-36 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite an abstract idea as discussed below. This abstract idea is not integrated into a practical application for the reasons discussed below. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the reasons discussed below.
Step 1 of the 2019 Guidance requires the examiner to determine whether the claims fall within one of the statutory categories of invention. Applied to the present application, the claims belong to one of the statutory categories (i.e., a process).
Step 2A of the 2019 Guidance is divided into two Prongs. Prong One requires the examiner to determine whether the claims recite an abstract idea and whether that abstract idea falls into one of three enumerated groupings: mathematical concepts, mental processes, and certain methods of organizing human activity.
Independent Claim 26 is copied below; the limitations that recite the abstract idea are identified in the analysis that follows, and the remaining limitations are “additional elements”.
A method of generating training data for a machine learning model used to determine crop feature attributes of crop features in images of crops generated by a field camera for crop monitoring, the method comprising:
receiving image data containing a hyperspectral training image of crops generated in a controlled growth environment using a hyperspectral training camera;
generating one or more field camera-specific training images from the hyperspectral training image, the field camera-specific training images having an equivalent image resolution to that of a field camera used to generate the field images;
identifying one or more crop features of each crop in the field camera-specific training images;
labelling a sub-set of identified crop features with the one or more crop feature attributes; and
storing the labelled classified crop features in a database as a training data set for the machine learning model.
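(For illustration only and not part of the claim or the record: the recited steps describe a data-preparation pipeline. A minimal sketch of such a pipeline follows, under the assumption that the hyperspectral image is an (H, W, B) array and the camera response is a (B, 3) weight matrix; every function and variable name below is hypothetical.)

```python
# Illustrative sketch only; every name here is hypothetical and not from the record.
import numpy as np

def segment_crop_features(field_image, threshold=0.5):
    # Placeholder "identification" step: treat bright pixels as crop-feature
    # pixels and return their coordinates. A real system would use proper
    # feature segmentation.
    mask = field_image.mean(axis=-1) > threshold
    return list(zip(*np.nonzero(mask)))

def build_training_set(hyper_cube, camera_weights, label_fn, database):
    # "generating one or more field camera-specific training images":
    # collapse the hyperspectral bands into the field camera's bands.
    field_image = hyper_cube @ camera_weights      # (H, W, B) @ (B, 3) -> (H, W, 3)
    # "identifying one or more crop features of each crop".
    features = segment_crop_features(field_image)
    # "labelling a sub-set of identified crop features".
    labelled = [(pos, label_fn(field_image[pos])) for pos in features[:100]]
    # "storing the labelled classified crop features in a database
    # as a training data set for the machine learning model".
    database.extend(labelled)
    return database
```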
Under Step 2A, Prong One, we consider whether the claim recites a judicial exception (abstract idea). In the above claim, the identified limitations constitute an abstract idea because, under the broadest reasonable interpretation and in light of the specification, they fall into an abstract idea exception. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, they fall into the mental processes grouping (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion).
For example, the limitations of “identifying one or more crop features of each crop in the field camera-specific training images” and “labelling a sub-set of identified crop features with the one or more crop feature attributes” are treated by the Examiner as belonging to the mental processes grouping.
With regard to the mental steps, according to the 2019 PEG: “If a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it is still in the mental processes category unless the claim cannot practically be performed in the mind. See Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318 (Fed. Cir. 2016) (‘[W]ith the exception of generic computer implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.’); Mortg. Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324 (Fed. Cir. 2016) (holding that computer-implemented method for ‘anonymous loan shopping’ was an abstract idea because it could be ‘performed by humans without a computer’); Versata Dev. Grp. v. SAP Am., Inc., 793 F.3d 1306, 1335 (Fed. Cir. 2015) (‘Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind.’).”
Next, under Step 2A, Prong Two, we consider whether the claim that recites a judicial exception integrates that exception into a practical application.
Prong Two of Step 2A requires the examiner to determine whether the claims recite additional elements, or a combination of additional elements, that integrate the abstract idea into a practical application. This requires the additional elements to apply, rely on, or use the abstract idea in a manner that imposes a meaningful limit on it, such that the claim is more than a drafting effort designed to monopolize the abstract idea.
Additional elements: “training data”, “machine learning model”, “crop feature attributes”, “crop features”, “images”, “images of crops”, “field camera”, “hyperspectral training image”, “controlled growth environment”, “hyperspectral training camera”, “equivalent image resolution”, “field images”, “labelled classified crop features”, “database”, and “training data set” add extra-solution activity (i.e., mere data gathering and the source/type of data to be manipulated) using elements recited at a high level of generality (see MPEP 2106.05(g)); generally link the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)); and amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement the abstract idea on a computer, or mere use of a computer as a tool to perform the abstract idea (see MPEP 2106.05(f)).
Regarding the preamble of Claim 26: “A method of generating training data for a machine learning model used to determine crop feature attributes of crop features in images of crops generated by a field camera for crop monitoring” – it is a generically recited preamble and does not qualify as a meaningful limitation because it only generally links the use of the judicial exception to a particular technological environment or field of use.
The limitations of “receiving image data containing a hyperspectral training image of crops generated in a controlled growth environment using a hyperspectral training camera”, “generating one or more field camera-specific training images from the hyperspectral training image, the field camera-specific training images having an equivalent image resolution to that of a field camera used to generate the field images”, and “storing the labelled classified crop features in a database as a training data set for the machine learning model” are treated as extra-solution activity recited at a high level of generality (e.g., mere data gathering).
Various considerations are used to determine whether the additional elements are sufficient to integrate the abstract idea into a practical application. In this particular case, the claim does not recite a particular machine applying or being used by the abstract idea. The claim does not effect a real-world transformation or reduction of any particular article to a different state or thing. (Manipulating data from one form to another or obtaining a mathematical answer using input data does not qualify as a transformation in the sense of Prong Two.) The claim does not contain additional elements which describe the functioning of a computer, or which describe a particular technology or technical field, being improved by the use of the abstract idea. (This is understood in the sense of the claimed invention from Diamond v. Diehr, in which the claim as a whole recited a complete rubber-curing process including a rubber-molding press, a timer, a temperature sensor adjacent the mold cavity, and the steps of closing and opening the press, in which the recited use of a mathematical calculation served to improve that particular technology by providing a better estimate of the time when curing was complete. Here, the claim does not recite carrying out any comparable particular technological process.)
Instead, the additional elements in the claim appear to be merely insignificant extra-solution activity: merely receiving and manipulating data.
The additional elements in Claim 26 are recited at a high level of generality and do not recite a particular machine applying or being used by the abstract idea.
Therefore, the claims are directed to a judicial exception and require further analysis under Step 2B.
Step 2B of the 2019 Guidance requires the examiner to determine whether the additional elements cause the claim to amount to significantly more than the abstract idea itself. The considerations for this particular claim are essentially the same as the considerations for Prong 2 of Step 2A, and the same analysis leads to the conclusion that the claim does not amount to significantly more than the abstract idea.
Essentially, the above claim does not include additional elements sufficient to amount to significantly more than the judicial exception (Step 2B analysis) because the additional elements are well-understood, routine, and conventional in the relevant art, as evidenced by US20230049590 to Bauer et al. (hereinafter Bauer) in view of US20200117897 to Froloff (hereinafter Froloff).
Therefore, independent claim 26 is rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more and is not patent eligible.
With regard to the dependent claims, Claims 27-36 recite additional features/steps that are either part of the abstract idea of the independent claim or additional elements that are not meaningful because they are recited at a high level of generality and do not qualify as a particular machine or an eligible transformation. They therefore do not integrate the abstract idea into a practical application, nor do they amount to significantly more, based on the prior art of record.
The dependent claims are, therefore, also ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 26-33 and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over US20230049590 to Bauer et al. (hereinafter Bauer) in view of US20200117897 to Froloff (hereinafter Froloff).
Regarding Claim 26: Bauer discloses:
“A method of generating training data for a machine learning model used to determine crop feature attributes of crop features in images of crops generated by a field camera for crop monitoring, the method comprising” (“acquiring first training images using a first image acquisition technique, each first training image depicting a plant-related motive, wherein the plant-related motive is selected from a group comprising: an indoor or outdoor agricultural area, a plant, a plant product, a part of the plant, a part of the plant product”; para 0167 – “Almost every plant-related object class (plant of a particular group or species, fields covered with weeds, plants infected by a particular disease, plants with a nutrient deficiency, etc.) is characterized by a particular physiological state or state change that affects the object's reflective properties. Healthy crop and crop that is affected by disease reflect the sun light differently. Using hyperspectral imaging it's possible to detect very small changes in the physiology of the plant and correlate it with spectrum of reflected light for automatically labeling a large number of hyperspectral training images.”)
“receiving image data containing a hyperspectral training image of crops generated in a controlled growth environment using a hyperspectral training camera” (para 0167 – “Healthy crop and crop that is affected by disease reflect the sun light differently. Using hyperspectral imaging it's possible to detect very small changes in the physiology of the plant and correlate it with spectrum of reflected light for automatically labeling a large number of hyperspectral training images”);
“generating one or more field camera-specific training images from the hyperspectral training image” (Claim 7 – “the second image acquisition technique is hyperspectral image acquisition using a hyperspectral sensor.”; Claim 11 – “ the first training images are RGB images and wherein the second training images are hyperspectral images, the spatially aligning of the first and second training images of each of the pairs… generating an RGB representation of the second training image as a function of the computed red, green and blue intensity values”),
“the field camera-specific training images having an equivalent image resolution to that of a field camera used to generate the field images” (Claim 9 – “wherein the first image acquisition technique has a higher spatial resolution than the second image acquisition technique”; para 0068 – “for acquiring the first training images and/or the test image, a “standard” RGB camera of a smartphone, an RGB camera integrated in a drone used in precision farming, or an RGB camera integrated in a microscope used for acquiring magnified images of plants, plant products and parts thereof have a high spatial resolution can be used. The spatial resolution of these RGB cameras is larger (often by many orders of magnitude) than that of many hyperspectral cameras used e.g. for precision farming”);
“identifying one or more crop features of each crop in the field camera-specific training images; labelling a sub-set of identified crop features with the one or more crop feature attributes” (para 0034 – “the labels can be used in order to perform an image segmentation of the test image for identifying regions in an agricultural area where water, fertilizer, pesticides and/or fungicides need to be applied or where a certain type of plant should be grown or harvested. The one or more labeled test images output by the trained ML model can also be used as training images for various second order machine learning tasks”; see also para 0040).
Bauer does not specifically disclose:
“storing the labelled classified crop features in a database as a training data set for the machine learning model”.
However, Froloff discloses:
“storing the labelled classified crop features in a database as a training data set for the machine learning model” (para 0030 – “a system for adapting an in situ wireless sensor network monitoring system to AI analytic trained automated crop or plant monitoring system, accumulating wireless sensor image data into identifiable labeled image objects for training AI analytics. A plurality of integrated sensors network wirelessly collecting plant and insect primary sensor data have logic for primary sensor data transfer with associated sensor metadata onto a database.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Bauer, as taught by Froloff, in order to develop more accurate machine learning models from existing ones using the labelled training data stored in the database.
Regarding Claim 27: The Bauer/Froloff combination discloses the method of Claim 26.
Bauer further discloses:
“wherein the step of labelling comprises determining, for each identified crop feature, one or more primary crop feature attributes based on the pixel attributes of the respective identified crop feature” (para 0025 – “the label can comprise or consist of one or more data values of any type. For example, the label can comprise or consist of a Boolean value, a numerical value or an alphanumeric string. These labels can be used for indicating the membership of the pixel, pixel blob or image within a predefined class such as “soil”, “healthy plants” or “infected plants”).
Regarding Claim 28: The Bauer/Froloff combination discloses the method of Claim 27.
Bauer further discloses:
“the one or more primary attributes comprise one or more geometric and/or spectral attributes derived from the pixel attributes of the respective identified crop feature” (para 0037 – “the first features comprise one or more image features selected from a group comprising: an intensity value, an intensity gradient, a contrast, an intensity gradient direction, color indices and/or spectral indices, and linear as well as non-linear combinations of two or more of the aforementioned image features. According to some embodiments, the software program used for training the ML-model comprises one or more algorithms for automatically extracting a plurality of different image features from the image and during the training of the model, the model learns a subset of the first features and/or combinations of two or more of the first features which are particularly predictive in respect to a particular label aligned to the first training image”; and,
“optionally or preferably wherein the geometric attributes include one or more of: location, dimensions, area, aspect ratio, sub-feature size and/or count; and/or wherein the spectral attributes include one or more of: dominant colour, RGB, red edge and/or NIR pattern, hyperspectral signature, normalised difference vegetation index (NDVI), and normalised difference water index (NDWI)” (paras 0045-0051 – “the first and the second image acquisition technique are different image acquisition techniques, respectively, selected from a group comprising: hyperspectral image acquisition; RGB image acquisition; monochromatic image acquisition; for example, the monochromatic image acquisition may comprise using a monochromator (an optical device that transmits a mechanically selectable narrow band of wavelengths of light or other radiation chosen from a wider range of wavelengths available at the input); active image acquisition using an excitation light source; multispectral image acquisition; and IR image acquisition”).
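(For reference only and not part of the record: the NDVI and NDWI named in the claim are standard normalised-difference ratio indices. A minimal per-pixel computation is sketched below; NDWI is shown in its green/NIR formulation, and a NIR/SWIR variant also exists. All array names are hypothetical.)

```python
# Standard normalised-difference indices; array names are hypothetical.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-9):
    # Normalised Difference Water Index (green/NIR form): (G - NIR) / (G + NIR).
    return (green - nir) / (green + nir + eps)
```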
Regarding Claim 29: The Bauer/Froloff combination discloses the method of Claim 27.
Bauer further discloses:
“wherein the step of labelling further comprises determining, for each identified crop feature, one or more secondary crop feature attributes based at least in part on the primary crop features attributes and ground control data for the crops and/or image” (Claim 4 – “extracting second features from each of the second training images wherein the automatically assigning of the at least one label to each of the acquired second training images comprises analyzing the second features extracted from the second training image for predicting the at least one label of the second training image as a function of the second features extracted from the second training image”; para 0025 – “the label can comprise or consist of one or more data values of any type. For example, the label can comprise or consist of a Boolean value, a numerical value or an alphanumeric string. These labels can be used for indicating the membership of the pixel, pixel blob or image within a predefined class such as “soil”, “healthy plants” or “infected plants”; see also paras 0030, 0040); and,
“optionally or preferably wherein the ground control data comprises known information including one or more of: crop type, disease type, weed type, growth conditions, and crop age” (paras 0092 - 0094 – “The labels are selected from a group of predefined motive classes comprising: surface area of a plant or of a product or part of this plant, whereby the surface area is healthy; surface area of a plant or of a product or part of this plant, whereby the surface area shows symptoms associated with an infection of this area with a particular disease”).
Regarding Claim 30: The Bauer/Froloff combination discloses the method of Claim 26.
Bauer further discloses:
“wherein the step of generating one or more field camera-specific training images comprises: modifying the pixel values of the hyperspectral image based on the spectral response of the field camera, optionally by determining a set of spectral filter weights for each spectral band of the field camera based on the spectral response of the respective spectral band of the field camera” (para 0145 – “By comparing the spectral reference signature stored in the repository of module 118 with the spectral signatures of expects in each second training image, the module 118 can identify the one of the reference spectral signature being most similar to the spectral signature of the respective pixel . The class name of this “most similar reference spectral signature” of the pixel in the second training image is assigned to this pixel. Alternatively, a numerical value being indicative of a likelihood that the pixel in the second training image depicts the type of objects (interpreted as having the best response, added by examiner) represented by the “most similar reference spectral signature” (i.e. having the best weight, added by examiner) is assigned as a label to the pixel of the second training image”; see also para 0147), and
“applying the set of filter weights to the spectral bands of each pixel of the hyperspectral image; and generating the one or more field camera-specific training images from the modified pixel values of the hyperspectral image; and,
optionally or preferably wherein the one or more field camera-specific training images comprise one or more of: an RGB, near infrared and red-edge image” (para 0120 – “The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths”; para 0122 – “a digital camera that use a CMOS or CCD image sensor that comprises three different sensors for the three spectral ranges corresponding to red, green and blue light of the visible spectrum can be used for obtaining an RGB image. Some RGB image acquisition systems can use a Bayer filter arrangement wherein green is given twice as many detectors as red and blue (ratio 1:2:1) in order to achieve higher luminance resolution than chrominance resolution. The sensor has a grid of red, green, and blue detectors”).
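(For illustration only and not part of Bauer's disclosure or the record: weighting hyperspectral bands by a camera's spectral response, as recited in Claim 30, is commonly implemented as a weighted sum over bands. A minimal sketch under that assumption follows; all names are hypothetical.)

```python
# Minimal sketch, assuming the field camera's spectral response is supplied as
# one weight per hyperspectral band for each output band (e.g. R, G, B).
# All names are hypothetical and not drawn from the cited references.
import numpy as np

def simulate_field_camera(hyper_cube, response):
    # hyper_cube: (H, W, B) hyperspectral image; response: (B, 3) filter
    # weights, normalised so that each output band's weights sum to 1.
    weights = response / response.sum(axis=0, keepdims=True)
    # "applying the set of filter weights to the spectral bands of each pixel":
    return hyper_cube @ weights          # -> (H, W, 3) camera-specific image
```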
Regarding Claim 31: The Bauer/Froloff combination discloses the method of Claim 26.
Bauer further discloses:
“wherein the step of generating one or more field camera-specific training images comprises re-sampling the hyperspectral training image to substantially match spatial and/or pixel resolution of the field camera; and, optionally or preferably wherein the re-sampling is based on one or more equivalence parameters of the field camera” (para 0057 – “it may be sufficient to acquire one or more hyperspectral or multispectral reference images depicting an agricultural area covered with this particular plant. Then, a reference signature is extracted from those parts of the one or more reference images depicting these plants. The feature extraction step for extracting second features from the second training images comprises extracting spectral signatures at each pixel of each second training image and use them to divide the second training image in groups of similar pixels (segmentation) using different approaches. As a last step, a label, e.g. a class name, is assigned to each of the segments (or to each pixel in the segments) by comparing the signatures of each pixel (or an averaged spectral signature of the pixels of a segment) with the known spectral reference signature of the particular plant (or other object) of interest. Ultimately correct matching of spectral signatures (interpreted as equivalence parameters, added by examiner) in the pixels or segments of the second training images with the reference spectral signature leads to accurate prediction and assignment of labels to the second training images which indicate the presence of the above-mentioned plants of interest”).
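(For illustration only and not part of the record: re-sampling to a coarser field-camera resolution, as recited in Claim 31, can be sketched as block averaging when the resolution ratio is an integer. All names are hypothetical.)

```python
# Minimal sketch of spatial re-sampling by block averaging, assuming the
# hyperspectral image's resolution is an integer multiple of the field
# camera's. All names are hypothetical.
import numpy as np

def resample_to_field_camera(image, factor):
    # image: (H, W, C) array; factor: integer source-to-target resolution ratio.
    h, w, c = image.shape
    h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))              # average each factor x factor block
```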
Regarding Claim 32: The Bauer/Froloff combination discloses the method of Claim 26.
Bauer further discloses:
“wherein the image data comprises a series or plurality of hyperspectral training images of the crops, each hyperspectral training image taken at a different point in time, and wherein the method comprises: generating one or more field camera-specific training images from each hyperspectral training image in the time series” (para 0108 – “in many real-world application scenarios, first and second training images of the same motive are taken in close temporal succession (interpreted as in the time series, added by examiner), e.g. within less than one hour, preferably less than 20 minutes, preferably less than 5 minutes, still preferably less than 5 seconds delay”; para 0114 – “A “training image” as used herein is a digital image used for training an ML-model. To the contrary, a “test image” as used herein is a digital image used at test time (“prediction time”) as input to the already trained model. While the training images are provided to the model to be trained in association with labels considered to be correct (“ground truth”), the test image is provided to the trained ML-model without any label assigned. Rather, it is the task of the trained ML-program to calculate and predict the labels and label positions correctly”);
“identifying, for each point in time, one or more crop features of each crop in the field camera-specific training images” (para 0174 – “the first training images can be obtained in one or more flights of the first carrier system, and the second training images can be obtained in one or more flights of the second carrier systems. The flights of the first and the second carrier systems are performed at different times, in particular with inter-flight time interval of at least 5 minutes, or even some hours. During this time interval, the position of the plants (interpreted as the crop feature, added by examiner) may have changed slightly, e.g. because of the wind, or because of the movement or re-orientation of the plant or plant parts towards the light”; see also paras 0171 and 0173);
“labelling, for each point in time, a sub-set of identified crop features with the one or more crop feature attributes including a respective time stamp” (para 0029 – “each label can be a number, e.g. an integer or a float. According to embodiments, the trained ML-model has learned and is configured to automatically assign these numerical values as labels to any input image acquired via the first image acquisition technique at test time. Hence, the trained ML has learned to automatically label input images with numeric values, e.g. percentage values. The labeled numerical values of one or more test images can be used for assessing plant health or other phenotypic information. For example, this assessment can be implemented as a regression problem making use of the automatically labeled test images. For example, the numerical value can be a value “68%” indicating the likelihood that a pixel, pixel blob or image depicts an object of a particular class, e.g. “soil””).
Regarding Claim 33: The Bauer/Froloff combination discloses the method of Claim 32.
Bauer further discloses:
“comprising applying one or more geometric and/or spectral corrections to the hyperspectral training images or the one or more field camera-specific training images associated with each different point in time to account for temporal variations in camera position and lighting conditions” (para 0057 – “a label, e.g. a class name, is assigned to each of the segments (or to each pixel in the segments) by comparing the signatures of each pixel (or an averaged spectral signature of the pixels of a segment) with the known spectral reference signature of the particular plant (or other object) of interest. Ultimately correct matching of spectral signatures in the pixels or segments of the second training images with the reference spectral signature leads to accurate prediction and assignment of labels to the second training images which indicate the presence of the above-mentioned plants of interest”; para 0114 – “it is the task of the trained ML-program to calculate and predict the labels and label positions correctly”; see also paras 0022 and 0108).
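(For illustration only and not part of the record: a spectral correction of the kind recited in Claim 33 can be sketched as a per-band gain computed from a ground-control patch assumed to have constant reflectance across the time series. All names are hypothetical.)

```python
# Minimal sketch of a per-band spectral (white-balance) correction against a
# ground-control patch assumed to have constant true reflectance over time.
# All names are hypothetical.
import numpy as np

def match_white_balance(image, patch_rows, patch_cols, reference_mean):
    # image: (H, W, C); patch_rows/patch_cols: slices locating the control
    # patch; reference_mean: (C,) per-band means of that patch in the
    # reference image.
    observed = image[patch_rows, patch_cols].reshape(-1, image.shape[-1]).mean(axis=0)
    gain = reference_mean / np.maximum(observed, 1e-9)
    return image * gain                  # broadcast per-band gains over all pixels
```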
Regarding Claim 35: The Bauer/Froloff combination discloses the method of Claim 26.
Bauer further discloses:
“comprising training a machine learning model to identify crop features and determine crop feature attributes of crop features in images of crops generated by a field camera using the field-camera specific training images and training data set; and, optionally or preferably, wherein the machine learning model is or comprises a deep or convolutional neural network” (para 0034 – “the labels can be used in order to perform an image segmentation of the test image for identifying regions in an agricultural area where water, fertilizer, pesticides and/or fungicides need to be applied or where a certain type of plant should be grown or harvested. The one or more labeled test images output by the trained ML model can also be used as training images for various second order machine learning tasks”; see also para 0040; para 0149 – “The aligned labels 124, i.e., the content of the labels and also the an indication of the one or more pixels of the first training image to which the label is aligned, are input together with the first training image 205 to which the labels have been aligned into a software 126 configured for training the machine learning model. For example, the software 126 can comprise a module 128 comprising a plurality of algorithms for extracting features 130 from each of the first training images. In addition, the software 126 can comprise additional algorithms and modules needed during training”; para 0182 – “the spatial alignment of the labels and the first image features may enable a machine learning model, e.g. a semantic segmentation deep neural network, to learn spatial correlations between the labels and the first features during the training”).
Regarding Claim 36: The Bauer/Froloff combination discloses the method of Claim 26.
Bauer further discloses:
“comprising generating the image data by taking a plurality of hyperspectral images over a period of time using a hyperspectral camera in substantially the same position relative to the crops; and/or wherein each hyperspectral image is taken from substantially the same position relative to the crops” (para 0032 – “the spatially aligning of the first and second training images of each of the pairs comprises aligning the first and second images depicting the same motive based on their respective geopositions, thereby providing a roughly aligned image pair; and then refining the alignment as a function of pixel intensity and/or color similarities such that intensity-differences and/or color-differences between the first and second images are minimized for providing the alignment of the first and second image of the pair. The plant-related motive depicted in the test image is preferably similar to the plant-related motive depicted in the first and second training images. For example, if the training was performed on individual plants of a particular species, the test image should also be an image of a plant of the same or a similar species acquired at a similar relative position of optical sensor and plant”; see also 0022, 0108, 0147).
Allowable Subject Matter
The following is an examiner’s statement of reasons for the indication of allowable subject matter.
Claim 34 contains allowable subject matter. Claim 34 would be allowable if rewritten or amended to overcome the rejection under 35 U.S.C. 101 set forth in this Office action.
With regard to Claim 34, the combined teachings of Bauer, Froloff, McGuire, Chowdhary, and Alshurafa show all the elements of the claim except “applying a geometric transformation to the other hyperspectral training images or the other field camera-specific training images associated different points in time to substantially match the spatial location and pixel sampling of the reference image, optionally based on the location size of one or more pixels of one or more ground control points in each image; and/or applying a white balance to the other hyperspectral training images or the other field camera-specific training images associated different points in time to substantially match the white balance of the reference image, optionally based on one or more pixels values of one or more ground control points in each image”, in combination with the rest of the claim’s limitations as claimed and defined by the applicant.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US7058197 to McGuire et al. (hereinafter McGuire) discloses a multi-variable model for identifying crop response zones in a field.
US20240061440 to Chowdhary et al. (hereinafter Chowdhary) discloses an apparatus and method for agricultural data collection and agricultural operations.
US20200005455 to Alshurafa et al. (hereinafter Alshurafa) discloses a hyperspectral imaging sensor.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lyudmila Zaykova-Feldman whose telephone number is (469)295-9269. The examiner can normally be reached 7:30am - 4:30pm, Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen Vazquez, can be reached on 571-272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LYUDMILA ZAYKOVA-FELDMAN/
Examiner Art Unit 2857
/LINA CORDERO/ Primary Examiner, Art Unit 2857