DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/31/2025 has been entered.
Response to Arguments
Applicant’s arguments, see Remarks page 6, filed 10/31/2025, with respect to the rejection of claim 8 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection of claim 8 has been withdrawn.
Applicant’s arguments, see Remarks pages 6-8, filed 10/31/2025, with respect to the rejection of claims 1 and 6 under 35 U.S.C. 103 have been fully considered and are moot in view of the new grounds of rejection (detailed in the rejections below) necessitated by Applicant’s amendment to the claims.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“…an image management unit” in claims 1 and 6. The image management unit has been interpreted under 112(f) and corresponds to hardware, software, or a combination thereof for processing and storing image information (see Spec., 0031-0032 and 0036) and equivalents thereof.
“…a cloud cover calculation unit” in claims 1 and 6. The cloud cover calculation unit has been interpreted under 112(f) and corresponds to hardware, software, or a combination thereof for processing image information (see Spec., 0031-0032 and 0036) and equivalents thereof.
“…a cloud cover information storage unit” in claims 1 and 6. The cloud cover information storage unit has been interpreted under 112(f) and corresponds to hardware, software, or a combination thereof for storing image information (see Spec., 0031-0032 and 0036) and equivalents thereof.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objections
Claims 1 and 6 are objected to because of the following informalities:
Regarding claims 1 & 6, the limitation “wherein the cloud cover calculation model performs semantic segmentation that classifies every pixel of the image into object classes including thick cloud, thin cloud, cloud shadow, and background, is trained with class-specific weights, and a weight for the thin-cloud class is set differently depending on whether a ratio of thin cloud contained in the image is at least a preset ratio,” should be corrected to provide separation and clarity between the model’s task and the model’s training. For example, “wherein the cloud cover calculation model, which performs semantic segmentation that classifies every pixel of the image into object classes including thick cloud, thin cloud, cloud shadow, and background, is trained with class-specific weights, and a weight for the thin-cloud class is set differently depending on whether a ratio of thin cloud contained in the image is at least a preset ratio.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 6 recite the limitation "wherein the cloud cover calculation model performs semantic segmentation that classifies every pixel of the image…depending on whether a ratio of thin cloud contained in the image is at least a preset ratio." There is insufficient antecedent basis for this limitation in the claim. For the purposes of examination, the limitation is interpreted as “wherein the cloud cover calculation model performs semantic segmentation that classifies every pixel of the image information…depending on whether a ratio of thin cloud contained in the image information is at least a preset ratio.”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Guan et al. (US 20170357872 A1), hereinafter referenced as Guan, in view of Chen et al. (CN112200787A), hereinafter referenced as Chen, Liang et al. (CN113096129A), hereinafter referenced as Liang, Wakui et al. (US 20210287052 A1), hereinafter referenced as Wakui, and Waibel et al. (“InstantDL: an easy-to-use deep learning pipeline for image segmentation and classification”), hereinafter referenced as Waibel.
Regarding claim 1, Guan discloses: A system implemented on one or more computing devices for automatically analyzing a cloud cover (Guan: 0003: “the present disclosure relates to using a hybrid of data-driven and clustering methods in computer programs or electronic digital data processing apparatus for cloud detection in remote sensing imagery.”) in an optical satellite image (Guan: 0096: “Remote sensor 112 may be aerial sensors, such as satellites…”; 0159: “Many of the examples presented herein assume that the satellite imagery is capable of detecting various bands, such as the blue, green, red, red edge, and near infrared (NIR) bands at various resolutions.”) based on machine learning (Guan: 0163: “…the examples provided herein will refer to the machine learning technique used by the cloud detection subsystem 170…”), the system comprising:
an image management unit receiving image information collected from a satellite (Guan: 0059: “Examples of field data 106 include …(i) imagery data (for example, imagery and light spectrum information from an agricultural apparatus sensor, camera, computer, smartphone, tablet, unmanned aerial vehicle, planes or satellite)”; 0075: “In an embodiment, model and field data is stored in model and field data repository 160.”; Wherein satellite images are stored in the field data repository), creating and storing a list of the collected image information (Guan: 0075: “Model and field data may be stored in data structures in memory, rows in a database table, in flat files or spreadsheets, or other forms of stored digital data.”), and determining whether or not the collected image information is new image information (Guan: 0187: “…the remote sensing imagery 510 is provided via the model data and field data repository 160 and/or external data 110. For example, the provider of the remote sensing imagery 510 may periodically send updated images to the model data and field data repository 160… the cloud detection subsystem 170 may retrieve the remote sensing imagery 510 from the model data and field data repository 160 and/or external data 110.”; Wherein the data and field data repository notifies what is new image information.);
a cloud cover calculation unit receiving the new image information (Guan: 0160: “the remote sensing imagery 510 used as input to the cloud detection subsystem 170 may be updated only on a periodic basis… there may be a significant delay between image captures of the area being monitored by the satellite.”; Wherein the images are used as input once the images are updated) from the image management unit (Guan: 0119: “The cloud detection subsystem 170 collects images and other information related to an area, such as an agricultural field, from the model data and field data repository 160 and/or external data 110 and determines which portions correspond to clouds and/or cloud shadows.”) and calculating a cloud cover based on machine learning (Guan: 0163: “…the examples provided herein will refer to the machine learning technique used by the cloud detection subsystem 170…”);
and a cloud cover information storage unit receiving and storing the cloud cover image information for which the cloud cover has been calculated from the cloud cover calculation unit (Guan: Figure 5; 0140: “The cloud mask 512 and shadow mask 513 are provided as output, and may be digitally stored in electronic digital storage, such as main memory, coupled to or within the cloud detection subsystem 170.”), wherein the cloud cover calculation unit includes a machine learning-based cloud cover calculation model (Guan: Figure 5) inputting the new image information (Guan: 0160: “the remote sensing imagery 510 used as input to the cloud detection subsystem 170 may be updated only on a periodic basis… there may be a significant delay between image captures of the area being monitored by the satellite.”) received from the image management unit as an input pattern and outputting the image information for which the cloud cover has been calculated as an output pattern (Guan: 0119: “The cloud detection subsystem 170 collects images and other information related to an area, such as an agricultural field, from the model data and field data repository 160 and/or external data 110 and determines which portions correspond to clouds and/or cloud shadows.”; Wherein the satellite images are the input and the cloud mask 512 and shadow mask 513 are the output.), and wherein the output pattern is generated by the cloud cover calculation model by detecting objects by object class on a pixel basis (Guan: 0181: “In some embodiments, in addition to marking clouds, the training mask also identifies types of clouds (e.g. normal clouds, haze, etc.) and/or features related to clouds (such as cloud shadows).”; 0209: “At block 825, the pixel clusterer 506 determines whether the selected neighboring pixel is a candidate cloud pixel as defined by the candidate cloud mask”; Wherein pixels within the image are classified based on object classes), and
the cloud cover calculation model is generated by performing machine learning with an input data set being input thereto (Guan: Figure 5; 0180: “…the high-precision pixel classifier 501 and/or the high-recall pixel classifier 503 may require labeled “ground truth” data in order to train the classifier and develop the function that maps between pixel features and classifications.”), the input data set including a plurality of pieces of image information and labels corresponding to object classes of the respective pieces of image information (Guan: 0181: “One way to generate a labeled training set is to collect a sample of satellite images depicting areas with typical or a variety of cloud coverings…The sample images can then be manually labeled by experts in the field to identify which pixels represent clouds and which pixels represent ground.”),
wherein the cloud cover calculation model performs semantic segmentation that classifies every pixel of the image into object classes including thick cloud, thin cloud, cloud shadow, and background (Claim limitation is interpreted according to the interpretation recited in the rejection of claim 1 under 35 U.S.C. 112(b) disclosed above) (Guan: 0180-0181: “the high-precision pixel classifier 501 and/or the high-recall pixel classifier 503 may require labeled “ground truth” data in order to train the classifier and develop the function that maps between pixel features and classifications.
One way to generate a labeled training set is to collect a sample of satellite images depicting areas with typical or a variety of cloud coverings. The sample images can then be manually labeled by experts in the field to identify which pixels represent clouds and which pixels represent ground…In some embodiments, in addition to marking clouds, the training mask also identifies types of clouds (e.g. normal clouds, haze, etc.) and/or features related to clouds (such as cloud shadows).”; Wherein normal clouds, haze, cloud shadows, and ground constitute thick cloud, thin cloud, cloud shadow, and background, respectively).
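For illustration only, the pixel-level classification mapped above may be sketched in Python as follows. The class ordering, score layout, and function names are hypothetical and do not appear in Guan; this is a minimal sketch of how a per-pixel class map over the four recited classes yields a cloud-cover value, not Guan’s actual implementation:

```python
# Minimal sketch, not Guan's implementation: derive a cloud-cover fraction
# from per-pixel class scores produced by a semantic-segmentation model.
import numpy as np

# Assumed class ordering, for illustration only.
CLASSES = ["thick_cloud", "thin_cloud", "cloud_shadow", "background"]

def cloud_cover_from_scores(scores: np.ndarray) -> float:
    """scores: (H, W, 4) per-pixel class scores; returns cloud fraction."""
    labels = scores.argmax(axis=-1)        # (H, W) map: one class per pixel
    cloud = (labels == 0) | (labels == 1)  # thick-cloud or thin-cloud pixels
    return float(cloud.mean())             # fraction of cloudy pixels

# Example with random scores for a 512 x 512 scene.
rng = np.random.default_rng(0)
print(cloud_cover_from_scores(rng.random((512, 512, 4))))
```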
Guan does not disclose expressly: wherein after being generated, the cloud cover calculation model is evaluated with test data being input thereto, while proportions of indicators are reflected in the test data according to indicator characteristics, wherein the indicator characteristics include snow/ice indicators, city indicators, river/sea indicators, forest indicators, desert indicators, and other indicators.
Chen discloses: a cloud cover calculation model evaluated (Chen: Abstract) with test data being input thereto (Chen: 0083: “As shown in Figure 2, the cloud detection process mainly includes two parts: training phase and testing phase.”), while proportions of indicators are reflected in the test data according to indicator characteristics (Chen: 0080: “Establishment of training sample set: Select sample points representing different types of clouds and ground objects from the existing image dataset to form a training sample set…When selecting sample points that can typically represent different types of clouds and landforms from the Landsat8 images, in order for the model to achieve the best classification effect, the number of sample points for each type is allocated according to the area ratio, that is, types occupying a large area require more samples than types occupying a small area.”; Wherein the number of samples for each type is selected based on their area), wherein the indicator characteristics include snow/ice indicators, city indicators, ocean indicators, vegetation indicators, desert indicators, and other indicators (Chen: 0036: “The existing image dataset includes Landsat8 images of different locations and seasons within a certain area, including vegetation, urban areas, lakes, Gobi, snowy areas, deserts and ocean surface types”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the generation of training and test datasets according to cloud and landform area ratios taught by Chen for the generation of training and testing datasets for the models taught by Guan. The suggestion/motivation for doing so would have been “in order for the model to achieve the best classification effect…types occupying a large area require more samples than types occupying a small area” (Chen: 0080). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
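For illustration only, the area-ratio allocation taught by Chen at 0080 may be sketched as follows. The surface types, ratios, and sample budget are hypothetical and are not taken from Chen; the sketch merely shows sample counts distributed in proportion to the area each type occupies:

```python
# Minimal sketch, assuming hypothetical area ratios: allocate training/test
# sample counts per type in proportion to the area each type occupies.
def allocate_samples(area_ratios: dict, total: int) -> dict:
    # Rounding may leave the grand total slightly off the budget.
    return {t: round(total * r) for t, r in area_ratios.items()}

ratios = {"vegetation": 0.40, "ocean": 0.30, "desert": 0.15,
          "urban": 0.10, "snow": 0.05}
print(allocate_samples(ratios, 10000))
# Types occupying a large area receive more samples than small-area types.
```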
Guan in view of Chen does not disclose expressly: wherein the indicator characteristics include snow/ice indicators, city indicators, river/sea indicators, forest indicators, desert indicators, and other indicators.
Liang discloses: training data with indicator characteristics including snow/ice indicators, city indicators, river/sea indicators, forest indicators, and other indicators (Liang: 0027: “hyperspectral satellite images of four types of underlying surfaces, including water bodies (oceans, rivers, lakes), vegetation (farmland, grassland, forest), artificial surfaces (industrial land, roads, towns), and others (bare land, snow and coastline), were selected as training samples”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique as taught by Liang of incorporating forest and river training images into the training and testing dataset disclosed by Guan in view of Chen. The suggestion/motivation for doing so would have been “…to avoid the phase error caused by the change of the underlying surface features over time” (Liang: 0027; Wherein more locations increase the accuracy and flexibility of the trained model). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Guan in view of Chen and Liang does not disclose expressly: wherein the system is configured to provide cloud cover analysis result value in consideration of reflectance according to the indicator characteristics.
Chen further discloses: wherein the system is configured to provide cloud cover analysis result value in consideration of reflectance according to the indicator characteristics (Chen: 0079: “based on the differences in reflectance and texture characteristics between clouds and the underlying surface, a series of new features are extracted from the four bands of visible light and near-infrared to distinguish between clouds and the underlying surface of ground objects, increase the data dimension, and improve the classification accuracy to a certain extent.”;
0089-0090: “the features extracted from the pre-processed image include…Reflection spectral characteristics: Based on the characteristics of clouds showing high brightness and continuous coverage in optical remote sensing images, the spectral information of visible light and near-infrared bands is used as characteristics to distinguish clouds from the underlying surface of ground objects. In the process of remote sensing image interpretation, it is generally believed that each type of ground object has a corresponding spectral characteristic curve in each band, so the reflection characteristics of the ground object in each band can be used as the main basis for ground object interpretation. Due to the unique reflective characteristics of clouds, they often appear as high brightness and continuous coverage in optical remote sensing images. Therefore, using the spectral information of visible light and near-infrared bands as features can distinguish clouds from ground objects.”;
Wherein the determination of whether a pixel is classified as a cloud or a ground object, based on the reflection spectral characteristics of each type of ground object and of the cloud objects, constitutes the cloud cover analysis result value).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the features extracted from the image feature extraction process as further taught by Chen into the cloud detection subsystem models disclosed by Guan in view of Chen and Liang. The suggestion/motivation for doing so would have been “…based on the differences in reflectance and texture characteristics between clouds and the underlying surface, a series of new features are extracted from the four bands of visible light and near-infrared to distinguish between clouds and the underlying surface of ground objects, increase the data dimension, and improve the classification accuracy to a certain extent.” (Chen: 0079). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
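For illustration only, the reflectance-based discrimination described by Chen at 0089-0090 may be sketched as follows. The band layout, scaling, and threshold value are hypothetical and are not taken from Chen; the sketch only illustrates flagging candidate cloud pixels by their high mean reflectance across the visible and NIR bands:

```python
# Minimal sketch, assuming reflectance bands scaled to [0, 1] and an
# illustrative threshold: clouds appear bright across visible and NIR bands,
# so a mean-brightness feature separates candidate cloud pixels from ground.
import numpy as np

def brightness_feature(blue, green, red, nir):
    """Each input: (H, W) reflectance band; returns per-pixel mean."""
    return np.stack([blue, green, red, nir], axis=-1).mean(axis=-1)

def candidate_cloud_mask(blue, green, red, nir, threshold=0.35):
    return brightness_feature(blue, green, red, nir) > threshold

# Example on random bands for a 64 x 64 tile.
rng = np.random.default_rng(1)
bands = [rng.random((64, 64)) for _ in range(4)]
print(candidate_cloud_mask(*bands).mean())  # fraction flagged as candidate cloud
```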
Guan in view of Chen and Liang does not disclose expressly: wherein the cloud cover calculation model is trained with class-specific weights, and a weight for the thin-cloud class is set differently depending on whether a ratio of thin cloud contained in the image is at least a preset ratio (Claim limitation is interpreted according to the interpretation recited in the rejection of claim 1 under 35 U.S.C. 112(b) disclosed above).
Wakui discloses: the training of a semantic segmentation model using a mini-batch learning process (Wakui: Abstract). Wherein for each mini-batch set, a class ratio is calculated for each class present in the mini-batch, by counting all of a class’s pixels in the mini-batch and dividing by the total number of pixels in the mini-batch (Wakui: 0063: “The calculation unit 51 calculates an area ratio of each of the plurality of classes in the mini-batch data 11 . More specifically, the calculation unit 51 adds, for each class, the number of pixels of regions, which are manually designated in the divided annotation images 215 of the divided annotation image group 13 of the mini-batch data 11 generated from the generation unit 50 . Next, the calculation unit 51 calculates an area ratio by dividing the added number of pixels by the total number of pixels of the divided annotation images 215.”). In addition, if a class’s ratio exceeds a threshold value in a mini-batch, the class’s training weight coefficient is reduced (Wakui: 0089: “In FIG. 15, assuming that the setting value is 50%, as shown in a table 75, for the 30th set of the mini-batch data 11 , a case where the class-2 undifferentiated cells of which the area ratio is 56% higher than the setting value are specified as a non-rare class is exemplified.”; 0093: “the non-rare class of which the area ratio is higher than the setting value is specified as the correction target class, and processing of setting the weight for the first loss value to be smaller than the weight for the second loss value is executed as the correction processing.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the class-specific, mini-batch-specific adaptive training coefficients based on class ratios taught by Wakui for the training of the pixel classifier disclosed by Guan in view of Chen and Liang by separating the training dataset into batches. The suggestion/motivation for doing so would have been “Therefore, as in the first embodiment, it is possible to prevent a decrease in the class determination accuracy of the model 10” (Wakui: 0093; Wherein larger classes do not mask smaller classes during training). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
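For illustration only, the per-mini-batch weighting taught by Wakui may be sketched as follows. The setting value and the reduction factor are hypothetical and are not taken from Wakui; the sketch only illustrates computing each class’s area ratio within a mini-batch of annotation maps and reducing the loss weight of any class whose ratio exceeds the setting value:

```python
# Minimal sketch of per-mini-batch class weighting in the style of Wakui
# (setting value and reduction factor are illustrative assumptions).
import numpy as np

def class_weights(labels: np.ndarray, num_classes: int,
                  setting_value: float = 0.5, reduce_to: float = 0.5):
    """labels: (B, H, W) integer class map for one mini-batch of annotations."""
    weights = np.ones(num_classes)
    for c in range(num_classes):
        area_ratio = (labels == c).sum() / labels.size
        if area_ratio > setting_value:  # dominant (non-rare) class
            weights[c] = reduce_to      # down-weight its loss contribution
    return weights

# Example: an 8-image mini-batch of 64 x 64 annotation maps, 4 classes.
labels = np.random.default_rng(2).integers(0, 4, size=(8, 64, 64))
print(class_weights(labels, num_classes=4))
```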
Guan in view of Chen, Liang, and Wakui does not disclose expressly: wherein the cloud cover calculation model is trained with class-specific weights, and a weight for the thin-cloud class is set differently depending on whether a ratio of thin cloud contained in the image is at least a preset ratio.
Waibel discloses: a deep-learning pipeline for the training of a convolutional neural network for the purpose of performing semantic segmentation (Waibel: Abstract: “Convolutional neural networks have become the state-of-the-art tool to provide accurate and fast image data processing. However, published algorithms mostly solve only one specific problem and they typically require a considerable coding effort and machine learning background for their application.
Results: We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification.”), wherein the model training is performed with an image batch-size of 1 (Waibel: Hardware requirements: “This dataset contains 120 training images of size 512 × 512 pixels and we ran it with a batch size of 1 for 37 epochs.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the deep-learning pipeline disclosed by Waibel for the training of the pixel classifier disclosed by Guan in view of Chen, Liang, and Wakui by separating the training dataset into batches. The suggestion/motivation for doing so would have been “The pipeline is designed for maximum automation to make training and testing as convenient and as easy as possible…Moreover, we included uncertainty prediction to provide an additional level of interpretability of predictions... Due to InstantDLs modular implementation it is however easy for users with python knowledge to exchange deep learning algorithms for a taylormade solution… Our pipeline can easily be installed and run locally on a computer or server, ensuring data scalability, privacy and security.” (Waibel: Discussion). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Guan in view of Chen, Liang, and Wakui with Waibel to obtain the invention as specified in claim 1.
Regarding claim 6, Guan discloses: A method, implemented by one or more computing devices, for automatically analyzing a cloud cover (Guan: 0003: “the present disclosure relates to using a hybrid of data-driven and clustering methods in computer programs or electronic digital data processing apparatus for cloud detection in remote sensing imagery.”) in an optical satellite image (Guan: 0096: “Remote sensor 112 may be aerial sensors, such as satellites…”; 0159: “Many of the examples presented herein assume that the satellite imagery is capable of detecting various bands, such as the blue, green, red, red edge, and near infrared (NIR) bands at various resolutions.”) based on machine learning for calculating a cloud cover within image information collected from a satellite (Guan: 0163: “…the examples provided herein will refer to the machine learning technique used by the cloud detection subsystem 170…”), the method comprising:
a) generating, by a cloud cover calculation unit, a cloud cover calculation model by performing machine learning with an input data set being input thereto (Guan: Figure 5; 0180: “…the high-precision pixel classifier 501 and/or the high-recall pixel classifier 503 may require labeled “ground truth” data in order to train the classifier and develop the function that maps between pixel features and classifications.”), the input data set including a plurality of pieces of image information collected from a satellite and labels corresponding to object classes of the respective pieces of image information (Guan: 0181: “One way to generate a labeled training set is to collect a sample of satellite images depicting areas with typical or a variety of cloud coverings…The sample images can then be manually labeled by experts in the field to identify which pixels represent clouds and which pixels represent ground.”);
b) determining, by an image management unit, whether the image information collected from the satellite is new image information (Guan: 0187: “…the remote sensing imagery 510 is provided via the model data and field data repository 160 and/or external data 110. For example, the provider of the remote sensing imagery 510 may periodically send updated images to the model data and field data repository 160… the cloud detection subsystem 170 may retrieve the remote sensing imagery 510 from the model data and field data repository 160 and/or external data 110.”; Wherein the data and field data repository notifies what is new image information.);
c) calculating, by the cloud cover calculation unit, a cloud cover (Guan: 0119: “The cloud detection subsystem 170 collects images and other information related to an area, such as an agricultural field, from the model data and field data repository 160 and/or external data 110 and determines which portions correspond to clouds and/or cloud shadows.”), with the image information determined by the image management unit as new image information being input to the cloud cover calculation model (Guan: 0160: “the remote sensing imagery 510 used as input to the cloud detection subsystem 170 may be updated only on a periodic basis… there may be a significant delay between image captures of the area being monitored by the satellite.”; Wherein the images are used as input once the images are updated), wherein the cloud cover is calculated by the cloud cover calculation model by detecting objects by object class on a pixel basis (Guan: 0181: “In some embodiments, in addition to marking clouds, the training mask also identifies types of clouds (e.g. normal clouds, haze, etc.) and/or features related to clouds (such as cloud shadows).”; 0209: “At block 825, the pixel clusterer 506 determines whether the selected neighboring pixel is a candidate cloud pixel as defined by the candidate cloud mask”; Wherein pixels within the image are classified based on object classes); and
d) outputting, by the cloud cover calculation unit, the cloud cover image information for which the cloud cover has been calculated from the cloud cover calculation model, and transferring the cloud cover image information to the cloud cover information storage unit (Guan: Figure 5; 0140: “The cloud mask 512 and shadow mask 513 are provided as output, and may be digitally stored in electronic digital storage, such as main memory, coupled to or within the cloud detection subsystem 170.”),
wherein between step a) and step b), the cloud cover calculation model is generated by performing a first machine learning with an input data set being input thereto (Guan: Figure 5; 0180: “…the high-precision pixel classifier 501 and/or the high-recall pixel classifier 503 may require labeled “ground truth” data in order to train the classifier and develop the function that maps between pixel features and classifications.”), the input data set including a plurality of pieces of image information and labels corresponding to object classes of the respective pieces of image information (Guan: 0181: “One way to generate a labeled training set is to collect a sample of satellite images depicting areas with typical or a variety of cloud coverings…The sample images can then be manually labeled by experts in the field to identify which pixels represent clouds and which pixels represent ground.”),
wherein the cloud cover calculation model performs semantic segmentation that classifies every pixel of the image into object classes including thick cloud, thin cloud, cloud shadow, and background (Claim limitation is interpreted according to the interpretation recited in the rejection of claim 6 under 35 U.S.C. 112(b) disclosed above) (Guan: 0180-0181: “the high-precision pixel classifier 501 and/or the high-recall pixel classifier 503 may require labeled “ground truth” data in order to train the classifier and develop the function that maps between pixel features and classifications.
One way to generate a labeled training set is to collect a sample of satellite images depicting areas with typical or a variety of cloud coverings. The sample images can then be manually labeled by experts in the field to identify which pixels represent clouds and which pixels represent ground…In some embodiments, in addition to marking clouds, the training mask also identifies types of clouds (e.g. normal clouds, haze, etc.) and/or features related to clouds (such as cloud shadows).”; Wherein normal clouds, haze, cloud shadows, and ground constitute thick cloud, thin cloud, cloud shadow, and background, respectively).
Guan does not disclose expressly: after performing the first machine learning, additional machine learning is performed using test data and training data reflecting indicator characteristics, wherein the indicator characteristics include snow/ice indicators, city indicators, river/sea indicators, forest indicators, desert indicators, and other indicators.
Chen discloses: a cloud cover calculation model (Chen: Abstract) evaluated with test data being input thereto (Chen: 0083: “As shown in Figure 2, the cloud detection process mainly includes two parts: training phase and testing phase.”), while proportions of indicators are reflected in the training and test data according to indicator characteristics (Chen: 0080: “Establishment of training sample set: Select sample points representing different types of clouds and ground objects from the existing image dataset to form a training sample set…When selecting sample points that can typically represent different types of clouds and landforms from the Landsat8 images, in order for the model to achieve the best classification effect, the number of sample points for each type is allocated according to the area ratio, that is, types occupying a large area require more samples than types occupying a small area.”; Wherein the number of samples for each type is selected based on their area), wherein the indicator characteristics include snow/ice indicators, city indicators, ocean indicators, vegetation indicators, desert indicators, and other indicators (Chen: 0036: “The existing image dataset includes Landsat8 images of different locations and seasons within a certain area, including vegetation, urban areas, lakes, Gobi, snowy areas, deserts and ocean surface types”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the generation of training and test datasets according to cloud and landform area ratios taught by Chen for the generation of training and testing datasets for the models taught by Guan. The suggestion/motivation for doing so would have been “in order for the model to achieve the best classification effect…types occupying a large area require more samples than types occupying a small area” (Chen: 0080). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Guan in view of Chen does not disclose expressly: wherein the indicator characteristics include snow/ice indicators, city indicators, river/sea indicators, forest indicators, desert indicators, and other indicators.
Liang discloses: training data with indicator characteristics including snow/ice indicators, city indicators, river/sea indicators, forest indicators, and other indicators (Liang: 0027: “hyperspectral satellite images of four types of underlying surfaces, including water bodies (oceans, rivers, lakes), vegetation (farmland, grassland, forest), artificial surfaces (industrial land, roads, towns), and others (bare land, snow and coastline), were selected as training samples”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique as taught by Liang of incorporating forest and river training images into the training and testing dataset disclosed by Guan in view of Chen. The suggestion/motivation for doing so would have been “…to avoid the phase error caused by the change of the underlying surface features over time” (Liang: 0027; Wherein more locations increase the accuracy and flexibility of the trained model). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Guan in view of Chen and Liang does not disclose expressly: after performing the first machine learning, additional machine learning is performed using test data and training data reflecting indicator characteristics in order to consider different reflectance based on the indicator characteristics, wherein the method is performed to provide cloud cover analysis result value in consideration of reflectance according to the indicator characteristics.
Chen further discloses: additional machine learning is performed using test data and training data reflecting indicator characteristics in order to consider different reflectance based on the indicator characteristics, and wherein the method is performed to provide cloud cover analysis result value in consideration of reflectance according to the indicator characteristics (Chen: 0079: “S2: Feature Extraction: …based on the differences in reflectance and texture characteristics between clouds and the underlying surface, a series of new features are extracted from the four bands of visible light and near-infrared to distinguish between clouds and the underlying surface of ground objects, increase the data dimension, and improve the classification accuracy to a certain extent. S3: Establishment of training sample set: Sample points representing different types of clouds and ground objects are selected from the existing image dataset to form a training sample set.”
0089-0090: “the features extracted from the pre-processed image include…Reflection spectral characteristics: Based on the characteristics of clouds showing high brightness and continuous coverage in optical remote sensing images, the spectral information of visible light and near-infrared bands is used as characteristics to distinguish clouds from the underlying surface of ground objects. In the process of remote sensing image interpretation, it is generally believed that each type of ground object has a corresponding spectral characteristic curve in each band, so the reflection characteristics of the ground object in each band can be used as the main basis for ground object interpretation.
Due to the unique reflective characteristics of clouds, they often appear as high brightness and continuous coverage in optical remote sensing images. Therefore, using the spectral information of visible light and near-infrared bands as features can distinguish clouds from ground objects.”; Wherein the samples of different ground types, or indicators, are analyzed based on their reflection spectral characteristics in order to classify pixels as either a ground or cloud class. The pixel classification constitutes a cloud cover analysis result value).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the features extracted from the image feature extraction process as further taught by Chen into the cloud detection subsystem models disclosed by Guan in view of Chen and Liang. The suggestion/motivation for doing so would have been “…based on the differences in reflectance and texture characteristics between clouds and the underlying surface, a series of new features are extracted from the four bands of visible light and near-infrared to distinguish between clouds and the underlying surface of ground objects, increase the data dimension, and improve the classification accuracy to a certain extent.” (Chen: 0079). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Guan in view of Chen and Liang does not disclose expressly: wherein the cloud cover calculation model is trained with class-specific weights, and a weight for the thin-cloud class is set differently depending on whether a ratio of thin cloud contained in the image is at least a preset ratio (Claim limitation is interpreted according to the interpretation recited in the rejection of claim 6 under 35 U.S.C. 112(b) disclosed above).
Wakui discloses: the training of a semantic segmentation model using a mini-batch learning process (Wakui: Abstract). Wherein for each mini-batch set, a class ratio is calculated for each class present in the mini-batch, by counting all of a class’s pixels in the mini-batch and dividing by the total number of pixels in the mini-batch (Wakui: 0063: “The calculation unit 51 calculates an area ratio of each of the plurality of classes in the mini-batch data 11 . More specifically, the calculation unit 51 adds, for each class, the number of pixels of regions, which are manually designated in the divided annotation images 215 of the divided annotation image group 13 of the mini-batch data 11 generated from the generation unit 50 . Next, the calculation unit 51 calculates an area ratio by dividing the added number of pixels by the total number of pixels of the divided annotation images 215.”). In addition, if a class’s ratio exceeds a threshold value in a mini-batch, the class’s training weight coefficient is reduced (Wakui: 0089: “In FIG. 15, assuming that the setting value is 50%, as shown in a table 75, for the 30th set of the mini-batch data 11 , a case where the class-2 undifferentiated cells of which the area ratio is 56% higher than the setting value are specified as a non-rare class is exemplified.”; 0093: “the non-rare class of which the area ratio is higher than the setting value is specified as the correction target class, and processing of setting the weight for the first loss value to be smaller than the weight for the second loss value is executed as the correction processing.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the class-specific, mini-batch-specific adaptive training coefficients based on class ratios taught by Wakui for the training of the pixel classifier disclosed by Guan in view of Chen and Liang by separating the training dataset into batches. The suggestion/motivation for doing so would have been “Therefore, as in the first embodiment, it is possible to prevent a decrease in the class determination accuracy of the model 10” (Wakui: 0093; Wherein larger classes do not mask smaller classes during training). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Guan in view of Chen, Liang, and Wakui does not disclose expressly: wherein the cloud cover calculation model is trained with class-specific weights, and a weight for the thin-cloud class is set differently depending on whether a ratio of thin cloud contained in the image is at least a preset ratio.
Waibel discloses: a deep-learning pipeline for the training of a convolutional neural network for the purpose of performing semantic segmentation (Waibel: Abstract: “Convolutional neural networks have become the state-of-the-art tool to provide accurate and fast image data processing. However, published algorithms mostly solve only one specific problem and they typically require a considerable coding effort and machine learning background for their application.
Results: We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification.”), wherein the model training is performed with an image batch-size of 1 (Waibel: Hardware requirements: “This dataset contains 120 training images of size 512 × 512 pixels and we ran it with a batch size of 1 for 37 epochs.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the deep-learning pipeline disclosed by Waibel for the training of the pixel classifier disclosed by Guan in view of Chen, Liang, and Wakui by separating the training dataset into batches. The suggestion/motivation for doing so would have been “The pipeline is designed for maximum automation to make training and testing as convenient and as easy as possible…Moreover, we included uncertainty prediction to provide an additional level of interpretability of predictions... Due to InstantDLs modular implementation it is however easy for users with python knowledge to exchange deep learning algorithms for a taylormade solution… Our pipeline can easily be installed and run locally on a computer or server, ensuring data scalability, privacy and security.” (Waibel: Discussion). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Guan in view of Chen, Liang, and Wakui with Waibel to obtain the invention as specified in claim 6.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703) 756-5821. The examiner can normally be reached Monday-Friday 10am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY J RODRIGUEZ/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672