Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
In response to applicant’s amendment received on 10/30/2025, all requested changes to the claims have been entered.
Response to Arguments
Applicant’s arguments filed on 10/30/2025 have been considered but they are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 10-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Giuffrida et al. (“CloudScout: A Deep Neural Network for On-Board Cloud Detection on Hyperspectral Images”, Remote Sensing, 10 July 2020).
With respect to claim 1, Giuffrida et al. teach an image sensor configured to output sensor data, wherein the sensor data comprises a sensor image (page 3: bands of hyperspectral cubes produced by the HyperScout-2 sensor);
a control unit (Fig. 7) configured to:
receive the sensor image from the image sensor (Fig. 7 input);
execute a feature amount conversion process to convert the sensor image into a specific feature amount (feature extraction), wherein
the control unit comprises a convolutional neural network (CNN model),
the convolutional neural network is configured to execute the feature amount conversion process (feature extraction), and
the specific feature amount includes a value of one of a convolution layer or a pooling layer of the convolutional neural network (page 9, convolutional layer and global max pooling layer);
execute a recognition process on the specific feature amount to obtain a recognition process result (pages 8-9, 3.4. CloudScout Network; Fig. 7 output);
execute a feature amount generation process on the specific feature amount to generate first feature amount data (pages 10-11, 4. Results; cloudy, not cloudy), wherein the first feature amount data comprises metadata associated with the recognition process result (pages 13-14, 5. Discussion; binary response (cloudy/not cloudy)); and
a transmission unit configured to transmit the generated first feature amount data by wireless communication (page 1: downlink transmission; images transmitted to ground (not-cloudy images)).
With respect to claim 10, Giuffrida et al. teach execute a processing operation for generation of second feature amount data, based on the generated first feature amount data being insufficient data for the recognition process (pages 13-14, 5. Discussion; pixel-level information on the presence of clouds can be exploited to improve compression performance through the substitution of cloudy areas with completely white areas).
With respect to claim 11, Giuffrida et al. teach execute a data integration processing operation for integration of the generated second feature amount data and the generated first feature amount data (pages 13-14, 5. Discussion; pixel-level information on the presence of clouds can be exploited to improve compression performance through the substitution of cloudy areas with completely white areas).
With respect to claim 12, Giuffrida et al. teach that the first feature amount data is information indicating a change amount of the sensor data detected in a monitoring target area (pages 9-10, cloudy/cloudless).
With respect to claim 13, Giuffrida et al. teach that the first feature amount data is information indicating distribution data of the sensor data detected in a monitoring target area (pages 10-11, 4. Results; cloudy, not cloudy).
With respect to claim 14, Giuffrida et al. teach that the sensor device is installed on one of a mobile object on an ocean (page 13: capture of Aral Sea clouds).
With respect to claim 15, Giuffrida et al. teach that the transmission unit is further configured to transmit the feature amount data to an unmanned aircraft by the wireless communication (page 1, downlink transmission from satellite, images transmitted to ground).
Claim 16 is rejected for the same reasons as claim 1 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 USC 103 as being unpatentable over Giuffrida et al. (“CloudScout: A Deep Neural Network for On-Board Cloud Detection on Hyperspectral Images”, Remote Sensing, 10 July 2020) in view of Rutschman et al. (US 2018/0239948).
Giuffrida et al. teach all the limitations of claim 1 as applied above, from which claim 2 depends.
Giuffrida et al. do not expressly teach transmitting the feature amount data to an artificial satellite by the wireless communication.
Rutschman et al. teach transmitting the feature amount data to an artificial satellite by the wireless communication (para [0208]: the image data can be processed on-board by a processor 504N (para [0202]: neural network comparisons performed by the processor 504N), and the parking data (feature amount) can be transmitted to another satellite 500N trailing the satellite 500 in its orbital path).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to transmit the feature amount data to an artificial satellite by the wireless communication in the method of Giuffrida et al.
The suggestion/motivation for doing so would have been that a geographical area can be processed continuously when one satellite is moving away from the geographical area and another satellite is moving into it.
Therefore, it would have been obvious to combine Rutschman et al. with Giuffrida et al. to obtain the invention as specified in claim 2.
Claim 3 is rejected under 35 USC 103 as being unpatentable over Giuffrida et al. (“CloudScout: A Deep Neural Network for On-Board Cloud Detection on Hyperspectral Images”, Remote Sensing, 10 July 2020) in view of Chan (US 2019/0236793).
Giuffrida et al. teach all the limitations of claim 1 as applied above, from which claim 3 depends.
Giuffrida et al. do not expressly teach that the control unit is further configured to cause a storage unit to store the first feature amount data, wherein the storage unit excludes storage of the sensor data.
Chan teaches that the control unit is further configured to cause a storage unit to store the first feature amount data, wherein the storage unit excludes storage of the sensor data (para [0010]: the captured images are discarded from said analytic device after analyzing, and only a structured data set containing an identity and said path of movement of said object across time is retained).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to store the first feature amount data, wherein the storage unit excludes storage of the sensor data in the method of Giuffrida et al.
The suggestion/motivation for doing so would have been to save resources in edge computing.
Therefore, it would have been obvious to combine Chan with Giuffrida et al. to obtain the invention as specified in claim 3.
Claims 7-9 are rejected under 35 USC 103 as being unpatentable over Giuffrida et al. (“CloudScout: A Deep Neural Network for On-Board Cloud Detection on Hyperspectral Images”, Remote Sensing, 10 July 2020) in view of Ricaert et al. (JP 2019513315, English translation).
With respect to claim 7, Giuffrida et al. teach all the limitations of claim 1 as applied above, from which claim 7 depends.
Giuffrida et al. do not expressly teach performing an image-capturing operation again by the image sensor, based on the generated first feature amount data being insufficient data for the recognition process.
Ricaert et al. teach performing an image-capturing operation again by the image sensor, based on the generated first feature amount data being insufficient data for the recognition process (page 36, 2nd para.: acquire a second set of images based at least in part on the first subset of images providing an unobstructed view).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to capture an image again, based on the generated first feature amount data being insufficient data, in the method of Giuffrida et al.
The suggestion/motivation for doing so would have been to obtain a higher-accuracy image analysis result.
Therefore, it would have been obvious to combine Ricaert et al. with Giuffrida et al. to obtain the invention as specified in claim 7.
With respect to claim 8, Ricaert et al. teach controlling to increase a resolution of the image sensor and performing the image-capturing operation again (page 36, 2nd para.: the second set of images is acquired by a second sensor having a higher resolution than the first sensor).
With respect to claim 9, Ricaert et al. teach performing the image-capturing operation again for a specific area of the sensor image captured by the image sensor (page 36, 2nd para.: narrow field of view sensor).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Randolph Chu whose telephone number is 571-270-1145. The examiner can normally be reached on Monday to Thursday from 7:30 am - 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached on (571) 272-7778.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RANDOLPH I CHU/
Primary Examiner, Art Unit 2667