Prosecution Insights
Last updated: April 19, 2026
Application No. 18/997,313

METHOD AND SYSTEM FOR DAYTIME INFRARED SPACE SURVEILLANCE

Status: Non-Final OA (§103)
Filed: Jan 21, 2025
Examiner: SHIBRU, HELEN
Art Unit: 2484
Tech Center: 2400 (Computer Networks)
Assignee: ArianeGroup SAS
OA Round: 1 (Non-Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 8m
Grant Probability with Interview: 62%

Examiner Intelligence

Career Allow Rate: 59% of resolved cases (443 granted / 756 resolved; +0.6% vs TC avg)
Interview Lift: +3.7% for resolved cases with interview (a minimal lift)
Typical Timeline: 3y 8m avg prosecution; 36 applications currently pending
Career History: 792 total applications across all art units
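The headline figures above can be reproduced from the raw counts. A quick sanity check, assuming (as the projections footnote suggests) that the grant probability is simply the career allow rate and that the "with interview" figure adds the interview lift:

```python
# Sanity check on the dashboard figures (assumption: grant probability equals
# the career allow rate, and the interview figure adds the lift in points).
granted, resolved = 443, 756
allow_rate = granted / resolved * 100    # career allow rate, in percent
interview_lift = 3.7                     # percentage points

print(round(allow_rate))                 # 59
print(round(allow_rate + interview_lift))  # 62
```

443 / 756 is 58.6%, which rounds to the displayed 59%; adding the 3.7-point lift gives 62.3%, displayed as 62%.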

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 42.6% (+2.6% vs TC avg)
§102: 31.3% (-8.7% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 756 resolved cases.
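The "vs TC avg" deltas let the Tech Center baselines be backed out directly. Assuming each delta is the examiner's per-statute rate minus the Tech Center average (an assumption about how the dashboard computes the comparison):

```python
# Back out the implied Tech Center average per statute (assumption:
# delta = examiner rate - TC average, both in percentage points).
examiner = {
    "101": (8.7, -31.3),
    "103": (42.6, +2.6),
    "102": (31.3, -8.7),
    "112": (10.2, -29.8),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in examiner.items()}
print(tc_avg)  # every statute backs out to a 40.0% TC average estimate
```

That every statute implies the same 40.0% baseline suggests the dashboard uses a single Tech Center average estimate across statutes.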

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over SHADDIX et al. (US PG PUB 2021/0064849, hereinafter referred to as Shaddix) in view of Fitzgerald et al. ("Geosynchronous satellite detection and tracking with WFOV camera arrays using spatio-temporal neural networks"; hereinafter referred to as Fitzgerald).

Regarding claim 1, Shaddix discloses a space surveillance method for detecting space objects in orbit around the Earth in images captured during the daytime, the method comprising the following steps: capturing a plurality of infrared shots of the daytime sky using a camera comprising at least one infrared sensor, each infrared shot comprising an array of pixels which are each associated with an intensity of light received by an infrared sensor (see paragraph 0054, SWIR image of the daytime sky captured by camera system; see figure 4 and paragraph 0037); detecting space objects in orbit around the Earth on the basis of said shots; identifying each object detected from a catalogue of known space objects in orbit around the Earth (see paragraphs 0088-0090, space objects identified); and the step of detecting space objects in orbit around the Earth comprising: detecting bright spots in each shot; discriminating the detected bright spots, the discrimination comprising tracking each detected bright spot that is stationary in successive shots, and recording the coordinates of the detected bright spots at possibly different positions and grouped together by this tracking, the recording being performed, for each bright spot detected, following its disappearance in the following shots (see paragraphs 0021-0023: a satellite tracking system may be configured to sense in shortwave infrared (SWIR) to mitigate the challenge of daytime imaging; SWIR sensors configured to sense wavelengths between 0.7 microns and 2.5 microns, such as wavelengths between 0.9 microns and 1.7 microns, providing two complementary benefits: (i) the diffuse sky spectral surface brightness is approximately two orders of magnitude lower in regions of the sky in SWIR than visible, and (ii) the spectral reflectance profile (e.g., the ability to reflect or absorb EM radiation) of many satellites markedly increases for wavelengths around 1.0 micron (where visible sensors fall off); the satellite tracking system may utilize SWIR sensors configured for wavelengths in a range of 0.7-2.3 microns. For example, the SWIR sensors may be configured for wavelengths in a range of 1.0-1.2 microns, 1.0-1.4 microns, 1.2-1.7 microns, 1.4-1.7 microns, or other ranges between 1.0-1.7 microns; tracking and detecting RSOs during daytime hours, or more generally, for periods of time with high-intensity background noise; see also paragraphs 0048 and 00925).

Claim 1 differs from Shaddix in that the claim further requires that the step of detecting space objects in orbit around the Earth is implemented by a deep-learning artificial intelligence system comprising a plurality of layers of artificial neural network connected together in order to analyse the information from the preceding layer of neurons, the deep learning being based on simulation images generated in order to reproduce typical images coming from the infrared sensor and comprising a background and the background noise, spots of light corresponding either to stars or to space objects or to defects of the infrared sensor, each simulation image being associated with a truth based on the positions of the real objects in the image.

In the same field of endeavor, Fitzgerald discloses detecting space objects in orbit around the Earth implemented by a deep-learning artificial intelligence system comprising a plurality of layers of artificial neural network connected together in order to analyse the information from the preceding layer of neurons, the deep learning being based on simulation images generated in order to reproduce typical images coming from the sensor and comprising a background and the background noise, spots of light corresponding either to stars or to space objects or to defects of the sensor, each simulation image being associated with a truth based on the positions of the real objects in the image (see abstract: appearance-based detectors; implementing a spatio-temporal deep learning architecture for GEO object detection and tracking; annotated sequential frames including object motion are used to train GEO-SPANN, which uses a two-stage CNN to provide a learned temporal mapping of GEO objects in sequences of annotated PANDORA images. We present the GEO object detection and tracking results of GEO-SPANN on sequences of 100 frames of PANDORA data. GEO-SPANN advances strategies for autonomous detection and tracking of GEO satellites, allowing PANDORA to be leveraged for orbit catalogue maintenance and space object anomaly detection; see Methods on page 6: a convolutional neural network (CNN) first proposes a heatmap of regions of interest (ROI) to reduce the search space, and then feeds the heatmap of probable object locations to an object detector, which predicts the bounding box of the RSO; the design operates as both a detector and re-identifier of previously detected objects, which will allow for the tracking of RSOs; see section 3 and figure 3: data is partitioned in temporal sequences; see also section 4.2).

Therefore, in light of the teaching in Fitzgerald, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shaddix by adding the claimed deep-learning artificial intelligence system in order to implement a spatio-temporal neural network to detect RSOs and to learn complex patterns and features directly from vast amounts of raw unstructured data.

Regarding claim 2, Fitzgerald discloses that the layers of artificial neural networks are calibrated, prior to their use for detecting space objects, by a supervised learning method, using a base of various images enabling the artificial intelligence system to determine the typical features of a space object (see sections 2.1 and 3.1). The motivation to combine the references is discussed in claim 1 above.

Regarding claim 3, Shaddix discloses, immediately after capturing (100) shots, applying a non-uniformity correction to the captured shots (see paragraphs 0042 and 0070).

Regarding claim 4, Shaddix discloses filtering each shot (see paragraphs 0035 and 0037).

Regarding claim 5, Shaddix discloses forming stacked images from a superposition of a plurality of said shots, each pixel of a stacked image being associated with a received intensity of light corresponding to the average of the intensities of the superimposed shots for the same pixel, the detection of space objects using the stacked images as shots to be processed (see abstract and paragraphs 0042 and 0045-0048).

Regarding claim 6, Shaddix discloses, before the step of detecting the bright spots, a destriping step of each stacked image in order to remove the streak defects in the stacked image (see paragraphs 0047-0048, 0057-0058 and 0063).

Regarding claim 7, the limitations of claim 7 can be found in claim 1 above. Therefore, claim 7 is analyzed and rejected for the same reasons as discussed for claim 1. Furthermore, Shaddix discloses that the system comprises a reflecting telescope mounted on a mechanical support with motorised displacement, a camera comprising at least one infrared sensor mounted at the output of the reflecting telescope and configured to take series of shots of the daytime sky at a frequency between 1 Hz and several hundred hertz, and a processing unit receiving each shot captured by the camera (see figure 1, paragraphs 0019, 0024, 0042, 0092 and 0094).

Regarding claim 8, Shaddix discloses at least one visible light sensor mounted at the output of the reflecting telescope and configured to take series of shots of the night sky, the space surveillance system further comprising a day/night alternation module making it possible to change the type of sensor receiving the light from the sky as a function of the environmental light intensity (see paragraphs 0019, 0022, 0026 and 0029).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HELEN SHIBRU, whose telephone number is (571) 272-7329. The examiner can normally be reached M-TR 8:00AM-5:00PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, THAI TRAN, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HELEN SHIBRU/
Primary Examiner, Art Unit 2484
January 10, 2026
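Two of the rejected steps are simple enough to illustrate concretely: claim 5's image stacking (each stacked pixel is the average of the superimposed shots for that pixel) and claim 1's bright-spot detection. The sketch below is purely illustrative, with hypothetical function names and toy data; it is not the applicant's or either cited reference's implementation.

```python
# Illustrative only: toy versions of claim 5's stacking step and claim 1's
# bright-spot detection. Names and data are hypothetical, not taken from
# the application or the cited references.

def stack_shots(shots):
    """Average equally-sized 2-D shots pixel-by-pixel into one stacked image."""
    n = len(shots)
    rows, cols = len(shots[0]), len(shots[0][0])
    return [[sum(shot[r][c] for shot in shots) / n for c in range(cols)]
            for r in range(rows)]

def detect_bright_spots(image, threshold):
    """Return (row, col) coordinates of pixels brighter than the threshold."""
    return [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v > threshold]

# Three noisy 3x3 shots with a persistent bright source at pixel (1, 1);
# averaging suppresses uncorrelated noise while the steady source survives.
shots = [
    [[10, 12, 11], [9, 200, 10], [11, 10, 30]],
    [[11, 10, 12], [10, 205, 11], [10, 12, 9]],
    [[12, 11, 10], [11, 198, 12], [9, 11, 10]],
]
print(detect_bright_spots(stack_shots(shots), threshold=100))  # [(1, 1)]
```

This is the intuition behind using stacked images as the shots to be processed: transient noise averages down, while a source that persists across shots remains above threshold.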

Prosecution Timeline

Jan 21, 2025: Application Filed
Jan 10, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603970: METHOD, APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE MEDIUM FOR DISPLAYING LYRIC EFFECTS (2y 5m to grant; granted Apr 14, 2026)
Patent 12603112: VIDEO GENERATION METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12586375: Content Validation Using Scene Modification (2y 5m to grant; granted Mar 24, 2026)
Patent 12563277: USER-CUSTOMIZED ALREADY VIEWED VIDEO SUMMARY (2y 5m to grant; granted Feb 24, 2026)
Patent 12562194: METHOD, APPARATUS, DEVICE AND MEDIUM FOR GENERATING A VIDEO (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 59%
With Interview: 62% (+3.7%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 756 resolved cases by this examiner. Grant probability derived from career allow rate.
