Prosecution Insights
Last updated: April 19, 2026
Application No. 18/481,074

METHODS AND APPARATUS FOR ONBOARD CAMERA CALIBRATION

Status: Final Rejection (§103)
Filed: Oct 04, 2023
Examiner: DANG, PHILIP
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: The Boeing Company
OA Round: 4 (Final)

Grant probability: 77% (Favorable)
Expected OA rounds: 5-6
Estimated time to grant: 2y 10m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 77% (363 granted / 470 resolved), +19.2% vs TC average (above average)
Interview lift: +33.2% for resolved cases with an interview
Typical timeline: 2y 10m average prosecution; 49 applications currently pending
Career history: 519 total applications across all art units
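The headline figures above can be re-derived from the raw counts. A minimal sketch in Python (the counts come from the panel above; the assumption that the dashboard's "+19.2% vs TC avg" is a simple percentage-point difference is mine, not documented):

```python
# Re-derive the examiner's headline statistics from the raw counts above:
# 363 allowances out of 470 resolved applications.
granted = 363
resolved = 470

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 77.2%, displayed as 77%

# Assumption: the "+19.2% vs TC avg" figure is a percentage-point
# difference, i.e. examiner allow rate minus Tech Center average.
implied_tc_average = allow_rate - 0.192
print(f"Implied TC average allow rate: {implied_tc_average:.1%}")
```

Under that assumption the Tech Center baseline implied by the panel is roughly 58%.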

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 470 resolved cases.
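One consistency check worth noting: subtracting each delta from the examiner's rate recovers the implied Tech Center baseline for every statute. A small sketch (assuming, as with the allow-rate panel, that each delta is a percentage-point difference against the TC average):

```python
# Statute-specific rates and their deltas vs the Tech Center average,
# as listed above (both in percentage points).
stats = {
    "§101": (4.5, -35.5),
    "§103": (48.6, +8.6),
    "§102": (11.1, -28.9),
    "§112": (25.5, -14.5),
}

# Assuming delta = examiner_rate - tc_average, recover the baseline.
for statute, (rate, delta) in stats.items():
    tc_average = rate - delta
    print(f"{statute}: examiner {rate}%, implied TC average {tc_average:.1f}%")
```

All four rows imply the same ~40.0% baseline, which suggests the reference line is a single Tech Center-wide estimate rather than a per-statute average.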

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant Response to Official Action

The response filed on 7/22/2025 has been entered and made of record.

Acknowledgment

Claims 1, 8-10, 12-17, and 20, amended on 7/22/2025, are acknowledged by the examiner.

Response to Arguments

Applicant’s arguments with respect to claims 1, 8, 15 and their dependent claims have been considered, but they are moot in view of the new grounds of rejection necessitated by amendments initiated by the applicant. The examiner addresses the Applicant’s main arguments below.

Regarding the 35 U.S.C. 103 rejection, the cited references teach the amended limitations as follows: generate randomized images from images stored in a data storage carried by the aircraft (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23], the images stored in the data storage corresponding to at least one of satellite or aerial images (In the example embodiments, hyperspectral camera is placed on an airborne platform for remote sensing, such as an aircraft or satellite. The hyperspectral system includes multiple attributes that affect the ability of the hyperspectral system to collect data and images) [Garrison: col. 4, line 52-55] obtained prior to a flight of the aircraft (Prior to deployment, a hyperspectral system is trained to recognize items of interest in contrast to background or environment details. In many cases, this training is performed by having the hyperspectral system analyze a large plurality of hyperspectral images to learn how to differentiate pixels associated with items of interest from pixels associated with the background of the image. For example, a hyperspectral system may be trained to be able to recognize a tent in contrast to the surrounding forest. Depending on the mission, the hyperspectral system requires different training to recognize important features in contrast to background details. Proper training of a hyperspectral system may be expensive both in setting up and in training time. Furthermore, without proper design, the hyperspectral system may require additional training to meet the requirements of the mission.) [Garrison: col. 1, line 12-27].

Accordingly, the Examiner respectfully maintains the rejections and the applicability of the art used.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1-3, 5, 8-10, 12, 15-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Garrison (US Patent 10,657,422 B2) (“Garrison”) in view of Zamora et al. (US Patent Application Publication 2023/0394708 A1) (“Zamora”).

Regarding claim 1, Garrison meets the claim limitations as follows:

An apparatus (a server computer device) [Garrison: col. 9, line 58; Figs. 2, 4] for use with an aircraft (hyperspectral camera is placed on an airborne platform for remote sensing, such as an aircraft or satellite) [Garrison: col. 8, line 52-54], the apparatus comprising interface circuitry ((Server computer device 401 may include, but is not limited to, database server 215 and HA computer device 210 (both shown in FIG. 2).) [Garrison: col. 9, line 58-60; Figs. 2, 4]; (HA computer device 210 is remote from at least one of user computer device 205, database server 215, and hyperspectral system 225 and communicates with the remote computer device through the Internet.
More specifically, HA computer device 210 is communicatively coupled to Internet through many interfaces including, but not limited to, at least one of a network, such as a local area network (LAN), a wide area network (WAN), or an integrated services digital network (ISDN), a dial-up-connection, a digital subscriber line (DSL), a cellular phone connection, and a cable modem. HA computer device 210 can be any device capable of accessing the Internet, or another network, including, but not limited to, a desktop computer, a laptop computer, a personal digital assistant (PDA), a cellular phone, a smartphone, a tablet, a phablet, or other web-based connectable equipment.) [Garrison: col. 8, line 15-30; Fig. 2]) communicatively coupled to a camera (Hyperspectral systems 225 include hyperspectral cameras and/or other devices capable of taking hyperspectral images) [Garrison: col. 8, line 40-41] of the aircraft ((hyperspectral camera is placed on an airborne platform for remote sensing, such as an aircraft or satellite) [Garrison: col. 8, line 52-54]; (A hyperspectral system collects and processes information from across the electromagnetic spectrum. The hyperspectral system is configured to obtain the spectrum for the pixels in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes. Example hyperspectral system may include, but are not limited to, one of a push broom scanning and snapshot hyperspectral imaging. In push broom scanning, the camera images the scene line by line using the "push broom" scanning mode) [Garrison: col. 4, line 1-10], the camera to capture an image (In push broom scanning, the camera images the scene line by line using the "push broom" scanning mode. One narrow spatial line in the scene is imaged at a time, and this line is split into its spectral components before reaching a sensor array. When the sensor array is a two-dimensional (2D) sensor array, one dimension is used for spectral separation and the second dimension is used for imaging in one spatial direction. The second spatial dimension in the scene arises from scanning the camera over the scene (e.g., aircraft movement). The result can be seen as one 2D image for each spectral channel. Alternatively every pixel in the image contains one full spectrum. In snapshot hyperspectral imaging, the camera generates an image of the scene at a specific point in time.) [Garrison: col. 4, line 11-22];

machine readable instructions (Instructions may be stored in a memory area 410) [Garrison: col. 9, line 62-63; Fig. 4]; and

programmable circuitry to be programmed by the machine readable instructions to (Server computer device 401 also includes a processor 405 for executing instructions) [Garrison: col. 9, line 60-62; Fig. 4]:

generate randomized images from images stored in a data storage carried by the aircraft (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23], the images stored in the data storage corresponding to at least one of satellite or aerial images (In the example embodiments, hyperspectral camera is placed on an airborne platform for remote sensing, such as an aircraft or satellite. The hyperspectral system includes multiple attributes that affect the ability of the hyperspectral system to collect data and images) [Garrison: col. 4, line 52-55] obtained prior to a flight of the aircraft (Prior to deployment, a hyperspectral system is trained to recognize items of interest in contrast to background or environment details.
In many cases, this training is performed by having the hyperspectral system analyze a large plurality of hyperspectral images to learn how to differentiate pixels associated with items of interest from pixels associated with the background of the image. For example, a hyperspectral system may be trained to be able to recognize a tent in contrast to the surrounding forest. Depending on the mission, the hyperspectral system requires different training to recognize important features in contrast to background details. Proper training of a hyperspectral system may be expensive both in setting up and in training time. Furthermore, without proper design, the hyperspectral system may require additional training to meet the requirements of the mission.) [Garrison: col. 1, line 12-27];

determine at least one relationship between (compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59] an image captured by the camera during the flight of the aircraft (In push broom scanning, the camera images the scene line by line using the "push broom" scanning mode. One narrow spatial line in the scene is imaged at a time, and this line is split into its spectral components before reaching a sensor array. When the sensor array is a two-dimensional (2D) sensor array, one dimension is used for spectral separation and the second dimension is used for imaging in one spatial direction. The second spatial dimension in the scene arises from scanning the camera over the scene (e.g., aircraft movement).
The result can be seen as one 2D image for each spectral channel. Alternatively every pixel in the image contains one full spectrum. In snapshot hyperspectral imaging, the camera generates an image of the scene at a specific point in time.) [Garrison: col. 4, line 11-22] and the randomized images ((compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59]; (the systems and methods described herein describe a more cost-efficient and quicker method of training and analyzing a hyperspectral system by using random pixel distribution) [Garrison: col. 14, line 18-20]; (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5]); and

determine a parameter of the camera based on the at least one relationship (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5] to calibrate the camera.

Garrison does not explicitly disclose the following claim limitations (emphasis added): determine a parameter of the camera based on the at least one relationship to calibrate the camera.

However, in the same field of endeavor Zamora further discloses the deficient claim limitations as follows: determine a parameter of the camera based on the at least one relationship ((determining calibration parameters) [Zamora: para. 0033]; (invokes position calculator circuitry 208 to determine calibration information) [Zamora: para. 0027]; (position calculator circuitry 208 calculates and/or otherwise determines spatial information relative to the example camera 112 and the table 102 in view of a coordinate frame 116 (e.g., a spatial frame or mapping having an x-axis, a y-axis and a z-axis).
In some examples, the position calculator circuitry 208 is instantiated by dedicated hardware circuitry and/or by programmable circuitry executing position calculation instructions and/or configured to perform operations such as those represented by the flowchart(s) of FIGS. 7-10. As described in further detail below, the example distribution circuitry 210 transmits and/or otherwise distributes calibration information to one or more computing devices) [Zamora: para. 0021]) to calibrate the camera (As described above, the example third stage when calibrating an image device (e.g., a camera) includes determining calibration information (e.g., calibration values, calibration parameters), such as determining camera tilt metrics and camera pan metrics) [Zamora: para. 0034].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garrison with Zamora to program the system to implement Zamora’s method. Therefore, the combination of Garrison with Zamora will enable the system to improve the accuracy of calibration metrics and improve calibration accuracy [Zamora: para. 0099].

Regarding claim 2, Garrison meets the claim limitations as set forth in claim 1. Garrison further meets the claim limitations as follows:

determine (determining, by the processor) [Garrison: col. 1, line 55-56] first points (every pixel in the image) [Garrison: col. 4, line 19] of the randomized images ((a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23] – Note: the first points can be a plurality of random pixel mixtures associated with the at least one background item to be detected);

determine (determining, by the processor) [Garrison: col. 1, line 55-56] second points (every pixel in the image) [Garrison: col. 4, line 19] of the captured image ((In push broom scanning, the camera images the scene) [Garrison: col. 4, line 11]; (For each pixel in an image, a hyperspectral camera acquires the light intensity (radiance) for a large number (typically a few tens to several hundred) of contiguous spectral bands. Every pixel in the image thus contains a continuous spectrum (in radiance or reflectance) and can be used to characterize the objects in the scene with great precision and detail) [Garrison: col. 4, line 29-35]); and

compare the first points to the second points to determine the at least one relationship ((The at least one background item may include terrain information, terrain type, climate information, geographic locations, and other information necessary to model the background of the images that the item(s) of interest will be compared against. Furthermore, background may include ground cover (such as cultivated and uncultivated fields, brush, forests, and deserts), geographic features (such as rivers, valleys, mountains, and plains), and human artifacts (such as isolated buildings, bridges, highways, and suburban and urban structures)) [Garrison: col. 10, line 68-67]; (compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59]; (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120.
Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5]).

Regarding claim 3, Garrison meets the claim limitations as set forth in claim 1. Garrison further meets the claim limitations as follows:

wherein the programmable circuitry is to (a processor 405 for executing instructions) [Garrison: col. 9, line 61-62; Fig. 4] at least one of correct or adjust output (correct output) [Garrison: col. 13, line 10] of the camera based on the parameter (An output adapter is operatively coupled to processor 305 and operatively coupleable to an output device such as a display device (e.g., a cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED) display, or "electronic ink" display) or an audio output device (e.g., a speaker or headphones). In some embodiments, media output component 315 is configured to present a graphical user interface (e.g., a web browser and/or a client application) to user 301. A graphical user interface may include, for example, analysis of one or more hyperspectral images) [Garrison: col. 9, line 14-24].
In the same field of endeavor, Zamora also discloses the claim limitations as follows: wherein the programmable circuitry is to at least one of correct or adjust (Generally speaking, the quadric generator circuitry 206 adjusts the parameters in the quadric frame by frame to reduce an error between the binarized image and an output of the training rule based on the randomly selected pixels) [Zamora: para. 0033] output of the camera based on the parameter ((The example filter circuitry 204 generates a binarized image from the captured and/or stored image(s)) [Zamora: para. 0063; Fig. 7]; (In the illustrated example of Equation 6, r represents a learning rate and d represents a value indicative of whether the pixel of interest is either inside the quadric (a value of "1") or outside the quadric (a value of "0"). As a result, the quadric is trained to yield a trained quadric for further use when determining calibration parameters. As discussed above, while one or more applied filters applied by the example filter circuitry 204 transforms an original captured image 302 to a binarized image 318, such filtering techniques may still exhibit background noise 322 in portions of an identified table. As such, the aforementioned machine learning tasks applied to the quadric reduce, remove (e.g., minimize) the background noise 322 so that the quadric errors are, likewise, reduced) [Zamora: para. 0033].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garrison with Zamora to program the system to implement Zamora’s method. Therefore, the combination of Garrison with Zamora will enable the system to improve the accuracy of calibration metrics and improve calibration accuracy [Zamora: para. 0099].

Regarding claim 5, Garrison meets the claim limitations as set forth in claim 1. Garrison further meets the claim limitations as follows:
wherein the programmable circuitry is to (a processor 405 for executing instructions) [Garrison: col. 9, line 61-62; Fig. 4] generate the randomized images (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23] by selecting ones of orthoimages (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection) [Garrison: col. 6, line 58-63; Fig. 4] stored in the data storage based on a flight parameter of the aircraft ((HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23]; (The method includes storing, in the memory, a plurality of spectral analysis data,) [Garrison: col. 1, line 48-49]).

Regarding claim 8, Garrison meets the claim limitations as follows:

A non-transitory machine readable storage medium comprising instructions (Instructions may be stored in a memory area 410) [Garrison: col. 9, line 62-63; Fig. 4] to cause at least one programmable circuitry to at least (Server computer device 401 also includes a processor 405 for executing instructions) [Garrison: col. 9, line 60-62; Fig. 4]:

generate randomized images from images stored in a data storage carried by the aircraft (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23], the images stored in the data storage corresponding to at least one of satellite or aerial images (In the example embodiments, hyperspectral camera is placed on an airborne platform for remote sensing, such as an aircraft or satellite. The hyperspectral system includes multiple attributes that affect the ability of the hyperspectral system to collect data and images) [Garrison: col. 4, line 52-55] obtained prior to a flight of the aircraft (Prior to deployment, a hyperspectral system is trained to recognize items of interest in contrast to background or environment details. In many cases, this training is performed by having the hyperspectral system analyze a large plurality of hyperspectral images to learn how to differentiate pixels associated with items of interest from pixels associated with the background of the image. For example, a hyperspectral system may be trained to be able to recognize a tent in contrast to the surrounding forest. Depending on the mission, the hyperspectral system requires different training to recognize important features in contrast to background details. Proper training of a hyperspectral system may be expensive both in setting up and in training time. Furthermore, without proper design, the hyperspectral system may require additional training to meet the requirements of the mission.) [Garrison: col. 1, line 12-27];

determine at least one relationship between (compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59] an image captured by the camera of the aircraft during the flight of the aircraft (In push broom scanning, the camera images the scene line by line using the "push broom" scanning mode. One narrow spatial line in the scene is imaged at a time, and this line is split into its spectral components before reaching a sensor array. When the sensor array is a two-dimensional (2D) sensor array, one dimension is used for spectral separation and the second dimension is used for imaging in one spatial direction. The second spatial dimension in the scene arises from scanning the camera over the scene (e.g., aircraft movement). The result can be seen as one 2D image for each spectral channel. Alternatively every pixel in the image contains one full spectrum. In snapshot hyperspectral imaging, the camera generates an image of the scene at a specific point in time.) [Garrison: col. 4, line 11-22] and the randomized images ((compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59]; (the systems and methods described herein describe a more cost-efficient and quicker method of training and analyzing a hyperspectral system by using random pixel distribution) [Garrison: col. 14, line 18-20]; (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5]); and

determine a parameter of the camera based on the at least one relationship (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120.
Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5] to calibrate the camera. Garrison does not explicitly disclose the following claim limitations (Emphasis added). determine a parameter of the camera based on the at least one relationship to calibrate the camera. However, in the same field of endeavor Zamora further discloses the deficient claim limitations as follows: determine a parameter of the camera based on the at least one relationship ((determining calibration parameters) [Zamora: para. 0033]; (invokes position calculator circuitry 208 to determine calibration information) [Zamora: para. 0027]; (position calculator circuitry 208 calculates and/or otherwise determines spatial information relative to the example camera 112 and the table 102 in view of a coordinate frame 116 (e.g., a spatial frame or mapping having an x-axis, a y-axis and a z-axis). In some examples, the position calculator circuitry 208 is instantiated by dedicated hardware circuitry and/or by programmable circuitry executing position calculation instructions and/or configured to perform operations such as those represented by the flowchart(s) of FIGS. 7-10. As described in further detail below, the example distribution circuitry 210 transmits and/or otherwise distributes calibration information to one or more computing devices) [Zamora: para. 
0021]) to calibrate the camera (As described above, the example third stage when calibrating an image device (e.g., a camera) includes determining calibration information (e.g., calibration values, calibration parameters), such as determining camera tilt metrics and camera pan metrics) [Zamora: para. 0034]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garrison with Zamora to program the system to implement Zamora's method. Therefore, the combination of Garrison with Zamora will enable the system to improve the accuracy of calibration metrics and improve calibration accuracy [Zamora: para. 0099].

Regarding claim 9, Garrison meets the claim limitations as set forth in claim 8. Garrison further meets the claim limitations as follows: instructions cause one or more of the at least one programmable circuitry to (a processor 405 for executing instructions) [Garrison: col. 9, line 61-62; Fig. 4]: determine (determining, by the processor) [Garrison: col. 1, line 55-56] first points (every pixel in the image) [Garrison: col. 4, line 19] of the randomized images ((a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23] – Note: The first points can be a plurality of random pixel mixtures associated with the at least one background item to be detected); determine (determining, by the processor) [Garrison: col. 1, line 55-56] second points (every pixel in the image) [Garrison: col. 4, line 19] of the captured image ((In push broom scanning, the camera images the scene) [Garrison: col. 4, line 11]; (For each pixel in an image, a hyperspectral camera acquires the light intensity (radiance) for a large number (typically a few tens to several hundred) of contiguous spectral bands. 
Every pixel in the image thus contains a continuous spectrum (in radiance or reflectance) and can be used to characterize the objects in the scene with great precision and detail) [Garrison: col. 4, line 29-35]); and compare the first points to the second points to determine the at least one relationship ((The at least one background item may include terrain information, terrain type, climate information, geographic locations, and other information necessary to model the background of the images that the item(s) of interest will be compared against. Furthermore, background may include ground cover (such as cultivated and uncultivated fields, brush, forests, and deserts), geographic features (such as rivers, valleys, mountains, and plains), and human artifacts (such as isolated buildings, bridges, highways, and suburban and urban structures)) [Garrison: col. 10, line 68-67]; (compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59]; (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). 
In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5]).

Regarding claim 10, Garrison meets the claim limitations as set forth in claim 8. Garrison further meets the claim limitations as follows: instructions cause one or more of the at least one programmable circuitry to (a processor 405 for executing instructions) [Garrison: col. 9, line 61-62; Fig. 4] at least one of correct or adjust output (correct output) [Garrison: col. 13, line 10] of the camera based on the parameter (An output adapter is operatively coupled to processor 305 and operatively coupleable to an output device such as a display device (e.g., a cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED) display, or "electronic ink" display) or an audio output device (e.g., a speaker or headphones). In some embodiments, media output component 315 is configured to present a graphical user interface (e.g., a web browser and/or a client application) to user 301. A graphical user interface may include, for example, analysis of one or more hyperspectral images) [Garrison: col. 9, line 14-24]. In the same field of endeavor Zamora also discloses the claim limitations as follows: one or more of the at least one programmable circuitry (Generally speaking, the quadric generator circuitry 206 adjusts the parameters in the quadric frame by frame to reduce an error between the binarized image and an output of the training rule based on the randomly selected pixels) [Zamora: para. 0033] to at least one of correct or adjust output of the camera based on the parameter ((The example filter circuitry 204 generates a binarized image from the captured and/or stored image(s)) [Zamora: para. 0063; Fig. 
7]; (In the illustrated example of Equation 6, r represents a learning rate and d represents a value indicative of whether the pixel of interest is either inside the quadric (a value of "1") or outside the quadric (a value of "0"). As a result, the quadric is trained to yield a trained quadric for further use when determining calibration parameters. As discussed above, while one or more applied filters applied by the example filter circuitry 204 transforms an original captured image 302 to a binarized image 318, such filtering techniques may still exhibit background noise 322 in portions of an identified table. As such, the aforementioned machine learning tasks applied to the quadric reduce, remove (e.g., minimize) the background noise 322 so that the quadric errors are, likewise, reduced) [Zamora: para. 0033]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garrison with Zamora to program the system to implement Zamora's method. Therefore, the combination of Garrison with Zamora will enable the system to improve the accuracy of calibration metrics and improve calibration accuracy [Zamora: para. 0099].

Regarding claim 12, Garrison meets the claim limitations as set forth in claim 8. Garrison further meets the claim limitations as follows: one or more of the at least one programmable circuitry to (a processor 405 for executing instructions) [Garrison: col. 9, line 61-62; Fig. 4] generate the randomized images (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23] by selecting ones of orthoimages (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. 
Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection) [Garrison: col. 6, line 58-63; Fig. 4] stored in the data storage based on a flight parameter of the aircraft (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23]; (The method includes storing, in the memory, a plurality of spectral analysis data,) [Garrison: col. 1, line 48-49]). Regarding claim 15, Garrison meets the claim limitations as follows: A method (A method) [Garrison: col. 16, line 4; Fig. 5] comprising: generating (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23], by executing instructions with at least one programmable circuitry (Server computer device 401 also includes a processor 405 for executing instructions) [Garrison: col. 9, line 60-62; Fig. 4], randomized images from images stored in a data storage carried by the aircraft (HA computer device 102 generates a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23], the images stored in the data storage corresponding to at least one of satellite or aerial images (In the example embodiments, hyperspectral camera is placed on an airborne platform for remote sensing, such as an aircraft or satellite. The hyperspectral system includes multiple attributes that affect the ability of the hyperspectral system to collect data and images) [Garrison: col. 
4, line 52-55] obtained prior to a flight of the aircraft (Prior to deployment, a hyperspectral system is trained to recognize items of interest in contrast to background or environment details. In many cases, this training is performed by having the hyperspectral system analyze a large plurality of hyperspectral images to learn how to differentiate pixels associated with items of interest from pixels associated with the background of the image. For example, a hyperspectral system may be trained to be able to recognize a tent in contrast to the surrounding forest. Depending on the mission, the hyperspectral system requires different training to recognize important features in contrast to background details. Proper training of a hyperspectral system may be expensive both in setting up and in training time. Furthermore, without proper design, the hyperspectral system may require additional training to meet the requirements of the mission.) [Garrison: col. 1, line 12-27]; determining (determining, by the processor) [Garrison: col. 1, line 55-56], by executing instructions with one or more of the at least one programmable circuitry (Server computer device 401 also includes a processor 405 for executing instructions) [Garrison: col. 9, line 60-62; Fig. 4], at least one relationship between (compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59] an image captured by the camera during the flight (In push broom scanning, the camera images the scene line by line using the "push broom" scanning mode. 
One narrow spatial line in the scene is imaged at a time, and this line is split into its spectral components before reaching a sensor array. When the sensor array is a two-dimensional (2D) sensor array, one dimension is used for spectral separation and the second dimension is used for imaging in one spatial direction. The second spatial dimension in the scene arises from scanning the camera over the scene (e.g., aircraft movement). The result can be seen as one 2D image for each spectral channel. Alternatively every pixel in the image contains one full spectrum. In snapshot hyperspectral imaging, the camera generates an image of the scene at a specific point in time.) [Garrison: col. 4, line 11-22] and the randomized images ((compare the one or more mission parameters to the generated one or more spectral bands to determine whether the at least one item will be detected; and generate a plurality of images based on a distribution of simulated individual pixel measurements associated with the one or more mission parameters, wherein the plurality of images comprise a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 14, line 50-59]; (the systems and methods described herein describe a more cost-efficient and quicker method of training and analyzing a hyperspectral system by using random pixel distribution) [Garrison: col. 14, line 18-20]; (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. 
Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5]); and determining (determining, by the processor) [Garrison: col. 1, line 55-56], by executing instructions with one or more of the at least one programmable circuitry (Server computer device 401 also includes a processor 405 for executing instructions) [Garrison: col. 9, line 60-62; Fig. 4], a parameter of the camera based on the at least one relationship (HA computer device 102 applies algorithms and exploitations 118 to the two sets of hyperspectral images 112 and 114 to determine performance metrics 120. Examples of algorithms and exploitation include, but are not limited to, best linear unbiased estimation and orthogonal subspace projection. Examples of performance metrics include, but are not limited to, signal-to-noise ratio (SNR), signal compression ratio (SCR), probability of detection (Pd), probability of false alarms (Pfa), minimum detectable quantity (MDQ), and minimum identifiable quantity (MIQ). In the example embodiment, HA computer device 102 generates data plots and graphics 122 based on performance metrics 120. In the example embodiment, HA computer device 102 outputs performance metrics 120 and data plots and graphics 122 to a user) [Garrison: col. 6, line 58 - col. 7, line 5] to calibrate the camera. Garrison does not explicitly disclose the following claim limitations (Emphasis added). determining a parameter of the camera based on the at least one relationship to calibrate the camera. 
However, in the same field of endeavor Zamora further discloses the deficient claim limitations as follows: determining a parameter of the camera based on the at least one relationship ((determining calibration parameters) [Zamora: para. 0033]; (invokes position calculator circuitry 208 to determine calibration information) [Zamora: para. 0027]; (position calculator circuitry 208 calculates and/or otherwise determines spatial information relative to the example camera 112 and the table 102 in view of a coordinate frame 116 (e.g., a spatial frame or mapping having an x-axis, a y-axis and a z-axis). In some examples, the position calculator circuitry 208 is instantiated by dedicated hardware circuitry and/or by programmable circuitry executing position calculation instructions and/or configured to perform operations such as those represented by the flowchart(s) of FIGS. 7-10. As described in further detail below, the example distribution circuitry 210 transmits and/or otherwise distributes calibration information to one or more computing devices) [Zamora: para. 0021]) to calibrate the camera (As described above, the example third stage when calibrating an image device (e.g., a camera) includes determining calibration information (e.g., calibration values, calibration parameters), such as determining camera tilt metrics and camera pan metrics) [Zamora: para. 0034]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Garrison with Zamora to program the system to implement Zamora's method. Therefore, the combination of Garrison with Zamora will enable the system to improve the accuracy of calibration metrics and improve calibration accuracy [Zamora: para. 0099].

Regarding claim 16, Garrison meets the claim limitations as set forth in claim 15. Garrison further meets the claim limitations as follows: determining (determining, by the processor) [Garrison: col. 
1, line 55-56] first points (every pixel in the image) [Garrison: col. 4, line 19], by executing one or more of the at least one programmable circuitry (a processor 405 for executing instructions) [Garrison: col. 9, line 61-62; Fig. 4], first points of the randomized images ((a plurality of images 112 and 114 that each contain a plurality of random pixel mixtures associated with the at least one background item to train a program to recognize the at least one item to detect) [Garrison: col. 12, line 19-23] – Note: The first points can be a plurality of random pixel mixtures associated with the at least one background item to be detected); determining (determining, by the processor) [Garrison: col. 1, line 55-56], by executing one or more of the at least one programmable circuitry (a processor 405 for executing instructions) [Garrison: col. 9, lin
Read full office action
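The loop the rejection maps onto the claims — determine first points from the stored randomized reference images, determine second points in an image captured during the flight, compare the two sets to find at least one relationship, and derive a camera parameter from that relationship to calibrate the camera — can be sketched as a point-correspondence fit. The sketch below is illustrative only: NumPy, the affine model, and every function name are assumptions of mine, not anything taken from the claims or from the cited Garrison or Zamora references.

```python
import numpy as np

# Hypothetical sketch of point-correspondence calibration:
# "first points" come from stored reference imagery, "second points"
# from an in-flight capture; a least-squares affine fit stands in for
# the claimed "relationship", and the rotation it implies stands in
# for a camera "parameter" (cf. Zamora's tilt/pan metrics).

def fit_affine(ref_pts, img_pts):
    """Least-squares A, b such that img ≈ A @ ref + b for each point pair."""
    ref = np.asarray(ref_pts, dtype=float)
    img = np.asarray(img_pts, dtype=float)
    X = np.hstack([ref, np.ones((len(ref), 1))])      # N x 3 design matrix
    params, *_ = np.linalg.lstsq(X, img, rcond=None)  # 3 x 2 solution
    return params[:2].T, params[2]                    # A (2x2), b (2,)

def rotation_deg(A):
    """In-plane rotation angle implied by the linear part of the fit."""
    return float(np.degrees(np.arctan2(A[1, 0], A[0, 0])))

# Synthetic check: reference points rotated 5 degrees and shifted
# should give those values back from the fit.
rng = np.random.default_rng(7)
ref = rng.random((25, 2)) * 100.0
theta = np.radians(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
img = ref @ R.T + np.array([3.0, -2.0])

A, b = fit_affine(ref, img)
print(rotation_deg(A))  # ≈ 5.0
```

An affine model is the simplest relationship that a least-squares fit can recover from 2D correspondences; a real system would more likely fit a homography or a full camera model, but the compare-then-solve structure is the same.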

Prosecution Timeline

Oct 04, 2023
Application Filed
Apr 21, 2025
Non-Final Rejection — §103
Jul 21, 2025
Interview Requested
Jul 22, 2025
Response Filed
Jul 29, 2025
Examiner Interview Summary
Jul 29, 2025
Applicant Interview (Telephonic)
Oct 01, 2025
Final Rejection — §103
Nov 03, 2025
Response after Non-Final Action
Dec 01, 2025
Request for Continued Examination
Dec 07, 2025
Response after Non-Final Action
Dec 10, 2025
Examiner Interview (Telephonic)
Dec 17, 2025
Non-Final Rejection — §103
Mar 10, 2026
Interview Requested
Mar 18, 2026
Applicant Interview (Telephonic)
Mar 18, 2026
Examiner Interview Summary
Mar 19, 2026
Response Filed
Apr 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602837
ON SUB-DIVISION OF MESH SEQUENCES
2y 5m to grant Granted Apr 14, 2026
Patent 12593116
IMAGING MEASUREMENT DEVICE USING GAS ABSORPTION IN THE MID-INFRARED BAND AND OPERATING METHOD OF IMAGING MEASUREMENT DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12581069
METHOD FOR ENCODING/DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12581106
IMAGE DECODING METHOD AND DEVICE THEREFOR
2y 5m to grant Granted Mar 17, 2026
Patent 12574557
SCALABLE VIDEO CODING USING BASE-LAYER HINTS FOR ENHANCEMENT LAYER MOTION PARAMETERS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+33.2%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 470 resolved cases by this examiner. Grant probability derived from career allow rate.
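The headline figures in this panel follow from simple arithmetic on the examiner's career counts. A quick sketch, with the caveat that the variable names are mine and the tool's exact rounding and probability model are not disclosed:

```python
# Reproduce the panel's headline figures from the stated raw counts.
# (Illustrative only: the tool's exact model is not disclosed.)
granted, resolved = 363, 470       # "363 granted / 470 resolved"

allow_rate = granted / resolved
print(f"{allow_rate:.0%}")         # 77% — the career allow rate shown

# "+19.2% vs TC avg" reads as a percentage-point difference, implying:
tc_avg = allow_rate - 0.192
print(f"{tc_avg:.0%}")             # 58% — implied Tech Center 2400 average
```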
