Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,782

THERMAL SENSOR IMAGE AUGMENTATION FOR OBJECT DETECTION

Non-Final OA: §102, §103
Filed: Nov 15, 2023
Examiner: SHIMELES, BEZAWIT NOLAWI
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Faurecia Irystec Inc.
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved; +38.0% vs TC avg; above average)
Interview Lift: -100.0% (minimal lift; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution; 13 applications currently pending
Career History: 14 total applications across all art units

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Note: Tech Center averages are estimates. Based on career data from one resolved case.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/15/2023 is being considered by the examiner.

Drawing Objections

The drawings are objected to because: Regarding Figure 1, “visible image sensor 12” is referred to only as “visible light sensor 12” in the specification and as “visible light sensor” in the claims. Alignment is required in the language used to refer to the sensor between the specification, claims, and figures for clarity.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification Objections

The disclosure is objected to because of the following informalities: In paragraph [0021], lines 7-8, “Such processing results in the captured light represented as a visible light image” should read “Such processing results in the captured light are represented as a visible light image.” Appropriate correction is required.

Claim Objections

Claims 3, 8, and 10 are objected to because of the following informalities:

In Claim 3, Line 1, the term “further comprising the step of” should be changed to “further comprising” in order to avoid an insufficient antecedent issue and to prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, since the term “step of” is not stated in independent claim 1.

In Claim 8, Line 1, the term “further comprising the steps of” should be changed to “further comprising” in order to avoid an insufficient antecedent issue and to prevent a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, since the term “steps of” is not stated in independent claim 1.

In Claim 10, Line 9, the term “when it is determined that the visible light image is degraded” should be changed to “when it is determined that the visible light image is not degraded” in order to properly align with the conditions provided in the specification of the present disclosure.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claims 11-13 and 17 recite limitations that use words like “means” (or “step”) or similar terms with functional language but do not invoke 35 U.S.C. 112(f):

Claim 11 recites the limitation “a computer subsystem having at least one processor and memory storing computer instructions that, when executed by the at least one processor, cause the thermal-augmented image object detection system to…” [Lines 4-6]. Claims 12, 13, and 17 recite the limitation “computer subsystem is further configured so that…” [Line 2]. Such claim limitation(s) is/are: (i) “computer subsystem,” which has a structure associated with it, namely a computer. Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. If applicant does intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitations do not recite sufficient structure to perform the claimed function.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 11, 15, and 16 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Huynh et al. (US 11106903 B1), hereinafter referenced as Huynh.

Regarding claim 1, Huynh teaches a method of detecting an object within an image (Fig. 11, Paragraph [0060] – Huynh discloses Fig. 11 depicts a flow chart showing an example process 1100 for object detection in image data), comprising: obtaining a thermal image (Fig. 1, Paragraph [0019] – Huynh discloses detector 118 may receive a frame of input image data 108 that is in the NIR domain) captured by a thermal image sensor (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may comprise one or more image sensors effective to generate image data representing a scene. In various examples, camera 130 may be effective to capture visible (e.g., RGB) image data and/or infrared image data); projecting the thermal image into a visible light color space to generate a projected thermal image (Fig. 1, Paragraph [0067] – Huynh discloses the generator 106 may transform the feature data from a non-target domain (e.g., NIR) into a target domain (e.g., RGB)); and detecting an object within the projected thermal image through inputting the projected thermal image into a visible light object detector (Fig. 11, Paragraph [0053] – Huynh discloses a detector (e.g., an SSD) trained using a dataset of annotated RGB images to detect objects in the RGB domain may receive NIR image data during runtime (e.g., during inference), transform the NIR image data to synthetic RGB image data using generator 802 and detect objects in the synthetic RGB image data) configured to detect objects within image data within the visible light color space (Fig. 11, Paragraph [0068] – Huynh further discloses the detector may also classify an object within the bounding box. For example, if the detector is trained to detect dogs, the detector may output classification data indicating a confidence value that an object depicted within the bounding box is a dog).

Regarding claim 2, Huynh teaches the method of claim 1. Huynh further teaches wherein the visible light object detector (Fig. 1, Paragraph [0018] – Huynh discloses system 100 may comprise a detector 118. Huynh further discloses detector 118 may be effective to receive input image data 108 (e.g., a key frame)) is configured to detect objects within a visible light image (Fig. 1, Paragraph [0019] – Huynh discloses detector 118 may be trained to detect objects in a particular domain or modality. For example, detector 118 may be effective to detect objects in RGB image data) captured by a visible light sensor (Fig. 1, camera 130, Paragraph [0022]), the projected thermal image (Fig. 1, Paragraph [0019] – Huynh discloses a generator 106 may be trained using a GAN 120, in accordance with various techniques described in further detail below, to transform feature data extracted from image data of a first domain into feature data in a second domain for which the classifier of detector 118 has been trained. See also Paragraph [0067]) captured by the thermal image sensor (Fig. 1, camera 130, Paragraph [0022]), or both the visible light image and the projected thermal image (Fig. 1, Paragraph [0021] – Huynh discloses the system depicted in FIG. 1 is able to perform object detection in two different domains (e.g., in the visible, RGB domain, and in the NIR domain)).

Regarding claim 3, Huynh teaches the method of claim 1. Huynh further teaches further comprising the step of: capturing a visible light image (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may be effective to capture visible (e.g., RGB) image data) and inputting the visible light image into the visible light object detector (Fig. 1, Paragraph [0018] – Huynh discloses system 100 may comprise a detector 118. Huynh further discloses detector 118 may be effective to receive input image data 108 (e.g., a key frame)) in order to detect one or more objects located within the visible light image (Fig. 1, Paragraph [0019] – Huynh discloses detector 118 may be trained to detect objects in a particular domain or modality. For example, detector 118 may be effective to detect objects in RGB image data).

Regarding claim 4, Huynh teaches the method of claim 3. Huynh further teaches wherein the visible light image includes at least three visible light channels (Paragraph [0069] – Huynh discloses RGB image data includes three channels).

Regarding claim 5, Huynh teaches the method of claim 4. Huynh further teaches wherein the at least three visible light channels include one or more of the following: a red channel, a green channel, a blue channel (Paragraph [0014] – Huynh discloses datasets typically comprise RGB (red, green, blue) image data representing image data in the visible light spectrum).
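To make the anticipation mapping concrete, the pipeline the rejection reads onto claim 1 can be sketched in a few lines: a thermal frame is projected into the visible (RGB) color space by a trained generator, and the result is passed to a detector trained only on RGB imagery. The sketch below is illustrative only; project_to_rgb, detect_objects, and the toy generator/detector are hypothetical stand-ins, not code or APIs from Huynh or the application.

    # Minimal sketch of the claim 1 pipeline as the rejection characterizes it.
    # The generator and detector are toy stand-ins; a real system would use a
    # trained domain-transfer model (e.g., a GAN generator) and a trained
    # RGB object detector.
    import numpy as np

    def project_to_rgb(thermal, generator):
        """Map a single-channel thermal image (H, W) to a 3-channel image
        (H, W, 3) in the visible light color space."""
        assert thermal.ndim == 2, "expected a single-channel thermal frame"
        return generator(thermal)

    def detect_objects(thermal, generator, rgb_detector):
        """Project the thermal frame into RGB space, then run a detector
        that was trained only on visible-light images."""
        projected = project_to_rgb(thermal, generator)
        return rgb_detector(projected)  # e.g., [(bbox, label, confidence), ...]

    if __name__ == "__main__":
        # Toy stand-ins so the sketch runs end to end.
        toy_generator = lambda t: np.repeat(t[..., None], 3, axis=-1)
        toy_detector = lambda img: [((0, 0, 8, 8), "object", 0.5)]
        frame = np.random.rand(64, 64).astype(np.float32)
        print(detect_objects(frame, toy_generator, toy_detector))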
Regarding claim 11, Huynh teaches a thermal-augmented image object detection system (Fig. 1, Paragraph [0021] – Huynh discloses the system depicted in FIG. 1 is able to perform object detection in two different domains (e.g., in the visible, RGB domain, and in the NIR domain). Paragraph [0017] – Huynh further discloses image data may be transformed to and from the far infrared domain, the ultraviolet domain, thermal infrared domain, visible spectrum, etc.), comprising: a visible light sensor (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may comprise one or more image sensors effective to generate image data representing a scene) configured to capture a visible light image (Fig. 1, Paragraph [0022] – Huynh further discloses in various examples, camera 130 may be effective to capture visible (e.g., RGB) image data and/or infrared image data); a thermal image sensor (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may comprise one or more image sensors effective to generate image data representing a scene) configured to capture a thermal image (Fig. 1, Paragraph [0022] – Huynh further discloses in various examples, camera 130 may be effective to capture visible (e.g., RGB) image data and/or infrared image data); a computer subsystem (Fig. 1, Paragraph [0018] – Huynh discloses system 100 may comprise a detector 118, a GAN 120, a generator 106. Paragraph [0022] – Huynh further discloses computing device(s) 102 may be effective to implement detector 118, GAN 120, and/or generator 106) having at least one processor and memory storing computer instructions that, when executed by the at least one processor (Fig. 1, Paragraph [0022] – Huynh discloses non-transitory, computer-readable memory 103 may be effective to store one or more instructions that, when executed by at least one processor of computing device(s) 102, program the at least one processor to perform the various techniques described herein), cause the thermal-augmented image object detection system (Fig. 1, Paragraph [0021]) to: project the thermal image into a visible light color space to generate a projected thermal image (Fig. 1, Paragraph [0067] – Huynh discloses the generator 106 may transform the feature data from a non-target domain (e.g., NIR) into a target domain (e.g., RGB)); and detect an object within a shared field of view by inputting one or both of the visible light image (Fig. 11, Paragraph [0060] – Huynh discloses at action 1110, the detector 118 may receive an input frame of image data) and the projected thermal image (Fig. 11, Paragraph [0053] – Huynh discloses a detector (e.g., an SSD) trained using a dataset of annotated RGB images to detect objects in the RGB domain may receive NIR image data during runtime (e.g., during inference), transform the NIR image data to synthetic RGB image data using generator 802 and detect objects in the synthetic RGB image data) into a visible light object detector (Fig. 1, detector 118, Paragraph [0018]) configured to detect objects within image data within the visible light color space (Fig. 1, Paragraph [0019] – Huynh discloses detector 118 may be trained to detect objects in a particular domain or modality. For example, detector 118 may be effective to detect objects in RGB image data).

Regarding claim 15, Huynh teaches the thermal-augmented image object detection system of claim 11. Huynh further teaches wherein the visible light image includes at least three visible light channels (Paragraph [0069] – Huynh discloses RGB image data includes three channels).
Regarding claim 16, Huynh teaches the thermal-augmented image object detection system of claim 15. Huynh further teaches wherein the at least three visible light channels include one or more of the following: a red channel, a green channel, a blue channel (Paragraph [0014] – Huynh discloses datasets typically comprise RGB (red, green, blue) image data representing image data in the visible light spectrum).

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6, 7, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Huynh (US 11106903 B1), hereinafter referenced as Huynh, in view of Feyh (US 20140152772 A1), hereinafter referenced as Feyh.

Regarding claim 6, Huynh teaches the method of claim 1. Huynh further teaches wherein the upsampled thermal image (Fig. 7, #702 and #705 called generators, Paragraph [0048] – Huynh discloses the decoder may construct the output, transformed image from the latent feature data using a series of deconvolution (transpose convolution) layers. The deconvolution may include an up-sampling rate/deconvolution stride in order to recover a target output resolution) is projected to the visible light color space (Fig. 7, Fig. 8, Paragraph [0047] – Huynh discloses the diagram in FIG. 7 depicts a cycle GAN 700 in which RGB image data 701 (e.g., a frame of RGB image data) is transformed by a first generator 702 to generate synthetic frame of NIR image data 703 and is transformed by a second generator 705 to generate synthetic RGB image data 706 from the synthetic NIR image data 703. Paragraph [0051] – Huynh further discloses the diagram of FIG. 8 depicts a cycle GAN 800 in which real NIR image data 801 (e.g., a frame of NIR image data captured by an IR sensor) is transformed by a first generator 802 to generate synthetic RGB image data 803) using a projection function (Fig. 8, Paragraph [0016] – Huynh discloses the generator [wherein the generator is performing projection] is trained to map data from a latent space to a particular data distribution of interest (e.g., from RGB image data to near infrared image data). Paragraph [0017] – Huynh further discloses image data may be transformed to and from the far infrared domain, the ultraviolet domain, thermal infrared domain, visible spectrum, etc.) to generate a projected thermal image (Fig. 1, Paragraph [0067] – Huynh discloses the generator 106 may transform the feature data from a non-target domain (e.g., NIR) into a target domain (e.g., RGB)), and wherein the projected thermal image is used as the upsampled thermal image for object detection (Fig. 8, Paragraph [0053] – Huynh discloses cycle GAN 800 may be used to train generator 802 to transform real NIR image data (e.g., frames of NIR image data) to synthetic, but realistic, RGB image data. Accordingly, a detector (e.g., an SSD) trained using a dataset of annotated RGB images to detect objects in the RGB domain may receive NIR image data during runtime (e.g., during inference), transform the NIR image data to synthetic RGB image data using generator 802 and detect objects in the synthetic RGB image data. Fig. 11, Paragraph [0067] – Huynh further discloses processing may continue from action 1160 to action 1170, “Transform feature data to synthetic feature data in target domain.” Huynh further discloses the generator 106 may transform the feature data from a non-target domain (e.g., NIR) into a target domain (e.g., RGB)). Huynh fails to explicitly teach upsampling the thermal image using a super-resolution technique to generate an upsampled thermal image.

However, Feyh explicitly teaches upsampling the thermal image using a super-resolution technique (Fig. 1, Paragraph [0020] – Feyh discloses the thermal images generated by the IR sensor can be registered with respect to each other to produce a combined thermal image having a higher resolution and covering a larger area than the individual thermal images generated by the IR sensor 12) to generate an upsampled thermal image (Fig. 4, Paragraph [0021] – Feyh discloses the algorithms for processing sensor output to generate the combined thermal image are established and can be found by referring to super-resolution and pixel de-mosaicking. Paragraph [0022] – Feyh further discloses the device and method in accordance with the present disclosure are capable of producing thermal images that have a much higher resolution than would otherwise be possible using a traditional IR camera with a single pixel, or small pixel array, imager).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh of having the method further comprising: wherein the upsampled thermal image is projected to the visible light color space using a projection function to generate a projected thermal image, and wherein the projected thermal image is used as the upsampled thermal image for object detection, with the teachings of Feyh having upsampling the thermal image using a super-resolution technique to generate an upsampled thermal image. The combination would result in Huynh’s method further comprising: upsampling the thermal image using a super-resolution technique to generate an upsampled thermal image, wherein the upsampled thermal image is projected to the visible light color space using a projection function to generate a projected thermal image, and wherein the projected thermal image is used as the upsampled thermal image for object detection.

The motivation behind the modification would have been to obtain a method of detecting an object within an image that utilizes an upsampled thermal image, since both Huynh and Feyh relate to image processing devices and methods that utilize thermal images, wherein Huynh has techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data is different from the modality of the object detection model, while Feyh’s device for generating thermal images includes a low resolution infrared (IR) sensor where the low resolution IR sensor does not require an expensive lens to direct infrared radiation onto multiple pixels; thus, a device capable of producing high resolution thermal images in accordance with the present disclosure can be provided in much smaller sizes and at much less cost than traditional IR cameras. Please see Huynh (US 11106903 B1), Paragraphs [0014, 0069], and Feyh (US 20140152772 A1), Paragraph [0022].

Regarding claim 7, Huynh in view of Feyh teaches the method of claim 6. Huynh further teaches wherein the projected thermal image (Fig. 1, Paragraph [0067]) is combined with a visible light image (Fig. 10, Paragraph [0058] – Huynh discloses two detectors (e.g., SSDs) 1006 and 1008 are shown. In an example, detector 1006 may be trained to detect objects in the RGB domain, in accordance with the various techniques described above. Similarly, detector 1008 may be trained to detect objects in the NIR domain, in accordance with the various techniques described above. As previously discussed, the RGB domain and NIR domain are used for illustrative purposes only, and any other two domains may be used in accordance with the present disclosure) in order to detect an object visible (Fig. 10, Paragraph [0059] – Huynh discloses the results from detector 1006 and detector 1008 may be fused using non-maximal suppression to find the bounding boxes and/or classifications from detectors 1006, 1008 with the highest confidence scores that are more likely to represent a more accurate location of the object-of-interest) within a shared field of view (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may be effective to capture visible (e.g., RGB) image data and/or infrared image data. For example, camera 130 may be a home security camera effective to capture RGB image data when lighting conditions allow and NIR image data in low light conditions), and wherein the shared field of view is a field of view defined by overlapping of a field of view of the thermal image sensor and a field of view of a visible light sensor used to capture the visible light image (Fig. 6, Paragraph [0045] – Huynh discloses a panoramic camera system may comprise multiple image sensors 632 resulting in multiple images and/or video frames that may be stitched and may be blended to form a seamless panoramic output. An example of an image sensor 632 may be camera 130 shown and described in FIG. 1. As described, camera 130 may be configured to capture color information, IR image data, image geometry information, and/or ambient light information).
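The claim 6 combination, as framed above, amounts to inserting a super-resolution step ahead of the projection step. A hedged sketch follows, with nearest-neighbor upsampling standing in for a true super-resolution model (Feyh's registration-based approach is not reproduced here), and with the same hypothetical generator and rgb_detector stand-ins as in the earlier sketch.

    # Sketch of the asserted Huynh/Feyh combination for claim 6:
    # (1) upsample the thermal frame (super-resolution step, here a
    #     nearest-neighbor stand-in), (2) project it into RGB space,
    #     (3) detect objects with an RGB-trained detector.
    import numpy as np

    def upsample_thermal(thermal, scale=4):
        """Stand-in super-resolution step: (H, W) -> (H*scale, W*scale)."""
        return np.kron(thermal, np.ones((scale, scale), dtype=thermal.dtype))

    def detect_with_superres(thermal, generator, rgb_detector, scale=4):
        upsampled = upsample_thermal(thermal, scale)  # "upsampled thermal image"
        projected = generator(upsampled)              # "projected thermal image"
        return rgb_detector(projected)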
Regarding claim 17, Huynh teaches the thermal-augmented image object detection system of claim 11. Huynh further teaches wherein the computer subsystem (Fig. 1, Paragraph [0018, 0022]) is further configured so that, when the computer instructions are executed by the at least one processor (Fig. 1, Paragraph [0022]), the upsampled thermal image (Fig. 7, #702 and #705 called generators, Paragraph [0048] – Huynh discloses the decoder may construct the output, transformed image from the latent feature data using a series of deconvolution (transpose convolution) layers. The deconvolution may include an up-sampling rate/deconvolution stride in order to recover a target output resolution) is projected to the visible light color space (Fig. 7, Fig. 8, Paragraph [0047] – Huynh discloses the diagram in FIG. 7 depicts a cycle GAN 700 in which RGB image data 701 (e.g., a frame of RGB image data) is transformed by a first generator 702 to generate synthetic frame of NIR image data 703 and is transformed by a second generator 705 to generate synthetic RGB image data 706 from the synthetic NIR image data 703. Paragraph [0051] – Huynh further discloses the diagram of FIG. 8 depicts a cycle GAN 800 in which real NIR image data 801 (e.g., a frame of NIR image data captured by an IR sensor) is transformed by a first generator 802 to generate synthetic RGB image data 803) using a projection function (Fig. 8, Paragraph [0016] – Huynh discloses the generator [wherein the generator is performing projection] is trained to map data from a latent space to a particular data distribution of interest (e.g., from RGB image data to near infrared image data). Paragraph [0017] – Huynh further discloses image data may be transformed to and from the far infrared domain, the ultraviolet domain, thermal infrared domain, visible spectrum, etc.) to generate a projected thermal image (Fig. 1, Paragraph [0067] – Huynh discloses the generator 106 may transform the feature data from a non-target domain (e.g., NIR) into a target domain (e.g., RGB)), and wherein the projected thermal image is used as the upsampled thermal image for object detection (Fig. 8, Paragraph [0053] – Huynh discloses cycle GAN 800 may be used to train generator 802 to transform real NIR image data (e.g., frames of NIR image data) to synthetic, but realistic, RGB image data. Accordingly, a detector (e.g., an SSD) trained using a dataset of annotated RGB images to detect objects in the RGB domain may receive NIR image data during runtime (e.g., during inference), transform the NIR image data to synthetic RGB image data using generator 802 and detect objects in the synthetic RGB image data. Fig. 11, Paragraph [0067] – Huynh further discloses processing may continue from action 1160 to action 1170, “Transform feature data to synthetic feature data in target domain.” Huynh further discloses the generator 106 may transform the feature data from a non-target domain (e.g., NIR) into a target domain (e.g., RGB)). Although Huynh explicitly teaches the thermal-augmented image object detection system (Fig. 1, Paragraph [0021]), Huynh fails to explicitly teach that the thermal-augmented image object detection system upsamples the thermal image using a super-resolution technique to generate an upsampled thermal image.

However, Feyh explicitly teaches the thermal-augmented image object detection system (Fig. 1, Paragraph [0014] – Feyh discloses a device and method for producing thermal images that utilizes a low resolution IR sensor incorporated into a smartphone, tablet, or other type of mobile device) upsamples the thermal image using a super-resolution technique (Fig. 1, Paragraph [0020] – Feyh discloses the thermal images generated by the IR sensor can be registered with respect to each other to produce a combined thermal image having a higher resolution and covering a larger area than the individual thermal images generated by the IR sensor 12) to generate an upsampled thermal image (Fig. 4, Paragraph [0021] – Feyh discloses the algorithms for processing sensor output to generate the combined thermal image are established and can be found by referring to super-resolution and pixel de-mosaicking. Paragraph [0022] – Feyh further discloses the device and method in accordance with the present disclosure are capable of producing thermal images that have a much higher resolution than would otherwise be possible using a traditional IR camera with a single pixel, or small pixel array, imager).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh of having the thermal-augmented image object detection system of claim 11, wherein the computer subsystem is further configured so that, when the computer instructions are executed by the at least one processor, the upsampled thermal image is projected to the visible light color space using a projection function to generate a projected thermal image, and wherein the projected thermal image is used as the upsampled thermal image for object detection, with the teachings of Feyh having the thermal-augmented image object detection system upsample the thermal image using a super-resolution technique to generate an upsampled thermal image. The combination would result in Huynh’s thermal-augmented image object detection system wherein the computer subsystem is further configured so that, when the computer instructions are executed by the at least one processor, the thermal-augmented image object detection system upsamples the thermal image using a super-resolution technique to generate an upsampled thermal image, wherein the upsampled thermal image is projected to the visible light color space using a projection function to generate a projected thermal image, and wherein the projected thermal image is used as the upsampled thermal image for object detection.

The motivation behind the modification would have been to obtain a method of detecting an object within an image that utilizes an upsampled thermal image, since both Huynh and Feyh relate to image processing devices and methods that utilize thermal images, wherein Huynh has techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data is different from the modality of the object detection model, while Feyh’s device for generating thermal images includes a low resolution infrared (IR) sensor where the low resolution IR sensor does not require an expensive lens to direct infrared radiation onto multiple pixels; thus, a device capable of producing high resolution thermal images in accordance with the present disclosure can be provided in much smaller sizes and at much less cost than traditional IR cameras. Please see Huynh (US 11106903 B1), Paragraphs [0014, 0069], and Feyh (US 20140152772 A1), Paragraph [0022].
Regarding claim 18, Huynh in view of Feyh teaches the thermal-augmented image object detection system of claim 17. Huynh further teaches wherein the projected thermal image (Fig. 1, Paragraph [0067]) is combined with a visible light image (Fig. 10, Paragraph [0058] – Huynh discloses two detectors (e.g., SSDs) 1006 and 1008 are shown. In an example, detector 1006 may be trained to detect objects in the RGB domain, in accordance with the various techniques described above. Similarly, detector 1008 may be trained to detect objects in the NIR domain, in accordance with the various techniques described above. As previously discussed, the RGB domain and NIR domain are used for illustrative purposes only, and any other two domains may be used in accordance with the present disclosure) in order to detect an object visible (Fig. 10, Paragraph [0059] – Huynh discloses the results from detector 1006 and detector 1008 may be fused using non-maximal suppression to find the bounding boxes and/or classifications from detectors 1006, 1008 with the highest confidence scores that are more likely to represent a more accurate location of the object-of-interest) within a shared field of view (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may be effective to capture visible (e.g., RGB) image data and/or infrared image data. For example, camera 130 may be a home security camera effective to capture RGB image data when lighting conditions allow and NIR image data in low light conditions), and wherein the shared field of view is a field of view defined by overlapping of a field of view of the thermal image sensor and a field of view of a visible light sensor used to capture the visible light image (Fig. 6, Paragraph [0045] – Huynh discloses a panoramic camera system may comprise multiple image sensors 632 resulting in multiple images and/or video frames that may be stitched and may be blended to form a seamless panoramic output. An example of an image sensor 632 may be camera 130 shown and described in FIG. 1. As described, camera 130 may be configured to capture color information, IR image data, image geometry information, and/or ambient light information).
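Claims 7 and 18 read onto Huynh's fusion of detections from the two modalities by non-maximal suppression (Paragraph [0059]). A minimal, class-agnostic version of that fusion step is sketched below; the greedy NMS procedure and the 0.5 IoU threshold are illustrative choices, not parameters taken from Huynh.

    # Illustrative fusion of visible-light and projected-thermal detections
    # over a shared field of view using greedy non-maximal suppression.
    # Detections are (box, label, confidence) with box = (x1, y1, x2, y2).

    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def fuse_detections(visible_dets, thermal_dets, iou_thresh=0.5):
        """Pool both detectors' outputs, keep the highest-confidence boxes."""
        pooled = sorted(visible_dets + thermal_dets, key=lambda d: -d[2])
        kept = []
        for box, label, conf in pooled:
            if all(iou(box, k[0]) < iou_thresh for k in kept):
                kept.append((box, label, conf))
        return kept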
Claims 8, 10, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Huynh (US 11106903 B1), hereinafter referenced as Huynh, in view of Bleyer (US 20210160441 A1), hereinafter referenced as Bleyer.

Regarding claim 8, Huynh teaches the method of claim 1. Huynh further teaches further comprising the steps of: capturing a visible light image using a visible light sensor (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 [wherein camera 130 is a visible light sensor] may be effective to capture visible (e.g., RGB) image data). Huynh fails to explicitly teach determining whether the visible light is degraded, wherein the upsampled thermal image is used for object detection when it is determined that the visible light image is degraded, and wherein the visible light image is used for object detection when it is determined that the visible light image is not degraded.

However, Bleyer explicitly teaches determining whether the visible light is degraded (Fig. 12, Paragraph [0152] – Bleyer discloses a light sensor may be used to initially determine the light conditions of the environment. If the light sensor indicates that the light is low, then an attempt may be made to use the low light cameras to generate the passthrough image. Alternatively, if the light sensor indicates the light conditions are too low, then the embodiments may refrain from using either one of the visible light cameras or the low light cameras); wherein the upsampled thermal image is used for object detection (Fig. 3B, Paragraph [0070] – Bleyer discloses the camera distortion correction 330 may include optimizations to correct for distortions related to barrel distortion, pincushion distortion, flare, ghosts, spherical aberrations, chromatic aberrations, coma aberrations, astigmatisms, shutter speed, resolution, [wherein resolution correction is upsampling of the thermal image] brightness abilities, intensity abilities, and/or exposure properties) when it is determined that the visible light image is degraded (Fig. 12, Paragraph [0153] – Bleyer discloses in response to determining that neither one of the first camera or the second camera is usable, there is an act (act 1215) of causing the third camera to generate a thermal image of the environment. Paragraph [0154] – Bleyer further discloses method 1200 then includes an act (act 1220) of performing planar reprojection on the thermal image. The planar reprojection is performed by selecting, relative to the computer system/HMD, a perspective distance at which to project the thermal image so as to provide depth for the thermal image), and wherein the visible light image is used for object detection when it is determined that the visible light image is not degraded (Fig. 10, Paragraph [0140] – Bleyer discloses because the environment 900 is a lighted environment, the embodiments determined that the visible light cameras will be able to accurately generate passthrough images. As a result, the visible light cameras were selected and were used to generate the visible light image data 1005, which is included in the composite passthrough image 1000).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh of having a method of detecting an object within an image, further comprising the steps of: capturing a visible light image using a visible light sensor, with the teachings of Bleyer having determining whether the visible light is degraded, wherein the upsampled thermal image is used for object detection when it is determined that the visible light image is degraded, and wherein the visible light image is used for object detection when it is determined that the visible light image is not degraded. The combination would result in Huynh’s method of detecting an object within an image, further comprising the steps of: capturing a visible light image using a visible light sensor, and determining whether the visible light is degraded, wherein the upsampled thermal image is used for object detection when it is determined that the visible light image is degraded, and wherein the visible light image is used for object detection when it is determined that the visible light image is not degraded.

The motivation behind the modification would have been to obtain a method of detecting an object within an image using an upsampled thermal image, since both Huynh and Bleyer relate to image processing devices and methods that utilize thermal images, wherein Huynh has techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data is different from the modality of the object detection model, while Bleyer’s enhanced passthrough visualization brings substantial benefits to the technology by generating enhanced passthrough visualizations and by directly improving the user's experience with the computer system. In particular, the embodiments are able to merge, fuse, overlay, or otherwise combine different types of image data into a composite “passthrough” image. This composite passthrough image is an enhanced image because it provides additional information that would not be available if only one of the different types of image data were used. Please see Huynh (US 11106903 B1), Paragraphs [0014, 0069], and Bleyer (US 20210160441 A1), Paragraph [0035].
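The conditional logic the rejection attributes to the Huynh/Bleyer combination for claim 8 (and, in system form, claims 10 and 12) can be sketched as a simple gate. The mean-brightness heuristic and the threshold below are assumptions standing in for Bleyer's light-sensor check; they are not taken from either reference.

    # Sketch of the claim 8 selection logic: use the visible-light image
    # when it is not degraded, otherwise fall back to the upsampled,
    # projected thermal image. The brightness test is an assumed heuristic.
    import numpy as np

    LOW_LIGHT_THRESHOLD = 0.15  # assumed cutoff on normalized intensity

    def visible_is_degraded(visible):
        """Treat a dim visible-light frame as degraded (illustrative only)."""
        return float(np.mean(visible)) < LOW_LIGHT_THRESHOLD

    def select_and_detect(visible, thermal, upsample, generator, detector):
        if visible_is_degraded(visible):
            # Degraded: detect on the upsampled, projected thermal image.
            return detector(generator(upsample(thermal)))
        # Not degraded: detect on the visible light image directly.
        return detector(visible)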
Regarding claim 10, Huynh teaches a method of detecting an object within a thermal image (Fig. 11, Paragraph [0060] – Huynh discloses Fig. 11 depicts a flow chart showing an example process 1100 for object detection in image data. Paragraph [0059] – Huynh further discloses any image domains (e.g., thermal IR, UV, visible, etc.) may instead be used in accordance with the techniques described herein), comprising: obtaining a visible light image (Fig. 11, Paragraph [0060] – Huynh discloses at action 1110, the detector 118 may receive an input frame of image data) captured by a visible light sensor (Fig. 1, Paragraph [0022] – Huynh discloses camera 130 may comprise one or more image sensors effective to generate image data representing a scene. In various examples, camera 130 may be effective to capture visible (e.g., RGB) image data and/or infrared image data). Huynh fails to explicitly teach determining whether the visible light is degraded; when it is determined that the visible light image is degraded: projecting the thermal image into a visible light color space to generate a projected thermal image; and detecting an object within the projected thermal image through inputting the projected thermal image into a visible light object detector, and when it is determined that the visible light image is degraded, detecting an object within the visible light image through inputting the visible light image into the image object detector.

However, Bleyer explicitly teaches determining whether the visible light is degraded (Fig. 12, Paragraph [0152] – Bleyer discloses a light sensor may be used to initially determine the light conditions of the environment. If the light sensor indicates that the light is low, then an attempt may be made to use the low light cameras to generate the passthrough image. Alternatively, if the light sensor indicates the light conditions are too low, then the embodiments may refrain from using either one of the visible light cameras or the low light cameras); when it is determined that the visible light image is degraded: projecting the thermal image into a visible light color space to generate a projected thermal image (Fig. 12, Paragraph [0153] – Bleyer discloses in response to determining that neither one of the first camera or the second camera is usable, there is an act (act 1215) of causing the third camera to generate a thermal image of the environment. Paragraph [0154] – Bleyer further discloses method 1200 then includes an act (act 1220) of performing planar reprojection on the thermal image. The planar reprojection is performed by selecting, relative to the computer system/HMD, a perspective distance at which to project the thermal image so as to provide depth for the thermal image); and detecting an object within the projected thermal image through inputting the projected thermal image into a visible light object detector (Fig. 12, Paragraph [0149] – Bleyer discloses Fig. 12 illustrates another flowchart of an example method 1200 for selectively using image data to generate a passthrough image for display on an HMD. Method 1200 may be performed by any of the HMDs discussed thus far. Paragraph [0039] – Bleyer further discloses HMD 100 can use the scanning sensor(s) 105 to scan environments, map environments, capture environmental data, and/or generate images of any kind of environment (e.g., by generating a 3D representation of the environment or by generating a “passthrough” visualization) [wherein the HMD is a visible light object detector]. Scanning sensor(s) 105 may comprise any number or any type of scanning devices, without limit), and when it is determined that the visible light image is degraded (Fig. 10, Paragraph [0140] – Bleyer discloses because the environment 900 is a lighted environment, the embodiments determined that the visible light cameras will be able to accurately generate passthrough images. As a result, the visible light cameras were selected and were used to generate the visible light image data 1005, which is included in the composite passthrough image 1000), detecting an object within the visible light image through inputting the visible light image into the image object detector (Fig. 1, Paragraph [0039] – Bleyer discloses HMD 100 can use the scanning sensor(s) 105 to scan environments, map environments, capture environmental data, and/or generate images of any kind of environment (e.g., by generating a 3D representation of the environment or by generating a “passthrough” visualization) [wherein the HMD is a visible light object detector]. Scanning sensor(s) 105 may comprise any number or any type of scanning devices, without limit).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh of having a method of detecting an object within a thermal image, comprising: obtaining a visible light image captured by a visible light sensor, with the teachings of Bleyer having determining whether the visible light is degraded; when it is determined that the visible light image is degraded: projecting the thermal image into a visible light color space to generate a projected thermal image; and detecting an object within the projected thermal image through inputting the projected thermal image into a visible light object detector, and when it is determined that the visible light image is degraded, detecting an object within the visible light image through inputting the visible light image into the image object detector. The combination would result in Huynh’s method of detecting an object within a thermal image, comprising: obtaining a visible light image captured by a visible light sensor, determining whether the visible light is degraded; when it is determined that the visible light image is degraded: projecting the thermal image into a visible light color space to generate a projected thermal image; and detecting an object within the projected thermal image through inputting the projected thermal image into a visible light object detector, and when it is determined that the visible light image is degraded, detecting an object within the visible light image through inputting the visible light image into the image object detector.
The motivation behind the modification would have been to obtain a method of detecting an object within an image using an upsampled thermal image, since both Huynh and Bleyer relate to image processing devices and methods that utilize thermal images, wherein Huynh has techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data is different from the modality of the object detection model, while Bleyer’s enhanced passthrough visualization brings substantial benefits to the technology by generating enhanced passthrough visualizations and by directly improving the user's experience with the computer system. In particular, the embodiments are able to merge, fuse, overlay, or otherwise combine different types of image data into a composite “passthrough” image. This composite passthrough image is an enhanced image because it provides additional information that would not be available if only one of the different types of image data were used. Please see Huynh (US 11106903 B1), Paragraphs [0014, 0069], and Bleyer (US 20210160441 A1), Paragraph [0035].

Regarding claim 12, Huynh teaches the thermal-augmented image object detection system of claim 11. Huynh further teaches wherein the computer subsystem (Fig. 1, Paragraph [0018, 0022]) is further configured so that, when the computer instructions are executed by the at least one processor (Fig. 1, Paragraph [0022]). Although Huynh explicitly teaches the thermal-augmented image object detection system (Fig. 1, Paragraph [0021]), Huynh fails to explicitly teach that the thermal-augmented image object detection system determines whether the visible light is degraded; and, when it is determined that the visible light image is degraded, performs upsampling of the thermal image to generate the upsampled thermal image and inputs the upsampled thermal image into the visible light object detector in order to detect objects.

However, Bleyer explicitly teaches the thermal-augmented image object detection system determines whether the visible light is degraded (Fig. 12, Paragraph [0152] – Bleyer discloses a light sensor may be used to initially determine the light conditions of the environment. If the light sensor indicates that the light is low, then an attempt may be made to use the low light cameras to generate the passthrough image. Alternatively, if the light sensor indicates the light conditions are too low, then the embodiments may refrain from using either one of the visible light cameras or the low light cameras); and, when it is determined that the visible light image is degraded (Fig. 10, Paragraph [0140] – Bleyer discloses because the environment 900 is a lighted environment, the embodiments determined that the visible light cameras will be able to accurately generate passthrough images. As a result, the visible light cameras were selected and were used to generate the visible light image data 1005, which is included in the composite passthrough image 1000), to perform upsampling of the thermal image to generate the upsampled thermal image (Fig. 3B, Paragraph [0070] – Bleyer discloses the camera distortion correction 330 may include optimizations to correct for distortions related to barrel distortion, pincushion distortion, flare, ghosts, spherical aberrations, chromatic aberrations, coma aberrations, astigmatisms, shutter speed, resolution, [wherein resolution correction is upsampling of the thermal image] brightness abilities, intensity abilities, and/or exposure properties) and to input the upsampled thermal image into the visible light object detector in order to detect objects (Fig. 12, Paragraph [0149] – Bleyer discloses Fig. 12 illustrates another flowchart of an example method 1200 for selectively using image data to generate a passthrough image for display on an HMD. Method 1200 may be performed by any of the HMDs discussed thus far. See also Paragraph [0039]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh of having the thermal-augmented image object detection system of claim 11, wherein the computer subsystem is further configured so that, when the computer instructions are executed by the at least one processor, with the teachings of Bleyer having the thermal-augmented image object detection system determine whether the visible light is degraded; and, when it is determined that the visible light image is degraded, to perform upsampling of the thermal image to generate the upsampled thermal image and to input the upsampled thermal image into the visible light object detector in order to detect objects. The combination would result in Huynh’s thermal-augmented image object detection system, wherein the thermal-augmented image object detection system determines whether the visible light is degraded; and, when it is determined that the visible light image is degraded, performs upsampling of the thermal image to generate the upsampled thermal image and inputs the upsampled thermal image into the visible light object detector in order to detect objects.

The motivation behind the modification would have been to obtain a thermal-augmented image object detection system using an upsampled thermal image, since both Huynh and Bleyer relate to image processing devices and methods that utilize thermal images, wherein Huynh has techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data is different from the modality of the object detection model, while Bleyer’s enhanced passthrough visualization brings substantial benefits to the technology by generating enhanced passthrough visualizations and by directly improving the user's experience with the computer system. In particular, the embodiments are able to merge, fuse, overlay, or otherwise combine different types of image data into a composite “passthrough” image. This composite passthrough image is an enhanced image because it provides additional information that would not be available if only one of the different types of image data were used. Please see Huynh (US 11106903 B1), Paragraphs [0014, 0069], and Bleyer (US 20210160441 A1), Paragraph [0035].

Regarding claim 13, Huynh in view of Bleyer teaches the thermal-augmented image object detection system of claim 12. Huynh further teaches wherein the computer subsystem (Fig. 1, Paragraph [0018, 0022]) is further configured so that, when the computer instructions are executed by the at least one processor (Fig. 1, Paragraph [0022]),
1, Paragraph [0022]) the thermal-augmented image object detection system inputs the visible light image into the visible light object detector (Fig. 1, Paragraph [0018] - Huynh discloses system 100 may comprise a detector 118. Huynh further discloses detector 118 may be effective to receive input image data 108 (e.g., a key frame)) in order to detect objects (Fig. 1, Paragraph [0019] - Huynh discloses detector 118 may be trained to detect objects in a particular domain or modality. For example, detector 118 may be effective to detect objects in RGB image data). Regarding claim 14, Huynh in view of Bleyer teaches the thermal-augmented image object detection system of claim 12, Huynh further teaches wherein the shared field of view is a field of view defined by overlapping of a field of view of the thermal image sensor and a field of view of a visible light sensor used to capture the visible light image (Fig. 6, Paragraph [0045] – Huynh discloses a panoramic camera system may comprise multiple image sensors 632 resulting in multiple images and/or video frames that may be stitched and may be blended to form a seamless panoramic output. An example of an image sensor 632 may be camera 130 shown and described in FIG. 1. As described, camera 130 may be configured to capture color information, IR image data, image geometry information, and/or ambient light information). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Huynh (US 11106903 B1), hereinafter referenced as Huynh in view of Mohr (US 20230306747 A1), hereinafter referenced as Mohr. Regarding claim 9, Huynh teaches the method of claim 1, although Huynh further teaches wherein the visible light object detector (Fig. 1, Paragraph [0018] - Huynh discloses system 100 may comprise a detector 118. Huynh further discloses detector 118 may be effective to receive input image data 108 (e.g., a key frame)). Huynh fails to explicitly teach wherein the visible light object detector includes a bidirectional feature pyramid network and an object detector head. However, Mohr explicitly teaches wherein the visible light object detector includes a bidirectional feature pyramid network and an object detector head (Fig. 5, Paragraph [0063] – Mohr discloses the object detection unit 104 employs a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multi-scale feature fusion and a compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date the claimed invention was made to combine the teachings of Huynh of having a method of detecting an object within an image wherein the visible light object detector, with the teachings of Mohr having wherein the visible light object detector includes a bidirectional feature pyramid network and an object detector head. Wherein having Huynh’s method of detecting an object within an image wherein the visible light object detector includes a bidirectional feature pyramid network and an object detector head. 
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Huynh (US 11106903 B1), hereinafter referenced as Huynh, in view of Mohr (US 20230306747 A1), hereinafter referenced as Mohr.

Regarding claim 9, Huynh teaches the method of claim 1, and Huynh further teaches the visible light object detector (Fig. 1, Paragraph [0018] - Huynh discloses that system 100 may comprise a detector 118, and that detector 118 may be effective to receive input image data 108 (e.g., a key frame)). Huynh fails to explicitly teach wherein the visible light object detector includes a bidirectional feature pyramid network and an object detector head. However, Mohr explicitly teaches wherein the visible light object detector includes a bidirectional feature pyramid network and an object detector head (Fig. 5, Paragraph [0063] - Mohr discloses that the object detection unit 104 employs a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multi-scale feature fusion, and a compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh, which disclose a method of detecting an object within an image using the visible light object detector, with the teachings of Mohr, in which the visible light object detector includes a bidirectional feature pyramid network and an object detector head. The resulting combination is Huynh's method of detecting an object within an image in which the visible light object detector includes a bidirectional feature pyramid network and an object detector head.

The motivation behind the modification would have been to obtain a method of detecting an object within an image that can perform a plurality of optimizations and/or operations for detecting the objects in the environment. Both Huynh and Mohr relate to image processing devices and methods for object detection: Huynh provides techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data differs from the modality of the object detection model, while Mohr provides an object detection unit configured to perform a plurality of optimizations and/or operations for detecting the objects in the environment, such as the use of a weighted bi-directional feature pyramid network (BiFPN) for easier and faster multi-scale feature combinations, and/or a compound scaling method for uniformly scaling the resolution, depth, and width for all feature networks and box/class prediction networks at the same time. Please see Huynh (US 11106903 B1), Paragraphs [0014] and [0069], and Mohr (US 20230306747 A1), Paragraph [0045].
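To make the BiFPN reference concrete: the "weighted" fusion Mohr cites is typically a fast normalized weighted sum of resized feature maps. The toy PyTorch module below shows one such fusion node under that assumption; it is an editorial illustration, not Mohr's implementation, and the layer shapes are hypothetical.

```python
# Toy "fast normalized fusion" node of the kind used in a weighted BiFPN:
# two feature maps are resized to a common resolution and combined with
# learned, ReLU-clamped, normalized weights. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))          # one weight per input
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, coarse: torch.Tensor, fine: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser map to the finer map's spatial size.
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode="nearest")
        w = F.relu(self.w)
        w = w / (w.sum() + 1e-4)                      # fast normalized fusion
        return self.conv(w[0] * coarse + w[1] * fine)

# An object detector head would then run box/class predictions on the
# fused multi-scale maps.
```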
Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Huynh (US 11106903 B1), hereinafter referenced as Huynh, in view of Choi (US 20220080829 A1), hereinafter referenced as Choi.

Regarding claim 19, Huynh teaches the thermal-augmented image object detection system of claim 11. Huynh fails to explicitly teach a vehicle having vehicle electronics that include the thermal-augmented image object detection system. However, Choi explicitly teaches a vehicle having vehicle electronics (Fig. 2, Paragraph [0059] - Choi discloses that a vehicle 10 may include a user interface device 200, an object detection device 210, a communication device 220, a driving operation device 230, a main ECU 240, a driving control device 250, an autonomous device 260, a sensing unit 270, and a location data generation device 280) that include the thermal-augmented image object detection system (Fig. 8, Paragraph [0207] - Choi discloses that the vehicle according to the present disclosure may include the first camera 510, the second camera 520, and the vehicle image processing device 600 [wherein the vehicle image processing device 600 is the thermal-augmented image object detection system]. Fig. 32, Paragraph [0311] - Choi further discloses that when an AR function is executed in S101, and it is checked whether an illumination is less than a predetermined value in S102, the vehicle image processing device 600 may run a thermal imaging (FIR) camera depending on the illumination and obtain an image in S105).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh, which disclose a thermal-augmented image object detection system, with the teachings of Choi, which disclose a vehicle having vehicle electronics that include such a system. The resulting combination is Huynh's thermal-augmented image object detection system incorporated into a vehicle having vehicle electronics. The motivation behind the modification would have been to obtain a thermal-augmented image object detection system that can improve driver visibility by configuring a screen through association of images taken by a plurality of cameras. Both Huynh and Choi relate to image processing devices and methods for object detection: Huynh provides techniques that may be used to leverage currently available datasets to train object detection models, even when the modality of the training data differs from the modality of the object detection model, while Choi provides a vehicle image processing method comprising generating augmented reality data based on the extracted target information so that a visual object corresponding to the target is displayed. Please see Huynh (US 11106903 B1), Paragraphs [0014] and [0069], and Choi (US 20220080829 A1), Paragraph [0367].

Regarding claim 20, Huynh teaches the thermal-augmented image object detection system of claim 11. Huynh fails to explicitly teach an advanced driver assistance system (ADAS) for a vehicle. However, Choi explicitly teaches an advanced driver assistance system (ADAS) (Fig. 2, Paragraph [0091] - Choi discloses that the autonomous device 260 can implement at least one advanced driver assistance system (ADAS) function) for a vehicle (Fig. 2, Paragraph [0059] - Choi discloses that a vehicle 10 may include… an autonomous device 260). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Huynh, which disclose a thermal-augmented image object detection system, with the teachings of Choi, which disclose an advanced driver assistance system (ADAS) for a vehicle. The resulting combination is an ADAS for a vehicle comprising Huynh's thermal-augmented image object detection system. The motivation behind the modification would have been the same as for claim 19: to obtain a thermal-augmented image object detection system that can improve driver visibility by configuring a screen through association of images taken by a plurality of cameras, for the reasons given above. Please see Huynh (US 11106903 B1), Paragraphs [0014] and [0069], and Choi (US 20220080829 A1), Paragraph [0367].
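Choi's cited Fig. 32 flow (run the FIR camera when illumination falls below a predetermined value) maps naturally onto a small dispatch routine. Below is a hedged sketch of that flow; the camera interfaces and the lux threshold are invented for illustration and are not Choi's code.

```python
# Sketch of the Choi-style camera selection cited above: when ambient
# illumination is below a threshold, capture from the thermal (FIR)
# camera instead of the visible camera. Interfaces are hypothetical.
LUX_THRESHOLD = 10.0  # illustrative low-light cutoff, in lux

def acquire_frame(illumination_lux: float, visible_camera, fir_camera):
    """Return a frame from the camera appropriate to the current lighting."""
    if illumination_lux < LUX_THRESHOLD:
        return fir_camera.capture()   # low light: use thermal imaging
    return visible_camera.capture()   # adequate light: use visible camera
```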
Conclusion

Listed below is prior art made of record and not relied upon that is considered pertinent to applicant's disclosure.

Luan et al. (US 20220215528 A1) - An example three-dimensional (3D) printer may include a camera to capture a low-resolution thermal image of a build material bed. The 3D printer may include an interpolation engine to generate an interpolated thermal image based on the low-resolution thermal image. The 3D printer may also include a correction engine to enhance fine details of the interpolated thermal image, without distorting thermal values from portions of the interpolated thermal image without fine details, to produce an enhanced thermal image. Fig. 3, Abstract.

Mojaver et al. (US 20220132052 A1) - A system for monitoring a human operator of critical equipment comprises an imaging module, a biometric measurement module, a risk detection module, and a risk response module. The imaging module includes a multi-spectral light source configured to emit light in a first spectral wavelength range for illuminating at least a portion of the human operator; a camera configured to detect light received from the human operator in a second spectral wavelength range; and an imaging data generator configured to generate image data based on the emitted and detected light. The biometric measurement module is configured to receive the image data and, based on the image data, perform at least one biometric measurement on the human operator. The risk detection module is configured to establish a safety risk associated with the human operator, and the risk response module is configured to generate a risk response based on the safety risk. Fig. 3, Abstract.

Paul et al. (US 20220094896 A1) - An example computing system comprises a processor and a storage device holding instructions executable by the processor to receive a thermal image acquired via a thermal imaging system, each pixel of the thermal image comprising an intensity level, and generate a histogram via binning pixels by intensity level. The instructions are further executable to, based at least on the histogram, determine a subset of pixels to colorize, colorize the subset of pixels to produce a selectively colorized image, and output the selectively colorized image. Fig. 1, Abstract.

Weng et al. (US 20200342275 A1) - The present disclosure provides a target-image acquisition method. The target-image acquisition method includes acquiring a visible-light image and an infrared (IR) image of a target, captured at a same time point by a photographing device; weighting and fusing the visible-light image and the IR image to obtain a fused image; and obtaining an image of the target according to the fused image. The present disclosure also provides a photographing device and an unmanned aerial vehicle (UAV) using the method above. Fig. 1, 2, Abstract. (A brief sketch of this kind of weighted fusion follows the list.)

Schulte et al. (US 20180300884 A1) - An infrared (IR) imaging module may capture a background image in response to receiving IR radiation from a background of a scene and determine background calibration terms using the background image. The determined background calibration terms may be scale factors and/or offsets that equalize the pixel values of the background image to a baseline value. The IR imaging device may use the background calibration terms to capture images that have the baseline value for pixels corresponding to IR radiation received from the background and higher (or lower) values for pixels corresponding to IR radiation received from a foreground. Such images may be used to count people and generate a heat map. The background calibration terms may be updated periodically, with the update period being increased at least for some pixels or a pixel area when a person is detected. Fig. 1, Abstract.

Nguyen et al. (US 20130188058 A1) - Systems, methods, and devices for thermal detection. A thermal detection device includes a visual camera, a thermal detector, a controller, a user interface, a display, and a removable and rechargeable battery pack. The thermal detection device also includes a plurality of additional software and hardware modules configured to perform or execute various functions and operations of the thermal detection device. An output from the visual camera and an output from the thermal detector are combined by the controller or the plurality of additional modules to generate a combined image for display on the display. Fig. 6, 9, Abstract.

Wolff et al. (US 7620265 B1) - A methodology for forming a composite color image fusion from a set of N gray level images takes advantage of the natural decomposition of color spaces into 2-D chromaticity planes and 1-D intensity. This is applied to the color fusion of thermal infrared and reflective domain (e.g., visible) images, whereby the chromaticity representation of this fusion is invariant to changes in reflective illumination. Fig. 1, Abstract.

Mankowski et al. (US 20130286236 A1) - Enhanced passthrough images are generated and displayed. A current visibility condition of an environment is determined. Based on the current visibility condition, a first camera or a second camera, which detect light spanning different ranges of illuminance, is selected to generate a passthrough image of the environment. The selected camera is then caused to generate the passthrough image. Additionally, a third camera, which is structured to detect long wave infrared radiation, is caused to generate a thermal image of the environment. Parallax correction is performed by aligning coordinates of the thermal image with corresponding coordinates identified within the passthrough image. Fig. 7, 12, Abstract.
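As flagged in the Weng entry above, the weighted visible/IR fusion that several of these references describe reduces, at its simplest, to a per-pixel weighted sum of co-registered frames. The following is a minimal sketch under that assumption; the fixed weight is hypothetical, and practical systems typically vary the weight per pixel or per region.

```python
# Minimal sketch of weighted visible/IR image fusion (cf. Weng): a
# per-pixel convex combination of two co-registered, same-size frames.
# The constant weight is illustrative; real systems vary it spatially.
import numpy as np

def fuse(visible: np.ndarray, infrared: np.ndarray, w: float = 0.6) -> np.ndarray:
    """Blend co-registered visible and IR frames; w weights the visible image."""
    if visible.shape != infrared.shape:
        raise ValueError("frames must be co-registered and the same shape")
    fused = w * visible.astype(np.float32) + (1.0 - w) * infrared.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```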
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEZAWIT N. SHIMELES, whose telephone number is (571) 272-7663. The examiner can normally be reached M-F, 7:30am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEZAWIT NOLAWI SHIMELES/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Nov 15, 2023 - Application Filed
Dec 23, 2025 - Non-Final Rejection under §102 and §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0% lift)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 1 resolved case by this examiner. Grant probability is derived from the career allowance rate.
