Prosecution Insights
Last updated: April 17, 2026
Application No. 17/352,853

System And Device For The Contactless Measure Of The Body Temperature Of A Person

Final Rejection (§103, §112)
Filed: Jun 21, 2021
Examiner: PADDA, ARI SINGH KANE
Art Unit: 3791
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: unknown
OA Round: 2 (Final)
Grant Probability: 17% (At Risk)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 32%

Examiner Intelligence

Grants only 17% of cases
Career Allow Rate: 17% (7 granted / 42 resolved; -53.3% vs. TC average)
Interview Lift: +15.6% (allowance rate with vs. without an interview, among resolved cases)
Typical Timeline: 4y 1m average prosecution; 50 applications currently pending
Career History: 92 total applications across all art units

Statute-Specific Performance

§101: 13.3% (-26.7% vs. TC avg)
§103: 44.4% (+4.4% vs. TC avg)
§102: 10.7% (-29.3% vs. TC avg)
§112: 31.4% (-8.6% vs. TC avg)
Tech Center averages are estimates • Based on career data from 42 resolved cases

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims Pending

Applicant's arguments, filed 11/10/2025, have been fully considered. The following rejections and/or objections are either reiterated or newly applied; they constitute the complete set presently being applied to the instant application. Applicant has amended the claims, filed 11/10/2025, and the rejections newly made in the instant Office action have therefore been necessitated by amendment. The previous withdrawal of claims 13-17 and 19-20 and the cancellation of claim 2 are acknowledged. Claims 1, 3-12, and 18 are currently under examination.

Specification

The attempt to incorporate subject matter into this application by reference to “Rapid Object Detection using a Boosted Cascade of Simple Features” (Par. 155 of applicant’s spec., filed 11/10/2025), “ImageNet Classification with Deep Convolutional Neural Networks” (Par. 156 of applicant’s spec., filed 11/10/2025), and “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation” (Par. 156 of applicant’s spec., filed 11/10/2025) is ineffective because the root words “incorporate” and/or “reference” have been omitted; see 37 CFR 1.57(c)(1). The incorporation by reference will not be effective until correction is made to comply with 37 CFR 1.57(c), (d), or (e). If the incorporated material is relied upon to meet any outstanding objection, rejection, or other requirement imposed by the Office, the correction must be made within any time period set by the Office for responding to that objection, rejection, or other requirement for the incorporation to be effective. Compliance will not be held in abeyance with respect to responding to the objection, rejection, or other requirement for the incorporation to be effective.
In no case may the correction be made later than the close of prosecution as defined in 37 CFR 1.114(b), or abandonment of the application, whichever occurs earlier. Any correction inserting material by amendment that was previously incorporated by reference must be accompanied by a statement that the material being inserted is the material incorporated by reference and that the amendment contains no new matter. 37 CFR 1.57(g).

Drawings (Objection Withdrawn)

Applicant’s amendments, filed 11/10/2025, have been fully considered, and the previous objection is withdrawn.

Claim Objections

Claims 1, 3-12, and 18 are objected to because of the following informalities: in claim 1, “each image pixel” (4th to last line) should read -for each infrared image pixel-. Claims 3-12 and 18 depend from claim 1 and as such are also objected to. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: Claim 1: The claim limitation “a processing unit configured to: identify the infrared image pixels corresponding to at least a part of an individual's face and to determine a maximum temperature value associated with the identified infrared image pixels and to deduce therefrom a body temperature of the individual” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “unit” coupled with functional language “identify the infrared image pixels corresponding to at least a part of an individual's face and to determine a maximum temperature value associated with the identified infrared image pixels and to deduce therefrom a body temperature of the individual” without reciting sufficient structure to achieve the function. 
Furthermore, the generic placeholder is not preceded by a structural modifier that has a known structural meaning before the phrase “unit”. Claim 3: The claim limitation “the processing unit being configured to match the visible image and the electronic image to identify the infrared image pixels corresponding to the visible image pixels identified” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “unit” coupled with functional language “match the visible image and the electronic image to identify the infrared image pixels corresponding to the visible image pixels identified” without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier that has a known structural meaning before the phrase “unit”. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: “The processing unit 15 can notably comprise a computer of the processor, microprocessor, microcontroller, etc. type, configured to execute instructions and control the processor”, “determine a maximum temperature value Tmax associated with the infrared image pixels corresponding to the face, eyes and/or inner corner of the eyes”, “For this, the processing unit 15 detects the face (respectively, the eyes and/or at least one inner corner of the eyes) according to the method of Viola and Jones (or integral image), which is a supervised learning method using a Haar cascade classifier. 
As described in the article by Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, a cascade of classifiers may be constructed by selecting a small number of important features among Harr-like features using AdaBoost…” or equivalents thereof, as described in Par. 125, Par. 130, and Par. 155-156 of the disclosure filed on 11/10/2025, “The processing unit 15 can notably comprise a computer of the processor, microprocessor, microcontroller, etc. type, configured to execute instructions and control the processor” and “The visible and infrared image pixels can be matched by transposition of the image pixel coordinates of the visible image to the electronic image by taking into account the positions and orientations of the visible spectrum camera 6 and the infrared camera 12” or equivalents thereof, as described in Par. 125 and Par. 158 of the disclosure filed on 11/10/2025. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

Applicant’s arguments, filed 11/10/2025, regarding the previous 112(a) rejection have been fully considered, and the previous rejection is withdrawn. The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-12, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites the limitation “the corrected infrared image pixels” in the 2nd to last line. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, this will be interpreted as -corrected infrared image pixels-. (Examiner's Note: There is no previous indication within claim 1 regarding the usage of the phrase “corrected” or “correcting” directly with “infrared image pixels”.) Claims 3-12 and 18 depend from claim 1 and as such are also rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The claims are generally directed towards a temperature measurement system. The system comprises an archway with an infrared camera attached to the archway that generates an electronic image made up of infrared image pixels. The system further comprises a calibration module that is made up of first and second reference targets positioned to be in view of the infrared camera and two thermal sensors that measure the temperature of each reference target. The system further comprises a processing unit that identifies the pixels corresponding to the user's face in order to determine the user's body temperature, determines and applies gain and offset coefficients to generate a corrected electronic image, and determines a maximum temperature value from the corrected image. Claim(s) 1, 4, and 8-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fraden (US Pub. No. 20070153871) hereinafter Fraden, and further in view of Fraden (US Pat. No. 6447160) hereinafter Fraden 2, Beevor (US Pub. No. 20040080315) hereinafter Beevor, McQuilkin (US Pub. No. 20080154138) hereinafter McQuilkin, and Kuybeda (US Pub. No. 20210343005) hereinafter Kuybeda. Regarding claim 1, Fraden discloses A temperature measurement system comprising (Abstract): an archway comprising two side panels connected by a cross member and together delimiting a passageway for an individual (Fig. 2, target gate – 4 (observable horizontal beam that is between and contiguous with two vertical structures on either side of subject – 3)); an infrared camera (Par. 33, (thermal imaging camera)) comprising an infrared detection chip comprising an infrared pixel matrix and being configured to convert an infrared radiation received by each infrared pixel into a corresponding temperature value and to generate an electronic image comprising a plurality of infrared image pixels (Fig. 4, Par.
34, “The subject's face and torso is represented by a pattern having different levels of brightness, related to various degrees of the IR signal emanated from the surface. Each facial area 20(a, b, c and d) is formed by numerous pixels of the pattern and thus represents a specific strength of the signal from these pixels…”)(Par. 33, (thermal imaging camera)), each infrared image pixel being representative of the temperature value received by a corresponding infrared pixel (Fig. 4, Par. 34, “The strength of the IR signal in each pixel depends on at least two major factors: 1) the surface temperature of that particular area of the subject, and 2) the surface emissivity of that particular area of the subject.”)(Par. 33, (thermal imaging camera)), a calibration module (Par. 37 (reference targets involved with calibration)) comprising: a first reference target and a second reference target (Par. 37, reference targets – 18a,18b) positioned in the field of view of the infrared camera so that the electronic image comprises the infrared image pixels representative of the temperature value of the first reference target and the second reference target (Par. 35, “Besides a thermal image of the subject, the filed of view 16 contains also the pixels 110 corresponding to the background panel 10, pixels 118a and 118b, corresponding respectively to the reference targets 18a and 18b, respectively.”) (Par. 37, “one or better two IR reference targets 18a and 18b should be present within the filed of view 16 of each snapshot”); and a processing unit configured to (Computational means – 25) (Par. 50 (hardware and software)) (Par. 33 (Processing equipment)): identify the infrared image pixels corresponding to at least a part of an individual's face and to determine a maximum temperature value associated with the identified infrared image pixels and to deduce therefrom a body temperature of the individual (Par. 
37, “To relate the skin temperature from a thermal image to a temperature scale, it is essential to accurately calibrate camera 5. That is, the signal strength from each pixel must have a metrologically accurate relationship to an absolute temperature…” “… Preferably, the target blackbodies within each set should have temperatures”.) (Par. 40, “To compute the skin temperature and subsequently estimate the core temperature, the pixels with the highest IR…” “… A single "hot" pixel may be caused by noise, e.g., so for the improved reliability, the highest thermal level should be detected from the several adjacent pixels, at least two and preferably more, depending on the camera resolution and size of an image”). Fraden fails to explicitly disclose the infrared camera being fixed onto the archway so that a field of view of the infrared camera covers all or part of the passageway. However, Fraden does teach a field of view of the infrared camera covers all or part of the passageway (Fig. 1, Par. 31, (field of view)) (Fig. 4, Par. 37 (field of view)). Beevor teaches a camera being fixed onto the archway (Fig. 1, Par. 24, camera – 3, support – 4 (observable that camera is fixed to the archway with support – 4)) so that a field of view of the camera covers all or part of the passageway (Fig. 1-2, Par. 24-25 (camera field of view covers the subject in the passageway)) (Par. 64)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden with that of Beevor to include the infrared camera of Fraden being fixed onto the archway so that a field of view of the infrared camera of Fraden covers all or part of the passageway through the combination of references and it would have yielded the same result as that of Fraden of ensuring that an image of a subject traveling through the passageway is captured (Beevor (Par. 24)). 
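The face-identification step attributed to the processing unit rests, per the specification passage quoted in the Claim Interpretation section, on the Viola-Jones method, whose central device is the integral image: once the table is built, any rectangular pixel sum costs four lookups, so Haar-like features evaluate in constant time. A minimal sketch of that device (illustrative only; the function names are not drawn from the record):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii: np.ndarray, y: int, x: int, h: int, w: int) -> int:
    """Sum of the h-by-w rectangle whose top-left corner is (y, x),
    recovered from four table lookups."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def haar_two_rect(ii: np.ndarray, y: int, x: int, h: int, w: int) -> int:
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

In the Viola-Jones cascade, AdaBoost selects a small number of such features, and candidate face windows are screened by successively stronger classifier stages.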
Modified Fraden fails to explicitly disclose a first thermal sensor configured to measure an instantaneous temperature of the first reference target and a second thermal sensor configured to measure an instantaneous temperature of the second reference target. However, Fraden does teach the first reference target and the second reference target (Par. 37, “one or better two IR reference targets 18a and 18b should be present within the filed of view 16 of each snapshot. A reference target is a source of a thermal (IR) radiation signal having properties of a blackbody with a precisely known temperature. Emissivity of such a reference target shall be near 0.990 and preferably as close to 1.000 as possible. FIG. 4 shows two images 118a and 118b of the reference targets within the field of view 16”) (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”). Fraden 2 teaches a thermal sensor configured to measure an instantaneous temperature (Col. 3, lines 49-51, “The wall temperature is monitored by temperature sensor 14 which is thermally coupled to wall 6.” (temperature sensor that measures the temperature of the target)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden and Beevor with that of Fraden 2 to include a first thermal sensor of Fraden 2 configured to measure an instantaneous temperature of the first reference target of Fraden and a second thermal sensor of Fraden 2 configured to measure an instantaneous temperature of the second reference target of Fraden through the combination of references and applying a separate temperature sensor of Fraden 2 to each reference target of Fraden as it would have yielded the predictable result of ensuring that each of the reference targets is at the proper temperature (Fraden (Par. 37)). 
Modified Fraden fails to explicitly disclose determine a gain coefficient and an offset coefficient from the respective instantaneous temperatures of the first reference target and the second reference target and temperature values of the first reference target and the second reference target in the electronic image. However, Fraden does teach the first reference target and the second reference target (Par. 37, “one or better two IR reference targets 18a and 18b should be present within the filed of view 16 of each snapshot. A reference target is a source of a thermal (IR) radiation signal having properties of a blackbody with a precisely known temperature. Emissivity of such a reference target shall be near 0.990 and preferably as close to 1.000 as possible. FIG. 4 shows two images 118a and 118b of the reference targets within the field of view 16”) (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”), calibrating with reference targets (Par. 37 (blackbody targets with known temperatures are used in a calibration))(Par. 32 (reference targets)), and temperature scale constants (Par. 45, “A, B, C, D and E are the experimentally determined constants whose values depend on the temperature scale.”). McQuilkin teaches determine a gain coefficient (Par. 86, “m.sub.1 is the slope of the calibration equation derived from the actual temperatures of the reference surfaces and the temperatures of those surfaces as measured by the thermal imaging camera”) and an offset coefficient (Par. 
86, “C.sub.1 is the offset or y-intercept for the calibration equation derived from the actual temperatures of the reference surfaces and the temperatures of those surfaces as measured by the thermal imaging camera”) from the respective instantaneous temperatures of the first reference target and the second reference target and temperature values of the first reference target and the second reference target in the electronic image (Par. 86, “The use of temperature references, preferably the multiple, in-frame temperature references discussed above, allows the skin or surface temperature measurements to be accurately calibrated for each image”) (Par. 86, “m.sub.1 is the slope of the calibration equation derived from the actual temperatures of the reference surfaces and the temperatures of those surfaces as measured by the thermal imaging camera; T.sub.S is the surface temperature of the target area from the thermal camera prior to calibration and optionally after surface emissivity correction; and C.sub.1 is the offset or y-intercept for the calibration equation derived from the actual temperatures of the reference surfaces and the temperatures of those surfaces as measured by the thermal imaging camera.”)(Par. 57, "FIGS. 1 through 5 schematically shows a frame 18 of the field of view of a thermal imaging device (not shown) in which frame 18 includes a subject 20 whose thermal image is to be acquired and an in-frame temperature reference system 10. Temperature reference system 10 includes multiple black body temperature references 12 and 14” (two temperature references)). 
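McQuilkin's slope m.sub.1 and offset C.sub.1 amount to an ordinary two-point linear calibration: the two reference targets supply two (measured, actual) temperature pairs, which fully determine the line. A short sketch under that reading (illustrative Python; the names and the example temperatures are hypothetical, not from any cited reference):

```python
def gain_offset(t1_actual, t1_measured, t2_actual, t2_measured):
    """Two-point linear calibration: actual = gain * measured + offset.

    The arguments are the known and camera-measured temperatures of the
    first and second reference targets; the targets must be held at
    different temperatures so the slope is defined.
    """
    gain = (t2_actual - t1_actual) / (t2_measured - t1_measured)
    offset = t1_actual - gain * t1_measured
    return gain, offset

# Hypothetical example: targets held at 30.0 C and 40.0 C are read by the
# infrared camera as 29.2 C and 38.6 C; the fitted line maps each reading
# back to its true temperature.
g, o = gain_offset(30.0, 29.2, 40.0, 38.6)
```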
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, and Beevor with that of McQuilkin to include determine a gain coefficient and an offset coefficient from the respective instantaneous temperatures of the first reference target and the second reference target of Fraden and temperature values of the first reference target and the second reference target of Fraden in the electronic image through the combination of references as it would have yielded the same or similar result of calibrating the data (Fraden (Par. 37)) (McQuilkin (Par. 86)). Modified Fraden fails to explicitly disclose for any infrared image pixel of the electronic image, apply the gain coefficient and the offset coefficient to the temperature value associated with the infrared image pixel so as to obtain, for each image pixel, a corrected temperature value, and deduce therefrom a corrected electronic image that comprises the corrected infrared image pixels; and determine the maximum temperature value from the corrected electronic image. However, Fraden does disclose for an infrared image pixel of the electronic image, apply the coefficient to the temperature value associated with the infrared image pixel so as to obtain, for each image pixel, a corrected temperature value (Fig. 4, 9)(Par. 45, “To compute a core temperature from the temperature of a selected skin area, the following equation may be employed: T.sub.c=AT.sub.s.sup.2+(B+CT.sub.r)T.sub.s+DT.sub.r+E (1) where A, B, C, D and E are the experimentally determined constants whose values depend on the temperature scale. For example, the factor C typically is between 0.1 and 0.3 if T is in Celsius.” (application of the constants for the skin area)) (Par. 52, “The reference temperature T.sub.r may be warmer or cooler than the ambient air temperature T.sub.a. If in the thermal image contains pixels "colder" than…” “… Use the computed T.sub.r in Eq. 
(1) to compute the core temperature before using the fever threshold T.sub.F.”), and determine the maximum temperature value from an electronic image (Par. 37, “To relate the skin temperature from a thermal image to a temperature scale, it is essential to accurately calibrate camera 5. That is, the signal strength from each pixel must have a metrologically accurate relationship to an absolute temperature…” “… Preferably, the target blackbodies within each set should have temperatures”.) (Par. 40, “To compute the skin temperature and subsequently estimate the core temperature, the pixels with the highest IR…” “… A single "hot" pixel may be caused by noise, e.g., so for the improved reliability, the highest thermal level should be detected from the several adjacent pixels, at least two and preferably more, depending on the camera resolution and size of an image”). Kuybeda teaches for any infrared image pixel of the electronic image, apply the gain coefficient and the offset coefficient to the temperature value associated with the infrared image pixel so as to obtain, for each image pixel, a corrected temperature value (Par. 58, “At S310, a sensor calibration table and parameters are obtained. In an embodiment, the calibration table includes calibrated values for each pixel of the infrared sensor. For example, calibrated values may represent the amount of anticipated gain (G), offset (O), and drift (D) associated with each pixel under certain conditions, e.g., adjustment values that are associated with specific ambient temperatures. In an embodiment, specific ambient temperatures include room temperature. The calibration parameters may include the attenuation factor A(r), linear calibration values a and b, and a predefined nominal temperature T0…”) (Par. 38, “The calibration tables include calibration values for each pixel computed in the lab. 
The calibration values may include gain and offset values calculated from two temperature points (T.sub.1, T.sub.2) for the purpose of overcoming the irregularities in the infrared sensor (120, FIG. 1) and unifying the pixels' response to infrared radiation for a normal ambient room temperature.”)(Par. 59, “At S320, a thermal image is received from the infrared sensor, e.g., the sensor 120 of FIG. 1”) (Par. 61, “S350, an FPA temperature stabilization process is performed to ensure that the sensor and, hence, the camera output, becomes invariant to changes in the FPA's temperature. As noted above, the output of S350 is an ambient-stabilized thermal image, I.sub.s, responsive to the input image.”) (Par. 62, “an ambient calibration process is performed to estimate a scene temperature (TS) for the scene shown in the received input image…”), and deduce therefrom a corrected electronic image that comprises the corrected infrared image pixels (Par. 63, “a list of objects identified in the denoised thermal image, together with the distance of each object from the camera, is received. At S380, the temperature (T.sub.obj) of some or all of the objects in the received list is measured. An object temperature (T.sub.obj) is measured independently of the ambient temperature of the camera. In an embodiment, the object temperature is measured, in part, using the scene temperature (T.sub.S) and a calibrated attenuation factor (A(r)). In an embodiment, an object temperature may be not measured for an object when the object's distance from the camera is above a predefined threshold”) (Par. 64, “The measured object temperatures are displayed next to each respective object overlaid in the denoised image”) (Par. 65, “FIG. 4 shows an example output denoised image 400, applicable to identify persons (objects) 410 and 420. Next to each person 410 or 420, the respective temperature is displayed. 
The displayed temperature may be color-coded, for example, the person 420 may be boxed with a red box, while the person 410 may be boxed with a green box”); and determine the maximum temperature value from the corrected electronic image (Par. 65, “FIG. 4 shows an example output denoised image 400, applicable to identify persons (objects) 410 and 420. Next to each person 410 or 420, the respective temperature is displayed. The displayed temperature may be color-coded, for example, the person 420 may be boxed with a red box, while the person 410 may be boxed with a green box”) (Fig. 4 (temperature value for person 420 at 38.2 C)). Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda are considered to be analogous art to the claimed invention as they are involved with image capture devices (Examiner's Note: Fraden incorporates the teachings of Fraden 2 as indicated in Par. 37 of Fraden (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, and McQuilkin with that of Kuybeda to include for any infrared image pixel of the electronic image, apply the gain coefficient and the offset coefficient to the temperature value associated with the infrared image pixel so as to obtain, for each image pixel, a corrected temperature value, and deduce therefrom a corrected electronic image that comprises the corrected infrared image pixels; and determine the maximum temperature value from the corrected electronic image through the combination of references as it would have yielded the predictable result of improving data quality and providing infection information to the user (Kuybeda (Par. 4, 65)). 
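Read together, the combined limitation is: apply the gain and offset to every pixel's temperature value, deduce a corrected electronic image, and determine the maximum from that corrected image; Fraden's Par. 40 adds that the maximum should be drawn from several adjacent pixels so a lone noisy pixel cannot dominate. A sketch of that pipeline (illustrative only; the 3x3 smoothing is one plausible reading of "several adjacent pixels", not a construction taken from any cited reference):

```python
import numpy as np

def corrected_max_temperature(temps: np.ndarray, gain: float, offset: float,
                              face_mask: np.ndarray) -> float:
    """Apply the per-pixel gain/offset correction, then return the maximum
    temperature over the face region, averaged over 3x3 neighborhoods so a
    single noisy pixel cannot dominate the reading."""
    corrected = gain * temps + offset          # corrected electronic image
    # 3x3 box average of the corrected image (edges handled by padding).
    padded = np.pad(corrected, 1, mode="edge")
    smoothed = sum(padded[dy:dy + temps.shape[0], dx:dx + temps.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0
    return float(smoothed[face_mask].max())
```

With a uniform scene the result is just the corrected temperature; with one hot pixel, the reported maximum is pulled toward its neighborhood average rather than the raw outlier.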
Regarding claim 4, modified Fraden further discloses wherein the processing unit is also configured to apply a predetermined compensation coefficient to the temperature value of each infrared image pixel (Fraden (Par. 40 (emissivity)) (Par. 34 (signal strength of pixels)) (Par. 37 (reference target emissivity)) (Par. 39 (subject emissivity of 1))). Regarding claim 8, modified Fraden further discloses wherein the first reference target and the second reference target are fixed onto a support chosen from one of the two side panels and the cross member (Fraden (Fig. 2, (observable that reference targets 18A and 18B are fixed onto the side panels of gate 4))). Regarding claim 9, modified Fraden further discloses also comprising a presence sensor configured to determine a presence of an individual in the passageway (Fraden (Par. 31, “The moment of passage is detected by a conventional presence detector 17, for example, by breaking a beam of light. As soon the detector 17 generates a signal, thermal imaging camera 5 (not shown in FIG. 1) takes a thermal picture (a snapshot). The thermal image is limited by the camera's field of view indicated by a broken line 16. If the subject is short in height--a child, e.g., his face may be outside of the filed of view 16. To force the camera 5 to reposition the filed of view (either manually or automatically), a secondary presence detector 19 may be employed.”)), the processing unit being configured to generate the electronic image only when the presence sensor detects an object inside the passageway (Fraden (Par. 31, “The moment of passage is detected by a conventional presence detector 17, for example, by breaking a beam of light. As soon the detector 17 generates a signal, thermal imaging camera 5 (not shown in FIG. 1) takes a thermal picture (a snapshot). The thermal image is limited by the camera's field of view indicated by a broken line 16. 
If the subject is short in height--a child, e.g., his face may be outside of the filed of view 16.”) (Par. 33, “The presence detectors 17 and 19 are connected to the computational means 25 that actuates the camera 5 for taking a snapshot thermal image of the subject 3 when the subject is present in the clearance of the gate 4.”)). Regarding claim 10, modified Fraden further discloses the infrared camera (Fraden (Par. 33, (thermal imaging camera))). Modified Fraden fails to explicitly disclose wherein the infrared camera is mounted on an arm fixed onto the cross member and extending from an exit of the archway. However, Beevor further teaches wherein the camera is mounted on an arm fixed onto the cross member and extending from an exit of the archway (Beevor (Fig. 1, Par. 24, camera – 3, support – 4 (observable that support 4 is fixed to the unlabeled cross member of portal 2) (observable that support 4 extends from the exit of portal 2) (observable that camera 3 is attached to support 4)) (Par. 28 (digital camera))(Par. 64)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Beevor to include wherein the infrared camera of Fraden is mounted on an arm fixed onto the cross member and extending from an exit of the archway through the combination of references and it would have yielded the same result as that of Fraden of ensuring that an image of a subject traveling through the passageway is captured (Beevor (Par. 24)). Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fraden in view of Fraden 2, Beevor, McQuilkin, and Kuybeda as applied to claim 1 above, and further in view of Johnson (US Pub. No. 20060249679) hereinafter Johnson. Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda teach the system of claim 1 above. 
Regarding claim 3, modified Fraden fails to explicitly disclose further comprising a visible spectrum camera comprising a visible spectrum detection chip comprising a visible pixel matrix and being configured to generate a visible image comprising a plurality of visible image pixels. However, Fraden does teach in an alternate embodiment further comprising a visible spectrum camera comprising a visible spectrum detection chip comprising a visible pixel matrix and being configured to generate a visible image comprising a plurality of visible image pixels (Fraden (Par. 31, (visible camera))). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Fraden to include further comprising a visible spectrum camera comprising a visible spectrum detection chip comprising a visible pixel matrix and being configured to generate a visible image comprising a plurality of visible image pixels through the combination of embodiments as it is a known alternative (Fraden (Par. 31)) and would have yielded the predictable result of providing additional image data. Modified Fraden fails to explicitly disclose the visible spectrum camera being fixed onto the archway so that a field of view of the visible spectrum camera covers at least a portion of the passageway. However, Fraden does teach in an alternate embodiment the visible spectrum camera (Fraden (Par. 31 (visible range camera))). Beevor further teaches the visible spectrum camera being fixed onto the archway so that a field of view of the visible spectrum camera covers at least a portion of the passageway (Beevor (Fig. 1, Par. 24, camera – 3, support – 4 (observable that camera is fixed to the archway with support – 4)) (Fig. 1-2, Par. 24-25 (camera field of view covers the subject in the passageway)) (Par. 28 (digital camera))(Par. 64)). 
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Beevor to include the visible spectrum camera of Fraden being fixed onto the archway so that a field of view of the visible spectrum camera of Fraden covers at least a portion of the passageway for the reasoning as indicated above in claim 1. Modified Fraden fails to explicitly disclose the processing unit being configured to match the visible image and the electronic image to identify the infrared image pixels corresponding to the visible image pixels identified. However, Fraden does teach an alternate embodiment with a pattern recognition system (Fraden (Par. 31)). However, Johnson teaches the processing unit being configured to match the visible image and the electronic image to identify the infrared image pixels corresponding to the visible image pixels identified (Par. 45 (match pixels of infrared image to the visible light image)) (Fig. 14-16, Par. 61-65 (Alpha blending)). Fraden, Fraden 2, Beevor, McQuilkin, Kuybeda, and Johnson are considered to be analogous art to the claimed invention as they are involved with image capture devices (Examiner's Note: Fraden incorporates the teachings of Fraden 2 as indicated in Par. 37 of Fraden (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”)). 
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Johnson to include the processing unit being configured to match the visible image and the electronic image to identify the infrared image pixels corresponding to the visible image pixels identified through the combination of references as it would have yielded the predictable result of showing the exact location of infrared targets on the visible image (Johnson (Par. 76)). Claim(s) 5-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fraden in view of Fraden 2, Beevor, McQuilkin, and Kuybeda as applied to claim 1 above, and further in view of Yasbek (US Pub. No. 20210199508) hereinafter Yasbek. Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda teach the system of claim 1. Regarding claim 5, modified Fraden fails to explicitly disclose the limitations of the claim. However, Fraden does teach the first reference target and the second reference target (Fraden (Par. 37, “one or better two IR reference targets 18a and 18b should be present within the filed of view 16 of each snapshot. A reference target is a source of a thermal (IR) radiation signal having properties of a blackbody with a precisely known temperature. Emissivity of such a reference target shall be near 0.990 and preferably as close to 1.000 as possible. FIG. 4 shows two images 118a and 118b of the reference targets within the field of view 16”) (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”)). Yasbek teaches wherein the thermal sensors are connected to the reference targets, respectively, by means of conductive tracks (Par. 10, “The printed circuit board 102 mechanically supports and electrically connects a non-resistive heat source 108 and a contact-based temperature detector 110.”) (Par. 
13, “one example, the thermally conductive interface 106 comprises a thermally conductive, soft, conformable, and compliant material (e.g., foam, such as ceramic-filled silicon foam) that allows heat to conduct from the non-resistive heat source 108 to the emitter face 104 and from the emitter face 104 to the non-contact temperature detector 110.”) (Par. 16, “Referring both to FIGS. 1 and 2, in operation, the non-resistive heat source 108 generates heat that conducts, via the thermally conductive interface 106, to the emitter face 104. The non-contact temperature detector 110, in turn, detects (via the thermally conductive interface 106) the resultant temperature of the emitter face 104.”), the processing unit being configured to determine an instantaneous temperature of the conductive tracks and deduce therefrom the instantaneous temperature of the reference targets respectively (Par. 11, “the contact-based temperature sensor 110 comprises a resistance temperature detector (RTD), a thermistor, a thermocouple, or the like.”) (Par. 16, “referring both to FIGS. 1 and 2, in operation, the non-resistive heat source 108 generates heat that conducts, via the thermally conductive interface 106, to the emitter face 104. The non-contact temperature detector 110, in turn, detects (via the thermally conductive interface 106) the resultant temperature of the emitter face 104.”). Fraden, Fraden 2, Beevor, McQuilkin, Kuybeda, and Yasbek are considered to be analogous art to the claimed invention as they are involved with signal measurements (Examiner's Note: Fraden incorporates the teachings of Fraden 2 as indicated in Par. 37 of Fraden (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”)). 
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Beevor, and Fraden 2 with that of Yasbek to include wherein the first and the second thermal sensors of Fraden 2 are connected to the first and second reference targets of Fraden, respectively, by means of a first and a second conductive track, the processing unit being configured to determine an instantaneous temperature of the first and the second conductive tracks and deduce therefrom the instantaneous temperature of the first and second reference targets of Fraden respectively through the combination of references and applying a conductive track of Yasbek to each reference target and thermal sensor pairing of Fraden and Fraden 2 as it would have yielded the predictable result of explicitly providing the structures needed for the conduction of heat (Yasbek (Par. 13)). Regarding claim 6, modified Fraden fails to explicitly disclose the limitations of the claim. However, Fraden does teach the first reference target and the second reference target (As indicated in claim 5 above). Yasbek further teaches wherein the thermal sensors each comprise a semiconductor chip (Yasbek (Par. 11, “the contact-based temperature sensor 110 comprises a resistance temperature detector (RTD), a thermistor, a thermocouple, or the like.”) (Par. 16, “referring both to FIGS. 1 and 2, in operation, the non-resistive heat source 108 generates heat that conducts, via the thermally conductive interface 106, to the emitter face 104. The non-contact temperature detector 110, in turn, detects (via the thermally conductive interface 106) the resultant temperature of the emitter face 104.”)) connected to the reference targets, respectively, by means of the conductive tracks (Yasbek (Par. 10, “The printed circuit board 102 mechanically supports and electrically connects a non-resistive heat source 108 and a contact-based temperature detector 110.”) (Par. 
13, “one example, the thermally conductive interface 106 comprises a thermally conductive, soft, conformable, and compliant material (e.g., foam, such as ceramic-filled silicon foam) that allows heat to conduct from the non-resistive heat source 108 to the emitter face 104 and from the emitter face 104 to the non-contact temperature detector 110.”) (Par. 16, “Referring both to FIGS. 1 and 2, in operation, the non-resistive heat source 108 generates heat that conducts, via the thermally conductive interface 106, to the emitter face 104. The non-contact temperature detector 110, in turn, detects (via the thermally conductive interface 106) the resultant temperature of the emitter face 104.”)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, Kuybeda, and Yasbek with that of Yasbek to include wherein the first and second thermal sensors of Fraden 2 each comprise a semiconductor chip connected to the first and second reference targets of Fraden, respectively, by means of the first and second conductive tracks through the combination of references for the reasoning as indicated above in claim 5. Regarding claim 7, modified Fraden fails to explicitly disclose the limitations of the claim. However, Fraden does teach the first reference target and the second reference target (As indicated in claim 5 above) and maintaining temperature (Fraden (Par. 37)). Yasbek further teaches a heating element configured to maintain the reference target at a reference temperature (Yasbek (Par. 11-12 (heat source))), and wherein the semiconductor chip of the thermal sensor (Yasbek (Par. 11, “the contact-based temperature sensor 110 comprises a resistance temperature detector (RTD), a thermistor, a thermocouple, or the like.”)) is fixed at a center of a heated surface of the reference target (Yasbek (Fig. 1, (observable that sensor 110 is centrally located on emitter face 104))). 
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, Kuybeda, and Yasbek with that of Yasbek to include a first heating element and a second heating element of Yasbek configured to maintain the first reference target and the second reference target at a first reference temperature and at a second reference temperature, respectively, and wherein the semiconductor chip of the first and second thermal sensors of Fraden 2 is fixed at a center of a heated surface of the first reference target and the second reference target of Fraden, respectively through the combination of references and applying a heat source of Yasbek to each reference target of Fraden as the chip being located at a center is merely a design variation and would have yielded the same or similar results of measuring temperature of the surface (Yasbek (Par. 16)) and would have yielded the predictable result of controlling the temperature (Yasbek (Par. 8)) (Fraden (Par. 37)). Claim(s) 11 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fraden in view of Fraden 2, Beevor, McQuilkin, and Kuybeda as applied to claim 1 above, and further in view of Dierenbach (US Pub. No. 20090034958) hereinafter Dierenbach. Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda teach the system of claim 1 above. Regarding claim 11, modified Fraden further discloses the infrared camera (Fraden (Par. 33, (thermal imaging camera))) and the passageway (Fraden (Fig. 2, target gate – 4)). Modified Fraden fails to explicitly disclose further comprising a light adjacent to the infrared camera and configured to draw a gaze of an individual passing through the passageway. Dierenbach teaches further comprising a light adjacent to the camera and configured to draw a gaze of an individual (Par. 80 (flashing light sources surrounding the camera)) (Fig. 8 (single light source able to be actuated)). 
Fraden, Fraden 2, Beevor, McQuilkin, Kuybeda, and Dierenbach are considered to be analogous art to the claimed invention as they are involved with image capture devices (Examiner's Note: Fraden incorporates the teachings of Fraden 2 as indicated in Par. 37 of Fraden (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Dierenbach to include further comprising a light adjacent to the infrared camera of Fraden and configured to draw a gaze of an individual passing through the passageway of Fraden through the combination of references as it would have yielded the predictable result of attracting the subject’s attention (Dierenbach (Abstract)). Regarding claim 18, modified Fraden further discloses the infrared camera (Fraden (Par. 33, (thermal imaging camera))) and the predetermined passageway (Fraden (Fig. 2, target gate – 4)). Modified Fraden fails to explicitly disclose a flashing light adjacent to the infrared camera and configured to draw a gaze of an individual passing through the predetermined passageway. However, Dierenbach teaches a flashing light adjacent to the infrared camera and configured to draw a gaze of an individual passing through the predetermined passageway (Par. 80 (flashing light sources surrounding the camera)). 
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Dierenbach to include a flashing light adjacent to the infrared camera of Fraden and configured to draw a gaze of an individual passing through the predetermined passageway of Fraden through the combination of references as it would have yielded the predictable result of attracting the subject’s attention (Dierenbach (Abstract)). Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fraden in view of Fraden 2, Beevor, McQuilkin, and Kuybeda as applied to claim 1 above, and further in view of Berme (US Pat. No. 9763604) hereinafter Berme. Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda teach the system of claim 1 above. Regarding claim 12, modified Fraden fails to explicitly disclose the limitations of the claim. However, Berme teaches wherein the field of view of the infrared camera has a vertical angle and a horizontal angle greater than or equal to 30° and less than or equal to 120° (Col. 15, lines 22-39 (field of view of infrared camera between 40 and 80 degrees)). Fraden, Fraden 2, Beevor, McQuilkin, Kuybeda, and Berme are considered to be analogous art to the claimed invention as they are involved with image capture devices (Examiner's Note: Fraden incorporates the teachings of Fraden 2 as indicated in Par. 37 of Fraden (Par. 37, “fabricating a blackbody target is taught by the U.S. Pat. No. 6,447,160 issued to J. Fraden and thus is not described here in detail.”)). 
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Fraden, Fraden 2, Beevor, McQuilkin, and Kuybeda with that of Berme to include wherein the field of view of the infrared camera has a vertical angle and a horizontal angle greater than or equal to 30° and less than or equal to 120° through the combination of references as differing camera fields of view are known in the art (Berme (Col. 15, lines 22-39)) and it would have yielded the same or similar results as the camera of Fraden. Response to Arguments Applicant's arguments, filed 11/10/2025, regarding the §103 rejection have been fully considered, but are moot in view of the newly applied rejection as a result of the applicant’s amendments to the claims. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Manneschi (US Pub. No. 20110316995) hereinafter Manneschi. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI SINGH KANE PADDA whose telephone number is (571)272-7228. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Sims can be reached at (571) 272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ARI S PADDA/Examiner, Art Unit 3791 /JASON M SIMS/Supervisory Patent Examiner, Art Unit 3791

Prosecution Timeline

Jun 21, 2021
Application Filed
Jul 28, 2021
Response after Non-Final Action
May 19, 2025
Non-Final Rejection — §103, §112
Oct 13, 2025
Interview Requested
Oct 22, 2025
Examiner Interview Summary
Oct 22, 2025
Applicant Interview (Telephonic)
Nov 10, 2025
Response Filed
Feb 27, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588839
Component Concentration Measuring Device
2y 5m to grant Granted Mar 31, 2026
Patent 12564351
PERSONAL APPARATUS FOR CONDUCTING ELECTROENCEPHALOGRAPHY
2y 5m to grant Granted Mar 03, 2026
Patent 12558189
METHODS AND APPARATUS FOR DIRECT MARKING
2y 5m to grant Granted Feb 24, 2026
Patent 12029548
DEVICE FOR SELECTIVE COLLECTION AND CONDENSATION OF EXHALED BREATH
2y 5m to grant Granted Jul 09, 2024
Patent 11850049
APPARATUS FOR AUTOMATICALLY MEASURING URINE VOLUME AND SYSTEM FOR AUTOMATICALLY MEASURING URINE VOLUME
2y 5m to grant Granted Dec 26, 2023
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
17%
Grant Probability
32%
With Interview (+15.6%)
4y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 42 resolved cases by this examiner. Grant probability derived from career allow rate.
