Prosecution Insights
Last updated: April 19, 2026
Application No. 17/524,277

BODY TEMPERATURE PREDICTION APPARATUS AND BODY TEMPERATURE PREDICTION METHOD, AND METHOD FOR TRAINING BODY TEMPERATURE PREDICTION APPARATUS

Final Rejection (§101, §103, §112)
Filed
Nov 11, 2021
Examiner
PORTILLO, JAIRO H
Art Unit
3791
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Research & Business Foundation Sungkyunkwan University
OA Round
2 (Final)
Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 4y 6m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 54% of resolved cases (181 granted / 335 resolved; -16.0% vs TC avg)
Interview Lift: +31.0% (strong; allow rate for resolved cases with an interview vs. without)
Avg Prosecution (typical timeline): 4y 6m; 42 applications currently pending
Total Applications (career history): 377, across all art units

Statute-Specific Performance

§101: 20.5% (-19.5% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 335 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 7, and 11, and claims dependent thereon, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Regarding Claim 1, the terms “an external environment/activity estimation neural network configured to detect at least one facial region as a region of interest" in lines 5-6 and “the at least one facial region includes inner sides of eyes, a nose, and a cheek” in line 17 render the claim indefinite because it is unclear whether the claim is intended to be read as (1) the at least one facial region chosen for analysis is one of the inner sides of eyes, the nose, and the cheek, or (2) the at least one facial region must include “inner sides of eyes, a nose, and a cheek,” with other facial regions (e.g., a forehead) potentially being detected and analyzed in different steps.
Examiner suggests amending the claim to read “an external environment/activity estimation neural network configured to detect a facial region as a region of interest" in lines 5-6 and “the facial region includes inner sides of eyes, a nose, and a cheek” in line 17 for claim clarity, as this is the facial region relevant to the analysis and no claim refers to the detection and analysis of a second facial region. Mirrored changes should be made for independent Claims 7 and 11 and dependent claims 6, 10, and 14.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 5-7, 9-11, and 13-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding Claim 1, the claim recites “an external environment/activity estimation neural network configured to detect at least one facial region as a region of interest from an input thermal image of a target person to be measured, and estimate an environmental type including an external temperature and participation in physical activity based on a temperature of the at least one region of interest; and a body temperature prediction neural network configured to predict a body temperature of the target person based on the environmental type estimated by the external environment/activity estimation neural network and the temperature of the at least one region of interest; wherein: the at least one facial region includes inner sides of eyes, a nose, and a cheek, the external environment/activity estimation neural network is trained by using, as a first input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, and by using, as label data, an environmental type including an external temperature and participation in physical activity according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and the body temperature prediction neural network is trained by using, as a second input data for training, a temperature of the at least one region of interest for each of the face thermal images of the plurality of training targets and an environmental type for training including an external temperature and participation in physical activity according to the temperature of the at least one region of interest for each of the plurality of training targets, and by using, as a label data, a body temperature obtained when measuring the temperature for each of the training targets; and the first input data for training or the second input data for training includes: a temperature of the inner sides of the eyes, which is determined as a highest temperature among pixel temperatures within a region of the inner sides of the eyes, a temperature of the nose, which is determined as an average temperature of all pixels within a region of the nose, and a temperature of the cheek, which is determined as an average temperature of all pixels within a region of the cheek,” which amounts to an abstract idea (mathematical concepts; see also claim 2 of example 47 from the 2024 AI SME Update, which establishes that a neural network corresponds to a series of mathematical calculations). Furthermore, the limitations also recite a mental process, as nothing in the claims suggests that a skilled artisan would not be able to practically perform these steps (a clinician can predict body temperature based on mental analysis of a thermal image, or using simple pen and paper).

This judicial exception is not integrated into a practical application because:
- The claims fail to outline an improvement to the technical field.
- The claims fail to apply the judicial exception to effect a particular treatment.
- The claims fail to apply the judicial exception with a particular machine.
- The claims fail to effect a transformation or reduction of a particular article to a different state or thing.

Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, integrates the judicial exception into a practical application. For this part of the 101 analysis, the following additional limitations are considered: “an input interface device configured to receive an input thermal image of a target person to be measured;” and “an output interface device configured to output the body temperature of the target person.” The additional elements are insufficient to amount to significantly more than the judicial exception because they merely generally link the use of the judicial exception to a particular technological environment.
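As a reading aid for the per-region summary statistics recited in the claim (a maximum over the inner-eye pixels, a mean over the nose and cheek pixels), a minimal sketch follows. The function name, mask inputs, and all numeric values are illustrative assumptions, not material from the application.

```python
import numpy as np

def roi_features(thermal, eye_mask, nose_mask, cheek_mask):
    """Summary temperatures per the claim language (hypothetical masks):
    inner-eye ROI -> highest pixel temperature in the region,
    nose and cheek ROIs -> mean of all pixels in the region."""
    return {
        "inner_eyes": float(thermal[eye_mask].max()),
        "nose": float(thermal[nose_mask].mean()),
        "cheek": float(thermal[cheek_mask].mean()),
    }

# Toy 2x2 "thermal image" in degrees C with made-up ROI masks.
img = np.array([[36.1, 37.2],
                [35.0, 34.4]])
eyes = np.array([[True, True], [False, False]])
nose = np.array([[False, False], [True, False]])
cheek = np.array([[False, False], [False, True]])
feats = roi_features(img, eyes, nose, cheek)
```

In this toy example the inner-eye feature is the maximum of its two pixels (37.2), while the nose and cheek features are single-pixel means.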
Moreover, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because they pertain merely to insignificant extra-solution data gathering activities and generic post-solution activities. Furthermore, input interface devices and output interface devices are general fields of use.

Dependent claims 5-6 also do not recite patent eligible subject matter, as they merely further limit the abstract idea, recite limitations that do not integrate the claims into a practical application for similar reasons as set forth above, and/or do not recite significantly more than the identified abstract idea for substantially similar reasons as set forth above.

Regarding Claim 7, the claim recites “estimating an environmental type including an external temperature and participation in physical activity based on a temperature of the at least one region of interest; and predicting a body temperature of the target person based on the estimated environmental type and the temperature of the at least one region of interest for the input thermal image; …wherein: the at least one facial region includes inner sides of eyes, a nose, and a cheek, the external environment/activity estimation neural network is trained by using, as a first input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, and by using, as label data, an environmental type including an external temperature and participation in physical activity according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and the body temperature prediction neural network is trained by using, as a second input data for training, a temperature of the at least one region of interest for each of the face thermal images of the plurality of training targets and an environmental type for training including an external temperature and participation in physical activity according to the temperature of the at least one region of interest for each of the plurality of training targets, and by using, as a label data, a body temperature obtained when measuring the temperature for each of the training targets; and the first input data for training or the second input data for training includes: a temperature of the inner sides of the eyes, which is determined as a highest temperature among pixel temperatures within a region of the inner sides of the eyes, a temperature of the nose, which is determined as an average temperature of all pixels within a region of the nose, and a temperature of the cheek, which is determined as an average temperature of all pixels within a region of the cheek,” which amounts to an abstract idea (mathematical concepts: under the broadest reasonable interpretation when read in light of the Specification, a prediction of the numerical value of body temperature from input data would correspond to an algorithm, i.e., a mathematical calculation; further, the limitations also recite a mental process, i.e., a clinician can predict body temperature based on mental analysis of a thermal image, or using simple pen and paper).

This judicial exception is not integrated into a practical application because:
- The claims fail to outline an improvement to the technical field.
- The claims fail to apply the judicial exception to effect a particular treatment.
- The claims fail to apply the judicial exception with a particular machine.
- The claims fail to effect a transformation or reduction of a particular article to a different state or thing.

Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, integrates the judicial exception into a practical application.
For this part of the 101 analysis, the following additional limitations are considered: “detecting at least one facial region as a region of interest from an input thermal image of a target person to be measured received via an input interface device,” and “outputting the body temperature of the target person via an output interface device.” The additional elements are insufficient to amount to significantly more than the judicial exception because they merely generally link the use of the judicial exception to a particular technological environment. Moreover, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because they pertain merely to insignificant extra-solution data gathering activities and generic post-solution activities.

None of these limitations, considered as an ordered combination, provides eligibility because the claim taken as a whole does not amount to significantly more than the underlying abstract idea of making a prediction of body temperature based on a facial region temperature and secondary contextual parameters (e.g., environmental temperature and physical activity context), and does not purport to improve the functioning of the signal processing, or to improve any other technology or technical field. Use of generic signal processing does not amount to significantly more than the abstract idea itself.

Dependent claims 9-10 do not recite patent eligible subject matter, as they merely further limit the abstract idea, recite limitations that do not integrate the claims into a practical application for similar reasons as set forth above, and/or do not recite significantly more than the identified abstract idea for substantially similar reasons as set forth above.
Regarding Claim 11, the claim recites “training an external environment/activity estimation neural network by using, as first training input data, a plurality of face thermal images for training and by using, as first label data, an environmental type including an external temperature and participation in physical activity according to a temperature of at least one facial region, which is a region of interest, of each of the plurality of face thermal images for training to detect the at least one region of interest of a target person from an input thermal image of the target person, and estimate the environmental type for the target person based on the temperature of the at least one region of interest for the target person; and training a body temperature prediction neural network by using, as second training input data, a plurality of face thermal images for training and a plurality of estimated environmental types for training including an external temperature and participation in physical activity and by using, as second label data, body temperatures obtained based on a temperature of at least one facial region, which is a region of interest, of each of the plurality of face thermal images for training and the plurality of estimated environmental types for training to predict a body temperature of the target person based on the temperature of the at least one region of interest of the target person and the estimated environmental type for the target person; wherein the at least one facial region includes inner sides of eyes, a nose, and a cheek, and the first training input data or the second training input data includes: a temperature of the inner sides of the eyes, which is determined as a highest temperature among pixel temperatures within a region of the inner sides of the eyes, a temperature of the nose, which is determined as an average temperature of all pixels within a region of the nose, and a temperature of the cheek, which is determined as an average temperature of all pixels within a region of the cheek,” which amounts to an abstract idea (mathematical concepts; see also claim 2 of example 47 from the 2024 AI SME Update, which establishes that training a neural network corresponds to a series of mathematical calculations). Furthermore, the limitations also recite a mental process, as nothing in the claims suggests that a skilled artisan would not be able to practically perform these steps (a clinician can test and optimize a body temperature prediction based on mental analysis of training thermal images, or using simple pen and paper).

This judicial exception is not integrated into a practical application because:
- The claims fail to outline an improvement to the technical field.
- The claims fail to apply the judicial exception to effect a particular treatment.
- The claims fail to apply the judicial exception with a particular machine.
- The claims fail to effect a transformation or reduction of a particular article to a different state or thing.

Next, the claim as a whole is analyzed to determine whether any element, or combination of elements, integrates the judicial exception into a practical application. For this part of the 101 analysis, there are no additional limitations to consider.

Dependent claims 13-14 do not recite patent eligible subject matter, as they merely further limit the abstract idea, recite limitations that do not integrate the claims into a practical application for similar reasons as set forth above, and/or do not recite significantly more than the identified abstract idea for substantially similar reasons as set forth above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5-7, and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Frank et al. (US 2020/0397306) (“Frank”) in view of Beall (US 2021/0302238), further in view of Putterman et al. (US 2022/0157146) (“Putterman”), and further in view of LaBelle et al. (US 2015/0238140) (“LaBelle”).

Regarding Claim 1, while Frank teaches an apparatus for predicting a body temperature (Abstract, Figs. 1, 17A, [0114]-[0116], [0160], [0177]-[0179] system as a whole used to find body temperature, a value that can then be compared to a threshold to identify fever in a patient, [0182] and the system can measure for other conditions described within the specification such as stroke monitoring in Figs. 17a-17c), the apparatus comprising: an input interface device configured to receive an input thermal image of a target person to be measured (Fig.
1d, [0194] smartglasses 370 uses detection components to receive data of a target person to be measured, thermal data of a target person can be captured from head-mounted temperature sensor 372, thermal data of the target person’s environment can be captured from head-mounted temperature sensor 374, [0093] where the thermal data captured may be a thermal image, [0217] environmental thermal images may be derived from thermal data of a target person’s face by an inward-facing temperature sensor); detect at least one facial region from input images of the target person ([0202]-[0203] inward-facing head-mounted camera 376 provides images 377 that can be processed to detect desired facial regions, [0199] where desired regions of interest include nose, temple, forehead, and/or cheekbone); an external environment/activity estimation neural network configured to estimate an environmental type, including at least one of an external temperature and participation in physical activity based on a temperature of the at least one region of interest ([0217] environmental / external temperature may be estimated from a temperature measurement of a region of interest of a patient, the environmental temperature reflecting an environmental type, [0081] additional data that may characterize the environmental data, [0148], [0150] hemoglobin concentration measurements can act as environmental data reflective of environmental temperature, [0178]-[0181] a model 346 is trained to generate feature values to identify fever from input data, the model is trained with input data representing various external environmental temperatures and activity states, the model 346 can be a neural network. 
Thus, the environmental type and its effect on the skin temperature must be estimated and accounted for in the final fever determination); a body temperature prediction neural network configured to predict a body temperature of the target person based on the environmental type estimated by the external environment/activity estimation neural network and the temperature of the at least one region of interest ([0178]-[0181], [0217] model 346 acts as both the external environment/activity estimation neural network and the body temperature prediction neural network to predict a body temperature as an intermediate value in determining fever), the at least one facial region includes inner sides of eyes, a nose, and a cheek ([0199] the facial regions for skin temperature can include nose and cheek, [0173] a stress-specific inward-facing thermal camera can be used, measuring the periorbital region which includes the area around a target person’s eye, [0222] where stress may be used as input to the fever detection model), the body temperature prediction neural network is trained by using, as input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, an external temperature according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and additional input data of participation in physical activity ([0172], [0178]-[0180]), and by using, as label data, a body temperature obtained when measuring the temperature for each of the training targets ([0184]-[0185]); and Frank teaches an output interface device configured to output the calculated values of the target person ([0187] a calculated value would include the predicted internal body temperature), the calculated values including body temperature.
And Frank further teaches, in a second embodiment determining congestive heart failure, that imaging-related values may be captured as averages of pixels in a region ([0314]), and that teachings from separate embodiments may be combined ([0540]-[0541]).

Frank fails to teach that the first input data for training or the second input data for training includes: a temperature of the inner side of the eyes, which is determined as a representative temperature among pixel temperatures within a region of the inner side of the eyes, a temperature of the nose, which is determined as an average temperature of all pixels within a region of the nose, and a temperature of the cheek, which is determined as an average temperature of all pixels within a region of the cheek. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the thermal image of the fever-evaluating embodiment of Frank may additionally utilize the image processing step of averaging pixel values from the CHF-evaluating embodiment of Frank to ensure a representative feature value is provided while reducing the amount of facial data that must be processed.

Yet Frank fails to teach an external environment/activity estimation neural network configured to detect at least one facial region as a region of interest from the input thermal image of the target person; and that the representative temperature of the inner side of the eyes is determined as a highest temperature.
However, Beall teaches a method for predicting a body temperature (Abstract, [0046], [0078]), the method comprising: detecting at least one facial region as a region of interest from an input thermal image of a target person to be measured ([0078] “Upon a subject entering the FOV, the device 100 detects face and eye regions directly in the thermal image and a predetermined ROI shape is applied at each visible canthus location and the maxima or other summary statistic is taken as the canthi temperature estimate.” The subject’s face is the region of interest, [0081]-[0082]), and the representative temperature of the inner side of the eyes is determined as a highest temperature ([0078] “More specifically, thermal imaging of key locations on the body (e.g., the inner canthus) typically have few confounds and provide the closest correspondence to a subject's core body temperature, suitable for applying a correction for physiology to arrive at a reliable estimate of core body temperature…Upon a subject entering the FOV, the device 100 detects face and eye regions directly in the thermal image and a predetermined ROI shape is applied at each visible canthus location and the maxima or other summary statistic is taken as the canthi temperature estimate.” where the inner canthus is a subregion within the periorbital region).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to detect the facial region of interest in Frank from the thermal image as taught by Beall, as this identification ensures the appropriate facial regions are being measured, and thus the measured temperatures serve their intended purpose of identifying body temperature. Furthermore, it would be obvious for the representative temperature of the inner eye taught in Frank to be substituted from an average temperature to a max temperature, as the max temperature is recognized by Beall as an equivalent summary statistic.
Yet their combined efforts fail to teach the external environment/activity estimation neural network configured to detect at least one facial region as a region of interest from the input thermal image of the target person, where the external environment/activity estimation neural network is trained by using, as a first input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, and by using, as label data, an environmental type including an external temperature and participation in physical activity according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and the body temperature prediction neural network is trained by using, as a second input data for training, a temperature of the at least one region of interest for each of the face thermal images of the plurality of training targets and an environmental type for training including an external temperature and participation in physical activity according to the temperature of the at least one region of interest for each of the plurality of training targets, and by using, as a label data, a body temperature obtained when measuring the temperature for each of the training targets.

However, Putterman teaches an imaging-based body temperature prediction device (Abstract) where facial skin temperature is used as an input to predict body temperature and fever (Abstract, [0013]-[0014]), comprising: processing by using one or more neural networks ([0013]); a categorizing clustering step related to contextual environment data of a target person ([0028]-[0029]), the categorizing clustering step performed with machine learning techniques ([0028]); and a representative temperature value that may be either a mean temperature or a maximum temperature ([0054]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that Frank’s neural network monitoring can be performed with an intermediate patient categorization step by clustering as taught by Putterman, as a way to process the same information of environmental temperature and skin temperature in Frank, while also enabling a caregiver to review the machine learning’s categorization outcome. In doing so, one can verify that the environmental data is being appropriately considered. And Frank teaches that the machine learning steps of clustering and neural networks may both be applied in processing information ([0077]). Consequently, it would be obvious that adding an external environment/activity estimation neural network utilizing a clustering technique to Frank can support the body temperature identification of Frank, by considering how the baseline temperature of an individual target person should be corrected for in a final body temperature/fever calculation. And applying this to Frank would require facial skin temperature and environmental data as the input, and a label of environmental/cluster category as the output, to train this neural network. Correspondingly, the original body temperature prediction neural network of Frank can utilize the skin temperature and environmental/cluster category as the input, and a label of body temperature as the output, for training Frank’s neural network. Finally, it would be obvious that the external environment/activity estimation neural network step would incorporate Beall’s facial region detection to facilitate streamlined, automated processing by first identifying the appropriate measuring location before measuring input data and establishing a relationship with labeled output data.
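As a rough illustration of the two-stage training setup discussed above (an environment/activity category estimated first, then fed with the ROI temperatures into the body temperature network), the training-data layout might look like the following sketch. All array contents, the one-hot encoding choice, and the category names are hypothetical, not taken from the record.

```python
import numpy as np

# Stage 1 (environment/activity estimation network):
#   first input data : ROI temperatures per training face thermal image
#   label data       : environmental type (external temp band + activity)
# Stage 2 (body temperature prediction network):
#   second input data: ROI temperatures + estimated environmental type
#   label data       : measured body temperature

ENV_TYPES = ["hot", "exercise", "normal", "cold"]  # cf. dependent claim 5

roi_temps = np.array([        # [inner_eyes_max, nose_mean, cheek_mean]
    [37.1, 35.2, 34.8],
    [36.4, 33.9, 33.5],
])
env_labels = np.array([1, 3])            # indices into ENV_TYPES
body_temp_labels = np.array([37.4, 36.6])  # stage-2 label data (deg C)

# One-hot encode the estimated environmental type and concatenate it
# with the ROI temperatures to form the second network's input.
env_onehot = np.eye(len(ENV_TYPES))[env_labels]
stage2_inputs = np.hstack([roi_temps, env_onehot])
```

The point of the sketch is only the data flow: the first network's categorical output becomes part of the second network's input vector, so each stage-2 row has the three ROI temperatures plus one column per environmental type.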
Yet their combined efforts fail to teach that the body temperature prediction neural network is trained by using, as a second input data for training, an environmental type for training including participation in physical activity according to the temperature of the at least one region of interest for each of the plurality of training targets. However, LaBelle teaches a physiological monitoring device (Abstract) and teaches that changes in surface temperature due to stress and physical activity are related ([0008]-[0010] physical exercise is a stress state of the body, [0030]-[0032] changes in surface temperature and body temperature due to stress and exercise are equivalent). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the stress evaluation included in the fever prediction of Frank can alternatively be considered as a measure of exercise/participation in physical activity, as LaBelle teaches that exercise is a stress state and that exercise and stress cause related changes in surface temperature and body temperature of a patient. Thus, the included stress evaluation of facial regions of interest through neural networks of Frank ([0173], [0222]) would also act as a physical activity evaluation and would be trained by the same mechanisms.

Regarding Claim 5, Frank, Beall, Putterman, and LaBelle teach the apparatus of claim 1, wherein the environmental type includes at least one of a hot environment, an environment with exercise, a normal environment without exercise, and a cold environment (see Claim 1 rejection; Frank [0179] describes training conditions for different temperature environments and different patient movement states that fulfill at least one of a hot environment, an environment with exercise, a normal environment without exercise, and a cold environment).
Regarding Claim 6, Frank, Beall, Putterman, and LaBelle teach the apparatus of claim 1, and Beall further teaches wherein in the detected region of interest, a face region is detected from the input thermal image of the target person based on a first object detection algorithm ([0107] described facial recognition algorithm), and the at least one facial region is detected as the region of interest within the detected face region based on a second object detection algorithm ([0078] predetermined region of interest shape, relative to a face shape used to identify location of canthi). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the facial recognition of Beall utilize multiple detection algorithms as this enables the detection algorithms to be specialized to their relative location. Regarding Claim 7, while Frank teaches a method for predicting a body temperature to be performed by a body temperature apparatus including an external environment/activity estimation neural network and a body temperature prediction neural network (Abstract, Figs. 1, 17A, [0114]-[0116], [0160], [0177]-[0181] system as a whole used to find body temperature, a value that can then be compared to a threshold to identify fever in a patient, a model 346 is trained to generate feature values to identify fever from input data, the model is trained with input data representing various external environmental temperatures and activity states, the model 346 can be a neural network. Thus, model 346 acts as both external environmental/activity estimation neural network and body temperature prediction neural network to predict a body temperature as an intermediate value in determining fever, [0182] and the system can measure for other conditions described within the specification such as stroke monitoring in Figs. 
17a-17c), the method comprising: detecting at least one facial region from input images of the target person ([0202]-[0203] inward-facing head-mounted camera 376 provides images 377 that can be processed to detect desired facial regions, [0199] where desired regions of interest include nose, temple, forehead, and/or cheekbone, Fig. 1d, [0194] smartglasses 370 uses detection components to receive data of a target person to be measured) and receiving an input thermal image of a target person via an input interface device (Fig. 1d, [0194] thermal data of a target person can be captured from head-mounted temperature sensor 372, thermal data of the target person’s environment can be captured from head-mounted temperature sensor 374, [0093] where the thermal data captured may be a thermal image, [0217] environmental thermal images may be derived from thermal data of a target person’s face by an inward-facing temperature sensor); estimating an environmental type, including at least one of an external temperature and participation in physical activity based on a temperature of the at least one region of interest ([0217] environmental / external temperature may be estimated from a temperature measurement of a region of interest of a patient, the environmental temperature reflecting an environmental type, [0081] additional data that may characterize the environmental data, [0148], [0150] hemoglobin concentration measurements can act as environmental data reflective of environmental temperature, [0178]-[0181] a model 346 is trained to generate feature values to identify fever from input data, the model is trained with input data representing various external environmental temperatures and activity states, thus, the environmental type and its effect on the skin temperature must be estimated and accounted for in the final fever determination); predicting a body temperature of the target person based on the estimated environmental type and the temperature of the at least one 
region of interest ([0178]-[0181], [0217] model 346 acts as both external environmental/activity estimation neural network and body temperature prediction neural network to predict a body temperature as an intermediate value in determining fever), the at least one facial region includes inner sides of eyes, a nose, and a cheek ([0199] the facial regions for skin temperature can include nose and cheek, [0173] a stress-specific inward-facing thermal camera can be used, measuring the periorbital region which includes the area around a target person’s eye, [0222] where stress may be used as input to the fever detection model), the body temperature prediction neural network is trained by using, as input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, an external temperature according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and additional input data of participation in physical activity ([0172], [0178]-[0180]) and by using, as label data, a body temperature obtained when measuring the temperature for each of the training targets ([0184]-[0185]); Frank teaches outputting the calculated values of the target person via an output interface device ([0187] a calculated value would include the predicted internal body temperature), the calculated values including body temperature; and Frank further teaches in a second embodiment determining congestive heart failure that imaging-related values may be captured as averages of pixels in a region ([0314]) and that teachings from separate embodiments may be combined ([0540]-[0541]). However, Frank fails to teach that the first input data for training or the second input data for training includes: a temperature of the inner side of the eyes, which is determined as a representative temperature among pixel temperatures within a region of the inner side of the 
eyes, and a temperature of the nose, which is determined as an average temperature of all pixels within a region of the nose, and a temperature of the cheek, which is determined as an average temperature of all pixels within a region of the cheek. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the thermal image of the fever evaluating embodiment of Frank may additionally utilize the image processing step of averaging pixel values from the CHF evaluating embodiment of Frank to ensure a representative feature value is provided while reducing the amount of facial data that must be processed. Yet Frank fails to teach detecting at least one facial region as a region of interest from the input thermal image of the target person; and the representative temperature of the inner side of the eyes is determined as a highest temperature. However, Beall teaches a method for predicting a body temperature (Abstract, [0046], [0078]), the method comprising: detecting at least one facial region as a region of interest from an input thermal image of a target person to be measured ([0078] “Upon a subject entering the FOV, the device 100 detects face and eye regions directly in the thermal image and a predetermined ROI shape is applied at each visible canthus location and the maxima or other summary statistic is taken as the canthi temperature estimate.” Subject’s face is the region of interest [0081]-[0082]), and the representative temperature of the inner side of the eyes is determined as a highest temperature ([0078] “More specifically, thermal imaging of key locations on the body (e.g., the inner canthus) typically have few confounds and provide the closest correspondence to a subject's core body temperature, suitable for applying a correction for physiology to arrive at a reliable estimate of core body temperature…Upon a subject entering the FOV, the device 100 detects face and eye regions 
directly in the thermal image and a predetermined ROI shape is applied at each visible canthus location and the maxima or other summary statistic is taken as the canthi temperature estimate.” Where an inner canthus is a subregion within the periorbital region). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to detect the facial region of interest in Frank from the thermal image as taught by Beall as this identification ensures the appropriate facial regions are being measured, and thus the measured temperatures are serving their intended purpose of identifying body temperature. Furthermore, it would be obvious to substitute a maximum temperature for the average temperature taught in Frank as the representative temperature of the inner eye, as the maximum temperature is a summary statistic Beall recognizes as equivalent. Yet their combined efforts fail to teach the external environment/activity estimation neural network is trained by using, as a first input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, and by using, as label data, an environmental type including an external temperature and participation in physical activity according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and the body temperature prediction neural network is trained by using, as a second input data for training, a temperature of the at least one region of interest for each of the face thermal images of the plurality of training targets and an environmental type for training including an external temperature and participation in physical activity according to the temperature of the at least one region of interest for each of the plurality of training targets, and by using, as label data, a body temperature obtained when measuring the 
temperature for each of the training targets. However, Putterman teaches an imaging-based body temperature prediction device (Abstract) where facial skin temperature is used as an input to predict body temperature and fever (Abstract, [0013]-[0014]) comprising processing by using one or more neural networks ([0013]), a categorizing clustering step related to contextual environment data of a target person ([0028]-[0029]), the categorizing clustering step performed with machine learning techniques ([0028]); and a representative temperature value may be either a mean temperature or a maximum temperature ([0054]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that Frank’s neural network monitoring can be performed with an intermediate patient categorization step by clustering as taught by Putterman as a way to process the same information of environmental temperature and skin temperature in Frank, while also enabling a caregiver to review the machine learning’s categorization outcome. In doing so, one can verify that the environmental data is being appropriately considered. And Frank teaches that machine learning steps of clustering and neural networks may both be applied in processing information ([0077]). Consequently, it would be obvious that adding an external environment/activity estimation neural network utilizing a clustering technique to Frank can support the body temperature identification of Frank, by considering how the baseline temperature of an individual target person should be corrected for in a final body temperature/fever calculation. And applying this to Frank would require facial skin temperature and environmental data as the input and a label of environmental/cluster category as the output to train this neural network. 
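The two-model pipeline the rejection maps onto the Frank/Putterman combination, an environment/activity estimator whose category output feeds a body temperature predictor together with the ROI skin temperatures, can be sketched in miniature. This is a toy illustration, not any party's actual model: a nearest-centroid "clustering" step stands in for the environment/activity estimation network, a least-squares fit stands in for the body temperature prediction network, and all names, data values, and shapes are invented.

```python
import numpy as np

ENV_TYPES = ["hot", "exercise", "normal", "cold"]

# Toy training data: ROI temperatures (inner canthus, nose, cheek) in deg C,
# an environment label per sample, and a measured core body temperature.
X_roi = np.array([
    [35.8, 34.9, 35.1],   # hot environment
    [36.2, 35.5, 35.7],   # exercise
    [35.1, 33.8, 34.2],   # normal
    [33.9, 31.5, 32.0],   # cold
])
y_env = np.array([0, 1, 2, 3])               # label data for the estimator
y_body = np.array([37.1, 37.4, 36.8, 36.6])  # label data for the predictor

# "Train" the environment estimator: one centroid per environment type.
centroids = np.stack([X_roi[y_env == k].mean(axis=0) for k in range(4)])

def estimate_env(roi_temps):
    """Nearest-centroid stand-in for the environment/activity network."""
    d = np.linalg.norm(centroids - roi_temps, axis=1)
    return int(np.argmin(d))

# "Train" the body-temperature predictor on [ROI temps, env type] -> body
# temperature via least squares (stand-in for the second neural network).
A = np.column_stack([X_roi, y_env, np.ones(len(X_roi))])
w, *_ = np.linalg.lstsq(A, y_body, rcond=None)

def predict_body_temp(roi_temps):
    """Estimate the environment type, then predict body temperature."""
    env = estimate_env(np.asarray(roi_temps))
    feats = np.concatenate([roi_temps, [env, 1.0]])
    return float(feats @ w), ENV_TYPES[env]
```

For instance, `predict_body_temp([33.9, 31.5, 32.0])` first assigns the "cold" category and then feeds that category, with the ROI temperatures, into the fitted predictor, mirroring the claimed two-network chain.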
Correspondingly, the original body temperature prediction neural network of Frank can utilize the skin temperature and environmental/cluster category as the input and a label of body temperature as the output for training Frank’s neural network. Finally, it would be obvious that Frank’s neural network step would incorporate Beall’s facial region detection to facilitate streamlined, automated processing by first identifying the appropriate measuring location before measuring input data and establishing a relationship with labeled output data. Yet their combined efforts fail to teach the body temperature prediction neural network is trained by using, as a second input data for training, an environmental type for training including participation in physical activity according to the temperature of the at least one region of interest for each of the plurality of training targets. However, LaBelle teaches a physiological monitoring device (Abstract) and teaches that changes in surface temperature due to stress and physical activity are related ([0008]-[0010] physical exercise is a stress state of the body, [0030]-[0032] changes in surface temperature and body temperature due to stress and exercise are equivalent). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the stress evaluation included in the fever prediction of Frank can alternatively be considered as a measure of exercise / participation in physical activity as LaBelle teaches that exercise is a stress state and exercise and stress cause related changes in surface temperature and body temperature of a patient. Thus, the included stress evaluation of facial regions of interest through neural networks of Frank ([0173], [0222]) would also act as physical activity evaluations and would be trained by the same mechanisms. 
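The representative-temperature extraction the claims recite, a maximum over the inner-canthus pixels (the "maxima or other summary statistic" Beall [0078] names) and a mean over all pixels of the nose and cheek regions (the pixel averaging Frank [0314] describes), could be sketched as a small helper. The function name and the toy ROI arrays are invented for illustration.

```python
import numpy as np

def representative_temps(canthus_roi, nose_roi, cheek_roi):
    """Return (canthus_max, nose_mean, cheek_mean) for the three ROIs."""
    return (
        float(np.max(canthus_roi)),   # highest temperature in the region
        float(np.mean(nose_roi)),     # average over all pixels
        float(np.mean(cheek_roi)),
    )
```

For example, a 2x2 inner-canthus patch of `[[35.0, 36.2], [35.5, 35.9]]` yields 36.2 as the representative eye temperature, while the nose and cheek values are plain averages of their regions.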
Regarding Claim 9, Frank, Beall, Putterman, and LaBelle teach the method of claim 7, wherein the environmental type includes at least one of a hot environment, an environment with exercise, a normal environment without exercise, and a cold environment (See Claim 7 Rejection, Frank [0179] describes training conditions for different temperature environments and different patient movement states that fulfill at least one of a hot environment, an environment with exercise, a normal environment without exercise, and a cold environment). Regarding Claim 10, Frank, Beall, Putterman, and LaBelle teach the method of claim 7, and Beall further teaches wherein in the detected region of interest, a face region is detected from the input thermal image of the target person based on a first object detection algorithm ([0107] describes a facial recognition algorithm), and the at least one facial region is detected as the region of interest within the detected face region based on a second object detection algorithm ([0078] predetermined region of interest shape, relative to a face shape used to identify location of canthi). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, that the facial recognition of Beall utilizes multiple detection algorithms as this enables the detection algorithms to be specialized to their relative location. Regarding Claim 11, while Frank teaches a method for training a body temperature prediction apparatus (Abstract, Figs. 1, 17A, [0114]-[0116], [0160], [0177]-[0181] system as a whole used to find body temperature, a value that can then be compared to a threshold to identify fever in a patient, a model 346 is trained to generate feature values to identify fever from input data, the model is trained with input data representing various external environmental temperatures and activity states, the model 346 can be a neural network. 
Thus, model 346 acts as both external environmental/activity estimation neural network and body temperature prediction neural network to predict a body temperature as an intermediate value in determining fever), the method comprising: detecting at least one facial region from input images of the target person ([0202]-[0203] inward-facing head-mounted camera 376 provides images 377 that can be processed to detect desired facial regions, [0199] where desired regions of interest include nose, temple, forehead, and/or cheekbone, Fig. 1d, [0194] smartglasses 370 uses detection components to receive data of a target person to be measured) and receiving an input thermal image of a target person via an input interface device (Fig. 1d, [0194] thermal data of a target person can be captured from head-mounted temperature sensor 372, thermal data of the target person’s environment can be captured from head-mounted temperature sensor 374, [0093] where the thermal data captured may be a thermal image, [0217] environmental thermal images may be derived from thermal data of a target person’s face by an inward-facing temperature sensor); estimating an environmental type, including at least one of an external temperature and participation in physical activity based on a temperature of the at least one region of interest ([0217] environmental / external temperature may be estimated from a temperature measurement of a region of interest of a patient, the environmental temperature reflecting an environmental type, [0081] additional data that may characterize the environmental data, [0148], [0150] hemoglobin concentration measurements can act as environmental data reflective of environmental temperature, [0178]-[0181] a model 346 is trained to generate feature values to identify fever from input data, the model is trained with input data representing various external environmental temperatures and activity states, thus, the environmental type and its effect on the skin temperature 
must be estimated and accounted for in the final fever determination); predicting a body temperature of the target person based on the estimated environmental type and the temperature of the at least one region of interest ([0178]-[0181], [0217] model 346 acts as both external environmental/activity estimation neural network and body temperature prediction neural network to predict a body temperature as an intermediate value in determining fever), the at least one facial region includes inner sides of eyes, a nose, and a cheek ([0199] the facial regions for skin temperature can include nose and cheek, [0173] a stress-specific inward-facing thermal camera can be used, measuring the periorbital region which includes the area around a target person’s eye, [0222] where stress may be used as input to the fever detection model), the body temperature prediction neural network is trained by using, as input data for training, a temperature of the at least one region of interest for each of face thermal images of a plurality of training targets, an external temperature according to the temperature of the at least one region of interest when measuring the temperature for each of the plurality of training targets, and additional input data of participation in physical activity ([0172], [0178]-[0180]) and by using, as label data, a body temperature obtained when measuring the temperature for each of the training targets ([0184]-[0185]); Yet Frank fails to teach that training an external environmental/activity estimation neural network includes detecting at least one facial region as a region of interest from the input thermal image of the target person; and the representative temperature of the inner side of the eyes is determined as a highest temperature. However, Beall teaches a method for predicting a body temperature (Abstract, [0046], [0078]), the method …

Prosecution Timeline

Nov 11, 2021
Application Filed
Mar 20, 2025
Non-Final Rejection — §101, §103, §112
Jul 23, 2025
Response Filed
Nov 23, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593996
PULSE WAVE TRANSIT TIME MEASUREMENT DEVICE AND LIVING BODY STATE ESTIMATION DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12569148
BLOOD-VISCOSITY MEASUREMENT METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12557997
PROXIMITY SENSOR CIRCUITS AND RELATED SENSING METHODS
2y 5m to grant Granted Feb 24, 2026
Patent 12543998
Conductive Instrument
2y 5m to grant Granted Feb 10, 2026
Patent 12539043
LESION VISUALIZATION USING DUAL WAVELENGTH APPROACH
2y 5m to grant Granted Feb 03, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
85%
With Interview (+31.0%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 335 resolved cases by this examiner. Grant probability derived from career allow rate.
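As a check on how the headline figures relate: the 54% baseline is the examiner's career allow rate (181 granted of 335 resolved), and the reported interview lift appears to be added in percentage points to reach the 85% with-interview figure. A quick sketch, assuming simple additive lift:

```python
# Baseline grant probability from the examiner's career allow rate.
granted, resolved = 181, 335
baseline = round(100 * granted / resolved)   # 181/335 rounds to 54
interview_lift = 31.0                        # reported lift, in points
with_interview = baseline + interview_lift   # 54 + 31 = 85
```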
