Prosecution Insights
Last updated: April 19, 2026
Application No. 18/471,247

MACHINE-LEARNING BASED DETERMINATION OF VITAL SIGNS AND A PHYSIOLOGICAL STATE OF AN EXISTING OR POTENTIAL POLICY HOLDER FOR INSURANCE UNDERWRITING

Non-Final OA §103

Filed: Sep 20, 2023
Examiner: ZUBERI, MOHAMMED H
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Brightermd LLC
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 1m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 70% (306 granted / 437 resolved; +15.0% vs TC avg) — above average
Interview Lift: +27.8% higher allowance rate in resolved cases with interview — strong
Typical Timeline: 3y 1m avg prosecution; 23 applications currently pending
Career History: 460 total applications across all art units
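As a rough illustration (not the report vendor's actual methodology), the headline examiner metrics above can be reproduced from the resolved-case counts. The counts come from the report itself; the Tech Center average of 55.0% is an assumption back-derived from the stated "+15.0% vs TC avg" delta.

```python
# Illustrative sketch of how the headline examiner metrics above can be
# derived from resolved-case counts. The counts (306 granted / 437 resolved)
# come from the report; the TC average of 55.0% is implied by "+15.0% vs
# TC avg" rather than stated directly, so treat it as an assumption.

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1)

granted, resolved = 306, 437
career_allow_rate = pct(granted, resolved)        # 306/437 -> 70.0

tc_avg_estimate = 55.0                            # assumed from "+15.0% vs TC avg"
delta_vs_tc = round(career_allow_rate - tc_avg_estimate, 1)

print(f"Career allow rate: {career_allow_rate}% ({delta_vs_tc:+.1f}% vs TC avg)")
```

The same arithmetic applied to the 98% with-interview prediction against the 70% baseline is what produces the roughly +28-point "interview lift" shown above.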

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Deltas are against Tech Center average estimates • Based on career data from 437 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the patent application as filed on 9/20/2023. This action is made Non-Final. Claims 1-20 are pending in the case. Claims 1 and 11 are independent claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/5/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings filed on 9/20/2023 have been accepted by the Examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 11-18 are rejected under 35 U.S.C. 103 as being unpatentable over Frank (USPUB 20200390337 A1) in view of Stempora (USPUB 20150025917 A1).
Claim 1: Frank teaches A method comprising: capturing, by camera of a smart device, a plurality of images of a body part of [a user] (0015: systems and methods are described that utilize images of a user’s face in order to detect temperature changes on a user’s face for various purposes such as detecting intoxication. The images are captured using one or more inward facing cameras);

determining, by the smart device, a plurality of hemoglobin concentration (HC) changes based on the plurality of images (0020 and 0026: calculate, based on baseline images captured with Cam.sub.1&2 while the user did not have a fever, a baseline pattern comprising values indicative of first and second baseline hemoglobin concentrations at the first and second regions, respectively; calculate, based on a current set of images captured with Cam.sub.1&2, a current pattern comprising values indicative of first and second current hemoglobin concentrations at the first and second regions, respectively; and detect whether the user has a fever based on a deviation of the current pattern from the baseline pattern. Optionally, the computer calculates the values indicative of the baseline and current hemoglobin concentrations... calculate, based on baseline images captured with Cam.sub.1&2 while the user did not have a fever, a baseline pattern comprising values indicative of first and second baseline hemoglobin concentrations at the first and second regions, respectively; calculate, based on a current set of images captured with Cam.sub.1&2, a current pattern comprising values indicative of first and second current hemoglobin concentrations at the first and second regions, respectively; and detect whether the user is intoxicated based on a deviation of the current pattern from the baseline pattern. Optionally, the computer calculates the values indicative of the baseline and current hemoglobin concentrations based on detecting facial flushing patterns in the baseline and current images);

determining, by the smart device, a set of bitplanes of the plurality of images that represent the plurality of hemoglobin concentration (HC) changes of the [user] (0020 and 0026: citing the same baseline/current hemoglobin-pattern passages reproduced for the preceding limitation);

extracting, by the smart device, a value for a vital sign from the plurality of HC changes; building, by the smart device, a feature set comprising the plurality of HC changes (0132: a hemoglobin concentration pattern, such as one of the examples described above, may be calculated, in some embodiments, from images by a computer, such as computer 340 (described below). Optionally, the hemoglobin concentration pattern may be utilized to generate one or more feature values that are used in a machine learning-based approach by the computer for various applications, such as...detecting intoxication... the hemoglobin concentration pattern may be utilized to calculate additional values used to represent the extent of facial blood flow and/or extent of vascular dilation, which may be evaluated, e.g., by comparing the extent of blood flow and/or vascular dilation to thresholds in order to detect whether the user has a fever...detect alcohol intoxication);

performing, by the smart device, a trained machine learning model comprising a computational model on the feature set to obtain an output data set comprising a physiological state for the vital sign (0113: The machine learning based model may be personalized for a specific user. For example, after receiving a verified diagnosis of an extent of a physiological condition (such as blood pressure level, extent of a cardiovascular disease, extent of a pulmonary disease, extent of a migraine attack, etc.), the computer can use the verified diagnosis as labels and generate from a physiological measurement (such as the PPG signal, the temperature signal, the movement signal, and/or the audio signal) feature values to train a personalized machine learning-based model for the user. Then the computer can utilize the personalized machine learning-based model for future calculations of the extent of the physiological condition based on feature values).

Frank, by itself, does not seem to completely teach generating, by the smart device, an underwriting package comprising the value for the vital sign and the physiological state for the vital sign; and sending, by the smart device, the underwriting package to a central computer. The Examiner maintains that these features were previously well-known as taught by Stempora.

Stempora teaches generating, by the smart device, an underwriting package comprising the value for the vital sign and the physiological state for the vital sign; and sending, by the smart device, the underwriting package to a central computer (Fig 9 and 0272-274: a system 900 for determining a level of risk 917 associated with an individual 901 for underwriting purposes comprising one or more sensors 920 (such as a vehicle mounted camera 904 or a camera in a portable device 903) mounted to the vehicle 902 capturing sensor information 921 (such as one or more images or video 905 or other sensor information 922) and a processor 906 analyzing the sensor information 921 to determine first information 907. The first information 907 determined by the processor 906 can include, for example, operator identification information 911, environmental or contextual information 909, operator performance information 912, or eye related information 910. The eye related information 910 may include one or more selected from the group: pupil size or dilation, eyelid state/motion (incl. sleepy eyelid movement, blinking frequency or speed, closed eyelids, etc.), microsaccade amplitude, frequency or orientation, eye orientation, eye movement or fixation, gaze direction, details of the iris, and details of the retina...
the first information 907 determined from the sensor information 921 by the first processor 906 (such as eye related information 910 and/or other individual, environmental or contextual information 909) is used to determine cognitive information 914 (such as cognitive load and or cognitive capacity 915, the use of reflexive or analytical decision making processes 916 by the vehicle operator 901, or distraction/selective attention cognitive information 923) solely or in combination with other information 913 (such as heart rate information or circadian rhythm information). The eye related information 910 or other first information 907 (such as non-eye related first information, not shown) may be processed by a cognitive information algorithm 924 and optionally a distraction algorithm 925 to generate the cognitive information 914. The distraction algorithm 925 may be used to generate distraction or selective attention cognitive information 923 for the individual 901. The cognitive information 914 is processed by a second processor 908 (such as by implementing a cognitive analysis algorithm on the second processor 908) along with risk or loss exposure information 7106 and optionally cognitive map or profile information 926 to determine a level of risk 917 which may be used to determine a risk score and/or cost of insurance 918, such as an automobile insurance premium... a system 900 for determining risk related information 613 to modify the individual's ability to use portable device software applications 116; modify the ability of the individual to use portable device functional features 117; alert or provide feedback 118 to the individual; or provide information to a second and/or third party 122 and optionally be used to generate a risk score, to generate an insurance rate 611, or for insurance underwriting 612 purposes. The system 900 may use one or more methods or devices described in FIGS. 
1-9, such as the vehicle operation performance analysis system 140, the method 400 of generating risk related information 408 for an operator of a vehicle, the method 600 of generating risk related information 613 for an operator of a vehicle using a risk assessment algorithm 608, a method 7100 of determining a risk assessment, risk score, underwriting, or cost of insurance 7118 for an individual, a method 8200 of determining a risk assessment, risk score, underwriting, or cost of insurance 8218, and a system 900 for determining a level of risk 917 associated with an individual. The first information 907 may be derived from and/or include one or more selected from the group: vehicle sensor information 123, portable device sensor information 125, portable device feature or software use information 1001, and other external information 1002. The first information may further include one or more selected from the group: historical, present, or predicted input information 607, vehicle operation performance information 108, cognitive capacity information 403, cognitive load information 407, distraction information 1003, risk or loss exposure information 7106, and monitored or inferred risk-related decision information 7101. The first information may be analyzed by one or more algorithms (such as one or more of the algorithms referenced in FIG. 3) on one or more processors to generate risk related information 613 and may include the use of one or more selected from the group: cognitive maps of other individuals 8217, decision information for a new situation 8214, a propensity model 8215, and a predictive model 8216. 
The risk related information may be processed to provide one or more of the following functions: modify the individual's ability to use portable device software applications 116; modify the ability of the individual to use portable device functional features 117; alert or provide feedback 118 to the individual; and provide information to a second and/or third party 122 and optionally be used to generate a risk score, to generate an insurance rate 611, or for insurance underwriting 612 purposes).

Frank and Stempora are analogous art because they are from the same problem-solving area: identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual. Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claims.

Claims 2 and 12: Frank, by itself, does not seem to completely teach receiving, by the smart device from the central computer, an insurance policy premium that was determined based at least in part on the underwriting package. The Examiner maintains that these features were previously well-known as taught by Stempora.
Stempora teaches receiving, by the smart device from the central computer, an insurance policy premium that was determined based at least in part on the underwriting package (Fig 9 and 0272-274: a system 900 for determining a level of risk 917 associated with an individual 901 for underwriting purposes comprising one or more sensors 920 (such as a vehicle mounted camera 904 or a camera in a portable device 903) mounted to the vehicle 902 capturing sensor information 921 (such as one or more images or video 905 or other sensor information 922) and a processor 906 analyzing the sensor information 921 to determine first information 907. The first information 907 determined by the processor 906 can include, for example, operator identification information 911, environmental or contextual information 909, operator performance information 912, or eye related information 910. The eye related information 910 may include one or more selected from the group: pupil size or dilation, eyelid state/motion (incl. sleepy eyelid movement, blinking frequency or speed, closed eyelids, etc.), microsaccade amplitude, frequency or orientation, eye orientation, eye movement or fixation, gaze direction, details of the iris, and details of the retina... the first information 907 determined from the sensor information 921 by the first processor 906 (such as eye related information 910 and/or other individual, environmental or contextual information 909) is used to determine cognitive information 914 (such as cognitive load and or cognitive capacity 915, the use of reflexive or analytical decision making processes 916 by the vehicle operator 901, or distraction/selective attention cognitive information 923) solely or in combination with other information 913 (such as heart rate information or circadian rhythm information). 
The eye related information 910 or other first information 907 (such as non-eye related first information, not shown) may be processed by a cognitive information algorithm 924 and optionally a distraction algorithm 925 to generate the cognitive information 914. The distraction algorithm 925 may be used to generate distraction or selective attention cognitive information 923 for the individual 901. The cognitive information 914 is processed by a second processor 908 (such as by implementing a cognitive analysis algorithm on the second processor 908) along with risk or loss exposure information 7106 and optionally cognitive map or profile information 926 to determine a level of risk 917 which may be used to determine a risk score and/or cost of insurance 918, such as an automobile insurance premium... a system 900 for determining risk related information 613 to modify the individual's ability to use portable device software applications 116; modify the ability of the individual to use portable device functional features 117; alert or provide feedback 118 to the individual; or provide information to a second and/or third party 122 and optionally be used to generate a risk score, to generate an insurance rate 611, or for insurance underwriting 612 purposes. The system 900 may use one or more methods or devices described in FIGS. 1-9, such as the vehicle operation performance analysis system 140, the method 400 of generating risk related information 408 for an operator of a vehicle, the method 600 of generating risk related information 613 for an operator of a vehicle using a risk assessment algorithm 608, a method 7100 of determining a risk assessment, risk score, underwriting, or cost of insurance 7118 for an individual, a method 8200 of determining a risk assessment, risk score, underwriting, or cost of insurance 8218, and a system 900 for determining a level of risk 917 associated with an individual. 
The first information 907 may be derived from and/or include one or more selected from the group: vehicle sensor information 123, portable device sensor information 125, portable device feature or software use information 1001, and other external information 1002. The first information may further include one or more selected from the group: historical, present, or predicted input information 607, vehicle operation performance information 108, cognitive capacity information 403, cognitive load information 407, distraction information 1003, risk or loss exposure information 7106, and monitored or inferred risk-related decision information 7101. The first information may be analyzed by one or more algorithms (such as one or more of the algorithms referenced in FIG. 3) on one or more processors to generate risk related information 613 and may include the use of one or more selected from the group: cognitive maps of other individuals 8217, decision information for a new situation 8214, a propensity model 8215, and a predictive model 8216. The risk related information may be processed to provide one or more of the following functions: modify the individual's ability to use portable device software applications 116; modify the ability of the individual to use portable device functional features 117; alert or provide feedback 118 to the individual; and provide information to a second and/or third party 122 and optionally be used to generate a risk score, to generate an insurance rate 611, or for insurance underwriting 612 purposes).

Frank and Stempora are analogous art because they are from the same problem-solving area: identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual. Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claims.

Claims 3 and 13: Frank, by itself, does not seem to completely teach prior to capturing, sending, by the smart device, a notification message to the central computer; and prior to capturing and after sending the notification message, receiving, by the smart device from the central computer, a request to scan the body part of the existing or potential insurance policy holder. The Examiner maintains that these features were previously well-known as taught by Stempora.

Stempora teaches prior to capturing, sending, by the smart device, a notification message to the central computer; and prior to capturing and after sending the notification message, receiving, by the smart device from the central computer, a request to scan the body part of the existing or potential insurance policy holder (0145: the cognitive capacity algorithm receives cognitive capacity input information and measures or estimates the cognitive capacity of the operator. The cognitive capacity input information may include current or historical information: received from one or more vehicles, portable devices, or external device sensors; received from one or more user interface features of the vehicle and/or portable device; received from an external server or device; related to the mental or physical condition of the operator; or related to the age, education, or health of the operator. In one embodiment, the cognitive capacity algorithm updates the estimation or measurement of the cognitive capacity of the operator at regular intervals, at irregular intervals, before operation of the vehicle or portable device, during the operation of the vehicle and/or portable device, or at times between operations of the vehicle.
For example, in one embodiment, the cognitive capacity algorithm is executed on a portable device processor when one or more sensors indicate a change in physical or mental condition of the vehicle operator (such as sensors that determine sleepiness such as cameras, eye tracking software, or sensors that detect or provide information related to the blood alcohol level of the vehicle operator or the alcohol level in the air within the vehicle). In one embodiment, the cognitive load of the operator for a series of historical vehicle operation events is analyzed to estimate the cognitive capacity. In one embodiment, statistical data from measurements of the cognitive load and/or cognitive capacity of other portable device and/or vehicle operators is used to estimate or extrapolate the cognitive capacity of the vehicle operator in question. For example, the success rate or accuracy data and data corresponding to the use of one or more portable device features for a current vehicle operator simultaneously operating a portable device may be compared with similar historical data from other vehicle operators (where the cognitive capacity may be known, estimated, or validated) to estimate the cognitive capacity of the current operator. In this example, an application on a portable device may transmit current sensor, vehicle, user interface or device information to a server comprising historical cognitive load and/or cognitive capacity data correlated with a plurality of users wherein the server provides the current cognitive load, cognitive capacity, historical information, or related information (such as a new insurance rate based on the current conditions) to the portable device).

Frank and Stempora are analogous art because they are from the same problem-solving area: identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual. Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claims.

Claims 4 and 14: Frank, by itself, does not seem to completely teach performed in real time. The Examiner maintains that these features were previously well-known as taught by Stempora.

Stempora teaches performed in real time (0135: the vehicle operation performance algorithm can provide risk related information for the vehicle operator that could be used, for example, to provide real-time, dynamic, event-based, irregular, or regular vehicle operation risk assessment, risk scoring, and/or insurance pricing for the operator).

Frank and Stempora are analogous art because they are from the same problem-solving area: identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual. Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claims.

Claims 5 and 15: Frank, by itself, does not seem to completely teach determining, by the smart device, a classification for the physiological state for the vital sign, wherein the underwriting package includes the classification. The Examiner maintains that these features were previously well-known as taught by Stempora.
Stempora teaches determining, by the smart device, a classification for the physiological state for the vital sign, wherein the underwriting package includes the classification (0026 and 0262: the initial underwriting profile is generated through traditional means, such as credit scoring, that serves as an underwriting baseline or constant upon which discounts are applied based on a different underwriting method. In one embodiment, the initial underwriting profile comprises information received from the individual or other data sources and/or the results of processing the information received from the individual or other data sources. In one embodiment, the information received from the individual is obtained through a survey, test, or initial monitoring. In one embodiment, a survey, test, or initial monitoring infers or monitors one or more decision-making processes and decision outcomes for one or more decisions in one or more contextual situations. In another embodiment, one or more initial correlations are made between the risk-related decision-making processes and the decisions with the resulting decision outcomes. In one embodiment, an initial underwriting profile is generated subsequent to monitoring and analyzing information from the individual related to one or more decisions made in one or more risk-related situations. In another embodiment, the individual is rated on a scale ranging from a very risk-seeking individual to a very risk-averse individual. 
In another embodiment, the individual is initially segmented according to one or more risk scores, risk scales, or risk-related categories...the propensity model 7115 uses one or more risk-related decision-making or judgment processes 7103 (such as System 1 decision-making processes 7107 or heuristics), the individual's cognitive map 7102, one or more correlations 7113, and decision information for a new situation 7114 to determine a propensity for the individual to be risk-seeking or risk-averse for the new situation. The propensity model 7115 may determine the probability of the individual to use one or more risk-related decision-making processes 7103 and/or make risk-related decisions 7109 that result in negative decision outcomes 7112 or positive decision outcomes 7111 for a situation. This probability can be used to generate the risk assessment, risk score, underwriting, or cost of insurance).

Frank and Stempora are analogous art because they are from the same problem-solving area: identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual. Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claims.

Claims 6 and 16: Frank, by itself, does not seem to completely teach the classification is normal, elevated, or severe. The Examiner maintains that these features were previously well-known as taught by Stempora.
Stempora teaches the classification is normal, elevated, or severe (0026: the initial underwriting profile is generated through traditional means, such as credit scoring, that serves as an underwriting baseline or constant upon which discounts are applied based on a different underwriting method. In one embodiment, the initial underwriting profile comprises information received from the individual or other data sources and/or the results of processing the information received from the individual or other data sources. In one embodiment, the information received from the individual is obtained through a survey, test, or initial monitoring. In one embodiment, a survey, test, or initial monitoring infers or monitors one or more decision-making processes and decision outcomes for one or more decisions in one or more contextual situations. In another embodiment, one or more initial correlations are made between the risk-related decision-making processes and the decisions with the resulting decision outcomes. In one embodiment, an initial underwriting profile is generated subsequent to monitoring and analyzing information from the individual related to one or more decisions made in one or more risk-related situations. In another embodiment, the individual is rated on a scale ranging from a very risk-seeking individual to a very risk-averse individual. In another embodiment, the individual is initially segmented according to one or more risk scores, risk scales, or risk-related categories).

Frank and Stempora are analogous art because they are from the same problem-solving area: identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual. Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claims.

Claims 7 and 17: Frank, by itself, does not seem to completely teach receiving, by the smart device from a machine learning computer, the trained machine learning model. The Examiner maintains that these features were previously well-known as taught by Stempora.

Stempora teaches receiving, by the smart device from a machine learning computer, the trained machine learning model (0074: information related to individual health or performance, operational performance of an activity (such as operating a vehicle), individual identification or security, environmental or contextual information, decision information, information used to generate decision information, cognitive information, or neurophysiological information is obtained from one or more data sources selected from the group: data supplied by the individual; a portable or wearable device; a telematics device or vehicle or craft comprising a telematics device, data recorder or one or more sensors; a building or structure system (such as an alarm system or automation system for a home or building); a medical device; a magnetoencephalography device; government data sources; industrial control systems; one or more sensors or one or more devices comprising one or more sensors; and external data providers, external data sources, or external networks. This information may be received directly or indirectly from the data source and information from the data source may be processed (such as by a processor executing a decision-making process algorithm, cognitive information algorithm, cognitive analysis algorithm, or distraction algorithm) to generate other information.
The information used to generate additional information, the situation information, the propensity model algorithm, the predictive model algorithm, the cognitive maps of individuals, the risk score, the cost of insurance information, the algorithms used to generate the risk score or cost of insurance, the feedback or the behavior modification algorithms, or the other algorithms or information discussed herein may be stored on one or more non-transitory computer-readable media that are connected or in communication with one or more devices (including portable devices, wearable devices, desktops, laptops, servers, etc.), or that are in operable communication via wired (internet protocol, etc.) or wireless formats (Wi-Fi, Bluetooth™, IEEE 802.11 formats, cellular communication data formats (GPRS, 3G, 4G (Mobile WiMAX, LTE, etc.), or optical, etc.) with one or more devices or processors. In one embodiment, one or more of the devices (such as a portable device for example) communicates this information to another device (such as a server). The information or information used to generate the information may be stored on a non-transitory computer-readable media on or in operable communication with the portable or wearable device, a remote computer or server (such as an insurer's computer or the insured's computer, for example), or an automobile or craft or device operatively connected thereto).

Frank and Stempora are analogous art because they are from the same problem-solving area, identifying vital signs from an image for downstream processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Frank and Stempora before him or her, to combine the teachings of Frank and Stempora. The rationale for doing so would have been to provide an accurate insurance quote based on the risk level of an individual.
Therefore, it would have been obvious to combine Frank and Stempora to obtain the invention as specified in the instant claim(s).

Claims 8 and 18: Frank teaches the trained machine learning model is a K-means clustering model or a neural network model (0107 and 0209).

Claim 11: Claim 11 essentially recites a computer system comprising a smart device, wherein the smart device is configured to complete the steps of claim 1. As Frank teaches the recited computer system (0020), claim 11 is rejected using the same rationale used in the rejection of claim 1.

Allowable Subject Matter

Claims 9, 10, 19 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Note

The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI whose telephone number is (571)270-7761. The examiner can normally be reached on M-Th 8-6, Fri 7-12/OFF. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steph Hong, can be reached on (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED H ZUBERI/ Primary Examiner, Art Unit 2178

Prosecution Timeline

Sep 20, 2023
Application Filed
Dec 10, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585923
DESPARSIFIED CONVOLUTION FOR SPARSE ACTIVATIONS
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12582478
SYSTEMS AND METHODS FOR INTEGRATING INTRAOPERATIVE IMAGE DATA WITH MINIMALLY INVASIVE MEDICAL TECHNIQUES
Granted Mar 24, 2026 · 2y 5m to grant
Patent 12579650
IMPROVED SPINAL HARDWARE RENDERING
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12567496
METHOD AND APPARATUS FOR DISPLAYING AND ANALYSING MEDICAL SCAN IMAGES
Granted Mar 03, 2026 · 2y 5m to grant
Patent 12547819
MODULAR SYSTEMS AND METHODS FOR SELECTIVELY ENABLING CLOUD-BASED ASSISTIVE TECHNOLOGIES
Granted Feb 10, 2026 · 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
98%
With Interview (+27.8%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 437 resolved cases by this examiner. Grant probability derived from career allow rate.
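The projections above are simple arithmetic on the examiner's career counts: 306 grants out of 437 resolved cases gives the 70% baseline, and adding the +27.8-point interview lift (capped at 100%) gives the 98% with-interview figure. A minimal sketch of that arithmetic, assuming the dashboard does nothing more sophisticated (the function names are illustrative, not the tool's actual API):

```python
# Back-of-the-envelope reproduction of the dashboard's headline numbers.
# Counts (306 granted / 437 resolved) and the +27.8-point interview lift
# come from the page itself; everything else here is illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pts: float) -> float:
    """Grant probability after adding the interview lift, capped at 100%."""
    return min(base_pct + lift_pts, 100.0)

base = allow_rate(306, 437)           # ~70.0%
boosted = with_interview(base, 27.8)  # ~97.8%, displayed as 98%

print(f"Grant probability: {base:.0f}%")
print(f"With interview:    {boosted:.0f}%")
```

This reading treats the interview lift as an additive percentage-point adjustment to the career allow rate; if the tool instead computes it from the subset of resolved cases that had an interview, the formula would differ, but the displayed values (70% and 98%) are consistent with the additive sketch.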
