Prosecution Insights
Last updated: April 19, 2026
Application No. 18/374,250

CONTEXTUAL AWARENESS FOR UNSUPERVISED ADMINISTRATION OF COGNITIVE ASSESSMENTS REMOTELY OR IN A CLINICAL SETTING

Final Rejection (§101, §103)
Filed: Sep 28, 2023
Examiner: WEBB, JESSICA MARIE
Art Unit: 3683
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Linus Health Inc.
OA Round: 2 (Final)
Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 33% (33 granted / 99 resolved; -18.7% vs TC avg)
Interview Lift: +52.5% across resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline); 21 currently pending
Total Applications: 120 across all art units

Statute-Specific Performance

§101: 33.6% (-6.4% vs TC avg)
§103: 34.3% (-5.7% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 99 resolved cases.
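The report's percentages are simple ratios, so they can be sanity-checked directly. The sketch below (variable names are ours, not the report's) recomputes the career allow rate and the Tech Center average implied by each statute's delta; note the deltas are internally consistent with a single TC average of about 40%.

```python
# Career allow rate: 33 granted out of 99 resolved cases.
granted, resolved = 33, 99
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # ~33.3%, displayed as 33%

# Each statute delta is (examiner rate - TC average), so the implied
# TC average is the examiner's rate minus the reported delta.
statute_stats = {          # statute: (examiner allow %, delta vs TC avg)
    "§101": (33.6, -6.4),
    "§103": (34.3, -5.7),
    "§102": (5.1, -34.9),
    "§112": (23.3, -16.7),
}
for statute, (rate, delta) in statute_stats.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
```

All four statutes imply the same ~40% TC average, which is what one would expect if the report derived each delta from a single Tech Center baseline.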

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

In the amendment dated 01/29/2026, the following occurred: Claims 1, 12, 15 and 20 have been amended. Claims 1-20 are pending and have been examined.

Priority

This application claims priority to U.S. Provisional Patent Application No. 63/377,435, filed 09/28/2022.

Information Disclosure Statement

The Information Disclosure Statement (IDS) submitted on 12/18/2025 follows the provisions of 37 CFR 1.97 and has been fully considered by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 15 and 20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.

Eligibility Analysis Step 1 (YES): Claims 1, 15 and 20 fall into at least one of the statutory categories (i.e., process, machine or non-transitory CRM).

Eligibility Analysis Step 2A1 (YES): The claims recite an abstract idea.
The identified abstract idea is for assessing environmental context around an individual taking an assessment as underlined (claim 1 being representative): receiving a plurality of signals, each signal from one sensor of a plurality of sensors, wherein each signal is associated with a modality of assessment; processing each of the plurality of signals with an individualized signal processing module; extracting a plurality of features, each from one of the processed plurality of signals; aggregating the plurality of features into a machine learning input with a feature processing module; providing the machine learning input to a machine learning algorithm; associating the aggregated plurality of features, by the machine learning algorithm, to likelihood values for each of one or more environmental interference types based on a series of time instances; determining the one or more environmental interference types is present based on a predetermined likelihood threshold; and outputting an environmental context of the individual, the environmental context comprising one or more determined present environmental interference types.

The identified claim elements, as drafted, recite a process that, under the broadest reasonable interpretation (BRI), covers a method of organizing human activity (i.e., managing personal behavior or relationships or interactions between people, including following rules or instructions) but for the recitation of generic computer component language (discussed below in 2A2). That is, other than reciting the generic computer component language, the claimed invention amounts to managing personal behavior or relationships or interactions between people.
For example, but for the generic computer component language, the claims encompass a person providing the machine learning input to a machine learning algorithm that associates the aggregated features to likelihood values, determining one or more environmental interference types is present based on a predetermined likelihood threshold, and outputting the one or more environmental interference types determined as being present. The Examiner notes that certain “method[s] of organizing human activity” include a person’s interaction with a computer (see MPEP § 2106.04(a)(2)(II)). The Examiner notes that the Applicant has described machine learning to encompass simplistic mathematical models such as logistic regression (see Specification para. 0056), and thus the machine learning is interpreted to be part of the abstract idea. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people but for the recitation of generic computer component language, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. See additionally MPEP § 2106. Accordingly, the claims recite an abstract idea.

Eligibility Analysis Step 2A2 (NO): The judicial exception, the above-identified abstract idea, is not integrated into a practical application. In particular, the claims recite the additional elements of a computing node (claim 15) having a non-transitory computer readable storage medium (claims 15 and 20) and a processor (claims 15 and 20) having various processing modules (claims 1, 15 and 20) that implement the identified abstract idea.
The additional elements aforementioned are not described by the applicant and are recited at a high level of generality (i.e., a generic computer or computer component performing a generic computer or computer component function that facilitates the identified abstract idea) such that these amount to no more than mere instructions to apply the exception using a generic computer component (see Specification, e.g., at para. 0064 and para. 0068). See MPEP § 2106.04(d)(I). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims further recite the additional elements of a plurality of sensors that collect, transmit or output data. The additional elements are recited at a high level of generality (i.e., each as a general means of collecting, transmitting or outputting data) and each amounts to a location from which data is received or to which data is transmitted or outputted, each of which represents an extra-solution activity (e.g., mere data gathering and data output). MPEP § 2106.04(d)(I) indicates that extra-solution data gathering and data output activity cannot provide a practical application. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claims are directed to an abstract idea.

Eligibility Analysis Step 2B (NO): The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computing node (claim 15) having a (presumably non-transitory) computer readable storage medium (claims 15 and 20) and a processor (claims 15 and 20) to perform the method (represented by claim 1) amount to no more than mere instructions to apply the exception using a generic computer or generic computer component. Mere instructions to apply an exception using generic computer(s) and/or generic computer component(s) cannot provide an inventive concept (“significantly more”). See MPEP § 2106.05(f). Also as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the plurality of sensors (i.e., each a device that collects, transmits or outputs data) are each considered extra-solution activity. This has been re-evaluated under the “significantly more” analysis and determined to be well-understood, routine, conventional activity in the field. MPEP § 2106.05(d)(II) indicates that receiving, transmitting or outputting data over a network has been held by the courts to be well-understood, routine, conventional activity (citing TLI Communications, Symantec, OIP Techs., and buySAFE). See also MPEP § 2106.05(g) (citing Cybersource, Mayo, OIP Techs.). Well-understood, routine, conventional activity cannot provide an inventive concept (“significantly more”). As such, the claims are not patent eligible.

Dependent claims 2-14 and 16-19, when analyzed as a whole, are similarly rejected under 35 U.S.C. §101 because the additional limitation(s) fail(s) to establish that the claim(s) is/are not directed to an abstract idea without significantly more.
The claims, when considered alone or as an ordered combination, either (1) merely further define the abstract idea, (2) do not further limit the claim to a practical application, or (3) do not provide an inventive concept that would render the claims subject matter eligible.

Claims 2 and 16 each further recites the additional element of the plurality of sensors including at least two of an accelerometer, a gyroscope, a microphone, and a video camera. Under practical application, the additional elements merely generally link the use of the abstract idea to a particular technological environment or field of use. MPEP § 2106.04(d)(I) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide a practical application. Accordingly, even in combination, this additional element does not integrate the abstract idea into a practical application. The claim is directed to an abstract idea. Also, as discussed above with respect to integration of the abstract idea into a practical application, the additional element of the plurality of sensors including at least two of an accelerometer, a gyroscope, a microphone, and a video camera is considered generally linking the use of the abstract idea to a particular technological environment or field of use. This has been re-evaluated under the “significantly more” analysis and has also been found insufficient to provide significantly more. MPEP § 2106.05(h) indicates that generally linking an abstract idea to a particular technological environment or field of use cannot provide significantly more. Accordingly, even in combination, this additional element does not provide significantly more; as such, the claim is not patent eligible.

Claim(s) 3-4, 7-10, 12-14 and 17-18 merely further describe(s) the abstract idea (e.g.
the processing of each of the plurality of signals, the transforming each of the plurality of signals, the machine learning input is provided to the machine learning algorithm (such as logistic regression) for each time instance in a window of time instances, aggregating the plurality of features into a machine learning input occurs for each time instance in a window of time instances, suggesting a corrective action for the environmental interference by flagging or correcting a received signal, suggesting a corrective action by providing a recommendation, suggesting an intervention based on the environment context of the individual, the output, the qualitative environmental score). See analysis, supra. The Examiner notes that the abstract idea could be characterized in Claims 4 and 18 as a certain method of organizing human activity (managing personal behavior or relationships or interactions between people) along with a mathematical process, reciting multiple abstract ideas falling into different abstract idea sub-groupings. This characterization is added for completeness: In Claims 4 and 18, the limitation of “performing a Discrete Fourier Transform (DFT) on each of the plurality of signals”, as drafted, is a process that under broadest reasonable interpretation covers a mathematical concept that includes mathematical relationships, mathematical formulas or equations, and mathematical calculations but for the recitation of generic computer component language (discussed at Step 2A2). See Specification, e.g., at para. 0027. That is, other than reciting the generic computer component language, the claim recites a mathematical procedure for converting signal data. For example, but for the generic computer component language, the claim encompasses a person executing a discrete Fourier transform in the manner described in the identified abstract idea, supra. The Examiner notes that the mathematical concept need not be expressed in mathematical symbols. 
MPEP § 2106.04(a)(2)(I). If a claim limitation, under its broadest reasonable interpretation, represents the creation of mathematical interrelationships between data but for the recitation of generic computer component language, then it falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes.

Claims 5 and 19 each further recite the additional element of the feature processing module as comprising a convolutional neural network, which given the BRI is considered generally linking the identified abstract idea. See analysis, supra.

Claim 6 further recites the abstract idea, including recitation of the machine learning algorithm (such as logistic regression) as comprising a recurrent neural network, which given the BRI is considered generally linking the identified abstract idea. See analysis, supra.

Claim 11 further recites the abstract idea (e.g., determining a number of times that a device is dropped, determining a manual dexterity based on the number of times that the device is dropped and the recorded movement); the additional element of the device that includes at least one sensor of the plurality of sensors, which is considered generally linking; and further recites the additional elements of the plurality of sensors (e.g., collecting, transmitting or outputting data), which is considered insignificant extra-solution activity. See analyses, supra.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
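Before turning to the art mapping, it may help to see the flow recited in representative claim 1 in concrete form: receive per-sensor signals, extract and aggregate features, map the aggregate to per-type likelihood values, and compare each against a predetermined threshold. The sketch below is purely illustrative; every function, constant, and interference type in it is our own hypothetical stand-in, not the applicant's implementation or anything disclosed in Krishna Murthy or Bures.

```python
import numpy as np

# Hypothetical interference types; the claim leaves these open-ended.
INTERFERENCE_TYPES = ["background_noise", "poor_lighting", "device_motion"]
LIKELIHOOD_THRESHOLD = 0.5  # stand-in for the "predetermined likelihood threshold"

def extract_features(signal):
    """Toy per-signal feature extraction (mean and variance)."""
    return [float(np.mean(signal)), float(np.var(signal))]

def likelihoods(features):
    """Stand-in for the claimed machine learning algorithm: a fixed
    logistic map from the aggregated features to per-type likelihoods."""
    score = sum(features)
    return {t: 1.0 / (1.0 + np.exp(-(score - i)))
            for i, t in enumerate(INTERFERENCE_TYPES)}

def environmental_context(signals):
    # Aggregate features from all sensor signals into one ML input,
    # score each interference type, and keep those above threshold.
    ml_input = [f for s in signals for f in extract_features(s)]
    probs = likelihoods(ml_input)
    return [t for t, p in probs.items() if p >= LIKELIHOOD_THRESHOLD]

context = environmental_context([np.ones(8), np.zeros(8)])
```

Note that the Action's §101 position turns on exactly this observation: once the generic computer language is set aside, the remaining steps reduce to applying a simple mathematical map (here a logistic function, which the specification is said to embrace at para. 0056) and comparing the result to a threshold.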
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-10 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Krishna Murthy et al. (US 2022/0300786 A1; “Krishna Murthy” or “K. M.” herein) in view of Bures et al. (US 2020/0322703 A1; “Bures” herein).

Re.
Claim 1, Krishna Murthy teaches a method for assessing environmental context around an individual taking an assessment ([0011], [0025] teach utilizing video activity modality analysis and Neuro-Symbolic artificial intelligence (AI) to analyze physical environment context information of a subject within a classroom.), the method comprising: receiving a plurality of signals, each signal from one sensor of a plurality of sensors, wherein each signal is associated with a modality of assessment ([0015] teaches a network 110 capable of receiving and transmitting data, voice, and/or video signals (a plurality of signals), including multimedia signals that include voice, data, and video information. [0002], [0017], [0019] teach using sensors to perceive a situation… client device 120 includes a user interface 122, application 124, and sensor 126, which represents a variety (plurality) of sensors that collect and provide various kinds of data… For example, client device 120 utilizes one or more sensors (e.g., a camera, etc.) for capturing images of the operating environment (associated with a modality of assessment). Also, [0003] teaches a system is multimodal if it has more than one modality, which is the classification of an independent channel of sensory input/output between a computer and a human.); processing each of the plurality of signals with an individualized signal processing module ([0015], [0019] teach client device 120 includes user interface 122, application 124, and sensor 126… client device 120 utilizes one or more sensors for capturing (processing) images of the operating environment of the client device 120. 
Abstract, [0018], [0024] teach one or more processors execute event program 200 of the client device 120 to identify a sequence of actions of the subject within the (processed) sensor feed / video feed.); extracting a plurality of features, each from one of the processed plurality of signals ([0036] teaches event program 200 uses a CNN (e.g., machine learning algorithm) with spatio-temporal three-dimensional (3D) kernels (3D CNNs) to extract spatio-temporal features from the sensor feed for action recognition tasks corresponding to one or more users within the classroom (i.e., event program identifies the presence of user and actions a user is performing within the physical environment).); aggregating the plurality of features into a machine learning input with a feature processing module (Abstract, [0018], [0024] teach one or more processors execute event program 200 of the client device 120. [0036] teaches in step 212, event program 200 identifies a user within the physical environment of the subject… event program 200 utilizes client device 120 and/or IOT device 130 to identify a set of conditions within a physical environment of a subject… event program 200 utilizes a sensor feed / video feed… to identify a person/user within a classroom/physical environment of a student/subject… event program 200 uses the CNN to extract the spatio-temporal features and inputs the extracted spatio-temporal features from the sensor/video feed… into a machine learning algorithm to identify the set of conditions / the context of the classroom. Note: additionally, Fig. 
3, [0032], [0045] teach event program 200 feeds the outputs of the separate layers of dense layer 330 into attention layer 340 that concatenates (also aggregating) the extracted features of RNN 310 and RNN 320.); providing the machine learning input to a machine learning algorithm ([0036] teaches … event program 200 inputs the extracted spatio-temporal features from the sensor/video feed… into a machine learning algorithm to identify the set of conditions / the context of the classroom. Note: additionally, Fig. 3, [0032], [0045] teach the event program 200 feeds the outputs… into attention layer 340 that concatenates the extracted features of RNN 310 and RNN 320; and passes the output of attention layer 340 into neural network 350.); […] for each of one or more environmental interference types […] (see below); determining the one or more environmental interference types is present based on […] ([0032] teach event program 200 utilizes a machine learning algorithm to generate and transmit an interactive activity to a subject using IOT device 130… For example, event program 200 can sort identified distraction modalities (determined environmental interference types) based at least in part on a determined disturbance level of the user / a disturbance score with the physical environment (presence for each of the one or more environmental interference types), an effectiveness score with a student, and/or a responsiveness score with the student… various aggregation methods can be applied.); and outputting an environmental context of the individual, the environmental context comprising one or more determined present environmental interference types ([0026], Fig. 
3, [0032], [0045] teach the event program 200 passes the output of attention layer 340 into neural network 350, which performs the aggregation methods to output a score for each identified distraction modality within the physical environment (outputting… one or more determined present environmental interference types). [0002] teaches context-aware systems are concerned with the acquisition of context, the abstraction of understanding of context, and application behavior based on the recognized context (the output of the machine learning algorithm).) Krishna Murthy does not teach associating the aggregated plurality of features, by the machine learning algorithm, to likelihood values… based on a series of time instances; or a predetermined likelihood threshold. Bures teaches associating the aggregated plurality of features, by the machine learning algorithm, to likelihood values… based on a series of time instances; and a predetermined likelihood threshold ([0199], [0389] teach a measurement database 542 storing measurement entries corresponding to time-series measurement values. Fig. 1, [0294] teach inference functions can be generated by the monitoring data analysis system 140 by training a machine learning model on a set of training data. [0296] teaches a plurality of feature vectors (the aggregated plurality of features) can be generated from the plurality of measurement entries, where a training function is performed on the plurality of feature vectors to train the model. [0284] teaches detection functions can be performed… on incoming measurement data to determine if a condition of interest of a particular feature is determined/predicted. [0285] teaches the detection functions can take one or more measurement entries as input and can generate a probability value (associating the aggregated plurality of features to likelihood values), where the probability value indicates a probability of whether or not the condition of interest exists. 
[0285] also teaches generating a binary value by comparing the probability value to a probability threshold (predetermined likelihood threshold), where the binary value indicates whether or not the condition of interest exists.) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy to perform data acquisition and monitoring data analysis with a system implementing machine learning and to use this information as part of a system and method of processing time-series measurement entries of a measurement database as taught by Bures, with the motivation of improving environmental monitoring technologies in an indoor physical environment, machine learning technology application in the data environment, data acquisition, and signal processing (see Bures at para. 0004, 0019, 0253, 0420, 0441).

Re. Claim 2, Krishna Murthy/Bures teaches the method of claim 1, wherein the plurality of sensors includes at least […] a video camera ([0015], [0017], [0019] teach utilizing one or more sensors (e.g., a camera) for capturing images / multimedia signals that include voice data and video information.) Krishna Murthy may not teach at least two of an accelerometer, a gyroscope, a microphone, and a video camera. Bures teaches at least an accelerometer, a gyroscope, a microphone (Fig. 1, [0149], [0160], [0175] teach the set of sensor devices 1-W can include one or more accelerometers, gyroscopes, and/or magnetometers… a microphone… cameras or other imaging sensors that can capture visible video and/or still image data.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy to perform data acquisition with multiple sensor units having multiple PCB mounted sensors such as these and to use this information as part of a system and method of processing time-series measurement entries of a measurement database as taught by Bures, with the motivation of improving environmental monitoring technologies in an indoor physical environment, machine learning technology application in the data environment, data acquisition, and signal processing (see Bures at para. 0004, 0019, 0253, 0420, 0441).

Re. Claim 3, Krishna Murthy/Bures teaches the method of claim 1, wherein the processing of each of the plurality of signals comprises […] each of the plurality of signals (see claim 1 prior art rejection). Krishna Murthy may not teach transforming each of the plurality of signals. Bures teaches transforming each of the plurality of signals ([0253] teaches the function database 543 can further include function entries for a plurality of signal processing functions to process time-series data, to determine summary measures for time-series data in the measurement database. The signal processing functions can include and/or utilize… discrete Fourier transform function… configured to transform, filter, aggregate, summarize, and/or otherwise process discrete time-series data. Some or all signal processing functions can be configured to process time-series data for a particular sensor device of a particular multi-sensor unit 120. For example, measurement entries for a particular sensor device of a particular multi-sensor unit 120 with timestamps within a pre-defined time frame can be processed via one or more signal processing functions.)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy to perform signal processing on data acquired with the multiple sensor units and their multiple PCB mounted sensors and to use this information as part of a system and method of processing time-series measurement entries of a measurement database as taught by Bures, with the motivation of improving environmental monitoring technologies in an indoor physical environment; machine learning technology application in the data environment; and data acquisition, processing and analysis (see Bures at para. 0004, 0019, 0253, 0420, 0441).

Re. Claim 4, Krishna Murthy/Bures teaches the method of claim 3, wherein the transforming each of the plurality of signals comprises performing a Discrete Fourier Transform (DFT) on each of the plurality of signals (see claim 3 prior art rejection, e.g., Bures [0253].)

Re. Claim 5, Krishna Murthy/Bures teaches the method of claim 1, wherein the feature processing module comprises a convolutional neural network (K. M. [0036] teaches in step 212… event program 200 utilizes a sensor feed / video feed… to identify a person/user within a classroom/physical environment of a student/subject… event program 200 inputs the extracted spatio-temporal features from the sensor/video feed… into a machine learning algorithm to identify the set of conditions / the context of the classroom. Additionally, the event program 200 uses a CNN (the feature processing module comprises a convolutional neural network) to extract the spatio-temporal features from the sensor feed for the action recognition tasks.)

Re. Claim 6, Krishna Murthy/Bures teaches the method of claim 1, wherein the machine learning algorithm comprises a recurrent neural network (K. M. Fig.
3, [0032], [0036], [0045] teach in step 212, event program 200 identifies a user within the physical environment of the subject… event program 200 utilizes client device 120 and/or IOT device 130 to identify a set of conditions within a physical environment of a subject… event program 200 utilizes a sensor feed / video feed… to identify a person/user within a classroom/physical environment of a student/subject… event program 200 inputs the extracted spatio-temporal features from the sensor/video feed… into a machine learning algorithm, i.e., RNN 310 or RNN 320 (the machine learning algorithm comprises a recurrent neural network), to identify the set of conditions / the context of the classroom.)

Re. Claim 7, Krishna Murthy/Bures teaches the method of claim 1, wherein the machine learning input is provided to the machine learning algorithm […] (see claim 1 prior art rejection.) Krishna Murthy may not teach providing the machine learning input for each time instance in a window of time instances. Bures teaches collecting data for each time instance in a window of time instances ([0044] teaches the current mode can indicate a quality, precision, resolution, and/or tuning of the amount and/or type of data collected by each of the sensor devices 1-W. For example, this can dictate the quality and quantity of data collected in each measurement by a particular sensor device, in accordance with the particular measurement rate that is determined for the particular sensor device. For example, a number of data points (time instances) captured by the sensor device at each collected measurement… and/or other tuning of the amount of and/or richness of data captured at each time and/or within each time window can be dictated by the measurement rate. [0054] teaches performing one or more processing functions on measurement data collected by multiple sensor devices at the same time and/or within the same time window (for each time instance in the window).)
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy to perform various upstream processes using the data collected in a measurement by a particular sensor device, for example, a number of data points captured by the sensor device at a collected measurement and within a time window, and to use this information as part of a system and method of processing time-series measurement entries of a measurement database as taught by Bures, with the motivation of improving environmental monitoring technologies in an indoor physical environment; machine learning technology application in the data environment; and data acquisition, processing and analysis (see Bures at para. 0004, 0019, 0253, 0420, 0441).

Re. Claim 8, Krishna Murthy/Bures teaches the method of claim 7, wherein the aggregating the plurality of features into a machine learning input occurs (see claim 1 prior art rejection) for each time instance in a window of time instances (see claim 7 prior art rejection).

Re. Claim 9, Krishna Murthy/Bures teaches the method of claim 1, further comprising suggesting a corrective action for the environmental interference by flagging or correcting a received signal (K. M. [0036] teaches in this example, event program 200 inputs the sensor feed of the one or more IOT enabled devices within a classroom into a machine learning algorithm to identify a set of conditions (e.g., context) of the classroom. K. M.
[0039]-[0039] teaches, additionally, event program 200 utilizes reinforcement techniques to calibrate audio-visual event recommendations (suggest a corrective action) based on a distraction level (e.g., deviation from current sequence of actions) of the teacher due to the audio-visual event… Additionally, event program 200 utilizes the item and actions of the student to determine whether the activity of the student corresponds to the hazardous activity (flagging the received signal) (i.e., automatically stimulating audiovisual interaction between a distraction modality and a subject based on a set of conditions of a physical environment of the subject using a recommended interactive activity for current state (e.g., activity) of the subject) (correcting the received signal), e.g., triggering performance of the audio-visual event to avert actions of the student.) Re. Claim 10, Krishna Murthy/Bures teaches the method of claim 1, further comprising suggesting a corrective action to negate an effect of the environmental interference by providing a recommendation to modify the environmental context (K. M. [0036] teaches in this example, event program 200 inputs the sensor feed of the one or more IOT enabled devices within a classroom into a machine learning algorithm to identify a set of conditions (e.g., context) of the classroom. K. M. 
[0039]-[0039] teaches, additionally, event program 200 utilizes reinforcement techniques to calibrate audio-visual event recommendations (suggest a corrective action) based on a distraction level (e.g., deviation from current sequence of actions) of the teacher due to the audio-visual event (based on the environmental interference)… Additionally, event program 200 utilizes the item and actions of the student to determine whether the activity of the student corresponds to the hazardous activity (i.e., automatically stimulating audiovisual interaction between a distraction modality and a subject based on a set of conditions of a physical environment of the subject using a recommended interactive activity for current state (e.g., activity) of the subject (provided)), e.g., triggering performance of the audio-visual event to avert (modify) actions of the student.) Re. Claim 15, the subject matter of claim 15 is essentially defined in terms of a system, which is technically corresponding to method claim 1. Since claim 15 is analogous to claim 1, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 1. Further, Krishna Murthy teaches the feature(s) of a computing node comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computing node to cause the processor to perform the method (K. M. [0058], [0060] teaches an article of manufacture (non-transitory CRM) including instructions which implements aspects of the functions/acts specified in the flowcharts and/or block diagrams… may be provided to a processor of a general purpose computer to produce a machine (computing node), such that the instructions execute via the processor of the computer to carry out operations of the present invention.) Re. Claim 16, the subject matter of claim 16 is essentially defined in terms of a system, which is technically corresponding to method claim 2. 
Since claim 16 is analogous to claim 2, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 2. Re. Claim 17, the subject matter of claim 17 is essentially defined in terms of a system, which is technically corresponding to method claim 3. Since claim 17 is analogous to claim 3, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 3. Re. Claim 18, the subject matter of claim 18 is essentially defined in terms of a system, which is technically corresponding to method claim 4. Since claim 18 is analogous to claim 4, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 4. Re. Claim 19, the subject matter of claim 19 is essentially defined in terms of a system, which is technically corresponding to method claim 5. Since claim 19 is analogous to claim 5, it is similarly analyzed and rejected in a manner consistent with the rejection of claim 5. Re. Claim 20, the subject matter of claim 20 is essentially defined in terms of (presumably) a manufacture, which is technically corresponding to method claim 1 and system claim 15. Since claim 20 is analogous to claims 1 and 15, it is similarly analyzed and rejected in a manner consistent with the rejection of claims 1 and 15. Claims 11 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Krishna Murthy in view of Bures and Krimon et al. (US 2019/0038222 A1; “Krimon” herein). Re. Claim 11, Krishna Murthy/Bures teaches the method of claim 1, further comprising: […], wherein the device includes at least one sensor of the plurality of sensors (K. M.
[0002], [0017], [0019] teaches using sensors to perceive a situation… client device 120 includes a user interface 122, application 124, and sensor 126, which represents a variety of sensors that collect and provide various kinds of data.); recording movement of the individual from at least one sensor of the plurality of sensors; and determining a manual dexterity of the individual based on the number of times that the device is dropped and the recorded movement. Krishna Murthy may not teach determining a number of times that a device is dropped, wherein the device includes at least one sensor of the plurality of sensors; recording movement of the individual from at least one sensor of the plurality of sensors; and determining a manual dexterity of the individual based on the number of times that the device is dropped and the recorded movement. Krimon teaches determining a number of times that a device is dropped ([0037] the intended motion inference and motion modulation engine 320 may detect motion, and identify objects 321 and the user's intended actions 323 with respect to the objects… The output of the report generation 345 may be used by a variety of people or institutions to better care for the user. For example, medical professionals may monitor the user's condition, care givers or relatives may be alerted to an incident (e.g., dropped cup) (determined a number of times), device maintainers may be alerted to device malfunctions, etc.); recording movement of the individual from at least one sensor of the plurality of sensors ([0036] teaches a consolidated sensory network 330 may provide an intended motion inferencer and motion modulation engine 320 with data from a variety of sensors on the assistive device and in the environment (necessarily recorded). 
Hand motion tracking 331 using sensors on the assistive device, and environmental modeling using data from location 333, speech 335, and vision 337 sensors, may be used to provide the intended motion inferencer and motion modulation engine 320 with environmental information from the sensors.); and determining a manual dexterity of the individual based on the number of times that the device is dropped and the recorded movement (see previous citations. [0065] teaches adding objective assessment per a pre-defined scale such as the Unified Parkinson Disease Rating Scale (UPDRS)… timing tests may be performed to see how many times the patient may touch their index finger to the thumb in a specified interval, or how many times the user may pronate/supinate their hands, as measures of dexterity (determining a manual dexterity). A subset of tests that may be easily automated by the assistive device may be run regularly (e.g., weekly or biweekly). These strength and dexterity tests may be performed much more frequently than if performed only at an annual/semi-annual physical, which is currently typical for patients.) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy/Bures to determine a number of times that a device is dropped and to use this information as part of mitigating the effects of neuro-muscular ailments (i.e., analysis of data from a variety of sensors and regularly running automated test routines) as taught by Krimon (e.g., para. 0036-0037, 0065), with the motivation of improving aspects of patient health and quality of life, e.g., treating the effects of neuro-muscular ailments or neurodegenerative diseases as well as improving machine learning technology of the system (see Krimon at Abstract, para. 0015-0016, 0020, 0023, 0052, 0065). Re. 
Claim 13, Krishna Murthy/Bures teaches the method of claim 1, wherein the output comprises […] an environmental score (K. M. [0036] teaches in step 212, event program 200 inputs… into a machine learning algorithm to identify the set of conditions / the context within the physical environment of the subject, for example, identifies a person within the classroom… additionally, event program identifies the presence of user and actions a user is performing within the physical environment. K. M. Fig. 2, [0011], [0036], [0040] teaches identifying a context-aware audio-visual activity based on the set of conditions / context of the physical environment identified in step 212 and the level of the risk event/harmful object for an identified action of the student (environmental score) identified in step 216.) Krishna Murthy may not teach a quantitative environmental score and a qualitative environmental score. Krimon teaches a quantitative environmental score and a qualitative environmental score ([0030] teaches location 231 and environmental context 233 may be used to assist with action weighting 230… a list of possible actions may be defined… For some contexts, the weight assigned to an action may be 0 or 1 (e.g., for a scale of percentages 0 to 1) (quantitative environmental score). Thus, different levels of strength may be applied to grabbing objects; e.g., a stronger grip is used on a ceramic coffee mug and a lighter grip for a foam cup (also a qualitative score). See also [0032].)
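The Examiner reads Krimon's context-dependent action weighting onto the claimed quantitative and qualitative environmental scores. A hypothetical sketch under that reading, in which the weight table, contexts, and labels are all invented for illustration:

```python
# Quantitative score on Krimon's 0-to-1 scale, keyed by context
# (values here are assumptions, not taken from Krimon).
GRIP_WEIGHTS = {
    "ceramic_mug": 0.9,
    "foam_cup": 0.3,
}

# Qualitative labels derived from the quantitative weight: the first
# (floor, label) pair whose floor the weight meets is used.
QUALITATIVE_LABELS = [(0.7, "strong"), (0.4, "moderate"), (0.0, "light")]


def score_action(context: str) -> tuple:
    """Return (quantitative, qualitative) environmental scores for a
    context; unknown contexts default to the lowest weight."""
    weight = GRIP_WEIGHTS.get(context, 0.0)
    label = next(lbl for floor, lbl in QUALITATIVE_LABELS if weight >= floor)
    return weight, label
```

The point of the sketch is only that one underlying measurement can be reported both numerically (the 0-1 weight) and as a quality (the grip label), which is how the Examiner maps Krimon to the two claimed score types.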
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy/Bures to perform various functions using actions weighted given the environmental context as taught by Krimon, with the motivation of improving aspects of patient health and quality of life, e.g., treating the effects of neuro-muscular ailments or neurodegenerative diseases as well as improving machine learning technology of the system (see Krimon at Abstract, para. 0015-0016, 0020, 0023, 0052, 0065). Re. Claim 14, Krishna Murthy/Bures/Krimon teaches the method of claim 13, wherein the qualitative environmental score indicates one or more of a degree of distraction for a particular interference, a potential degree of impact on the individual to perform the assessment, and an ability to process the plurality of signals compared to processing under an optimal set of conditions (see claim 13 prior art rejection. K. M. Fig. 3, [0026], [0032] teaches Attention layer 340 is a seq2seq model that turns one sequence into another sequence with an attention optimization that allows a decoder to look at the input sequence selectively that concatenates extracted features… Neural network 350 is a classifier that performs aggregation methods (qualitative and quantitative scoring) … to output a score for each identified distraction modality within the physical environment (indicates a degree of distraction for a particular interference).) Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Krishna Murthy in view of Bures and Sicconi et al. (US 2020/0057487 A1; “Sicconi” herein). Re. Claim 12, Krishna Murthy/Bures teaches the method of claim 1, further comprising […] based on the environmental context of the individual (K. M. [0036] teaches identifying a set of conditions (e.g., context) of the classroom. K. M.
[0032] teaches sorting identified distraction modalities (determined environmental interference types) based at least in part on a determined disturbance level of the user (the environmental context comprising the one or more environmental interference types). K. M. [0026], Fig. 3, [0032], [0045] also teaches outputting a score for each identified distraction modality within the physical environment (the environmental context). [0044] teaches transmitting a notification to a user / teacher.) Krishna Murthy/Bures may not teach suggesting an intervention based on the environmental context of the individual. Sicconi teaches suggesting an intervention based on the environmental context of the individual (Abstract teaches using artificial intelligence to evaluate, correct, and monitor user attentiveness… a user alert mechanism outputting a directional alert to a user (suggesting an intervention). Additionally, Fig. 3, [0050], [0094] teach providing audio feedback to the distracted user requesting that the user pay attention, e.g., “Eyes on road”, “Hands on wheel”, “Watch the car that stopped before you”, “Pay attention to the cyclist on your right”. See also [0052], [0068], [0071] (the environmental context).) Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to have modified the audio-visual activity safety recommendation system and method of Krishna Murthy/Bures to provide directional alerts / request that the user pay attention based on analyzing conditions to perform distraction detection as taught by Sicconi (see at least para. 0071), with the motivation of improving user attentiveness in the presence of distractions or improving physical and mental conditions during a cognitive task (e.g., driving) using machine-learning processes (see Sicconi at Abstract and para. 0003, 0043, 0071). 
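The amended claim 1 thresholding step and the claim 12 intervention step, as the Examiner maps them, amount to a simple pipeline: compare per-type likelihood values against a predetermined threshold, then suggest an intervention for each type deemed present. The sketch below is a hypothetical illustration only; the threshold value, interference type names, and alert messages are invented (the messages merely echo the spirit of Sicconi's directional alerts such as "Eyes on road"):

```python
PREDETERMINED_THRESHOLD = 0.5  # assumed value; the claim does not fix one

# Hypothetical intervention per interference type.
INTERVENTIONS = {
    "background_speech": "Move to a quieter room before continuing.",
    "second_person": "Please complete the assessment alone.",
    "poor_lighting": "Turn on a light so the camera can see you.",
}


def determine_present_types(likelihoods: dict) -> list:
    """Keep the interference types whose likelihood meets the
    predetermined threshold (the amended claim 1 step, as mapped)."""
    return sorted(
        t for t, p in likelihoods.items() if p >= PREDETERMINED_THRESHOLD
    )


def suggest_interventions(likelihoods: dict) -> list:
    """Suggest one intervention per determined-present interference
    type (the claim 12 step, as mapped); types without a configured
    suggestion are skipped."""
    return [
        INTERVENTIONS[t]
        for t in determine_present_types(likelihoods)
        if t in INTERVENTIONS
    ]
```

The sorted list returned by `determine_present_types` plays the role of the claimed "environmental context comprising one or more determined present environmental interference types".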
Response to Arguments Claim Objections Regarding the objection(s), the Applicant has amended the claims to overcome the claim objections, which are hereby withdrawn. Rejection Under 35 U.S.C. §112(a) Regarding the rejection of claim 12, the Applicant has amended the claim to obviate the written description rejection, which is hereby withdrawn. Rejection Under 35 U.S.C. §112(b) Regarding the rejection of claim 12, the Applicant has amended the claim to obviate the issue of indefiniteness; the rejection is hereby withdrawn. Rejections under 35 U.S.C. §101 Regarding the rejection of claim 20, the claim is now recited as a non-transitory computer program product and falls within at least one of the statutory categories (Step 1 = Yes). Regarding the rejection of Claims 1-20, the Examiner has considered the Applicant’s arguments but does not find them persuasive for at least the following reasons. Applicant argues: A1. “the actual performance of managing personal behavior or relationships or interactions between people is not recited in the claims… the claims do not actually recite the performance of managing personal behavior. The claims do not actually recite the performance of managing relationships. The claims do not actually recite the performance of interactions between people.” (Remarks, pgs. 11-12). Re. argument A1: The Examiner respectfully submits the basis of rejection. The identified claim elements under broadest reasonable interpretation (BRI) cover a method of organizing human activity wherein a person follows a series of rules or steps to implement the identified abstract idea by interacting with a computer system. Given the broadest reasonable interpretation, the claims recite Certain Methods of Organizing Human Activity (managing personal behavior and/or interactions between people, which includes one or more persons following a series of steps or rules, and which includes interaction of a person with a computer) (see 2019 PEG, pg. 5).
The recited steps are a series of rules or instructions that a person would follow to monitor environmental interference and assess environmental context. The Examiner notes that in multiple CAFC decisions, claims found to recite a method of organizing human activity did not actively recite a person or persons performing the steps of the claims (see, e.g., EPG, TLI Communications, Ultramercial). A2. “This amendment clearly states that the association of the aggregated plurality is performed by the machine learning algorithm” (Remarks, pg. 11). Re. argument A2: The Examiner respectfully submits that the steps added in amendment may improve the identified abstract idea; however, an improved abstract idea is still an abstract idea. For example, the machine learning algorithm still encompasses simplistic mathematical models, such as logistic regression, that a person could readily perform as part of following the rules. Only additional elements can provide a practical application or significantly more. A3. “If it were to be argued that the claims, in reciting steps for assessing an environmental context around an individual taking an assessment, relate to actually performing management of personal behavior or relationships or interactions between people, this is not sufficient to trigger an eligibility issue. The MPEP states that if a claim is based on or involves an abstract idea, but does not recite it, then the claim is not directed to an abstract idea…” (Remarks, pg. 12). Re. argument A3: MPEP § 2106.04(a) states that some claims are not directed to an abstract idea because they do not recite an abstract idea. That is not the case here. The Examiner has respectfully asserted that the identified limitations fall within at least one of the enumerated groupings of abstract ideas (i.e., the claims are directed to certain methods of organizing human activity). A4.
“the Office Action alleges that there are no additional elements other than generic computer components and functions” (Remarks, pg. 12). Re. argument A4: The plurality of sensors is also considered an additional element. A5. “Excluding nearly the entire claim from being considered an "additional element" is wildly inconsistent with the 2019 PEG and the 2019 PEG Examples” (Remarks, pg. 12). Re. argument A5: The MPEP states “Examiners should determine whether a claim recites an abstract idea by (1) identifying the specific limitation(s) in the claim under examination that the examiner believes recites an abstract idea, and (2) determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas… If the identified limitation(s) falls within at least one of the groupings of abstract ideas, it is reasonable to conclude that the claim recites an abstract idea in Step 2A Prong One.” The Examiner respectfully asserts that the identified limitations cover at least one of the enumerated groupings of abstract ideas. A6. “Example 42 recites a claim for receiving notification when medical records are updated. The claim is held to recite receiving medical updates, hence the claims recite managing interactions between people (organizing human activity). Yet, rather than exclude nearly the entire claim from further consideration, even though most limitations are in service of the identified abstract idea, the major features of the claim are still counted as additional elements… Applicant submits that, given the same treatment, the instant claims clearly are integrated into the practical application, and exhibit significantly more” (Remarks, pgs. 13-14). Re. argument A6: As a preliminary matter, Applicant previously argued (see, e.g., argument A3) there was no abstract idea, but Example 42 recites an abstract idea. Regardless, the Examiner has identified limitations that fall within at least one of the groupings of abstract ideas.
Like Example 42 (example claim 1), the instant claims as a whole recite a method of organizing human activity. The instant claimed invention is a method, a series of rules or instructions, that allows a person (e.g., a proctor) to monitor for environmental interference(s) around an individual and assess the environmental context of the individual (i.e., determine whether environmental interference(s) is/are present). Unlike Example 42 (example claim 1), the instant claims as a whole merely describe how to generally “apply” the concepts of monitoring and assessing environmental interference(s) for the environmental context in a computer environment. (See Example 42 claim 2). The claimed computer components are recited at a high level of generality and are merely invoked as tools to perform an existing environmental interference monitoring and assessment process. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. As for Step 2B, since the claim as a whole merely describes how to generally “apply” the concept in a computer environment, even when viewed as a whole, nothing in the claimed invention adds significantly more to the abstract idea. As such, the claims are ineligible. Regarding the rejection of Claims 2-20, the Applicant has not offered any arguments with respect to these claims other than to reiterate the argument(s) presented for the claim(s) from which they depend or are analogous to. As such, the rejection of these claims is also maintained. Rejection under 35 U.S.C. §103 Regarding the rejection of Claims 1-20, the Examiner has considered the Applicant’s arguments but does not find them persuasive for at least the following reasons. Applicant argues: B1. “With the present submission, Applicant has amended Independent Claim 1 to recite, among other things…” (Remarks, pg. 15). Regarding B1: The Examiner respectfully submits the basis of rejection.
Given the broadest reasonable interpretation, Krishna Murthy in view of Bures teaches or renders obvious the features of claim 1. B2. “Krishna Murthy is silent on "associating the aggregated plurality of features to likelihood values for each of one or more environmental interference types based on a series of time instances," and instead determines a disturbance level of the user based on an audio-visual event at a single time corresponding to the student. (Krishna Murthy, at [0032].)” (Remarks, pgs. 15-16). Re. argument B2: Krishna Murthy in view of Bures renders obvious the amended feature. B3. “Krishna Murthy describes outputting an aggregation score of concatenated data, but does not relate the concatenated data to specific time instances, instead outputting a score based solely on the concatenated data. (Id.)” (Remarks, pg. 16). Re. argument B3: Krishna Murthy in view of Bures renders obvious the amended feature. B4. “Krishna Murthy does not differentiate between disturbance types or unsafe events, but generalizes all disturbances. (Krishna Murthy, at [0024].) In contrast, the amended claim language recites "associating the aggregated plurality of features, by the machine learning algorithm, to likelihood values for each of one or more environmental interference types based on a series of time instances." The one or more environmental interference types may include changes in patterns of an activity, or changes in the source of input in the example of a difference in voice being perceived in a recording. (Specification, at [0012].) These different types of disturbance are not mentioned in Krishna Murthy” (Remarks, pg. 16). Re. argument B4: Krishna Murthy in view of Bures renders obvious the amended feature. 
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., disturbance types consisting of changes in patterns of an activity or perceiving changes in the source of voice input in a recording as a different user) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). B5. “Claim 1 is amended to recite the step of "determining the one or more environmental interference types is present based on a predetermined likelihood threshold." As Krishna Murthy is silent on different environmental interference types, it is not clear how Krishna Murthy could determine the presence of one or more environmental interference types, much less the presence of the environmental interference types based on a predetermined threshold” (Remarks, pg. 16-17). Re. argument B5: The recited environmental interference types encompass Krishna Murthy’s disclosed distraction modalities (i.e., distraction types). Krishna Murthy in view of Bures teaches generating monitoring data from a plurality of feature vectors to perform detection functions and determine if a condition of interest (an environmental interference) of a particular feature is present by comparing a probability value to a probability threshold. This information can be used in combination with Krishna Murthy to identify and sort/rank the distraction modalities within the physical environment. B6. “Independent Claim 1 recites the step of "outputting an environmental context of the individual, the environmental context comprising one or more determined present environmental interference types." 
As Krishna Murthy does not address different types of environmental interference, it is unclear how the reference could be applied to this language, especially as the output of Krishna Murthy is an identified set of conditions of the classroom, as opposed to the condition of the individual” (Remarks, pg. 17). Re. argument B6: See response to argument B5. Further, Krishna Murthy teaches identifying with respect to the physical environment of a subject. See, e.g., Fig. 2. See also Abstract: “selecting a modality to distract a subject… that minimize[s] disturbances to users within the surrounding of the subject”. The distraction modalities are identified within the physical environment of the individual. Cited para. 0032 and 0045 also teach that the physical environment is around the individual. Regarding the rejection of Claims 2-20, the Applicant has not offered any arguments with respect to these claims other than to reiterate the argument(s) presented for the claim(s) from which they depend or are analogous to. As such, the rejection of these claims is also maintained. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Bach et al. (US 2020/0008725 A1) for teaching physiological sensors, e.g., a 3-axis accelerometer, gyroscopes, that generate physiological data about a person while performing a task (see Abstract, [0125], [0264], Table 2); and performing digital signal processing transforming analog information collected into digital information (see [0119]). Bower et al. (US 2019/0216392 A1) for teaching a cognitive platform with a physiological component… computing performance metrics of an individual based at least in part on user interactions with computerized tasks and/or interference and at least one physiological measure of the individual, where the performance metric provides an indication of the cognitive abilities of the individual.
The apparatus can be coupled to at least one physiological component to perform the physiological measurement of the individual. The apparatus also can be configured to adapt the tasks and/or interferences to enhance the individual's cognitive abilities. See Abstract. Martucci et al. (US 2016/0262680 A1) for teaching a cognitive assessment tool for assessing cognitive ability while multitasking. See Abstract. Wall et al. (US 2021/0133509 A1) for teaching model optimization and data analysis related to cognitive disorders, developmental delays and neurologic impairments (see Abstract and para. 0002-0003). Simon et al. (US 2018/0184964 A1) for teaching methods for diagnosing Autism and/or Autism Spectrum Disorder (ASD) of a subject that include establishing baseline brain wave patterns of the subject by having the subject perform a series of tasks and measuring brain waves during the tasks using an EEG measurement device, applying a light stimulus or images to the subject's eyes and capturing eye movements and/or changes in facial expression in response to the light stimulus or images, and giving a neuropsychological and cognition battery of tasks to the subject to generate a provoked cognitive assessment of the subject. A processing device correlates the baseline brain wave patterns, eye movements and/or facial expressions, and provoked cognitive assessment of the subject to profile data indicative of Autism and/or ASD. The corresponding system may also include an auditory testing device that tests the subject's sensitivity to sound and records the subject's speech in response to verbal tasks. The processing device performs language processing of the recorded speech and correlates the processed language to the profile data indicative of Autism and/or ASD. See Abstract. Duffy (US 2014/0356825 A1) for teaching quantitative assessment of spatial sequence history (the memory of a subject). See Abstract.
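Krimon's dexterity measure, cited earlier for claim 11, counts events against a fixed interval (finger-to-thumb taps per UPDRS-style timing tests) and tracks drop incidents. A hypothetical sketch of that kind of scoring; the timestamp framing and the drop penalty are assumptions for illustration, not Krimon's actual method:

```python
from typing import List


def taps_in_interval(tap_times: List[float], start: float, end: float) -> int:
    """Count index-finger-to-thumb taps whose timestamps fall in
    [start, end), as in a UPDRS-style timing test."""
    return sum(1 for t in tap_times if start <= t < end)


def dexterity_score(
    tap_times: List[float], drops: int, interval: float = 10.0
) -> float:
    """Combine tap count over the test interval with the number of
    times the device was dropped. The weighting (one point per tap,
    five off per drop) is invented for this sketch."""
    return taps_in_interval(tap_times, 0.0, interval) - 5.0 * drops
```

Running such a test routinely (e.g., weekly, as Krimon suggests) would yield a dexterity time series far denser than an annual physical provides.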
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica M Webb whose telephone number is (469)295-9173. The examiner can normally be reached Mon-Fri 9:00am-1:00pm CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Morgan can be reached on (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.M.W./Examiner, Art Unit 3683 /CHRISTOPHER L GILLIGAN/Primary Examiner, Art Unit 3683

Prosecution Timeline

Sep 28, 2023
Application Filed
Jul 24, 2025
Non-Final Rejection — §101, §103
Jan 29, 2026
Response Filed
Mar 10, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585721
SINGLE BARCODE SCAN CAST SYSTEM FOR PHARMACEUTICAL PRODUCTS
2y 5m to grant Granted Mar 24, 2026
Patent 12525336
INTELLIGENT MEDICAL ASSESSMENT AND COMMUNICATION SYSTEM WITH ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Jan 13, 2026
Patent 12394505
ELECTRONIC HEALTH RECORD INTEROPERABILITY TOOL
2y 5m to grant Granted Aug 19, 2025
Patent 12347541
CAREGIVER SYSTEM AND METHOD FOR INTERFACING WITH AND CONTROLLING A MEDICATION DISPENSING DEVICE
2y 5m to grant Granted Jul 01, 2025
Patent 12293001
REFERENTIAL DATA GROUPING AND TOKENIZATION FOR LONGITUDINAL USE OF DE-IDENTIFIED DATA
2y 5m to grant Granted May 06, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
86%
With Interview (+52.5%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
