Prosecution Insights
Last updated: April 19, 2026
Application No. 17/308,415

AI AND ML ASSISTED SYSTEM FOR DETERMINING SITE COMPLIANCE USING SITE VISIT REPORT

Final Rejection: §103, §112
Filed: May 05, 2021
Examiner: ORR, HENRY W
Art Unit: 2172
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Iqvia Inc.
OA Round: 6 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 7-8
To Grant: 3y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 50% (230 granted / 456 resolved; -4.6% vs TC avg)
Interview Lift: +37.2% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 10m average prosecution; 29 applications currently pending
Career History: 485 total applications, across all art units

Statute-Specific Performance

§101: 6.8% (-33.2% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Compared against Tech Center average estimates; based on career data from 456 resolved cases.

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This action is responsive to applicant’s amendment dated 1/23/2026.
2. Claims 1-18 and 24-28 are pending in the case.
3. Claims 19-23 are cancelled.
4. Claims 1, 8 and 15 are independent claims.

Applicant’s Response

5. In Applicant’s response dated 1/23/2026, applicant has amended the following:
a) Claims 1, 8, 15 and 28

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-18 and 24-28 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The dependent claims included in the statement of rejection but not specifically addressed in the body of the rejection have inherited the deficiencies of their parent claim and have not resolved the deficiencies. Therefore, they are rejected based on the same rationale as applied to their parent claims above.

Claims 1, 8 and 15: The claims recite: “wherein the natural language notes comprise one or more words entered into a comments field of the first graphical user interface”. (emphasis added) At best, the instant specification merely describes data capturing the natural language notes (see par. 28). In other words, there is no designated “comments field” in the original disclosure or the drawings. Therefore, there is no mention of the newly amended limitation in the original Specification. Thus, the limitations include subject matter that was not described in the original Specification. If the examiner has overlooked the portion of the original Specification that describes this feature of the present invention, then Applicant should point it out (by page number and line number) in the response to this Office Action. For the purpose of examination, Examiner considers the limitation as follows: “wherein the natural language notes comprise one or more words captured by the first graphical user interface”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over Guthrie et al. (hereinafter “Guthrie”), U.S. Published Application No. 20140222463 A1, in view of Ramaci; Jonathan, U.S. Published Application No. 20190066822 A1, in further view of Thompson, Bradley, U.S. Published Application No. 20050071185 A1, in further view of Anisingaraju; Vidya (hereinafter “Vidya”), U.S. Published Application No. 20160203217 A1.

Claim 1: Guthrie teaches A machine-implemented method, comprising: presenting questions of a site visit report in a sequential fashion on a first graphical user interface displayed on a user device during a clinical site visit to a clinical trial site, (e.g., mobile user (i.e., user device during a clinical site visit to a clinical trial site) entering and collecting data via graphical user interface on a work station device in response to a request for data (i.e., questions of a site report) par. 3; The disclosed subject matter relates to a system for enhanced monitoring of data during a variety of medical investigations and/or procedures. par. 36; Referring now to FIG. 1, an onsite user 101 is responsible for entering data into workstation 102. Workstation 102 may be a desktop computer, laptop, tablet, mobile phone, or other computing device having a human interface device. In some embodiments, workstation 102 provides a graphical user interface for the entry of data.
In some embodiments, the user interface is a web-based user interface and provides for form-based entry of data.) the questions of the site visit report relating to monitoring administration of a clinical trial at the clinical trial site; (e.g., collected data reviewed by onsite user is associated with enhanced monitoring of procedures related to clinical trials (i.e., monitoring administration of a clinical trial) par. 13; The monitoring plan should be tailored to the needs of the trial and the protocol should clearly identify those procedures and data that are critical to subject safety and the integrity and reliability of the study findings. par. 17; Enhanced Monitoring (EM) is a new approach to monitoring of medical studies and procedures (e.g., clinical trials). EM includes a combination of on-site monitoring which includes Targeted Source Data Verification (TSDV) as well as Remote Review (RR). Par. 19; According to an embodiment of the present disclosure, a system for monitoring a clinical trial is provided. The system includes a data input terminal. The data input terminal is located at a data collection point and includes a plurality of input validation rules. Par. 67; The Remote Review allows for the real-time identification of: status of data entry; logic of related data issues; errors; omissions; query resolutions trends of non-compliance and issues requiring attention. ) receiving, at a site visit report (SVR) engine and via a communications infrastructure, responses to the questions that are being received via the first graphical user interface displayed on the user device during the clinical trial site visit, wherein the responses include user-selected answers and natural language notes of a user; (e.g., using a GUI on a device to collect data (i.e., responses include user-selected answers and natural language notes of a user) and sending the collected data to a server via network to evaluate the collected data. par. 36; Referring now to FIG. 
1, an onsite user 101 is responsible for entering data into workstation 102. Workstation 102 may be a desktop computer, laptop, tablet, mobile phone, or other computing device having a human interface device. In some embodiments, workstation 102 provides a graphical user interface for the entry of data. In some embodiments, the user interface is a web-based user interface and provides for form-based entry of data. Par. 19; The system includes a data input terminal. The data input terminal is located at a data collection point and includes a plurality of input validation rules. The data input terminal receives data from a user. The data has a datatype. The data input terminal applies at least one of the plurality of input validation rules to the data. The system includes a first datastore receiving data from the data input terminal. Par. 40; In some embodiments, data validation module 108 resides on server 106. Par. 55; Non-critical data points not otherwise excluded may be reviewed remotely unless a change in the site monitoring strategy is necessary due to non-compliance issues. Examples of non-critical variables include: Visit dates; Medical history Demographics; Patient diaries/questionnaires; Concomitant medications; and Lab values not related to endpoints. Par. 67; The Remote Review allows for the real-time identification of: status of data entry; logic of related data issues; errors; omissions; query resolutions trends of non-compliance and issues requiring attention.) 
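The input-validation scheme Guthrie describes (pars. 19, 36-37, 41) amounts to range checks applied at the data input terminal: values in the expected range pass, values in a plausible-but-abnormal band pass with a flag, and values outside both bands trigger a verification query. A minimal sketch of such a threshold rule, using the systolic blood-pressure ranges from Guthrie's par. 37 example — the function name and return labels are hypothetical illustration, not code from any cited reference:

```python
# Hypothetical sketch of a Guthrie-style threshold rule (pars. 37, 41):
# 90-119 mmHg systolic is normal, 120-180 mmHg may indicate disease,
# and anything outside those bands is likely a measurement/entry error.

def validate_systolic_bp(value_mmhg: float) -> str:
    """Classify a systolic blood-pressure entry against threshold rules."""
    if 90 <= value_mmhg <= 119:
        return "accept"                 # within the normal range
    if 120 <= value_mmhg <= 180:
        return "accept-flagged"         # plausible, but may indicate disease
    return "verification-query"         # likely error; ask user to clarify

print(validate_systolic_bp(110))   # accept
print(validate_systolic_bp(900))   # verification-query
```

Under this rule, the par. 37 scenario of a user entering 900 mmHg would return a verification query rather than silently storing the value.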
evaluating, by the SVR engine, the user-selected answers and the natural language notes from the responses being received via the first graphical user interface displayed on the user device to detect an anomaly in the clinical trial site visit, including evaluating the user-selected answers and based on a combination of pre-configured rules (e.g., evaluating the collected data (i.e., answers ) from the user device based on validation rules to detect issues related to the procedures of a clinical trial par. 19; The system also includes a data analysis server. The data analysis server includes a plurality of data validation rules. par. 36; If data entered by user 101 is found to be invalid by validation module 103, a verification query 104 is initiated.) and a computer-trained model, (e.g., validating the collected data based on model rules par. 40; Data validation module 108 reads data from datastore 107 either directly, or via server 106. Data validation module 108 includes a plurality of rules. Rules include threshold rules 109, critical values 110, and model rules 111. Par. 45; In this way, patient model 114 assists in identifying non-compliance with treatment guidelines even where individual data points do not appear abnormal. ) wherein the anomaly (e.g., discovering adverse events with the collected data par. 36; If data entered by user 101 is found to be invalid by validation module 103, a verification query 104 is initiated. par. 41; In other embodiments, a function may be applied to multiple values to determine whether a value is suspect. Par. 42; In such embodiments, they output a probability indicating the likelihood that an input value, series of values, or numeric function of values is erroneous. Par. 45; In this way, patient model 114 assists in identifying non-compliance with treatment guidelines even where individual data points do not appear abnormal. Par. 48; Messages include information describing the validation failure and the suspect data. Par. 
67; The Remote Review allows for the real-time identification of: status of data entry; logic of related data issues; errors; omissions; query resolutions trends of non-compliance and issues requiring attention.) is detected based on the evaluating the user-selected answers and the natural language notes identifying a discrepancy between different sections of the site visit report that are co-related to determine compliance (e.g., evaluating numeric value of 900 mmHG based on discrepancy between normal blood pressure range and blood pressure range indicating sickness which indicate that 900 mmHG is likely an error answer (i.e., anomaly) par. 37; Examples of verification queries include a request for clarification or correction of a numeric value. For example, normal blood pressure is in the range of 90-119 mmHg systolic and 60-79 mmHg diastolic. Blood pressure in the range of 120-180 mmHg systolic and 80-110 mmHg diastolic may indicate disease. Blood pressures above these ranges are likely the result of an error in measurement or data entry. Thus, if user 101 entered a numeric value of 900 mmHg, input validation module 103 would issue a verification query requesting clarification of this numeric value. Par. 41; Threshold rules 109 may be entered by user 112, who in some embodiments is the Lead Clinical Data Manager. Threshold rules provide ranges in which a value is considered likely accurate, and ranges in which a value is considered likely inaccurate. 
)

Guthrie teaches generating, a compliance score for the clinical trial based on the user-selected answers and text analytics of the natural language notes, wherein the compliance score is calculated based on one or more of a number of protocol deviations, patient eligibility for the clinical trial, investigational product supply and availability, (e.g., user manually reviewing on site and determining the validity (i.e., compliance score) of user-selected answers and text analytics of the natural language notes, number of protocol deviations, patient eligibility for the clinical trial, investigational product supply and availability par. 34; Critical Variable (CV): Critical variables are data that must be 100% source data verified. Examples of critical variables include: safety and efficacy variables, endpoints, eligibility criteria, informed consent information, and inventory accountability (if applicable). Par. 43; Data validation module 108 may also include critical variables 110. As discussed above, critical variables are those which must be 100% source data verified. Critical variables vary from study to study, and may include safety and efficacy variables, endpoints, eligibility criteria, informed consent information, and inventory accountability. Per study critical variables 113 may be entered by user 112, preloaded, or transmitted from a remote repository. Par. 54; The critical variables will be 100% source data verified. Examples of critical variables include but are not limited to: Adverse events/adverse device effects; Endpoints (primary and secondary); Reasons for study termination; Stratification variables; Informed consent forms (ICFs); Eligibility criteria; Product experiences/device deficiencies or malfunctions; and Device inventory information. Par.
71; In general, the safety and efficacy data that must be 100% SDV are: Informed Consent Form (ICF); Eligibility Criteria; End Points (Primary and Secondary); Adverse Events; Product experiences, deficiencies or malfunctions; Screen Failures; Reasons for Termination; Stratification Variables; and Verification of Discrepancies found during Remote Review. )

Guthrie fails to expressly teach wherein the natural language notes comprise one or more words captured by the first graphical user interface, and wherein the evaluating of the natural language notes includes the SVR engine performing natural language processing on the natural notes to detect the anomaly; identifying, by the SVR engine, a recommendation to resolve the anomaly in the clinical trial site visit; and generating an alert on the user device, during the clinical trial site visit, wherein the alert includes a recommendation to resolve the anomaly.

However, Ramaci teaches wherein the natural language notes comprise one or more words captured by the first graphical user interface, and wherein the evaluating of the natural language notes includes the SVR engine performing natural language processing on the natural notes to detect the anomaly; (e.g., relational agent performing natural language processing on the user responses to detect an anomaly or adverse events par. 10; The relational agent provides conversational interactions, utilizing automated voice recognition-response, natural language processing, predictive algorithms, and the like, to perform functions, interact with the user (e.g., subject, family member, etc.), fulfill user requests, educate and inform user, monitor user compliance, collect data such as endpoints (e.g., primary, secondary), safety (e.g. adverse events), outcomes (e.g., PROs), and the like. par.
10; The said device enables the participant (i.e., subject enrolled in a study) to access and interact with the said relational agent for compliance with a study objective, protocol, and procedures that include, but are not limited to, instructions, following dosing regimens, receiving reminders (e.g., medication), scheduling visits, reporting symptoms/adverse events par. 14; The responses or answers provided to the relational agent serve as input to one or more predictive algorithms to calculate a risk stratification profile and trends. Such a profile can provide an assessment for the need of any intervention required by either the participant, clinical investigation team members, caregivers, or family members. The relational agent facilitates real-time EDC for the clinical study.) identifying, by the SVR engine, a recommendation to resolve the anomaly in the clinical trial site visit; (e.g., providing real-time interactions with relational agent that includes identifying communications for the purpose of complying with a protocol or procedure associated with a clinic study (i.e., recommendation to resolve the anomaly in the site clinical visit) par. 10; The said device enables the participant (i.e., subject enrolled in a study) to access and interact with the said relational agent for compliance with a study objective, protocol, and procedures that include, but are not limited to, instructions, following dosing regimens, receiving reminders (e.g., medication), scheduling visits, reporting symptoms/adverse events, accessing educational information, accessing social support, and communicating with the clinical investigation team (e.g., principal investigator, nurse, etc.). par. 11; In another preferred embodiment, the wearable device can communicate with a secured HIPAA-compliant remote server. Par. 
12; In an alternative embodiment, the said secured remote server is accessible using said stand-alone speech interface device or the speech interface is incorporated into one or more smart appliances, or mobile apps, capable of communicating with the same or another remote server, providing cloud-based control service, to perform natural language or speech-based interaction with the user, acting as said relational agent par. 14; The relational agent facilitates real-time EDC for the clinical study. ) and generating an alert on the user device, during the clinical trial site visit, wherein the alert includes a recommendation to resolve the anomaly. (e.g., generating alerts and real time feedback associated with corrective actions par. 10; The said device enables communication with one or more remote servers capable of providing automated voice recognition-response, natural language understand-processing, predictive algorithm processing, reminders, alerts, general and specific information for the management of chronic pain. par. 11; The relational agent provides conversational interactions, utilizing automated voice recognition-response, natural language learning-processing, perform various functions and the like, to: interact with the user, fulfill user requests, educate, monitor study protocol/procedure compliance, provide one or more skills, ask one or more questions, collect clinical/outcomes data, storing responses/answers, perform predictive algorithms with user responses, determine health status and well-being, and provide suggestions for corrective actions including instructions for reporting adverse events, symptoms, protocol deviations, and the like. par. 14; The relational agent facilitates real-time EDC for the clinical study.) 
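The claimed loop — evaluating user-selected answers against pre-configured rules, running an NLP step over the free-text notes, and raising an alert that carries a recommendation — can be sketched roughly as follows. This is hypothetical illustration only (all names are invented, and a trivial keyword scan stands in for real natural language processing), not code from the application or any cited reference:

```python
# Rough sketch of the claimed evaluate-and-alert flow: rule checks on
# structured answers plus a keyword-based stand-in for NLP over free-text
# notes; each detected anomaly is paired with a recommendation.

ADVERSE_TERMS = {"adverse event", "deviation", "missed dose"}

def evaluate_responses(answers: dict, notes: str) -> list[dict]:
    """Return a list of alerts, each pairing an anomaly with a recommendation."""
    alerts = []
    # Pre-configured rule: informed consent must be on file.
    if answers.get("informed_consent_on_file") is False:
        alerts.append({
            "anomaly": "missing informed consent",
            "recommendation": "Collect and file the signed ICF before enrollment.",
        })
    # NLP stand-in: flag adverse-event language in the free-text notes.
    lowered = notes.lower()
    for term in sorted(ADVERSE_TERMS):
        if term in lowered:
            alerts.append({
                "anomaly": f"note mentions '{term}'",
                "recommendation": "File an adverse-event / deviation report.",
            })
    return alerts

alerts = evaluate_responses(
    {"informed_consent_on_file": False},
    "Subject reported a missed dose on day 3.",
)
print(len(alerts))  # 2
```

In a real system the keyword scan would be replaced by trained topic and sentiment models, and the alerts would be pushed to the user device during the site visit.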
In the analogous art of clinical trial management, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the monitoring process of adverse events as taught by Guthrie to include a server for providing an application to allow a relational agent to send real time feedback to a user of a device as taught by Ramaci, with a reasonable expectation of success, to provide the benefit of a more convenient, efficient, and cost-effective methodology for conducting clinical trials (see Ramaci; par. 7; par. 10).

Guthrie/Ramaci fails to expressly teach generating, by the SVR engine, a compliance score; communicating the compliance score to the user device via the communications infrastructure; and presenting, in a feedback window of a second graphical user interface displayed on the user device, a graphical representation of the compliance score.

However, Thompson teaches generating, by the SVR engine, a compliance score; communicating the compliance score to the user device via the communications infrastructure; and presenting, in a feedback window of a second graphical user interface displayed on the user device, a graphical representation of the compliance score. (e.g., presenting via system engine, a calculated compliance score based on any type of questions associated with requirements of a regulatory compliance of a business. Examiner notes that notifying a user via email teaches presenting, in a feedback window of a second graphical user interface displayed on the user device, a graphical representation of the compliance score. abstract; Finally, the audit, interview, inspection, and regulatory quality information is combined and scored to create a compliance index related to the efficiency of the regulatory compliance of the business entity and then identifying any more general risk factors for that company. Par.
35; These audits might cover such areas as design controls, clinical trials, corrective and preventive actions, complaints, medical device reporting and management controls. Par. 37; When the grading process is complete, the system may notify, for example by e-mail, a pre-designated individual any time an audit report receives a failing grade. Par. 38; A further benefit is the rapid update of an entity's overall index score. Rapid update enables entities to effectively monitor their compliance levels over time, if desired, even daily. Par. 61; Scoring of the company compliance interviews involves first calculating a score for the written questionnaire by calculating an average score for responders on the questions. )

In the analogous art of determining compliance based on audits, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the evaluation process of data associated with clinical trial procedures/protocols as taught by Guthrie/Ramaci to be used to calculate a compliance score that is presented to a user as taught by Thompson, with a reasonable expectation of success, to provide the benefit of better assessing key functions of a company in respect to required compliance regulations (see Thompson; par. 35).

Guthrie/Ramaci/Thompson fails to expressly teach text analytics of the natural language notes and wherein the text analytics includes sentiment analytics and topical analytics; and a score that is determined based on the sentiment analytics.

However, Anisingaraju teaches text analytics of the natural language notes based on a combination of pre-configured rules and a computer trained model (e.g., processing the aggregated textual data using natural language par.
10; The method additionally includes processing the aggregated data using natural language processing to generate a set of contributors, the set of contributors pertaining to the set of attributes and analyzing, using the set of attributes and the set of contributors, to generate the recommendations, wherein the set of attributes represent at least one of a set of topics, a set of sentiments, and a set of emotions. Par. 46; In subcomponent 354, the model is trained and once the model is selected 356 and trained in 354, the model may be persisted 352 to process the incoming data. Par. 54; The data analysis subcomponent 432 represents the intelligence component and includes therein statistical models and modules related to model training, pre- and post-processing. ) wherein the text analytics includes sentiment analytics and topical analytics; (e.g., text data analytics include sentiment analytics and topical analytics par. 95; The aggregated data is then processed using natural language processing (NLP) to ascertain attributes and contributors. These attributes can be in the form of sentiment (i.e., positive, negative, or neutral), emotions, topics (e.g., trend, hot topics, topics specified to be important to the organization). Par. 102; Key attributes include such things as sentiment, emotions, topics, all of which may be calibrated to the metadata values (e.g., attributes for the department, for the whole hospital, for specific days, for specific group of patients or doctors, etc.). Sentiment may be positive, negative, or neutral. Emotion represent the subjective representation of the intensity of the sentiment, as discussed earlier (e.g., hate, avoid, accepting, satisfied, happy, elated, ecstatic intensity gradations). Topics can be analyzed for trending topics, the top (N) topics discussed, or the topics of special interest to the business, for example.) 
and a score that is determined based on the sentiment analytics; (e.g., scoring in the form of sentiment (i.e., positive, negative, or neutral) Par. 93; In one or more embodiments, the process of aggregating the narrative and structured data as well as the scoring data from both the internal and external sources employs the earlier discussed data analysis system that partially or wholly utilizes cloud-based and/or big data techniques. Par. 95; The aggregated data is then processed using natural language processing (NLP) to ascertain attributes and contributors. These attributes can be in the form of sentiment (i.e., positive, negative, or neutral), emotions, topics (e.g., trend, hot topics, topics specified to be important to the organization). )

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the evaluation process of data associated with compliance with clinical protocol/procedures as taught by Guthrie/Ramaci/Thompson to include aggregating the data and then using natural language processing (NLP) for processing based on sentiment analytics and topical analytics as taught by Anisingaraju, with a reasonable expectation of success, to provide the benefit of adding business intelligence in an effort to generate insights for improving a health organization from unstructured and structured data.

Claim 2 depends on claim 1: Guthrie/Ramaci teaches further comprising: selecting the questions to include in the site visit report based on features of the clinical trial site and the clinical trial; (e.g., collecting data based on questions related to monitoring of medical studies and procedures (e.g., clinical trials). Guthrie; Par. 13; The monitoring plan should be tailored to the needs of the trial and the protocol should clearly identify those procedures and data that are critical to subject safety and the integrity and reliability of the study findings.
In addition the monitoring plan may include a schema identifying those subjects targeted for on-site review. Par. 16; The Enhanced Monitoring (EM) method disclosed herein allows clinical trial Sponsors to have better oversight of site activity earlier by ensuring Remote Review (RR) of data is performed. Par. 17; Utilization of EM allows the Clinical Research Associates (CRAs) to focus their efforts on the review of critical safety and efficacy variables and ensuring overall site management and compliance. ) (e.g., relational agent asking one or more questions based on monitoring study protocol/procedure compliance, Ramaci; par. 7; Mobile surveys may serve as a better form of Electronic Data Capture (EDC), enabling respondents to send pictures, record their voice, or write notes/diaries all on smartphones. Par. 12; The relational agent provides conversational interactions, utilizing automated voice recognition-response, natural language learning-processing, perform various functions and the like, to: interact with the user, fulfill user requests, educate, monitor study protocol/procedure compliance, provide one or more skills, ask one or more questions, collect clinical/outcomes data, storing responses/answers, perform predictive algorithms with user responses, determine health status and well-being, and provide suggestions for corrective actions including instructions for reporting adverse events, symptoms, protocol deviations, and the like.) configuring the rules to identify anomalies in the responses; (e.g., validating the collected data to identify anomalies in the data of a user response based on model rules Guthrie; par. 40; Data validation module 108 reads data from datastore 107 either directly, or via server 106. Data validation module 108 includes a plurality of rules. Rules include threshold rules 109, critical values 110, and model rules 111. Par.
45; In this way, patient model 114 assists in identifying non-compliance with treatment guidelines even where individual data points do not appear abnormal. ) and training the model to correlate historical medical data with supervisor-identified anomalies in the historical medical data, wherein the historical medical data includes patient data, trial data, and laboratory test results. (e.g., the model used to correlate prior data to identify anomalies with patient data Guthrie par. 45; In some embodiments, model rules 111 may also be included in validation module 108. Model rules 111 are generated by patient model 114. Patient model 114 provides a simulation of a subject. In some embodiments, patient model 114 is generally applicable, while in some embodiment, patient model provides a subject-specific simulation based on an individual subject's characteristics. In one embodiment, patient model 112 simulates changes over time of physical characteristics based on a physiological model and based on prior data.)

Claim 3 depends on claim 1: Guthrie/Ramaci/Thompson teaches wherein: the evaluating comprises computing a compliance score for each response and detecting the anomaly when the compliance score for a respective response exceeds a threshold. (e.g., evaluating the collected data to determine valid data (i.e., compliance score of 100% when data is valid) and determining when the data is acceptable based on applied threshold rules Guthrie; par. 19; The system also includes a data analysis server. The data analysis server includes a plurality of data validation rules. par. 36; If data entered by user 101 is found to be invalid by validation module 103, a verification query 104 is initiated. Par. 41; Threshold rules 109 may be entered by user 112, who in some embodiments is the Lead Clinical Data Manager. Threshold rules provide ranges in which a value is considered likely accurate, and ranges in which a value is considered likely inaccurate.
) (e.g., calculating a compliance score for each response to an audit or interview and determine a failing grade (i.e., compliance score exceeding threshold that determines failing grade) see Thompson; par. 19; The present invention, in one form, relates to an evaluation method for assessing regulatory compliance involving audit information, personnel interviews, and regulatory which is combined and scored to create a compliance index. Par. 41; For all audits conducted, their quality, scope and outcome will be assessed and scored. Par. 61; Scoring of the company compliance interviews involves first calculating a score for the written questionnaire by calculating an average score for responders on the questions. ) Claim 4 depends on claim 1: Guthrie/Ramaci teaches wherein: the generating an alert comprises ranking the detected anomaly based on a safety-related risk factor associated with the anomaly, during the clinical trial site visit. (e.g., ranking detected anomalies as either critical or non critical based on safety variables Guthrie; par. 35; Non-critical Variable: Non-critical variables are data that are not related to safety and efficacy, endpoints, eligibility criteria, etc., and, therefore, may be reviewed remotely if a review is required. In some embodiments, a subset of non-critical variables are identified as not requiring any review. par. 43; Critical variables vary from study to study, and may include safety and efficacy variables, endpoints, eligibility criteria, informed consent information, and inventory accountability. par. 44; Per study critical variables may designate those values that inherently require further action based on the individual study. Per study critical variables may also designate those values for which an alternative threshold value is applicable. For example, in a study of diabetes management, blood glucose may always require on-site verification. 
Alternatively, the acceptable range of values may be narrower, requiring verification in more cases than in another study.) (e.g., ranking detected anomalies as unsafe Ramaci; par. 7; EDC systems have been shown to improve the quality of clinical trials, halt the development of ineffective or unsafe drugs earlier, reduce unnecessary work, reduce cost, and accelerate time to market of new drugs. There are also benefits in terms of data quality, performance, productivity and costs in clinical trial management. par. 10; safety (e.g. adverse events).) Claim 5 depends on claim 1: Guthrie/Ramaci/Thompson/Anisingaraju teaches further comprising: training a probabilistic topic model to detect topics from historical natural language notes associated with historical medical data; and training a sentiment model to detect sentiments from the historical natural language notes; wherein the evaluating comprises computing the text analytics with the probabilistic topic model and the sentiment model. (e.g., text data analytics include sentiment analytics and topical analytics Anisingaraju; par. 95; The aggregated data is then processed using natural language processing (NLP) to ascertain attributes and contributors. These attributes can be in the form of sentiment (i.e., positive, negative, or neutral), emotions, topics (e.g., trend, hot topics, topics specified to be important to the organization). Par. 102; Key attributes include such things as sentiment, emotions, topics, all of which may be calibrated to the metadata values (e.g., attributes for the department, for the whole hospital, for specific days, for specific group of patients or doctors, etc.). Sentiment may be positive, negative, or neutral. Emotion represent the subjective representation of the intensity of the sentiment, as discussed earlier (e.g., hate, avoid, accepting, satisfied, happy, elated, ecstatic intensity gradations). 
Topics can be analyzed for trending topics, the top (N) topics discussed, or the topics of special interest to the business, for example.) Claim 6 depends on claim 1: Guthrie/Ramaci/Thompson/Anisingaraju teaches further comprising: training the model to correlate text analytics extracted from historical natural language notes associated with historical medical data and answers of historical site visit reports, with corresponding supervisor-declared adverse events; wherein the evaluating comprises evaluating the text analytics and at least a subset of the responses with the trained model. (e.g., processing aggregated data from multiple sources and applying text data analytic related models based on sentiment analytics and topical analytics Anisingaraju; par. 10; The method includes aggregating unstructured data from various sources to form aggregated data. Par. 46; The three subcomponents 352, 354, and 356 represent the machine learning approach that is employed for this example of FIG. 3. In subcomponent 356, the model is selected which may be prebuilt or an external model may be integrated. In subcomponent 354, the model is trained and once the model is selected 356 and trained in 354, the model may be persisted 352 to process the incoming data. par. 95; The aggregated data is then processed using natural language processing (NLP) to ascertain attributes and contributors. These attributes can be in the form of sentiment (i.e., positive, negative, or neutral), emotions, topics (e.g., trend, hot topics, topics specified to be important to the organization). Par. 102; Key attributes include such things as sentiment, emotions, topics, all of which may be calibrated to the metadata values (e.g., attributes for the department, for the whole hospital, for specific days, for specific group of patients or doctors, etc.). Sentiment may be positive, negative, or neutral. 
Emotion represent the subjective representation of the intensity of the sentiment, as discussed earlier (e.g., hate, avoid, accepting, satisfied, happy, elated, ecstatic intensity gradations). Topics can be analyzed for trending topics, the top (N) topics discussed, or the topics of special interest to the business, for example.) Claim 7 depends on claim 1: Guthrie teaches further comprising: evaluating multiple site visit reports in combination with one another to detect a pattern of anomalies. (e.g., collecting data from multiple investigation sites to detect a pattern of anomalies/outliers Guthrie; Par. 30; Remote Review (RR): RR activities are performed outside the clinical research site setting. RR may include: reviewing data entries, issuing and closing queries, running reports to identify outliers and trends in protocol deviations and other types of non-compliance, as well as other site management activities. RR is conducted as dictated by site activity and trial-specific requirements. RR activities may include generating reports and listings that allow a reviewer to identify those sites that are outliers, for example with extremely high or low reported adverse events. An outlying number of reported adverse events may be indicative of underreporting or other methodological issues requiring further investigation. par. 47; As rules are run against data from individual sites, a history 115 is built for each site. History 115 is persisted in a database or other suitable data storage such as a log file. As data collected from a given site for a given value fails validation, patterns emerge as to those values for which a given site is particularly unreliable. Based on history 115, the critical values for each individual site are identified by identification module 116.) Claim 8: Claim 8 is substantially encompassed in claim 1, therefore, Examiner relies on the same rationale set forth in claim 1 to reject claim 8. 
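The threshold-rule validation (Guthrie, par. 41) and per-site failure history (Guthrie, par. 47) cited in the claim 3 and claim 7 rejections can be sketched as follows. This is an illustrative sketch only: the field names, value ranges, and failure cutoff are assumptions for demonstration, not values taken from the reference.

```python
# Sketch (assumed values, not from the record) of threshold-rule
# validation with a per-site failure history, as described for Guthrie.
from collections import defaultdict

# Hypothetical threshold rules: field -> (low, high) "likely accurate" range.
THRESHOLD_RULES = {
    "systolic_bp": (80, 200),
    "blood_glucose": (60, 250),
}

FAILURE_LIMIT = 3  # assumed number of failures before a value is site-critical

# (site_id, field) -> count of failed validations, i.e. the site "history"
failure_history = defaultdict(int)

def validate(site_id: str, field: str, value: float) -> bool:
    """Return True if the value falls in its likely-accurate range."""
    low, high = THRESHOLD_RULES[field]
    ok = low <= value <= high
    if not ok:
        failure_history[(site_id, field)] += 1
    return ok

def site_critical_values(site_id: str) -> list[str]:
    """Fields this site has repeatedly failed, per the accumulated history."""
    return [field for (site, field), count in failure_history.items()
            if site == site_id and count >= FAILURE_LIMIT]
```

As the history accumulates, a field a given site repeatedly fails would be flagged for on-site rather than remote review, mirroring the mechanism the rejection cites from history 115 and identification module 116.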
Claim 9 depends on claim 8: Claim 9 is substantially encompassed in claim 2, therefore, Examiner relies on the same rationale set forth in claim 2 to reject claim 9. Claim 10 depends on claim 8: Claim 10 is substantially encompassed in claim 3, therefore, Examiner relies on the same rationale set forth in claim 3 to reject claim 10. Claim 11 depends on claim 8: Claim 11 is substantially encompassed in claim 4, therefore, Examiner relies on the same rationale set forth in claim 4 to reject claim 11. Claim 12 depends on claim 8: Claim 12 is substantially encompassed in claim 5, therefore, Examiner relies on the same rationale set forth in claim 5 to reject claim 12. Claim 13 depends on claim 8: Claim 13 is substantially encompassed in claim 6, therefore, Examiner relies on the same rationale set forth in claim 6 to reject claim 13. Claim 14 depends on claim 8: Claim 14 is substantially encompassed in claim 7, therefore, Examiner relies on the same rationale set forth in claim 7 to reject claim 14. Claim 15: Claim 15 is substantially encompassed in claim 1, therefore, Examiner relies on the same rationale set forth in claim 1 to reject claim 15. Claims 16-18: Claims 16-18 are substantially encompassed in claims 9-11, therefore, Examiner relies on the same rationale set forth in claims 9-11 to reject claims 16-18. Claim 25 depends on claim 1: Guthrie teaches wherein the discrepancy comprises: the natural language notes in a first section of the site visit report indicating a protocol deviation having attributes that qualify as an adverse event; and data representative of the adverse event is missing in a second section of the site visit report. (e.g., remote review system for real-time evaluation of status of data entry, logic of related data issues and errors indicating protocol deviations Guthrie; par. 
13; The monitoring plan should be tailored to the needs of the trial and the protocol should clearly identify those procedures and data that are critical to subject safety and the integrity and reliability of the study findings. Par. 30; Remote Review (RR): RR activities are performed outside the clinical research site setting. RR may include: reviewing data entries, issuing and closing queries, running reports to identify outliers and trends in protocol deviations and other types of non-compliance, as well as other site management activities. Par. 67; The Remote Review allows for the real-time identification of: status of data entry; logic of related data issues; errors; omissions; query resolutions trends of non-compliance and issues requiring attention. Par. 69; Performing remote data reviews include checks for: i) Logic (one's reasoned and reasonable judgment of the study data); ii) compliance: involves looking across systems to ensure that the subject is following the protocol, e.g., completing follow-up visit assessments; and iii) conventions: with each trial there are conventions or trial-specific information that needs to be followed, e.g., protocol, EDC completion guidelines.) Claim 26 depends on claim 25: Guthrie teaches wherein the natural language notes in the first section of the site visit report indicate a hospital stay that is missing from the second section of the site visit report. (e.g., remote review system for real-time evaluation of data omissions related to protocol data being reviewed Guthrie; Par. 40; In some embodiments, data validation module 108 resides on server 106. Par. 67; The Remote Review allows for the real-time identification of: status of data entry; logic of related data issues; errors; omissions; query resolutions trends of non-compliance and issues requiring attention. Par. 
69; Performing remote data reviews include checks for: i) Logic (one's reasoned and reasonable judgment of the study data); ii) compliance: involves looking across systems to ensure that the subject is following the protocol, e.g., completing follow-up visit assessments; and iii) conventions: with each trial there are conventions or trial-specific information that needs to be followed, e.g., protocol, EDC completion guidelines.) Claim 27 depends on claim 1: Guthrie teaches wherein the presenting, the receiving, and the evaluating to detect the anomaly are performed by the SVR engine in real-time. (e.g., remote review system for real-time evaluation of received data to detect anomalies Guthrie; Par. 40; In some embodiments, data validation module 108 resides on server 106. Par. 67; The Remote Review allows for the real-time identification of: status of data entry; logic of related data issues; errors; omissions; query resolutions trends of non-compliance and issues requiring attention. Par. 69; Performing remote data reviews include checks for: i) Logic (one's reasoned and reasonable judgment of the study data); ii) compliance: involves looking across systems to ensure that the subject is following the protocol, e.g., completing follow-up visit assessments; and iii) conventions: with each trial there are conventions or trial-specific information that needs to be followed, e.g., protocol, EDC completion guidelines.) Claim 28 depends on claim 1: Guthrie/Ramaci teaches wherein the natural language notes comprise a plurality of sentences captured by the first graphical user interface, the sentences associated with a question among the questions of the site visit report. (e.g., form-based entry of data (i.e., natural language notes) in response to questions Guthrie; par. 36; Referring now to FIG. 1, an onsite user 101 is responsible for entering data into workstation 102. 
Workstation 102 may be a desktop computer, laptop, tablet, mobile phone, or other computing device having a human interface device. In some embodiments, workstation 102 provides a graphical user interface for the entry of data. In some embodiments, the user interface is a web-based user interface and provides for form-based entry of data.) (e.g., a relational agent performing natural language processing on the user responses to detect an anomaly or adverse events see Ramaci; par. 10; par. 14;) Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Guthrie/Ramaci/Thompson/Anisingaraju as cited above and applied to claim 1, in further view of Byun et al. (hereinafter “Byun”), U.S. Published Application No. 20180322107 A1. Claim 24 depends on claim 1: Guthrie teaches wherein: the evaluating comprises computing a compliance score for each question of the site visit report based on the responses to the questions, (e.g., reviewing a form for collecting data for critical or non-critical questions and whether the response is valid data (i.e., compliance score of 100% when data is valid) par. 19; The system also includes a data analysis server. The data analysis server includes a plurality of data validation rules. par. 36; If data entered by user 101 is found to be invalid by validation module 103, a verification query 104 is initiated. Par. 34; Critical Variable (CV): Critical variables are data that must be 100% source data verified. Examples of critical variables include: safety and efficacy variables, endpoints, eligibility criteria, informed consent information, and inventory accountability (if applicable). Par. 35; Non-critical Variable: Non-critical variables are data that are not related to safety and efficacy, endpoints, eligibility criteria, etc., and, therefore, may be reviewed remotely if a review is required. In some embodiments, a subset of non-critical variables are identified as not requiring any review. 
) Guthrie/Ramaci/Thompson/Anisingaraju fails to expressly teach wherein a compliance score for a question is computed based on answers to sub-questions associated with the question and different weights applied to the sub-questions. However, Byun teaches wherein: the evaluating comprises computing a compliance score for each question of the site visit report based on the responses to the questions, wherein a compliance score for a question is computed based on answers to sub-questions associated with the question and different weights applied to the sub-questions. (e.g., determining a risk score based on different weighted questions par. 4; The enterprise may set the weights via the graphical user interface, and the weights may be based on the importance of particular compliance information, as represented by the forms. Par. 121; Enterprise 612 may assign various weights to the questionnaires or questions. These weights may correspond to internal rules enterprise 612 has created regarding risk assessment. The internal rules may be set by enterprise 612 based on what enterprise 612 identifies as important. The weights may relate to the priority or risk of a particular question. As an example, the weights may be a number with a range from 1-10. A weight of 1 may indicate a lower weight, while a weight of 10 may indicate a higher weight. Par. 123; The process for assessing risk may include dynamically calculating a risk score based on the predefined weights set by enterprise 612 for each of the questionnaire answers or documents attached in response to a document request form. 
) In the same field of endeavor, namely, collecting and processing data to determine compliance with associated guidelines of an organization, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the evaluation process of data associated with compliance with clinical protocol/procedures as taught by Guthrie/Ramaci/Thompson/Anisingaraju to include the assignment of varying weights to the questions used to gather data responses as taught by Byun, with a reasonable expectation of success, to provide the benefit of better reflecting the importance of internal rules in an effort to improve the assessment of the collected data from questionnaires. (see Byun; paras. 121, 123) Response to Arguments Applicant's arguments filed 1/23/2026 have been fully considered but they are not persuasive. Prior Art Rejections 1) Applicant argues that Guthrie does not teach or suggest that data entered by a user into a workstation 102 includes both “user-selected answers and natural language notes of a user” as recited in claim 1. Form-based entry of data into a user interface, such as entry of numeric values representative of blood pressure (see 0037 of Guthrie), does not amount to a teaching or suggestion of natural language notes of a user being input into the user interface. Moreover, while Guthrie describes validation of data input by a user and examples of validating data, Guthrie does not teach or suggest text analytics, natural language processing, or use of text analytics or natural language processing to validate the input data. Ramaci, Thompson, and Anisingaraju do not cure these deficiencies of Guthrie, nor does the Office Action assert that Ramaci, Thompson, or Anisingaraju cures these deficiencies of Guthrie. (see Response; page 17) Examiner respectfully disagrees. 
Examiner submits that Applicant does not provide a definition for the recited “natural language notes” that would preclude data entry of numeric values representative of blood pressure (see 0037 of Guthrie). Under broadest reasonable interpretation (BRI), natural language notes include any data response in any human language. Therefore, the review answers and patient diaries as taught by Guthrie teach or suggest both “user-selected answers and natural language notes of a user” as recited in claim 1. Examiner notes that Ramaci and Anisingaraju are relied upon to teach natural language processing or use of text analytics or natural language processing to validate the input data. Ramaci teaches a relational agent performing natural language processing on the user responses to detect an anomaly or adverse events (see par. 10; par. 14;) Anisingaraju is relied upon to teach evaluating text analytics of natural language notes based on a combination of pre-configured rules and a computer-trained model (see office action). 2) Applicant argues that the text analytics described by Anisingaraju are limited to the context of generating insights for improvement pertaining to a healthcare organization, which insights include recommendations to improve at least one attribute such as a topic, a sentiment, or an emotion identified by the text analytics. Anisingaraju, abstract, 0008, 0010, 0095, 0102, 0108. There is no teaching or suggestion in Anisingaraju of using natural language processing or text analytics to evaluate data to detect an anomaly in a clinical trial site visit. The use of natural language processing to generate insights/recommendations for improvement pertaining to a healthcare organization is entirely different from the use of natural language processing to generate a different output, such as to detect an anomaly in a clinical trial site visit. 
It is not trivial, practical, or obvious to take natural language processing that is used to generate one type of output such as insights/recommendations and apply it in a different context to generate a different type of output such as detecting an anomaly in a clinical trial site visit. (see Response; page 19) Examiner notes that Ramaci is relied upon to teach “evaluating, by the SVR engine, the user-selected answers and the natural language notes from the responses being received via the first graphical user interface displayed on the user device to detect an anomaly in the clinical trial site visit”, (e.g., a relational agent performing natural language processing on the user responses to detect an anomaly or adverse events see par. 10; par. 14;) and Anisingaraju is relied upon to teach evaluating text analytics of natural language notes based on a combination of pre-configured rules and a computer-trained model (see office action). Examiner submits that it is unclear why Applicant believes Ramaci fails to teach or suggest natural language processing to detect an anomaly, in light of Ramaci as applied and set forth in the office action. For at least the foregoing reasons, the claims are not in condition for allowance. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY ORR whose telephone number is (571)270-1308. The examiner can normally be reached 9AM-5PM EST M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571)272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. HENRY ORR Primary Examiner Art Unit 2145 /HENRY ORR/Primary Examiner, Art Unit 2172
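The weighted sub-question scoring that the claim 24 rejection cites from Byun (paras. 4, 121, 123) reduces to a weighted average of answer scores. The sketch below is a minimal illustration; the scoring function, the [0, 1] answer scale, and the example weights are assumptions, with only the 1-10 weight range and enterprise-set weights taken from the cited passages.

```python
# Sketch (assumed scales and values) of weighted sub-question compliance
# scoring, as described for Byun in the claim 24 rejection.

def question_score(sub_answers: list[tuple[float, float]]) -> float:
    """Compliance score for one question from its sub-questions.

    sub_answers: (answer_score, weight) pairs, where answer_score is in
    [0, 1] and weight reflects the sub-question's importance (e.g. 1-10,
    per Byun par. 121). Returns the weighted average in [0, 1].
    """
    total_weight = sum(weight for _, weight in sub_answers)
    return sum(score * weight for score, weight in sub_answers) / total_weight

# Two compliant low-weight answers cannot offset a failed high-weight
# sub-question: (1.0*2 + 1.0*3 + 0.0*10) / 15, about 0.33.
example = [(1.0, 2), (1.0, 3), (0.0, 10)]
```

Under this scheme, the weights encode the enterprise's internal risk rules, which is the benefit the obviousness rationale above attributes to the combination with Byun.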

Prosecution Timeline

May 05, 2021
Application Filed
Oct 18, 2023
Non-Final Rejection — §103, §112
Mar 13, 2024
Response Filed
Apr 05, 2024
Examiner Interview Summary
Apr 05, 2024
Applicant Interview (Telephonic)
May 30, 2024
Final Rejection — §103, §112
Jun 06, 2024
Interview Requested
Aug 01, 2024
Examiner Interview Summary
Aug 01, 2024
Applicant Interview (Telephonic)
Aug 05, 2024
Response after Non-Final Action
Aug 14, 2024
Examiner Interview (Telephonic)
Aug 14, 2024
Response after Non-Final Action
Sep 04, 2024
Request for Continued Examination
Sep 11, 2024
Response after Non-Final Action
Nov 21, 2024
Non-Final Rejection — §103, §112
Feb 21, 2025
Applicant Interview (Telephonic)
Feb 21, 2025
Examiner Interview Summary
Feb 26, 2025
Response Filed
May 30, 2025
Final Rejection — §103, §112
Aug 28, 2025
Examiner Interview Summary
Aug 28, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Request for Continued Examination
Sep 10, 2025
Response after Non-Final Action
Oct 21, 2025
Non-Final Rejection — §103, §112
Jan 23, 2026
Response Filed
Feb 03, 2026
Examiner Interview Summary
Feb 03, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578851
SYSTEMS, METHODS, AND GRAPHICAL USER INTERFACES FOR GENERATING SHORT RUN CONTROL CHARTS
2y 5m to grant Granted Mar 17, 2026
Patent 12572268
ACCELERATED SCROLLING AND SELECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12561589
SYSTEM AND METHOD FOR INDUSTRIAL AUTOMATION RULES ENGINE
2y 5m to grant Granted Feb 24, 2026
Patent 12547304
INFORMATION PROCESSING SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR DISPLAYING ENLARGEED IMAGE CORRESPONDING TO A FILE IMAGE
2y 5m to grant Granted Feb 10, 2026
Patent 12530968
MAP-BASED EMERGENCY CALL MANAGEMENT AND DISPATCH
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

7-8
Expected OA Rounds
50%
Grant Probability
88%
With Interview (+37.2%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
