Prosecution Insights
Last updated: April 19, 2026
Application No. 19/090,509

METHOD AND SYSTEM FOR GENERATING SEMANTIC RESPONSE TO QUERY

Non-Final OA (§101, §103)
Filed
Mar 26, 2025
Examiner
BIBBEE, JARED M
Art Unit
2161
Tech Center
2100 — Computer Architecture & Software
Assignee
NEC Corporation
OA Round
1 (Non-Final)
80%
Grant Probability
Favorable
1-2
OA Rounds
3y 0m
To Grant
94%
With Interview

Examiner Intelligence

Grants 80% — above average
80%
Career Allow Rate
529 granted / 660 resolved
+25.2% vs TC avg
+13.7%
Interview Lift
Moderate lift; based on resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
12 currently pending
Career history
672
Total Applications
across all art units
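The career allow rate shown above can be reproduced from the granted/resolved counts. A minimal sketch (the counts come from this page; the one-decimal rounding convention is an assumption):

```python
# Reproduce the career allow rate from the raw counts shown above.
granted, resolved = 529, 660
allow_rate = granted / resolved * 100  # percent

print(f"{allow_rate:.1f}%")  # 80.2%, displayed on the page as 80%
```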

Statute-Specific Performance

§101
15.3%
-24.7% vs TC avg
§103
51.1%
+11.1% vs TC avg
§102
17.7%
-22.3% vs TC avg
§112
5.2%
-34.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 660 resolved cases
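The "vs TC avg" figures above read as percentage-point differences between the examiner's per-statute rate and the Tech Center average. A minimal sketch backing out the implied baseline (the rates are taken from the table above; treating the delta as additive percentage points is an assumption):

```python
# Per-statute rate and "vs TC avg" delta, as shown in the table above.
rates = {
    "101": (15.3, -24.7),
    "103": (51.1, 11.1),
    "102": (17.7, -22.3),
    "112": (5.2, -34.8),
}

# Implied Tech Center average = rate - delta, for each statute.
for statute, (rate, delta) in rates.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
# Every statute implies the same 40.0% Tech Center baseline.
```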

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

As to independent claims 1 and 8:

At Step 1: The claims are directed to a "method" and "system" and thus directed to a statutory category.

At Step 2A, Prong One: The claims recite the following limitations directed to an abstract idea: "classifying the unstructured continuous data into: a first, second, and third type of data" as drafted recites a mental process. One can mentally evaluate/judge data and then make a classification. "classification model" as drafted recites a mathematical concept. Specifically, Applicant's specification [0041] states unstructured continuous data is first identified and extracted according to classification labels using text classification algorithms.

At Step 2A, Prong Two: The claims recite the following additional elements: That the method and system are performed by a "user device", "an input/output unit", "a first, second, and third database", "memory", "processor", and "classification model", which is a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. "receiving unstructured continuous data" is insignificant extra-solution activity. This limitation is recited as receiving data (i.e. mere data gathering). This does not provide integration into a practical application. "storing data in a database" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "receiving a query to search for data" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "retrieving data from a database" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "generating a semantic response to the query" is insignificant extra-solution activity. This limitation is recited as providing/presenting data. This does not provide integration into a practical application. Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.

At Step 2B: The conclusions for the mere implementation using a computer are carried over and do not provide significantly more. With respect to the "receiving" identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B this limitation is well-understood, routine, and conventional and remains insignificant extra-solution activity. See MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)." To the extent this is a request for "input" on records, that is well-understood, routine, and conventional. See MPEP 2106.05(d)(II): "iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log)." With respect to the "generating" identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B this limitation is well-understood, routine, and conventional and remains insignificant extra-solution activity. See MPEP 2106.05(d)(II): "iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93." With respect to the "classification model" identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B this limitation is well-understood, routine, and conventional and remains insignificant extra-solution activity. Specifically, Applicant's specification [0030] and [0042]-[0046] merely mention that the classification model may be machine learning. There is nothing in the specification that states that the classification model is anything more than a well-understood, routine, and conventional machine learning model. The specification also fails to identify an improvement to machine learning technology, which would provide significantly more. Looking at the claims as a whole does not change this conclusion and the claim is ineligible.

As to dependent claims 2-7 and 9-14:

At Step 1: The claims are directed to a "method" and "system" and thus directed to a statutory category.
At Step 2A, Prong One: The claims recite the following limitations directed to an abstract idea: "the first, second and third classification models comprises machine learning models, and are optimized using an optimizer" as drafted recites a mental process. One can mentally evaluate/judge data and then make optimizations. "classification model" as drafted recites a mathematical concept. Specifically, Applicant's specification [0041] states unstructured continuous data is first identified and extracted according to classification labels using text classification algorithms.

At Step 2A, Prong Two: The claims recite the following additional elements: That the method and system are performed by a "user device", "an input/output unit", "a first, second, and third database", "memory", "processor", and "classification model", which is a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. "the unstructured continuous data comprises medical related data" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "the first type of data includes medical related data of the user" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "the name related data includes names of medicines and tests prescribed to the user" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "the event related data includes chronological events related to the user" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. "retrieving the portions is performed by Retrieval Augmented Generation (RAG) techniques" is insignificant extra-solution activity. This limitation is recited as mere data gathering. This does not provide integration into a practical application. Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.

At Step 2B: The conclusions for the mere implementation using a computer are carried over and do not provide significantly more. With respect to the "retrieving" identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B this limitation is well-understood, routine, and conventional and remains insignificant extra-solution activity. See MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)." To the extent this is a request for "input" on records, that is well-understood, routine, and conventional. See MPEP 2106.05(d)(II): "iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log)." With respect to the "classification model" identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B this limitation is well-understood, routine, and conventional and remains insignificant extra-solution activity. Specifically, Applicant's specification [0030] and [0042]-[0046] merely mention that the classification model may be machine learning. There is nothing in the specification that states that the classification model is anything more than a well-understood, routine, and conventional machine learning model. The specification also fails to identify an improvement to machine learning technology, which would provide significantly more. Looking at the claims as a whole does not change this conclusion and the claim is ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6 and 8-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over BOLOOR et al (US 20160019299 A1) in view of Laish et al (US 20240111999 A1).

As to claims 1 and 8, BOLOOR teaches a method for generating a semantic response to a query, the method comprising: receiving the query to search for one or more details corresponding to the user, wherein the one or more details are based on the unstructured continuous data of the user (BOLOOR [0027] and [0028] discloses the entire contents of the EMR (Electronic Medical Record) are analyzed, including both unstructured documents (e.g. clinician notes) and structured data (e.g. lab results and medications). A user 300, such as a medical professional, will select an EMR 308 to search and submit a query 302, such as a natural language question or search term(s), for the relevant information (i.e. details corresponding to the user) sought from the EMR 308.); retrieving, based on the received query, portions of at least the first type of data, the second type of data and the third type of data, from the first database, the second database, and the third database, respectively (BOLOOR [0029] discloses a content specific search 306 is performed against the semantically analyzed EMR contents 310 for medically relevant information in both unstructured and structured data of the EMR (using the one or more computerized devices 54B, 54C, etc.). The results from the unstructured data may include documents/notes, passages, and terms, whereas the results from the structured data may include lists of medications, procedures, or lab results (i.e. various types of data). The unstructured data can be stored in various databases over many different medical facilities.); and generating the semantic response based on the received query and the retrieved portions (BOLOOR [0030] and [0031] discloses the results can, for example, comprise both passages from the EMR 308 that semantically match the query 302 and passages having a relationship based on the medical concepts identified in the query and the passages (i.e. semantic response). Part of identifying the relationships may include identifying causation of medical conditions and treatments for medical conditions (e.g., "treats", "causes", etc.) based on the medical concepts.).

BOLOOR fails to teach: receiving unstructured continuous data of a user; classifying the unstructured continuous data into: a first type of data, by a first classification model, a second type of data, by a second classification model, and a third type of data, by a third classification model, wherein the second and third type of data includes name related data and event related data respectively; storing the first, second and third type of data in a first, a second and a third database respectively.

However, Laish teaches: receiving unstructured continuous data of a user (Laish [0018] and [0021] discloses unstructured text may be entered by healthcare providers, such as physicians, nurses, or other medical professionals, into the patient's medical record. The system receives a clinical note 101 as input and processes the clinical note to generate multiple text spans. Each of the text spans identifies a section of a plurality of sections separated by separator tokens in the text of the clinical note.); classifying the unstructured continuous data into: a first type of data, by a first classification model, a second type of data, by a second classification model, and a third type of data, by a third classification model, wherein the second and third type of data includes name related data and event related data respectively (Laish [0019], [0033]-[0055] discloses segmenting and classifying unstructured text in a clinical note. The section type classification neural network 110 is configured to, for each of the one or more text segments, process the respective set of text segment embeddings 106 to classify the text segment into a section type of a plurality of section types. The section type characterizes a type of patient information or a type of a clinical procedure that resulted in the clinical note being generated. The system processes, using a section type classification neural network, the respective set of text segment embeddings to classify the text segment into a section type of a plurality of section types (step 308). For example, a section type may be past medical history, review of systems, social history, imaging, medication, physical examination, lab results, assessment and plan, problem list, hospital course, discharge transfer diagnosis, discharge transfer medication, follow up, or interval events.); storing the first, second and third type of data in a first, a second and a third database respectively (Laish [0063] discloses the data does not need to be structured in any particular way, or structured at all, and it can be stored on storage devices in one or more locations.).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of BOLOOR to incorporate the segmenting and classification of unstructured medical records as taught by Laish, for the purpose of increasing the accuracy and efficiency with which unstructured data is analyzed in order to prevent loss (see Laish [0010]).

As to claims 2 and 9, BOLOOR teaches the unstructured continuous data comprises medical related data (BOLOOR [0027] discloses the entire contents of the EMR (Electronic Medical Record) are analyzed, including both unstructured documents (e.g. clinician notes) and structured data (e.g. lab results and medications).).

As to claims 3 and 10, Laish teaches the first type of data includes medical related data of the user (Laish [0033]-[0055] discloses the section type characterizes a type of patient information or a type of a clinical procedure that resulted in the clinical note being generated. The system processes, using a section type classification neural network, the respective set of text segment embeddings to classify the text segment into a section type of a plurality of section types (step 308).).

As to claims 4 and 11, Laish teaches the name related data includes names of medicines and tests prescribed to the user (Laish [0019], [0033]-[0055] discloses segmenting and classifying unstructured text in a clinical note. The section type classification neural network 110 is configured to, for each of the one or more text segments, process the respective set of text segment embeddings 106 to classify the text segment into a section type of a plurality of section types. The section type characterizes a type of patient information or a type of a clinical procedure that resulted in the clinical note being generated. The system processes, using a section type classification neural network, the respective set of text segment embeddings to classify the text segment into a section type of a plurality of section types (step 308). For example, a section type may be past medical history, review of systems, social history, imaging, medication, physical examination, lab results, assessment and plan, problem list, hospital course, discharge transfer diagnosis, discharge transfer medication, follow up, or interval events.).

As to claims 5 and 12, Laish teaches the event related data includes chronological events related to the user (Laish [0019], [0033]-[0055] discloses segmenting and classifying unstructured text in a clinical note. The section type classification neural network 110 is configured to, for each of the one or more text segments, process the respective set of text segment embeddings 106 to classify the text segment into a section type of a plurality of section types. The section type characterizes a type of patient information or a type of a clinical procedure that resulted in the clinical note being generated. The system processes, using a section type classification neural network, the respective set of text segment embeddings to classify the text segment into a section type of a plurality of section types (step 308). For example, a section type may be past medical history, review of systems, social history, imaging, medication, physical examination, lab results, assessment and plan, problem list, hospital course, discharge transfer diagnosis, discharge transfer medication, follow up, or interval events.).

As to claims 6 and 13, Laish teaches the first, second and third classification models comprises machine learning models, and are optimized using an optimizer (Laish [0019], [0033]-[0055] discloses segmenting and classifying unstructured text in a clinical note. The section type classification neural network (i.e. machine learning models with optimization) 110 is configured to, for each of the one or more text segments, process the respective set of text segment embeddings 106 to classify the text segment into a section type of a plurality of section types. The section type characterizes a type of patient information or a type of a clinical procedure that resulted in the clinical note being generated. The system processes, using a section type classification neural network (i.e. machine learning models with optimization), the respective set of text segment embeddings to classify the text segment into a section type of a plurality of section types (step 308).).

Claim(s) 7 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over BOLOOR in view of Laish as applied to claims 1-6 and 8-13 above, and further in view of Khair (US 20250252952 A1).

As to claims 7 and 14, BOLOOR and Laish fail to teach retrieving the portions is performed by Retrieval Augmented Generation (RAG) techniques. However, Khair teaches retrieving the portions is performed by Retrieval Augmented Generation (RAG) techniques (Khair [0057] and [0061] discloses the model component can execute the LLM on the second natural language sentence in retrieval-augmented generative (RAG) fashion, using the plurality of inferencing task results as references or context. The medical device can capture or measure health data of the medical patient, and the plurality of diagnostic machine learning models can be executed on that health data so as to yield medical results (e.g., medical classification labels, medical segmentation masks, medical regression outputs). Such medical results can be treated as RAG references for the LLM.).

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of BOLOOR and Laish to incorporate the touchless operation of medical devices via large language models as taught by Khair, for the purpose of increasing the accuracy and efficiency with which medical data is gathered and distributed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Casella dos Santos et al (US 20140108423 A1) - Data stores that store content units and annotations regarding the content units derived through a semantic interpretation of the content units. When annotations are stored in a database, different parts of an annotation may be stored in different tables of the database. For example, one or more tables of the database may store all semantic classifications for the annotations, while one or more other tables may store content of all of the annotations. A user may be permitted to provide natural language queries for searching the database. A natural language query may be semantically interpreted to determine one or more annotations from the query. The semantic interpretation of the query may be performed using the same annotation model used to determine annotations stored in the database. Semantic classifications and format of the annotations for a query may be the same as one or more annotations stored in the database.
Heinrich et al (US 20180173850 A1) - This document presents a system and method to extract highly meaningful terms from unstructured fields in an EMR that distinguish two different patient populations. Using this system, healthcare providers can quickly identify terms that are highly associated with a specific set of patients, for instance, chronic heart failure (CHF) patients who are high utilizers of the emergency department (ED) compared to CHF patients with low ED utilization. The system enables healthcare providers to identify root causes of health outcomes and to discover potential targets for intervention and improving healthcare delivery.

ZHANG et al (US 20200327964 A1) - A method of medical data auto collection, segmentation and analysis includes collecting, from a plurality of sources, unstructured medical data in a plurality of formats, recognizing a medical name entity of each piece of the unstructured medical data using a medical dictionary, and performing semantic text segmentation on each piece of the unstructured medical data so that each piece of the unstructured medical data is partitioned into groups sharing a same topic. The method further includes generating, as structured medical data, each piece of the unstructured medical data of which the medical name entity is recognized, each piece of the unstructured medical data being partitioned into the groups, and indexing the structured medical data into elastic search clusters.

Rich et al (US 20210027894 A1) - A model-assisted system for determining probabilities associated with a patient attribute. The processor may be programmed to access a database storing an unstructured medical record associated with a patient and analyze the medical record to identify snippets of information associated with the patient attribute. The processor may generate, based on each snippet, a snippet vector comprising a plurality of snippet vector elements comprising weight values associated with at least one word included in the snippet. The processor may analyze the snippet vectors to generate a summary vector comprising a plurality of summary vector elements, wherein each of the plurality of summary vector elements is associated with a corresponding snippet vector element and is determined based on an analysis of the corresponding snippet vector element. The processor may further generate, based on the summary vector, at least one output indicative of a probability associated with the patient attribute.

BOUREZ et al (US 20240054280 A1) - There is provided a computer implemented method of transforming an unstructured set of data to a structured set of data. In some examples, the method comprises segmenting the unstructured set of data into segments, classifying each segment, extracting key terms from each segment using an extraction model, the extraction model selected from a plurality of extraction models based on the classification of the segment, and generating the structured set of data using the segments and the extracted key terms.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED M BIBBEE, whose telephone number is (571) 270-1054. The examiner can normally be reached Monday-Thursday, 8AM-6PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, APU MOFIZ, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JARED M BIBBEE/
Primary Examiner, Art Unit 2161
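The rejection of claims 7 and 14 turns on Retrieval Augmented Generation. As a minimal, generic sketch of the RAG pattern being cited (all function names, the toy corpus, and the naive term-overlap scoring are illustrative assumptions, not the implementation of any cited reference):

```python
# Generic RAG sketch: retrieve relevant passages, then generate a
# response grounded in them. The scoring is deliberately naive.
def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank passages by term overlap with the query; keep the top_k."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: retrieved passages become its context."""
    return f"Answer to {query!r} grounded in {len(context)} passage(s)."

corpus = {
    "note1": "patient prescribed metformin after lab results",
    "note2": "follow up imaging scheduled next month",
}
print(generate("which medication was prescribed",
               retrieve("medication prescribed", corpus)))
```

In a real pipeline the overlap score would be replaced by dense-vector similarity and `generate` by an actual LLM call, but the retrieve-then-generate shape is the same.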

Prosecution Timeline

Mar 26, 2025
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103
Apr 01, 2026
Interview Requested
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary
Apr 08, 2026
Applicant Interview (Telephonic)
Apr 10, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596742
METHOD AND SYSTEM FOR CURATING MEDIA CONTENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596747
NATURAL LANGUAGE SEARCH OVER SECURITY VIDEOS
2y 5m to grant Granted Apr 07, 2026
Patent 12572427
PARALLELIZATION OF INCREMENTAL BACKUPS
2y 5m to grant Granted Mar 10, 2026
Patent 12572578
CONTENT COLLABORATION PLATFORM WITH DYNAMICALLY-POPULATED TABLES
2y 5m to grant Granted Mar 10, 2026
Patent 12566747
RECURSIVE ENDORSEMENTS FOR DATABASE ENTRIES
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
94%
With Interview (+13.7%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 660 resolved cases by this examiner. Grant probability derived from career allow rate.
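The "94% with interview" figure is consistent with treating the +13.7% interview lift as additive percentage points on top of the 80% base rate. A minimal sketch of that arithmetic (the additive convention and the 100% cap are assumptions; the page does not state how the numbers combine):

```python
# Assumed convention: interview lift adds percentage points to the
# base grant probability, capped at 100%.
def with_interview(base_pct: float, lift_pct: float) -> float:
    return min(base_pct + lift_pct, 100.0)

print(round(with_interview(80.0, 13.7)))  # 94
```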
