Prosecution Insights
Last updated: April 19, 2026
Application No. 18/752,715

ELECTRONIC HEALTH RECORD NAVIGATION

Non-Final OA — §101, §103, §DP
Filed: Jun 24, 2024
Examiner: PATEL, SHREYANS A
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Medicalmine Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 89% (359 granted / 403 resolved; +27.1% vs TC avg) — grants above average
Interview Lift: +7.4% (moderate lift; resolved cases with interview)
Typical Timeline: 2y 3m avg prosecution; 46 currently pending
Career History: 449 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 403 resolved cases
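The per-statute deltas above can be cross-checked with a few lines of arithmetic. Assuming each delta is expressed in percentage points against the Tech Center average (the dashboard does not state its methodology), the implied baseline can be recovered as the examiner's rate minus the delta:

```python
# Recover the implied Tech Center baseline for each statute from the
# examiner's grant rate and the displayed delta (delta = rate - tc_avg).
# Assumes deltas are percentage points vs. the TC average, which the
# dashboard does not state explicitly.
stats = {
    "§101": (21.3, -18.7),
    "§103": (36.0, -4.0),
    "§102": (22.6, -17.4),
    "§112": (8.8, -31.2),
}
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg}%")
```

Every implied baseline works out to 40.0%, suggesting the "Tech Center average estimate" is a single flat figure applied across all four statutes rather than a per-statute average.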

Office Action

§101 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1 and 11 are directed to an abstract idea: taking user inputs (text/voice/icon selections), interpreting them into actions, selecting patient-related options, and navigating an EHR interface accordingly. That is fundamentally an information-processing and workflow-navigation concept implemented on a computer interface, rather than a concrete technical improvement to how computers or speech processing work. The claims recite an abstract idea in the form of mental processes (e.g., interpreting language into "verbs" and associated actions; selecting patient-specific parameters; selecting a patient; forming commands) and certain methods of organizing human activity (managing and navigating patient record information in a healthcare workflow), both of which fall within the enumerated groupings of abstract ideas.

The additional features in the claims (a natural language interface; receiving text/sound/icon inputs; displaying a patient list; changing a navigational state) are generic user-interface and data-navigation steps, not a specific technical solution. The claims mention sound having "noise, redundancy, and verbosity," but they do not recite a particular speech/noise-handling technique or a concrete algorithm that improves computer or speech technology.
The USPTO guidance indicates that claims are more likely eligible when additional elements integrate the exception into a practical application, such as by improving computer functionality or another technology; no such specific technical improvement is apparent from this claim text. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because they are (i) mere instructions to implement the idea on a computer and/or (ii) recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. There is further no improvement to the computing device.

Dependent claims 2-10 and 12-20 further recite an abstract idea performable by a human and do not amount to significantly more than the abstract idea, as they provide no steps beyond what is conventionally known in data management:

Claims 2 and 12: add only generic input modalities (voice and icon taps) for giving a request, which is routine UI interaction.
Claims 3 and 13: merely display command options in a "prioritized list," i.e., organizing and presenting information for navigation.
Claims 4 and 14: merely rank options using context/history/workflow/patient data, i.e., an abstract recommendation/decision rule based on information.
Claims 5 and 15: describe high-level language reformatting (rearranging stored words into command syntax), i.e., abstract data manipulation.
Claims 6 and 16: describe deriving commands from ordered linguistic components in a database, i.e., generic parsing and combining of text elements.
Claims 7 and 17: describe predicting components/values and assembling commands from lists, i.e., abstract prediction and command construction from data.
Claims 8 and 18: only limit who uses it (healthcare provider/agent), a field-of-use limitation without technical improvement.
Claims 9 and 19: only limit who uses it (patient), a field-of-use limitation without technical improvement.
Claims 10 and 20: only place it on an IoT device, a generic "do it on a device" implementation limitation.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,020,698. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown by the following comparison.

Pending US Application No. 18/752,715, Claims 1 and 11:

A method comprising: providing a natural language interface to enable a user to access an electronic health record (EHR) system; receiving, via the natural language interface, one or more first input stimuli from the user, wherein the one or more first input stimuli include text, sound, and one or more activation of icons, and wherein the sound includes noise, redundancy, and verbosity; converting the one or more first input stimuli into a plurality of different verbs, each of the verbs associated with a different action performable in the EHR system; obtaining multiple respective command parameters for each of the different verbs, each of the multiple respective command parameters being specific to a patient; receiving a selection of one of the multiple respective command parameters for each of the different verbs; receiving, via the natural language interface, one or more second input stimuli from the user, wherein the one or more second stimuli include second text, second sound, and one or more second activation of icons; displaying one or more patients having records in the EHR system based on the one or more second input stimuli; receiving a selection of one of the one or more patients; converting the plurality of different verbs, the selected multiple respective command parameters, and the selected patient into one or more commands; and changing a navigational state of the EHR system based on the one or more commands.
US Patent No. 12,020,698, Claims 1 and 11:

A method comprising: providing a natural language interface to enable a user to access an electronic health record (EHR) system; receiving, via the natural language interface, one or more first input stimuli from the user, wherein the one or more first input stimuli includes text, sound, and one or more activation of icons, and wherein the sound includes noise, redundancy, and verbosity, each of the noise, the redundancy, and the verbosity including respective data that is not useful for parsing an intended command; converting the one or more first input stimuli into a plurality of different verbs, each of the verbs associated with a different action performable in the EHR system, wherein the converting includes extracting the noise, the redundancy, and the verbosity from the sound of the input stimuli; displaying multiple respective command parameters for each of the different verbs, each of the multiple respective command parameters being specific to a patient; receiving from the user a selection of one of the multiple respective command parameters for each of the different verbs; receiving, via the natural language interface, one or more second input stimuli from the user, wherein the one or more second stimuli includes second text, second sound, and one or more second activation of icons; displaying one or more patients having records in the EHR system based on the one or more second input stimuli; receiving from the user a selection of one of the one or more patients; converting the plurality of different verbs, the selected multiple respective command parameters, and the selected patient into one or more commands, the one or more commands including the intended command; and changing a navigational state of the EHR system based on the one or more commands.
Claims 2 and 12 correspond to claims 2 and 12; claims 3 and 13 correspond to claims 3 and 13; claims 4 and 14 correspond to claims 4 and 14; claims 5 and 15 correspond to claims 5 and 15; claims 6 and 16 correspond to claims 6 and 16; claims 7 and 17 correspond to claims 7 and 17; claims 8 and 18 correspond to claims 8 and 18; claims 9 and 19 correspond to claims 9 and 19; and claims 10 and 20 correspond to claims 10 and 20.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nenov et al. (WO 2009048984) in view of Fors et al. (US 2007/0083395).
Regarding claims 1 and 11, Nenov teaches a method comprising:

providing a natural language interface to enable a user to access an electronic health record (EHR) system ([Abstract] [0002] the method provides an audio interface for controlling at least one of the controls through audio commands; the digitization of patient information has resulted in specialized databases that allow different medical practitioners to contribute to a single digital copy of the patient file);

receiving, via the natural language interface, one or more first input stimuli from the user, wherein the one or more first input stimuli include text, sound, and one or more activation of icons, and wherein the sound includes noise, redundancy, and verbosity ([0035-0036] [0041] [0065] [0067] [0071] attached to a speech input device (e.g., microphone) and includes other input devices (e.g., mouse, keyboard, touchscreen); the voice recognition engine translates auditory signals into text; GUI controls include a button, a menu, a menu item, etc.; noise → avoiding unintentional voice commands being issued; redundancy → multiple voice commands may correspond to the same dashboard function (overloaded); verbosity → a macro voice command that corresponds to multiple dashboard functions);

converting the one or more first input stimuli into a plurality of different verbs, each of the verbs associated with a different action performable in the EHR system ([0041] the programmatic interface of some embodiments converts this text into tokens; the programmatic interface of some embodiments correlates each token to a function or set of functions that can be provided to the dashboard for execution by the dashboard);

receiving, via the natural language interface, one or more second input stimuli from the user, wherein the one or more second stimuli include second text, second sound, and one or more second activation of icons ([0035] [0059] [0067] speech input devices include a microphone and other input devices (e.g., mouse, keyboard, touchscreen, etc.); GUI controls include buttons and menu items; selectable tabs from the drop-down menu; multiple dashboard functions (e.g., multiple buttons, menu selections, etc.));

converting the plurality of different verbs, the selected multiple respective command parameters, and the selected patient into one or more commands ([0075-0076] [00113-00114] the speech recognition module converts speech into recognized text, and scripting treats the recognized text as a potential voice command; dynamic patient-selection command creation → retrieves a list of patient names, generates a token and corresponding function for selecting each patient, and creates an entry); and

changing a navigational state of the EHR system based on the one or more commands ([0086] voice-invoked functions include opening and closing modalities, minimizing and maximizing, and rearranging locations; for example, receiving a voice command to minimize, open, maximize, or close a modality).

The difference between the prior art and the claimed invention is that Nenov does not explicitly teach obtaining multiple respective command parameters for each of the different verbs, each of the multiple respective command parameters being specific to a patient; receiving a selection of one of the multiple respective command parameters for each of the different verbs; displaying one or more patients having records in the EHR system based on the one or more second input stimuli; or receiving a selection of one of the one or more patients.
Fors teaches obtaining multiple respective command parameters for each of the different verbs, each of the multiple respective command parameters being specific to a patient ([0050] a secondary menu with a variety of configurations for available data for the selected patient, including order view, allergies, order entry, etc.); receiving a selection of one of the multiple respective command parameters for each of the different verbs ([0051] a user may select the "Show All Patient Visits" option; upon selection, the data may be displayed); displaying one or more patients having records in the EHR system based on the one or more second input stimuli ([0036] [0045] configuring a list of patients based on specified criteria; the user may search the patient list by name, last name, visit, etc.; a work screen is displayed to the user); and receiving a selection of one of the one or more patients ([0037] the patient list 410 allows a user to select a patient, for example, by a checkbox or other mode, such as, for example, using a computer mouse).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Nenov with the teachings of Fors by modifying the voice-controlled clinical information dashboard taught by Nenov to include obtaining multiple respective command parameters for each of the different verbs, each being specific to a patient; receiving a selection of one of the multiple respective command parameters for each of the different verbs; displaying one or more patients having records in the EHR system based on the one or more second input stimuli; and receiving a selection of one of the one or more patients, as taught by Fors, for the benefit of a more efficient way of organizing patient information (Fors [0004]).
Regarding claims 2 and 12, Nenov further teaches the method of claim 1, wherein the one or more first input stimuli further comprise voice and activation of one or more icons ([00115] a voice command was described as performing a single dashboard function on a visible control (e.g., operating one user interface tool of a dashboard, such as clicking a single button, selecting a patient from a list, etc.); a voice command performs a "macro," which invokes several dashboard functions (e.g., operates multiple user interface tools of a dashboard, such as clicking several buttons)).

Regarding claims 3 and 13, Fors further teaches the method of claim 1, further comprising displaying the one or more command parameters in a navigation prioritized list ([0050] displaying a secondary menu presenting a navigable set of selectable configurations; secondary menu 620 illustrates a variety of configurations).

Regarding claims 4 and 14, Fors further teaches the method of claim 3, wherein a priority of each of the one or more command parameters in the navigation prioritized list is based on a context of the user, a history of past commands, a current workflow, and patient information ([0038] [0041] [0043-0044] the patient list 410 may be configured based on, for example, the user, type of user, or location; information may vary based on the user; configured to show what is new since a prior point in time; the prior point in time may be, for example, a fixed period or since the user last viewed the list; criteria may include persons a user is responsible for and/or condition; new information may include new lab results and new orders).
Regarding claims 5 and 15, Nenov further teaches the method of claim 1, wherein converting the verb, the selected command parameter, and the selected patient into one or more commands comprises rearranging linguistic components stored in an input stimuli database into a command syntax ([0079-0080] [00100-00114] storing converted/recognized text values (linguistic components) in a text-token database and converting/mapping those linguistic components into token strings (command syntax) (e.g., "ZoomIn()", "Select JohnDoe()"), including retrieving the command token via an SQL query; the database stores converted text values corresponding to tokens; a row may have 'zoom in' in the Text column and 'ZoomIn()' in the Token column; explicit correlation of a linguistic phrase to a token (command-syntax token); a patient-specific command token is formed using a verb and a patient name (command syntax); the linguistic component (patient name) is stored/entered into the text-token database to correspond recognized text to the token).

Regarding claims 6 and 16, Nenov further teaches the method of claim 1, wherein converting the verb, the selected command parameter, and the selected patient into one or more commands comprises deriving the one or more commands from various ordered linguistic components stored in an input stimuli database ([0079-0081] [00112] a text-token database stores converted text values that correspond to tokens/commands; shows deriving a token (command syntax) from recognized text using a database query; the scripting module 320 automatically forms a query using the received recognized text as a parameter of the query (see the query in [0079]); using present recognized text plus subsequent recognized text (i.e., ordered components) to locate an associated token; the scripting module holds the recognized text in case subsequent recognized text, in conjunction with the present recognized text, is associated with a token in the text-token database; combinatorial voice commands are built from ordered components; modules 320 and 330 create a combinatorial voice command (see example)).

Regarding claims 7 and 17, Nenov further teaches the method of claim 1, wherein converting the verb, the selected command parameter, and the selected patient into one or more commands ([0041] the voice recognition engine translates auditory signals into text; the programmatic interface converts this text into tokens; the programmatic interface correlates each token to a function or set of functions that can be provided to the dashboard for execution) comprises building the one or more commands from one or more of predicted components and predicted values ([00104] the available voice commands are generated when new data is received in the dashboard; for instance, if a dashboard displays a list of patients, the set of available voice commands includes "select John Doe"; alternatively, the command may simply be "John Doe"). Fors further teaches using one or more navigation prioritized lists ([0050] the secondary menu 620 may illustrate a variety of configurations for available data for the selected patient; the configurations on the secondary menu 620 may be configured by specific criteria).

Regarding claims 8 and 18, Nenov further teaches the method of claim 1, wherein the user is a healthcare provider, a human agent of the healthcare provider, or an artificial agent of the healthcare provider ([0002] medical practitioners).

Regarding claims 9 and 19, Nenov further teaches the method of claim 1, wherein the user is a patient ([Abstract] medical patient).
Regarding claims 10 and 20, Nenov further teaches the method of claim 1, wherein the natural language interface is incorporated into an Internet of Things (IoT) device ([Abstract] a voice-controlled clinical information dashboard; an audio interface for controlling at least one of the controls through audio commands).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Schultz (US 2009/0292554): a tool for planning and management of clinical trials. The tool computes a patient enrollment timeline in a clinical trial using multiple factors that bear on the rate of patient enrollment. The factors may be site-dependent or country-dependent. When these factors are applied, different sites may have different rates of enrollment in the same interval. Further, the factors may be time-dependent, such that even the same sites may have different enrollment rates in different intervals. Once the timeline is created, the tool may use it to calculate a schedule of monitor visits, project trial completion, or otherwise generate output used in management of the clinical trial.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHREYANS A PATEL, whose telephone number is (571) 270-0689. The examiner can normally be reached Monday-Friday, 8am-5pm PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

SHREYANS A. PATEL
Primary Examiner, Art Unit 2653

/SHREYANS A PATEL/
Examiner, Art Unit 2659

Prosecution Timeline

Jun 24, 2024
Application Filed
Feb 23, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586597
ENHANCED AUDIO FILE GENERATOR
2y 5m to grant • Granted Mar 24, 2026
Patent 12586561
TEXT-TO-SPEECH SYNTHESIS METHOD AND SYSTEM, A METHOD OF TRAINING A TEXT-TO-SPEECH SYNTHESIS SYSTEM, AND A METHOD OF CALCULATING AN EXPRESSIVITY SCORE
2y 5m to grant • Granted Mar 24, 2026
Patent 12548549
ON-DEVICE PERSONALIZATION OF SPEECH SYNTHESIS FOR TRAINING OF SPEECH RECOGNITION MODEL(S)
2y 5m to grant • Granted Feb 10, 2026
Patent 12548583
ACOUSTIC CONTROL APPARATUS, STORAGE MEDIUM AND ACCOUSTIC CONTROL METHOD
2y 5m to grant • Granted Feb 10, 2026
Patent 12536988
SPEECH SYNTHESIS METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 96% (+7.4%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
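The headline projections are internally consistent with the examiner's career counts. A minimal check, assuming the interview lift is an additive percentage-point adjustment (the dashboard does not say exactly how it combines the two figures):

```python
granted, resolved = 359, 403          # examiner's career counts
base = 100 * granted / resolved       # career allow rate, ~89.1%
with_interview = base + 7.4           # assumed additive interview lift
print(round(base), round(with_interview))  # 89 96
```

Rounded, these reproduce the 89% grant probability and 96% with-interview figures shown above.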
