Prosecution Insights
Last updated: April 19, 2026
Application No. 19/092,478

MEDICAL RECORD SUPPORT APPARATUS, MEDICAL RECORD SUPPORT SYSTEM, AND MEDICAL RECORD SUPPORT METHOD

Non-Final OA (§101, §103)
Filed
Mar 27, 2025
Examiner
PATEL, SHERYL GOPAL
Art Unit
3685
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
National Center For Global Health And Medicine
OA Round
1 (Non-Final)
Grant Probability: 13% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 31%

Examiner Intelligence

Grants only 13% of cases
Career Allow Rate: 13% (3 granted / 23 resolved; -39.0% vs TC avg)
Interview Lift: strong, +18.3% among resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 34 currently pending
Career History: 57 total applications across all art units

Statute-Specific Performance

§101: 39.7% (-0.3% vs TC avg)
§103: 35.6% (-4.4% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 23 resolved cases.
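The per-statute deltas are simple differences from the implied Tech Center average, which works out to 40.0% for every statute shown here (rate minus delta equals 40.0% in each row). A minimal sketch of that calculation, assuming the dashboard computes it this way:

```python
# Hypothetical reconstruction of the per-statute delta calculation.
# TC_AVG is inferred from the displayed figures (rate - delta = 40.0%).
TC_AVG = 0.400

rates = {"\u00a7101": 0.397, "\u00a7103": 0.356, "\u00a7102": 0.127, "\u00a7112": 0.097}
for statute, rate in rates.items():
    delta = rate - TC_AVG
    # e.g. "§101: 39.7% (-0.3% vs TC avg)"
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```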

Office Action

Rejections: §101, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Claims 1-9 are within the four statutory categories. However, as will be shown below, claims 1-9 are nonetheless unpatentable under 35 U.S.C. 101. Claims 1, 8, and 9 are representative of the inventive concept and recite:

Claim 1

A medical record support apparatus comprising: at least one memory; and at least one processor coupled to the at least one memory, wherein the at least one processor is configured to: acquire text information of an utterance content converted by speech recognition in which speakers are distinguished for conversational speech data between a medical worker and a patient during a medical interview; generate an instruction sentence for a language model to be output in a medical record format from the text information; and acquire medical record information of the patient by inputting the instruction sentence to the language model.
Claim 8

A medical record support system comprising: an eyewear-type wearable terminal worn by a medical worker during a medical interview; and an information processing apparatus communicably connected to the eyewear-type wearable terminal, wherein the information processing apparatus includes: at least one apparatus memory; and at least one apparatus processor coupled to the at least one apparatus memory, the at least one apparatus processor is configured to: acquire text information of an utterance content converted by speech recognition in which speakers are distinguished for conversational speech data between the medical worker and a patient during the medical interview, the conversational speech data being collected by the eyewear-type wearable terminal; generate an instruction sentence for a language model to be output in a medical record format from the text information; and acquire medical record information of the patient by inputting the instruction sentence to the language model, the eyewear-type wearable terminal includes: at least one terminal memory; and at least one terminal processor coupled to the at least one terminal memory, and the at least one terminal processor is configured to display the medical record information on a wearer screen.

Claim 9

A medical record support method executed by a computer, the medical record support method comprising: acquiring text information of an utterance content converted by speech recognition in which speakers are distinguished for conversational speech data between a medical worker and a patient during a medical interview; generating an instruction sentence for a language model to be output in a medical record format from the text information; and acquiring medical record information of the patient by inputting the instruction sentence to the language model.
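The three claimed steps form a simple pipeline: a speaker-diarized transcript, a generated prompt (the "instruction sentence"), and a language-model call that returns the medical record text. A minimal sketch of that pipeline; `call_language_model` and the SOAP-format wording are hypothetical placeholders, not the applicant's actual implementation:

```python
# Hedged sketch of the claimed steps: diarized transcript -> instruction
# sentence (prompt) -> language model -> medical record information.
# `call_language_model` is a stand-in for any LLM client, not a real API.

def build_instruction_sentence(turns: list[tuple[str, str]]) -> str:
    """Format speaker-distinguished utterances into a prompt that asks the
    model to answer in a medical record (here, SOAP) format."""
    dialogue = "\n".join(f"{speaker}: {text}" for speaker, text in turns)
    return ("Summarize the following medical interview as a medical record "
            "in SOAP format:\n" + dialogue)

def acquire_medical_record(turns, call_language_model):
    prompt = build_instruction_sentence(turns)
    return call_language_model(prompt)  # medical record information

# Usage with a dummy model:
turns = [("Doctor", "What brings you in today?"),
         ("Patient", "I've had a headache for three days.")]
record = acquire_medical_record(turns, lambda p: "[model output]")
```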
Step 2A Prong One

The broadest reasonable interpretation of these steps includes mental processes because the highlighted components can practically be performed by the human mind (in this case, the processes of acquiring and generating) or using pen and paper. Other than reciting generic computer components/functions such as “apparatus”, “memory”, “processor coupled to memory”, “system”, and “terminal”, nothing in the claims precludes the highlighted portions from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components/functions, then it falls within the “Mental Processes” grouping of abstract ideas. Additionally, the mere nominal recitation of a generic computer does not take the claim limitation out of the mental process grouping. Thus, the claim recites a mental process. Additionally, the claimed steps also cover managing personal behavior or relationships or interactions between people (i.e., social activities, teaching, and following rules or instructions); hence the claim also falls under “Certain Methods of Organizing Human Activity”. Dependent claims 2-7 recite additional subject matter which further narrows or defines the abstract idea embodied in the claims.

Step 2A Prong Two

This judicial exception is not integrated into a practical application. In particular, the claims recite the following additional limitations: Claim 1 recites: “apparatus”, “memory”, “processor coupled to memory”, “system”, “terminal”, “acquire medical record information of the patient by inputting the instruction sentence to the language model”, and “to display the medical record information on a wearer screen”.
In particular, the additional elements do not integrate the abstract idea into a practical application, other than the abstract idea per se, because the additional elements amount to no more than limitations which:

• Amount to mere instructions to apply an exception (MPEP 2106.05(f)). The limitations are recited as being performed by an “apparatus”, “memory”, “processor coupled to memory”, “system”, “terminal”, and “language model”. These limitations are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic computer. The model is used to generally apply the abstract idea without limiting how it functions.

• Add insignificant extra-solution activity (MPEP 2106.05(g)) to the abstract idea, such as the recitation of “acquire medical record information of the patient by inputting the instruction sentence to the language model” and “to display the medical record information on a wearer screen”.

Dependent claims 2-7 do not include additional elements beyond those already recited in independent claims 1, 8, and 9, and hence do not integrate the aforementioned abstract idea into a practical application. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer, machine learning model, or any other technology. Their collective function merely provides conventional computer implementation and does not impose a meaningful limit to integrate the abstract idea into a practical application.

Step 2B

Claims 1, 8, and 9 do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements (a system in claim 1) amount to no more than mere instructions to apply an exception to the abstract idea. Additionally, the additional limitations, other than the abstract idea per se, amount to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by the recitation of:

• Display, which refers to showing information on a screen (Para 0039, Cachro (US 20230216923 A1) discloses: “The display 308, may comprise any conventional display mechanism such as a cathode ray tube (CRT), flat panel display, or any other display mechanism known to those having ordinary skill in the art.”), in a manner that would be well-understood, routine, and conventional.

• Acquiring, which refers to the process of obtaining data via digital methods (TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016)), in a manner that would be well-understood, routine, and conventional.

The dependent claims do not include any additional elements beyond those already recited in independent claims 1, 8, and 9. Therefore, they are not deemed to be significantly more than the abstract idea because, as stated above, the limitations of the aforementioned dependent claims amount to no more than generally linking the abstract idea to a particular technological environment or field of use, and/or do not recite any additional elements not already recited in independent claim 1, and hence do not amount to “significantly more” than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Haddick (US20130127980A1) in view of Sugiyama (US20250149140A1).
Claim 1

Haddick discloses: A medical record support apparatus comprising: at least one memory (Para 0285, Haddick discloses memory); and at least one processor coupled to the at least one memory (Para 0285, Haddick discloses a processor which contains memory), wherein the at least one processor is configured to: acquire text information of an utterance content converted by speech recognition (Para 0684, Haddick discloses speech recognition) in which speakers are distinguished for conversational speech data between a medical worker and a patient during a medical interview (Para 0824, Haddick discloses speech identification).

Haddick does not explicitly disclose: generate an instruction sentence for a language model to be output in a medical record format from the text information; and acquire medical record information of the patient by inputting the instruction sentence to the language model.

Sugiyama discloses: generate an instruction sentence for a language model to be output in a medical record format from the text information (Para 0060, Sugiyama discloses sentences input into a large language model based on text information of medical records); and acquire medical record information of the patient (Figure 9, Sugiyama discloses acquiring patient and examination information) by inputting the instruction sentence to the language model (Para 0060, Sugiyama discloses inputting into a large language model).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the system for video display based on sensor input of Haddick to add generating an instruction sentence for a language model to be output in a medical record format from the text information and acquiring medical record information of the patient by inputting the instruction sentence to the language model, as taught by Sugiyama.
One of ordinary skill would have been so motivated to provide a way for medical data to be easily processed and converted to more easily be integrated into a medical record, but in this case for a system which provides medical examination support (Para 0005, Sugiyama discloses: “However, such techniques tend to basically add the same tags to the same sentences or words so that it may be difficult to perform tagging with each patient's background taken into account.”).

Claim 2

Haddick discloses: The medical record support apparatus according to claim 1, wherein the medical record support apparatus is an eyewear-type wearable terminal worn by the medical worker during the medical interview (Figure 1, Haddick discloses an eyepiece), and the at least one processor is configured to: acquire the converted text information based on the conversational speech data collected by the eyewear-type wearable terminal (Para 0684, Haddick discloses speech recognition by the eyepiece); and display the acquired medical record information on a wearer screen of the eyewear-type wearable terminal (Para 0683, Haddick discloses an eyepiece which displays content).

Claim 3

Haddick discloses: The medical record support apparatus according to claim 2, wherein the at least one processor is configured to perform the speech recognition in which the speakers are distinguished (Para 0824, Haddick discloses speech identification) based on at least one of a volume and a speech collection direction of the conversational speech data (Para 0608, Haddick discloses sound direction sensing) collected by the eyewear-type wearable terminal (Figure 1, Haddick discloses an eyepiece), and the at least one processor is configured to distinguish the speakers (Para 0824, Haddick discloses speech identification) and acquire the text information converted from a recognition result of the speech recognition (Para 0464, Haddick discloses transcribing voice to text).
Claim 4

Haddick discloses: The medical record support apparatus according to claim 1, wherein the at least one processor is configured to: acquire the converted text information based on the conversational speech data (Para 0464, Haddick discloses transcribing voice to text) collected by the eyewear-type wearable terminal worn by the medical worker during the medical interview (Figure 1, Haddick discloses an eyepiece); and perform control to display the medical record information (Para 1123, Haddick discloses displaying patient data on eyepiece) on a wearer screen of the eyewear-type wearable terminal (Figure 7, Haddick discloses displaying content on eyepiece screen, which can be a medical record).

Claim 5

Haddick discloses: The medical record support apparatus according to claim 4, wherein the at least one processor is configured to perform the speech recognition in which the speakers are distinguished (Para 0824, Haddick discloses speech identification) based on at least one of a volume and a speech collection direction of the conversational speech data (Para 0608, Haddick discloses sound direction sensing) collected by the eyewear-type wearable terminal (Figure 1, Haddick discloses an eyepiece), and the at least one processor is configured to distinguish the speakers (Para 0824, Haddick discloses speech identification) and acquire the text information converted from a recognition result of the speech recognition (Para 0464, Haddick discloses transcribing voice to text).

Claim 6

Haddick discloses: The medical record support apparatus according to claim 1, wherein the text information includes text information converted as an utterance content of the medical worker by speech recognition for uttered speech data of the medical worker before and after the medical interview (Para 0464, Haddick discloses transcribing voice to text, which can occur before or after an interview).
Claim 7

Haddick discloses: The medical record support apparatus according to claim 1, wherein the medical worker is a nurse (Para 1126, Haddick discloses medical personnel), and the medical record information includes information of a nursing record.

Haddick does not explicitly disclose: a nursing record.

Sugiyama discloses: a nursing record (Para 0070, Sugiyama discloses different types of medical records).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the system for video display based on sensor input of Haddick to add a nursing record, as taught by Sugiyama. One of ordinary skill would have been so motivated to provide a way to process all data related to a patient for improved diagnosis and patient outcomes, but in this case for a system which provides medical examination support (Para 0005, Sugiyama discloses: “However, such techniques tend to basically add the same tags to the same sentences or words so that it may be difficult to perform tagging with each patient's background taken into account.”).
Claim 8

Haddick discloses: A medical record support system comprising: an eyewear-type wearable terminal worn by a medical worker during a medical interview (Figure 1, Haddick discloses an eyepiece); and an information processing apparatus communicably connected to the eyewear-type wearable terminal (Para 0477, Haddick discloses a processor connected to the eyepiece), wherein the information processing apparatus includes: at least one apparatus memory (Para 0285, Haddick discloses a processor which contains memory); and at least one apparatus processor coupled to the at least one apparatus memory (Para 0285, Haddick discloses a processor which contains memory), the at least one apparatus processor is configured to: acquire text information of an utterance content converted by speech recognition (Para 0684, Haddick discloses speech recognition) in which speakers are distinguished for conversational speech data between the medical worker and a patient during the medical interview (Para 0824, Haddick discloses speech identification), the conversational speech data being collected by the eyewear-type wearable terminal (Para 0684, Haddick discloses speech recognition by the eyepiece); the eyewear-type wearable terminal includes: at least one terminal memory; and at least one terminal processor coupled to the at least one terminal memory (Para 0285, Haddick discloses a processor which contains memory), and the at least one terminal processor is configured to display the medical record information on a wearer screen (Para 0683, Haddick discloses an eyepiece which displays content).
Haddick does not explicitly disclose: generate an instruction sentence for a language model to be output in a medical record format from the text information; and acquire medical record information of the patient by inputting the instruction sentence to the language model.

Sugiyama discloses: generate an instruction sentence for a language model to be output in a medical record format from the text information (Para 0060, Sugiyama discloses sentences input into a large language model based on text information of medical records); and acquire medical record information of the patient (Figure 9, Sugiyama discloses acquiring patient and examination information) by inputting the instruction sentence to the language model (Para 0060, Sugiyama discloses inputting into a large language model).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the system for video display based on sensor input of Haddick to add generating an instruction sentence for a language model to be output in a medical record format from the text information and acquiring medical record information of the patient by inputting the instruction sentence to the language model, as taught by Sugiyama. One of ordinary skill would have been so motivated to provide a way for medical data to be easily processed and converted to more easily be integrated into a medical record, but in this case for a system which provides medical examination support (Para 0005, Sugiyama discloses: “However, such techniques tend to basically add the same tags to the same sentences or words so that it may be difficult to perform tagging with each patient's background taken into account.”).
Claim 9

Haddick discloses: A medical record support method executed by a computer, the medical record support method comprising: acquiring text information of an utterance content converted by speech recognition (Para 0684, Haddick discloses speech recognition) in which speakers are distinguished for conversational speech data between a medical worker and a patient during a medical interview (Para 0824, Haddick discloses speech identification).

Haddick does not explicitly disclose: generating an instruction sentence for a language model to be output in a medical record format from the text information; and acquiring medical record information of the patient by inputting the instruction sentence to the language model.

Sugiyama discloses: generating an instruction sentence for a language model to be output in a medical record format from the text information (Para 0060, Sugiyama discloses sentences input into a large language model based on text information of medical records); and acquiring medical record information of the patient (Figure 9, Sugiyama discloses acquiring patient and examination information) by inputting the instruction sentence to the language model (Para 0060, Sugiyama discloses inputting into a large language model).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the system for video display based on sensor input of Haddick to add generating an instruction sentence for a language model to be output in a medical record format from the text information and acquiring medical record information of the patient by inputting the instruction sentence to the language model, as taught by Sugiyama.
One of ordinary skill would have been so motivated to provide a way for medical data to be easily processed and converted to more easily be integrated into a medical record, but in this case for a system which provides medical examination support (Para 0005, Sugiyama discloses: “However, such techniques tend to basically add the same tags to the same sentences or words so that it may be difficult to perform tagging with each patient's background taken into account.”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Tran (US11304023B1) discloses an enhanced hearing system that is coupled to a virtual reality system. Hannaford (US9645785B1) discloses a heads-up display for augmented reality in a medical environment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERYL GOPAL PATEL, whose telephone number is (703) 756-1990. The examiner can normally be reached Monday - Friday, 5:30am to 2:30pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Abdi, can be reached at 571-272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.G.P./
Examiner, Art Unit 3685

/KAMBIZ ABDI/
Supervisory Patent Examiner, Art Unit 3685

Prosecution Timeline

Mar 27, 2025
Application Filed
Feb 25, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597525: HEALTHCARE SYSTEM FOR PROVIDING MEDICAL INSIGHTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12580055: MEDICAL LABORATORY COMPUTER SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 13%
With Interview: 31% (+18.3%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
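These headline figures follow from simple arithmetic on the career data: 3 grants out of 23 resolved cases rounds to 13%, and adding the +18.3% interview lift gives roughly 31%. A quick sketch of that reconstruction (my assumption about the method, not the tool's actual model):

```python
# Reconstructing the dashboard's grant-probability figures (assumed method).
granted, resolved = 3, 23
allow_rate = granted / resolved          # career allow rate, ~0.130
with_interview = allow_rate + 0.183      # add the observed interview lift

print(f"{allow_rate:.0%}")       # career allow rate as shown
print(f"{with_interview:.0%}")   # interview-adjusted probability
```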
